Robust Methods for Sensing and Reconstructing Sparse Signals
ERIC Educational Resources Information Center
Carrillo, Rafael E.
2012-01-01
Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…
Exponential Decay of Reconstruction Error from Binary Measurements of Sparse Signals
2014-08-01
Exponential decay of reconstruction error from binary measurements of sparse signals. Richard Baraniuk, Simon Foucart, Deanna Needell, Yaniv Plan... August 1, 2014. Abstract: Binary measurements arise naturally in a variety of...greatly improve the ability to reconstruct a signal from binary measurements. This is exemplified by one-bit compressed sensing, which takes the
Sparse reconstruction of blade tip-timing signals for multi-mode blade vibration monitoring
NASA Astrophysics Data System (ADS)
Lin, Jun; Hu, Zheng; Chen, Zhong-Sheng; Yang, Yong-Min; Xu, Hai-Long
2016-12-01
Severe blade vibrations may reduce the useful life of high-speed blades. Non-contact measurement using blade tip-timing (BTT) technology is becoming a promising approach to blade vibration monitoring. However, blade tip-timing signals are typically under-sampled, and extracting characteristic features of unknown multi-mode blade vibrations from these under-sampled signals is a major challenge. In this paper, a novel BTT analysis method for reconstructing unknown multi-mode blade vibration signals is proposed. The method consists of two key steps. First, a sparse representation (SR) mathematical model for sparse blade tip-timing signals is built. Second, a multi-mode blade vibration reconstruction algorithm is proposed to solve this SR problem. Experiments are carried out to validate the feasibility of the proposed method. The main advantage of this method is its ability to reconstruct unknown multi-mode blade vibration signals with high accuracy. The minimal probe-number requirements are also presented to provide guidelines for BTT system design.
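As an illustration of the sparse-representation recovery step such a method relies on, the sketch below recovers a few "modes" of an under-sampled signal with generic orthogonal matching pursuit (OMP); the cosine dictionary, sizes, and sparsity level are illustrative assumptions, not the authors' BTT model.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k columns of A to explain y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # refit by least squares on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 128, 3                       # signal length, samples kept, sparsity
t = np.arange(n)
# unit-norm cosine dictionary (an illustrative sparsifying basis)
D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n)) / n)
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.0, -0.7, 0.5]      # three "modes"
keep = np.sort(rng.choice(n, size=m, replace=False))  # under-sampled time instants
x_hat = omp(D[keep, :], (D @ x_true)[keep], k)
print(np.flatnonzero(np.abs(x_hat) > 0.1).tolist())
```

With enough samples relative to the sparsity level, the greedy selection identifies the correct atoms and the least-squares refit recovers their amplitudes exactly.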
Atomic library optimization for pulse ultrasonic sparse signal decomposition and reconstruction
NASA Astrophysics Data System (ADS)
Song, Shoupeng; Li, Yingxue; Dogandžić, Aleksandar
2016-02-01
Compressive sampling of pulse ultrasonic NDE signals could bring significant savings in the data acquisition process. Sparse representation of these signals using an atomic library is key to their interpretation and reconstruction from compressive samples. However, the obstacles to practical applicability of such representations are the large size of the atomic library and the computational complexity of sparse decomposition and reconstruction. To help solve these problems, we develop a method for optimizing the parameter ranges of a traditional Gabor-atom library to match a real pulse ultrasonic signal in terms of correlation. As a result of this atomic-library optimization, the number of atoms is greatly reduced. Numerical simulations compare the proposed approach with the traditional method. Simulation results show that the proposed method outperforms the traditional one in both time efficiency and signal reconstruction energy error, even with a small-scale atomic library. The performance of the proposed method is also explored under different noise levels. Finally, we apply the proposed method to real pipeline ultrasonic testing data, and the results indicate that our reduced atomic library outperforms the traditional library.
Sparse Representation for Signal Reconstruction in Calorimeters Operating in High Luminosity
NASA Astrophysics Data System (ADS)
Barbosa, Davis P.; de A. Filho, Luciano M.; Peralva, Bernardo S.; Cerqueira, Augusto S.; de Seixas, José M.
2017-07-01
A calorimeter signal reconstruction method, based on sparse representation (SR) of redundant data, is proposed for energy reconstruction in particle colliders operating in high-luminosity conditions. The signal overlapping is first modeled as an underdetermined linear system, leading to a convex set of feasible solutions. The solution with the smallest number of superimposed signals (the SR) that represents the recorded data is obtained through an interior-point (IP) optimization procedure. From a signal processing point of view, the procedure performs a source separation, recovering the amplitude of each convolved signal. In the simulation results, the proposed method is compared with a standard signal reconstruction method. For this, a toy Monte Carlo simulation was developed, focusing on calorimeter front-end signal generation only, where different levels of pileup and signal-to-noise ratio were used to qualify the proposed method. The results show that the method may be competitive in high-luminosity environments.
A fast algorithm for reconstruction of spectrally sparse signals in super-resolution
NASA Astrophysics Data System (ADS)
Cai, Jian-Feng; Liu, Suhui; Xu, Weiyu
2015-08-01
We propose a fast algorithm to reconstruct spectrally sparse signals from a small number of randomly observed time-domain samples. Different from conventional compressed sensing, where frequencies are discretized, we consider the super-resolution case in which the frequencies can take any value in the normalized continuous frequency domain [0, 1). We first convert our signal recovery problem into a low-rank Hankel matrix completion problem, for which we then propose an efficient feasible-point algorithm named the projected Wirtinger gradient algorithm (PWGA). The algorithm can be further accelerated by a scheme inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Numerical experiments illustrate the effectiveness of the proposed algorithm. Different from earlier approaches, our algorithm can efficiently solve problems of large scale.
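The low-rank Hankel structure the paper exploits can be checked numerically: a signal that is a sum of r complex exponentials has a Hankel matrix of rank exactly r, even when the frequencies fall off any DFT grid. A minimal sketch (the frequencies and sizes are arbitrary choices):

```python
import numpy as np

n = 64
freqs = [0.1234, 0.3711]                 # arbitrary off-grid frequencies in [0, 1)
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)

# near-square Hankel matrix H[i, j] = x[i + j]
rows = n // 2 + 1                        # 33 rows, 32 columns
H = np.array([[x[i + j] for j in range(n - rows + 1)] for i in range(rows)])

s = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(rank)  # 2: one per spectral component
```

Matrix-completion methods such as PWGA recover the missing entries of H from a few observed samples by enforcing exactly this low-rank property.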
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-03-26
In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results demonstrate the effectiveness of the proposed algorithms and the related analysis.
Demonstration of Sparse Signal Reconstruction for Radar Imaging of Ice Sheets
NASA Astrophysics Data System (ADS)
Heister, Anton; Scheiber, Rolf
2017-04-01
Conventional processing of ice-sounder data produces 2-D images of the ice sheet and bed, where the two dimensions are along-track and depth, while the across-track direction is fixed to nadir. The 2-D images contain information about the topography and radar reflectivity of the ice sheet's surface, bed, and internal layers in the along-track direction. Having multiple antenna phase centers in the across-track direction enables the production of 3-D images of the ice sheet and bed. Compared to conventional 2-D images, these contain additional information about the surface and bed topography, and the orientation of the internal layers, over a swath in the across-track direction. We apply a 3-D SAR tomographic ice-sounding method based on sparse signal reconstruction [1] to data collected by the Center for Remote Sensing of Ice Sheets (CReSIS) in 2008 in Greenland [2] using their multichannel coherent radar depth sounder (MCoRDS). The MCoRDS data have 16 effective phase centers, which allows us to better understand the performance of the method. Lastly, we improve sparsity by including wavelet dictionaries in the reconstruction. The results show improved scene-feature resolvability in the across-track direction compared to an MVDR beamformer. References: [1] A. Heister, R. Scheiber, "First Analysis of Sparse Signal Reconstruction for Radar Imaging of Ice Sheets". In: Proceedings of EUSAR, pp. 788-791, June 2016. [2] X. Wu, K. C. Jezek, E. Rodriguez, S. Gogineni, F. Rodriguez-Morales, and A. Freeman, "Ice sheet bed mapping with airborne SAR tomography". IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 10, part 1, pp. 3791-3802, 2011.
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered in which the unknown parameters are sensitive to data perturbations, so an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
RBF-network based sparse signal recovery algorithm for compressed sensing reconstruction.
Vidya, L; Vivekanand, V; Shyamkumar, U; Mishra, Deepak
2015-03-01
The approach of applying a cascaded network, consisting of radial basis function nodes and a least-square-error minimization block, to compressed sensing recovery of sparse signals is analyzed in this paper to improve the computation time and convergence of an existing ANN-based recovery algorithm. The proposed radial basis function-least square error projection cascade network for sparse signal recovery (RASR) utilizes smoothed L0-norm optimization, L2 least-square-error projection and a feedback network model to improve signal recovery performance over the existing CSIANN algorithm. The use of an ANN architecture in the recovery algorithm gives a marginal reduction in computational time compared to an existing L0-relaxation-based algorithm, SL0. Simulation results and experimental evaluation of the algorithm's performance are presented here.
Gu, Renliang; Dogandžić, Aleksandar
2014-02-18
We propose a method for reconstructing sparse images from polychromatic X-ray computed tomography (CT) measurements via mass attenuation coefficient discretization. The material of the inspected object and the incident spectrum are assumed to be unknown. We rewrite the Lambert-Beer law in terms of integral expressions of mass attenuation and discretize the resulting integrals. We then present a penalized constrained least-squares optimization approach for reconstructing the underlying object from log-domain measurements, where an active-set approach is employed to estimate incident energy density parameters, and the nonnegativity and sparsity of the image density map are imposed using negative-energy and smooth ℓ1-norm penalty terms. We propose a two-step scheme for refining the mass attenuation discretization grid by using a higher sampling rate over the range with higher photon energy, and eliminating the discretization points that have little effect on the accuracy of the forward projection model. This refinement allows us to successfully handle the characteristic lines (Dirac impulses) in the incident energy density spectrum. We compare the proposed method with the standard filtered backprojection, which ignores the polychromatic nature of the measurements and the sparsity of the image density map. Numerical simulations using both realistic simulated and real X-ray CT data are presented.
A unified approach to sparse signal processing
NASA Astrophysics Data System (ADS)
Marvasti, Farokh; Amini, Arash; Haddadi, Farzan; Soltanolkotabi, Mahdi; Khalaj, Babak Hossein; Aldroubi, Akram; Sanei, Saeid; Chambers, Janathon
2012-12-01
A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally
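As a concrete instance of the Prony/annihilating-filter connection described above, the sketch below recovers two off-grid frequencies from noiseless samples by solving a linear-prediction system and rooting the resulting polynomial (the sizes and frequencies are illustrative choices):

```python
import numpy as np

true_freqs = np.array([0.11, 0.37])   # normalized frequencies in [0, 1)
n = 32
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t) for f in true_freqs)

# linear prediction: x[t] + a1*x[t-1] + a2*x[t-2] = 0 for all t >= 2
A = np.column_stack([x[1 : n - 1], x[0 : n - 2]])
b = x[2:n]
a, *_ = np.linalg.lstsq(A, -b, rcond=None)

# the annihilating polynomial z^2 + a1*z + a2 has roots exp(2*pi*i*f)
roots = np.roots(np.concatenate(([1.0], a)))
est = np.sort(np.mod(np.angle(roots) / (2 * np.pi), 1.0))
print(np.round(est, 4))  # [0.11 0.37]
```

In noise, Pisarenko and MUSIC replace this direct least-squares fit with subspace estimates from the sample covariance, which is exactly the improvement the survey highlights.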
Sparse Reconstruction for Micro Defect Detection in Acoustic Micro Imaging
Zhang, Yichun; Shi, Tielin; Su, Lei; Wang, Xiao; Hong, Yuan; Chen, Kepeng; Liao, Guanglan
2016-01-01
Acoustic micro imaging has been proven to be sufficiently sensitive for micro defect detection. In this study, we propose a sparse reconstruction method for acoustic micro imaging. A finite element model with a micro defect is developed to emulate the physical scanning. Then we obtain the point spread function, a blur kernel for sparse reconstruction. We reconstruct deblurred images from the oversampled C-scan images based on l1-norm regularization, which can enhance the signal-to-noise ratio and improve the accuracy of micro defect detection. The method is further verified by experimental data. The results demonstrate that the sparse reconstruction is effective for micro defect detection in acoustic micro imaging. PMID:27783040
Digitized tissue microarray classification using sparse reconstruction
NASA Astrophysics Data System (ADS)
Xing, Fuyong; Liu, Baiyang; Qi, Xin; Foran, David J.; Yang, Lin
2012-02-01
In this paper, we propose a novel image classification method based on sparse reconstruction errors to discriminate cancerous breast tissue microarray (TMA) discs from benign ones. Sparse representation is employed to reconstruct the samples and separate the benign and cancer discs. The method consists of several steps, including mask generation, dictionary learning, and data classification. Mask generation is performed using multiple-scale texton histograms, integral histograms and AdaBoost. Two separate cancer and benign TMA dictionaries are learned using K-SVD. Sparse coefficients are calculated using orthogonal matching pursuit (OMP), and the reconstruction error of each testing sample is recorded. Each testing image is divided into many small patches, and each patch is assigned to the category that produces the smallest reconstruction error. The final classification of each testing sample is achieved by comparing the total reconstruction errors. Using standard RGB images, tested on a dataset of 547 images, we achieved much better results than previously reported. The binary classification accuracy, sensitivity, and specificity are 88.0%, 90.6%, and 70.5%, respectively.
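The classify-by-reconstruction-error idea can be sketched as follows; for brevity a plain least-squares fit stands in for OMP, and the random dictionaries are purely illustrative, not learned with K-SVD:

```python
import numpy as np

rng = np.random.default_rng(3)
d, atoms = 20, 5
D_benign = rng.standard_normal((d, atoms))   # stand-in "benign" dictionary
D_cancer = rng.standard_normal((d, atoms))   # stand-in "cancer" dictionary

def recon_error(D, sample):
    """Residual norm of the best least-squares reconstruction of sample from D."""
    coef, *_ = np.linalg.lstsq(D, sample, rcond=None)
    return float(np.linalg.norm(sample - D @ coef))

# a patch drawn from the span of the cancer dictionary reconstructs better there
patch = D_cancer @ rng.standard_normal(atoms)
label = "cancer" if recon_error(D_cancer, patch) < recon_error(D_benign, patch) else "benign"
print(label)  # cancer
```

The paper applies this per patch and sums the errors over a disc; sparsity constraints (OMP with few atoms) make the per-class residuals more discriminative than an unconstrained fit.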
Robust Reconstruction of Complex Networks from Sparse Data
NASA Astrophysics Data System (ADS)
Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Di, Zengru
2015-01-01
Reconstructing complex networks from measurable data is a fundamental problem for understanding and controlling collective dynamics of complex networked systems. However, a significant challenge arises when we attempt to decode structural information hidden in limited amounts of data accompanied by noise and in the presence of inaccessible nodes. Here, we develop a general framework for robust reconstruction of complex networks from sparse and noisy data. Specifically, we decompose the task of reconstructing the whole network into recovering local structures centered at each node. Thus, the natural sparsity of complex networks ensures a conversion from the local structure reconstruction into a sparse signal reconstruction problem that can be addressed by using the lasso, a convex optimization method. We apply our method to evolutionary games, transportation, and communication processes taking place in a variety of model and real complex networks, finding that universal high reconstruction accuracy can be achieved from sparse data in spite of noise in time series and missing data of partial nodes. Our approach opens new routes to the network reconstruction problem and has potential applications in a wide range of fields.
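The lasso step described above can be sketched with a plain iterative soft-thresholding (ISTA) solver; the measurement model below is a generic sparse linear system standing in for the paper's time-series data, and all sizes are illustrative:

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """Iterative soft-thresholding for the lasso: min 0.5*||Ax-y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(1)
n_nodes, n_obs = 50, 30
links_true = np.zeros(n_nodes)               # sparse local structure of one node
links_true[[3, 12, 27]] = [1.0, -1.5, 0.8]
A = rng.standard_normal((n_obs, n_nodes)) / np.sqrt(n_obs)
y = A @ links_true + 0.01 * rng.standard_normal(n_obs)
links_hat = ista(A, y, lam=0.02)
print(np.flatnonzero(np.abs(links_hat) > 0.1).tolist())  # recovered neighbour indices
```

Note that fewer observations (30) than candidate links (50) suffice, which is precisely how the paper handles limited, noisy data: sparsity of the local neighbourhood makes the underdetermined problem solvable.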
Compressed Sensing Doppler Ultrasound Reconstruction Using Block Sparse Bayesian Learning.
Lorintiu, Oana; Liebgott, Herve; Friboulet, Denis
2016-04-01
In this paper we propose a framework for duplex Doppler ultrasound systems, which need to interleave the acquisition and display of a B-mode image and of the pulsed Doppler spectrogram. In a recent study (Richy, 2013), we showed that compressed sensing-based reconstruction of the Doppler signal allowed reducing the number of Doppler emissions, yielding better results than traditional interpolation and results at least equivalent, or even better depending on the configuration, than the sparse data-set estimation method of Jensen (2006). We propose here to improve on this study by using a novel framework for randomly interleaving Doppler and US emissions. The proposed method reconstructs the Doppler signal segment by segment using a block sparse Bayesian learning (BSBL) based CS reconstruction. The interest of such a framework in the duplex Doppler context lies in the unique ability of BSBL to exploit block-correlated signals and to recover non-sparse signals. The performance of the technique is evaluated on simulated data as well as experimental in vivo data and compared to the recent results in Richy (2013).
Compressed sensing sparse reconstruction for coherent field imaging
NASA Astrophysics Data System (ADS)
Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen
2016-04-01
Return signal processing and reconstruction plays a pivotal role in coherent field imaging, having a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a genuine sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as little as 10% of traditional samples, finally acquiring the target image precisely. The results of the numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from Chinese Academy of Sciences, the Light of “Western” Talent Cultivation Plan “Dr. Western Fund Project” (Grant No. Y429621213).
Surface reconstruction from sparse fringe contours
Cong, G.; Parvin, B.
1998-08-10
A new approach for reconstruction of 3D surfaces from 2D cross-sectional contours is presented. Using the so-called "Equal Importance Criterion," we reconstruct the surface based on the assumption that every point in the region contributes equally to the surface reconstruction process. In this context, the problem is formulated in terms of a partial differential equation (PDE), and we show that the solution for dense contours can be efficiently derived from a distance transform. In the case of sparse contours, we add a regularization term to ensure smoothness in surface recovery. The proposed technique allows for surface recovery at any desired resolution. The main advantage of the proposed method is that inherent problems due to correspondence, tiling, and branching are avoided. Furthermore, the computed high-resolution surface is better suited for subsequent geometric analysis. We present results on both synthetic and real data.
Image reconstruction from photon sparse data
Mertens, Lena; Sonnleitner, Matthias; Leach, Jonathan; Agnew, Megan; Padgett, Miles J.
2017-01-01
We report an algorithm for reconstructing images when the average number of photons recorded per pixel is of order unity, i.e. photon-sparse data. The image optimisation algorithm minimises a cost function incorporating both a Poissonian log-likelihood term based on the deviation of the reconstructed image from the measured data and a regularization-term based upon the sum of the moduli of the second spatial derivatives of the reconstructed image pixel intensities. The balance between these two terms is set by a bootstrapping technique where the target value of the log-likelihood term is deduced from a smoothed version of the original data. When compared to the original data, the processed images exhibit lower residuals with respect to the true object. We use photon-sparse data from two different experimental systems, one system based on a single-photon, avalanche photo-diode array and the other system on a time-gated, intensified camera. However, this same processing technique could most likely be applied to any low photon-number image irrespective of how the data is collected. PMID:28169363
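A 1-D toy version of the cost function described above (Poisson negative log-likelihood plus the sum of moduli of second derivatives) can be written down directly; note the regularization weight here is a hand-picked illustrative value, whereas the paper sets the balance by bootstrapping from smoothed data:

```python
import numpy as np

def cost(recon, counts, alpha):
    """Poisson negative log-likelihood plus a second-derivative roughness penalty."""
    nll = np.sum(recon - counts * np.log(recon + 1e-12))   # Poisson NLL (+ const)
    rough = np.sum(np.abs(np.diff(recon, n=2)))            # 1-D curvature penalty
    return nll + alpha * rough

rng = np.random.default_rng(2)
truth = 1.0 + np.sin(np.linspace(0.0, np.pi, 40))   # ~1 photon per pixel on average
counts = rng.poisson(truth).astype(float)

# a smoothed estimate trades a slightly worse data fit for much lower roughness
smoothed = np.convolve(counts, np.ones(5) / 5, mode="same")
print(cost(smoothed, counts, alpha=1.0) < cost(counts, counts, alpha=1.0))  # True
```

At photon counts of order unity the raw data are dominated by Poisson noise, so the regularizer's preference for smooth reconstructions is what makes the image recoverable at all.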
Guided wavefield reconstruction from sparse measurements
NASA Astrophysics Data System (ADS)
Mesnil, Olivier; Ruzzene, Massimo
2016-02-01
Guided wave measurements are at the basis of several Non-Destructive Evaluation (NDE) techniques. Although sparse measurements of guided waves obtained using piezoelectric sensors can efficiently detect and locate defects, extensive information on the shape and subsurface location of defects can be extracted from full-field measurements acquired by Laser Doppler Vibrometers (LDV). Wavefield acquisition from LDVs is generally a slow operation, since the wave propagation to record must be repeated for each point measurement and the initial conditions must be reached between measurements. In this research, a Sparse Wavefield Reconstruction (SWR) process using Compressed Sensing is developed. The goal of this technique is to reduce the number of point measurements needed to apply NDE techniques by at least one order of magnitude, by extrapolating the knowledge of a few randomly chosen measured pixels over an over-sampled grid. To achieve this, the Lamb wave propagation equation is used to formulate a basis of shape functions in which the wavefield has a sparse representation, in order to comply with the Compressed Sensing requirements and use l1-minimization solvers. The main assumption of this reconstruction process is that every material point of the studied area is a potential source. The Compressed Sensing matrix is defined as the contribution that would have been received at a measurement location from each possible source, using the dispersion relations of the specimen computed with a Semi-Analytical Finite Element technique. The measurements are then processed through an l1-minimizer to find a minimum corresponding to the set of active sources and their corresponding excitation functions. This minimum represents the best combination of the parameters of the model matching the sparse measurements. Wavefields are then reconstructed using the propagation equation. The set of active sources found by minimization contains all the wave
Multiband signal reconstruction for random equivalent sampling.
Zhao, Y J; Liu, C J
2014-10-01
Random equivalent sampling (RES) is a sampling approach that can capture high-speed repetitive signals at a sampling rate much lower than the Nyquist rate. However, the uneven random distribution of the time interval between the excitation pulse and the signal degrades signal reconstruction performance. For sparse multiband signal sampling, a compressed sensing (CS) based reconstruction algorithm can identify the band supports with overwhelming probability and reduce the impact of the uneven random distribution in RES. In this paper, a mathematical model of RES behavior is constructed in the frequency domain. Based on this model, the band supports of the signal can be determined. Experimental results demonstrate that, for a signal with unknown sparse multiband structure, the proposed CS-based reconstruction algorithm is feasible and outperforms the traditional RES signal reconstruction method.
Sparse reconstruction of correlated multichannel activity.
Peelman, Sem; Van der Herten, Joachim; De Vos, Maarten; Lee, Wen-Shin; Van Huffel, Sabine; Cuyt, Annie
2013-01-01
Parametric methods for modeling sinusoidal signals with line spectra have been studied for decades. In general, these methods start by representing each sinusoidal component by means of two complex exponential functions, thereby doubling the number of unknown parameters. Recently, a Hankel-plus-Toeplitz matrix pencil method was proposed which directly models sinusoidal signals with discrete spectral content. Compared to its counterpart, which uses a Hankel matrix pencil, it halves the required number of time-domain samples and reduces the size of the involved linear systems. The aim of this paper is twofold. Firstly, to show that this Hankel-plus-Toeplitz matrix pencil also applies to continuous spectra. Secondly, to explore its use in the reconstruction of real-life signals. Promising preliminary results in the reconstruction of correlated multichannel electroencephalographic (EEG) activity are presented. A principal component analysis preprocessing step is carried out to exploit the redundancy in the channel domain. Then the reduced signal representation is successfully reconstructed from fewer samples using the Hankel-plus-Toeplitz matrix pencil. The obtained results encourage the future development of this matrix pencil method along the lines of well-established spectral analysis methods.
Time-frequency manifold sparse reconstruction: A novel method for bearing fault feature extraction
NASA Astrophysics Data System (ADS)
Ding, Xiaoxi; He, Qingbo
2016-12-01
In this paper, a novel transient signal reconstruction method, called time-frequency manifold (TFM) sparse reconstruction, is proposed for bearing fault feature extraction. This method introduces image sparse reconstruction into the TFM analysis framework. Owing to the excellent denoising performance of the TFM, a more effective time-frequency (TF) dictionary can be learned from the TFM signature by image sparse decomposition based on orthogonal matching pursuit (OMP). Then the TF distribution (TFD) of the raw signal in a reconstructed phase space is re-expressed as a sum of learned TF atoms multiplied by their corresponding coefficients. Finally, the one-dimensional signal is recovered by inverting the TF analysis (TFA), with the amplitude information of the raw signal well preserved. The proposed technique combines the merits of the TFM in denoising and of atomic decomposition in image sparse reconstruction. Moreover, the combination makes it possible to express the nonlinear signal processing results explicitly in theory. The effectiveness of the proposed TFM sparse reconstruction method is verified by experimental analysis for bearing fault feature extraction.
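A minimal sketch of the orthogonal matching pursuit (OMP) step named above, applied to a generic sparse coding problem rather than the learned TF dictionary of the paper; the random dictionary and sparsity level are assumptions for illustration.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of D, re-fitting on the support."""
    resid, support, coef = y.copy(), [], np.array([])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ resid)))          # atom most correlated with the residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef                  # orthogonalize against the support
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                            # unit-norm atoms
x_true = np.zeros(256)
x_true[[5, 90, 200]] = [1.0, -2.0, 1.5]
y = D @ x_true
x_hat = omp(D, y, k=3)
```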
Sparse image reconstruction on the sphere: implications of a new sampling theorem.
McEwen, Jason D; Puy, Gilles; Thiran, Jean-Philippe; Vandergheynst, Pierre; Van De Ville, Dimitri; Wiaux, Yves
2013-06-01
We study the impact of sampling theorems on the fidelity of sparse image reconstruction on the sphere. We discuss how a reduction in the number of samples required to represent all information content of a band-limited signal acts to improve the fidelity of sparse image reconstruction, through both the dimensionality and sparsity of signals. To demonstrate this result, we consider a simple inpainting problem on the sphere and consider images sparse in the magnitude of their gradient. We develop a framework for total variation inpainting on the sphere, including fast methods to render the inpainting problem computationally feasible at high resolution. Recently a new sampling theorem on the sphere was developed, reducing the required number of samples by a factor of two for equiangular sampling schemes. Through numerical simulations, we verify the enhanced fidelity of sparse image reconstruction due to the more efficient sampling of the sphere provided by the new sampling theorem.
Sparse representation for the ISAR image reconstruction
NASA Astrophysics Data System (ADS)
Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.
2016-05-01
In this paper, a two-dimensional sparse representation of the data for an inverse synthetic aperture radar (ISAR) system is provided. The proposed sparse representation motivates the use of convex optimization, which recovers the image from far fewer samples than the Nyquist-Shannon sampling theorem requires, increasing the efficiency and decreasing the cost of calculation in radar imaging.
Time-frequency signature sparse reconstruction using chirp dictionary
NASA Astrophysics Data System (ADS)
Nguyen, Yen T. H.; Amin, Moeness G.; Ghogho, Mounir; McLernon, Des
2015-05-01
This paper considers local sparse reconstruction of time-frequency signatures of windowed non-stationary radar returns. These signals can be considered instantaneously narrow-band, so the local time-frequency behavior can be recovered accurately from incomplete observations. The typically employed sinusoidal dictionary induces competing requirements on window length, facing conflicting demands between the number of measurements needed for exact recovery and the sparsity of the representation. In this paper, we use a chirp dictionary for each window position to determine the signal's instantaneous frequency laws. This approach considerably mitigates the problems of the sinusoidal dictionary and enables the use of longer windows for accurate time-frequency representations. It also reduces the picket-fence effect by introducing a new degree of freedom, the chirp rate α. Simulation examples are provided, demonstrating the superior performance of the local chirp dictionary over its sinusoidal counterpart.
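The chirp-dictionary idea can be illustrated by exhaustively matching a windowed segment against a small grid of chirp atoms parameterized by start frequency f0 and chirp rate α; the grids, window length, and normalization below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def best_chirp(seg, t, f0_grid, alpha_grid):
    """Pick the (f0, alpha) of the chirp atom exp(2j*pi*(f0*t + 0.5*alpha*t^2)) best matching seg."""
    best, best_val = None, -1.0
    for f0 in f0_grid:
        for a in alpha_grid:
            atom = np.exp(2j * np.pi * (f0 * t + 0.5 * a * t ** 2))
            val = abs(np.vdot(atom, seg)) / np.sqrt(len(t))   # matched-filter correlation
            if val > best_val:
                best, best_val = (f0, a), val
    return best

t = np.arange(128) / 128.0                                     # one analysis window
seg = np.exp(2j * np.pi * (10.0 * t + 0.5 * 20.0 * t ** 2))    # true f0 = 10, alpha = 20
f0_hat, a_hat = best_chirp(seg, t, np.arange(0.0, 32.0, 1.0), np.arange(0.0, 64.0, 4.0))
```

Because the true parameters lie on the search grid, the matched-filter correlation peaks exactly at them.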
Pruning-Based Sparse Recovery for Electrocardiogram Reconstruction from Compressed Measurements
Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong
2017-01-01
Due to the necessity of the low-power implementation of newly-developed electrocardiogram (ECG) sensors, exact ECG data reconstruction from the compressed measurements has received much attention in recent years. Our interest lies in improving the compression ratio (CR), as well as the ECG reconstruction performance of the sparse signal recovery. To this end, we propose a sparse signal reconstruction method by pruning-based tree search, which attempts to choose the globally-optimal solution by minimizing the cost function. In order to achieve low complexity for the real-time implementation, we employ a novel pruning strategy to avoid exhaustive tree search. Through the restricted isometry property (RIP)-based analysis, we show that the exact recovery condition of our approach is more relaxed than any of the existing methods. Through the simulations, we demonstrate that the proposed approach outperforms the existing sparse recovery methods for ECG reconstruction. PMID:28067856
NASA Astrophysics Data System (ADS)
Gan, Ling; Zhu, Linhua; Guo, Qianwen
2017-05-01
A new image super-resolution reconstruction method based on an adaptive sparse domain and wavelet-based total variation regularization is proposed to address the problem that single-image super-resolution results are often insufficiently sharp and have unnatural edge detail. To reduce the influence of abnormal data on sub-dictionary quality when clustering the training set in the adaptive sparse domain model, an improved K-means method is presented. Meanwhile, because sparse decomposition is an NP-hard problem that can leave the edges of the reconstructed image unsmooth, we propose total variation regularization based on the wavelet transform to enhance image edges without damaging texture. Experiments show that the reconstructed images achieve better visual quality, and the Peak Signal to Noise Ratio (PSNR) and Structural Similarity (SSIM) are also improved.
An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging
Valente, Solivan A.; Zibetti, Marcelo V. W.; Pipa, Daniel R.; Maia, Joaquim M.; Schneider, Fabio K.
2017-01-01
Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performance. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to ground-truth than traditional beamforming. However, computing capabilities of current platforms need to evolve before frame rates currently delivered by ultrasound equipment are achievable. PMID:28282862
Sparse-view Reconstruction of Dynamic Processes by Neutron Tomography
NASA Astrophysics Data System (ADS)
Wang, Hu; Kaestner, Anders; Zou, Yubin; Lu, Yuanrong; Guo, Zhiyu
In neutron tomography, hundreds of projections over the range of 0-180 degrees are required to reconstruct the attenuation matrix with the traditional filtered back projection (FBP) algorithm, and the total acquisition time can reach several hours. This poor temporal resolution restricts neutron tomography to static or quasi-static processes. Reducing the number of projections is a possible way to improve temporal resolution, but it relies heavily on sparse-view reconstruction algorithms. To assess the feasibility of sparse-view reconstruction for neutron tomography, both a simulation and an experiment on water uptake in a piece of wood composite were studied, and the results indicate that the temporal resolution of neutron tomography can be improved by combining the Golden Ratio scan strategy with the prior image constrained compressed sensing (PICCS) sparse-view reconstruction algorithm.
Beam hardening correction for sparse-view CT reconstruction
NASA Astrophysics Data System (ADS)
Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing
2015-03-01
Beam hardening, which is caused by spectrum polychromatism of the X-ray beam, may result in various artifacts in the reconstructed image and degrade image quality. The artifacts are further aggravated in sparse-view reconstruction due to insufficient sampling data. Considering the advantages of total-variation (TV) minimization in CT reconstruction with sparse-view data, in this paper we propose a beam hardening correction method for sparse-view CT reconstruction based on Brabant's modeling. In this correction model for beam hardening, the attenuation coefficient of each voxel at the effective energy is modeled and estimated linearly, and can be applied in an iterative framework, such as the simultaneous algebraic reconstruction technique (SART). By integrating the correction model into the forward projector of the algebraic reconstruction technique (ART), the TV minimization can recover images when only a limited number of projections are available. The proposed method does not need prior information about the beam spectrum. Preliminary validation using Monte Carlo simulations indicates that the proposed method can provide better reconstructed images from sparse-view projection data, with effective suppression of artifacts caused by beam hardening. With appropriate modeling of other degrading effects such as photon scattering, the proposed framework may provide a new way for low-dose CT imaging.
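The SART/ART iteration mentioned above, stripped of the beam-hardening model, reduces to the classical Kaczmarz row-projection update; a toy sketch (with a random matrix standing in for the projection operator) is:

```python
import numpy as np

def art(A, b, sweeps=500, relax=1.0):
    """ART (Kaczmarz): sweep the rows, projecting x onto each hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    row_norm2 = np.sum(A ** 2, axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]   # project onto row i
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))      # toy "projection" matrix
x_true = rng.standard_normal(10)
b = A @ x_true                         # consistent, noiseless sinogram data
x_hat = art(A, b)
```

For a consistent system the sweeps converge to the exact solution; an actual SART implementation would interleave such updates with the beam-hardening-corrected forward projector and a TV step.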
Wavelet sparse transform optimization in image reconstruction based on compressed sensing
NASA Astrophysics Data System (ADS)
Ziran, Wei; Huachuang, Wang; Jianlin, Zhang
2017-06-01
High image sparsity is very important for improving the accuracy of compressed sensing image reconstruction, and the wavelet transform can make an image markedly sparser. This paper presents an optimization method based on the wavelet sparse transform for image reconstruction by compressed sensing, in which a restraining matrix is designed to optimize the wavelet sparse transform. First, the wavelet coefficients are obtained by wavelet transform of the original signal data; these coefficients tend to decrease gradually in magnitude. The restraining matrix, as part of the image sparse transform, suppresses the small coefficients so as to make the wavelet coefficients sparser. When the sampling rate is between 0.15 and 0.45, simulation results show the greatest improvement in reconstructed image quality, with the peak signal to noise ratio (PSNR) increased by about 0.5 dB to 1 dB. The improvement in reconstruction accuracy is even more pronounced for fingerprint texture images, which to some extent compensates for the low accuracy of wavelet-based compressed sensing when reconstructing texture images.
Sparse image reconstruction for molecular imaging.
Ting, Michael; Raich, Raviv; Hero, Alfred O
2009-06-01
The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. This paper, therefore, does not assume a low coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
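The hybrid rule derived in the paper generalizes the two classical thresholding rules, which can be stated in a couple of lines; the hybrid rule itself depends on the LAZE hyperparameters and is not reproduced here.

```python
import numpy as np

def hard_threshold(w, t):
    """Keep coefficients with |w| > t unchanged; zero the rest."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Shrink every coefficient toward zero by t; magnitudes below t become zero."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([0.4, -0.9, 2.5, -3.0])
h = hard_threshold(w, 1.0)   # large coefficients pass through unchanged
s = soft_threshold(w, 1.0)   # large coefficients are biased toward zero by t
```

The contrast between the two outputs (no bias vs. uniform shrinkage of the survivors) is exactly the trade-off the hybrid rule interpolates.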
Electrocardiograph signal denoising based on sparse decomposition.
Zhu, Junjiang; Li, Xiaolu
2017-08-01
Noise in ECG signals will affect the result of post-processing if left untreated. Since ECG signals are highly subject-dependent, a linear denoising method with a specific threshold that works well on one subject can fail on another. Therefore, in this Letter, a sparse-based method, which represents every segment of the signal using a different linear combination of atoms from a dictionary, is used to denoise ECG signals, with particular attention to the myoelectric interference present in them. Firstly, a denoising model for ECG signals is constructed. Then the model is solved by the matching pursuit algorithm. To obtain better results, four kinds of dictionaries are investigated on ECG signals from the MIT-BIH arrhythmia database and compared with a wavelet transform (WT)-based method. Signal-to-noise ratio (SNR) and mean square error (MSE) between the estimated and original signals are used as performance indicators. The results show that the present method achieves a higher SNR and a smaller MSE.
An evaluation of GPU acceleration for sparse reconstruction
NASA Astrophysics Data System (ADS)
Braun, Thomas R.
2010-04-01
Image processing applications typically parallelize well. This gives a developer interested in data throughput several different implementation options, including multiprocessor machines, general purpose computation on the graphics processor, and custom gate-array designs. Herein, we investigate the first two options for dictionary learning and sparse reconstruction, specifically focusing on the K-SVD algorithm for dictionary learning and Batch Orthogonal Matching Pursuit for sparse reconstruction. These methods have been shown to provide state-of-the-art results for image denoising, classification, and object recognition. We explore the GPU implementation and show that GPUs are not significantly better or worse than CPUs for this application.
Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging
Sheen, David M.; Hall, Thomas E.
2014-06-09
Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
Multi-shell diffusion signal recovery from sparse measurements
Rathi, Y.; Michailovich, O.; Laun, F.; Setsompop, K.; Grant, P. E.; Westin, C-F
2014-01-01
For accurate estimation of the ensemble average diffusion propagator (EAP), traditional multi-shell diffusion imaging (MSDI) approaches require acquisition of diffusion signals for a range of b-values. However, this makes the acquisition time too long for several types of patients, making it difficult to use in a clinical setting. In this work, we propose a new method for the reconstruction of diffusion signals in the entire q-space from highly under-sampled sets of MSDI data, thus reducing the scan time significantly. In particular, to sparsely represent the diffusion signal over multiple q-shells, we propose a novel extension to the framework of spherical ridgelets by accurately modeling the monotonically decreasing radial component of the diffusion signal. Further, we enforce the reconstructed signal to have smooth spatial regularity in the brain, by minimizing the total variation (TV) norm. We combine these requirements into a novel cost function and derive an optimal solution using the Alternating Directions Method of Multipliers (ADMM) algorithm. We use a physical phantom data set with known fiber crossing angle of 45° to determine the optimal number of measurements (gradient directions and b-values) needed for accurate signal recovery. We compare our technique with a state-of-the-art sparse reconstruction method (i.e., the SHORE method of (Cheng et al., 2010)) in terms of angular error in estimating the crossing angle, incorrect number of peaks detected, normalized mean squared error in signal recovery as well as error in estimating the return-to-origin probability (RTOP). Finally, we also demonstrate the behavior of the proposed technique on human in-vivo data sets. Based on these experiments, we conclude that using the proposed algorithm, at least 60 measurements (spread over three b-value shells) are needed for proper recovery of MSDI data in the entire q-space. PMID:25047866
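The paper's ADMM solver targets a spherical-ridgelet-plus-TV cost; as a generic illustration of the same splitting pattern, the sketch below applies ADMM to the simpler lasso problem (dimensions and penalty weight are assumptions, not the paper's settings).

```python
import numpy as np

def admm_lasso(A, y, lam=0.01, rho=1.0, iters=500):
    """ADMM for min_x 0.5*||Ax - y||^2 + lam*||x||_1 via the split x = z."""
    n = A.shape[1]
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))    # cached factor for every x-update
    Aty = A.T @ y
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = Q @ (Aty + rho * (z - u))                # quadratic subproblem
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # prox of the l1 term
        u += x - z                                   # dual (scaled multiplier) update
    return z

rng = np.random.default_rng(3)
m, n = 50, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[7, 60, 101]] = [3.0, -2.0, 1.5]
y = A @ x_true
x_hat = admm_lasso(A, y)
```

In the paper the l1 prox is replaced by the ridgelet/TV proximal steps, but the alternate-minimize-then-dual-update structure is the same.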
Robust compressive sensing of sparse signals: a review
NASA Astrophysics Data System (ADS)
Carrillo, Rafael E.; Ramirez, Ana B.; Arce, Gonzalo R.; Barner, Kenneth E.; Sadler, Brian M.
2016-12-01
Compressive sensing generally relies on the ℓ2 norm for data fidelity, whereas in many applications, robust estimators are needed. Among the scenarios in which robust performance is required, applications where the sampling process is performed in the presence of impulsive noise, i.e., measurements are corrupted by outliers, are of particular importance. This article overviews robust nonlinear reconstruction strategies for sparse signals based on replacing the commonly used ℓ2 norm by M-estimators as data fidelity functions. The derived methods outperform existing compressed sensing techniques in impulsive environments, while achieving good performance in light-tailed environments, thus offering a robust framework for CS.
Source reconstruction for neutron coded-aperture imaging: A sparse method.
Wang, Dongming; Hu, Huasi; Zhang, Fengna; Jia, Qinggang
2017-08-01
Neutron coded-aperture imaging has been developed as an important diagnostic for inertial fusion studies in recent decades. It is used to measure the distribution of neutrons produced in deuterium-tritium plasma. Source reconstruction is an essential part of the coded-aperture imaging. In this paper, we applied a sparse reconstruction method to neutron source reconstruction. This method takes advantage of the sparsity of the source image. Monte Carlo neutron transport simulations were performed to obtain the system response. An interpolation method was used while obtaining the spatially variant point spread functions on each point of the source in order to reduce the number of point spread functions that needs to be calculated by the Monte Carlo method. Source reconstructions from simulated images show that the sparse reconstruction method can result in higher signal-to-noise ratio and less distortion at a relatively high statistical noise level.
Smoothed l0 Norm Regularization for Sparse-View X-Ray CT Reconstruction
Li, Ming; Peng, Chengtao; Guan, Yihui; Xu, Pin
2016-01-01
Low-dose computed tomography (CT) reconstruction is a challenging problem in medical imaging. To complement the standard filtered back-projection (FBP) reconstruction, sparse regularization reconstruction gains more and more research attention, as it promises to reduce radiation dose, suppress artifacts, and improve noise properties. In this work, we present an iterative reconstruction approach using improved smoothed l0 (SL0) norm regularization, in which the l0 norm is approximated by a family of continuous functions to fully exploit the sparseness of the image gradient. Due to the excellent sparse representation of the reconstructed signal, the desired tissue details are preserved in the resulting images. To evaluate the performance of the proposed SL0 regularization method, we reconstruct the simulated dataset acquired from the Shepp-Logan phantom and a clinical head slice image. Additional experimental verification is also performed with two real datasets from a scanned animal experiment. Compared to the referenced FBP reconstruction and the total variation (TV) regularization reconstruction, the results clearly reveal that the presented method has characteristic strengths. In particular, it improves reconstruction quality via reducing noise while preserving anatomical features. PMID:27725935
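The abstract applies SL0 to the image gradient inside a CT model; a bare-bones sketch of the SL0 idea itself, on a generic underdetermined system, looks like this (step size, annealing schedule, and dimensions are assumptions):

```python
import numpy as np

def sl0(A, y, sigma_min=0.01, decay=0.7, mu=1.0, inner=20):
    """Smoothed-l0: descend sum over i of -exp(-x_i^2/2s^2), projected onto {x : Ax = y}."""
    pinvA = np.linalg.pinv(A)
    x = pinvA @ y                                   # minimum-energy feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            x = x - mu * x * np.exp(-x ** 2 / (2 * sigma ** 2))  # push small entries to zero
            x = x - pinvA @ (A @ x - y)             # project back onto the constraint set
        sigma *= decay                              # anneal the smoothing width
    return x

rng = np.random.default_rng(5)
m, n = 40, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[10, 40, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = sl0(A, y)
```

As sigma shrinks, the smoothed measure approaches an l0 count, so the iterates are steered toward the sparsest vector consistent with the measurements.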
Moving target detection for frequency agility radar by sparse reconstruction
NASA Astrophysics Data System (ADS)
Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi
2016-09-01
Frequency agility radar, with randomly varied carrier frequency from pulse to pulse, exhibits superior performance against electromagnetic interference compared to the conventional fixed carrier frequency pulse-Doppler radar. A novel moving target detection (MTD) method is proposed to estimate target velocity in frequency agility radar from the pulses within a coherent processing interval by sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments is performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.
A Comparison of Methods for Ocean Reconstruction from Sparse Observations
NASA Astrophysics Data System (ADS)
Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.
2014-12-01
We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods are a flow-based approximation method and a moving least squares-based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
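The moving least squares idea, minus the paper's learned weighting and data-selection modifications, can be sketched as a local weighted linear fit; the Gaussian weight, bandwidth, and synthetic linear field below are illustrative assumptions.

```python
import numpy as np

def mls(pts, vals, q, h=0.3):
    """Moving least squares: weighted linear fit around query point q, evaluated at q."""
    w = np.exp(-np.sum((pts - q) ** 2, axis=1) / h ** 2)   # Gaussian weights by distance to q
    B = np.hstack([np.ones((len(pts), 1)), pts - q])       # linear basis centred at q
    G = B.T @ (w[:, None] * B)                             # weighted normal equations
    rhs = B.T @ (w * vals)
    return np.linalg.solve(G, rhs)[0]                      # constant term = fitted value at q

rng = np.random.default_rng(6)
pts = rng.uniform(0, 1, size=(200, 2))                     # scattered "core sample" sites
vals = 1.0 + 2.0 * pts[:, 0] + 3.0 * pts[:, 1]             # a synthetic linear property field
est = mls(pts, vals, q=np.array([0.5, 0.5]))
```

With a linear basis, MLS reproduces linear fields exactly, so the estimate at (0.5, 0.5) matches the field there; evaluating `mls` over a grid of query points blends these local fits into a global reconstruction.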
Reconstruction techniques for sparse multistatic linear array microwave imaging
NASA Astrophysics Data System (ADS)
Sheen, David M.; Hall, Thomas E.
2014-06-01
Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging beam swept frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.
Point-source reconstruction with a sparse light-sensor array for optical TPC readout
NASA Astrophysics Data System (ADS)
Rutter, G.; Richards, M.; Bennieston, A. J.; Ramachers, Y. A.
2011-07-01
A reconstruction technique for sparse array optical signal readout is introduced and applied to the generic challenge of large-area readout of a large number of point light sources. This challenge finds a prominent example in future, large volume neutrino detector studies based on liquid argon. It is concluded that the sparse array option may be ruled out for reasons of required number of channels when compared to a benchmark derived from charge readout on wire-planes. Smaller-scale detectors, however, could benefit from this technology.
Katkovnik, V; Shevkunov, I A; Petrov, N V; Egiazarian, K
2015-05-15
This work presents a new method for wavefront reconstruction from a digital hologram recorded in an off-axis configuration. The main feature of the proposed algorithm is its strong noise filtering, which results from a problem formulation that accounts for the noise present in the recorded intensity distribution and from a sparse phase and amplitude reconstruction approach based on the data-adaptive block-matching 3D technique. The sparsity assumption means that low-dimensional models can be used to approximate the phase and amplitude. This low dimensionality enables strong suppression of noise components and accurate recovery of the main features of the signals of interest. Crucially, the dictionaries of these sparse models are not known in advance but are reconstructed from the given noisy observations in a multiobjective optimization procedure. We show experimental results demonstrating the effectiveness of our approach.
An adaptive L1/2 sparse regularization algorithm for super-resolution image reconstruction
NASA Astrophysics Data System (ADS)
Xiong, Jiongtao; Liu, Yijun; Ye, Xiangrong
2017-05-01
To address the ill-posed problem in super-resolution image reconstruction, this paper proposes an adaptive regularization approach based on sparse representation. We build a new L1/2 non-convex optimization model and apply a reweighted L2 norm to make the algorithm adaptive. Experimental results show a significant improvement in denoising and in preserving edge details; the method outperforms several traditional methods in peak signal-to-noise ratio and structural similarity.
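The reweighted-L2 treatment of the non-convex L1/2 penalty can be sketched as iteratively reweighted least squares (IRLS). This is an illustrative sketch rather than the paper's algorithm: the smoothing schedule, `lam`, and the toy recovery problem are assumptions.

```python
import numpy as np

def irls_lhalf(A, y, lam=1e-3, outer=8, inner=10):
    """IRLS sketch for min ||Ax - y||^2 + lam * sum_i |x_i|^(1/2):
    the L1/2 term is replaced by a quadratic surrogate with weights
    derived from the smoothed penalty (x^2 + eps)^(1/4), annealing eps."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # dense least-squares start
    eps = 1.0
    for _ in range(outer):
        for _ in range(inner):
            w = (x * x + eps) ** -0.75         # weights of the quadratic surrogate
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
        eps /= 10.0                            # sharpen the surrogate
    return x

# Recover a 3-sparse vector from 40 random measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
x_hat = irls_lhalf(A, A @ x_true)
print(float(np.max(np.abs(x_hat - x_true))))
```

Each reweighting pass pushes near-zero coefficients harder toward zero (their weights blow up), which is the mechanism that makes the non-convex penalty promote sparsity more aggressively than plain l1.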
SparseCT: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction
NASA Astrophysics Data System (ADS)
Koesters, Thomas; Knoll, Florian; Sodickson, Aaron; Sodickson, Daniel K.; Otazo, Ricardo
2017-03-01
State-of-the-art low-dose CT methods reduce the x-ray tube current and use iterative reconstruction methods to denoise the resulting images. However, due to compromises between denoising and image quality, only moderate dose reductions up to 30-40% are accepted in clinical practice. An alternative approach is to reduce the number of x-ray projections and use compressed sensing to reconstruct the full-tube-current undersampled data. This idea was recognized in the early days of compressed sensing and proposals for CT dose reduction appeared soon afterwards. However, no practical means of undersampling has yet been demonstrated in the challenging environment of a rapidly rotating CT gantry. In this work, we propose a moving multislit collimator as a practical incoherent undersampling scheme for compressed sensing CT and evaluate its application for radiation dose reduction. The proposed collimator is composed of narrow slits and moves linearly along the slice dimension (z), to interrupt the incident beam in different slices for each x-ray tube angle (θ). The reduced projection dataset is then reconstructed using a sparse approach, where 3D image gradients are employed to enforce sparsity. The effects of the collimator slits on the beam profile were measured and represented as a continuous slice profile. SparseCT was tested using retrospective undersampling and compared against commercial current-reduction techniques on phantoms and in vivo studies. Initial results suggest that SparseCT may enable higher performance than current-reduction, particularly for high dose reduction factors.
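The moving multislit collimator can be pictured as a binary sampling mask over (tube angle θ, slice z). The sketch below is hypothetical: the slit period, width, and speed are made-up parameters, not PNNL's or the authors' design.

```python
import numpy as np

def multislit_mask(n_angles=360, n_slices=64, period=8, width=2, speed=1):
    """Binary (theta, z) sampling mask: open slits of `width` slices repeat
    every `period` slices and shift linearly in z as the gantry rotates,
    so each tube angle irradiates a different subset of slices."""
    theta = np.arange(n_angles)[:, None]
    z = np.arange(n_slices)[None, :]
    return ((z - speed * theta) % period) < width

m = multislit_mask()
print(m.mean())   # fraction of rays kept = width / period = 0.25
```

Because the open slices differ from angle to angle, the undersampling is incoherent along θ, which is the property compressed-sensing reconstruction relies on.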
A Sparse Reconstruction Algorithm for Ultrasonic Images in Nondestructive Testing
Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Junior, Flávio Neves; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst
2015-01-01
Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, image quality depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares with an l1 regularization norm. The method is tested by reconstructing an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with that of four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT, and regularized least squares (RLS). The method demonstrates significant resolution improvement compared with B-scan, about 91% using real data. The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700
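An l1-regularized least-squares reconstruction of this kind can be sketched with the iterative shrinkage-thresholding algorithm (ISTA) on a toy 1-D A-scan containing point-like reflectors. The pulse shape, λ, and iteration count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1: a gradient step on the
    quadratic term followed by soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Toy A-scan: two point reflectors convolved with a Gaussian pulse
n = 128
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - n // 2) / 2.0) ** 2)
A = np.stack([np.roll(pulse, k - n // 2) for k in range(n)], axis=1)
x_true = np.zeros(n)
x_true[40], x_true[90] = 1.0, 0.7
y = A @ x_true + 0.01 * np.random.default_rng(2).standard_normal(n)
x_hat = ista(A, y)
print(int(np.argmax(np.abs(x_hat))))            # index of the strongest recovered reflector
```

The soft-thresholding step is what encodes the l1 prior: weak, spread-out responses are suppressed while the point-like reflectors survive, which is the resolution gain over B-scan the abstract reports.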
NASA Astrophysics Data System (ADS)
Wang, Qi; Lian, Zhijie; Wang, Jianming; Chen, Qingliang; Sun, Yukuan; Li, Xiuyan; Duan, Xiaojie; Cui, Ziqiang; Wang, Huaxiang
2016-11-01
Electrical impedance tomography (EIT) reconstruction is a nonlinear and ill-posed problem. Exact reconstruction of an EIT image requires inverting a high-dimensional mathematical model to calculate the conductivity field; the associated computational complexity reduces the achievable frame rate, which is otherwise considered a major advantage of EIT imaging. Single-step, state-estimation, and projection methods have commonly been used to accelerate the reconstruction process; the basic principle of these methods is to reduce computational complexity. However, maintaining high spatial resolution at modest computational cost remains challenging, especially for complex conductivity distributions. This study proposes an approach to accelerate EIT image reconstruction based on compressive sensing (CS) theory, named the CSEIT method. The novel CSEIT method reduces the sampling rate by minimizing redundancy in the measurements, so that detailed reconstruction information is not lost. To obtain a sparse solution, which is the prior condition for signal recovery required by CS theory, a novel image reconstruction algorithm based on patch-based sparse representation is proposed. By applying the new CSEIT framework, the data acquisition time, or sampling rate, is reduced by more than a factor of two, while the accuracy of reconstruction is significantly improved.
Tomographic image reconstruction via estimation of sparse unidirectional gradients.
Polak, Adam G; Mroczka, Janusz; Wysoczański, Dariusz
2017-02-01
Since computed tomography (CT) was developed over 35 years ago, new mathematical ideas and computational algorithms have been continually elaborated to improve the quality of reconstructed images. In recent years, considerable effort has been devoted to applying the theory of sparse solutions of underdetermined systems to the reconstruction of CT images from undersampled data. Its significance stems from the possibility of obtaining good-quality CT images from low-dose projections. Among diverse approaches, total variation (TV), which minimizes the 2D gradients of an image, seems to be the most popular method. In this paper, a new method for CT image reconstruction via sparse gradient estimation (SGE) is proposed. It estimates 1D gradients specified in four directions using an iterative reweighting algorithm. To investigate its properties and to compare it with TV and other related methods, numerical simulations were performed according to a Monte Carlo scheme, using the Shepp-Logan and more realistic brain phantoms scanned at 9-60 directions in the range from 0 to 179°, with measurement data disturbed by additive Gaussian noise at relative levels of 0.1%, 0.2%, 0.5%, 1%, 2%, and 5%. The accuracy of image reconstruction was assessed in terms of the relative root-mean-square (RMS) error. The results show that the proposed SGE algorithm returns more accurate images than TV in cases satisfying the sparsity conditions. In particular, it preserves the sharp edges of regions representing different tissues or organs and yields images of much better quality when reconstructed from a small number of projections disturbed by relatively low measurement noise.
NASA Astrophysics Data System (ADS)
Yuan, J.; Xiao, H.; Cai, Z. M.; Xi, C.
2017-01-01
The port-starboard ambiguity of a conventional single towed linear array sonar is one of the main obstacles to spatial spectrum estimation. To improve target detection and direction-of-arrival (DOA) estimation performance, this paper proposes a novel spatial spectrum sparse reconstruction method based on multiple beamspace measurements (MBM-SR). An array sparse signal model for a manoeuvring towed array is established. The mutual incoherence property (MIP) is then analyzed to ensure that the proposed algorithm has good spatial spectrum reconstruction properties. Simulation results demonstrate that, compared with the conventional beamforming (CBF) algorithm, the proposed algorithm has a clear advantage in ambiguity suppression ratio (ASR) and estimation performance.
Chen, Shuhang; Liu, Huafeng; Hu, Zhenghui; Zhang, Heye; Shi, Pengcheng; Chen, Yunmei
2015-07-01
Although of great clinical value, accurate and robust reconstruction and segmentation of dynamic positron emission tomography (PET) images are great challenges due to low spatial resolution and high noise. In this paper, we propose a unified framework that exploits temporal correlations and variations within image sequences based on low-rank and sparse matrix decomposition. Thus, the two separate inverse problems, PET image reconstruction and segmentation, are accomplished in a simultaneous fashion. Considering low signal to noise ratio and piece-wise constant assumption of PET images, we also propose to regularize low-rank and sparse matrices with vectorial total variation norm. The resulting optimization problem is solved by augmented Lagrangian multiplier method with variable splitting. The effectiveness of proposed approach is validated on realistic Monte Carlo simulation datasets and the real patient data.
Model-based imaging of damage with Lamb waves via sparse reconstruction.
Levine, Ross M; Michaels, Jennifer E
2013-03-01
Ultrasonic guided waves are gaining acceptance for structural health monitoring and nondestructive evaluation of plate-like structures. One configuration of interest is a spatially distributed array of fixed piezoelectric devices. Typical operation consists of recording signals from all transmit-receive pairs and subtracting pre-recorded baselines to detect changes, possibly due to damage or other effects. While techniques such as delay-and-sum imaging as applied to differential signals are both simple and capable of detecting flaws, their performance is limited, particularly when there are multiple damage sites. Here a very different approach to imaging is considered that exploits the expected sparsity of structural damage; i.e., the structure is mostly damage-free. Differential signals are decomposed into a sparse linear combination of location-based components, which are pre-computed from a simple propagation model. The sparse reconstruction techniques of basis pursuit denoising and orthogonal matching pursuit are applied to achieve this decomposition, and a hybrid reconstruction method is also proposed and evaluated. Noisy simulated data and experimental data recorded on an aluminum plate with artificial damage are considered. Results demonstrate the efficacy of all three methods by producing very sparse indications of damage at the correct locations even in the presence of model mismatch and significant noise.
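Of the sparse decomposition methods named above, orthogonal matching pursuit is the simplest to sketch: greedily pick the dictionary column most correlated with the residual, then re-fit all selected coefficients by least squares. The random dictionary below merely stands in for the pre-computed location-based components; sizes and sparsity are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: select k atoms greedily, refitting the
    coefficients on the chosen support by least squares at every step."""
    support, resid = [], y.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        sub = D[:, support]
        c, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ c
    coef[support] = c
    return coef

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [1.0, -1.5, 0.8]       # "damage" at three locations
x_hat = omp(D, D @ x_true, k=3)
print(bool(np.allclose(x_hat, x_true, atol=1e-6)))
```

The re-fit on the growing support is what distinguishes OMP from plain matching pursuit; basis pursuit denoising would instead solve a convex l1 problem over all atoms at once.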
Recursive Recovery of Sparse Signal Sequences From Compressive Measurements: A Review
NASA Astrophysics Data System (ADS)
Vaswani, Namrata; Zhan, Jinchun
2016-07-01
In this article, we review the literature on design and analysis of recursive algorithms for reconstructing a time sequence of sparse signals from compressive measurements. The signals are assumed to be sparse in some transform domain or in some dictionary. Their sparsity patterns can change with time, although, in many practical applications, the changes are gradual. An important class of applications where this problem occurs is dynamic projection imaging, e.g., dynamic magnetic resonance imaging (MRI) for real-time medical applications such as interventional radiology, or dynamic computed tomography.
Efficient Reconstruction of Block-Sparse Signals
2011-01-26
Multiple sparse volumetric priors for distributed EEG source reconstruction.
Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan
2014-10-15
We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region-growing approach. This extension allows the reconstruction of brain structures beyond the cortical surface and facilitates the use of more realistic volumetric head models with more layers, such as cerebrospinal fluid (CSF), compared with the standard 3-layered scalp-skull-brain head models. We illustrate the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter of each subject, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain model and an extended 4-layered model including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each reconstruction using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared with the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows more complex head models and volumetric source priors to be introduced in future studies.
Machinery vibration signal denoising based on learned dictionary and sparse representation
NASA Astrophysics Data System (ADS)
Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen
2015-07-01
Mechanical vibration signal denoising is an important problem for machine damage assessment and health monitoring. Wavelet transforms and sparse reconstruction are powerful and practical methods; however, they rely on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal. To satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to select the best-matching atoms from the dictionary. Finally, the denoised signal is computed from the sparse coefficient vector and the learned dictionary. A simulated signal and a real bearing fault signal are used to evaluate the improved performance of the proposed method through comparison with several denoising algorithms, and its computational efficiency is demonstrated with an illustrative runtime example. The results show that the proposed method outperforms current algorithms while remaining computationally efficient.
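The learn-then-denoise loop can be sketched with a toy alternating scheme: a 1-atom sparse coding step followed by a least-squares dictionary update (MOD-style). The abstract's method uses online dictionary learning and orthogonal matching pursuit instead; the waveforms, sizes, and update rule here are illustrative assumptions.

```python
import numpy as np

def learn_dictionary(patches, n_atoms=4, iters=30):
    """Toy MOD-style dictionary learning: alternate 1-sparse coding (best
    single atom per patch) with a least-squares dictionary update.  Atoms
    are seeded from the principal components for a stable start."""
    D = np.linalg.svd(patches, full_matrices=False)[0][:, :n_atoms]
    cols = np.arange(patches.shape[1])
    for _ in range(iters):
        corr = D.T @ patches
        idx = np.argmax(np.abs(corr), axis=0)       # best atom per patch
        code = np.zeros_like(corr)
        code[idx, cols] = corr[idx, cols]           # 1-sparse codes
        D = patches @ np.linalg.pinv(code)          # least-squares dictionary update
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D

# Noisy vibration-like patches drawn from two underlying waveforms
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 32)
w1, w2 = np.sin(2 * np.pi * 5 * t), np.sign(np.sin(2 * np.pi * 3 * t))
clean = np.stack([w1 if rng.random() < 0.5 else w2 for _ in range(200)], axis=1)
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
D = learn_dictionary(noisy)
# Denoise each patch by projecting it onto its best-matching learned atom
corr = D.T @ noisy
idx = np.argmax(np.abs(corr), axis=0)
denoised = D[:, idx] * corr[idx, np.arange(noisy.shape[1])]
print(float(np.linalg.norm(denoised - clean) / np.linalg.norm(clean)))
```

The noise is suppressed because each patch keeps only the component lying along its learned atom, while the incoherent noise energy spread over the remaining directions is discarded.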
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
Signal Separation of Helicopter Radar Returns Using Wavelet-Based Sparse Signal Optimisation
2016-10-01
Nguyen, Si Tran Nguyen; Kodituwakku, Sandun (report RR-0436): A novel wavelet-based sparse signal representation technique is used to separate the main and tail rotor blade components of a… separation techniques cannot be applied. A sparse signal representation technique is now proposed for this problem with the tunable Q wavelet transform
Sun, Jiedi; Yu, Yang; Wen, Jiangtao
2017-01-01
Remote monitoring of bearing conditions using wireless sensor networks (WSNs) is a developing trend in the industrial field. In complicated industrial environments, WSNs face three main constraints: limited energy, memory, and computational capability. Conventional data-compression methods, which concentrate on data compression alone, cannot overcome these limitations. To address these problems, this paper proposes a compressed data acquisition and reconstruction scheme based on compressed sensing (CS), a novel signal-processing technique, and applies it to bearing condition monitoring via a WSN. The compressed data acquisition is realized by projection transformation and greatly reduces the data volume that the nodes must process and transmit. The reconstruction of the original signals is performed in the host computer by more sophisticated algorithms. Bearing vibration signals not only exhibit sparsity but also have specific structures. This paper introduces the block sparse Bayesian learning (BSBL) algorithm, which exploits the block property and inherent structure of the signals to reconstruct the sparse coefficients in the transform domain and thereby recover the original signals. Using BSBL, CS reconstruction is improved markedly. Experiments and analyses show that the BSBL method performs well and is suitable for practical bearing-condition monitoring. PMID:28635623
Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing
Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi
2015-01-01
Reliable data transmission over lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) is applied to facilitate efficient sparse signal transmissions over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal will be reconstructed from the lossy transmission results using the CS-based reconstruction method at the receiving end. The impacts of packet lengths on transmission efficiency under different channel conditions have been discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, and very favorable results have been observed in terms of both accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
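The interleaving step that mitigates burst loss is easy to sketch: a block interleaver reorders packets before transmission so that a burst of consecutive losses on the link becomes isolated losses after de-interleaving, which better matches the random-sampling model assumed by the CS reconstruction. The dimensions below are illustrative.

```python
def interleave(packets, depth):
    """Block interleaver: write packets row-wise into a depth x width
    grid and read it out column-wise."""
    width = len(packets) // depth
    return [packets[r * width + c] for c in range(width) for r in range(depth)]

def deinterleave(packets, depth):
    """Inverse of interleave: put each received packet back in its slot."""
    width = len(packets) // depth
    out = [None] * len(packets)
    for i, p in enumerate(packets):
        c, r = divmod(i, depth)
        out[r * width + c] = p
    return out

data = list(range(24))
tx = interleave(data, depth=4)
tx[5:9] = [None] * 4                      # burst loss of 4 consecutive packets
rx = deinterleave(tx, depth=4)
gaps = [i for i, p in enumerate(rx) if p is None]
print(gaps)                               # isolated losses: [2, 7, 13, 19]
```

After de-interleaving, the missing entries are scattered rather than contiguous, so the receiver can treat them as random samples dropped from the sparse signal.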
Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin
2016-01-01
Localization of active neural source (ANS) from measurements on head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructed magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements through formulating the infinite series solution of the Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide "smooth" reconstructed MFD with the decrease in unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of BC, parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are directly used), and it was demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.
Sparse Reconstruction Techniques in MRI: Methods, Applications, and Challenges to Clinical Adoption
Yang, Alice Chieh-Yu; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole
2016-01-01
The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in Magnetic Resonance Imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be employed to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MR imaging, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstructions techniques are examined, and the requirements that each make on the undersampled data outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold-standards, are discussed. PMID:27003227
Sparse representation of group-wise FMRI signals.
Lv, Jinglei; Li, Xiang; Zhu, Dajiang; Jiang, Xi; Zhang, Xin; Hu, Xintao; Zhang, Tuo; Guo, Lei; Liu, Tianming
2013-01-01
Human brain function involves complex processes with population codes of neuronal activities. Neuroscience research has demonstrated that sparsity is an important property for representing neuronal activities. Inspired by this finding, significant effort from the scientific community has recently been devoted to sparse representations of signals and patterns, and promising achievements have been made. However, sparse representation of fMRI signals, particularly at the population level across a group of different brains, has rarely been explored. In this paper, we present a novel group-wise sparse representation of task-based fMRI signals from multiple subjects via dictionary learning methods. Specifically, we extract and pool task-based fMRI signals for a set of cortical landmarks, each of which possesses intrinsic anatomical correspondence, from a group of subjects. An effective online dictionary learning algorithm is then employed to learn an over-complete dictionary from the pooled population of fMRI signals, with the dictionary size determined optimally. Our experiments identified meaningful Atoms of Interest (AOI) in the learned dictionary, which correspond to consistent and meaningful functional responses of the brain to external stimuli. Our work demonstrates that sparse representation of group-wise fMRI signals is naturally suitable and effective for recovering the population codes of neuronal signals conveyed in fMRI data.
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
Sound-speed image reconstruction in sparse-aperture 3-D ultrasound transmission tomography.
Jirík, Radovan; Peterlík, Igor; Ruiter, Nicole; Fousek, Jan; Dapp, Robin; Zapf, Michael; Jan, Jirí
2012-02-01
The paper is focused on sound-speed image reconstruction in 3-D ultrasound transmission tomography. Along with ultrasound reflectivity and the attenuation coefficient, sound speed is an important parameter which is related to the type and pathological state of the imaged tissue. This is important in the intended application, breast cancer diagnosis. In contrast to 2-D ultrasound transmission tomography systems, a 3-D system can provide an isotropic spatial resolution in the x-, y-, and z-directions in reconstructed 3-D images of ultrasound parameters. Several challenges must, however, be addressed for 3-D systems: namely, a sparse transducer distribution, low signal-to-noise ratio, and higher computational complexity. These issues are addressed in terms of sound-speed image reconstruction, using edge-preserving regularized algebraic reconstruction in combination with synthetic aperture focusing. The critical points of the implementation are also discussed, because they are crucial to enable a complete 3-D image reconstruction. The methods were tested on a synthetic data set and on data sets measured with the Karlsruhe 3-D ultrasound computer tomography (USCT) I prototype using phantoms. The sound-speed estimates in the reconstructed volumes agreed with the reference values. The breast-phantom outlines and the lesion-mimicking objects were also detectable in the resulting sound-speed volumes.
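The regularized algebraic reconstruction mentioned above builds on the classic ART/Kaczmarz iteration, which projects the current estimate onto each ray-sum hyperplane in turn. A minimal sketch on a toy system of ray sums follows; the 2x2 "image", the rays, and the omission of any regularization term are illustrative simplifications.

```python
import numpy as np

def art(A, y, sweeps=500, relax=1.0):
    """Algebraic reconstruction (Kaczmarz): cycle through the ray equations
    a_i . x = y_i, projecting the estimate onto each hyperplane in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a, yi in zip(A, y):
            x += relax * (yi - a @ x) / (a @ a) * a
    return x

# Toy 2x2 "image" [x11, x12, x21, x22] probed by 6 ray sums
A = np.array([[1., 1, 0, 0], [0, 0, 1, 1],     # row sums
              [1, 0, 1, 0], [0, 1, 0, 1],      # column sums
              [1, 0, 0, 1], [0, 1, 1, 0]])     # diagonal sums
x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_hat = art(A, A @ x_true)
print(x_hat)
```

With noisy, sparse-aperture data the plain iteration above is typically augmented with a relaxation factor below 1 and an edge-preserving regularization step between sweeps, as in the paper.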
Task-based data-acquisition optimization for sparse image reconstruction systems
NASA Astrophysics Data System (ADS)
Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.
2017-03-01
Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.
Surface Reconstruction via Fusing Sparse-Sequence of Depth Images.
Yang, Long; Yan, Qingan; Fu, Yanping; Xiao, Chunxia
2017-01-25
Handheld scanning using commodity depth cameras provides a flexible, low-cost way to obtain 3D models. Existing methods scan a target by densely fusing all the captured depth images, yet most frames are redundant. The jittering frames inevitably embedded in the handheld scanning process blur features on the reconstructed model and can even cause scan failure (i.e., loss of camera tracking). To address these problems, in this paper we propose a novel sparse-sequence fusion (SSF) algorithm for handheld scanning using commodity depth cameras. It first extracts measurements related to camera motion. Based on these measurements, it then progressively constructs a supporting subset of the captured depth image sequence to decrease data redundancy and the interference from jittering frames. Since SSF reveals the intrinsic heavy noise of the original depth images, our method introduces a refinement process to eliminate the raw noise and recover geometric features for the depth images selected into the supporting subset. We finally obtain the fused result by integrating the refined depth images into the truncated signed distance field (TSDF) of the target. Multiple comparison experiments verify the feasibility and validity of SSF for handheld scanning with a commodity depth camera.
Wang, Xiuhong; Mao, Xingpeng; Wang, Yiming; Zhang, Naitong; Li, Bo
2016-01-01
Based on sparse representations, the problem of two-dimensional (2-D) direction of arrival (DOA) estimation is addressed in this paper. A novel sparse 2-D DOA estimation method, called Dimension Reduction Sparse Reconstruction (DRSR), is proposed with pairing by Spatial Spectrum Reconstruction of Sub-Dictionary (SSRSD). By utilizing the angle decoupling method, which transforms a 2-D estimation into two independent one-dimensional (1-D) estimations, the high computational complexity induced by a large 2-D redundant dictionary is greatly reduced. Furthermore, a new angle matching scheme, SSRSD, which is less sensitive to the sparse reconstruction error and has a higher pair-matching probability, is introduced. The proposed method can be applied to any type of orthogonal array without requiring a large number of snapshots or a priori knowledge of the number of signals. The theoretical analyses and simulation results show that the DRSR-SSRSD method performs well for coherent signals, with performance approaching the Cramer-Rao bound (CRB), even under single-snapshot and low signal-to-noise ratio (SNR) conditions. PMID:27649191
Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.
Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo
2015-05-01
It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving low-dose computed tomography (CT) scanning. However, image quality from conventional filtered back-projection (FBP) then usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under a reduced tube-current scanning protocol, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we propose a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. An alternating optimization algorithm is then adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrate that the proposed TV_MP with appropriate control parameters ensures not only a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution than the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
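A minimal sketch of the TV-plus-median-prior idea: a gradient step on the data term, then a step that both descends the TV and pulls each voxel toward the auxiliary median image m. The generic matrix A, the phantom, and all parameter values are illustrative stand-ins for the paper's CT system model and alternating optimization.

```python
import numpy as np
from scipy.ndimage import median_filter

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed anisotropic total variation of a 2-D image."""
    dv, dh = np.diff(x, axis=0), np.diff(x, axis=1)
    sv = dv / np.sqrt(dv**2 + eps)
    sh = dh / np.sqrt(dh**2 + eps)
    g = np.zeros_like(x)
    g[:-1, :] -= sv; g[1:, :] += sv
    g[:, :-1] -= sh; g[:, 1:] += sh
    return g

def tv_mp(A, y, shape, lam=0.02, gamma=0.1, n_iter=300):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(shape)
    for _ in range(n_iter):
        x -= step * (A.T @ (A @ x.ravel() - y)).reshape(shape)   # data fidelity
        m = median_filter(x, size=3)                # auxiliary median image m
        x -= step * lam * tv_grad(x) + gamma * step * (x - m)    # TV + median pull
    return x

rng = np.random.default_rng(1)
truth = np.zeros((16, 16)); truth[4:12, 4:12] = 1.0   # piecewise-constant phantom
A = rng.standard_normal((400, 256))                   # generic linear "projector"
x_rec = tv_mp(A, A @ truth.ravel(), truth.shape)
```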
An enhanced sparse representation strategy for signal classification
NASA Astrophysics Data System (ADS)
Zhou, Yin; Gao, Jinglun; Barner, Kenneth E.
2012-06-01
Sparse representation based classification (SRC) has achieved state-of-the-art results on face recognition. It is hence desirable to extend its power to a broader range of classification tasks in pattern recognition. SRC first encodes a query sample as a linear combination of a few atoms from a predefined dictionary. It then identifies the label by evaluating which class yields the minimum reconstruction error. The effectiveness of SRC is limited by an important assumption: that data points from different classes are not distributed along the same radial direction. Otherwise, the approach loses its discrimination ability, even though data from different classes may be well separated in terms of Euclidean distance. This assumption is reasonable for face recognition, since images of the same subject under different intensity levels still belong to the same class. However, it is not always satisfied for many other real-world data, e.g., the Iris dataset, where classes are stratified along the radial direction. In this paper, we propose a new coding strategy, called Nearest-Farthest Neighbors based SRC (NF-SRC), to effectively overcome this limitation. The dictionary is composed of both the nearest neighbors and the farthest neighbors: the nearest neighbors narrow the selection of candidate samples, while the farthest neighbors make the dictionary more redundant. NF-SRC encodes each query signal in a greedy way similar to OMP. The proposed approach is evaluated in extensive experiments, and the encouraging results demonstrate its feasibility.
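The SRC decision rule itself (encode greedily, then compare per-class reconstruction errors) can be sketched as follows. A plain OMP coder over all training samples is used here, whereas NF-SRC would first restrict the dictionary to nearest and farthest neighbors; the data and parameters are illustrative.

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: pick up to k atoms of D to approximate y."""
    idx, coef, r = [], np.zeros(0), y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    return idx, coef

def src_classify(X_train, labels, y, k=5):
    """Label y by the class whose selected atoms give the smallest
    reconstruction error (the core SRC decision rule)."""
    D = X_train / np.linalg.norm(X_train, axis=0)   # normalized training atoms
    idx, coef = omp(D, y, k)
    errs = {}
    for c in np.unique(labels):
        part = sum((a * D[:, i] for i, a in zip(idx, coef) if labels[i] == c),
                   np.zeros_like(y))
        errs[c] = np.linalg.norm(y - part)
    return min(errs, key=errs.get)

rng = np.random.default_rng(2)
mu0, mu1 = rng.standard_normal(20), rng.standard_normal(20)  # class prototypes
X = np.column_stack([mu0 + 0.1 * rng.standard_normal(20) for _ in range(10)]
                    + [mu1 + 0.1 * rng.standard_normal(20) for _ in range(10)])
labels = np.array([0] * 10 + [1] * 10)
pred = src_classify(X, labels, mu0 + 0.1 * rng.standard_normal(20))
```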
Lee, Heui Chang; Song, Bongyong; Kim, Jin Sung; Jung, James J.; Li, H. Harold; Mutic, Sasa; Park, Justin C.
2016-01-01
The purpose of this study is to develop a fast CBCT reconstruction framework with proven convergence, based on compressed sensing theory, which not only lowers the imaging dose but is also computationally practicable in a busy clinic. We simplified the original mathematical formulation of gradient projection for sparse reconstruction (GPSR) to minimize the number of forward and backward projections required by the line-search process at each iteration. GPSR-based algorithms generally showed improved image quality over the FDK algorithm, especially when only a small number of projections were available. When only 40 projections from a 360-degree fan-beam geometry were used, the quality of the GPSR-based algorithms surpassed that of the FDK algorithm within 10 iterations in terms of mean squared relative error. Our proposed GPSR algorithm converged as fast as conventional GPSR with reasonably low computational complexity. The outcomes demonstrate that the proposed GPSR algorithm is attractive for real-time applications such as on-line IGRT. PMID:27894103
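Stripped of the cone-beam projector and the line search, the GPSR core is a projected gradient step on the nonnegative split x = u - v. The fixed step size and random test matrix below are illustrative simplifications, not the paper's formulation.

```python
import numpy as np

def gpsr(A, y, tau, n_iter=1000):
    """GPSR sketch: min 0.5*||y - Ax||^2 + tau*||x||_1 via the split
    x = u - v with u, v >= 0 and fixed-step projected gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    n = A.shape[1]
    u = np.zeros(n); v = np.zeros(n)
    for _ in range(n_iter):
        g = A.T @ (A @ (u - v) - y)          # gradient of the quadratic term
        u = np.maximum(u - step * (g + tau), 0.0)
        v = np.maximum(v - step * (tau - g), 0.0)
    return u - v

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60); x_true[[5, 17, 42]] = [1.0, -1.0, 0.7]
x_rec = gpsr(A, A @ x_true, tau=0.005)
```

A real implementation would replace the fixed step with the backtracking or Barzilai-Borwein rules the GPSR literature describes.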
CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization
Chen, Zijia; Zhou, Linghong
2015-01-01
Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections used to reconstruct CT images, also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examinations. To overcome the disadvantages of the total variation (TV) minimization method, in this work we introduce a novel adaptive TpV regularization into sparse-projection image reconstruction and use the FISTA technique to accelerate iterative convergence. The numerical experiments demonstrate that the proposed method suppresses noise and artifacts more efficiently, and preserves structure information better, than other existing reconstruction methods. PMID:26089962
NASA Astrophysics Data System (ADS)
Mejia, Yuri H.; Arguello, Henry
2016-05-01
The compressive sensing state of the art proposes random Gaussian and Bernoulli measurement matrices. Nevertheless, the design of the measurement matrix is often subject to physical constraints, and therefore it is frequently not possible for the matrix to follow a Gaussian or Bernoulli distribution. Examples of these limitations are the structured and sparse matrices of compressive X-ray and compressive spectral imaging systems. A standard algorithm for recovering sparse signals minimizes an objective function that combines a quadratic error term with a sparsity-inducing regularization term. This problem can be solved using iterative algorithms for linear inverse problems. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity. However, current algorithms are slow to reach high-quality image reconstructions because they do not exploit the structured and sparse characteristics of the compressive measurement matrices. This paper develops a gradient-based algorithm for compressive sensing reconstruction that includes a filtering step, yielding improved quality in fewer iterations. The algorithm modifies the iterative solution so that it is forced to converge to a filtered version of the residual A^T y, where y is the measurement vector and A is the compressive measurement matrix. We show that the algorithm including the filtering step converges faster than the unfiltered version. We design various filters motivated by the structure of A^T y. Extensive simulation results using various sparse and structured matrices highlight the relative performance gain over the existing iterative process.
Flutter signal extracting technique based on FOG and self-adaptive sparse representation algorithm
NASA Astrophysics Data System (ADS)
Lei, Jian; Meng, Xiangtao; Xiang, Zheng
2016-10-01
Because of the various moving parts inside a spacecraft, its structure can undergo minor angular vibrations while in orbit, which blur the images formed by the space camera. Image compensation is therefore required to eliminate or alleviate the effect of this movement on image formation, and precise measurement of the flutter angle is necessary. Owing to advantages such as high sensitivity, broad bandwidth, simple structure, and the absence of internal mechanical moving parts, a fiber optic gyro (FOG) is adopted in this study to measure the minor angular vibration, and the movement causing image degradation is then obtained by calculation. The movement-information extraction algorithm based on self-adaptive sparse representation uses an arctangent function approximating the L0 norm to construct an unconstrained sparse reconstruction model for noisy signals, and solves the model with a method based on the steepest descent and BFGS algorithms to estimate the sparse signal. Exploiting the principle that random noise cannot be represented by a linear combination of dictionary elements, the useful signal and the random noise are then separated effectively. Because random noise is the main way minor angular vibration interferes with image formation of the space camera, the sparse representation algorithm can extract the useful information to a large extent and serves as a fitting preprocessing step for image restoration. The self-adaptive sparse representation algorithm presented in this paper is applied to the measured minor angular vibration signal of a FOG used on a certain spacecraft. Component analysis of the processing results shows that the algorithm extracts the micro angular vibration signal of the FOG precisely and effectively, achieving a precision of 0.1".
NASA Astrophysics Data System (ADS)
Zeng, Jian; Yang, Jungang; Wu, Hanyang
2017-02-01
Super-resolution based on sparse reconstruction is an effective way to deal with the closely-spaced-objects problem, but when the targets are in a noisy environment the noise spreads over the entire field, destroying the sparsity of the original scene. To address this, this paper proposes a super-resolution method with adaptive reconstruction ability in noisy environments. The method takes full advantage of the structural characteristics of the sensor and of the reconstruction-algorithm parameters: it establishes an infrared imaging model of the observed signals with pixel meshing, builds a sparse representation of the positions and amplitudes of the closely spaced objects, and constructs an over-complete dictionary from the point spread function of the optical system. The final step keeps the reconstruction parameters within a reasonable range by controlling the ratio of non-zero elements in the rebuilt scene, thereby removing noise interference and reconstructing the sparse targets accurately. Simulation results show that the proposed method reconstructs adaptively in noisy environments.
2015-06-01
The Use of Compressive Sensing to Reconstruct Radiation Characteristics of Wide-Band Antennas from Sparse Measurements. Patrick Debroux and Berenice Verdin, Army Research Laboratory. Approved for public release.
Universal Collaboration Strategies for Signal Detection: A Sparse Learning Approach
NASA Astrophysics Data System (ADS)
Khanduri, Prashant; Kailkhura, Bhavya; Thiagarajan, Jayaraman J.; Varshney, Pramod K.
2016-10-01
This paper considers the problem of high dimensional signal detection in a large distributed network whose nodes can collaborate with their one-hop neighboring nodes (spatial collaboration). We assume that only a small subset of nodes communicate with the Fusion Center (FC). We design optimal collaboration strategies which are universal for a class of deterministic signals. By establishing the equivalence between the collaboration strategy design problem and sparse PCA, we solve the problem efficiently and evaluate the impact of collaboration on detection performance.
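Once collaboration design is reduced to sparse PCA, any sparse-PCA solver applies. As a generic illustration (not the authors' algorithm), a truncated power iteration extracts the leading k-sparse eigenvector of a covariance-like matrix S; the planted-direction test matrix is illustrative.

```python
import numpy as np

def sparse_pca_tpower(S, k, n_iter=100, seed=0):
    """Leading k-sparse eigenvector of symmetric S by truncated power
    iteration: multiply, keep the k largest-magnitude entries, renormalize."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(S.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        w = S @ w
        keep = np.argsort(np.abs(w))[-k:]     # indices of the k largest entries
        mask = np.zeros_like(w); mask[keep] = 1.0
        w = w * mask
        w /= np.linalg.norm(w)
    return w

rng = np.random.default_rng(5)
v = np.zeros(30); v[[2, 9, 20]] = 1.0 / np.sqrt(3)   # planted sparse direction
R = rng.standard_normal((30, 30))
S = 5.0 * np.outer(v, v) + 0.01 * (R + R.T)          # symmetric, v-dominated
w = sparse_pca_tpower(S, k=3)
```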
Accuracy of femur reconstruction from sparse geometric data using a statistical shape model.
Zhang, Ju; Besier, Thor F
2017-04-01
Sparse geometric information from limited field-of-view medical images is often used to reconstruct the femur in biomechanical models of the hip and knee. However, the full femur geometry is needed to establish boundary conditions such as muscle attachment sites and joint axes which define the orientation of joint loads. Statistical shape models have been used to estimate the geometry of the full femur from varying amounts of sparse geometric information. However, the effect that different amounts of sparse data have on reconstruction accuracy has not been systematically assessed. In this study, we compared shape model and linear scaling reconstruction of the full femur surface from varying proportions of proximal and distal partial femur geometry in combination with morphometric and landmark data. We quantified reconstruction error in terms of surface-to-surface error as well as deviations in the reconstructed femur's anatomical coordinate system, which is important for biomechanical models. Using a partial proximal femur surface, mean shape model-based reconstruction surface error was 1.8 mm with 0.15° or less anatomic axis error, compared to 19.1 mm and 2.7-5.6° for linear scaling. Similar results were found when using a partial distal surface. However, varying amounts of proximal or distal partial surface data had a negligible effect on reconstruction accuracy. Our results show that given an appropriate set of sparse geometric data, a shape model can reconstruct full femur geometry with far greater accuracy than simple scaling.
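The shape-model reconstruction step reduces to a small linear solve: fit the PCA mode coefficients to the observed landmarks, then synthesize the full surface. The ridge-regularized least-squares sketch below uses hypothetical array shapes and ignores the rigid alignment a real pipeline needs.

```python
import numpy as np

def fit_shape_model(mean_shape, modes, obs_idx, obs_points, lam=1e-6):
    """Fit PCA shape-model coefficients b to sparsely observed landmarks.

    mean_shape : (n, 3) mean landmark positions
    modes      : (m, n, 3) principal modes of variation
    obs_idx    : indices of the observed landmarks
    obs_points : (len(obs_idx), 3) observed positions
    Returns the full reconstructed shape mean + sum_j b_j * modes[j]."""
    m = modes.shape[0]
    Phi = modes[:, obs_idx, :].reshape(m, -1).T       # (3*n_obs, m) system
    d = (obs_points - mean_shape[obs_idx]).ravel()    # observed deviations
    b = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ d)
    return mean_shape + np.tensordot(b, modes, axes=1)

rng = np.random.default_rng(6)
n, m = 50, 4
mean_shape = rng.standard_normal((n, 3))
modes = rng.standard_normal((m, n, 3)) / np.sqrt(n)
b_true = np.array([1.0, -0.5, 0.3, 0.8])
full = mean_shape + np.tensordot(b_true, modes, axes=1)
obs = np.arange(20)                      # only the first 20 landmarks observed
recon = fit_shape_model(mean_shape, modes, obs, full[obs])
```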
Clutter Mitigation in Echocardiography Using Sparse Signal Separation
Turek, Javier S.; Elad, Michael; Yavneh, Irad
2015-01-01
In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used to learn the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as to experimental cardiac ultrasound data. In both cases MCA is demonstrated to outperform the FIR filter and to obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. On experimental heart data, MCA achieved clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622
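MCA's separation principle, that each component is sparse in its own dictionary, fits in a short 1-D toy: fixed low-frequency DCT atoms stand in for tissue and the identity for clutter spikes, whereas the paper learns its dictionary adaptively from echo data. All signals and parameters here are illustrative.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def mca_separate(s, n_atoms=8, lam=0.05, n_iter=200):
    """MCA sketch: split s into a smooth part (sparse over low-frequency
    DCT atoms) and a spiky part (sparse over the identity) by alternating
    shrinkage updates."""
    n = len(s); k = np.arange(n)
    D = np.cos(np.pi * (k[:, None] + 0.5) * np.arange(n_atoms)[None, :] / n)
    D /= np.linalg.norm(D, axis=0)        # orthonormal low-frequency atoms
    a = np.zeros(n_atoms)                 # smooth coefficients
    p = np.zeros(n)                       # spiky component
    for _ in range(n_iter):
        a = soft(D.T @ (s - p), lam)      # exact update (orthonormal columns)
        p = soft(s - D @ a, lam)          # exact update for the identity part
    return D @ a, p

n = 64; k = np.arange(n)
smooth = np.cos(np.pi * (k + 0.5) * 2 / n)            # matches a low-freq atom
spikes = np.zeros(n); spikes[[10, 40]] = 3.0
tissue, clutter = mca_separate(smooth + spikes)
```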
Sparse signal representation and its applications in ultrasonic NDE.
Zhang, Guang-Ming; Zhang, Cheng-Zhong; Harvey, David M
2012-03-01
Many sparse signal representation (SSR) algorithms have been developed in the past decade. The advantages of SSR, such as compact representations and super resolution, lead to state-of-the-art performance in processing ultrasonic non-destructive evaluation (NDE) signals. Choosing a suitable SSR algorithm and designing an appropriate overcomplete dictionary are key to success. After a brief review of sparse signal representation methods and the design of overcomplete dictionaries, this paper addresses the recent accomplishments of SSR in processing ultrasonic NDE signals. The advantages and limitations of SSR algorithms and of the various overcomplete dictionaries widely used in ultrasonic NDE applications are explored in depth. Their performance improvement over conventional signal processing methods in applications such as ultrasonic flaw detection and noise suppression, echo separation and echo estimation, and ultrasonic imaging is investigated. Challenging issues met in practical ultrasonic NDE applications, for example the design of a good dictionary, are discussed, and representative experimental results are presented for demonstration. Copyright © 2011 Elsevier B.V. All rights reserved.
Yu, Dongdong; Zhang, Shuang; An, Yu; Hu, Yifang
2015-01-01
Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at the cellular and molecular levels; it includes modalities such as bioluminescence tomography, fluorescence molecular tomography, and Cerenkov luminescence tomography. For all of these modalities the inverse problem is ill-posed, which causes nonunique solutions. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR). Considering the sparsity characteristics of the reconstructed sources, sparsity can be regarded as a kind of a priori information, and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to achieve fast and accurate reconstruction. Experimental results on a numerical simulation and an in vivo mouse demonstrate the effectiveness and potential of the proposed method. PMID:26421055
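The linearized Bregman iteration itself is two lines per step: accumulate the back-projected residual, then shrink. The sketch below uses a random matrix in place of the optical-tomography forward model, and its parameter values are illustrative.

```python
import numpy as np

def linearized_bregman(A, y, mu=50.0, n_iter=3000):
    """Linearized Bregman sketch for (approximately) min ||x||_1 s.t. Ax = y:
    accumulate the back-projected residual in v, then shrink and scale."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size below 1/||A||^2
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (y - A @ x)                   # accumulate residual
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60); x_true[[8, 33, 55]] = [1.0, 1.5, -0.9]
x_rec = linearized_bregman(A, A @ x_true)
```

Larger mu pushes the limit point closer to the basis-pursuit solution at the cost of more iterations.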
Optimization-based reconstruction of sparse images from few-view projections
NASA Astrophysics Data System (ADS)
Han, Xiao; Bian, Junguo; Ritman, Erik L.; Sidky, Emil Y.; Pan, Xiaochuan
2012-08-01
In this work, we investigate optimization-based image reconstruction from few-view (i.e. less than ten views) projections of sparse objects such as coronary-artery specimens. Using optimization programs as a guide, we formulate constraint programs as reconstruction programs and develop algorithms to reconstruct images through solving the reconstruction programs. Characterization studies are carried out for elucidating the algorithm properties of ‘convergence’ (relative to designed solutions) and ‘utility’ (relative to desired solutions) by using simulated few-view data calculated from a discrete FORBILD coronary-artery phantom, and real few-view data acquired from a human coronary-artery specimen. Study results suggest that carefully designed reconstruction programs and algorithms can yield accurate reconstructions of sparse images from few-view projections.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact-force reconstruction and a general sparse deconvolution model of the impact force is constructed. Second, a novel impact-force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact-force reconstruction and relatively large-scale consecutive impact-force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantages of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both the single and the consecutive impact-force reconstruction.
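A sparse deconvolution model of this kind can be written as y = H f, with H the Toeplitz matrix of the impulse response and f the sparse force history. The sketch below solves the l1-regularized problem with plain ISTA rather than the paper's PDIPM, purely for brevity; the impulse response and spike train are made up.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_deconvolve(h, y, lam=0.01, n_iter=3000):
    """Recover a sparse force history f from the response y = H f, where H
    is the lower-triangular Toeplitz convolution matrix of the impulse
    response h. Solved with ISTA on the l1-regularized least-squares model."""
    n = len(y)
    H = np.array([np.concatenate([np.zeros(j), h[:n - j]])
                  for j in range(n)]).T          # column j = h shifted by j
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    f = np.zeros(n)
    for _ in range(n_iter):
        f = soft(f + step * (H.T @ (y - H @ f)), step * lam)
    return f

t = np.arange(60)
h = np.exp(-0.3 * t) * np.cos(0.5 * t)      # decaying impulse response, h[0]=1
f_true = np.zeros(60); f_true[[5, 20]] = [1.0, 2.0]
y = np.convolve(f_true, h)[:60]             # equals H @ f_true
f_rec = sparse_deconvolve(h, y)
```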
Reconstructing cortical current density by exploring sparseness in the transform domain.
Ding, Lei
2009-05-07
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
NASA Astrophysics Data System (ADS)
Deng, Shuangcheng; Jiang, Lipei; Cao, Yingyu; Zhang, Junwen; Zheng, Haiyang
2012-01-01
The 3D reconstruction for freehand 3D ultrasound is challenging because the recorded B-scans are not only sparse but also non-parallel (they may in fact intersect each other). Conventional volume reconstruction methods cannot reconstruct sparse data efficiently without introducing geometric artifacts, and conventional surface reconstruction methods cannot reconstruct surfaces from contours that are arbitrarily oriented in 3D space. We developed a new surface reconstruction method for freehand 3D ultrasound based on the variational implicit function introduced by Greg Turk for shape transformation. In the new method, we first construct on- and off-surface constraints from the segmented contours of all recorded B-scans, then use a variational interpolation technique to obtain a single implicit function in 3D. Finally, the implicit function is evaluated to extract the zero-valued surface as the reconstruction result. Two experiments were conducted to assess the variational surface reconstruction method, and the results show that the new method can smoothly reconstruct surfaces from sparse contours that are arbitrarily oriented in 3D space.
Semi-blind sparse image reconstruction with application to MRFM.
Park, Se Un; Dobigeon, Nicolas; Hero, Alfred O
2012-09-01
We propose a solution to the image deconvolution problem where the convolution kernel or point spread function (PSF) is assumed to be only partially known. Small perturbations generated from the model are exploited to produce a few principal components explaining the PSF uncertainty in a high-dimensional space. Unlike recent developments on blind deconvolution of natural images, we assume the image is sparse in the pixel basis, a natural sparsity arising in magnetic resonance force microscopy (MRFM). Our approach adopts a Bayesian Metropolis-within-Gibbs sampling framework. The performance of our Bayesian semi-blind algorithm for sparse images is superior to previously proposed semi-blind algorithms such as the alternating minimization algorithm and blind algorithms developed for natural images. We illustrate our myopic algorithm on real MRFM tobacco virus data.
Sparse angular CT reconstruction using non-local means based iterative-correction POCS.
Huang, Jing; Ma, Jianhua; Liu, Nan; Zhang, Hua; Bian, Zhaoying; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan
2011-04-01
In divergent-beam computed tomography (CT), sparse angular sampling frequently leads to conspicuous streak artifacts. In this paper, we propose a novel non-local means (NL-means) based iterative-correction projection onto convex sets (POCS) algorithm, named NLMIC-POCS, for effective and robust sparse angular CT reconstruction. The motivation for NLMIC-POCS is that the NL-means filtered image can provide an acceptable prior solution for the subsequent POCS iterative reconstruction. The NLMIC-POCS algorithm has been tested on simulated and real phantom data. The experimental results show that the presented NLMIC-POCS algorithm can significantly improve the image quality of sparse angular CT reconstruction, suppressing streak artifacts while preserving the edges of the image.
Image reconstruction from sparse data samples along spiral trajectories in MRI
NASA Astrophysics Data System (ADS)
LaRoque, Samuel J.; Sidky, Emil Y.; Pan, Xiaochuan
2007-03-01
We present a method for obtaining accurate image reconstruction from sparsely sampled magnetic resonance imaging (MRI) data obtained along spiral trajectories in Fourier space. This method minimizes the total variation (TV) of the estimated image, subject to the constraint that the Fourier transform of the image matches the known samples in Fourier space. Using this method, we demonstrate accurate image reconstruction from sparse Fourier samples. We also show that the algorithm is reasonably robust to the effects of measurement noise. Reconstruction from such sparse sampling should reduce scan times, improving scan quality through reduction of motion-related artifacts and allowing more rapid evaluation of time-critical conditions such as stroke. Although our results are discussed in the context of two-dimensional MRI, they are directly applicable to higher dimensional imaging and to other sampling patterns in Fourier space.
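The TV-constrained reconstruction can be sketched as an alternation between a TV gradient-descent step and re-imposing the known Fourier samples (a POCS-style projection). The phantom, step size, and random Cartesian sampling mask below are illustrative; the paper samples along spiral trajectories.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed anisotropic total variation of a 2-D image."""
    dv, dh = np.diff(x, axis=0), np.diff(x, axis=1)
    sv = dv / np.sqrt(dv**2 + eps)
    sh = dh / np.sqrt(dh**2 + eps)
    g = np.zeros_like(x)
    g[:-1, :] -= sv; g[1:, :] += sv
    g[:, :-1] -= sh; g[:, 1:] += sh
    return g

def tv_pocs(samples, mask, n_iter=200, step=0.05):
    """Alternate a TV descent step with re-imposing the measured Fourier
    samples -- a POCS-style sketch of TV-constrained reconstruction."""
    x = np.real(np.fft.ifft2(samples))          # zero-filled starting image
    for _ in range(n_iter):
        x = x - step * tv_grad(x)               # reduce total variation
        F = np.fft.fft2(x)
        F[mask] = samples[mask]                 # enforce known Fourier data
        x = np.real(np.fft.ifft2(F))
    return x

rng = np.random.default_rng(8)
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0   # piecewise-constant phantom
mask = rng.random((32, 32)) < 0.4; mask[0, 0] = True  # ~40% random samples + DC
samples = np.fft.fft2(truth) * mask
x_rec = tv_pocs(samples, mask)
err_zf = np.linalg.norm(np.real(np.fft.ifft2(samples)) - truth)
```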
Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns
2015-03-01
The far-field radiation patterns were reconstructed from sparse measurements using the discrete cosine transform (DCT). The algorithm was evaluated using three antennas modeled with the high-frequency structural simulator (HFSS): a half-wave dipole, a Vivaldi antenna, and a pyramidal horn. The 2-D radiation pattern was reconstructed for each antenna using less than 44% of the total number of measurements. The 3-D radiation pattern of a pyramidal horn antenna was reconstructed using only 13% of the total number of measurements.
Sparse representation-based ECG signal enhancement and QRS detection.
Zhou, Yichao; Hu, Xiyuan; Tang, Zhenmin; Ahn, Andrew C
2016-12-01
Electrocardiogram (ECG) signal enhancement and QRS complex detection is a critical preprocessing step for further heart disease analysis and diagnosis. In this paper, we propose a sparse representation-based ECG signal enhancement and QRS complex detection algorithm. Unlike traditional Fourier or wavelet transform-based methods, which use fixed bases, the proposed algorithm models the ECG signal as the superposition of a few inner structures plus additive random noise, where these structures (referred to here as atoms) can be learned from the input signal or a training set. Using these atoms and their properties, we can accurately approximate the original ECG signal and remove the noise and other artifacts such as baseline wandering. Additionally, some of the atoms with larger kurtosis values can be modified and used as an indication function to detect and locate the QRS complexes in the enhanced ECG signals. To demonstrate the robustness and efficacy of the proposed algorithm, we compare it with several state-of-the-art ECG enhancement and QRS detection algorithms using both simulated and real-life ECG recordings.
Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.
Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M
2014-02-10
Schemes for X-ray imaging single protein molecules using new x-ray sources, like x-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to process the data is to use statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009) which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^(-2) photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly-oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single molecule reconstruction problem.
Impact-force sparse reconstruction from highly incomplete and inaccurate measurements
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Chen, Xuefeng
2016-08-01
The classical l2-norm-based regularization methods applied to the force reconstruction inverse problem require that the number of measurements be no less than the number of unknown sources. Taking into account the sparse nature of impact forces in the time domain, we develop a general sparse methodology based on l1-norm minimization for solving the highly underdetermined model of impact-force reconstruction. A monotonic two-step iterative shrinkage/thresholding (MTWIST) algorithm is proposed to find the sparse solution to such an underdetermined model from highly incomplete and inaccurate measurements, which can be problematic for Tikhonov regularization. MTWIST is highly efficient for large-scale ill-posed problems since it mainly involves matrix-vector multiplications without matrix factorization. In the sparsity framework, the proposed sparse regularization method can not only determine the actual impact location from many candidate sources but also simultaneously reconstruct the time history of the impact force. Simulation and experiment, including single-source and two-source impact-force reconstruction, are conducted on a simply supported rectangular plate and a shell structure, respectively, to illustrate the effectiveness and applicability of MTWIST. Both the locations and force time histories of the single-source and two-source cases are accurately reconstructed from a single accelerometer, where a high noise level is considered in simulation and the primary noise in experiment is assumed to be colored noise. Meanwhile, consecutive impact-force reconstruction in a large-scale (greater than 10^4 unknowns) sparsity framework illustrates that MTWIST has advantages in computational efficiency and identification accuracy over Tikhonov regularization.
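MTWIST belongs to the iterative shrinkage/thresholding family. A minimal single-step ISTA sketch for the underdetermined l1 problem conveys the core idea (the sizes and the two-impact source vector below are invented for illustration; this is not the authors' monotonic two-step variant):

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Minimal ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 200)) / np.sqrt(40)        # 40 measurements, 200 unknowns
x_true = np.zeros(200); x_true[[17, 93]] = [2.0, -1.5]  # a two-impact "force" history
x_hat = ista(A, A @ x_true)
print(sorted(int(i) for i in np.argsort(np.abs(x_hat))[-2:]))
```

Note that each iteration costs only two matrix-vector products, which is the property the abstract highlights for large-scale problems.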
Goebel, Juliane; Nensa, Felix; Bomas, Bettina; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai
2016-12-01
Improved real-time cardiac magnetic resonance (CMR) sequences have recently been introduced, but so far only limited practical experience exists. This study aimed at image reconstruction optimization and clinical validation of a new highly accelerated real-time cine SPARSE-SENSE sequence. Left ventricular (LV) short-axis stacks of a real-time free-breathing SPARSE-SENSE sequence with high spatiotemporal resolution and of a standard segmented cine SSFP sequence were acquired at 1.5 T in 11 volunteers and 15 patients. To determine the optimal number of iterations, all volunteers' SPARSE-SENSE images were reconstructed using 10-200 iterations, and contrast ratios, image entropies, and reconstruction times were assessed. Subsequently, the patients' SPARSE-SENSE images were reconstructed with the clinically optimal number of iterations. LV volumetric values were evaluated and compared between both sequences. Sufficient image quality and acceptable reconstruction times were achieved when using 80 iterations. Bland-Altman plots and Passing-Bablok regression showed good agreement for all volumetric parameters. 80 iterations are recommended for iterative SPARSE-SENSE image reconstruction in clinical routine. Real-time cine SPARSE-SENSE yielded volumetric results comparable to the current standard SSFP sequence. Due to its intrinsically short image acquisition times, real-time cine SPARSE-SENSE imaging with iterative image reconstruction seems to be an attractive alternative for LV function analysis. • A highly accelerated real-time CMR sequence using SPARSE-SENSE was evaluated. • SPARSE-SENSE allows free breathing in real-time cardiac cine imaging. • For clinically optimal SPARSE-SENSE image reconstruction, 80 iterations are recommended. • Real-time SPARSE-SENSE imaging yielded volumetric results comparable to the reference SSFP sequence. • The fast SPARSE-SENSE sequence is an attractive alternative to standard SSFP sequences.
Sparse Reconstruction of Electric Fields from Radial Magnetic Data
NASA Astrophysics Data System (ADS)
Yeates, Anthony R.
2017-02-01
Accurate estimates of the horizontal electric field on the Sun’s visible surface are important not only for estimating the Poynting flux of magnetic energy into the corona but also for driving time-dependent magnetohydrodynamic models of the corona. In this paper, a method is developed for estimating the horizontal electric field from a sequence of radial-component magnetic field maps. This problem of inverting Faraday’s law has no unique solution. Unfortunately, the simplest solution (a divergence-free electric field) is not realistically localized in regions of nonzero magnetic field, as would be expected from Ohm’s law. Our new method generates instead a localized solution, using a basis pursuit algorithm to find a sparse solution for the electric field. The method is shown to perform well on test cases where the input magnetic maps are flux balanced in both Cartesian and spherical geometries. However, we show that if the input maps have a significant imbalance of flux—usually arising from data assimilation—then it is not possible to find a localized, realistic, electric field solution. This is the main obstacle to driving coronal models from time sequences of solar surface magnetic maps.
Duarte-Carvajalino, Julio Martin; Sapiro, Guillermo
2009-07-01
Sparse signal representation, analysis, and sensing have received a lot of attention in recent years from the signal processing, optimization, and learning communities. On the one hand, learning overcomplete dictionaries that facilitate a sparse representation of the data as a linear combination of a few atoms from the dictionary leads to state-of-the-art results in image and video restoration and classification. On the other hand, the framework of compressed sensing (CS) has shown that sparse signals can be recovered from far fewer samples than required by the classical Shannon-Nyquist theorem. The samples used in CS correspond to linear projections obtained by a sensing projection matrix. It has been shown that, for example, a nonadaptive random sampling matrix satisfies the fundamental theoretical requirements of CS, enjoying the additional benefit of universality. By contrast, a sensing projection matrix that is optimally designed for a certain class of signals can further improve the reconstruction accuracy or further reduce the necessary number of samples. In this paper, we introduce a framework for the joint design and optimization, from a set of training images, of the nonparametric dictionary and the sensing matrix. We show that this joint optimization outperforms both the use of random sensing matrices and matrices that are optimized independently of the learning of the dictionary. Particular cases of the proposed framework include the optimization of the sensing matrix for a given dictionary as well as the optimization of the dictionary for a predefined sensing environment. The presentation of the framework and its efficient numerical optimization is complemented with numerous examples on classical image datasets.
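The random-sensing-matrix baseline this paper compares against can be sketched with a greedy recovery routine such as orthogonal matching pursuit; the sizes and the support of the test signal below are invented for illustration, and this is not the authors' joint-optimization method:

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: select k atoms, refitting by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1]); x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100)) / np.sqrt(40)     # nonadaptive random sensing matrix
x_true = np.zeros(100); x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, k=3)
print(np.max(np.abs(x_hat - x_true)))
```

With 40 Gaussian measurements of a 3-sparse length-100 signal, the support is recovered with high probability, after which the least-squares refit is exact; an optimized sensing matrix aims to keep this behavior with fewer measurements.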
Novel regularized sparse model for fluorescence molecular tomography reconstruction
NASA Astrophysics Data System (ADS)
Liu, Yuhao; Liu, Jie; An, Yu; Jiang, Shixin
2017-01-01
Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in small animals. FMT has been used in surgical navigation for tumor resection and has many potential applications at the physiological, metabolic, and molecular levels in tissues. Hybrid systems combining FMT with X-ray computed tomography (XCT) have been pursued for accurate detection. However, the result is usually over-smoothed and over-shrunk. In this paper, we propose a region reconstruction method for FMT in which elastic net (E-net) regularization is used to combine the L1-norm and the L2-norm. The E-net penalty adds an L1-norm penalty to an L2-norm penalty, combining the advantages of both regularizers: by employing the two norms simultaneously, it achieves a balance between sparsity and smoothness. To solve the problem effectively, a proximal gradient algorithm is used to accelerate the computation. To evaluate the performance of the proposed E-net method, numerical phantom experiments are conducted. The simulation study shows that the proposed method is accurate and is able to reconstruct images effectively.
Sparse CT reconstruction based on multi-direction anisotropic total variation (MDATV)
2014-01-01
Background Sparse CT (computed tomography), inspired by compressed sensing, introduces prior information of image sparsity into CT reconstruction so as to reduce the number of input projections and thereby the potential threat of incremental X-ray dose to patients' health. Recently, many remarkable works have concentrated on sparse CT reconstruction from sparse (limited-angle or few-view style) projections. In this paper we incorporate more prior information into sparse CT reconstruction to improve performance. It has been known for decades that the given projection directions can provide information about the directions of edges in the restored CT image. ATV (Anisotropic Total Variation), a TV (Total Variation) norm based regularization, can use the prior information of image sparsity and edge direction simultaneously. But ATV can only represent edge information in a few directions and loses much prior information about image edges in other directions. Methods To sufficiently use the prior information of edge directions, a novel MDATV (Multi-Direction Anisotropic Total Variation) is proposed. In this paper we introduce the 2D-IGS (Two Dimensional Image Gradient Space) and combine the coordinate rotation transform with 2D-IGS to represent edge information in multiple directions. By incorporating this multi-direction representation into the ATV norm we obtain the MDATV regularization. To solve the optimization problem based on the MDATV regularization, a novel ART (algebraic reconstruction technique) + MDATV scheme is outlined, and NESTA (NESTerov's Algorithm) is proposed to replace GD (Gradient Descent) for minimizing the TV-based regularization. Results The numerical and real data experiments demonstrate that MDATV based iterative reconstruction improved the quality of the restored image, and that NESTA is more suitable than GD for minimization of TV-based regularization. Conclusions MDATV regularization can sufficiently use the prior information of image sparsity and edge direction.
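To make the directional idea concrete, here is a toy multi-direction anisotropic TV that sums absolute differences along a set of pixel offsets (a crude stand-in for the paper's rotated 2D-IGS representation; the offsets and image are illustrative only):

```python
import numpy as np

def mdatv(img, offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Toy multi-direction anisotropic TV: L1 norm of finite differences
    along each pixel offset (np.roll wraps at the borders; fine for a toy)."""
    return sum(np.abs(img - np.roll(img, off, axis=(0, 1))).sum() for off in offsets)

img = np.zeros((16, 16)); img[:, 8:] = 1.0        # image with a single vertical edge
axis_only = mdatv(img, offsets=((0, 1), (1, 0)))  # classical two-direction ATV
multi = mdatv(img)                                # adds the two diagonal directions
print(axis_only, multi)
```

The diagonal offsets register edge energy that the axis-aligned pair alone cannot distinguish, which is the extra directional information MDATV feeds into the regularizer.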
Sparse CT reconstruction based on multi-direction anisotropic total variation (MDATV).
Li, Hongxiao; Chen, Xiaodong; Wang, Yi; Zhou, Zhongxing; Zhu, Qingzhen; Yu, Daoyin
2014-07-04
Sparse CT (computed tomography), inspired by compressed sensing, introduces prior information of image sparsity into CT reconstruction so as to reduce the number of input projections and thereby the potential threat of incremental X-ray dose to patients' health. Recently, many remarkable works have concentrated on sparse CT reconstruction from sparse (limited-angle or few-view style) projections. In this paper we incorporate more prior information into sparse CT reconstruction to improve performance. It has been known for decades that the given projection directions can provide information about the directions of edges in the restored CT image. ATV (Anisotropic Total Variation), a TV (Total Variation) norm based regularization, can use the prior information of image sparsity and edge direction simultaneously. But ATV can only represent edge information in a few directions and loses much prior information about image edges in other directions. To sufficiently use the prior information of edge directions, a novel MDATV (Multi-Direction Anisotropic Total Variation) is proposed. In this paper we introduce the 2D-IGS (Two Dimensional Image Gradient Space) and combine the coordinate rotation transform with 2D-IGS to represent edge information in multiple directions. By incorporating this multi-direction representation into the ATV norm we obtain the MDATV regularization. To solve the optimization problem based on the MDATV regularization, a novel ART (algebraic reconstruction technique) + MDATV scheme is outlined, and NESTA (NESTerov's Algorithm) is proposed to replace GD (Gradient Descent) for minimizing the TV-based regularization. The numerical and real data experiments demonstrate that MDATV based iterative reconstruction improved the quality of the restored image, and that NESTA is more suitable than GD for minimization of TV-based regularization. MDATV regularization can sufficiently use the prior information of image sparsity and edge direction.
NASA Astrophysics Data System (ADS)
Tipton, John; Hooten, Mevin; Goring, Simon
2017-01-01
Scientific records of temperature and precipitation have been kept for several hundred years, but for many areas, only a shorter record exists. To understand climate change, there is a need for rigorous statistical reconstructions of the paleoclimate using proxy data. Paleoclimate proxy data are often sparse, noisy, indirect measurements of the climate process of interest, making each proxy uniquely challenging to model statistically. We reconstruct spatially explicit temperature surfaces from sparse and noisy measurements recorded at historical United States military forts and other observer stations from 1820 to 1894. One common method for reconstructing the paleoclimate from proxy data is principal component regression (PCR). With PCR, one learns a statistical relationship between the paleoclimate proxy data and a set of climate observations that are used as patterns for potential reconstruction scenarios. We explore PCR in a Bayesian hierarchical framework, extending classical PCR in a variety of ways. First, we model the latent principal components probabilistically, accounting for measurement error in the observational data. Next, we extend our method to better accommodate outliers that occur in the proxy data. Finally, we explore alternatives to the truncation of lower-order principal components using different regularization techniques. One fundamental challenge in paleoclimate reconstruction efforts is the lack of out-of-sample data for predictive validation. Cross-validation is of potential value, but is computationally expensive and potentially sensitive to outliers in sparse data scenarios. To overcome the limitations that a lack of out-of-sample records presents, we test our methods using a simulation study, applying proper scoring rules including a computationally efficient approximation to leave-one-out cross-validation using the log score to validate model performance. The result of our analysis is a spatially explicit reconstruction of spatio
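Classical PCR, the baseline the authors extend, fits the proxy against the leading principal component scores of the observed field. A few-line sketch with a synthetic stand-in field (sizes, rank, and noise level all invented) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(7)
# synthetic "climate field": 50 years x 20 stations, driven by 2 latent patterns
scores = rng.standard_normal((50, 2))
patterns = rng.standard_normal((2, 20))
field = scores @ patterns
proxy = field[:, 0] + 0.1 * rng.standard_normal(50)    # noisy, indirect proxy record

# principal component regression: regress the proxy on the leading PC scores
centered = field - field.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ Vt[:2].T                              # leading two PC scores
design = np.column_stack([np.ones(len(pcs)), pcs])
coef, *_ = np.linalg.lstsq(design, proxy, rcond=None)
pred = design @ coef
print(np.corrcoef(pred, proxy)[0, 1])
```

The paper's Bayesian extension replaces this point estimate with probabilistic latent components, robust error models for proxy outliers, and regularization in place of hard truncation at two components.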
Sparse Reconstruction of the Merging A520 Cluster System
NASA Astrophysics Data System (ADS)
Peel, Austin; Lanusse, François; Starck, Jean-Luc
2017-09-01
Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events in which the bulk of the baryonic matter becomes displaced from the dark matter halos of the colliding subclusters. Since all mass bends light, weak gravitational lensing is a primary tool to study the total mass distribution in such systems. Combined with X-ray and optical analyses, mass maps of cluster mergers reconstructed from weak-lensing observations have been used to constrain the self-interaction cross-section of dark matter. The dynamically complex Abell 520 (A520) cluster is an exceptional case, even among merging systems: multi-wavelength observations have revealed a surprising high mass-to-light concentration of dark mass, the interpretation of which is difficult under the standard assumption of effectively collisionless dark matter. We revisit A520 using a new sparsity-based mass-mapping algorithm to independently assess the presence of the puzzling dark core. We obtain high-resolution mass reconstructions from two separate galaxy shape catalogs derived from Hubble Space Telescope observations of the system. Our mass maps agree well overall with the results of previous studies, but we find important differences. In particular, although we are able to identify the dark core at a certain level in both data sets, it is at much lower significance than has been reported before using the same data. As we cannot confirm the detection in our analysis, we do not consider A520 as posing a significant challenge to the collisionless dark matter scenario.
Spatial Compressive Sensing for Strain Data Reconstruction from Sparse Sensors
2014-10-01
the novel theory of compressive sensing and principles of continuum mechanics. Compressive sensing, also known as compressed sensing, refers to the...asserts that certain signals or images can be recovered from what was previously believed to be a highly incomplete measurement. Compressed sensing ...matrix completion problem is quite similar to compressive sensing, as a similar heuristic approach, convex relaxation, is used to recover
Precise RFID localization in impaired environment through sparse signal recovery
NASA Astrophysics Data System (ADS)
Subedi, Saurav; Zhang, Yimin D.; Amin, Moeness G.
2013-05-01
Radio frequency identification (RFID) is a rapidly developing wireless communication technology for electronically identifying, locating, and tracking products, assets, and personnel. RFID has become one of the most important means to construct real-time locating systems (RTLS) that track and identify the location of objects in real time using simple, inexpensive tags and readers. The applicability and usefulness of RTLS techniques depend on their achievable accuracy. In particular, when multilateration-based localization techniques are exploited, the achievable accuracy primarily relies on the precision of the range estimates between a reader and the tags. Such range information can be obtained by using the received signal strength indicator (RSSI) and/or the phase difference of arrival (PDOA). In both cases, however, the accuracy is significantly compromised when the operation environment is impaired. In particular, multipath propagation significantly affects the measurement accuracy of both RSSI and phase information. In addition, because RFID systems are typically operated in short distances, RSSI and phase measurements are also coupled with the reader and tag antenna patterns, making accurate RFID localization very complicated and challenging. In this paper, we develop new methods to localize RFID tags or readers by exploiting sparse signal recovery techniques. The proposed method allows the channel environment and antenna patterns to be taken into account and be properly compensated at a low computational cost. As such, the proposed technique yields superior performance in challenging operation environments with the above-mentioned impairments.
Sparse reconstruction of liver cirrhosis from monocular mini-laparoscopic sequences
NASA Astrophysics Data System (ADS)
Marcinczak, Jan Marek; Painer, Sven; Grigat, Rolf-Rainer
2015-03-01
Mini-laparoscopy is a technique which is used by clinicians to inspect the liver surface with ultra-thin laparoscopes. However, so far no quantitative measures based on mini-laparoscopic sequences are possible. This paper presents a Structure from Motion (SfM) based methodology to do 3D reconstruction of liver cirrhosis from mini-laparoscopic videos. The approach combines state-of-the-art tracking, pose estimation, outlier rejection and global optimization to obtain a sparse reconstruction of the cirrhotic liver surface. Specular reflection segmentation is included into the reconstruction framework to increase the robustness of the reconstruction. The presented approach is evaluated on 15 endoscopic sequences using three cirrhotic liver phantoms. The median reconstruction accuracy ranges from 0.3 mm to 1 mm.
Direct reconstruction of enhanced signal in computed tomography perfusion
NASA Astrophysics Data System (ADS)
Li, Bin; Lyu, Qingwen; Ma, Jianhua; Wang, Jing
2016-04-01
High imaging dose has been a concern in computed tomography perfusion (CTP) as repeated scans are performed at the same location of a patient. On the other hand, signal changes only occur at limited regions in CT acquired at different time points. In this work, we propose a new reconstruction strategy by effectively utilizing the initial phase high-quality CT to reconstruct the later phase CT acquired with a low-dose protocol. In the proposed strategy, initial high-quality CT is considered as a base image and enhanced signal (ES) is reconstructed directly by minimizing the penalized weighted least-square (PWLS) criterion. The proposed PWLS-ES strategy converts the conventional CT reconstruction into a sparse signal reconstruction problem. Digital and anthropomorphic phantom studies were performed to evaluate the performance of the proposed PWLS-ES strategy. Both phantom studies show that the proposed PWLS-ES method outperforms the standard iterative CT reconstruction algorithm based on the same PWLS criterion according to various quantitative metrics including root mean squared error (RMSE) and the universal quality index (UQI).
Compressed sensing techniques for arbitrary frequency-sparse signals in structural health monitoring
NASA Astrophysics Data System (ADS)
Duan, Zhongdong; Kang, Jie
2014-03-01
Structural health monitoring requires collecting large numbers of samples, and sometimes high-frequency vibration data, for detecting damage in structures. The expense of collecting these data is a big challenge. The recently proposed compressive sensing method enables a potentially large reduction in sampling and is one way to meet this challenge. Compressed sensing theory requires sparse signals, meaning signals that can be well approximated as a linear combination of just a few elements from a known discrete basis or dictionary. A structural vibration signal can be decomposed into a linear combination of a few sinusoids in the DFT domain. Unfortunately, in most cases the frequencies of the decomposed sinusoids are arbitrary, and may not lie precisely on the discrete DFT basis or dictionary. In this case the signal loses its sparsity, and recovery performance degrades significantly. One way to improve the sparsity of the signal is to increase the size of the dictionary, but there is a tradeoff: a closely spaced DFT dictionary increases the coherence between elements in the dictionary, which in turn decreases recovery performance. In this work we introduce three approaches for recovering signals with arbitrary frequencies. The first approach is continuous basis pursuit (CBP), which reconstructs a continuous basis by introducing interpolation steps. The second approach is semidefinite programming (SDP), which searches for the sparsest signal on a continuous basis without building any dictionary, enabling very high recovery precision. The third approach is spectral iterative hard thresholding (SIHT), which is based on a redundant DFT dictionary and a restricted union-of-subspaces signal model that inhibits closely spaced sinusoids. The three approaches are studied by numerical simulation. The structural vibration signal is simulated by a finite element model, and compressed measurements of the signal are taken to perform
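The on-grid versus off-grid distinction is easy to demonstrate numerically (the signal length and the 1% threshold below are arbitrary choices for the demo):

```python
import numpy as np

n = 256
t = np.arange(n)
on_grid = np.cos(2 * np.pi * 16.0 / n * t)    # frequency exactly on a DFT bin
off_grid = np.cos(2 * np.pi * 16.5 / n * t)   # frequency halfway between bins

def n_big(signal, frac=0.01):
    """Number of DFT bins holding more than `frac` of the peak magnitude."""
    mag = np.abs(np.fft.rfft(signal))
    return int(np.sum(mag > frac * mag.max()))

print(n_big(on_grid), n_big(off_grid))
```

The on-grid sinusoid occupies a single bin, while the half-bin offset spreads leakage across dozens of bins, which is exactly the loss of sparsity the three recovery approaches are designed to handle.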
Sparse asynchronous cortical generators can produce measurable scalp EEG signals.
von Ellenrieder, Nicolás; Dan, Jonathan; Frauscher, Birgit; Gotman, Jean
2016-09-01
We investigate to what degree the synchronous activation of a smooth patch of cortex is necessary for observing EEG scalp activity. We perform extensive simulations to compare the activity generated on the scalp by different models of cortical activation, based on intracranial EEG findings reported in the literature. The spatial activation is modeled as a cortical patch of constant activation or as random sets of small generators (0.1 to 3 cm^2 each) concentrated in a cortical region. Temporal activation models for the generation of oscillatory activity have either equal phase or random phase across the cortical patches. The results show that smooth or random spatial activation profiles produce scalp electric potential distributions with the same shape. Also, in the generation of oscillatory activity, multiple cortical generators with random phase produce scalp activity attenuated on average only 2 to 4 times compared to generators with equal phase. Sparse asynchronous cortical generators can produce measurable scalp EEG. This is a possible explanation for seemingly paradoxical observations of simultaneous disorganized intracranial activity and scalp EEG signals. Thus, the standard interpretation of scalp EEG might constitute an oversimplification of the underlying brain activity.
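The core phase effect is easy to reproduce with idealized point oscillators. Note the phases below are fully independent, so the attenuation comes out near sqrt(N) rather than the 2-4x the paper's simulations report for spatially clustered, partially coherent generators; all parameters are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(6)
n_gen = 100                                        # number of small cortical generators
t = np.linspace(0.0, 1.0, 1000)
freq = 10.0                                        # alpha-band oscillation, Hz

def scalp(phases):
    """Superpose unit-amplitude oscillators with the given phases (toy 'scalp' sum)."""
    return np.sin(2 * np.pi * freq * t[None, :] + phases[:, None]).sum(axis=0)

eq_phase = np.zeros(n_gen)                         # perfectly synchronous generators
rand_phase = rng.uniform(0.0, 2.0 * np.pi, n_gen)  # fully asynchronous generators
ratio = scalp(eq_phase).std() / scalp(rand_phase).std()
print(round(ratio, 1))
```

The coherent sum scales with N while the incoherent sum scales roughly with sqrt(N), so the random-phase signal is attenuated but far from canceled, consistent with the paper's claim that asynchronous generators remain measurable.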
Su, Hai; Xing, Fuyong; Kong, Xiangfei; Xie, Yuanpu; Zhang, Shaoting; Yang, Lin
2016-01-01
Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered backgrounds. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles the shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and sDAE with structured labels for cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods. PMID:27796013
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
Improved Compressed Sensing-Based Algorithm for Sparse-View CT Image Reconstruction
Babyn, Paul; Cooper, David; Pratt, Isaac
2013-01-01
In computed tomography (CT), there are many situations where reconstruction has to be performed with sparse-view data. In sparse-view CT imaging, strong streak artifacts may appear in conventionally reconstructed images due to the limited sampling rate, compromising image quality. The compressed sensing (CS) algorithm has shown potential to accurately recover images from highly undersampled data. In the past few years, total-variation- (TV-) based compressed sensing algorithms have been proposed to suppress the streak artifact in CT image reconstruction. In this paper, we propose an efficient compressed sensing-based algorithm for CT image reconstruction from few-view data in which we simultaneously minimize three terms: the ℓ1 norm, the total variation, and a least-squares measure. The main feature of our algorithm is the use of two sparsity transforms: the discrete wavelet transform and the discrete gradient transform. Experiments have been conducted using simulated phantoms and clinical data to evaluate the performance of the proposed algorithm. The results using the proposed scheme show much smaller streak artifacts and reconstruction errors than other conventional methods. PMID:23606898
Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn
2016-01-01
Diffuse optical tomography (DOT) is a relatively low-cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making the reconstruction of high-quality images a critical challenge. Because of the nature of sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria for optimizing the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low-resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up. PMID:26940661
Wideband pulse reconstruction from sparse spectral-amplitude data. Final report
Casey, K.F.; Baertlein, B.A.
1993-01-01
Methods are investigated for reconstructing a wideband time-domain pulse waveform from a sparse set of samples of its frequency-domain amplitude spectrum. Approaches are outlined which comprise various means of spectrum interpolation followed by phase retrieval. Methods for phase retrieval are reviewed, and it is concluded that useful results can only be obtained by assuming a minimum-phase solution. Two reconstruction algorithms are proposed. The first is based upon the use of Cauchy's technique for estimating the amplitude spectrum in the form of a ratio of polynomials. The second uses B-spline interpolation among the sampled values to reconstruct this spectrum. Reconstruction of the time-domain waveform via inverse Fourier transformation follows, based on the assumption of minimum phase. Representative numerical results are given.
Zhang, Wanhong; Zhou, Tong
2015-01-01
Motivation: Identifying gene regulatory networks (GRNs), which consist of a large number of interacting units, has become a problem of paramount importance in systems biology. Many situations exist in which causal interacting relationships among these units must be reconstructed from measured expression data and other a priori information. Though numerous classical methods have been developed to unravel the interactions of GRNs, these methods suffer from either high computational complexity or low estimation accuracy. Note that great similarities exist between the identification of genes that directly regulate a specific gene and sparse vector reconstruction, which often concerns determining the number, location and magnitude of the nonzero entries of an unknown vector by solving an underdetermined system of linear equations y = Φx. Based on these similarities, we propose a novel sparse-reconstruction framework to identify the structure of a GRN, so as to increase the accuracy of causal regulation estimates and to reduce their computational complexity. Results: In this paper, a sparse reconstruction framework is proposed on the basis of steady-state experiment data to identify GRN structure. Different from traditional methods, the adopted approach is well suited to the large-scale underdetermined problem of inferring a sparse vector. We investigate how to combine noisy steady-state experiment data and a sparse reconstruction algorithm to identify causal relationships. The efficiency of this method is tested on an artificial linear network, a mitogen-activated protein kinase (MAPK) pathway network and the in silico networks of the DREAM challenges. The performance of the suggested approach is compared with two state-of-the-art algorithms, the widely adopted total least-squares (TLS) method and the available results of the DREAM project. Results show that, with a lower computational cost, the proposed method can
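The core computational step, recovering a sparse vector from the underdetermined system y = Φx, can be sketched with plain iterative soft thresholding (ISTA). The problem sizes, sparsity level, and regularization weight below are invented, and this generic solver stands in for the paper's specific algorithm:

```python
import numpy as np

def soft(v, t):
    """Soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Phi, y, lam=0.05, n_iter=500):
    """ISTA for min 0.5 * ||Phi x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft(x - Phi.T @ (Phi @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(1)
n_genes, n_exps, k = 100, 40, 5               # 100 candidate regulators, 40 measurements
Phi = rng.standard_normal((n_exps, n_genes)) / np.sqrt(n_exps)
x_true = np.zeros(n_genes)
x_true[rng.choice(n_genes, k, replace=False)] = 1.0 + rng.random(k)  # regulation strengths
y = Phi @ x_true                              # noiseless steady-state data
x_hat = ista(Phi, y)
print(sorted(np.flatnonzero(np.abs(x_hat) > 0.1)))
```

The nonzero pattern of `x_hat` then indicates which candidate regulators directly act on the gene in question.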
Mahrous, Hesham; Ward, Rabab
2016-01-01
This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning-BO (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multichannel case, known as the multi-measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multichannel signals by whitening the model in the temporal and the spatial domains. Our proposed method represents the multi-channel signal data as a vector that is constructed in a specific way, so that it has a better block-sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multichannel EEG signals, we modify the parameters of the BSBL-BO algorithm, so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied to the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and also MMV methods. It also shows significantly lower compression errors, even at high compression ratios such as 10:1, on three different datasets. PMID:26861335
Incorporation of noise and prior images in penalized-likelihood reconstruction of sparse data
NASA Astrophysics Data System (ADS)
Ding, Yifu; Siewerdsen, Jeffrey H.; Stayman, J. Webster
2012-03-01
Many imaging scenarios involve a sequence of tomographic data acquisitions to monitor change over time - e.g., longitudinal studies of disease progression (tumor surveillance) and intraoperative imaging of tissue changes during intervention. The radiation dose imparted by these repeat acquisitions presents a concern. Because such image sequences share a great deal of information between acquisitions, using prior image information from baseline scans in the reconstruction of subsequent scans can relax the data-fidelity requirements of follow-up acquisitions. For example, sparse data acquisitions, including angular undersampling and limited-angle tomography, limit exposure by reducing the number of acquired projections. Various approaches such as prior-image constrained compressed sensing (PICCS) have successfully incorporated prior images in the reconstruction of such sparse data. Another technique to limit radiation dose is to reduce the x-ray fluence per projection. However, many methods for reconstruction of sparse data do not include a noise model accounting for stochastic fluctuations in such low-dose measurements and cannot balance the differing information content of various measurements. In this paper, we present a prior-image, penalized-likelihood estimator (PI-PLE) that utilizes prior image information, compressed-sensing penalties, and a Poisson noise model for measurements. The approach is applied to a lung nodule surveillance scenario with sparse data acquired at low exposures to illustrate performance under cases of extremely limited data fidelity. The results show that PI-PLE is able to greatly reduce streak artifacts that otherwise arise from photon starvation, and to maintain high-resolution anatomical features, whereas traditional approaches are subject to streak artifacts or lower-resolution images.
NASA Astrophysics Data System (ADS)
Zhang, Pan; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Wei, Yajie
2017-04-01
Full waveform inversion (FWI) reconstructs the underground velocity structures by minimizing the data residual between calculated wavefields and observed wavefields. Conventional FWI usually uses local optimization algorithms, which leads to a strong dependency on the initial velocity model. The objective function corresponding to low-frequency data components has fewer local minima. Reconstructing low-frequency information from recorded seismic data and using it in FWI can reduce cycle skipping and thus weaken the dependency of the inversion process on the initial model. In this paper, based on the conventional frequency down-shifting method, we propose a sparse blind deconvolution-convolution low-frequency data reconstruction method, which can simultaneously update the wavelet and reconstruct the low-frequency components. First, we extract the subsurface reflection impulse responses (SRIR) by solving an L1-norm sparse-constraint problem using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). Then we test the accuracy of our algorithm and discuss the effect of wavelet errors and noise on the reconstruction result. When the wavelet is inaccurate, we update the amplitude of the wavelet by alternately inverting L1-norm constraint and Tikhonov regularization problems, and correct the time-shift error by cross-correlating the direct waves. After that, we can obtain the accurate wavelet and SRIR simultaneously. Then, using the reconstructed data successively as observed data, and combining it with dynamic random sources and layer-stripping methods, we propose a new strategy for fast multiscale FWI. We test our method with numerical examples in several cases, including blended-acquisition cases. The results show that it has good noise robustness and that it can reconstruct valid low-frequency components when the observed data lack low-frequency information. The example using an inaccurate wavelet shows that the blind deconvolution-convolution algorithm is able to obtain accurate
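The FISTA step used to extract a sparse reflection impulse response from a convolution d = w * r can be sketched as follows; the toy wavelet, trace length, and regularization weight are invented for illustration and do not come from the paper:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def conv_matrix(w, n):
    """Columns are time-shifted copies of the wavelet w (full convolution)."""
    A = np.zeros((n + len(w) - 1, n))
    for k in range(n):
        A[k:k + len(w), k] = w
    return A

def fista(A, d, lam=0.02, n_iter=300):
    """FISTA for min 0.5 * ||A r - d||^2 + lam * ||r||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - d) / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

n = 128
w = np.array([0.2, 0.6, 1.0, 0.6, 0.2])            # toy band-limited wavelet
r_true = np.zeros(n); r_true[[20, 50, 90]] = [1.0, -0.8, 0.6]
A = conv_matrix(w, n)
d = A @ r_true                                      # synthetic recorded trace
r_hat = fista(A, d)
print(np.flatnonzero(np.abs(r_hat) > 0.3))
```

Once `r_hat` is recovered, convolving it with a low-frequency wavelet would synthesize the low-frequency components, mirroring the deconvolution-convolution idea.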
Advances in thermographic signal reconstruction
NASA Astrophysics Data System (ADS)
Shepard, Steven M.; Frendberg Beemer, Maria
2015-05-01
Since its introduction in 2001, the Thermographic Signal Reconstruction (TSR) method has emerged as one of the most widely used methods for enhancement and analysis of thermographic sequences, with applications extending beyond industrial NDT into biomedical research, art restoration and botany. The basic TSR process, in which a noise reduced replica of each pixel time history is created, yields improvement over unprocessed image data that is sufficient for many applications. However, examination of the resulting logarithmic time derivatives of each TSR pixel replica provides significant insight into the physical mechanisms underlying the active thermography process. The deterministic and invariant properties of the derivatives have enabled the successful implementation of automated defect recognition and measurement systems. Unlike most approaches to analysis of thermography data, TSR does not depend on flaw-background contrast, so that it can also be applied to characterization and measurement of thermal properties of flaw-free samples. We present a summary of recent advances in TSR, a review of the underlying theory and examples of its implementation.
Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging
Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.
2017-01-01
Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
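The non-adaptive L+S baseline mentioned in the abstract can be sketched by alternating singular-value thresholding for the low-rank part with entrywise soft thresholding for the sparse part. This assumes fully sampled data and fixed, invented thresholds; LASSI itself additionally learns a spatiotemporal dictionary for the sparse component:

```python
import numpy as np

def svt(M, t):
    """Singular-value soft thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def soft(M, t):
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

rng = np.random.default_rng(2)
# rank-1 "background" over 20 frames plus two transient dynamic entries
background = np.outer(rng.random(64), np.ones(20))
S_true = np.zeros((64, 20)); S_true[10, 5] = S_true[30, 12] = 3.0
X = background + S_true            # Casorati matrix of the dynamic sequence

L_hat = np.zeros_like(X); S_hat = np.zeros_like(X)
for _ in range(50):                # block-coordinate proximal updates
    L_hat = svt(X - S_hat, 1.0)    # low-rank update
    S_hat = soft(X - L_hat, 0.3)   # sparse update
print(np.linalg.matrix_rank(L_hat), np.flatnonzero(np.abs(S_hat).ravel() > 1.0))
```

The transient entries end up in `S_hat`, while the static background is absorbed by the low-rank component `L_hat`.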
SparseTracer: the Reconstruction of Discontinuous Neuronal Morphology in Noisy Images.
Li, Shiwei; Zhou, Hang; Quan, Tingwei; Li, Jing; Li, Yuxin; Li, Anan; Luo, Qingming; Gong, Hui; Zeng, Shaoqun
2017-04-01
Digital reconstruction of a single neuron occupies an important position in computational neuroscience. Although many novel methods have been proposed, recent advances in molecular labeling and imaging systems allow for the production of large and complicated neuronal datasets, which pose many challenges for neuron reconstruction, especially when discontinuous neuronal morphology appears in a strong-noise environment. Here, we develop a new pipeline to address this challenge. Our pipeline is based on two methods: one is the region-to-region connection (RRC) method for detecting the initial part of a neurite, which can effectively gather local cues, i.e., avoid whole-image analysis, and thus boosts computational efficiency; the other is a constrained principal-curves method for completing the neurite reconstruction, which uses the past reconstruction information of a neurite for the current reconstruction and is thus suitable for tracing discontinuous neurites. We investigate the reconstruction performance of our pipeline and some of the best state-of-the-art algorithms on experimental datasets, indicating the superiority of our method in reconstructing sparsely distributed neurons with discontinuous neuronal morphologies in noisy environments. We show the strong ability of our pipeline in dealing with large-scale image datasets. We validate its effectiveness on various kinds of image stacks, including those from the DIADEM challenge and the BigNeuron project.
Cloud Removal from SENTINEL-2 Image Time Series Through Sparse Reconstruction from Random Samples
NASA Astrophysics Data System (ADS)
Cerra, D.; Bieniarz, J.; Müller, R.; Reinartz, P.
2016-06-01
In this paper we propose a cloud removal algorithm for scenes within a Sentinel-2 satellite image time series, based on synthetisation of the affected areas via sparse reconstruction. For this purpose, a cloud and cloud-shadow mask must be given. With respect to previous works, the process has a higher degree of automation. Several dictionaries, on the basis of which the data are reconstructed, are selected randomly from cloud-free areas around the cloud, and for each pixel the dictionary yielding the smallest reconstruction error in non-corrupted images is chosen for the restoration. The values below a cloudy area are therefore estimated by observing the spectral evolution in time of the non-corrupted pixels around it. The proposed restoration algorithm is fast and efficient, requires minimal supervision and yields results with low overall radiometric and spectral distortions.
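The dictionary-selection rule can be illustrated on a toy pixel time series: each candidate dictionary is fit to the cloudy pixel on its cloud-free dates, and the dictionary with the smallest fitting error synthesizes the value under the cloud. The seasonal signal, sizes, and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(12)                                   # 12 acquisition dates
season = np.sin(2 * np.pi * t / 12)
target = 0.3 + 0.5 * season                         # true series of the cloudy pixel
clear = np.ones(12, dtype=bool); clear[7] = False   # date 7 is under the cloud

def make_dictionary(n_atoms=4):
    """Cloud-free neighbours: scaled/shifted copies of the season plus noise."""
    return np.stack([rng.random() + rng.random() * season +
                     0.01 * rng.standard_normal(12) for _ in range(n_atoms)])

dictionaries = [make_dictionary() for _ in range(5)]
errors, estimates = [], []
for D in dictionaries:
    # fit the pixel on the non-corrupted dates only
    coef, *_ = np.linalg.lstsq(D[:, clear].T, target[clear], rcond=None)
    errors.append(np.linalg.norm(D[:, clear].T @ coef - target[clear]))
    estimates.append(D[:, 7] @ coef)                # synthesize the cloudy date
best = int(np.argmin(errors))
print(best, float(estimates[best]), float(target[7]))
```

Because the neighbours share the pixel's temporal evolution, the best-fitting dictionary extrapolates the cloudy date closely.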
Some Factors Affecting Time Reversal Signal Reconstruction
NASA Astrophysics Data System (ADS)
Prevorovsky, Z.; Kober, J.
Time reversal (TR) ultrasonic signal processing is now broadly used in a variety of applications, including the NDE/NDT field. TR processing is used, e.g., for S/N ratio enhancement, reciprocal transducer calibration, and the location, identification, and reconstruction of unknown sources. The TR procedure in conjunction with nonlinear elastic wave spectroscopy (NEWS) is also useful for sensitive detection of defects (the presence of nonlinearity). To extend the possibilities of the acoustic emission (AE) method, we proposed the use of the TR signal reconstruction ability to transfer detected AE signals from a structure with an AE source onto a similar remote model of the structure (real or numerical), which allows easier source analysis under laboratory conditions. Though TR signal reconstruction is robust with respect to system variations, some small differences and changes influence the space-time TR focus and reconstruction quality. Experiments were performed on metallic parts of both simple and complicated geometry to examine the effects of small changes in temperature or configuration (body shape, dimensions, transducer placement, etc.) on TR reconstruction quality. Results of the experiments are discussed in this paper. Considering the mathematical similarity between TR and Coda Wave Interferometry (CWI), prediction of signal reconstruction quality was possible using only the direct propagation. The results show how some factors, such as temperature or stress changes, may deteriorate the TR reconstruction quality. It is also shown that the reconstruction quality is sometimes not enhanced by using a longer TR signal (the S/N ratio may decrease).
Initial experience in primal-dual optimization reconstruction from sparse-PET patient data
NASA Astrophysics Data System (ADS)
Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan
2016-03-01
There is interest in designing a PET system with a reduced number of detectors due to cost concerns, while not significantly compromising the PET utility. Recently developed optimization-based algorithms, which have demonstrated potential clinical utility in image reconstruction from sparse CT data, may be used to enable the design of such innovative PET systems. In this work, we investigate a PET configuration with a reduced number of detectors, and carry out preliminary studies on patient data collected by use of such a sparse-PET configuration. We consider an optimization problem combining Kullback-Leibler (KL) data fidelity with an image TV constraint, and solve it by using the primal-dual optimization algorithm developed by Chambolle and Pock. Results show that advanced algorithms may enable the design of innovative PET configurations with a reduced number of detectors, while yielding potential practical PET utility.
Ray, J.; Lee, J.; Yadav, V.; ...
2015-04-29
Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
A new look at signal sparsity paradigm for low-dose computed tomography image reconstruction
NASA Astrophysics Data System (ADS)
Liu, Yan; Zhang, Hao; Moore, William; Liang, Zhengrong
2016-03-01
Signal sparsity in the computed tomography (CT) image reconstruction field is routinely interpreted as sparse angular sampling around the patient body whose image is to be reconstructed. For clinical CT applications, the normal tissues may be known and treated as sparse signals, but the abnormalities inside the body are usually unknown and may not be treated as sparse signals. Furthermore, the locations and structures of abnormalities are also usually unknown, and this uncertainty adds further challenges to interpreting signal sparsity for clinical applications. In this exploratory experimental study, we assume that once the projection data around the continuous body are discretized, regardless of the sampling rate, the image reconstruction of the continuous body from the discretized data becomes a sparse-signal problem. We hypothesize that a dense prior model describing the continuous body is a desirable choice for achieving an optimal solution for a given clinical task. We tested this hypothesis by adapting the total variation-Stokes (TVS) model to describe the continuous body signals and showing the gain over the classic filtered backprojection (FBP) at a wide range of angular sampling rates. For the given clinical task of detecting lung nodules of size 5 mm and larger, a consistent improvement of TVS over FBP in nodule detection was observed by an experienced radiologist from low to high sampling rates. This experimental outcome concurs with the expectation of the TVS model. Further investigation for theoretical insights and task-dependent evaluations is needed.
Optimization-based Image Reconstruction from Sparse-view Data in Offset-Detector CBCT
Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan
2013-01-01
The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed for yielding images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging. PMID:23257068
Multimodal exploitation and sparse reconstruction for guided-wave structural health monitoring
NASA Astrophysics Data System (ADS)
Golato, Andrew; Santhanam, Sridhar; Ahmad, Fauzia; Amin, Moeness G.
2015-05-01
The presence of multiple modes in guided-wave structural health monitoring has been usually considered a nuisance and a variety of methods have been devised to ensure the presence of a single mode. However, valuable information regarding the nature of defects can be gleaned by including multiple modes in image recovery. In this paper, we propose an effective approach for localizing defects in thin plates, which involves inversion of a multimodal Lamb wave based model by means of sparse reconstruction. We consider not only the direct symmetric and anti-symmetric fundamental propagating Lamb modes, but also the defect-spawned mixed modes arising due to asymmetry of defects. Model-based dictionaries for the direct and spawned modes are created, which take into account the associated dispersion and attenuation through the medium. Reconstruction of the region of interest is performed jointly across the multiple modes by employing a group sparse reconstruction approach. Performance validation of the proposed defect localization scheme is provided using simulated data for an aluminum plate.
Ray, J.; Lee, J.; Yadav, V.; ...
2014-08-20
We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
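A toy sketch of the two adaptations described above: stagewise selection by thresholding correlations (in the spirit of StOMP) plus a simple deletion step that enforces non-negativity on the estimate. A random Gaussian matrix stands in for the inversion's footprint/transport operator, and the thresholding rule is a simplified stand-in for StOMP's, not the authors' exact procedure:

```python
import numpy as np

def stomp_nonneg(A, y, n_stages=10, thresh=2.0):
    """Stagewise selection with a deletion step enforcing non-negativity."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    for _ in range(n_stages):
        r = y - A @ x
        if np.linalg.norm(r) < 1e-8 * np.linalg.norm(y):
            break                                    # data already explained
        sigma = np.linalg.norm(r) / np.sqrt(m)       # residual noise scale
        support |= np.abs(A.T @ r) > thresh * sigma  # stagewise selection
        while True:                                  # least squares + deletion
            x = np.zeros(n)
            sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x[np.flatnonzero(support)] = sol
            neg = x < 0
            if not neg.any():
                break
            support &= ~neg                          # drop negative entries
    return x

rng = np.random.default_rng(4)
m, n, k = 60, 200, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)        # stand-in "footprint" matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)  # non-negative fluxes
y = A @ x_true
x_hat = stomp_nonneg(A, y)
print(np.flatnonzero(x_hat > 0.5))
```

Because the deletion step only removes coefficients, non-negativity is imposed without the nonlinearity that a log-transformed field would introduce, matching the motivation in the abstract.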
Total Variation-Stokes Strategy for Sparse-View X-ray CT Image Reconstruction
Liu, Yan; Ma, Jianhua; Lu, Hongbing; Wang, Ke; Zhang, Hao; Moore, William
2014-01-01
Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and/or other constraints, a piecewise-smooth X-ray computed tomography image can be reconstructed from sparse-view projection data. However, due to the piecewise-constant assumption of the TV model, the reconstructed images frequently suffer from blocky or patchy artifacts. To eliminate this drawback, we present a total variation-Stokes-projection onto convex sets (TVS-POCS) reconstruction method in this paper. The TVS model is derived by introducing isophote directions for the purpose of recovering possible missing information in the sparse-view data situation. Thus the desired consistencies along both the normal and the tangent directions are preserved in the resulting images. Compared to previous TV-based image reconstruction algorithms, the consistencies preserved by the TVS-POCS method are expected to generate noticeable gains in terms of eliminating the patchy artifacts and preserving subtle structures. To evaluate the presented TVS-POCS method, both qualitative and quantitative studies were performed using digital-phantom, physical-phantom and clinical-data experiments. The results reveal that the presented method can yield images with several noticeable gains, measured by the universal quality index and the full-width-at-half-maximum merit, as compared to its corresponding TV-based algorithms. In addition, the results further indicate that the TVS-POCS method approaches the gold-standard result of filtered back-projection reconstruction in the full-view data case, as theoretically expected, while most previous iterative methods may fail in the full-view case because of artificial textures in their results. PMID:24595347
Data sinogram sparse reconstruction based on steering kernel regression and filtering strategies
NASA Astrophysics Data System (ADS)
Marquez, Miguel A.; Mojica, Edson; Arguello, Henry
2016-05-01
Computed tomography images are important in many applications, such as medicine. Recently, compressed sensing-based acquisition strategies have been proposed in order to reduce the X-ray radiation dose. However, these methods can lose critical information from the sinogram. In this paper, a reconstruction method for sparse measurements from a sinogram is proposed. The proposed approach takes advantage of the redundancy of similar patches in the sinogram and estimates a target pixel using a weighted average of its neighbors. Simulation results show that the proposed method achieves a gain of up to 2 dB with respect to an l1-minimization algorithm.
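The weighted-neighbor averaging described above can be sketched generically as a kernel-regression estimate. The paper's steering kernels additionally orient the kernel along local gradient structure, which is omitted here; the function and its parameters are illustrative.

```python
import numpy as np

def kernel_weighted_estimate(neighbor_values, distances, h=1.0):
    """Estimate a missing sinogram pixel as a weighted average of its
    neighbors, with Gaussian weights on (patch-)distances. A generic
    kernel-regression sketch; steering kernel regression would further
    adapt the kernel to local gradients."""
    d = np.asarray(distances, dtype=float)
    w = np.exp(-d * d / (2.0 * h * h))       # Gaussian kernel weights
    return float(np.dot(w, neighbor_values) / np.sum(w))
```

By construction, a constant neighborhood is reproduced exactly, and nearby (more similar) neighbors dominate the estimate.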
NASA Astrophysics Data System (ADS)
Zhao, Fengjun; Qu, Xiaochao; Zhang, Xing; Poon, Ting-Chung; Kim, Taegeun; Kim, You Seok; Liang, Jimin
2012-03-01
Optical imaging takes advantage of coherent optics and has advanced the visualization of biological samples. Based on temporal coherence, optical coherence tomography can deliver three-dimensional optical images with superior resolution, but its axial and lateral scanning is a time-consuming process. Optical scanning holography (OSH) is a spatial-coherence technique that integrates a three-dimensional object into a two-dimensional hologram through a two-dimensional optical scanning raster. Its high lateral resolution and fast image acquisition give it great potential for three-dimensional optical imaging, but the prerequisite is an accurate and practical reconstruction algorithm. A conventional method was first adopted to reconstruct sectional images and obtained fine results, but some drawbacks restricted its practicality. An optimization method based on the l2 norm obtained more accurate results than the conventional methods, but the intrinsic smoothness of the l2 norm blurs the reconstruction results. In this paper, a hard-threshold-based sparse inverse imaging algorithm is proposed to improve sectional image reconstruction. The proposed method is characterized by hard-threshold iterations with a shrinkage-threshold strategy, which involves only lightweight vector operations and matrix-vector multiplications. The performance of the proposed method has been validated by a real experiment, which demonstrated a great improvement in reconstruction accuracy at an appropriate computational cost.
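The hard-threshold iteration with matrix-vector multiplications described above can be sketched generically as iterative hard thresholding (IHT) for a linear model. This is not the paper's OSH-specific algorithm or operators; names and parameters are illustrative.

```python
import numpy as np

def iht(A, y, k, n_iter=300):
    """Iterative hard thresholding: a gradient step on ||y - A x||^2
    followed by keeping only the k largest-magnitude entries. A generic
    sketch of hard-threshold shrinkage iteration; each pass costs only
    matrix-vector products and a sort."""
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative step size
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))       # gradient step
        small = np.argsort(np.abs(x))[:-k]       # all but the k largest
        x[small] = 0.0                           # hard threshold
    return x
```

On an easy noiseless instance the iteration drives the data residual toward zero while keeping the iterate exactly k-sparse.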
Texture enhanced optimization-based image reconstruction (TxE-OBIR) from sparse projection views
NASA Astrophysics Data System (ADS)
Xie, Huiqiao; Niu, Tianye; Yang, Yi; Ren, Yi; Tang, Xiangyang
2016-03-01
Optimization-based image reconstruction (OBIR) has been proposed and investigated in recent years to reduce radiation dose in X-ray computed tomography (CT) by acquiring sparse projection views. However, OBIR usually generates images with a quite different noise texture compared to the widely used clinical reconstruction method (i.e., filtered back-projection, FBP). This may make radiologists/physicians less confident when making clinical decisions. Recognizing the fact that the X-ray photon noise statistics are relatively uniform across the detector cells, which is enabled by beam-forming devices (e.g., bowtie filters), we propose and evaluate a novel and practical texture enhancement method in this work. In the texture-enhanced optimization-based image reconstruction (TxE-OBIR), we first reconstruct a texture image with the FBP algorithm from a full set of synthesized noise projection views. Then, the TxE-OBIR image is generated by adding the texture image to the OBIR reconstruction. As confirmed qualitatively by visual inspection and quantitatively by noise power spectrum (NPS) evaluation, the proposed method can produce images with textures that are visually identical to those of the gold-standard FBP images.
Bandlimited graph signal reconstruction by diffusion operator
NASA Astrophysics Data System (ADS)
Yang, Lishan; You, Kangyong; Guo, Wenbin
2016-12-01
Signal processing on graphs extends concepts and methodologies from classical signal processing theory to data indexed by general graphs. For a bandlimited graph signal, the unknown data associated with unsampled vertices can be reconstructed from the sampled data by exploiting the spatial relationships of the graph signal. In this paper, we propose a generalized analytical framework for unsampled graph signals and introduce the concept of a diffusion operator, which consists of local-mean and global-bias diffusion operators. A diffusion-operator-based iterative algorithm is then proposed to reconstruct a bandlimited graph signal from sampled data. In each iteration, the reconstruction residuals associated with the sampled vertices are diffused to all the unsampled vertices to accelerate convergence. We then prove that the proposed reconstruction strategy converges to the original graph signal. The simulation results demonstrate the effectiveness of the proposed strategy under various downsampling patterns, fluctuations of the graph cut-off frequency, classic graph structures, and noisy scenarios.
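Iterative reconstruction of a bandlimited graph signal can be sketched generically as a Papoulis-Gerchberg-style alternating projection: re-impose the known samples, then project back onto the bandlimited subspace. This is not the paper's local-mean/global-bias diffusion operator; the sketch and its parameters are illustrative.

```python
import numpy as np

def reconstruct_bandlimited(L, sample_idx, samples, bandwidth, n_iter=2000):
    """Alternate between (i) enforcing the known samples and (ii)
    projecting onto the span of the first `bandwidth` eigenvectors of
    the graph Laplacian L. Converges to the unique bandlimited signal
    matching the samples when the sampling set is a uniqueness set."""
    w, V = np.linalg.eigh(L)
    B = V[:, :bandwidth]                 # low-graph-frequency basis
    P = B @ B.T                          # projector onto bandlimited span
    x = np.zeros(L.shape[0])
    for _ in range(n_iter):
        x[sample_idx] = samples          # enforce sampled vertices
        x = P @ x                        # enforce bandlimitedness
    return x
```

On a small path graph with a bandwidth-2 signal sampled at every other vertex, the iteration recovers the full signal.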
NASA Astrophysics Data System (ADS)
Mei, Kai; Kopp, Felix K.; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Kirschke, Jan S.; Noël, Peter B.; Baum, Thomas
2017-03-01
The trabecular bone microstructure is key to the early diagnosis and advanced therapy monitoring of osteoporosis. Regularly measuring bone microstructure with conventional multi-detector computed tomography (MDCT) would expose patients to a relatively high radiation dose. One possible solution to reduce exposure is to sample fewer projection angles. This approach can be supported by advanced reconstruction algorithms, with their ability to achieve better image quality under reduced projection angles or high levels of noise. In this work, we investigated the performance of iterative reconstruction from sparsely sampled projection data on trabecular bone microstructure in in-vivo MDCT scans of human spines. The computed MDCT images were evaluated by calculating bone microstructure parameters. We demonstrated that bone microstructure parameters were still computationally distinguishable when half or less of the radiation dose was employed.
NASA Astrophysics Data System (ADS)
Zhang, Zhaohui; Liu, Anran; Lei, Qian
2015-12-01
In this paper, we propose a method for single-image super-resolution (SR). Given a training set produced from a large number of high-/low-resolution image patch pairs, an over-complete joint dictionary is first learned from the paired high- and low-resolution feature spaces using Restricted Boltzmann Machines (RBM). Then, for each low-resolution image patch densely extracted from an up-scaled low-resolution input image, its high-resolution counterpart is reconstructed based on sparse representation. Finally, the reconstructed patches are overlapped to form a large image, and a high-resolution image is achieved by means of iterated residual image compensation. Experimental results verify the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Jóźwiak, Grzegorz
2017-03-01
Scanning probe microscopy (SPM) is a well-known tool for investigating phenomena in objects in the nanometer size range. However, quantitative results are limited by the size and shape of the nanoprobe used in experiments. Blind tip reconstruction (BTR) is a very popular method used to reconstruct an upper bound on the shape of the probe. This method is known to be very sensitive to all kinds of interference in the atomic force microscopy (AFM) image. Because BTR is based on mathematical morphology calculus, interference makes the BTR results biased rather than randomly disrupted. For this reason, the careful choice of methods used for image enhancement and denoising, as well as of the shape of the calibration sample, is very important. In this paper, the results of thorough investigations on the shape of a calibration standard are shown. A novel shape is proposed, and a tool for simulating AFM images of this calibration standard was designed. It is shown that a careful choice of the initial tip allows images of hole structures to be used to blindly reconstruct the shape of a probe. The simulator was used to test the impact of modern filtration algorithms on the BTR process. These techniques are based on sparse approximation with function dictionaries learned from the image itself. Various learning algorithms and parameters were tested to determine the optimal combination for sparse representation. It was observed that a strong reduction of noise does not guarantee a strong reduction in reconstruction errors. It seems that further improvements will be possible through the combination of BTR and a noise reduction procedure.
Sparse-view X-ray CT Reconstruction via Total Generalized Variation Regularization
Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua
2014-01-01
Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise-constant assumption of TV minimization often leads to noticeable patchy artifacts in the reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme that retains image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as “PWLS-TGV” for simplicity. Specifically, TGV regularization utilizes higher-order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, which together improve the image quality of sparse-view projection measurements. An alternating optimization algorithm was then adopted to minimize the associated objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present PWLS-TGV method achieves images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties. PMID:24842150
Su, Hai; Xing, Fuyong; Yang, Lin
2016-01-01
Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer-aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) a sparse reconstruction based approach to split touching cells; and 2) an adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole-slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy, with an F1 score of 0.96. PMID:26812706
Das, Vineeta; Puhan, Niladri B
2017-04-01
Computer-assisted automated exudate detection is crucial for large-scale screening of diabetic retinopathy (DR). This work aims at robust and accurate detection of low-contrast and isolated hard exudates in fundus images. Gabor filtering is first performed to enhance exudate visibility, followed by Tsallis entropy thresholding. The resulting candidate exudate pixel map then allows removal of falsely detected candidates using sparsity-based dictionary learning and classification. Two reconstructive dictionaries are learnt using the intensity, gradient, local energy, and transform-domain features extracted from exudate and background patches of the training fundus images. Then, a sparse representation-based classifier separates the true exudate pixels from false positives using the least reconstruction error. The proposed method is evaluated on the publicly available e-ophtha EX and standard DR database calibration level 1 (DIARETDB1) databases, and high exudate detection performance is achieved. In the e-ophtha EX database, a mean sensitivity of 85.80% and a positive predictive value of 57.93% are found. For the DIARETDB1 database, an area under the curve of 0.954 is obtained.
SD-SEM: sparse-dense correspondence for 3D reconstruction of microscopic samples.
Baghaie, Ahmadreza; Tafti, Ahmad P; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-06-01
Scanning electron microscopy (SEM) imaging has been a principal component of many studies in the biomedical, mechanical, and materials sciences since its emergence. Despite the high resolution of the captured images, they remain two-dimensional (2D). In this work, a novel framework using sparse-dense correspondence is introduced and investigated for 3D reconstruction of stereo SEM images. SEM micrographs of microscopic samples are captured by tilting the specimen stage by a known angle. The pair of SEM micrographs is then rectified using sparse scale-invariant feature transform (SIFT) features/descriptors and a contrario RANSAC for matching-outlier removal, to ensure a gross horizontal displacement between corresponding points. This is followed by dense correspondence estimation using dense SIFT descriptors, employing a factor-graph representation of the energy minimization functional and loopy belief propagation (LBP) as the means of optimization. Given the pixel-by-pixel correspondence and the tilt angle of the specimen stage during the acquisition of micrographs, depth can be recovered. Extensive tests reveal the strength of the proposed method for high-quality reconstruction of microscopic samples.
Fahmy, Ahmed S.; Gabr, Refaat E.; Heberlein, Keith; Hu, Xiaoping P.
2006-01-01
Image reconstruction from nonuniformly sampled spatial frequency domain data is an important problem that arises in computed imaging. Current reconstruction techniques suffer from limitations in their model and implementation. In this paper, we present a new reconstruction method that is based on solving a system of linear equations using an efficient iterative approach. Image pixel intensities are related to the measured frequency domain data through a set of linear equations. Although the system matrix is too dense and large to solve by direct inversion in practice, a simple orthogonal transformation to the rows of this matrix is applied to convert the matrix into a sparse one up to a certain chosen level of energy preservation. The transformed system is subsequently solved using the conjugate gradient method. This method is applied to reconstruct images of a numerical phantom as well as magnetic resonance images from experimental spiral imaging data. The results support the theory and demonstrate that the computational load of this method is similar to that of standard gridding, illustrating its practical utility. PMID:23165034
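The final solve step described above, once the system has been sparsified, is the conjugate gradient method. A minimal self-contained sketch for a symmetric positive-definite system follows; the paper's system matrix and sparsifying row transform are not reproduced here, so this is a generic illustration.

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=100, tol=1e-10):
    """Plain conjugate gradients for a symmetric positive-definite
    system A x = b. Each iteration needs only one matrix-vector
    product, which is what makes a sparse(d) A attractive."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                        # initial residual
    p = r.copy()                         # initial search direction
    rs = float(r @ r)
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / float(p @ Ap)       # exact line search
        x += alpha * p
        r -= alpha * Ap
        rs_new = float(r @ r)
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs) * p        # conjugate direction update
        rs = rs_new
    return x
```

In exact arithmetic CG terminates in at most n iterations for an n-by-n system; in practice it is stopped once the residual norm falls below a tolerance.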
Recovery of sparse translation-invariant signals with continuous basis pursuit.
Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero
2011-10-01
We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality.
Wang, Yubo; Veluvolu, Kalyana C
2017-06-14
It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics, which necessitates the use of time-frequency decomposition methods for analyzing the subtle changes that are often connected to an underlying phenomenon. This paper presents a new approach to analyzing the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model and reformulate it as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy-ratio metric is employed to quantify the spectral performance, and the results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT), and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
Block-sparse reconstruction and imaging for Lamb wave structural health monitoring.
Levine, Ross M; Michaels, Jennifer E
2014-06-01
A frequently investigated paradigm for monitoring the integrity of plate-like structures is a spatially distributed array of piezoelectric transducers, with each array element capable of both transmitting and receiving ultrasonic guided waves. This configuration is relatively inexpensive and allows interrogation of defects from multiple directions over a relatively large area. Typically, full sets of pairwise transducer signals are acquired by exciting one transducer at a time in a round-robin fashion. Many algorithms that operate on such data use differential signals created by subtracting prerecorded baseline signals, leaving only the signal differences introduced by scatterers. Analysis methods such as delay-and-sum imaging operate on these signals to detect and locate point-like defects, but such algorithms have limited performance and suffer when potential scatterers have high directionality or unknown phase-shifting behavior. Signal envelopes are commonly used to mitigate the effects of unknown phase shifts, but this further reduces performance. The block-sparse technique presented here uses a different principle to locate damage: each pixel is assumed to have a corresponding multidimensional linear scattering model, allowing any possible amplitude and phase shift for each transducer pair should a scatterer be present. By assuming that the differential signals are linear combinations of a sparse subset of these models, it is possible to split such signals into location-based components. Results are presented for three experiments using aluminum and composite plates, each with a different type of scatterer. The scatterers in these images have smaller spot sizes than in delay-and-sum imaging, and the images themselves have fewer artifacts. Although a propagation model is required, block-sparse imaging performs well even with a small number of transducers or without access to dispersion curves.
GSM signal reconstruction with MLSE detection
NASA Astrophysics Data System (ADS)
Krysik, Piotr
2010-09-01
In this paper, a method for regeneration of the GSM signal is presented. Unlike other approaches based on suppression of echoes, this one makes use of the encoded digital data transmitted in the GSM interface. The signal is synthetically reconstructed from the digital data stream by comparison with the original signal. In order to keep the bit error rate low in the presence of inter-symbol interference, maximum likelihood sequence estimation (MLSE) is used.
Evaluation of Sparse-view Reconstruction from Flat-panel-detector Cone-beam CT
Bian, J.; Siewerdsen, J. H.; Han, X.; Sidky, E. Y.; Prince, J. L.; Pelizzari, C. A.; Pan, X.
2013-01-01
Flat-panel-detector X-ray cone-beam computed tomography (CBCT) is used in a rapidly increasing host of imaging applications, including image-guided surgery and radiotherapy. The purpose of the work is to investigate and evaluate image reconstruction from data collected at projection views significantly fewer than what is used in current CBCT imaging. Specifically, we carried out imaging experiments by use of a bench-top CBCT system that was designed to mimic imaging conditions in image-guided surgery and radiotherapy; we applied an image reconstruction algorithm based on constrained total-variation (TV)-minimization to data acquired with sparsely sampled view-angles; and we conducted extensive evaluation of algorithm performance. Results of the evaluation studies demonstrate that, depending upon scanning conditions and imaging tasks, algorithms based on constrained TV-minimization can reconstruct images of potential utility from a small fraction of the data used in typical, current CBCT applications. A practical implication of the study is that the optimization of algorithm design and implementation can be exploited for considerably reducing imaging effort and radiation dose in CBCT. PMID:20962368
Xie, Huiqiao; Yang, Yi; Tang, Xiangyang; Niu, Tianye; Ren, Yi
2015-06-15
Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, so that the radiation dose can be substantially reduced while maintaining acceptable image quality. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly sparse views are available, e.g., image-guided radiation therapy, CT perfusion and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly distributed sparse projection views in this work. Methods: The investigation is carried out using the FORBILD and an anthropomorphic head phantom. In the study, 82 views evenly selected from a full (360°) axial CT scan consisting of 984 views form sub-scan I. Another 82 views are selected in a similar manner to form sub-scan II, so that together they constitute a CT scan with sparse (164) views at a 1:6 ratio. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. Taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) error between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: On visual inspection, the optimization-based method substantially outperforms FBP in reconstruction from unevenly distributed views, which is quantitatively verified by the RMS error gauged globally and in ROIs for both the FORBILD and anthropomorphic head phantoms. The RMS error increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom. Conclusion: Optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality.
Sparse representation of whole-brain fMRI signals for identification of functional networks.
Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming
2015-02-01
There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection, based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether and how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole-brain scale. This paper presents a novel, alternative methodology for identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in the spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible across different brains. In general, our method offers a novel, effective and unified solution to multiple fMRI data analysis tasks, including activation detection, de-activation detection, and functional network identification.
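The factorization S ≈ D A described above (signal matrix into a dictionary and a sparse weight matrix) can be sketched with a toy batch alternation: ISTA for the sparse codes, least squares plus column renormalization for the dictionary. This stands in for the online dictionary learning algorithm the paper uses; all names and parameters are illustrative.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def learn_dictionary(S, n_atoms, lam=0.05, n_outer=15, n_inner=25, seed=0):
    """Batch alternation for S ~ D @ A with sparse A. Columns of S are
    signals (e.g., voxel time series); columns of D are learned atoms;
    rows of A would map to networks in the paper's setting."""
    rng = np.random.default_rng(seed)
    t, n = S.shape
    D = rng.standard_normal((t, n_atoms))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    A = np.zeros((n_atoms, n))
    for _ in range(n_outer):
        step = 1.0 / np.linalg.norm(D, 2) ** 2
        for _ in range(n_inner):                  # sparse coding (ISTA)
            A = soft(A + step * (D.T @ (S - D @ A)), step * lam)
        D = S @ np.linalg.pinv(A)                 # dictionary update
        norms = np.linalg.norm(D, axis=0)
        norms[norms == 0] = 1.0
        D /= norms                                # renormalize atoms
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # final re-coding pass
    for _ in range(n_inner):
        A = soft(A + step * (D.T @ (S - D @ A)), step * lam)
    return D, A
```

On synthetic data generated from a small ground-truth dictionary, the alternation fits the signal matrix well even with a modestly over-complete atom count.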
Ning, Lipeng; Laun, Frederik; Gur, Yaniv; DiBella, Edward V R; Deslauriers-Gauthier, Samuel; Megherbi, Thinhinane; Ghosh, Aurobrata; Zucchelli, Mauro; Menegaz, Gloria; Fick, Rutger; St-Jean, Samuel; Paquette, Michael; Aranda, Ramon; Descoteaux, Maxime; Deriche, Rachid; O'Donnell, Lauren; Rathi, Yogesh
2015-12-01
Diffusion magnetic resonance imaging (dMRI) is the modality of choice for investigating in-vivo white matter connectivity and neural tissue architecture of the brain. The diffusion-weighted signal in dMRI reflects the diffusivity of water molecules in brain tissue and can be utilized to produce image-based biomarkers for clinical research. Due to the constraints on scanning time, a limited number of measurements can be acquired within a clinically feasible scan time. In order to reconstruct the dMRI signal from a discrete set of measurements, a large number of algorithms have been proposed in recent years in conjunction with varying sampling schemes, i.e., with varying b-values and gradient directions. Thus, it is imperative to compare the performance of these reconstruction methods on a single data set to provide appropriate guidelines to neuroscientists on making an informed decision while designing their acquisition protocols. For this purpose, the SPArse Reconstruction Challenge (SPARC) was held along with the workshop on Computational Diffusion MRI (at MICCAI 2014) to validate the performance of multiple reconstruction methods using data acquired from a physical phantom. A total of 16 reconstruction algorithms (9 teams) participated in this community challenge. The goal was to reconstruct single b-value and/or multiple b-value data from a sparse set of measurements. In particular, the aim was to determine an appropriate acquisition protocol (in terms of the number of measurements, b-values) and the analysis method to use for a neuroimaging study. The challenge did not delve on the accuracy of these methods in estimating model specific measures such as fractional anisotropy (FA) or mean diffusivity, but on the accuracy of these methods to fit the data. This paper presents several quantitative results pertaining to each reconstruction algorithm. The conclusions in this paper provide a valuable guideline for choosing a suitable algorithm and the corresponding
Gu, Renliang; Dogandžić, Aleksandar
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
Sparse Bayesian framework applied to 3D super-resolution reconstruction in fetal brain MRI
NASA Astrophysics Data System (ADS)
Becerra, Laura C.; Velasco Toledo, Nelson; Romero Castro, Eduardo
2015-01-01
Fetal Magnetic Resonance (FMR) is an imaging technique of growing importance because it allows assessing brain development and thus making an early diagnosis of congenital abnormalities. Spatial resolution is limited by the short acquisition time and unpredictable fetal movements; consequently, the resulting images are characterized by non-parallel projection planes composed of anisotropic voxels. The sparse Bayesian representation is a flexible strategy that is able to model complex relationships. Super-resolution is approached as a regression problem, whose main advantage is the capability to learn data relations from observations. Quantitative performance evaluation was carried out using synthetic images; the proposed method demonstrates better reconstruction quality than a standard interpolation approach. The presented method is a promising approach to improving the quality of information about the 3-D fetal brain structure.
Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation
Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina
2014-01-01
In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467
Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao
2016-01-01
At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze the EEG signals of epilepsy, cognitive impairment and brain computer interface (BCI), and have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods still have deficiencies in real-time performance, generalization ability and dependence on labeled samples when analyzing EEG signals. This mini review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376
Sparse approximation of long-term biomedical signals for classification via dynamic PCA.
Xie, Shengkun; Jin, Feng; Krishnan, Sridhar
2011-01-01
Sparse approximation is a novel technique for event detection in long-term, complex biomedical signals. It involves simplifying the extent of resources required to describe a large set of data sufficiently for classification. In this paper, we propose a multivariate statistical approach using dynamic principal component analysis (PCA) along with a non-overlapping moving window technique to extract feature information from univariate long-term observational signals. Within the dynamic PCA framework, a few principal components plus the energy measure of signals in the principal component subspace are highly promising for event detection in both stationary and non-stationary signals. The proposed method was first tested using synthetic databases containing various representative signals. The effectiveness of the method is then verified with real EEG signals for the purpose of epilepsy diagnosis and epileptic seizure detection. This sparse method produces 100% classification accuracy for both synthetic data and real single-channel EEG data.
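The windowed dynamic-PCA feature described above can be sketched in a few lines. Here "dynamic" PCA is read as ordinary PCA applied to a time-lagged embedding of the univariate signal, a common interpretation rather than the authors' exact implementation; the window, lag, and subspace sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def dpca_features(x, win=64, lags=4, k=2):
    """Energy ratio of the first k principal components per window."""
    feats = []
    for start in range(0, len(x) - win + 1, win):      # non-overlapping windows
        w = x[start:start + win]
        # lagged (Hankel-like) embedding: row t is [w[t+3], w[t+2], w[t+1], w[t]]
        X = np.column_stack([w[lags - 1 - j: len(w) - j] for j in range(lags)])
        X = X - X.mean(axis=0)
        sv = np.linalg.svd(X, compute_uv=False)
        # fraction of energy captured by the k-dimensional PC subspace
        feats.append(np.sum(sv[:k] ** 2) / np.sum(sv ** 2))
    return np.array(feats)

# synthetic signal: white noise with a sinusoidal "event" spanning windows 2-3
x = rng.standard_normal(64 * 6)
x[128:256] += 3 * np.sin(2 * np.pi * 0.05 * np.arange(128))
f = dpca_features(x)
print(f.round(2))
```

Windows containing the oscillatory event concentrate their energy in a low-dimensional PC subspace, so their energy ratio rises well above that of the pure-noise windows, which is the separation the classifier exploits.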
NASA Astrophysics Data System (ADS)
Dabbech, A.; Ferrari, C.; Mary, D.; Slezak, E.; Smirnov, O.; Kenyon, J. S.
2015-04-01
Context. Recent years have seen huge developments in radio telescopes and a tremendous increase in their capabilities (sensitivity, angular and spectral resolution, field of view, etc.). Such systems make designing more sophisticated techniques mandatory not only for transporting, storing, and processing this new generation of radio interferometric data, but also for restoring the astrophysical information contained in such data. Aims. In this paper we present a new radio deconvolution algorithm named MORESANE and its application to fully realistic simulated data of MeerKAT, one of the SKA precursors. This method has been designed for the difficult case of restoring diffuse astronomical sources that are faint in brightness, complex in morphology, and possibly buried in the dirty beam's side lobes of bright radio sources in the field. Methods. MORESANE is a greedy algorithm that combines complementary types of sparse recovery methods in order to reconstruct the most appropriate sky model from observed radio visibilities. A synthesis approach is used for reconstructing images, in which the synthesis atoms representing the unknown sources are learned using analysis priors. We applied this new deconvolution method to fully realistic simulations of the radio observations of a galaxy cluster and of an HII region in M 31. Results. We show that MORESANE is able to efficiently reconstruct images composed of a wide variety of sources (compact point-like objects, extended tailed radio galaxies, low-surface-brightness emission) from radio interferometric data. Comparisons with state-of-the-art algorithms indicate that MORESANE provides competitive results in terms of both total flux/surface brightness conservation and fidelity of the reconstructed model. MORESANE seems particularly well suited to recovering diffuse and extended sources, as well as the bright and compact radio sources known to be hosted in galaxy clusters.
Liquid argon TPC signal formation, signal processing and reconstruction techniques
NASA Astrophysics Data System (ADS)
Baller, B.
2017-07-01
This document describes a reconstruction chain that was developed for the ArgoNeuT and MicroBooNE experiments at Fermilab. These experiments study accelerator neutrino interactions that occur in a Liquid Argon Time Projection Chamber. Reconstructing the properties of particles produced in these interactions benefits from knowledge of the micro-physics processes that affect the creation and transport of ionization electrons to the readout system. A wire signal deconvolution technique was developed to convert wire signals to a standard form for hit reconstruction, to remove artifacts in the electronics chain, and to remove coherent noise. A unique clustering algorithm reconstructs line-like trajectories and vertices in two dimensions, which are then matched to create 3D objects. These techniques and algorithms are available to all experiments that use the LArSoft suite of software.
An add-on video compression codec based on content-adaptive sparse super-resolution reconstructions
NASA Astrophysics Data System (ADS)
Yang, Shu; Jiang, Jianmin
2017-02-01
In this paper, we introduce an idea of content-adaptive sparse reconstruction to achieve optimized magnification quality for down-sampled video frames, to which two stages of pruning are applied to select the most closely correlated images for construction of an over-complete dictionary and to drive the sparse representation of the enlarged frame. In this way, not only are the sampling and dictionary training processes accelerated and optimized in accordance with the input frame content, but an add-on video compression codec can also be developed by applying the scheme as a preprocessor to any standard video compression algorithm. Our extensive experiments illustrate that (i) the proposed content-adaptive sparse reconstruction outperforms the existing benchmark in terms of super-resolution quality; and (ii) when applied to H.264, one of the international video compression standards, the proposed add-on video codec can achieve three times more compression while maintaining competitive decoding quality.
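The content-adaptive pruning stage can be sketched as a correlation ranking over a frame library. This is a hedged illustration only: the single correlation score below stands in for the paper's two-stage pruning, and all names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def prune(frame, library, keep=4):
    # rank library frames by correlation with the input frame; keep the best
    scores = np.array([np.corrcoef(frame.ravel(), g.ravel())[0, 1]
                       for g in library])
    return np.argsort(scores)[::-1][:keep]

base = rng.standard_normal((8, 8))
library = ([base + 0.1 * rng.standard_normal((8, 8)) for _ in range(3)]  # correlated
           + [rng.standard_normal((8, 8)) for _ in range(5)])            # unrelated
frame = base + 0.1 * rng.standard_normal((8, 8))

selected = prune(frame, library)
# stack the surviving frames as columns of an over-complete dictionary
dictionary = np.column_stack([library[i].ravel() for i in selected])
print(sorted(selected[:3].tolist()))  # the three correlated frames rank first
```

Because the dictionary is rebuilt from only the closest-matching content, the subsequent sparse coding of the enlarged frame works over far fewer, more relevant atoms, which is the source of the speed-up the abstract claims.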
Wilson, Neil E.; Burns, Brian L.; Iqbal, Zohaib; Thomas, M. Albert
2015-01-01
Purpose: To implement a 5D (3 spatial + 2 spectral) correlated spectroscopic imaging sequence for application to the human calf. Theory and Methods: Nonuniform sampling was applied across the two phase-encoded dimensions and the indirect spectral dimension of an echo planar correlated spectroscopic imaging sequence. A reconstruction was applied that minimized the group sparse mixed ℓ2,1-norm of the data. Multichannel data were compressed using a sensitivity map-based approach with a spatially dependent transform matrix, utilizing the self-sparsity of the individual coil images to simplify the reconstruction. Results: Single-channel data with 8× and 16× undersampling are shown in the calf of a diabetic patient. A 15-channel scan with 12× undersampling of a healthy volunteer was reconstructed using 5 virtual channels and compared to a fully sampled single-slice scan. Group sparse reconstruction recovers the lipid cross peaks much more faithfully than ℓ1 minimization. Conclusion: COSY spectra can be acquired over a 3D spatial volume with a scan time under 15 minutes using an echo planar readout with highly undersampled data and group sparse reconstruction. PMID:26382049
Reconstructing Boolean Models of Signaling
Karp, Richard M.
2013-01-01
Since the first emergence of protein–protein interaction networks more than a decade ago, they have been viewed as static scaffolds of the signaling–regulatory events taking place in cells, and their analysis has been mainly confined to topological aspects. Recently, functional models of these networks have been suggested, ranging from Boolean to constraint-based methods. However, learning such models from large-scale data remains a formidable task, and most modeling approaches rely on extensive human curation. Here we provide a generic approach to learning Boolean models automatically from data. We apply our approach to growth and inflammatory signaling systems in humans and show how the learning phase can improve the fit of the model to experimental data, remove spurious interactions, and lead to better understanding of the system at hand. PMID:23286509
Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.
Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai
2016-02-01
The study of biology and medicine in noisy environments is an evolving direction in biological data analysis. Among such studies, the analysis of electrocardiogram (ECG) signals in a noisy environment is a challenging direction in personalized medicine. Due to its periodic characteristic, an ECG signal can be roughly regarded as a sparse biomedical signal. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance. Then, by exploiting these subspaces, the mixing matrix is estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing certain transformation matrices, these time points form a row echelon-like system. After that, the sources at each layer can be solved explicitly by corresponding matrix operations. It is worth noting that all these operations are conducted under a weak sparsity condition, namely that the number of active sources is less than the number of observations. Experimental results show that the proposed method performs better on the sparse ECG signal recovery problem.
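The weak-sparse setting at the heart of this method can be demonstrated on a toy mixture. Once the mixing matrix and the active supports are known (the paper estimates both in its first stage), each time point reduces to a small least-squares solve; the paper organizes these solves into a row echelon-like system, while the sketch below, with illustrative sizes, solves them directly:

```python
import numpy as np

rng = np.random.default_rng(2)

# at every time point fewer sources are active than there are observations
m, n, T = 3, 5, 8                        # observations, sources, time points
A = rng.standard_normal((m, n))          # mixing matrix (assumed known here)
S = np.zeros((n, T))
supports = [rng.choice(n, size=2, replace=False) for _ in range(T)]
for t, sup in enumerate(supports):
    S[sup, t] = rng.standard_normal(2)
X = A @ S                                # noise-free mixtures

# per-time-point solve restricted to the active columns of A
S_hat = np.zeros_like(S)
for t, sup in enumerate(supports):
    coef, *_ = np.linalg.lstsq(A[:, sup], X[:, t], rcond=None)
    S_hat[sup, t] = coef

print(np.allclose(S_hat, S))  # prints True
```

With only two active sources against three observations, each restricted system is overdetermined and full rank almost surely, so the noise-free sources are recovered exactly.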
A sparse digital signal model for ultrasonic nondestructive evaluation of layered materials.
Bochud, N; Gomez, A M; Rus, G; Peinado, A M
2015-09-01
Signal modeling has proven to be a useful tool for characterizing damaged materials under ultrasonic nondestructive evaluation (NDE). In this paper, we introduce a novel digital signal model for ultrasonic NDE of multilayered materials. This model borrows concepts from lattice filter theory and bridges them to the physics involved in the wave-material interactions. In particular, the proposed theoretical framework shows that any multilayered material can be characterized by a transfer function with sparse coefficients. The filter coefficients are linked to the physical properties of the material and are analytically obtained from them, whereas the sparse distribution arises naturally and does not rely on heuristic approaches. The developed model is first validated with experimental measurements obtained from multilayered media consisting of homogeneous solids. Then, the sparse structure of the obtained digital filter is exploited through a model-based inverse problem for damage identification in a carbon fiber-reinforced polymer (CFRP) plate.
NASA Astrophysics Data System (ADS)
Zhang, Wenji; Hoorfar, Ahmad
2017-05-01
In this paper, we present a sparse image reconstruction approach for radar imaging through multilayered media with total variation minimization (TVM). The approach is well suited for high-resolution imaging in both ground penetrating radar (GPR) and through-the-wall radar imaging (TWRI) applications. The multilayered-media Green's function is incorporated in the imaging algorithm to efficiently model wave propagation in the multilayered environment. For GPR imaging, the multilayered subsurface Green's function is derived in closed form with the saddle-point method, which is significantly less time consuming than numerical methods. For through-the-wall radar imaging, where the first and last layers are free space, a far-field approximation of the Green's function in analytical form is used to model wave propagation through single or multilayered building walls. The TVM minimizes the gradient of the image, resulting in excellent edge preservation and shape reconstruction. Representative examples are presented to show high-quality imaging results with limited data under various subsurface and through-the-wall imaging scenarios.
Stable Restoration and Separation of Approximately Sparse Signals
2011-07-02
Signal denoising and ultrasonic flaw detection via overcomplete and sparse representations.
Zhang, Guang-Ming; Harvey, David M; Braden, Derek R
2008-11-01
Sparse signal representations from overcomplete dictionaries are among the most recent techniques in the signal processing community, with applications extending into many fields. In this paper, the technique is utilized to address the ultrasonic flaw detection and noise suppression problem. In particular, a noisy ultrasonic signal is decomposed into sparse representations using a sparse Bayesian learning algorithm and an overcomplete dictionary customized from a Gabor dictionary by incorporating a priori information about the transducer used. Nonlinear postprocessing, including thresholding and pruning, is then applied to the decomposed coefficients to reduce the noise contribution and extract the flaw information. Because of the highly compact nature of sparse representations, flaw echoes are packed into a few significant coefficients, while noise energy is likely scattered across the dictionary atoms, generating insignificant coefficients. This property greatly increases the efficiency of the pruning and thresholding operations and is extremely useful for detecting flaw echoes embedded in background noise. The performance of the proposed approach is verified experimentally and compared with a wavelet transform signal processor. Experimental results on detecting ultrasonic flaw echoes contaminated by white Gaussian additive noise or correlated noise are presented in the paper.
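The "pack the echo into a few atoms, scatter the noise" property can be sketched numerically. The sketch substitutes plain matching pursuit for the paper's sparse Bayesian learning, and the Gabor dictionary parameters and prune threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

N = 128
t = np.arange(N)

def gabor(center, freq, width=8.0):
    # unit-norm Gaussian-windowed cosine atom
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

D = np.column_stack([gabor(c, f) for c in range(8, N, 8) for f in (0.1, 0.2, 0.3)])
clean = 2.0 * gabor(64, 0.2)                      # flaw echo (in the dictionary)
y = clean + 0.3 * rng.standard_normal(N)

# matching pursuit: greedily peel off the best-matching atom
coef, r = np.zeros(D.shape[1]), y.copy()
for _ in range(10):
    k = int(np.argmax(np.abs(D.T @ r)))
    a = D[:, k] @ r
    coef[k] += a
    r -= a * D[:, k]

coef[np.abs(coef) < 0.5] = 0.0                    # prune insignificant (noise) atoms
denoised = D @ coef
```

The echo's energy lands almost entirely on one large coefficient, while the noise projections stay below the prune threshold, so thresholding removes noise with little damage to the flaw echo.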
Subspace weighted ℓ 2,1 minimization for sparse signal recovery
NASA Astrophysics Data System (ADS)
Zheng, Chundi; Li, Gang; Liu, Yimin; Wang, Xiqin
2012-12-01
In this article, we propose a weighted ℓ2,1 minimization algorithm for the jointly-sparse signal recovery problem. The proposed algorithm exploits the relationship between the noise subspace and the overcomplete basis matrix to design weights: large weights are assigned to entries whose indices are more likely to be outside the row support of the jointly sparse signals, so that those indices are expelled from the row support in the solution, and small weights are assigned to entries whose indices correspond to the row support, so that the solution tends to retain those indices. Compared with regular ℓ2,1 minimization, the proposed algorithm not only further enhances the sparseness of the solution but also reduces the requirements on both the number of snapshots and the signal-to-noise ratio (SNR) for stable recovery. Both simulations and experiments on real data demonstrate that the proposed algorithm outperforms the ℓ1-SVD algorithm, which straightforwardly exploits ℓ2,1 minimization, for both deterministic and random basis matrices.
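The weight design alone can be sketched as follows (not the full weighted ℓ2,1 solver): the noise subspace of the sample covariance is nearly orthogonal to the dictionary columns on the true row support, so the projection norm is small exactly where the solution should be nonzero, and those entries receive small weights. Problem sizes and noise level here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

n, m, L = 20, 40, 50                     # sensors, dictionary atoms, snapshots
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)           # unit-norm columns
support = [5, 17, 30]                    # true row support
S = rng.standard_normal((len(support), L))
Y = A[:, support] @ S + 0.01 * rng.standard_normal((n, L))

R = Y @ Y.T / L                          # sample covariance
eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
U_noise = eigvecs[:, :-len(support)]     # noise subspace (smallest eigenvalues)
w = np.linalg.norm(U_noise.T @ A, axis=0)
print(sorted(np.argsort(w)[:3].tolist()))  # smallest weights fall on the support
```

Feeding these weights into an ℓ2,1 penalty then cheaply penalizes off-support rows while leaving on-support rows nearly free, which is what sharpens the solution relative to unweighted ℓ1-SVD.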
Park, Suhyung; Park, Jaeseok
2015-05-07
Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k-t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI, it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k-t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k-t SPARKS incorporates Kalman-smoother self-calibration in k-t space and sparse signal recovery in x-f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k-t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames are included in modeling the state transition, while a coil-dependent noise statistic is employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k-t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
Zhang, Hanming; Li, Lei; Cai, Ailong
2015-01-01
Sparse-view imaging is a promising scanning method that can reduce the radiation dose in X-ray computed tomography (CT). The reconstruction algorithm for a sparse-view imaging system is of significant importance. Spatial-domain iterative algorithms for CT image reconstruction have low operational efficiency and high computational requirements. A novel Fourier-based iterative reconstruction technique that utilizes the nonuniform fast Fourier transform is presented in this study, along with advanced total variation (TV) regularization, for sparse-view CT. Combined with the alternating direction method, the proposed approach shows excellent efficiency and rapid convergence. Numerical simulations and real data experiments were performed on a parallel-beam CT. Experimental results validate that the proposed method has higher computational efficiency and better reconstruction quality than conventional algorithms, such as the simultaneous algebraic reconstruction technique using the TV method and the alternating direction total variation minimization approach, within the same time duration. The proposed method appears to have extensive applications in X-ray CT imaging. PMID:26120355
NASA Astrophysics Data System (ADS)
Shoupeng, Song; Zhou, Jiang
2017-03-01
Converting an ultrasonic signal to an ultrasonic pulse stream is the key step of finite rate of innovation (FRI) sparse sampling. At present, ultrasonic pulse-stream-forming techniques are mainly based on digital algorithms, and no hardware circuit that can achieve this has been reported. This paper proposes a new quadrature demodulation (QD) based circuit implementation for forming an ultrasonic pulse stream. After elaborating on FRI sparse sampling theory, the processing of the ultrasonic signal is explained, followed by a discussion and analysis of ultrasonic pulse-stream-forming methods. In contrast to ultrasonic signal envelope extraction techniques, a quadrature demodulation method (QDM) is proposed. Simulation experiments were performed to determine its performance at various signal-to-noise ratios (SNRs). The circuit was then designed, with a mixing module, oscillator, low-pass filter (LPF), and root-of-square-sum module. Finally, application experiments were carried out on ultrasonic flaw testing of a pipeline sample. The experimental results indicate that the QDM can accurately convert an ultrasonic signal to an ultrasonic pulse stream and recover the original signal information, such as pulse width, amplitude, and time of arrival. This technique lays the foundation for ultrasonic signal FRI sparse sampling directly with hardware circuitry.
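The QDM chain (mixers, LPF, root of square sum) can be sketched in software; the carrier frequency, sample rate, and moving-average filter below are illustrative stand-ins for the analog hardware the paper describes:

```python
import numpy as np

fs, fc = 100e6, 5e6                                   # sample rate, carrier (assumed)
t = np.arange(2000) / fs
env_true = np.exp(-0.5 * ((t - 10e-6) / 1e-6) ** 2)   # Gaussian pulse envelope
x = env_true * np.cos(2 * np.pi * fc * t)             # simulated ultrasonic echo

i_mix = x * np.cos(2 * np.pi * fc * t)                # in-phase mixer
q_mix = x * np.sin(2 * np.pi * fc * t)                # quadrature mixer
h = np.ones(41) / 41                                  # crude moving-average LPF
I = np.convolve(i_mix, h, mode="same")                # suppress the 2*fc term
Q = np.convolve(q_mix, h, mode="same")
env = 2 * np.sqrt(I ** 2 + Q ** 2)                    # root of square sum

toa_us = t[np.argmax(env)] * 1e6                      # time of arrival, microseconds
print(round(float(toa_us), 1))  # prints 10.0
```

Mixing shifts the echo to baseband and to twice the carrier; the low-pass filter keeps only the baseband term, so the root of square sum recovers the pulse envelope, and with it the pulse width, amplitude, and time of arrival that FRI sampling needs.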
Odille, Freddy; Bustin, Aurélien; Liu, Shufang; Chen, Bailiang; Vuissoz, Pierre-André; Felblinger, Jacques; Bonnemains, Laurent
2017-10-02
Segmentation of cardiac cine MRI data is routinely used for the volumetric analysis of cardiac function. Conventionally, 2D contours are drawn on short-axis (SAX) image stacks with relatively thick slices (typically 8 mm). Here, an acquisition/reconstruction strategy is used for obtaining isotropic 3D cine datasets; reformatted slices are then used to optimize the manual segmentation workflow. Isotropic 3D cine datasets were obtained from multiple 2D cine stacks (acquired during free breathing in SAX and long-axis (LAX) orientations) using nonrigid motion correction (cine-GRICS method) and super-resolution. Several manual segmentation strategies were then compared, including conventional SAX segmentation, LAX segmentation in three views only, and combinations of SAX and LAX slices. An implicit B-spline surface reconstruction algorithm is proposed to reconstruct the left ventricular cavity surface from the sparse set of 2D contours. All tested sparse segmentation strategies were in good agreement, with Dice scores above 0.9 despite using fewer slices (3-6 sparse slices instead of 8-10 contiguous SAX slices). When compared to independent phase-contrast flow measurements, stroke volumes computed from four or six sparse slices had slightly higher precision than conventional SAX segmentation (error standard deviation of 5.4 mL against 6.1 mL) at the cost of slightly lower accuracy (bias of -1.2 mL against 0.2 mL). Functional parameters also showed a trend toward improved precision, including end-diastolic volumes, end-systolic volumes, and ejection fractions. The postprocessing workflow of 3D isotropic cardiac imaging strategies can be optimized using sparse segmentation and 3D surface reconstruction. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
Maneuver Detection and Reconstruction in Data Sparse Systems with an Optimal Control Based Estimator
NASA Astrophysics Data System (ADS)
Lubey, Daniel P.
The Big Sky Theory once posited that the volume of Earth's orbital environment is so large that the chance of a collision ever occurring is effectively negligible. However, since 1996 six accidental collisions have been recorded in orbit, contributing thousands of trackable debris objects to this environment and possibly hundreds of thousands to millions more that are too small to track with current assets. Much of this debris persists today. Access to this environment has become critical to our society, so we need methods to ensure safe and continued access to it. Part of ensuring this is obtaining better information on the environment's dynamics and its population. This research focuses on developing an automated approach to detecting and understanding the presence of mismodeled dynamics in orbital applications in order to provide more information on the objects in Earth orbit. We develop an algorithm called the Adaptive Optimal Control Based Estimator, which automatically tracks a target given observations, detects the presence of dynamic uncertainty, and reconstructs that mismodeling as an optimal control policy. These control policies may then be used to better understand the source of the mismodeling. Outside of a specific astrodynamics application, this algorithm attempts to fill a specific hole in the existing literature: automated, real-time estimation in dynamically mismodeled systems with data-sparse and non-cooperative observation sets while obtaining information about the mismodeling. The development of this algorithm is shown, and several astrodynamics-based simulations demonstrate its ability to automatically detect and reconstruct dynamic mismodeling while maintaining tracking of the target.
NASA Astrophysics Data System (ADS)
Yu, Wei; Wang, Chengxiang; Huang, Min
2017-04-01
Accurate images reconstructed from limited computed tomography (CT) data are desired when reducing the X-ray radiation exposure imposed on patients. The total variation (TV), known as the ℓ1-norm of the image gradient magnitudes, is popular in CT reconstruction from incomplete projection data. However, when the projection data are collected from a sparse set of views over a limited scanning angular range, results reconstructed by TV-based methods suffer from blocky artifacts and gradually changing artifacts near the edges, which degrade the reconstructed images. Unlike the TV, the ℓ0-norm of the image gradient counts the number of non-zero coefficients of the image gradient. Since regularization based on the ℓ0-norm of the image gradient does not penalize large gradient magnitudes, edges can be effectively retained. In this work, an edge-preserving image reconstruction method based on an ℓ0-regularized gradient prior was investigated for limited-angle computed tomography from sparse projections. To solve the optimization model effectively, variable splitting and the alternating direction method (ADM) were utilized. Experiments demonstrated that the ADM-like method applied to this non-convex optimization problem performs better than other classical iterative reconstruction algorithms in terms of edge preservation and artifact reduction.
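The edge-preserving behavior of the ℓ0 gradient prior can be seen in a 1D toy version of the variable-splitting scheme: the prox of the ℓ0 term is a hard threshold that leaves large jumps untouched (unlike TV's soft threshold), alternated with a least-squares update. The signal, penalty weights, and iteration count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

n = 100
x_true = np.concatenate([np.zeros(50), 2 * np.ones(50)])   # one sharp edge
y = x_true + 0.1 * rng.standard_normal(n)

D = np.diff(np.eye(n), axis=0)            # forward-difference operator, (n-1) x n
lam, rho = 0.1, 1.0
x = y.copy()
for _ in range(50):
    # z-step: hard threshold = prox of lam*||z||_0 with quadratic coupling
    g = D @ x
    z = np.where(g ** 2 > 2 * lam / rho, g, 0.0)
    # x-step: solve (I + rho D^T D) x = y + rho D^T z
    x = np.linalg.solve(np.eye(n) + rho * D.T @ D, y + rho * D.T @ z)

edges = np.count_nonzero(np.abs(D @ x) > 0.6)
print(edges)  # the single large jump survives; small noise gradients are smoothed
```

Because the hard threshold passes the 2-unit jump through unchanged while zeroing the small noise-driven gradients, the reconstruction keeps the edge at full height instead of shrinking it as an ℓ1 penalty would.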
Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.
Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H
2015-11-01
Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.
Signal processing using sparse derivatives with applications to chromatograms and ECG
NASA Astrophysics Data System (ADS)
Ning, Xiaoran
In this thesis, we investigate the sparsity that exists in the derivative domain. In particular, we focus on signals that possess up to Mth-order (M > 0) sparse derivatives. Effort is devoted to formulating suitable penalty functions and optimization problems that capture properties related to sparse derivatives, and to finding fast, computationally efficient solvers. The effectiveness of these algorithms is demonstrated on two real-world applications. In the first application, we provide an algorithm that jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks is modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized alongside symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data, with promising results. In the second application, a novel electrocardiography (ECG) enhancement algorithm, also based on sparse derivatives, is designed. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact and non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized. By solving the proposed convex ℓ1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second and third-order derivatives (differences) are sparse
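The kind of convex objective described, combining data fidelity, an asymmetric peak penalty, and ℓ1 penalties on low-order differences, can be sketched as follows. This is an illustrative cost function in the spirit of BEADS, not the thesis's exact formulation; the toy signal and parameter values are assumptions:

```python
import numpy as np

def diff_matrix(n, order):
    """Order-th finite-difference operator (dense for clarity)."""
    D = np.eye(n)
    for _ in range(order):
        D = np.diff(D, axis=0)   # each application takes one more difference
    return D

def asym_l1(x, r=6.0):
    """Asymmetric l1 penalty: negative values cost r times more,
    encouraging non-negative (peak-like) signals."""
    return float(np.sum(np.where(x >= 0, x, -r * x)))

def objective(x, y, lam0=0.5, lam1=1.0, lam2=1.0, r=6.0):
    """BEADS-style convex cost: data fidelity + asymmetric sparsity on x
    + l1 sparsity on the first and second differences of x."""
    fid = 0.5 * np.sum((y - x) ** 2)
    D1, D2 = diff_matrix(len(x), 1), diff_matrix(len(x), 2)
    return (fid + lam0 * asym_l1(x, r)
            + lam1 * np.abs(D1 @ x).sum()
            + lam2 * np.abs(D2 @ x).sum())

y = np.array([0.0, 0.1, 2.0, 0.1, 0.0, -0.1, 0.0])
# A non-negative candidate is cheaper than one that dips negative,
# because of the asymmetric penalty on peak values.
print(objective(np.maximum(y, 0), y), objective(y, y))
```

Because every term is convex, the full objective has a unique minimizer, consistent with the convergence guarantee stated in the abstract.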
Sample size and power analysis for sparse signal recovery in genome-wide association studies
Xie, Jichun; Cai, T. Tony; Li, Hongzhe
2011-01-01
Genome-wide association studies have successfully identified hundreds of novel genetic variants associated with many complex human diseases. However, there is a lack of rigorous work on evaluating the statistical power for identifying these variants. In this paper, we consider sparse signal identification in genome-wide association studies and present two analytical frameworks for detailed analysis of the statistical power for detecting and identifying the disease-associated variants. We present an explicit sample size formula for achieving a given false non-discovery rate while controlling the false discovery rate based on an optimal procedure. Sparse genetic variant recovery is also considered and a boundary condition is established in terms of sparsity and signal strength for almost exact recovery of both disease-associated variants and nondisease-associated variants. A data-adaptive procedure is proposed to achieve this bound. The analytical results are illustrated with a genome-wide association study of neuroblastoma. PMID:23049128
NASA Astrophysics Data System (ADS)
Moody, D. I.; Smith, D. A.; Heavner, M.; Hamlin, T.
2014-12-01
Ongoing research at Los Alamos National Laboratory studies the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite, launched in 1997, provided a rich RF lightning database. Application of modern pattern recognition techniques to this dataset may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We extend sparse signal processing techniques to radiofrequency (RF) transient signals, and specifically focus on improved signature extraction using sparse representations in data-adaptive dictionaries. We present various processing options and classification results for on-board discharges, and discuss robustness and potential for capability development.
Kernel-Based Reconstruction of Graph Signals
NASA Astrophysics Data System (ADS)
Romero, Daniel; Ma, Meng; Giannakis, Georgios B.
2017-02-01
A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals brings in benefits from statistical learning, offers fresh insights, and allows estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
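As a concrete illustration of a representer-theorem estimator, here is a minimal kernel ridge regression on a toy path graph. The graph, the Laplacian-regularization kernel K = (I + aL)^{-1}, the signal, and all parameter values are assumptions for this sketch, not the paper's experiments:

```python
import numpy as np

# 6-vertex path graph: adjacency and combinatorial Laplacian.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Graph kernel encoding a smoothness prior over the graph.
K = np.linalg.inv(np.eye(n) + 2.0 * L)

# True signal is smooth on the graph; observe 3 of 6 vertices with noise.
f_true = np.array([1.0, 1.1, 1.3, 1.6, 1.8, 1.9])
S = [0, 2, 5]                            # sampled vertex set
rng = np.random.default_rng(0)
y = f_true[S] + 0.01 * rng.standard_normal(len(S))

# Representer theorem: f_hat = K[:, S] @ alpha, with alpha solving the
# kernel ridge regression normal equations on the sampled vertices.
mu = 1e-3
KS = K[np.ix_(S, S)]
alpha = np.linalg.solve(KS + mu * np.eye(len(S)), y)
f_hat = K[:, S] @ alpha
print(np.round(f_hat, 2))
```

The estimate is a combination of kernel columns anchored at the sampled vertices, which is exactly the simplification the representer theorem buys: a size-|S| linear system instead of an n-dimensional optimization.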
Deng, Luzhen; Mi, Deling; He, Peng; Feng, Peng; Yu, Pengwei; Chen, Mianyi; Li, Zhichao; Wang, Jian; Wei, Biao
2015-01-01
Total Variation (TV) lacks directivity, as it uses only the x- and y-direction gradient transforms as its sparse representation during the iteration process. To address this, this paper introduces Adaptive-weighted Diagonal Total Variation (AwDTV), which uses diagonal-direction gradients to constrain the reconstructed image and adds associated weights, expressed as an exponential function and adaptively adjusted by the local image-intensity diagonal gradient, for the purpose of preserving edge details; the steepest descent method is then used to solve the optimization problem. Finally, two sets of numerical simulations show that the proposed algorithm can reconstruct high-quality CT images from few-view projections, with lower Root Mean Square Error (RMSE) and higher Universal Quality Index (UQI) than the Algebraic Reconstruction Technique (ART) and a TV-based reconstruction method.
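The diagonal-gradient idea can be sketched as follows. The exponential weight form w = exp(-(g/δ)²) is an assumption standing in for the paper's adaptive weighting, and the image is a toy example:

```python
import numpy as np

def diagonal_gradients(img):
    """Forward differences along the two diagonal directions."""
    d1 = img[1:, 1:] - img[:-1, :-1]   # main-diagonal differences
    d2 = img[1:, :-1] - img[:-1, 1:]   # anti-diagonal differences
    return d1, d2

def awdtv(img, delta=1.0):
    """Adaptive-weighted diagonal TV (a sketch of the idea, not the
    paper's exact formula): exponential weights shrink the penalty
    where the local diagonal gradient is large, preserving edges."""
    d1, d2 = diagonal_gradients(img)
    w1 = np.exp(-(d1 / delta) ** 2)
    w2 = np.exp(-(d2 / delta) ** 2)
    return float(np.sum(w1 * np.abs(d1)) + np.sum(w2 * np.abs(d2)))

img = np.zeros((6, 6)); img[:, 3:] = 5.0   # image with one vertical edge
# The large diagonal gradients at the edge get weights near exp(-25),
# so the edge is barely penalized; flat regions contribute nothing.
print(awdtv(img))
```

Without the adaptive weights, the same diagonal differences would contribute 50 to the penalty; with them, the edge is nearly free, which is the intended edge-preserving behavior.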
Xu, H
2014-06-01
Purpose: To develop and investigate whether the logarithmic barrier (LB) method can produce high-quality reconstructed CT images from sparsely-sampled noisy projection data. Methods: The objective function is typically formulated as the sum of the total variation (TV) and a data fidelity (DF) term, with a parameter λ that governs the relative weight between them. Finding the optimal value of λ is a critical step for this approach to give satisfactory results. The proposed LB method avoids using λ by constructing the objective function as the sum of the TV and a log function whose argument is the DF term. Newton's method was used to solve the optimization problem. The algorithm was coded in MatLab2013b. Both the Shepp-Logan phantom and a patient lung CT image were used for demonstration of the algorithm. Measured data were simulated by calculating the projection data using the Radon transform. A Poisson noise model was used to account for the simulated detector noise. The iteration stopped when the difference between the current TV and the previous one was less than 1%. Results: The Shepp-Logan phantom reconstruction study shows that filtered back-projection (FBP) gives strong streak artifacts for 30 and 40 projections. Although visually the streak artifacts are less pronounced for 64 and 90 projections in FBP, the 1D pixel profiles indicate that FBP gives noisier reconstructed pixel values than LB does. A lung image reconstruction is presented, showing that use of 64 projections gives satisfactory reconstructed image quality with regard to noise suppression and sharp edge preservation. Conclusion: This study demonstrates that the logarithmic barrier method can be used to reconstruct CT images from sparsely-sampled data. Around 64 projections gives a balance between over-smoothing of sharp demarcations and noise suppression. Future study may extend to CBCT reconstruction and improvement of computation speed.
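The λ-free barrier construction described in Methods can be sketched as an objective evaluation. The exact barrier form and the toy system matrix are assumptions, not the abstract's equations:

```python
import numpy as np

def log_barrier_objective(x, A, b, eps, t=1.0):
    """Barrier form of TV-regularized reconstruction (assumed form):
    minimize TV(x) subject to ||Ax - b||^2 <= eps, with the constraint
    absorbed into the barrier -(1/t) * log(eps - ||Ax - b||^2).
    No relative weight lambda between TV and data fidelity is needed."""
    n = int(np.sqrt(len(x)))
    img = x.reshape(n, n)
    tv = np.abs(np.diff(img, axis=1)).sum() + np.abs(np.diff(img, axis=0)).sum()
    df = np.sum((A @ x - b) ** 2)
    if df >= eps:                 # outside the barrier's domain
        return np.inf
    return tv - np.log(eps - df) / t

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 16))       # toy 'projection' matrix (4x4 image)
x_true = np.zeros(16); x_true[5] = 1.0  # one bright pixel
b = A @ x_true
z = rng.standard_normal(16)             # a random infeasible candidate

# The true image is feasible (df = 0) and has small TV, so its cost is
# finite and low; the random candidate violates the constraint.
print(log_barrier_objective(x_true, A, b, eps=0.5))
print(log_barrier_objective(z, A, b, eps=0.5))
```

Because the barrier is smooth inside its domain, the overall objective is twice differentiable there, which is what makes Newton's method applicable.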
NASA Astrophysics Data System (ADS)
Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno
2015-09-01
For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measureable effect on the PET image. For the four ultra-low dose
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms; the regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed at combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity, and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (the sum of data fidelity and a non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
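The bias issue and its non-convex fix can be illustrated with scalar threshold functions: soft thresholding (the ℓ1 prox) versus firm thresholding, the prox of the minimax-concave penalty, which stays within a convexity-preserving parameter range (γ > 1). This is an illustrative sketch of the convexity-preserving idea, not necessarily the thesis's exact parameterization:

```python
import numpy as np

def soft_threshold(y, lam):
    """Prox of the convex l1 penalty: shrinks every value by lam,
    biasing large (true) coefficients toward zero."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm_threshold(y, lam, gamma=2.0):
    """Prox of the (non-convex) minimax-concave penalty with gamma > 1,
    for which the scalar cost 0.5*(y - x)^2 + lam*phi(x) stays convex.
    Large values pass through without shrinkage."""
    a = np.abs(y)
    out = np.where(a <= lam, 0.0,
          np.where(a <= gamma * lam,
                   gamma * (a - lam) / (gamma - 1.0), a))
    return np.sign(y) * out

y = np.array([0.5, 1.5, 5.0])
print(soft_threshold(y, 1.0))   # large value shrunk from 5.0 to 4.0
print(firm_threshold(y, 1.0))   # large value kept at 5.0, unbiased
```

Both operators set small values to zero, but only the firm threshold leaves large coefficients unbiased, while the underlying scalar objective remains convex for γ > 1.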
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which select either too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
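For reference, a baseline greedy recovery algorithm of the family RMP improves on, orthogonal matching pursuit, can be written compactly. The sensing matrix and sparse test signal below are assumed toy data, and this is not the RMP algorithm itself:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: pick the single column most
    correlated with the residual, then re-fit by least squares on the
    selected support.  (RMP instead selects a reduced *set* of
    correlation values per iteration and prunes wrong picks.)"""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # best-correlated column
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs            # updated residual
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

rng = np.random.default_rng(7)
m, n, k = 50, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
x_true = np.zeros(n)
x_true[[11, 30, 62, 84]] = [1.5, -2.0, 1.0, -1.2]
x_hat = omp(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))
```

Each iteration costs one matrix-vector product and one small least-squares solve, which is the computational profile that set-selection variants such as RMP aim to improve further.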
On recovery of block-sparse signals via mixed ℓ2/ℓq (0 < q ≤ 1) norm minimization
NASA Astrophysics Data System (ADS)
Wang, Yao; Wang, Jianjun; Xu, Zongben
2013-12-01
Compressed sensing (CS) states that a sparse signal can be exactly recovered from very few linear measurements. In many applications, however, real-world signals also exhibit additional structure beyond standard sparsity. A typical example is the class of block-sparse signals, whose non-zero coefficients occur in a few blocks. In this article, we investigate the mixed ℓ2/ℓq (0 < q ≤ 1) norm minimization method for the exact and robust recovery of such block-sparse signals. We mainly show that the non-convex ℓ2/ℓq (0 < q < 1) minimization method has stronger sparsity-promoting ability than the commonly used ℓ2/ℓ1 minimization method, both practically and theoretically. In terms of a block variant of the restricted isometry property of the measurement matrix, we present weaker sufficient conditions for exact and robust block-sparse signal recovery than those known for ℓ2/ℓ1 minimization. We also propose an efficient Iteratively Reweighted Least-Squares (IRLS) algorithm for the induced non-convex optimization problem. The obtained weaker conditions and the proposed IRLS algorithm are tested and compared with the mixed ℓ2/ℓ1 minimization method and the standard ℓq minimization method on a series of noiseless and noisy block-sparse signals. All the comparisons demonstrate the superior performance of the mixed ℓ2/ℓq (0 < q < 1) method for block-sparse signal recovery applications, and its significance for the development of new CS techniques.
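The entrywise version of such an IRLS scheme for ℓq minimization can be sketched as follows. The block-sparse variant would reweight whole blocks rather than single entries; the matrix, signal, and continuation schedule are assumed toy choices:

```python
import numpy as np

def irls_lq(A, y, q=0.5, iters=50):
    """IRLS sketch for min ||x||_q^q subject to Ax = y.  Each iterate is
    the weighted minimum-norm solution
        x = W^{-1} A^T (A W^{-1} A^T)^{-1} y,  W = diag(w),
    with weights w_i = (x_i^2 + eps)^{q/2 - 1}, so Ax = y holds at
    every iteration; eps is driven toward zero to sharpen sparsity."""
    x = np.linalg.pinv(A) @ y                     # least-norm start
    eps = 1.0
    for _ in range(iters):
        w_inv = (x**2 + eps) ** (1.0 - q / 2.0)   # entries of W^{-1}
        x = w_inv * (A.T @ np.linalg.solve((A * w_inv) @ A.T, y))
        eps = max(eps / 10.0, 1e-10)              # continuation on eps
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[[4, 37, 62, 90]] = [2.0, -1.5, 1.0, 2.5]
x_hat = irls_lq(A, A @ x_true)
print(np.linalg.norm(A @ x_hat - A @ x_true))     # data consistency
```

Because every iterate exactly satisfies the measurement constraint, the reweighting only redistributes energy toward a sparser solution; with q < 1 the weights grow faster around zero, which is the extra sparsity-promoting push over the ℓ1 case.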
NASA Astrophysics Data System (ADS)
Moody, Daniela I.
Automatic classification of non-stationary radio frequency (RF) signals is of particular interest in persistent surveillance and remote sensing applications. Such signals are often acquired in noisy, cluttered environments, and may be characterized by complex or unknown analytical models, making feature extraction and classification difficult. This thesis proposes an adaptive classification approach for poorly characterized targets and backgrounds based on sparse representations in non-analytical dictionaries learned from data. Conventional analytical orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of non-stationary signals, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They generally do not lead to sparse decompositions (i.e., with very few non-zero coefficients), and use in classification requires separate feature selection algorithms. Pursuit-type decompositions in analytical overcomplete (non-orthogonal) dictionaries yield sparse representations, by design, and work well for signals that are similar to the dictionary elements. The pursuit search, however, has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. One such overcomplete analytical dictionary method is also analyzed in this thesis for comparative purposes. The main thrust of the thesis is learning discriminative RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics. A pursuit search is used over the learned dictionaries to generate sparse classification features in order to identify time windows that contain a target pulse. Two state-of-the-art dictionary learning methods are compared, the K-SVD algorithm and Hebbian learning, in terms of their classification performance as a function of dictionary training parameters
Huang, Wentao; Sun, Hongjian; Wang, Weijie
2017-01-01
Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. Given the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction of RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized, according to their optimization direction, into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, as well as corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis. PMID:28587198
Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.
2012-12-01
We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing, where it is generally used in image and video processing. We test this approach using synthetic observations generated from emissions in the Vulcan database. 35 sensor sites are chosen over the USA. FfCO2 emissions, averaged over 8-day periods, are estimated at 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
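The StOMP selection rule, thresholding all correlations against a formal noise level and refitting, can be sketched as follows. This is a simplified form; the threshold rule, problem sizes, and data are assumptions, not the study's configuration:

```python
import numpy as np

def stomp(A, y, n_stages=10, t=2.0):
    """Stagewise OMP sketch: at each stage, select *all* columns whose
    correlation with the residual exceeds a threshold proportional to
    the formal noise level, then re-fit by least squares.  Coefficients
    never selected are implicitly set to zero."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    r = y.copy()
    for _ in range(n_stages):
        c = A.T @ r
        sigma = np.linalg.norm(r) / np.sqrt(m)   # formal noise level
        new = np.abs(c) > t * sigma
        if not new.any():
            break
        support |= new
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs
    x = np.zeros(n)
    x[support] = xs
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
x_true = np.zeros(128)
x_true[[3, 40, 77, 100, 120]] = [3.0, -2.0, 2.5, -3.0, 2.0]
x_hat = stomp(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```

Selecting many coefficients per stage is what makes StOMP converge in a handful of stages, and setting the never-selected coefficients to zero is the simultaneous sparsification described in the abstract.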
A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.
Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M
2011-01-20
A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes.For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse to fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical
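The coarse-alignment step based on the 3D Hotelling transform amounts to principal-axes analysis of each view's point cloud. Below is a minimal sketch with assumed toy clouds; sign conventions are left unresolved, as they would be handled by the subsequent fine registration:

```python
import numpy as np

def hotelling_align(points):
    """3D Hotelling (principal-axes) transform: center a point cloud
    and rotate it into its principal-component frame, giving a
    pose-invariant coarse alignment for later fine registration."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # longest axis first
    return centered @ eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(2)
cloud = rng.standard_normal((500, 3)) * np.array([5.0, 2.0, 0.5])

# A rotated, shifted copy of the same cloud plays the role of a
# second acoustic view of the same anatomy.
theta = 0.6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
view2 = cloud @ R.T + np.array([10.0, -3.0, 1.0])

_, ev1 = hotelling_align(cloud)
_, ev2 = hotelling_align(view2)
# The principal-axis spectra agree regardless of pose, so both views
# land in a common frame before feature-based refinement.
print(np.allclose(ev1, ev2))
```

This initialization is only coarse; it removes translation and gross rotation so the non-linear least squares refinement starts near the optimum.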
Satija, Udit; Ramkumar, Barathram; Sabarimalai Manikandan, M.
2017-01-01
Automatic electrocardiogram (ECG) signal enhancement has become a crucial pre-processing step in most ECG signal analysis applications. In this Letter, the authors propose an automated noise-aware dictionary learning-based generalised ECG signal enhancement framework which can automatically learn the dictionaries based on the ECG noise type for effective representation of ECG signal and noises, and can reduce the computational load of sparse representation-based ECG enhancement systems. The proposed framework consists of noise detection and identification, noise-aware dictionary learning, sparse signal decomposition and reconstruction. The noise detection and identification is performed based on the moving average filter, first-order difference, and temporal features such as number of turning points, maximum absolute amplitude, zero-crossings, and autocorrelation features. The representation dictionary is learned based on the type of noise identified in the previous stage. The proposed framework is evaluated using noise-free and noisy ECG signals. Results demonstrate that the proposed method can significantly reduce computational load as compared with conventional dictionary learning-based ECG denoising approaches. Further, comparative results show that the method outperforms existing methods in automatically removing noises such as baseline wanders, power-line interference, muscle artefacts and their combinations without distorting the morphological content of local waves of ECG signal. PMID:28529758
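The sparse signal decomposition step can be illustrated with a minimal matching-pursuit sketch over a toy mixed dictionary; the atoms and "ECG" below are invented for illustration and are not the authors' learned dictionaries:

```python
import numpy as np

def matching_pursuit(y, D, n_atoms):
    """Greedy sparse decomposition: pick the unit-norm dictionary atom
    most correlated with the residual, subtract its contribution, repeat."""
    residual = y.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coef[k] += corr[k]              # atoms are unit-norm
        residual -= corr[k] * D[:, k]
    return coef, residual

# toy "ECG": a QRS-like bump plus a baseline-wander-like sinusoid
n = 128
t = np.arange(n)
a1 = np.exp(-0.5 * ((t - 40) / 3.0) ** 2); a1 /= np.linalg.norm(a1)
a2 = np.sin(2 * np.pi * t / n);            a2 /= np.linalg.norm(a2)
D = np.column_stack([a1, a2] + [np.roll(a1, s) for s in (10, 20, 30)])

rng = np.random.default_rng(1)
y = 3.0 * a1 + 1.5 * a2 + 0.01 * rng.normal(size=n)

coef, res = matching_pursuit(y, D, n_atoms=10)
print(np.linalg.norm(res))   # small: a few atoms capture most energy
```

Separating the signal into per-atom contributions like this is what lets an enhancement framework keep QRS-like atoms and discard noise-like ones.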
Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries
NASA Astrophysics Data System (ADS)
Moody, Daniela I.; Smith, David A.; Hamlin, Timothy D.; Light, Tess E.; Suszcynsky, David M.
2013-05-01
For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database comprising five years of data recorded from its two RF payloads. While some classification work has been done previously on the FORTE RF database, application of modern pattern recognition techniques may advance lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification scenarios and future development.
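A minimal sketch of learning a dictionary directly from data: a two-atom learner that alternates 1-sparse coding with SVD atom updates (a stripped-down K-SVD). The training "transients" are synthetic stand-ins, not FORTE data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_train = 64, 400
t = np.linspace(0.0, 1.0, n)
w1 = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # hidden waveform 1
w2 = np.sign(np.sin(2 * np.pi * 2 * t))           # hidden waveform 2
X = np.array([(w1 if rng.random() < 0.5 else w2) * rng.uniform(0.5, 2.0)
              + 0.05 * rng.normal(size=n) for _ in range(n_train)])

# initialize atoms from the data: first sample, then the sample least
# correlated with it (keeps the two clusters apart)
norms = np.linalg.norm(X, axis=1)
D = np.empty((n, 2))
D[:, 0] = X[0] / norms[0]
j = int(np.argmin(np.abs(X @ D[:, 0]) / norms))
D[:, 1] = X[j] / norms[j]

# alternate 1-sparse coding and SVD atom re-estimation
for _ in range(10):
    assign = np.argmax(np.abs(X @ D), axis=1)
    for k in range(2):
        members = X[assign == k]
        if len(members):
            # dominant direction of the cluster = updated atom
            D[:, k] = np.linalg.svd(members, full_matrices=False)[2][0]

W = np.column_stack([w1 / np.linalg.norm(w1), w2 / np.linalg.norm(w2)])
sim = np.abs(D.T @ W)
print(sim.max(axis=0))   # each hidden waveform recovered by some atom
```

Real learners (K-SVD, online dictionary learning) use more atoms and sparser codes, but the alternation between coding and atom update is the same.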
NASA Astrophysics Data System (ADS)
Humphries, T.; Winn, J.; Faridani, A.
2017-08-01
Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
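The superiorization idea, small diminishing TV-descent perturbations interleaved with a feasibility-seeking iteration, can be sketched on a toy 1D system (Landweber as the basic algorithm; all sizes and constants are illustrative assumptions, not the paper's polyenergetic setup):

```python
import numpy as np

def tv(x):
    return np.abs(np.diff(x)).sum()

def tv_subgradient(x):
    d = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return g

rng = np.random.default_rng(3)
n, m = 60, 40
x_true = np.zeros(n); x_true[15:35] = 1.0        # piecewise-constant object
A = rng.normal(size=(m, n)) / np.sqrt(m)         # underdetermined "scanner"
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2

def landweber(beta, n_iter=3000):
    """Feasibility-seeking iteration, optionally superiorized by small
    diminishing steps down a TV subgradient before each update."""
    x = np.zeros(n)
    for _ in range(n_iter):
        if beta > 0:
            g = tv_subgradient(x)
            ng = np.linalg.norm(g)
            if ng > 0:
                x = x - beta * g / ng
            beta *= 0.97
        x = x + step * A.T @ (b - A @ x)
    return x

x_plain = landweber(beta=0.0)
x_sup = landweber(beta=1.0)
# both are (nearly) consistent with the data, but the superiorized
# solution has much lower TV
print(np.linalg.norm(A @ x_sup - b), tv(x_sup), tv(x_plain))
```

Because the perturbations shrink geometrically, they do not spoil convergence toward constraints-compatibility; they only steer the iterates toward lower TV, which is the guarantee superiorization offers.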
NASA Astrophysics Data System (ADS)
Riel, B.; Simons, M.; Agram, P.
2012-12-01
Transients are a class of deformation signals on the Earth's surface that can be described as non-periodic accumulation of strain in the crust. Over seismically and volcanically active regions, these signals are often challenging to detect due to noise and other modes of deformation. Geodetic datasets that provide precise measurements of surface displacement over wide areas are ideal for exploiting both the spatial and temporal coherence of transient signals. We present an extension to the Multiscale InSAR Time Series (MInTS) approach for analyzing geodetic data by combining the localization benefits of wavelet transforms (localizing signals in space) with sparse optimization techniques (localizing signals in time). Our time parameterization approach allows us to reduce geodetic time series to sparse, compressible signals with very few non-zero coefficients corresponding to transient events. We first demonstrate the temporal transient detection by analyzing GPS data over the Long Valley caldera in California and along the San Andreas fault near Parkfield, CA. For Long Valley, we are able to resolve the documented 2002-2003 uplift event with greater temporal precision. Similarly for Parkfield, we model the postseismic deformation by specific integrated basis splines characterized by timescales that are largely consistent with postseismic relaxation times. We then apply our method to ERS and Envisat InSAR datasets consisting of over 200 interferograms for Long Valley and over 100 interferograms for Parkfield. The wavelet transforms reduce the impact of spatially correlated atmospheric noise common in InSAR data since the wavelet coefficients themselves are essentially uncorrelated. The spatial density and extended temporal coverage of the InSAR data allows us to effectively localize ground deformation events in both space and time with greater precision than has been previously accomplished.
Chakraborty, Anirban; Perales, Mariano M.; Reddy, G. Venugopala; Roy-Chowdhury, Amit K.
2013-01-01
The need for quantification of cell growth patterns in a multilayer, multi-cellular tissue necessitates the development of a 3D reconstruction technique that can estimate the 3D shapes and sizes of individual cells from Confocal Microscopy (CLSM) image slices. However, current methods of 3D reconstruction using CLSM imaging require a large number of image slices per cell. In live-cell imaging of an actively developing tissue, however, high depth resolution is not feasible, because cells must be protected from damage caused by prolonged exposure to laser radiation. In the present work, we propose an anisotropic Voronoi tessellation based 3D reconstruction framework for a tightly packed multilayer tissue with extreme z-sparsity (2–4 slices/cell) and a wide range of cell shapes and sizes. The proposed method, named the 'Adaptive Quadratic Voronoi Tessellation' (AQVT), is capable of handling both the sparsity problem and the non-uniformity in cell shapes by estimating the tessellation parameters for each cell from the sparse data-points on its boundaries. We have tested the proposed 3D reconstruction method on time-lapse CLSM image stacks of the Arabidopsis Shoot Apical Meristem (SAM) and have shown that the AQVT-based reconstruction method can correctly estimate the 3D shapes of a large number of SAM cells. PMID:23940509
Perks, Krista Eva; Gentner, Timothy Q.
2015-01-01
Natural acoustic communication signals, such as speech, are typically high-dimensional with a wide range of co-varying spectro-temporal features at multiple timescales. The synaptic and network mechanisms for encoding these complex signals are largely unknown. We are investigating these mechanisms in high-level sensory regions of the songbird auditory forebrain, where single neurons show sparse, object-selective spiking responses to conspecific songs. Using whole-cell in-vivo patch clamp techniques in the caudal mesopallium and the caudal nidopallium of starlings, we examine song-driven subthreshold and spiking activity. We find that both the subthreshold and the spiking activity are reliable (i.e., the same song drives a similar response each time it is presented) and specific (i.e. responses to different songs are distinct). Surprisingly, however, the reliability and specificity of the sub-threshold response was uniformly high regardless of when the cell spiked, even for song stimuli that drove no spikes. We conclude that despite a selective and sparse spiking response, high-level auditory cortical neurons are under continuous, non-selective, stimulus-specific synaptic control. To investigate the role of local network inhibition in this synaptic control, we then recorded extracellularly while pharmacologically blocking local GABA-ergic transmission. This manipulation modulated the strength and the reliability of stimulus-driven spiking, consistent with a role for local inhibition in regulating the reliability of network activity and the stimulus specificity of the subthreshold response in single cells. We discuss these results in the context of underlying computations that could generate sparse, stimulus-selective spiking responses, and models for hierarchical pooling. PMID:25728189
Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries
NASA Astrophysics Data System (ADS)
Moody, D. I.; Hamlin, T.; Light, T. E.; Loveland, R. C.; Smith, D. A.; Suszcynsky, D. M.
2012-12-01
For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory (LANL) to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Arguably the richest satellite lightning database ever recorded is that from the Fast On-orbit Recording of Transient Events (FORTE) satellite, which returned at least five years of data from its two RF payloads after launch in 1997. While some classification work has been done previously on the LANL FORTE RF database, application of modern pattern recognition techniques may further lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Extracting classification features from RF signals typically relies on knowledge of the application domain in order to find feature vectors unique to a signal class and robust against background noise. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification performance
Manikandan, M. Sabarimalai; Ramkumar, Barathram; Deshpande, Pranav S.; Choudhary, Tilendra
2015-01-01
An automated noise-robust premature ventricular contraction (PVC) detection method is proposed based on sparse signal decomposition, temporal features, and decision rules. In this Letter, the authors exploit sparse expansion of electrocardiogram (ECG) signals on mixed dictionaries for simultaneously enhancing the QRS complex and reducing the influence of tall P and T waves, baseline wanders, and muscle artefacts. They further investigate a set of ten generalised temporal features combined with a decision-rule-based detection algorithm for discriminating PVC beats from non-PVC beats. The accuracy and robustness of the proposed method are evaluated using 47 ECG recordings from the MIT/BIH arrhythmia database. Evaluation results show that the proposed method achieves an average sensitivity of 89.69% and a specificity of 99.63%. Results further show that the proposed decision-rule-based algorithm with ten generalised features can accurately detect different patterns of PVC beats (uniform and multiform, couplets, triplets, and ventricular tachycardia) in the presence of other normal and abnormal heartbeats. PMID:26713158
On the estimation of brain signal entropy from sparse neuroimaging data.
Grandy, Thomas H; Garrett, Douglas D; Schmiedek, Florian; Werkle-Bergner, Markus
2016-03-29
Multi-scale entropy (MSE) has been recently established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of the MSE to some classes of neural signals is MSE's apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block-designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs.
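The segment-pooling idea can be sketched by computing sample entropy (the per-scale kernel of MSE) with match counts pooled across segments so that no template straddles a recording gap. Parameters m = 2 and r = 0.2 follow common practice; the data here are simulated white noise, not the study's EEG/fMRI recordings:

```python
import numpy as np

def sampen_counts(x, m, r):
    """Pairwise template-match counts (lengths m and m+1) in one segment."""
    win = np.lib.stride_tricks.sliding_window_view(x, m + 1)
    # Chebyshev distances between all template pairs
    dm = np.abs(win[:, None, :m] - win[None, :, :m]).max(axis=2)
    dm1 = np.abs(win[:, None, :] - win[None, :, :]).max(axis=2)
    iu = np.triu_indices(len(win), k=1)
    return int((dm1[iu] < r).sum()), int((dm[iu] < r).sum())

def sampen_pooled(segments, m=2, r=0.2):
    """Sample entropy with match counts pooled across discontinuous
    segments, so no template crosses a gap between segments."""
    a = b = 0
    for seg in segments:
        da, db = sampen_counts(seg, m, r)
        a += da
        b += db
    return -np.log(a / b)

rng = np.random.default_rng(5)
x = rng.normal(size=600)
x /= x.std()
cont = sampen_pooled([x])                 # one continuous series
disc = sampen_pooled(np.split(x, 6))      # six 100-point blocks
print(round(cont, 2), round(disc, 2))     # similar estimates
```

Pooling counts (rather than averaging per-segment entropies) is what keeps the discontinuous estimate stable even when individual segments are short.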
Removal of Nuisance Signals from Limited and Sparse 1H MRSI Data Using a Union-of-Subspaces Model
Ma, Chao; Lam, Fan; Johnson, Curtis L.; Liang, Zhi-Pei
2015-01-01
Purpose To remove nuisance signals (e.g., water and lipid signals) for 1H MRSI data collected from the brain with limited and/or sparse (k, t)-space coverage. Methods A union-of-subspace model is proposed for removing nuisance signals. The model exploits the partial separability of both the nuisance signals and the metabolite signal, and decomposes an MRSI dataset into several sets of generalized voxels that share the same spectral distributions. This model enables the estimation of the nuisance signals from an MRSI dataset that has limited and/or sparse (k, t)-space coverage. Results The proposed method has been evaluated using in vivo MRSI data. For conventional CSI data with limited k-space coverage, the proposed method produced “lipid-free” spectra without lipid suppression during data acquisition at 130 ms echo time. For sparse (k, t)-space data acquired with conventional pulses for water and lipid suppression, the proposed method was also able to remove the remaining water and lipid signals with negligible residuals. Conclusions Nuisance signals in 1H MRSI data reside in low-dimensional subspaces. This property can be utilized for estimation and removal of nuisance signals from 1H MRSI data even when they have limited and/or sparse coverage of (k, t)-space. The proposed method should prove useful especially for accelerated high-resolution 1H MRSI of the brain. PMID:25762370
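The partial-separability argument can be illustrated with a toy Casorati (voxel-by-time) matrix in which a water-like nuisance and a metabolite-like component each span a one-dimensional temporal subspace; the temporal shapes and amplitudes below are invented, not MRSI data or the authors' model:

```python
import numpy as np

rng = np.random.default_rng(6)
nv, nt = 50, 120                     # voxels x time points
t = np.arange(nt)
f_water = np.exp(-t / 40.0)                           # nuisance shape (toy)
f_metab = np.cos(2 * np.pi * t / 10.0) * np.exp(-t / 60.0)
amp_w = rng.uniform(1.0, 5.0, nv)                     # water dominates
amp_m = rng.uniform(0.0, 0.5, nv)
X = (np.outer(amp_w, f_water) + np.outer(amp_m, f_metab)
     + 0.01 * rng.normal(size=(nv, nt)))

# partial separability: each component spans a low-dimensional temporal
# subspace, so the data matrix is (numerically) rank 2 plus noise
U, s, vt = np.linalg.svd(X, full_matrices=False)
print(s[:3] / s[0])                  # two significant components only

# project out the dominant (water-like) temporal subspace; the
# metabolite-like component survives in the residual
X_clean = X - (X @ vt[:1].T) @ vt[:1]
prof = X_clean.sum(axis=0)
cos = abs(prof @ f_metab) / (np.linalg.norm(prof) * np.linalg.norm(f_metab))
print(cos)                           # close to 1
```

The actual method estimates the nuisance subspaces more carefully (and from limited (k, t)-space data), but the low-rank subspace structure is what makes that estimation possible.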
An algorithm for extraction of periodic signals from sparse, irregularly sampled data
NASA Technical Reports Server (NTRS)
Wilcox, J. Z.
1994-01-01
Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
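A simplified CLEAN-style sketch of the idea: repeatedly fit, on the actual gapped sample times, the single sinusoid that best matches the residual and subtract it, so that the sampling-window sidelobes are removed along with each component. The frequency grid, signal frequencies, and noise level are illustrative assumptions:

```python
import numpy as np

def clean_extract(t, y, freqs, n_components):
    """CLEAN-style extraction from irregularly sampled data: at each
    pass, least-squares fit one sinusoid at the true sample times,
    keep the best-fitting frequency, and subtract its contribution."""
    residual = y.copy()
    found = []
    for _ in range(n_components):
        best = None
        for f in freqs:
            M = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t)])
            coef, *_ = np.linalg.lstsq(M, residual, rcond=None)
            power = np.sum((M @ coef) ** 2)
            if best is None or power > best[0]:
                best = (power, f, M @ coef)
        _, f, fit = best
        found.append(f)
        residual = residual - fit
    return found, residual

rng = np.random.default_rng(7)
# irregular sampling with a large gap (two observing sessions)
t = np.sort(np.concatenate([rng.uniform(0, 20, 80),
                            rng.uniform(50, 70, 80)]))
y = (2.0 * np.sin(2 * np.pi * 1.3 * t)
     + 1.0 * np.cos(2 * np.pi * 2.7 * t)
     + 0.2 * rng.normal(size=t.size))

freqs = np.arange(0.5, 4.0, 0.01)
found, residual = clean_extract(t, y, freqs, 2)
print(sorted(found))   # close to the true frequencies 1.3 and 2.7
```

Subtracting each fitted component in the time domain, rather than masking peaks in the spectrum, is what suppresses the intermodulation (gap) sidelobes described above.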
NASA Astrophysics Data System (ADS)
Vitanovski, Dime; Tsymbal, Alexey; Ionasec, Razvan; Georgescu, Bogdan; Zhou, Shaohua K.; Hornegger, Joachim; Comaniciu, Dorin
2011-03-01
Congenital heart defect (CHD) is the most common birth defect and a frequent cause of death for children. Tetralogy of Fallot (ToF) is the most often occurring CHD, which affects in particular the pulmonary valve and trunk. Emerging interventional methods enable percutaneous pulmonary valve implantation, which constitutes an alternative to open heart surgery. While minimally invasive methods become common practice, imaging and non-invasive assessment tools become crucial components in the clinical setting. Cardiac computed tomography (CT) and cardiac magnetic resonance imaging (cMRI) are techniques with complementary properties and the ability to acquire multiple non-invasive and accurate scans required for advanced evaluation and therapy planning. In contrast to CT, which covers the full 4D information over the cardiac cycle, cMRI often acquires partial information, for example only one 3D scan of the whole heart in the end-diastolic phase and two 2D planes (long and short axes) over the whole cardiac cycle. The data acquired in this way is called sparse cMRI. In this paper, we propose a regression-based approach for the reconstruction of the full 4D pulmonary trunk model from sparse MRI. The reconstruction approach is based on learning a distance function between the sparse MRI which needs to be completed and the 4D CT data with the full information used as the training set. The distance is based on the intrinsic Random Forest similarity which is learnt for the corresponding regression problem of predicting coordinates of unseen mesh points. Extensive experiments performed on 80 cardiac CT and MR sequences demonstrated the average speed of 10 seconds and accuracy of 0.1053mm mean absolute error for the proposed approach. Using the case retrieval workflow and local nearest neighbour regression with the learnt distance function appears to be competitive with respect to "black box" regression with immediate prediction of coordinates, while providing transparency to the
Sen, R; Sen, C; Pack, J; Block, K T; Golfinos, J G; Prabhu, V; Boada, F; Gonen, O; Kondziolka, D; Fatterpekar, G
2017-06-01
Preoperative localization of the pituitary gland with imaging in patients with macroadenomas has been inadequately explored. The pituitary gland enhancing more avidly than a macroadenoma has been described in the literature. Taking advantage of this differential enhancement pattern, our aim was to evaluate the role of high-resolution dynamic MR imaging with golden-angle radial sparse parallel reconstruction in localizing the pituitary gland in patients undergoing trans-sphenoidal resection of a macroadenoma. A retrospective study was performed in 17 patients who underwent trans-sphenoidal surgery for pituitary macroadenoma. Radial volumetric interpolated brain examination sequences with golden-angle radial sparse parallel technique were obtained. Using an ROI-based method to obtain signal-time curves and permeability measures, 3 separate readers identified the normal pituitary gland distinct from the macroadenoma. The readers' localizations were then compared with the intraoperative location of the gland. Statistical analyses were performed to assess the interobserver agreement and correlation with operative findings. The normal pituitary gland was found to have steeper enhancement-time curves as well as higher peak enhancement values compared with the macroadenoma (P < .001). Interobserver agreement was almost perfect in all 3 planes (κ = 0.89). In the 14 cases in which the gland was clearly identified intraoperatively, the correlation between the readers' localization and the true location derived from surgery was also nearly perfect (κ = 0.95). This study confirms our ability to consistently and accurately identify the normal pituitary gland in patients with macroadenomas with the golden-angle radial sparse parallel technique with quantitative permeability measurements and enhancement-time curves. © 2017 by American Journal of Neuroradiology.
Zheng, Zhizhong; Cai, Ailong; Li, Lei; Yan, Bin; Le, Fulong; Wang, Linyuan; Hu, Guoen
2017-07-07
Sparse-view imaging is a promising scanning approach which offers a fast scanning rate and low radiation dose in X-ray computed tomography (CT). Conventional L1-norm based total variation (TV) has been widely used in image reconstruction since the advent of compressive sensing theory. However, with only the first-order information of the image used, the TV often generates unsatisfactory images for some applications. As is widely known, image curvature is among the most important second-order features of images and can potentially be applied in image reconstruction for quality improvement. This study incorporates the curvature in the optimization model and proposes a new total absolute curvature (TAC) based reconstruction method. The proposed model contains both total absolute curvature and total variation (TAC-TV), which is intended to better describe images with complicated features. As for the practical algorithm development, the efficient alternating direction method of multipliers (ADMM) is utilized, which yields a practical and easily coded algorithm. The TAC-TV iterations mainly contain FFTs, soft-thresholding and projection operations and can be launched on a graphics processing unit, which leads to relatively high performance. To evaluate the presented algorithm, both qualitative and quantitative studies were performed using various few-view datasets. The results illustrate that the proposed approach yielded better reconstruction quality and satisfactory convergence properties compared with TV-based methods.
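Two of the ingredients such ADMM iterations rely on, soft-thresholding (the proximal operator of the ℓ1 norm) and finite-difference gradient operators (the TV term), can be sketched as follows; the phantom and noise level are illustrative, not the paper's experiments:

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau*||.||_1, i.e. argmin_x 0.5*(x - v)^2 + tau*|x|."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def gradients(u):
    """Forward differences along each axis (replicated boundary)."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def total_variation(u):
    """Anisotropic TV: l1 norm of the finite-difference gradients."""
    gx, gy = gradients(u)
    return np.abs(gx).sum() + np.abs(gy).sum()

# a piecewise-constant phantom has low TV; noise inflates it, which is
# why TV (and curvature) penalties suppress sparse-view artefacts
phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0
noisy = phantom + 0.05 * np.random.default_rng(10).normal(size=(64, 64))
print(total_variation(phantom), total_variation(noisy))   # 96.0 vs much larger

print(soft_threshold(np.array([3.0, 0.4, -2.0]), 1.0))    # shrinks to [2, 0, -1]
```

In the full TAC-TV algorithm these pieces appear inside the ADMM splitting, with the quadratic subproblem solved via FFTs; curvature adds second-order difference operators in the same pattern.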
Multilayer material characterization using thermographic signal reconstruction
NASA Astrophysics Data System (ADS)
Shepard, Steven M.; Beemer, Maria Frendberg
2016-02-01
Active-thermography has become a well-established Nondestructive Testing (NDT) method for detection of subsurface flaws. In its simplest form, flaw detection is based on visual identification of contrast between a flaw and local intact regions in an IR image sequence of the surface temperature as the sample responds to thermal stimulation. However, additional information and insight can be obtained from the sequence, even in the absence of a flaw, through analysis of the logarithmic derivatives of individual pixel time histories using the Thermographic Signal Reconstruction (TSR) method. For example, the response of a flaw-free multilayer sample to thermal stimulation can be viewed as a simple transition between the responses of infinitely thick samples of the individual constituent layers over the lifetime of the thermal diffusion process. The transition is represented compactly and uniquely by the logarithmic derivatives, based on the ratio of thermal effusivities of the layers. A spectrum of derivative responses relative to thermal effusivity ratios allows prediction of the time scale and detectability of the interface, and measurement of the thermophysical properties of one layer if the properties of the other are known. A similar transition between steady diffusion states occurs for flat bottom holes, based on the hole aspect ratio.
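The logarithmic-derivative behaviour described above can be reproduced with the classic 1D image-source solution for a flash-heated plate: the derivative sits at -1/2 while the sample still looks semi-infinite and rises toward 0 once the back wall is felt. The material constants below are arbitrary assumptions, not measured properties:

```python
import numpy as np

alpha = 1e-6          # thermal diffusivity, m^2/s (assumed)
d = 1e-3              # plate thickness, m (assumed)
t = np.logspace(-3, 2, 400)   # seconds; transit time d^2/alpha = 1 s

# 1D image-source (method-of-images) solution for the front surface
# of a flash-heated plate, up to a constant energy factor
series = sum(np.exp(-(k * d) ** 2 / (alpha * t)) for k in range(1, 30))
T = (1.0 + 2.0 * series) / np.sqrt(np.pi * alpha * t)

# TSR works on d(log T)/d(log t): -1/2 for the semi-infinite regime,
# approaching 0 as the plate equilibrates through its thickness
dlog = np.gradient(np.log(T), np.log(t))
print(dlog[0], dlog[-1])   # ≈ -0.5 early, ≈ 0 late
```

The time at which the derivative departs from -1/2 scales with d²/α, which is exactly how the derivative spectrum predicts the time scale and detectability of an interface.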
NASA Astrophysics Data System (ADS)
Wang, Kunpeng; Chai, Yi; Su, Chunxiao
2013-08-01
In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical signal recovery problem of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as a superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitudes, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulations and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Gunther, Jacob H.
2014-06-01
The Eyesafe Ladar Test-bed (ELT) is an experimental ladar system with the capability of digitizing return laser pulse waveforms at 2 GHz. These waveforms can then be exploited off-line in the laboratory to develop signal processing techniques for noise reduction, range resolution improvement, and range discrimination between two surfaces of similar range interrogated by a single laser pulse. This paper presents the results of experiments with new deconvolution algorithms aimed at improving the range discrimination of the ladar system. The sparsity of ladar returns is exploited to solve the deconvolution problem in two steps. The first step is to estimate a point target response using a database of measured calibration data. This basic target response is used to construct a dictionary of target responses with different delays/ranges. Using this dictionary, ladar returns from a wide variety of surface configurations can be synthesized by taking linear combinations. A sparse linear combination matches the physical reality that ladar returns consist of the overlapping of only a few pulses. The dictionary construction process is a pre-processing step that is performed only once. The deconvolution step is performed by minimizing the error between the measured ladar return and the dictionary model while constraining the coefficient vector to be sparse. Other constraints, such as the non-negativity of the coefficients, are also applied. The results of the proposed technique are presented in the paper and are shown to compare favorably with previously investigated deconvolution techniques.
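The two-step scheme above (a dictionary of delayed pulse responses, then a sparsity-constrained, non-negative fit) can be sketched with non-negative least squares standing in for the sparse solver; the pulse shape, signal length, and shift positions below are all hypothetical:

```python
import numpy as np
from scipy.optimize import nnls

n = 64
# Hypothetical point-target response (a narrow Gaussian pulse)
pulse = np.exp(-0.5 * ((np.arange(n) - 8) / 2.0) ** 2)

# Dictionary: one circularly shifted copy of the pulse per candidate range bin
D = np.column_stack([np.roll(pulse, k) for k in range(n)])

# Synthesize a return from two overlapping surfaces at shifts 10 and 14
y = 1.0 * D[:, 10] + 0.6 * D[:, 14]

# Non-negative least squares recovers a sparse, non-negative coefficient vector
coeffs, _ = nnls(D, y)
print(np.nonzero(coeffs > 1e-6)[0])  # recovered range bins
```

Here NNLS plays the role of the sparsity-plus-non-negativity constrained fit; the actual system would use measured calibration pulses and a dedicated sparse solver.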
Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.
Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping
2015-05-01
This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.
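The idea of reading a noise level off the high-frequency DCT coefficients of a low-variation patch can be sketched as follows; the patch size, frequency cutoff, and the median-absolute-deviation estimator are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
sigma = 5.0
# A flat (low-variation) patch corrupted by Gaussian noise of known std
patch = 100.0 + sigma * rng.standard_normal((16, 16))

c = dctn(patch, norm='ortho')
# On a low-variation patch, high-frequency DCT coefficients carry almost
# pure noise (the orthonormal DCT preserves white-noise variance)
hf = c[8:, 8:].ravel()
sigma_hat = np.median(np.abs(hf)) / 0.6745  # MAD estimate of the noise std
print(sigma_hat)
```

Repeating this over nonlocal-grouped patches at different intensities yields the scattered noise-level samples from which the full NLF is then recovered.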
Cosparsity-based Stagewise Matching Pursuit algorithm for reconstruction of the cosparse signals
NASA Astrophysics Data System (ADS)
Wu, Di; Zhao, Yuxin; Wang, Wenwu; Hao, Yanling
2015-12-01
The cosparse analysis model has been introduced as an interesting alternative to the standard sparse synthesis model. Given a set of corrupted measurements, finding a signal belonging to this model is known as analysis pursuit, which is an important problem in analysis model based sparse representation. Several pursuit methods have already been proposed, such as methods based on ℓ1-relaxation and greedy approaches based on the cosparsity of the signal. This paper presents a novel greedy-like algorithm, called Cosparsity-based Stagewise Matching Pursuit (CSMP), where the cosparsity of the target signal is estimated adaptively with a stagewise approach composed of forward and backward processes. In the forward process, the cosparsity is estimated and the signal is approximated, followed by the refinement of the cosparsity and the signal in the backward process. As a result, the target signal can be reconstructed without prior information on the cosparsity level. Experiments show that the performance of the proposed algorithm is comparable to those of ℓ1-relaxation and Analysis Subspace Pursuit (ASP)/Analysis Compressive Sampling Matching Pursuit (ACoSaMP) in the noiseless case, and better than that of Greedy Analysis Pursuit (GAP) in the noisy case.
2014-06-17
[Fragmentary entry] Describes an OMP-based ideal time-frequency representation for sparse signals, noting that it suffices to calculate the bilinear data products, as in the case of the Wigner distribution (WD). Index terms: instantaneous frequency, time-frequency distributions, signal sparsity, autocorrelation function, reconstruction.
Theiler, James P; Cao, Guangzhi; Bouman, Charles A
2009-01-01
Many detection algorithms in hyperspectral image analysis, from well-characterized gaseous and solid targets to deliberately uncharacterized anomalies and anomalous changes, depend on accurately estimating the covariance matrix of the background. In practice, the background covariance is estimated from samples in the image, and imprecision in this estimate can lead to a loss of detection power. In this paper, we describe the sparse matrix transform (SMT) and investigate its utility for estimating the covariance matrix from a limited number of samples. The SMT is formed by a product of pairwise coordinate (Givens) rotations, which can be efficiently estimated using greedy optimization. Experiments on hyperspectral data show that the estimate accurately reproduces even small eigenvalues and eigenvectors. In particular, we find that using the SMT to estimate the covariance matrix used in the adaptive matched filter leads to consistently higher signal-to-noise ratios.
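The SMT's building block, a pairwise Givens rotation chosen to decorrelate one coordinate pair, can be sketched on a toy covariance; the greedy selection of which pair to rotate is omitted here:

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n Givens rotation acting on coordinates (i, j)."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c; G[j, j] = c
    G[i, j] = -s; G[j, i] = s
    return G

# One greedy SMT step: rotate to zero out the off-diagonal entry of a pair
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
i, j = 0, 1
theta = 0.5 * np.arctan2(2 * S[i, j], S[i, i] - S[j, j])  # Jacobi angle
G = givens(3, i, j, theta)
S2 = G.T @ S @ G
print(np.round(S2, 6))  # the (0, 1) entry is driven to zero
```

The full SMT repeats this step greedily, accumulating a product of such rotations as a cheap, data-efficient approximation to the eigendecomposition.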
NASA Astrophysics Data System (ADS)
Moody, Daniela I.; Smith, David A.
2015-05-01
For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.
Sparse sampling image reconstruction in Lissajous trajectory beam-scanning multiphoton microscopy
NASA Astrophysics Data System (ADS)
Geiger, Andreas C.; Newman, Justin A.; Sreehari, Suhas; Sullivan, Shane Z.; Bouman, Charles A.; Simpson, Garth J.
2017-02-01
Propagation of action potentials arises on millisecond timescales, suggesting the need for advancement of methods capable of commensurate volume rendering for in vivo brain mapping. In practice, beam-scanning multiphoton microscopy is widely used to probe brain function, striking a balance between simplicity and penetration depth. However, conventional beam-scanning platforms generally do not provide access to full volume renderings at the speeds necessary to map propagation of action potentials. By combining a sparse sampling strategy based on Lissajous trajectory microscopy with temporal multiplexing for simultaneous imaging of multiple focal planes, whole volumes of cells are potentially accessible each millisecond.
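A Lissajous trajectory visits only a fraction of the pixel grid per period, which is what makes the sampling sparse and fast; a toy sketch (the scan frequencies, grid size, and sample count are arbitrary choices):

```python
import numpy as np

# Hypothetical scan frequencies; near-coprime ratios spread coverage evenly
fx, fy = 7, 11
t = np.linspace(0, 1, 20000, endpoint=False)
x = 0.5 * (1 + np.sin(2 * np.pi * fx * t))            # mapped to [0, 1]
y = 0.5 * (1 + np.sin(2 * np.pi * fy * t + np.pi / 2))

# Which pixels of a 64x64 grid does one period of the trajectory visit?
ix = np.minimum((x * 64).astype(int), 63)
iy = np.minimum((y * 64).astype(int), 63)
visited = np.zeros((64, 64), dtype=bool)
visited[iy, ix] = True
print(visited.mean())  # fraction of pixels sampled per period
```

The unvisited pixels are what the sparse-sampling reconstruction step must fill in, e.g. by model-based inpainting.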
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-01-01
This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040
Hou, Gary Y.; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E.
2015-01-01
Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a recently developed High-Intensity Focused Ultrasound (HIFU) treatment monitoring method. HMIFU utilizes an amplitude-modulated (f_AM = 25 Hz) HIFU beam to induce a localized focal oscillatory motion, which is simultaneously estimated and imaged by a confocally aligned imaging transducer. The feasibility of HMIFU has been previously shown in silico, in vitro, and in vivo in 1-D or 2-D monitoring of HIFU treatment. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operations with real-time feedback. In this study, the algorithm was implemented on a fully integrated, clinically relevant HMIFU system composed of a 93-element HIFU transducer (f_center = 4.5 MHz) and a coaxially aligned 64-element phased array (f_center = 2.5 MHz) for displacement excitation and motion estimation, respectively. A single divergent transmit beam was used, while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphic user interface. The present work implemented sparse-matrix beamforming on a fully integrated, clinically relevant system, which can stream displacement images at up to 15 Hz using GPU-based processing, a 100-fold increase in streaming rate compared to conventional CPU-based beamforming and reconstruction. Among acoustic-radiation-force-based HIFU imaging techniques, the achieved feedback rate is currently the fastest, and the approach is the only one that does not require interrupting the HIFU treatment. Results in phantom experiments showed reproducible displacement imaging, and monitoring of twenty-two in vitro HIFU treatments using the new 2D system showed a
Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong
2012-01-01
Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, a piecewise-smooth X-ray computed tomography (CT) image can be reconstructed from sparse-view projection data without introducing noticeable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several noticeable gains, in terms of noise-resolution tradeoff plots and full width at half maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
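The adaptive weights described above, an exponential function of the local intensity gradient, can be sketched as follows; the weight form and the `delta` parameter are illustrative, not the paper's exact parameterization:

```python
import numpy as np

def awtv_weights(img, delta=10.0):
    """Edge-preserving weights for adaptive-weighted TV: w = exp(-(grad/delta)^2).

    Strong edges get small weights, so their TV penalty is reduced and they
    are smoothed less than flat regions. `delta` controls edge sensitivity
    (a hypothetical choice here, not the authors' value).
    """
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    return np.exp(-(gx / delta) ** 2), np.exp(-(gy / delta) ** 2)

# Flat region -> weight 1; a sharp 100-unit step -> weight near 0
img = np.zeros((4, 8))
img[:, 4:] = 100.0
wx, wy = awtv_weights(img)
print(wx[0, 4], wx[0, 2])
```

In the AwTV objective these weights multiply the per-voxel gradient magnitudes, down-weighting the penalty exactly where the image has genuine edges.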
NASA Astrophysics Data System (ADS)
Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei
2015-01-01
Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch-wise sparsity on a dictionary provides the regularization for our effort, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary could be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results on Monte Carlo simulations and real data.
NASA Astrophysics Data System (ADS)
Wiaux, Y.; Puy, G.; Vandergheynst, P.
2010-03-01
We propose an algorithm for the reconstruction of the signal induced by cosmic strings in the cosmic microwave background (CMB), from radio-interferometric data at arcminute resolution. Radio interferometry provides incomplete and noisy Fourier measurements of the string signal, which exhibits sparse or compressible magnitude of the gradient due to the Kaiser-Stebbins effect. In this context, the versatile framework of compressed sensing naturally applies for solving the corresponding inverse problem. Our algorithm notably takes advantage of a model of the prior statistical distribution of the signal fitted on the basis of realistic simulations. Enhanced performance relative to the standard CLEAN algorithm is demonstrated by simulated observations under noise conditions including primary and secondary CMB anisotropies.
NASA Astrophysics Data System (ADS)
Peng, Fuqiang; Yu, Dejie; Luo, Jiesi
2011-02-01
Based on the chirplet path pursuit and the sparse signal decomposition method, a new sparse signal decomposition method based on multi-scale chirplets is proposed and applied to the decomposition of vibration signals from gearboxes in fault diagnosis. An over-complete dictionary with multi-scale chirplets as its atoms is constructed using the method. Because of its multi-scale character, this method is superior to the traditional sparse signal decomposition method, wherein only a single scale is adopted, and is more applicable to the decomposition of non-stationary multi-component signals whose frequencies are time-varying. When there are faults in a gearbox, the vibration signals collected are usually AM-FM signals with multiple components whose frequencies vary with the rotational speed of the shaft. The meshing frequency and modulating frequency, which vary with time, can be derived by the proposed method and can be used in gearbox fault diagnosis under time-varying shaft-rotation speed conditions, where traditional signal processing methods often fail. Both simulations and experiments validate the effectiveness of the proposed method.
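A multi-scale chirplet dictionary collects Gaussian-windowed linear chirps at several window scales; a minimal sketch with hypothetical parameters (sampling rate, center time, chirp rate, and scales are all illustrative):

```python
import numpy as np

def chirplet(n, fs, t0, f0, c, scale):
    """Unit-norm Gaussian-windowed linear chirp atom.

    t0: center time (s), f0: start frequency (Hz), c: chirp rate (Hz/s),
    scale: window width (s). All values here are illustrative.
    """
    t = np.arange(n) / fs
    window = np.exp(-0.5 * ((t - t0) / scale) ** 2)
    atom = window * np.cos(2 * np.pi * (f0 * t + 0.5 * c * t ** 2))
    return atom / np.linalg.norm(atom)

# A tiny multi-scale dictionary: the same chirp at three window scales
fs, n = 1000, 1000
D = np.column_stack([chirplet(n, fs, 0.5, 50.0, 40.0, s)
                     for s in (0.05, 0.1, 0.2)])
print(D.shape, np.round(np.linalg.norm(D, axis=0), 6))
```

A full dictionary would sweep center times, frequencies, and chirp rates as well; pursuit over such atoms is what tracks the time-varying meshing and modulating frequencies.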
Real time reconstruction of quasiperiodic multi parameter physiological signals
NASA Astrophysics Data System (ADS)
Ganeshapillai, Gartheeban; Guttag, John
2012-12-01
A modern intensive care unit (ICU) has automated analysis systems that depend on continuous uninterrupted real time monitoring of physiological signals such as electrocardiogram (ECG), arterial blood pressure (ABP), and photo-plethysmogram (PPG). These signals are often corrupted by noise, artifacts, and missing data. We present an automated learning framework for real time reconstruction of corrupted multi-parameter nonstationary quasiperiodic physiological signals. The key idea is to learn a patient-specific model of the relationships between signals, and then reconstruct corrupted segments using the information available in correlated signals. We evaluated our method on MIT-BIH arrhythmia data, a two-channel ECG dataset with many clinically significant arrhythmias, and on the CinC challenge 2010 data, a multi-parameter dataset containing ECG, ABP, and PPG. For each, we evaluated both the residual distance between the original signals and the reconstructed signals, and the performance of a heartbeat classifier on a reconstructed ECG signal. At an SNR of 0 dB, the average residual distance on the CinC data was roughly 3% of the energy in the signal, and on the arrhythmia database it was roughly 16%. The difference is attributable to the large amount of diversity in the arrhythmia database. Remarkably, despite the relatively high residual difference, the classification accuracy on the arrhythmia database was still 98%, indicating that our method restored the physiologically important aspects of the signal.
Wang, Tonghe; Zhu, Lei
2016-09-21
Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme which requires one full scan and a second sparse-view scan, for potential reductions in the imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on the second CT image since the DECT scans are performed on the same object. The second CT image from reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered version under the similarity matrix, subject to a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix, we refer to the algorithm as structure-preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR) and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise STD by an order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves on the reconstruction accuracy of TVR for a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR at 50- and 20-view scans on spatial resolution, with a higher frequency at the 10% modulation transfer function value by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise STD with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an
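The similarity matrix is built from bilateral-filter weights, which combine spatial and intensity closeness; a sketch for a single pixel pair (the two bandwidths are hypothetical, not the paper's values):

```python
import numpy as np

def bilateral_weight(img, p, q, sigma_s=2.0, sigma_i=20.0):
    """Similarity between pixels p and q: spatial closeness x intensity closeness.

    sigma_s (pixels) and sigma_i (intensity units) are illustrative bandwidths.
    """
    (pi, pj), (qi, qj) = p, q
    d2 = (pi - qi) ** 2 + (pj - qj) ** 2
    r2 = (img[p] - img[q]) ** 2
    return np.exp(-d2 / (2 * sigma_s ** 2)) * np.exp(-r2 / (2 * sigma_i ** 2))

img = np.zeros((8, 8))
img[:, 4:] = 100.0                              # two flat regions, sharp edge
same = bilateral_weight(img, (2, 1), (2, 2))    # neighbors, same intensity
across = bilateral_weight(img, (2, 3), (2, 4))  # neighbors across the edge
print(same, across)
```

Because the weight collapses across edges, filtering the second image with these first-scan weights smooths flat regions without blurring the shared anatomical structure.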
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
Localization of Transient Signals in Geodetic Data Using Sparse Estimation Techniques
NASA Astrophysics Data System (ADS)
Riel, B. V.; Simons, M.
2013-12-01
Transients are a class of deformation signals on the Earth's surface that can be described as non-periodic, non-secular accumulation of strain in the crust. Transients can be the surface manifestations of slow slip events, underlying magmatic activity, hydrologic activity, etc. Successful detection of such signals with unknown magnitudes and durations requires precise measurements of surface displacements over sufficiently large regions. Large-scale continuously operating GPS (cGPS) networks provide an excellent data source for detecting transient motions of various spatial and temporal scales. However, data noise and the presence of confounding signals from seasonal processes, long-term plate loading, co-seismic offsets, etc. complicate detection when the data volume is large and manual inspection of individual time series is infeasible. We present a method for automatically detecting spatially and temporally coherent transient signals recorded within large geodetic datasets. Our temporal parameterization scheme constructs a highly overcomplete, non-orthogonal dictionary of candidate displacement functions that resemble transient signals of varying start times and durations. For a given time series, the transient detection problem is cast as a linear least squares estimation procedure with a sparsity-inducing regularization term to limit the total number of dictionary elements needed to reconstruct the signal while still providing a good fit to the data. The sparsity-inducing regularization enhances the interpretability of the reconstructed time series by localizing the dominant timescales and onset times of the temporally correlated signals. We distinguish spatially coherent transient signals from local motion or noise by incorporating a network re-weighting approach that enhances non-zero dictionary elements common to multiple stations and penalizes elements localized to a single station. We focus application of our method to several large-scale cGPS networks
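The sparse estimation step, a least-squares fit over an overcomplete dictionary of candidate transients with an l1 penalty, can be sketched with a basic ISTA loop; the sigmoid onset basis and all parameters below are illustrative stand-ins for the authors' dictionary:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)

def transient(t0, tau):
    """Smooth onset at time t0 with duration tau (illustrative basis choice)."""
    g = 1.0 / (1.0 + np.exp(-(t - t0) / (tau / 10)))
    return g / np.linalg.norm(g)

# Overcomplete dictionary over a grid of onset times and durations
atoms = [transient(t0, tau) for t0 in np.linspace(0.1, 0.9, 17)
         for tau in (0.05, 0.1, 0.2)]
D = np.column_stack(atoms)

# Synthetic time series: one transient (atom 10) plus white noise
y = 3.0 * D[:, 10] + 0.05 * rng.standard_normal(t.size)

# ISTA for the sparse least squares  min 0.5||y - Dx||^2 + lam ||x||_1
L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(D.shape[1])
lam = 0.1
for _ in range(500):
    g = D.T @ (D @ x - y)
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print(np.count_nonzero(x), int(np.argmax(np.abs(x))))
```

The l1 term keeps only a few dictionary elements active, which is what localizes the dominant onset times and timescales; the network re-weighting described above would then couple these penalties across stations.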
Hou, Gary Y; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E
2014-11-01
Harmonic motion imaging for focused ultrasound (HMIFU) utilizes an amplitude-modulated HIFU beam to induce a localized focal oscillatory motion that is simultaneously estimated and imaged. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operations with real-time feedback. In this study, the algorithm was implemented on a fully integrated, clinically relevant HMIFU system. A single divergent transmit beam was used, while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphic user interface with frame rates up to 15 Hz, a 100-fold increase compared to conventional CPU-based processing. The real-time feedback does not require interrupting the HIFU treatment. Results in phantom experiments showed reproducible HMI images, and monitoring of 22 in vitro HIFU treatments using the new 2-D system showed a consistent average focal displacement decrease of 46.7±14.6% during lesion formation. Complementary focal temperature monitoring also indicated average rates of displacement increase and decrease with focal temperature of 0.84±1.15%/°C and 2.03±0.93%/°C, respectively. These results reinforce the capability of HMIFU to estimate and monitor stiffness-related changes in real time. Current ongoing studies include clinical translation of the presented system for monitoring of HIFU treatment for breast and pancreatic tumor applications.
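Delay-and-sum beamforming reduces to a single sparse matrix-vector product once the per-pixel, per-element delays are precomputed; a toy sketch with hypothetical geometry (nearest-sample delays, uniform weighting, no apodization):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy setup: 8 elements, 64 time samples each, 16 imaging depths
n_elem, n_samp, n_pix = 8, 64, 16
fs, c = 20e6, 1540.0                               # sample rate (Hz), speed of sound (m/s)
depths = (np.arange(n_pix) + 4) * c / (2 * fs)     # hypothetical depth grid

rows, cols, vals = [], [], []
for p, d in enumerate(depths):
    for e in range(n_elem):
        s = int(round(2 * d / c * fs))             # nearest-sample round-trip delay
        if s < n_samp:
            rows.append(p)
            cols.append(e * n_samp + s)
            vals.append(1.0 / n_elem)

# Each row of B picks and averages one delayed sample per element, so
# beamforming the whole frame is one sparse matrix-vector product.
B = csr_matrix((vals, (rows, cols)), shape=(n_pix, n_elem * n_samp))
rf = np.random.default_rng(2).standard_normal(n_elem * n_samp)
image = B @ rf
print(B.nnz, image.shape)
```

Precomputing B once and applying it on a GPU is what turns per-frame beamforming into a fixed, highly parallel operation, enabling the real-time feedback rates described above.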
Parto Dezfouli, Mohammad Ali; Parto Dezfouli, Mohsen; Ahmadian, Alireza; Frangi, Alejandro F; Esmaeili Rad, Melika; Saligheh Rad, Hamidreza
2017-02-01
MRS is an analytical approach used for both quantitative and qualitative analysis of human body metabolites. The accurate and robust quantification capability of proton MRS ((1)H-MRS) enables the accurate estimation of living tissue metabolite concentrations. However, such methods can be efficiently employed for quantification of metabolite concentrations only if the overlapping nature of metabolites, the existing static field inhomogeneity and the low signal-to-noise ratio (SNR) are taken into consideration. Representation of (1)H-MRS signals in the time-frequency domain enables us to handle the baseline and noise better. This is possible because the MRS signal of each metabolite is sparsely represented, with only a few peaks, in the frequency domain, but still carries specific time-domain features such as a distinct decay constant associated with the T2 relaxation rate. The baseline, however, has a smooth behavior in the frequency domain. In this study, we proposed a quantification method using continuous wavelet transformation of (1)H-MRS signals in combination with sparse representation of features in the time-frequency domain. Estimation of the sparse representations of MR spectra is performed according to dictionaries constructed from metabolite profiles. Results on simulated and phantom data show that the proposed method is able to quantify the concentration of metabolites in (1)H-MRS signals with high accuracy and robustness. This is achieved for both low SNR (5 dB) and low signal-to-baseline ratio (-5 dB) regimes.
Sparse sampling and reconstruction for electron and scanning probe microscope imaging
Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.
2015-07-28
Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.
NASA Astrophysics Data System (ADS)
Moody, Daniela I.; Smith, David A.
2014-05-01
Ongoing research at Los Alamos National Laboratory studies the Earth's radio frequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Such impulsive events are dispersed through the ionosphere and appear as broadband nonlinear chirps at a receiver on-orbit. They occur in the presence of additive noise and structured clutter, making their classification challenging. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database. Application of modern pattern recognition techniques to this database may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. Conventional feature extraction techniques using analytical dictionaries, such as a short-time Fourier basis or wavelets, are not comprehensively suitable for analyzing the broadband RF pulses under consideration here. We explore an alternative approach based on non-analytical dictionaries learned directly from data, and extend two dictionary learning algorithms, K-SVD and Hebbian, for use with satellite RF data. Both algorithms allow us to learn features without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features, and discuss their performance in terms of event classification. We also use principal component analysis to analyze and compare the respective learned dictionary spaces to the real data space.
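The pursuit step described above, generating sparse features by greedily matching a signal against dictionary atoms, can be sketched in a few lines. This is a generic matching-pursuit illustration with a random unit-norm dictionary standing in for a learned K-SVD or Hebbian dictionary:

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy pursuit over a dictionary D (unit-norm columns): pick the
    best-correlated atom, deflate the residual, repeat. The resulting
    coefficient vector serves as a sparse feature representation."""
    r = x.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ r                    # correlate residual with every atom
        k = int(np.argmax(np.abs(corr)))  # best-matching atom
        coef[k] += corr[k]
        r -= corr[k] * D[:, k]            # deflate the residual
    return coef

# random unit-norm dictionary standing in for a learned one;
# the test signal is a noisy combination of atoms 5 and 40
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = 3.0 * D[:, 5] - 2.0 * D[:, 40] + 0.01 * rng.standard_normal(64)
coef = matching_pursuit(x, D, n_atoms=8)
```

In a classification setting such as the one above, the pattern of large coefficients (which atoms fire, and how strongly) is what gets passed to the classifier.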
Light field reconstruction robust to signal dependent noise
NASA Astrophysics Data System (ADS)
Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai
2014-11-01
Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise remains a significant issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capture and prove the concept. The effectiveness of the method is validated with experiments on real captured data.
Ying, Jinfa; Delaglio, Frank; Torchia, Dennis A; Bax, Ad
2016-11-19
Implementation of a new algorithm, SMILE, is described for reconstruction of non-uniformly sampled two-, three-, and four-dimensional NMR data. The algorithm takes advantage of the known phases of the NMR spectrum and the exponential decay of the underlying time-domain signals. The method is very robust with respect to the chosen sampling protocol and, in its default mode, also extends the truncated time-domain signals by a modest amount. SMILE can likewise be used to extend conventional uniformly sampled data, as an effective multidimensional alternative to linear prediction. The program is provided as a plug-in to the widely used NMRPipe software suite and can be used with default parameters for mainstream applications, or with user control over the iterative process to further improve reconstruction quality and lower the demand on computational resources. For large data sets, the method is robust and demonstrated for sparsities down to ca. 1% and final all-real spectral sizes as large as 300 Gb. Comparison between fully sampled, conventionally processed spectra and randomly selected NUS subsets of these data shows that the reconstruction quality approaches the theoretical limit in terms of peak position fidelity and intensity. SMILE essentially removes the noise-like appearance associated with the point-spread function of signals that are more than about five-fold above the noise level, but impacts the actual thermal noise in the NMR spectra only minimally. Therefore, the appearance and interpretation of SMILE-reconstructed spectra are very similar to those of fully sampled spectra generated by Fourier transformation.
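SMILE itself is considerably more elaborate, but the underlying idea of reconstructing non-uniformly sampled data by alternating between sparsity enforcement in the spectrum and data consistency in the time domain can be sketched with a generic iterative soft-thresholding loop (all parameters here are illustrative, and this is not the SMILE algorithm):

```python
import numpy as np

def ist_reconstruct(fid_sampled, mask, n_iter=200, thresh0=None):
    """Generic iterative soft-thresholding reconstruction of a
    non-uniformly sampled 1-D FID: the spectrum is assumed sparse and
    the sampled time-domain points are re-imposed every iteration."""
    x = np.where(mask, fid_sampled, 0.0 + 0.0j)
    t = thresh0 if thresh0 is not None else 0.5 * np.max(np.abs(np.fft.fft(x)))
    for _ in range(n_iter):
        spec = np.fft.fft(x)
        mag = np.abs(spec)
        shrink = np.maximum(mag - t, 0.0)          # soft-threshold magnitudes
        spec = spec * shrink / np.maximum(mag, 1e-12)
        x = np.fft.ifft(spec)
        x[mask] = fid_sampled[mask]                # data consistency
        t *= 0.97                                  # anneal the threshold
    return x

# synthetic FID: two decaying complex exponentials, 25% random sampling
n = 256
tt = np.arange(n)
fid_true = (np.exp(2j*np.pi*0.11*tt)
            + 0.6*np.exp(2j*np.pi*0.31*tt)) * np.exp(-tt/120)
rng = np.random.default_rng(1)
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, size=n//4, replace=False)] = True
mask[0] = True
rec = ist_reconstruct(np.where(mask, fid_true, 0), mask)
```

The sampled points are reproduced exactly by construction, and the non-sampled points are interpolated under the sparse-spectrum assumption.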
Signal enhanced holographic fluorescence microscopy with guide-star reconstruction
Jang, Changwon; Clark, David C.; Kim, Jonghyun; Lee, Byoungho; Kim, Myung K.
2016-01-01
We propose a signal-enhanced guide-star reconstruction method for holographic fluorescence microscopy. In the late 2000s, incoherent digital holography began to be vigorously studied by several groups to overcome the limitations of conventional digital holography. The basic concept of incoherent digital holography is to acquire a complex hologram from incoherent light by utilizing the temporal coherency of a spatially incoherent light source. The advent of incoherent digital holography opened the possibility of holographic fluorescence microscopy (HFM), which was difficult to achieve with conventional digital holography. However, an important issue in HFM is the low and noisy signal, which slows the system and degrades imaging quality. When guide-star reconstruction is adopted, image reconstruction gives an improved result compared with the conventional propagation reconstruction method. The guide-star method yields a higher imaging signal-to-noise ratio because the acquired complex point spread function provides optimal, system-adaptive information and can restore signal buried in noise more efficiently. We present theoretical explanation and simulation as well as experimental results. PMID:27446653
Reconstruction of ocean circulation from sparse data using the adjoint method: LGM and the present
NASA Astrophysics Data System (ADS)
Kurahashi-Nakamura, T.; Losch, M. J.; Paul, A.; Mulitza, S.; Schulz, M.
2010-12-01
tailored to be used with a source-to-source compiler to generate exact and efficient adjoint model code. To mimic real geological data, we carried out verification experiments with artificial data (temperature and salinity) sampled from a simulation with the MITgcm, and we examined how well the original model ocean was reconstructed through the adjoint method with our model. Through these ‘identical twin experiments’, we evaluated the performance and usefulness of the model to obtain guidelines for future experiments with real data.
Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming
2015-01-01
A relatively underexplored question in fMRI is whether there are intrinsic differences in terms of signal composition patterns that can effectively characterize and differentiate task-based or resting state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were composed into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI/rsfMRI data across multiple subjects were composed into another big data-matrix, which was further sparsely represented by a cross-subjects common dictionary and a weight matrix. This framework has been applied on the recently publicly released Human Connectome Project (HCP) fMRI data and experimental results revealed that there are distinctive and descriptive atoms in the cross-subjects common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted, e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big-data of tfMRI and rsfMRI signals across all subjects in HCP Q1 release. PMID:25732072
NASA Astrophysics Data System (ADS)
Wu, Chunyan; Liu, Jian; Peng, Fuqiang; Yu, Dejie; Li, Rong
2013-07-01
When used for separating multi-component non-stationary signals, the adaptive time-varying filter (ATF) based on multi-scale chirplet sparse signal decomposition (MCSSD) generates phase shift and signal distortion. To overcome this drawback, a zero-phase filter is introduced into the aforementioned filter, and a fault diagnosis method for speed-changing gearboxes is proposed. First, the gear meshing frequency of each gearbox is estimated by chirplet path pursuit. Then, according to the estimated gear meshing frequencies, an adaptive zero-phase time-varying filter (AZPTF) is designed to filter the original signal. Finally, the basis for fault diagnosis is acquired by envelope order analysis of the filtered signal. A signal consisting of two time-varying amplitude-modulation and frequency-modulation (AM-FM) signals is analyzed by the ATF and the AZPTF based on MCSSD, respectively. The simulation results show that the variances between the original signals and the signals filtered by the AZPTF based on MCSSD are 13.67 and 41.14, far less than the variances (323.45 and 482.86) between the original signals and the signals filtered by the ATF based on MCSSD. The experimental results on the vibration signals of gearboxes indicate that the vibration signals of two speed-changing gearboxes installed on one foundation bed can be separated effectively by the AZPTF. Based on the demodulation information of the vibration signal of each gearbox, fault diagnosis can be implemented. Both simulation and experimental examples prove that the proposed filter can extract a mono-component time-varying AM-FM signal from a multi-component time-varying AM-FM signal without distortion.
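The zero-phase idea at the heart of the AZPTF can be illustrated with standard forward-backward filtering: running a filter once forward and once backward cancels its phase response. A minimal SciPy comparison follows, using a fixed band rather than the paper's adaptive time-varying filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt, lfilter

# 5 Hz tone sampled at 1 kHz, low-pass filtered with a 10 Hz cutoff
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 5 * t)
b, a = butter(4, 10 / (fs / 2))

y_causal = lfilter(b, a, x)     # ordinary filtering: noticeable phase lag
y_zero = filtfilt(b, a, x)      # forward-backward filtering: zero phase

# compare against the clean tone in the steady-state interior
seg = slice(500, 1500)
err_causal = np.max(np.abs(y_causal[seg] - x[seg]))
err_zero = np.max(np.abs(y_zero[seg] - x[seg]))
```

The causal output lags the tone by the filter's phase delay, while the zero-phase output tracks it almost exactly; this is exactly the distortion the AZPTF is designed to remove.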
Reconstruction of Multidimensional Signals from Multiple Level Threshold Crossings.
1988-01-01
In order to circumvent this difficulty, we propose a second approach, which is implicit and uses algebraic geometric concepts to find conditions under which a signal is almost always reconstructible from its multiple-level threshold crossings.
Reconstruction of Thermographic Signals to Map Perforator Vessels in Humans.
Liu, Wei-Min; Maivelett, Jordan; Kato, Gregory J; Taylor, James G; Yang, Wen-Chin; Liu, Yun-Chung; Yang, You-Gang; Gorbach, Alexander M
2012-01-01
Thermal representations on the surface of a human forearm of underlying perforator vessels have previously been mapped via recovery-enhanced infrared imaging, which is performed as skin blood flow recovers to baseline levels following cooling of the forearm. We noted that the same vessels could also be observed during reactive hyperaemia tests after complete 5-min occlusion of the forearm by an inflatable cuff. However, not all subjects showed vessels with acceptable contrast. Therefore, we applied a thermographic signal reconstruction algorithm to reactive hyperaemia testing, which substantially enhanced signal-to-noise ratios between perforator vessels and their surroundings, thereby enabling their mapping with higher accuracy and a shorter occlusion period.
Two-dimensional signal reconstruction: The correlation sampling method
Roman, H. E.
2007-12-15
An accurate approach for reconstructing a time-dependent two-dimensional signal from non-synchronized time series recorded at points located on a grid is discussed. The method, denoted as correlation sampling, improves the standard conditional sampling approach commonly employed in the study of turbulence in magnetoplasma devices. Its implementation is illustrated in the case of an artificial time-dependent signal constructed using a fractal algorithm that simulates a fluctuating surface. A statistical method is also discussed for distinguishing coherent (i.e., collective) from purely random (noisy) behavior for such two-dimensional fluctuating phenomena.
Network component analysis: reconstruction of regulatory signals in biological systems.
Liao, James C; Boscolo, Riccardo; Yang, Young-Lyeol; Tran, Linh My; Sabatti, Chiara; Roychowdhury, Vwani P
2003-12-23
High-dimensional data sets generated by high-throughput technologies, such as DNA microarray, are often the outputs of complex networked systems driven by hidden regulatory signals. Traditional statistical methods for computing low-dimensional or hidden representations of these data sets, such as principal component analysis and independent component analysis, ignore the underlying network structures and provide decompositions based purely on a priori statistical constraints on the computed component signals. The resulting decomposition thus provides a phenomenological model for the observed data and does not necessarily contain physically or biologically meaningful signals. Here, we develop a method, called network component analysis, for uncovering hidden regulatory signals from outputs of networked systems, when only a partial knowledge of the underlying network topology is available. The a priori network structure information is first tested for compliance with a set of identifiability criteria. For networks that satisfy the criteria, the signals from the regulatory nodes and their strengths of influence on each output node can be faithfully reconstructed. This method is first validated experimentally by using the absorbance spectra of a network of various hemoglobin species. The method is then applied to microarray data generated from the yeast Saccharomyces cerevisiae, and the activities of various transcription factors during the cell cycle are reconstructed by using recently discovered connectivity information for the underlying transcriptional regulatory networks.
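A minimal sketch of the NCA decomposition is an alternating least-squares factorization X ≈ AS in which the known zero pattern of the connectivity matrix A is held fixed. The identifiability checks described above are omitted here, and all dimensions and data are illustrative:

```python
import numpy as np

def nca(X, pattern, n_iter=50, seed=0):
    """Alternating least-squares sketch of network component analysis:
    factor X ~ A @ S with the zero pattern of the connectivity matrix A
    fixed a priori. Identifiability criteria are assumed to hold and are
    not checked here (the paper tests them explicitly)."""
    rng = np.random.default_rng(seed)
    A = pattern * rng.standard_normal(pattern.shape)
    S = np.zeros((pattern.shape[1], X.shape[1]))
    for _ in range(n_iter):
        S, *_ = np.linalg.lstsq(A, X, rcond=None)   # update regulatory signals
        for i in range(X.shape[0]):                 # update A row by row,
            sup = pattern[i]                        # keeping zeros fixed
            if sup.any():
                A[i, sup], *_ = np.linalg.lstsq(S[sup].T, X[i], rcond=None)
    return A, S

# toy system: 8 genes, 3 regulators, known sparse topology
rng = np.random.default_rng(1)
pattern = rng.random((8, 3)) < 0.5
pattern[np.arange(3), np.arange(3)] = True   # each regulator controls >= 1 gene
A_true = pattern * rng.standard_normal((8, 3))
S_true = rng.standard_normal((3, 20))
X = A_true @ S_true
A_est, S_est = nca(X, pattern)
```

Unlike PCA or ICA, the decomposition is constrained by the network topology rather than by statistical assumptions on the components.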
Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua
2014-01-01
To realize low-dose imaging in X-ray computed tomography (CT) examination, lowering milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data; then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge-detail preservation. PMID:24977611
Sparse Sensing of Aerodynamic Loads on Insect Wings
NASA Astrophysics Data System (ADS)
Manohar, Krithika; Brunton, Steven; Kutz, J. Nathan
2015-11-01
We investigate how insects use sparse sensors on their wings to detect aerodynamic loading and wing deformation, using a coupled fluid-structure model given periodically flapping input motion. Recent observations suggest that insects collect sensor information about their wing deformation to inform control actions for maneuvering and rejecting gust disturbances. Given a small number of point measurements of the chordwise aerodynamic loads from the sparse sensors, we reconstruct the entire chordwise loading using sparse sensing - a signal processing technique that reconstructs a signal from a small number of measurements using l1-norm minimization of sparse modal coefficients in some basis. We compare reconstructions from sensors randomly sampled from probability distributions biased toward different regions along the wing chord. In this manner, we determine the preferred regions along the chord for sensor placement and for estimating chordwise loads to inform control decisions in flight.
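A toy version of the reconstruction step, recovering a full chordwise profile from a few point samples via l1 minimization of modal coefficients, can be written with a basic iterative soft-thresholding solver. The DCT basis, sampling pattern, and parameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.fft import dct, idct

def ista_recover(y, sample_idx, n, lam=0.005, n_iter=1000):
    """Recover a length-n signal from point samples y at sample_idx by
    l1-minimization of its DCT coefficients (plain ISTA; the orthonormal
    DCT makes a unit gradient step size valid)."""
    c = np.zeros(n)
    for _ in range(n_iter):
        x = idct(c, norm='ortho')
        r = np.zeros(n)
        r[sample_idx] = x[sample_idx] - y          # residual at sensors only
        c = c - dct(r, norm='ortho')               # gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)  # soft threshold
    return idct(c, norm='ortho')

# chordwise load built from 3 low-order DCT modes, sampled at 20 of 100 points
n = 100
c_true = np.zeros(n)
c_true[[0, 2, 5]] = [2.0, 1.0, 0.5]
load = idct(c_true, norm='ortho')
rng = np.random.default_rng(2)
idx = np.sort(rng.choice(n, size=20, replace=False))
rec = ista_recover(load[idx], idx, n)
```

Swapping in different sampling distributions for `idx` is the experiment the abstract describes: biased placements change how well the sparse modal coefficients are recovered.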
Pei, Yinqing; Xu, Kun; Li, Jianqiang; Zhang, Anxu; Dai, Yitang; Ji, Yuefeng; Lin, Jintong
2013-02-11
A novel multi-band digital predistortion (DPD) technique is proposed to linearize a subcarrier-multiplexed radio-over-fiber (SCM-RoF) system transmitting a sparse multi-band RF signal with large blank spectra between the constituent RF bands. DPD is performed on the baseband signal of each individual RF band before up-conversion and RF combination. By disregarding the blank spectra, the processing bandwidth of the proposed DPD technique is greatly reduced; it is determined only by the baseband signal bandwidth of each individual RF band rather than by the entire bandwidth of the combined multi-band RF signal. An experimental demonstration is performed in a directly modulated SCM-RoF system transmitting two 64-QAM OFDM signals on the 2.4 GHz and 3.6 GHz bands. Results show that the adjacent channel power (ACP) is suppressed by 15 dB, leading to significant improvement in the EVM performance of the signals on both bands.
Adaptive multimode signal reconstruction from time–frequency representations
Meignen, Sylvain; Oberlin, Thomas; Depalle, Philippe; Flandrin, Patrick
2016-01-01
This paper discusses methods for the adaptive reconstruction of the modes of multicomponent AM–FM signals by their time–frequency (TF) representation derived from their short-time Fourier transform (STFT). The STFT of an AM–FM component or mode spreads the information relative to that mode in the TF plane around curves commonly called ridges. An alternative view is to consider a mode as a particular TF domain termed a basin of attraction. Here we discuss two new approaches to mode reconstruction. The first determines the ridge associated with a mode by considering the location where the direction of the reassignment vector sharply changes, the technique used to determine the basin of attraction being directly derived from that used for ridge extraction. A second uses the fact that the STFT of a signal is fully characterized by its zeros (and then the particular distribution of these zeros for Gaussian noise) to deduce an algorithm to compute the mode domains. For both techniques, mode reconstruction is then carried out by simply integrating the information inside these basins of attraction or domains. PMID:26953184
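A rudimentary version of ridge extraction, peak-picking the STFT magnitude frame by frame, can be sketched as follows; the reassignment- and zero-based techniques discussed above are considerably more refined than this, and the chirp parameters here are illustrative:

```python
import numpy as np
from scipy.signal import stft

# linear chirp: instantaneous frequency 50 + 50*t (Hz)
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 25 * t ** 2))

f, tau, Z = stft(x, fs=fs, nperseg=256)
ridge = f[np.argmax(np.abs(Z), axis=0)]      # dominant frequency per frame

# compare the ridge with the true instantaneous frequency away from edges
inner = (tau > 0.3) & (tau < 1.7)
err = np.max(np.abs(ridge[inner] - (50 + 50 * tau[inner])))
```

Mode reconstruction then amounts to integrating the STFT over a neighborhood of the ridge (or over the basin of attraction) rather than just reading off its location.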
Wang, T; Zhu, L
2015-06-15
Purpose: Conventional dual-energy CT (DECT) reconstructs CT and basis-material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan, utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarity of each pixel with other pixels is calculated by an exponential function of pixel value differences. We assume that the material similarities of pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimation, which is the average of other pixel values weighted by their similarities. The proposed algorithm, referred to as structure-preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebral structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views and therefore the data acquisition in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can
Image reconstruction method for non-synchronous THz signals
NASA Astrophysics Data System (ADS)
Oda, Naoki; Okubo, Syuichi; Sudou, Takayuki; Isoyama, Goro; Kato, Ryukou; Irizawa, Akinori; Kawase, Keigo
2014-05-01
An image reconstruction method for non-synchronous THz signals was developed for the combination of the THz free-electron laser (THz-FEL) developed at Osaka University with a THz imager. The method exploits a slight time difference between the repetition period of THz macro-pulses from the THz-FEL and the frame period of the THz imager, so that an image can be reconstructed from a predetermined number of time-sequential frames. The method was applied to the THz-FEL and another pulsed THz source and found to be very effective. Thermal time constants of pixels in a 320×240 microbolometer array were also evaluated with this method, using a quantum cascade laser as the THz source.
NASA Astrophysics Data System (ADS)
Yamada, Takayuki; Lee, Doohwan; Shiba, Hiroyuki; Yamaguchi, Yo; Akabane, Kazunori; Uehara, Kazuhiro
We previously proposed a unified wireless system called the "Flexible Wireless System". Comprising flexible access points and a flexible signal processing unit, it collectively receives a wideband spectrum that includes multiple signals from various wireless systems. When multiple signals are received simultaneously, however, reception performance degrades due to interference among them. To address this problem, we propose a new signal separation and reconstruction method for spectrally overlapped signals. The method analyzes spectral information obtained by the short-time Fourier transform to extract amplitude and phase values at each center frequency of the overlapped signals at the flexible signal processing unit. Using these values, signals in the received radio wave data can be separated and reconstructed for simultaneous multi-system reception. In this paper, the BER performance of the proposed method is evaluated using computer simulations. The performance of interference suppression is also evaluated by analyzing the probability density distribution of the amplitude of the overlapped interference on a symbol of the received signal. Simulation results confirm the effectiveness of the proposed method.
Fast L1-based sparse representation of EEG for motor imagery signal classification.
Younghak Shin; Heung-No Lee; Balasingham, Ilangko
2016-08-01
Improvement of classification performance is one of the key challenges in electroencephalogram (EEG)-based motor imagery brain-computer interfaces (BCI). Recently, the sparse representation based classification (SRC) method has been shown to provide satisfactory accuracy in motor imagery classification. In this paper, we aim to evaluate the performance of the SRC method in terms of not only its classification accuracy but also its computation time. For this purpose, we investigate the performance of recently developed fast L1-minimization methods for use in SRC, such as homotopy and the fast iterative soft-thresholding algorithm (FISTA). From experimental analysis, we note that the SRC method with fast L1-minimization algorithms provides robust classification performance, compared to the support vector machine (SVM), in both time and accuracy.
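FISTA, one of the fast L1 solvers named above, is short enough to sketch directly. This is a generic implementation of the algorithm for the lasso problem, not the authors' BCI pipeline; the toy dictionary below stands in for a matrix of training signals:

```python
import numpy as np

def fista(A, y, lam, n_iter=300):
    """FISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1: ISTA's
    gradient + soft-threshold step, accelerated with momentum."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = z - A.T @ (A @ z - y) / L             # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum
        x, t = x_new, t_new
    return x

# toy sparse-representation problem: y is a combination of two
# "training" columns of a unit-norm dictionary A
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(80)
x_true[[3, 4]] = [1.0, 0.8]
y = A @ x_true
x_hat = fista(A, y, lam=0.01)
```

In SRC, the class whose training columns carry most of the recovered coefficient energy (here, the class containing columns 3 and 4) is the predicted label.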
NASA Astrophysics Data System (ADS)
Zha, Guofeng; Wang, Hongqiang; Yang, Zhaocheng; Cheng, Yongqiang; Qin, Yuliang
2016-04-01
As a complementary imaging technology, coincidence imaging radar (CIR) achieves high resolution for stationary or low-speed targets under the assumption that the influence of original position mismatch can be ignored. For high-speed targets that move from the original imaging cell to other cells during imaging, it is inaccurate to reconstruct the target using the previous imaging plane. We focus on the recovery problem for high-speed moving targets in a CIR system based on an intrapulse frequency-random-modulation signal within a single pulse. The effects of target motion on imaging performance are analyzed. Because the basis matrix in the CIR imaging equation is determined by the unknown velocity of the moving target, the target image and the basis matrix must be estimated jointly. We propose an adaptive joint parametric estimation recovery algorithm based on the Tikhonov regularization method to update the target velocity and basis matrix adaptively and recover the target image synchronously. The target velocity and target image are thus obtained in an iterative manner. Simulation results demonstrate the efficiency of the proposed algorithm.
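The Tikhonov-regularized recovery at the core of such an algorithm has a closed-form solution. The sketch below shows only that core step for a fixed basis matrix; the paper's joint, adaptive update of velocity and basis matrix is not reproduced, and the dimensions are illustrative:

```python
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov-regularized recovery: argmin_x ||A x - y||^2 + lam*||x||^2,
    solved in closed form via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# underdetermined toy imaging equation, mildly noisy
rng = np.random.default_rng(4)
A = rng.standard_normal((60, 100))   # stand-in for the CIR basis matrix
x_true = rng.standard_normal(100)
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = tikhonov(A, y, lam=0.1)
```

In the joint scheme described above, this solve would sit inside an outer loop that re-estimates the velocity parameter and rebuilds `A` until the two estimates are mutually consistent.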
Berkels, Benjamin; Rumpf, Martin; Bauer, Sebastian; Ettl, Svenja; Arold, Oliver; Hornegger, Joachim
2013-09-15
Purpose: The intraprocedural tracking of respiratory motion has the potential to substantially improve image-guided diagnosis and interventions. The authors have developed a sparse-to-dense registration approach that is capable of recovering the patient's external 3D body surface and estimating a 4D (3D + time) surface motion field from sparse sampling data and patient-specific prior shape knowledge. Methods: The system utilizes an emerging marker-less and laser-based active triangulation (AT) sensor that delivers sparse but highly accurate 3D measurements in real-time. These sparse position measurements are registered with a dense reference surface extracted from planning data. Thereby a dense displacement field is recovered, which describes the spatio-temporal 4D deformation of the complete patient body surface, depending on the type and state of respiration. It yields both a reconstruction of the instantaneous patient shape and a high-dimensional respiratory surrogate for respiratory motion tracking. The method is validated on a 4D CT respiration phantom and evaluated on both real data from an AT prototype and synthetic data sampled from dense surface scans acquired with a structured-light scanner. Results: In the experiments, the authors estimated surface motion fields with the proposed algorithm on 256 datasets from 16 subjects and in different respiration states, achieving a mean surface reconstruction accuracy of ±0.23 mm with respect to ground truth data, down from a mean initial surface mismatch of 5.66 mm. The 95th percentile of the local residual mesh-to-mesh distance after registration did not exceed 1.17 mm for any subject. On average, the total runtime of our proof-of-concept CPU implementation is 2.3 s per frame, outperforming related work substantially. Conclusions: In external beam radiation therapy, the approach holds potential for patient monitoring during treatment using the reconstructed surface, and for motion-compensated dose delivery using
NASA Astrophysics Data System (ADS)
Watanabe, Hidenori; Sato, Masa-aki; Suzuki, Takafumi; Nambu, Atsushi; Nishimura, Yukio; Kawato, Mitsuo; Isa, Tadashi
2012-06-01
Subdural electrode arrays provide more stable and less invasive electrocorticogram (ECoG) recordings of neural signals than multichannel needle electrodes. Accurate reconstruction of intracortical local field potentials (LFPs) from ECoG signals would provide a critical step toward a less invasive, high-performance brain-machine interface; however, neural signals from individual ECoG channels are generally coarse and limited for estimating deep-layer LFPs. Here, we developed a high-density, 32-channel micro-ECoG array and applied a sparse linear regression algorithm to reconstruct the LFPs at various depths of primary motor cortex (M1) in a monkey performing a reach-and-grasp task. At 0.2 mm beneath the cortical surface, the real and estimated LFPs were significantly correlated (correlation coefficient r = 0.66 ± 0.11), and r at 3.2 mm was still as high as 0.55 ± 0.04. A time-frequency analysis of the reconstructed LFPs showed a clear transition between rest and movement. These methods would be a powerful tool with wide-ranging applicability in neuroscience studies.
Comparison of pulse phase and thermographic signal reconstruction processing methods
NASA Astrophysics Data System (ADS)
Oswald-Tranta, Beata; Shepard, Steven M.
2013-05-01
Active thermography data for nondestructive testing has traditionally been evaluated by either visual or numerical identification of anomalous surface temperature contrast in the IR image sequence obtained as the target sample cools in response to thermal stimulation. However, in recent years, it has been demonstrated that considerably more information about the subsurface condition of a sample can be obtained by evaluating the time history of each pixel independently. In this paper, we evaluate the capabilities of two such analysis techniques, Pulse Phase Thermography (PPT) and Thermographic Signal Reconstruction (TSR), using induction and optical flash excitation. Data sequences from optical pulse and scanned induction heating are analyzed with both methods. Results are evaluated in terms of signal-to-background ratio for a given subsurface feature. In addition to the experimental data, we present finite element simulation models with varying flaw diameter and depth, and discuss size measurement accuracy and the effect of noise on detection limits and sensitivity for both methods.
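TSR in its basic form fits a low-order polynomial to each pixel's cooling history in the log-log domain and then analyzes the smooth fit and its derivatives. A single-pixel sketch on a synthetic cooling curve follows (all parameters illustrative; a defect-free semi-infinite body cools as t^(-1/2), i.e. with log-log slope -0.5):

```python
import numpy as np

# synthetic pixel cooling curve: semi-infinite body gives T ~ t^(-1/2)
t = np.linspace(0.05, 5.0, 200)
T_clean = 1.0 / np.sqrt(np.pi * t)
T = T_clean + 0.005 * np.random.default_rng(5).standard_normal(t.size)

# TSR: low-order polynomial fit of ln(T) against ln(t); the smooth
# reconstruction and its log-log derivatives are then analyzed
coeffs = np.polyfit(np.log(t), np.log(T), deg=5)
T_rec = np.exp(np.polyval(coeffs, np.log(t)))

# first log-log derivative: d ln T / d ln t (~ -0.5 for a defect-free pixel)
slope = np.polyval(np.polyder(coeffs), np.log(t))
```

Subsurface flaws show up as departures of the log-log derivatives from the defect-free behavior, which is why the reconstructed (noise-suppressed) derivatives are more diagnostic than the raw cooling curves.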
NASA Astrophysics Data System (ADS)
Zwick, D.; Sakhaee, E.; Balachandar, S.; Entezari, A.
2017-10-01
Multiphase flow simulation serves a vital purpose in applications as diverse as engineering design, natural disaster prediction, and even the study of astrophysical phenomena. In these scenarios, it can be very difficult, expensive, or even impossible to fully represent the physical system under consideration. Even so, many such real-world applications can be modeled as a two-phase flow containing both continuous and dispersed phases. Consequently, the continuous phase is thought of as a fluid and the dispersed phase as particles. The continuous phase is typically treated in the Eulerian frame of reference and represented on a fixed grid, while the dispersed phase is treated in the Lagrangian frame and represented by a sample distribution of Lagrangian particles that approximate a cloud. Coupling between the phases requires interpolation of the continuous phase properties at the locations of the Lagrangian particles. This interpolation step is straightforward and can be performed at higher-order accuracy. The reverse process of projecting the Lagrangian particle properties from the sample points to the Eulerian grid is complicated by the time-dependent non-uniform distribution of the Lagrangian particles. In this paper we numerically examine three reconstruction, or projection, methods: (i) direct summation (DS), (ii) least-squares, and (iii) sparse approximation. We choose a continuous representation of the dispersed phase property that is systematically varied from a simple single-mode periodic signal to a more complex artificially constructed turbulent signal to see how each method performs in reconstruction. In these experiments, we show that there is a link between the number of dispersed Lagrangian sample points and the number of structured grid points needed to represent the underlying function to machine accuracy. The least-squares method outperforms the other methods in most cases, while the sparse approximation method is able to…
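The least-squares projection that this abstract favors can be illustrated in one dimension: scattered particle samples are fitted to a truncated trigonometric basis and the fitted field is then evaluated on a uniform Eulerian grid. The basis, mode count, and particle count here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def trig_basis(x, K):
    """Truncated trigonometric basis [1, cos(2*pi*k*x), sin(2*pi*k*x)]."""
    cols = [np.ones_like(x)]
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * x))
        cols.append(np.sin(2 * np.pi * k * x))
    return np.column_stack(cols)

def project_least_squares(x_p, f_p, K, n_grid=64):
    """Fit scattered particle samples (x_p, f_p) to the basis by least
    squares, then evaluate the fit on a uniform Eulerian grid."""
    c, *_ = np.linalg.lstsq(trig_basis(x_p, K), f_p, rcond=None)
    x_g = np.linspace(0.0, 1.0, n_grid, endpoint=False)
    return x_g, trig_basis(x_g, K) @ c

rng = np.random.default_rng(1)
x_p = rng.random(200)                 # non-uniform particle positions
f_p = np.cos(2 * np.pi * 3 * x_p)     # single-mode continuous signal
x_grid, f_grid = project_least_squares(x_p, f_p, K=8)
```

When the underlying signal lies in the span of the basis and the particles oversample it, the projection reproduces the field on the grid to machine accuracy, matching the link the abstract draws between sample count and grid resolution.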
Ghaderi, Parviz; Marateb, Hamid R
2017-07-01
The aim of this study was to reconstruct low-quality High-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes may provide low-quality signals. We used a variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDE outperformed the other methods (average RMSE 8.7 μVrms ± 6.1 μVrms and 7.5 μVrms ± 5.9 μVrms) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic Spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running time of the proposed Poisson and wave PDE methods, implemented using a Vectorization package, was 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.
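A minimal sketch of PDE-based inpainting of outlier channels, using the simplest member of the family (Laplace's equation solved by Jacobi relaxation) rather than the authors' Poisson or wave formulations; the map size and outlier pattern are invented for illustration.

```python
import numpy as np

def inpaint_laplace(grid, mask, n_iter=2000):
    """Fill outlier channels (mask == True) by iteratively solving
    Laplace's equation: each bad pixel relaxes toward the average of
    its four neighbours (Jacobi iteration); good pixels stay fixed."""
    g = grid.astype(float).copy()
    g[mask] = np.mean(g[~mask])               # crude initial guess
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0)
                      + np.roll(g, 1, 1) + np.roll(g, -1, 1))
        g[mask] = avg[mask]                   # update bad pixels only
    return g

# a smooth (harmonic) synthetic activity map with three outlier channels
true_map = np.add.outer(np.arange(12.0), np.arange(12.0))
mask = np.zeros((12, 12), bool)
mask[5, 5] = mask[5, 6] = mask[6, 5] = True
corrupted = true_map.copy()
corrupted[mask] = 0.0                         # low-quality channels
filled = inpaint_laplace(corrupted, mask)
```

Because the surrounding good channels act as Dirichlet boundary data, the relaxation converges to the harmonic interpolant, which for a smooth map recovers the missing values.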
2016-09-02
…need to be determined include the K × P pre-combiner weights, arranged in a matrix C; the feedforward filter coefficients, arranged in P vectors… that minimizes the MSE in data detection. The signals obtained after pre-combining are given by x_p(t) = sum_{k=1}^{K} c*_{p,k} v_k(t) = c'_p v(t), p = 1, ..., P, (20) where c_p is the pth column of the pre-combining matrix C. The signals are sampled once every T_s seconds. (As before, we may assume T_s = T/2.) Let V…
Truncation Error Analysis on Reconstruction of Signal From Unsymmetrical Local Average Sampling.
Pang, Yanwei; Song, Zhanjie; Li, Xuelong; Pan, Jing
2015-10-01
The classical Shannon sampling theorem reconstructs a band-limited signal from its sample values taken at regularly spaced instants using the well-known sinc function. However, due to the inertia of the measurement apparatus, it is impossible to measure the value of a signal precisely at such discrete times. In practice, only unsymmetrical local averages of the signal near the regular instants can be measured and used as the inputs for a signal reconstruction method. In addition, when implemented in hardware, the traditional sinc function cannot be used directly for signal reconstruction. We propose using the Taylor expansion of the sinc function to reconstruct a signal sampled from unsymmetrical local averages, and we give an upper bound on the reconstruction error (i.e., the truncation error). The convergence of the reconstruction method is also established.
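The setting can be illustrated numerically: ideal point samples are replaced by unsymmetrical local averages, and the signal is still reconstructed with a truncated sinc series. For brevity this sketch evaluates the sinc directly rather than via its Taylor expansion as the paper proposes, and all numbers (averaging window, truncation length) are illustrative.

```python
import numpy as np

def sinc_reconstruct(samples, n0, t):
    """Shannon-type reconstruction f(t) = sum_n f_n * sinc(t - n) from
    samples associated with integer instants n0, n0+1, ..."""
    n = n0 + np.arange(len(samples))
    return float(np.sum(samples * np.sinc(t - n)))

f = lambda t: np.cos(2 * np.pi * 0.2 * t)   # band-limited (0.2 < Nyquist 0.5)
n = np.arange(-200, 201)
# unsymmetrical local averages around each instant, mimicking sensor inertia
eps = 0.01
samples = (f(n - 0.3 * eps) + f(n) + f(n + eps)) / 3.0
est = sinc_reconstruct(samples, -200, 0.5)   # reconstruct between samples
```

The residual error combines the truncation of the series and the perturbation introduced by averaging, which is exactly the error the paper bounds.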
Reconstruction of signaling networks regulating fungal morphogenesis by transcriptomics.
Meyer, Vera; Arentshorst, Mark; Flitter, Simon J; Nitsche, Benjamin M; Kwon, Min Jin; Reynaga-Peña, Cristina G; Bartnicki-Garcia, Salomon; van den Hondel, Cees A M J J; Ram, Arthur F J
2009-11-01
Coordinated control of hyphal elongation and branching is essential for sustaining mycelial growth of filamentous fungi. In order to study the molecular machinery ensuring polarity control in the industrial fungus Aspergillus niger, we took advantage of the temperature-sensitive (ts) apical-branching ramosa-1 mutant. We show here that this strain serves as an excellent model system to study critical steps of polar growth control during mycelial development and report for the first time a transcriptomic fingerprint of apical branching for a filamentous fungus. This fingerprint indicates that several signal transduction pathways, including TORC2, phospholipid, calcium, and cell wall integrity signaling, concertedly act to control apical branching. We furthermore identified the genetic locus affected in the ramosa-1 mutant by complementation of the ts phenotype. Sequence analyses demonstrated that a single amino acid exchange in the RmsA protein is responsible for induced apical branching of the ramosa-1 mutant. Deletion experiments showed that the corresponding rmsA gene is essential for the growth of A. niger, and complementation analyses with Saccharomyces cerevisiae evidenced that RmsA serves as a functional equivalent of the TORC2 component Avo1p. TORC2 signaling is required for actin polarization and cell wall integrity in S. cerevisiae. Congruently, our microscopic investigations showed that polarized actin organization and chitin deposition are disturbed in the ramosa-1 mutant. The integration of the transcriptomic, genetic, and phenotypic data obtained in this study allowed us to reconstruct a model for cellular events involved in apical branching.
Wiener filter reloaded: fast signal reconstruction without preconditioning
NASA Astrophysics Data System (ADS)
Kodi Ramanah, Doogesh; Lavaux, Guilhem; Wandelt, Benjamin D.
2017-06-01
We present a high-performance solution to the Wiener filtering problem via a formulation that is dual to the recently developed messenger technique. This new dual messenger algorithm, like its predecessor, efficiently calculates the Wiener filter solution of large and complex data sets without preconditioning and can account for inhomogeneous noise distributions and arbitrary mask geometries. We demonstrate the capabilities of this scheme in signal reconstruction by applying it to a simulated cosmic microwave background temperature data set. The performance of this new method is compared to that of the standard messenger algorithm and the preconditioned conjugate gradient (PCG) approach, using a series of well-known convergence diagnostics and their processing times, for the particular problem under consideration. This variant of the messenger algorithm matches the performance of the PCG method in terms of the effectiveness of reconstruction of the input angular power spectrum and converges smoothly to the final solution. The dual messenger algorithm outperforms the standard messenger and PCG methods in terms of execution time, running to completion around two times faster than the standard messenger method and three to four times faster than PCG for the specific problem considered.
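For reference, the Wiener filter problem that the messenger schemes solve iteratively becomes diagonal in Fourier space when the noise is homogeneous and there is no mask, the one case where no iteration is needed. The spectra and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024
k = np.fft.rfftfreq(n)
S = 1.0 / (1.0 + (k / 0.01) ** 2)   # assumed signal power spectrum
sigma = 0.2                         # white noise level, so N_k = sigma**2

# draw a random signal with spectrum S, then form noisy data d = s + e
s = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * np.sqrt(S), n)
d = s + sigma * rng.standard_normal(n)

# Wiener filter, diagonal in Fourier space: s_hat_k = S_k/(S_k + N_k) * d_k
W = S / (S + sigma ** 2)
s_hat = np.fft.irfft(W * np.fft.rfft(d), n)
```

Masks and inhomogeneous noise destroy this diagonality, which is precisely why the messenger and dual messenger iterations (or a preconditioned solver) are needed in the paper's setting.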
Waldmann, I. P.
2014-01-01
Independent component analysis (ICA) has recently been shown to be a promising new path in data analysis and de-trending of exoplanetary time series signals. Such approaches do not require or assume any prior or auxiliary knowledge about the data or instrument in order to de-convolve the astrophysical light curve signal from instrument or stellar systematic noise. These methods are often known as 'blind-source separation' (BSS) algorithms. Unfortunately, all BSS methods suffer from an amplitude and sign ambiguity of their de-convolved components, which severely limits these methods in low signal-to-noise (S/N) observations where their scalings cannot be determined otherwise. Here we present a novel approach to calibrate ICA using sparse wavelet calibrators. The Amplitude Calibrated Independent Component Analysis (ACICA) allows for the direct retrieval of the independent components' scalings and the robust de-trending of low S/N data. Such an approach gives us a unique and unprecedented insight into the underlying morphology of a data set, which makes this method a powerful tool for exoplanetary data de-trending and signal diagnostics.
Getting a decent (but sparse) signal to the brain for users of cochlear implants.
Wilson, Blake S
2015-04-01
The challenge in getting a decent signal to the brain for users of cochlear implants (CIs) is described. A breakthrough occurred in 1989 that later enabled most users to understand conversational speech with their restored hearing alone. Subsequent developments included stimulation in addition to that provided with a unilateral CI, either with electrical stimulation on both sides or with acoustic stimulation in combination with a unilateral CI, the latter for persons with residual hearing at low frequencies in either or both ears. Both types of adjunctive stimulation produced further improvements in performance for substantial fractions of patients. Today, the CI and related hearing prostheses are the standard of care for profoundly deaf persons and ever-increasing indications are now allowing persons with less severe losses to benefit from these marvelous technologies. The steps in achieving the present levels of performance are traced, and some possibilities for further improvements are mentioned. This article is part of a Special Issue entitled…
NASA Astrophysics Data System (ADS)
Prabhakar, Sunil Kumar; Rajaguru, Harikumar
2015-12-01
The most common and frequently occurring neurological disorder is epilepsy, and the main method used for its diagnosis is electroencephalogram (EEG) signal analysis. Because of the length of EEG recordings, manual analysis by an expert is quite time-consuming. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic EEG signals, which are then classified using Sparse Representation Classifiers (SRC). SRC is used to classify epilepsy risk levels from EEG signals, and parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.
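The SRC decision rule assigns a test vector to the class whose training dictionary represents it with the smallest residual. The sketch below keeps that rule but substitutes plain least squares for the sparse coding step, and uses synthetic subspace data rather than EEG features; it is a simplified stand-in, not the paper's classifier.

```python
import numpy as np

def src_classify(dictionaries, y):
    """Representation-based classification: code the test vector y with
    each class's training dictionary (least squares stands in for the
    sparse coding step) and return the class with the smallest residual."""
    residuals = [np.linalg.norm(y - D @ np.linalg.lstsq(D, y, rcond=None)[0])
                 for D in dictionaries]
    return int(np.argmin(residuals))

rng = np.random.default_rng(3)
# two synthetic "feature" classes living on different 3-D subspaces of R^16
B0, B1 = rng.standard_normal((16, 3)), rng.standard_normal((16, 3))
D0 = B0 @ rng.standard_normal((3, 20))   # 20 training vectors per class
D1 = B1 @ rng.standard_normal((3, 20))
y = B1 @ rng.standard_normal(3) + 0.01 * rng.standard_normal(16)
label = src_classify([D0, D1], y)
```

The rule works whenever samples of each class concentrate near a low-dimensional subspace, which is the working assumption behind SRC.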
NASA Astrophysics Data System (ADS)
Park, Byeongjin; Sohn, Hoon
2017-02-01
Laser ultrasonic scanning is attractive for damage detection due to its noncontact nature, sensitivity to local damage, and high spatial resolution. However, its practicality is limited because scanning at a high spatial resolution demands a prohibitively long scanning time. Recently, compressed sensing (CS) and super-resolution (SR) have been gaining popularity in the image recovery field. CS estimates unmeasured ultrasonic responses from measured responses, and SR recovers high spatial frequency information from low resolution images. Inspired by these techniques, a laser ultrasonic wavefield reconstruction technique is developed to localize and visualize damage with a reduced number of ultrasonic measurements. First, a low spatial resolution ultrasonic wavefield image for a given inspection region is reconstructed from a reduced number of ultrasonic measurements using CS. Here, the ultrasonic waves are generated using a pulsed laser, and measured at a fixed sensing point using a laser Doppler vibrometer (LDV). Then, a high spatial resolution ultrasonic wave image is recovered from the reconstructed low spatial resolution image using SR. The number of measurement points required for ultrasonic wavefield imaging is reduced by over 90%. The performance of the proposed technique is validated by an experiment performed on a cracked aluminum plate.
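A standard way to realize the CS estimation step is greedy sparse recovery, e.g. Orthogonal Matching Pursuit. This abstract does not state which solver was used, so the sketch below is a generic stand-in with an invented random sensing matrix and synthetic sparse coefficients.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    y = A @ x by picking the best-correlated column, then re-fitting
    the coefficients on the selected support by least squares."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 128)) / np.sqrt(60)   # random sensing matrix
x_true = np.zeros(128)
x_true[[7, 50, 90]] = [1.6, -1.2, 0.9]             # sparse coefficients
x_hat = omp(A, A @ x_true, k=3)
```

With noiseless measurements and a well-conditioned random matrix, 60 measurements suffice to recover the 3-sparse vector exactly, which is the mechanism that lets CS cut the number of scan points.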
Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay
Huang, Jian
2013-03-12
A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.
A Max-Margin Perspective on Sparse Representation-Based Classification
2013-11-30
Sparse Representation-based Classification (SRC) is a powerful tool in distinguishing signal… a reconstructive perspective, which neither offers any guarantee on its classification performance nor pro…
Reconstruction and signal propagation analysis of the Syk signaling network in breast cancer cells.
Naldi, Aurélien; Larive, Romain M; Czerwinska, Urszula; Urbach, Serge; Montcourrier, Philippe; Roy, Christian; Solassol, Jérôme; Freiss, Gilles; Coopman, Peter J; Radulescu, Ovidiu
2017-03-01
The ability to build in-depth cell signaling networks from vast experimental data is a key objective of computational biology. The spleen tyrosine kinase (Syk) protein, a well-characterized key player in immune cell signaling, was surprisingly first shown by our group to exhibit an onco-suppressive function in mammary epithelial cells, a finding corroborated by many other studies, but the molecular mechanisms of this function remain largely unsolved. Based on existing proteomic data, we report here the generation of an interaction-based network of signaling pathways controlled by Syk in breast cancer cells. Pathway enrichment of the Syk targets previously identified by quantitative phospho-proteomics indicated that Syk is engaged in cell adhesion, motility, growth and death. Using the components and interactions of these pathways, we bootstrapped the reconstruction of a comprehensive network covering Syk signaling in breast cancer cells. To generate in silico hypotheses on Syk signaling propagation, we developed a method allowing us to rank paths between Syk and its targets. We first annotated the network according to experimental datasets. We then combined shortest path computation with random walk processes to estimate the importance of individual interactions and selected biologically relevant pathways in the network. Molecular and cell biology experiments allowed us to distinguish candidate mechanisms that underlie the impact of Syk on the regulation of cortactin and ezrin, both involved in actin-mediated cell adhesion and motility. The Syk network was further completed with the results of our biological validation experiments. The resulting Syk signaling sub-networks can be explored via an online visualization platform.
Accelerated nonlinear multichannel ultrasonic tomographic imaging using target sparseness.
Chengdong Dong; Yuanwei Jin; Enyue Lu
2014-03-01
This paper presents an accelerated iterative Landweber method for nonlinear ultrasonic tomographic imaging in a multiple-input multiple-output (MIMO) configuration under a sparsity constraint on the image. The proposed method introduces emerging MIMO signal processing techniques and target sparseness constraints into the traditional computational imaging field, thus significantly improving the speed of image reconstruction compared with the conventional imaging method while producing high quality images. Using numerical examples, we demonstrate that incorporating prior knowledge about the imaging field, such as target sparseness, significantly accelerates the convergence of the iterative imaging method, which provides considerable benefits to real-time tomographic imaging applications.
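A sparsity-constrained, accelerated Landweber iteration can be sketched as a gradient (Landweber) step, a soft-threshold to enforce sparsity, and Nesterov-style momentum for acceleration (the FISTA recipe). The MIMO forward operator is replaced here by an invented random matrix, so this is a generic stand-in rather than the authors' imaging operator.

```python
import numpy as np

def fista(A, y, lam=0.01, n_iter=200):
    """Sparsity-constrained Landweber iteration with Nesterov
    acceleration: gradient step, soft-threshold, momentum."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        g = z - (A.T @ (A @ z - y)) / L                  # Landweber step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((80, 256)) / np.sqrt(80)   # stand-in forward operator
img = np.zeros(256)
img[[10, 100, 200]] = [2.0, -1.5, 1.0]             # sparse "image"
x_hat = fista(A, A @ img)
```

The momentum term is what upgrades the plain Landweber iteration's O(1/k) convergence to O(1/k^2), the kind of acceleration the abstract reports.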
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2016-11-22
An autoencoder-based framework that simultaneously reconstructs and classifies biomedical signals is proposed. Previous work has treated reconstruction and classification as separate problems; this is the first work to propose a combined framework that addresses the issue in a holistic fashion.
Baseline Signal Reconstruction for Temperature Compensation in Lamb Wave-Based Damage Detection
Liu, Guoqiang; Xiao, Yingchun; Zhang, Hua; Ren, Gexue
2016-01-01
Temperature variations have significant effects on the propagation of Lamb waves and can therefore severely limit Lamb wave-based damage detection. In order to mitigate the temperature effect, a temperature compensation method based on baseline signal reconstruction is developed for Lamb wave-based damage detection. The method reconstructs a baseline signal at the temperature of the current signal; in other words, it compensates the baseline signal to the temperature of the current signal. The Hilbert transform is used to compensate the phase of the baseline signal, and orthogonal matching pursuit (OMP) is used to compensate its amplitude. Experiments were conducted on two composite panels to validate the effectiveness of the proposed method. Results show that the proposed method works effectively for temperature intervals of at least 18 °C centered on the baseline signal temperature, and can be applied to actual damage detection. PMID:27529245
Reconstruction of sound source signal by analytical passive TR in the environment with airflow
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu
2017-03-01
In the acoustic design of air vehicles, the time-domain signals of noise sources on the surface of air vehicles can serve as data support to reveal the noise source generation mechanism, analyze acoustic fatigue, and take measures for noise insulation and reduction. To rapidly reconstruct the time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then submitted to the AP-TR, reconstructing more accurate sound source signals in an environment with airflow. As an analytical method, AP-TR offers a supplementary way to reconstruct sound source signals in 3D space in an environment with airflow, as an alternative to numerical TR. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, a theoretical and experimental comparison between AP-TR and time-domain beamforming in reconstructing the sound source signal is also presented.
Reconstruction of stress corrosion cracks using signals of pulsed eddy current testing
NASA Astrophysics Data System (ADS)
Wang, Li; Xie, Shejuan; Chen, Zhenmao; Li, Yong; Wang, Xiaowei; Takagi, Toshiyuki
2013-06-01
A scheme to apply signals of pulsed eddy current testing (PECT) to reconstruct a deep stress corrosion crack (SCC) is proposed on the basis of a multi-layer and multi-frequency reconstruction strategy. First, a numerical method is introduced to extract conventional eddy current testing (ECT) signals of different frequencies from the PECT responses at different scanning points, which are necessary for multi-frequency ECT inversion. Second, the conventional fast forward solver for ECT signal simulation is upgraded to calculate the single-frequency pickup signal of a magnetic field by introducing a strategy that employs a tiny search coil. Using the multiple-frequency ECT signals and the upgraded fast signal simulator, we reconstructed the shape profiles and conductivity of an SCC at different depths layer-by-layer with a hybrid inversion scheme of the conjugate gradient and particle swarm optimisation. Several modelled SCCs of rectangular or stepwise shape in an SUS304 plate are reconstructed from simulated PECT signals with artificial noise. The reconstruction results show better precision in crack depth than the conventional ECT inversion method, which demonstrates the validity and efficiency of the proposed PECT inversion scheme.
Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing.
Yoshida, Ryo; West, Mike
2010-05-01
We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loading matrices that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate "artificial" posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data.
Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging
Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao
2016-01-01
Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114
A Novel Reconstruction Framework for Time-Encoded Signals with Integrate-and-Fire Neurons.
Florescu, Dorian; Coca, Daniel
2015-09-01
Integrate-and-fire neurons are time encoding machines that convert the amplitude of an analog signal into a nonuniform, strictly increasing sequence of spike times. Under certain conditions, the encoded signals can be reconstructed from the nonuniform spike time sequences using a time decoding machine. Time encoding and time decoding methods have been studied using the nonuniform sampling theory for band-limited spaces, as well as for generic shift-invariant spaces. This letter proposes a new framework for studying IF time encoding and decoding by reformulating the IF time encoding problem as a uniform sampling problem. This framework forms the basis for two new algorithms for reconstructing signals from spike time sequences. We demonstrate that the proposed reconstruction algorithms are faster, and thus better suited for real-time processing, while providing a similar level of accuracy, compared to the standard reconstruction algorithm.
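The encoding half of the letter's setup can be sketched directly: an integrate-and-fire neuron integrates the biased input and emits a spike each time a threshold is reached. The parameter values (bias, threshold, integration constant) below are illustrative, not taken from the letter.

```python
import numpy as np

def if_encode(u, dt, kappa=1.0, delta=0.05, bias=1.0):
    """Integrate-and-fire time encoding: integrate (bias + u)/kappa and
    emit a spike time whenever the running integral reaches the
    threshold delta, keeping any excess after the reset."""
    integral, spikes = 0.0, []
    for i, u_i in enumerate(u):
        integral += (bias + u_i) * dt / kappa
        if integral >= delta:
            spikes.append(i * dt)
            integral -= delta
    return np.array(spikes)

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
u = 0.5 * np.sin(2 * np.pi * 3.0 * t)   # bounded input, |u| < bias
spikes = if_encode(u, dt)
```

Because bias + u stays positive, the integral is strictly increasing and the spike times form the strictly increasing sequence that the decoding machinery assumes; the average rate is governed by bias/(kappa * delta).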
NASA Astrophysics Data System (ADS)
Oh, Jinsung; Senay, Seda; Chaparro, Luis F.
2010-12-01
We consider the reconstruction of signals from nonuniformly spaced samples using projection onto convex sets (POCS) implemented with the evolutionary time-frequency transform. Signals of practical interest have finite time support and are nearly band-limited, and as such can be better represented by Slepian functions than by sinc functions. The evolutionary spectral theory provides a time-frequency representation of nonstationary signals, and for deterministic signals the kernel of the evolutionary representation can be derived from a Slepian projection of the signal. Low-pass and band-pass signals are thus efficiently represented by means of the Slepian functions. Assuming the given nonuniformly spaced samples are from a signal satisfying the finite time support and essential band-limitedness conditions with a known center frequency, imposing time and frequency limitations in the evolutionary transformation permits us to reconstruct the signal iteratively. Restricting the signal to a known finite time and frequency support, a closed convex set, the projection generated by the time-frequency transformation converges to a close approximation of the original signal. Simulation results illustrate the evolutionary Slepian-based transform in the representation and reconstruction of signals from irregularly spaced samples and from contiguous runs of lost samples.
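The iterative projection scheme described above can be illustrated with a toy Papoulis–Gerchberg-style POCS loop. For simplicity this sketch substitutes an ideal Fourier band-limit for the Slepian/evolutionary-transform projection used in the paper; the signal, sample positions, and bandwidth are illustrative assumptions.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def project_bandlimited(x, cutoff):
    """Orthogonal projection onto the convex set of signals whose DFT
    vanishes outside |k| <= cutoff (the frequency-limitation step)."""
    X = dft(x)
    N = len(X)
    return idft([Xk if min(k, N - k) <= cutoff else 0.0
                 for k, Xk in enumerate(X)])

def pocs_reconstruct(idx, vals, N, cutoff, iters=600):
    """Alternate projections: enforce the known nonuniform samples,
    then re-impose the band limit."""
    x = [0.0] * N
    for _ in range(iters):
        for i, v in zip(idx, vals):
            x[i] = v                        # projection onto the sample-consistency set
        x = project_bandlimited(x, cutoff)  # projection onto the band-limited set
    return x

# Band-limited test signal and irregular sample positions (assumed).
N, cutoff = 32, 2
true = [math.sin(2 * math.pi * n / N) + 0.5 * math.cos(4 * math.pi * n / N)
        for n in range(N)]
idx = [0, 1, 3, 5, 8, 11, 14, 17, 20, 23, 27, 30]
rec = pocs_reconstruct(idx, [true[i] for i in idx], N, cutoff)
err = max(abs(a - b) for a, b in zip(rec, true))
```

Both sets are convex and contain the true signal, so the alternating projections converge; here 12 irregular samples suffice because the band-limited set has only 5 degrees of freedom.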
Ensslin, Torsten A.; Frommert, Mona
2011-05-15
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or when an additional scale-independent spectral smoothness prior can be adopted.
New signal processing technique for density profile reconstruction using reflectometry
Clairet, F.; Bottereau, C.; Ricaud, B.; Briolle, F.; Heuraux, S.
2011-08-15
Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal-to-noise ratio of the phase measurement, adequate data analysis is required. A new data processing technique based on a time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-3. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection effects caused by the proximity of the plasma.
NASA Astrophysics Data System (ADS)
Jeong, Hyunwook; Kim, Hak Gu; Ro, Yong Man
2017-03-01
In this paper, we investigate quality factors of numerical reconstruction from a small number of signals, based on the sparsity of the holographic fringe pattern. A holographic fringe pattern generated by Fresnel diffraction is a complex amplitude with a sparse distribution in the frequency domain. The sparsity of the holographic fringe pattern can play a key role in reconstruction quality assessment in compressive holography. We investigate the sparsity constraint on the holographic fringe pattern, which influences the overall quality of the numerically reconstructed data. In addition, we investigate reconstruction quality for various subsampling methods, including uniform sampling, random sampling, variable density sampling, and magnitude-based sampling. Experiments were conducted to evaluate reconstruction quality under different sparsity constraints and sampling patterns. The results indicate that the way the sparse signals are extracted can significantly affect the quality of numerical reconstruction in digital holography, and that a visually plausible reconstruction can be obtained from a sparse holographic fringe pattern.
Sparse coding with memristor networks.
Sheridan, Patrick M; Cai, Fuxi; Du, Chao; Ma, Wen; Zhang, Zhengya; Lu, Wei D
2017-08-01
Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.
Implementation and Performance of the Signal Reconstruction in the ATLAS Hadronic Tile Calorimeter
NASA Astrophysics Data System (ADS)
Valero, Alberto; ATLAS Tile Calorimeter Group
The Tile Calorimeter (TileCal) for the ATLAS experiment at the CERN Large Hadron Collider (LHC) is currently taking data with proton-proton collisions. The Tile Calorimeter is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are read out by wavelength-shifting fibers coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The TileCal front-end electronics read out the signals produced by about 10000 channels measuring energies ranging from ∼30 MeV to ∼2 TeV. The read-out system is designed to reconstruct the data in real time, fulfilling the tight time constraint imposed by the ATLAS first-level trigger rate (100 kHz). The main component of the read-out system is the Digital Signal Processor (DSP), which uses the Optimal Filtering technique to compute for each channel the signal amplitude, phase and quality factor at the required high rate. A solid knowledge of the signal pulse shapes and of the timing is fundamental to reach the required accuracy in energy reconstruction. Systematic studies to understand the pulse shape have been carried out using both electronic calibration signals and data collected in proton-proton collisions at √s = 7 TeV. After a short overview of the TileCal system we discuss the implementation of Optimal Filtering signal reconstruction, highlighting the constraints imposed by the use of the DSP fixed-point arithmetic. We also report results on the validation of the DSP signal reconstruction and on the overall signal reconstruction performance measured in calibration, single-beam and collision events.
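The Optimal Filtering step described above is a linear combination of the digitized samples, with weights chosen to minimize the noise variance subject to unit response to the known pulse shape and first-order insensitivity to timing. The sketch below assumes white noise and a hypothetical Gaussian pulse shape; the real TileCal weights are derived from measured pulse shapes and the full noise covariance, and are evaluated in fixed-point arithmetic on the DSP.

```python
import math

# Hypothetical pulse shape g(t) and its derivative, sampled every 25 ns.
sigma = 45.0                                   # assumed pulse width (ns)
ts = [-75.0, -50.0, -25.0, 0.0, 25.0, 50.0, 75.0]
g = [math.exp(-t * t / (2 * sigma ** 2)) for t in ts]
dg = [-t / sigma ** 2 * gi for t, gi in zip(ts, g)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Optimal Filtering weights for white noise: minimize variance subject to
# a.g = 1, a.dg = 0 (amplitude) and b.g = 0, b.dg = -1 (amplitude * phase).
Sgg, Sgd, Sdd = dot(g, g), dot(g, dg), dot(dg, dg)
det = Sgg * Sdd - Sgd * Sgd
a = [(Sdd * gi - Sgd * di) / det for gi, di in zip(g, dg)]
b = [(Sgd * gi - Sgg * di) / det for gi, di in zip(g, dg)]

# Simulated samples of a pulse with amplitude A and a small phase shift tau;
# to first order s ~= A*g - A*tau*dg, so the constraints decouple A and tau.
A_true, tau_true = 1000.0, 2.0
s = [A_true * math.exp(-(t - tau_true) ** 2 / (2 * sigma ** 2)) for t in ts]

A_hat = dot(a, s)             # reconstructed amplitude
tau_hat = dot(b, s) / A_hat   # reconstructed phase (ns)
```

The constraint a.g = 1 makes the filter respond with unit gain to the nominal pulse, while a.dg = 0 suppresses the leading-order effect of a timing error, which is why accurate pulse-shape knowledge matters for the energy accuracy discussed above.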
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos; Pina, Robert
2005-05-17
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data, and thus the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Accelerated signal encoding and reconstruction using pixon method
Puetter, Richard; Yahil, Amos
2002-01-01
The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data, and thus the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.
Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens
2016-01-01
Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044
EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT
Choudhary, Kumari Sonal; Rohatgi, Neha; Briem, Eirikur; Gudjonsson, Thorarinn; Gudmundsson, Steinn; Rolfsson, Ottar
2016-01-01
Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), are however a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT-regulated metabolic genes. Signaling flux differences between the EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. Signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling-metabolism crosstalk in three additional breast epithelial cell lines highlighted variability among in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend. PMID:27253373
Network Reconstruction and Systems Analysis of Cardiac Myocyte Hypertrophy Signaling*
Ryall, Karen A.; Holland, David O.; Delaney, Kyle A.; Kraeutler, Matthew J.; Parker, Audrey J.; Saucerman, Jeffrey J.
2012-01-01
Cardiac hypertrophy is managed by a dense web of signaling pathways with many pathways influencing myocyte growth. A quantitative understanding of the contributions of individual pathways and their interactions is needed to better understand hypertrophy signaling and to develop more effective therapies for heart failure. We developed a computational model of the cardiac myocyte hypertrophy signaling network to determine how the components and network topology lead to differential regulation of transcription factors, gene expression, and myocyte size. Our computational model of the hypertrophy signaling network contains 106 species and 193 reactions, integrating 14 established pathways regulating cardiac myocyte growth. 109 of 114 model predictions were validated using published experimental data testing the effects of receptor activation on transcription factors and myocyte phenotypic outputs. Network motif analysis revealed an enrichment of bifan and biparallel cross-talk motifs. Sensitivity analysis was used to inform clustering of the network into modules and to identify species with the greatest effects on cell growth. Many species influenced hypertrophy, but only a few nodes had large positive or negative influences. Ras, a network hub, had the greatest effect on cell area and influenced more species than any other protein in the network. We validated this model prediction in cultured cardiac myocytes. With this integrative computational model, we identified the most influential species in the cardiac hypertrophy signaling network and demonstrate how different levels of network organization affect myocyte size, transcription factors, and gene expression. PMID:23091058
Sudarski, Sonja; Henzler, Thomas; Haubenreisser, Holger; Dösch, Christina; Zenge, Michael O; Schmidt, Michaela; Nadar, Mariappan S; Borggrefe, Martin; Schoenberg, Stefan O; Papavassiliu, Theano
2017-01-01
Purpose To prospectively evaluate the accuracy of left ventricle (LV) analysis with a two-dimensional real-time cine true fast imaging with steady-state precession (trueFISP) magnetic resonance (MR) imaging sequence featuring sparse data sampling with iterative reconstruction (SSIR) performed with and without breath-hold (BH) commands at 3.0 T. Materials and Methods Ten control subjects (mean age, 35 years; range, 25-56 years) and 60 patients scheduled to undergo a routine cardiac examination that included LV analysis (mean age, 58 years; range, 20-86 years) underwent a fully sampled segmented multiple BH cine sequence (standard of reference) and a prototype undersampled SSIR sequence performed during a single BH and during free breathing (non-BH imaging). Quantitative analysis of LV function and mass was performed. Linear regression, Bland-Altman analysis, and paired t testing were performed. Results Similar to the results in control subjects, analysis of the 60 patients showed excellent correlation with the standard of reference for single-BH SSIR (r = 0.93-0.99) and non-BH SSIR (r = 0.92-0.98) for LV ejection fraction (EF), volume, and mass (P < .0001 for all). Irrespective of breath holding, LV end-diastolic mass was overestimated with SSIR (standard of reference: 163.9 g ± 58.9, single-BH SSIR: 178.5 g ± 62.0 [P < .0001], non-BH SSIR: 175.3 g ± 63.7 [P < .0001]); the other parameters were not significantly different (EF: 49.3% ± 11.9 with standard of reference, 48.8% ± 11.8 with single-BH SSIR, 48.8% ± 11 with non-BH SSIR; P = .03 and P = .12, respectively). Bland-Altman analysis showed similar measurement errors for single-BH SSIR and non-BH SSIR when compared with standard of reference measurements for EF, volume, and mass. Conclusion Assessment of LV function with SSIR at 3.0 T is noninferior to the standard of reference irrespective of BH commands. LV mass, however, is overestimated with SSIR. (©) RSNA, 2016 Online supplemental material is available
Improved terahertz imaging with a sparse synthetic aperture array
NASA Astrophysics Data System (ADS)
Zhang, Zhuopeng; Buma, Takashi
2010-02-01
Sparse arrays are highly attractive for implementing two-dimensional arrays, but come at the cost of degraded image quality. We demonstrate significantly improved performance by exploiting the coherent ultrawideband nature of single-cycle THz pulses. We apply two weighting factors to each time-delayed signal before the final summation that forms the reconstructed image. The first factor employs cross-correlation analysis to measure the degree of walk-off between time-delayed signals of neighboring elements. The second factor measures the spatial coherence of the time-delayed signals. Synthetic aperture imaging experiments are performed with a THz time-domain system employing a mechanically scanned single transceiver element. Cross-sectional imaging of wire targets is performed with a one-dimensional sparse array with an inter-element spacing of 1.36 mm (over four wavelengths at 1 THz). The proposed image reconstruction technique improves image contrast by 15 dB, which is impressive considering the relatively few elements in the array. En-face imaging of a razor blade is also demonstrated with a 56 x 56 element two-dimensional array, showing reduced image artifacts with adaptive reconstruction. These encouraging results suggest that the proposed image reconstruction technique can be highly beneficial to the development of large-area two-dimensional THz arrays.
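The second weighting factor, the spatial coherence of the time-delayed channel signals, can be sketched as a standard coherence factor applied per pixel before the final summation. The channel data below are illustrative; the paper's cross-correlation walk-off factor is not reproduced here.

```python
def coherence_factor(samples):
    """Spatial coherence of the per-element, time-delayed samples at one
    image pixel: |coherent sum|^2 / (N * incoherent sum). Equals 1.0 for
    perfectly aligned signals and tends to 0 for incoherent channel data."""
    n = len(samples)
    coherent = sum(samples) ** 2
    incoherent = n * sum(s * s for s in samples)
    return coherent / incoherent if incoherent else 0.0

# A pixel on a true target: echoes add in phase across the 16 elements.
aligned = [1.0] * 16
# A clutter/grating-lobe pixel: delayed samples with no common phase.
clutter = [(-1.0) ** i for i in range(16)]

cf_target = coherence_factor(aligned)   # high weight keeps the target
cf_clutter = coherence_factor(clutter)  # low weight suppresses the artifact
```

Multiplying each pixel's delay-and-sum output by such a factor is one common way adaptive reconstruction suppresses the grating-lobe artifacts that sparse spacing (here over four wavelengths) would otherwise produce.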
NASA Astrophysics Data System (ADS)
Holan, Scott H.; Viator, John A.
2008-06-01
Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.
Automated wavelet denoising of photoacoustic signals for burn-depth image reconstruction
NASA Astrophysics Data System (ADS)
Holan, Scott H.; Viator, John A.
2007-02-01
Photoacoustic image reconstruction involves dozens or perhaps hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a sample with laser light are used to produce an image of the acoustic source. Each of these point measurements must undergo some signal processing, such as denoising and system deconvolution. In order to efficiently process the numerous signals acquired for photoacoustic imaging, we have developed an automated wavelet algorithm for processing signals generated in a burn injury phantom. We used the discrete wavelet transform to denoise photoacoustic signals generated in an optically turbid phantom containing whole blood. The denoising used level-independent universal thresholding, as developed by Donoho and Johnstone. The entire signal processing technique was automated so that no user intervention was needed to reconstruct the images. The signals were backprojected using the automated wavelet processing software; reconstruction using denoised signals improved image quality by 21%, as measured by a relative 2-norm difference scheme.
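The denoising step described above can be sketched end to end with a plain orthonormal Haar DWT and the Donoho-Johnstone universal threshold. The wavelet family actually used in the paper is not specified here, and the phantom signal, noise level, and noise-scale estimator (median absolute deviation of the finest details) are illustrative assumptions.

```python
import math
import random

def haar_fwd(x):
    """Orthonormal Haar DWT. Returns the final approximation plus the
    detail coefficients per level, finest level first. len(x) must be a
    power of two."""
    a, details = list(x), []
    while len(a) > 1:
        details.append([(a[i] - a[i + 1]) / math.sqrt(2)
                        for i in range(0, len(a), 2)])
        a = [(a[i] + a[i + 1]) / math.sqrt(2) for i in range(0, len(a), 2)]
    return a, details

def haar_inv(a, details):
    a = list(a)
    for d in reversed(details):
        nxt = []
        for ai, di in zip(a, d):
            nxt += [(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)]
        a = nxt
    return a

def denoise(x):
    """Level-independent universal threshold lam = sigma * sqrt(2 ln N),
    with sigma estimated from the median absolute finest-scale detail
    (soft thresholding of all detail coefficients)."""
    a, details = haar_fwd(x)
    finest = sorted(abs(d) for d in details[0])
    sigma = finest[len(finest) // 2] / 0.6745
    lam = sigma * math.sqrt(2 * math.log(len(x)))
    soft = lambda w: math.copysign(max(abs(w) - lam, 0.0), w)
    return haar_inv(a, [[soft(w) for w in d] for d in details])

# Piecewise-constant phantom signal plus Gaussian noise (illustrative).
random.seed(1)
true = [0.0] * 16 + [2.0] * 16 + [0.0] * 16 + [1.0] * 16
noisy = [t + random.gauss(0.0, 0.3) for t in true]
den = denoise(noisy)
err_noisy = math.sqrt(sum((v - t) ** 2 for v, t in zip(noisy, true)))
err_den = math.sqrt(sum((v - t) ** 2 for v, t in zip(den, true)))
```

Because the universal threshold exceeds the typical noise coefficient magnitude with high probability, nearly all noise-only coefficients are zeroed while the few large coefficients carrying the signal's edges survive, which is what makes the scheme fully automatic.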
NASA Astrophysics Data System (ADS)
Testa, D.; Carfantan, H.; Albergante, M.; Blanchard, P.; Bourguignon, S.; Fasoli, A.; Goodyear, A.; Klein, A.; Lister, J. B.; Panis, T.; JET Contributors
2016-12-01
Efficient, real-time and automated data analysis is one of the key elements for achieving scientific success in complex engineering and physical systems, two examples of which are the JET and ITER tokamaks. One problem common to these fields is the determination of pulsation modes from an irregularly sampled time series. To this end, a wealth of signal processing techniques are being applied to post-pulse and real-time data analysis in such complex systems. Here, we present a review of the applications of a method based on the sparse representation of signals, using examples of the synergies that can be exploited when combining ideas and methods from very different fields, such as astronomy, astrophysics and thermonuclear fusion plasmas. Examples of this work in astronomy and astrophysics are the analysis of pulsation modes in various classes of stars and the orbit determination software of the Pioneer spacecraft. Two examples of this work in thermonuclear fusion plasmas are the detection of magneto-hydrodynamic instabilities, which is now performed routinely at JET in real time on a sub-millisecond time scale, and the studies leading to the optimization of the magnetic diagnostic systems in ITER and TCV. These questions have been solved by formulating them as inverse problems, despite the fact that these application domains are extremely different from the classical uses of sparse representations, from both the theoretical and the computational point of view. The requirements, prospects and ideas for the signal processing and real-time data analysis applications of this method to the routine operation of ITER are also discussed. Finally, a very recent development has been the attempt to apply this method to the deconvolution of electric potential measurements performed during a ground-based survey of a proto-Villanovian necropolis in central Italy.
NASA Astrophysics Data System (ADS)
Young, Hsu-Wen Vincent; Hsu, Ke-Hsin; Pham, Van-Truong; Tran, Thi-Thao; Lo, Men-Tzung
2017-09-01
A new method for signal decomposition is proposed and tested. Based on self-consistent nonlinear wave equations with self-sustaining physical mechanisms in mind, the new method is adaptive and particularly effective for dealing with signals consisting of components on multiple time scales. By formulating the method as an optimization problem and developing the corresponding algorithm and tool, we have demonstrated its usefulness not only for analyzing simulated signals but, more importantly, also for real clinical data.
Robust reconstruction of a signal from its unthresholded recurrence plot subject to disturbances
NASA Astrophysics Data System (ADS)
Sipers, Aloys; Borm, Paul; Peeters, Ralf
2017-02-01
To make valid inferences from recurrence plots for time-delay embedded signals, two underlying key questions are: (1) to what extent does an unthresholded recurrence plot (URP) carry the same information as the signal that generated it, and (2) how does the information change when the URP gets distorted. We studied the first question in our earlier work [1], where it was shown that the URP admits the reconstruction of the underlying signal (up to its mean and a choice of sign) if and only if an associated graph is connected. Here we refine this result and give an explicit condition in terms of the embedding parameters and the discrete Fourier spectrum of the URP. We also develop a method for the reconstruction of the underlying signal which overcomes several drawbacks of earlier approaches. To address the second question we investigate the robustness of the proposed reconstruction method under disturbances. We carry out two simulation experiments, characterized by a broad-band and a narrow-band spectrum, respectively. For each experiment we choose a noise level and two different pairs of embedding parameters. The conventional binary recurrence plot (RP) is obtained from the URP by thresholding and zero-one conversion, which can be viewed as severe distortion acting on the URP. Typically the reconstruction of the underlying signal from an RP is expected to be rather inaccurate. However, by introducing the concept of a multi-level recurrence plot (MRP) we propose to bridge the information gap between the URP and the RP, while still achieving a high data compression rate. We demonstrate the working of the proposed reconstruction procedure on MRPs, indicating that MRPs with just a few discretization levels can usually capture signal properties and morphologies more accurately than conventional RPs.
Antelis, Javier M; Montesano, Luis; Ramos-Murguialday, Ander; Birbaumer, Niels; Minguez, Javier
2013-01-01
Several works have reported on the reconstruction of 2D/3D limb kinematics from low-frequency EEG signals using linear regression models, based on positive correlation values between the recorded and the reconstructed trajectories. This paper describes the mathematical properties of the linear model and of the correlation evaluation metric that may lead to a misinterpretation of the results of this type of decoder. Firstly, the use of a linear regression model to adjust the two temporal signals (EEG and velocity profiles) implies that the relevant component of the signal used for decoding (EEG) has to be in the same frequency range as the signal to be decoded (velocity profiles). Secondly, the use of a correlation to evaluate the fitting of two trajectories can lead to overly optimistic results, as this metric is invariant to scale. Also, the correlation has a non-linear nature that leads to higher values for sine/cosine-like signals at low frequencies. Analysis of these properties on the reconstruction results was carried out through an experiment performed in line with previous studies, where healthy participants executed predefined reaching movements of the hand in 3D space. While the correlations of limb velocity profiles reconstructed from low-frequency EEG were comparable to studies in this domain, a systematic statistical analysis revealed that these results were not above the chance level. The empirical chance level was estimated using random assignments of recorded velocity profiles and EEG signals, as well as combinations of randomly generated synthetic EEG with recorded velocity profiles and recorded EEG with randomly generated synthetic velocity profiles. The analysis shows that the positive correlation results in this experiment cannot be used as an indicator of successful trajectory reconstruction based on a neural correlate. Several directions are herein discussed to address the misinterpretation of results, as well as the implications for previous studies.
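Two of the pitfalls discussed above can be checked in a few lines: Pearson correlation is invariant to positive rescaling, and two unrelated low-frequency sinusoids still correlate strongly (their correlation is simply the cosine of the phase lag). The signals below are synthetic stand-ins for velocity profiles and decoder outputs, not the study's data.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

n = 1000
t = [2 * math.pi * 2 * k / n for k in range(n)]  # two full periods
vel = [math.sin(tk) for tk in t]                 # "velocity profile"

# Scale invariance: a reconstruction 1000x too small still scores r = 1,
# so a high correlation says nothing about the amplitude of the trajectory.
r_scaled = pearson(vel, [0.001 * v for v in vel])

# Chance level: an unrelated sinusoid at the same low frequency correlates
# at cos(phase lag), which can be large with no neural relationship at all.
lag = 0.7
r_chance = pearson(vel, [math.sin(tk + lag) for tk in t])  # ~= cos(0.7)
```

This is why the paper's surrogate procedure (random re-pairings of EEG and velocity profiles) is needed: for smooth low-frequency signals, the empirical chance distribution of r is far from zero.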
Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA
NASA Astrophysics Data System (ADS)
Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng
2011-12-01
The impact of a high-pressure water-jet on targets of different materials produces different mixed reflective sounds. In order to reconstruct the distribution of the reflective sound signals along the linear detecting line accurately and to separate the environment noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed in which the environment noise was simulated by band-limited white noise and the reflective sound signal by a pulse signal. The attenuation of the reflective sound signal over different transmission distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals, then whitened and separated by ICA. The final results verified that the environment noise can be separated and the sound distribution along the detecting line reconstructed effectively.
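The whitening-plus-separation step can be sketched with a compact symmetric FastICA (tanh nonlinearity, whitening by eigendecomposition). The pulse/noise sources, mixing matrix, and parameter values below are illustrative stand-ins for the paper's water-jet signals, not the authors' actual data:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA with tanh nonlinearity. X: (n_samples, n_mixtures)."""
    X = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))   # whitening transform
    Z = X @ (E @ np.diag(d ** -0.5) @ E.T)
    n = Z.shape[1]
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)                          # fixed-point update
        W = (G.T @ Z) / Z.shape[0] - np.diag((1 - G ** 2).mean(axis=0)) @ W
        U, _, Vt = np.linalg.svd(W)                   # symmetric decorrelation
        W = U @ Vt
    return Z @ W.T                                    # recovered sources

t = np.linspace(0, 1, 2000)
rng = np.random.default_rng(1)
# 'Reflective sound' stand-in: sparse pulse bursts; 'environment noise':
# band-limited white noise (white noise smoothed by a moving average).
pulse = np.sin(2 * np.pi * 40 * t) * (np.sin(2 * np.pi * 5 * t) > 0.95)
noise = np.convolve(rng.standard_normal(t.size), np.ones(15) / 15, mode="same")
sources = np.c_[pulse, noise]
mixed = sources @ np.array([[1.0, 0.4], [0.6, 1.0]])  # two-microphone mixtures
recovered = fastica(mixed)
```

Up to permutation and sign, the recovered components should correlate strongly with the original sources.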
Babadi, Behtash; Ba, Demba; Purdon, Patrick L.; Brown, Emery N.
2015-01-01
In this paper, we study the theoretical properties of a class of iteratively re-weighted least squares (IRLS) algorithms for sparse signal recovery in the presence of noise. We demonstrate a one-to-one correspondence between this class of algorithms and a class of Expectation-Maximization (EM) algorithms for constrained maximum likelihood estimation under a Gaussian scale mixture (GSM) distribution. The IRLS algorithms we consider are parametrized by 0 < ν ≤ 1 and ε > 0. The EM formalism, as well as the connection to GSMs, allows us to establish that the IRLS(ν, ε) algorithms minimize ε-smooth versions of the ℓν 'norms'. We leverage EM theory to show that, for each 0 < ν ≤ 1, the limit points of the sequence of IRLS(ν, ε) iterates are stationary points of the ε-smooth ℓν 'norm' minimization problem on the constraint set. Finally, we employ techniques from compressive sampling (CS) theory to show that the class of IRLS(ν, ε) algorithms is stable for each 0 < ν ≤ 1, if the limit point of the iterates coincides with the global minimizer. For the case ν = 1, we show that the algorithm converges exponentially fast to a neighborhood of the stationary point, and outline its generalization to super-exponential convergence for ν < 1. We demonstrate our claims via simulation experiments. The simplicity of IRLS, along with the theoretical guarantees provided in this contribution, makes a compelling case for its adoption as a standard tool for sparse signal recovery. PMID:26549965
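A minimal numerical sketch of the IRLS iteration analyzed above, for the noiseless constraint Ax = y. The problem sizes and the ε-annealing schedule are illustrative choices, not taken from the paper (which analyzes fixed ε):

```python
import numpy as np

def irls(A, y, nu=1.0, eps_floor=1e-10, n_iter=60):
    """IRLS(nu, eps): minimize the eps-smooth l_nu 'norm' subject to A x = y.
    Each step solves a weighted least-norm problem, x = Q A^T (A Q A^T)^-1 y
    with Q = diag((x_i^2 + eps)^(1 - nu/2)); eps is annealed toward a floor."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # dense starting point
    eps = 1.0
    for _ in range(n_iter):
        q = (x ** 2 + eps) ** (1 - nu / 2)            # diagonal of Q
        x = q * (A.T @ np.linalg.solve((A * q) @ A.T, y))
        eps = max(eps * 0.3, eps_floor)
    return x

rng = np.random.default_rng(0)
m, n, k = 30, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = irls(A, A @ x_true)
```

For this well-conditioned instance (k = 5 nonzeros, 30 Gaussian measurements), the ν = 1 iteration recovers the sparse vector to high accuracy.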
Signal quality quantification and waveform reconstruction of arterial blood pressure recordings.
Fanelli, A; Heldt, T
2014-01-01
Arterial blood pressure (ABP) is an important vital sign of the cardiovascular system. As with other physiological signals, its measurement can be corrupted by different sources of noise, interference, and artifact. Here, we present an algorithm for the quantification of signal quality and for the reconstruction of the ABP waveform in noise-corrupted segments of the measurement. The algorithm quantifies the quality of the ABP signal on a beat-by-beat basis by computing the normalized mean of successive differences of the ABP amplitude over each beat. In segments of poor signal quality, the ABP wavelets are then reconstructed on the basis of the expected cycle duration and envelope information derived from neighboring ABP wavelet segments. The algorithm was tested on two datasets of ABP waveform signals containing both invasive radial artery ABP and noninvasive ABP waveforms. Our results show that the approach is efficient in identifying the noisy segments (accuracy, sensitivity and specificity over 95%) and reliable in reconstructing beats that were artificially corrupted.
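The beat-wise quality index can be sketched as follows. The exact normalization in the paper may differ, so this is an assumed form (mean absolute successive difference normalized by the beat's amplitude range), with a synthetic beat standing in for real ABP data:

```python
import numpy as np

def beat_quality(beat):
    """Normalized mean of successive differences over one beat: noise and
    artifact raise sample-to-sample jumps relative to the pulse amplitude."""
    amp = beat.max() - beat.min()
    return np.abs(np.diff(beat)).mean() / amp if amp > 0 else np.inf

fs = 125                                   # Hz, a common ABP sampling rate
t = np.arange(0, 1, 1 / fs)
clean = 80 + 40 * np.maximum(np.sin(np.pi * t), 0) ** 2   # stylized 1-s beat (mmHg)
noisy = clean + 30 * np.random.default_rng(0).standard_normal(t.size)

q_clean, q_noisy = beat_quality(clean), beat_quality(noisy)
```

A corrupted beat scores markedly higher than a clean one, so thresholding this index flags segments for reconstruction.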
Theory of Multirate Signal Processing with Application to Signal and Image Reconstruction
2005-09-01
[Fragment] Reviews probability theory and linear algebra concepts (random vectors, Kronecker products), establishes notation and conventions for consistency throughout the work, develops multirate signal processing from a system-theoretic point of view, and introduces a linear algebraic approach to model various multirate operations for use in reconstruction.
Crack growth sparse pursuit for wind turbine blade
NASA Astrophysics Data System (ADS)
Li, Xiang; Yang, Zhibo; Zhang, Han; Du, Zhaohui; Chen, Xuefeng
2015-01-01
One critical challenge to achieving reliable wind turbine blade structural health monitoring (SHM) stems from the anisotropic nature and limited accessibility of composite laminates. The typical pitch-catch PZT approach generally detects structural damage using both measured and baseline signals. However, the accuracy of imaging or tomography by delay-and-sum approaches based on these signals requires improvement in practice. By modeling Lamb wave propagation and establishing a dictionary that corresponds to scatterers, sparse reconstruction becomes an attractive approach to structural health monitoring owing to its promising performance. This paper proposes a neighbor dictionary that identifies the initial crack location through sparse reconstruction and then presents a growth sparse pursuit algorithm that can precisely track the extension of the crack. An experiment diagnosing a composite wind turbine blade with an artificial crack was performed and validates the proposed approach. The results give accurate crack detection, with correct locations and extension lengths.
Serteyn, A; Vullings, R; Meftah, M; Bergmans, J W M
2014-01-01
Many healthcare and lifestyle applications could benefit from capacitive measurement systems for unobtrusive ECG monitoring. However, a key technical challenge remains: the susceptibility of such systems to motion artifacts and common-mode interference. With this in mind, we developed a novel method to reduce the various types of artifacts present in capacitive ECG measurements. The objective is to perform ECG reconstruction and channel balancing in an automated and continuous manner. The proposed method consists of (a) modeling the measurement system, (b) specifically parameterizing the reconstruction equation, and (c) adaptively estimating the parameters. A multi-frequency injection signal serves to estimate and track the variations of the different parameters of the reconstruction equation. A preliminary investigation of the validity of the method was performed in both simulation and a laboratory environment: the method shows benefits in terms of common-mode interference and motion artifact reduction, resulting in improved R-peak detection.
Nabeshima, Yuji; Kimura, Yoshitaka; Ito, Takuro; Ohwada, Kazunari; Karashima, Akihiro; Katayama, Norihiro; Nakao, Mitsuyuki
2013-01-01
Fetal electrocardiogram (fECG) and its vector form (fVECG) can provide significant clinical information concerning the physiological condition of a fetus. Various independent component analysis (ICA)-based methods for extracting fECG from maternal abdominal signals have been proposed. Because full extraction of the component waves, such as P, Q, R, S, and T, is difficult to realize under noisy and nonstationary conditions, the fVECG is even harder to reconstruct, since different projections of the fetal heart vector are required. In order to reconstruct the fVECG, we propose a novel method for synthesizing different projections of the heart vector that makes good use of fetal movement. The method consists of ICA, estimation of the rotation angles of the fetus, and synthesis of projections of the heart vector. Through applications to synthetic and actual data, our method is shown to estimate the rotation angle of the fetus precisely and to reconstruct the fVECG successfully.
Group sparse optimization by alternating direction method
NASA Astrophysics Data System (ADS)
Deng, Wei; Yin, Wotao; Zhang, Yin
2013-09-01
This paper proposes efficient algorithms for group sparse optimization with mixed ℓ2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity can often lead to better signal recovery/feature selection. The ℓ2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional ℓ1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the ℓ2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.
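For non-overlapping groups, the splitting-based ADM iteration has a particularly simple form; a sketch (penalty parameter, group layout, and problem sizes are arbitrary illustrative choices, not the paper's experiments):

```python
import numpy as np

def group_lasso_admm(A, b, groups, lam=0.1, rho=1.0, n_iter=300):
    """ADM/ADMM for min 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2.
    Splitting x = z gives an x-update (ridge solve), a z-update (per-group
    block soft-thresholding), and a dual ascent step on the multiplier u."""
    n = A.shape[1]
    M = A.T @ A + rho * np.eye(n)          # fixed matrix, reused every iteration
    Atb = A.T @ b
    z, u = np.zeros(n), np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))
        z = np.zeros(n)
        for g in groups:                   # block soft-thresholding
            v = x[g] + u[g]
            nv = np.linalg.norm(v)
            if nv > lam / rho:
                z[g] = (1 - lam / (rho * nv)) * v
        u = u + x - z
    return z

rng = np.random.default_rng(0)
groups = [np.arange(4 * i, 4 * i + 4) for i in range(10)]   # 10 groups of 4
x_true = np.zeros(40)
x_true[groups[1]] = rng.standard_normal(4)                  # two active groups
x_true[groups[5]] = rng.standard_normal(4)
A = rng.standard_normal((60, 40))
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = group_lasso_admm(A, b, groups, lam=0.5)
```

The recovered vector is zero outside the two active groups, illustrating how the mixed ℓ2,1 penalty selects whole groups at once.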
Viggiano, Alessandro; Marinesco, Stéphane; Pain, Frédéric; Meiller, Anne; Gurden, Hirac
2012-04-30
A new feasible and reproducible method to reconstruct local field potentials from amperometric biosensor signals is presented. It is based on a least-squares fit, with two time constants, of the current response of the biosensor electrode to a voltage step. After determination of the electrode impedance, a Fast Fourier Transform (FFT) and inverse FFT are performed to convert the recorded amperometric signals into voltage and trace the local field potentials using a resistor-capacitor circuit-based model. We applied this method to reconstruct evoked field potentials from currents recorded by a lactate biosensor in the rat dentate gyrus after stimulation of the perforant pathway in vivo. The initial slope of the reconstructed field excitatory postsynaptic potentials was used to demonstrate long-term potentiation induced by high-frequency stimulation of the perforant path. Our results show that reconstructing evoked potentials from amperometric recordings is a reliable way to obtain in vivo electrophysiological and amperometric information simultaneously from the same electrode, in order to understand how chemical compounds vary with and modulate the dynamics of brain activity.
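The frequency-domain conversion step can be sketched as follows. The two-time-constant impedance values and the 'recorded' current below are illustrative placeholders, not values from the paper:

```python
import numpy as np

fs = 1000.0                                # Hz, sampling rate
t = np.arange(0, 1, 1 / fs)

# Hypothetical electrode model: two parallel RC cells in series, with time
# constants that the paper obtains by least-squares fitting the current
# response to a voltage step. The component values here are illustrative.
R1, C1 = 1.0e6, 1.0e-9
R2, C2 = 5.0e6, 1.0e-8

f = np.fft.rfftfreq(t.size, 1 / fs)
jw = 2j * np.pi * f
Z = R1 / (1 + jw * R1 * C1) + R2 / (1 + jw * R2 * C2)   # electrode impedance

i_rec = 1e-9 * np.sin(2 * np.pi * 5 * t)   # 'recorded' amperometric current (A)
# FFT, multiply by the impedance, inverse FFT: current -> reconstructed voltage.
v_rec = np.fft.irfft(np.fft.rfft(i_rec) * Z, n=t.size)
```

The reconstructed potential is the recorded current filtered through the fitted electrode impedance, which is what lets one electrode yield both chemical and electrophysiological signals.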
Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries
Tsiartas, Andreas; Stein, Leah I.; Cermak, Sharon A.; Narayanan, Shrikanth S.
2015-01-01
Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slowly varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies that fit a predetermined structure to the signal, results indicate that our approach provides benefits across all aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries along with sparse decomposition methods to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features. PMID:25494494
Fast multi-dimensional NMR acquisition and processing using the sparse FFT.
Hassanieh, Haitham; Mayzel, Maxim; Shi, Lixin; Katabi, Dina; Orekhov, Vladislav Yu
2015-09-01
Increasing the dimensionality of NMR experiments strongly enhances the spectral resolution and provides invaluable direct information about atomic interactions. However, the price tag is high: long measurement times and heavy requirements on the computation power and data storage. We introduce sparse fast Fourier transform as a new method of NMR signal collection and processing, which is capable of reconstructing high quality spectra of large size and dimensionality with short measurement times, faster computations than the fast Fourier transform, and minimal storage for processing and handling of sparse spectra. The new algorithm is described and demonstrated for a 4D BEST-HNCOCA spectrum.
NASA Astrophysics Data System (ADS)
Guo, Rui; Li, Weixing; Zhang, Yue; Chen, Zengping
2016-01-01
A direction of arrival (DOA) estimation algorithm for coherent signals in the presence of unknown mutual coupling is proposed. A group of auxiliary sensors in a uniform linear array is applied to eliminate the effects of mutual coupling on the orthogonality of the subspaces. Then, a Toeplitz matrix, whose rank is independent of the coherence between impinging signals, is reconstructed to eliminate the rank loss of the spatial covariance matrix. Therefore, the signal and noise subspaces can be estimated properly. This method can accurately estimate the DOAs of coherent signals under unknown mutual coupling without any iteration or calibration sources. It has a low computational burden and high accuracy. Simulation results demonstrate the effectiveness of the algorithm.
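The rank-restoring effect of the Toeplitz reconstruction can be illustrated numerically. The array size, angles, coherence coefficient, and SNR below are arbitrary, and the paper's auxiliary-sensor handling of mutual coupling is omitted:

```python
import numpy as np

def steering(theta_deg, n, d=0.5):
    """ULA steering vector, element spacing d in wavelengths."""
    return np.exp(-2j * np.pi * d * np.arange(n) * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(0)
n, snaps = 8, 2000
A = np.c_[steering(-10.0, n), steering(25.0, n)]
s = rng.standard_normal(snaps)
S = np.vstack([s, 0.8 * s])                       # two fully coherent signals
noise = 0.05 * (rng.standard_normal((n, snaps)) + 1j * rng.standard_normal((n, snaps)))
X = A @ S + noise
R = X @ X.conj().T / snaps                        # covariance suffers rank loss

# Hermitian Toeplitz matrix built from the first row of R: its signal rank
# equals the number of sources regardless of their coherence.
r = np.concatenate([R[0, :0:-1].conj(), R[0, :]]) # lags -(n-1) .. (n-1)
idx = np.arange(n)
T = r[(idx[None, :] - idx[:, None]) + n - 1]

eig_R = np.sort(np.linalg.eigvalsh(R))[::-1]
eig_T = np.sort(np.abs(np.linalg.eigvalsh(T)))[::-1]
```

The covariance R shows only one dominant eigenvalue (the coherent pair collapses into one), while T exposes two, so subspace methods such as MUSIC can again resolve both directions.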
NASA Astrophysics Data System (ADS)
Su, Zhi-Yuan; Wu, Tzuyin; Yang, Po-Hua; Wang, Yeng-Tseng
2008-04-01
The heartbeat rate signal provides an invaluable means of assessing the sympathetic-parasympathetic balance of the human autonomic nervous system and thus represents an ideal diagnostic mechanism for detecting a variety of disorders such as epilepsy, cardiac disease and so forth. The current study analyses the dynamics of the heartbeat rate signal of known epilepsy sufferers in order to obtain a detailed understanding of the heart rate pattern during a seizure event. In the proposed approach, the ECG signals are converted into heartbeat rate signals and the embedology theorem is then used to construct the corresponding multidimensional phase space. The dynamics of the heartbeat rate signal are then analyzed before, during and after an epileptic seizure by examining the maximum Lyapunov exponent and the correlation dimension of the attractors in the reconstructed phase space. In general, the results reveal that the heartbeat rate signal transits from an aperiodic, highly-complex behaviour before an epileptic seizure to a low dimensional chaotic motion during the seizure event. Following the seizure, the signal trajectories return to a highly-complex state, and the complex signal patterns associated with normal physiological conditions reappear.
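The phase-space construction and the correlation-dimension ingredient can be sketched as follows. The delay, embedding dimension, and toy signal are illustrative; real heartbeat-rate series and Lyapunov-exponent estimation are more involved:

```python
import numpy as np

def delay_embed(x, dim=3, tau=8):
    """Reconstruct a phase-space trajectory from a scalar series (Takens-style
    delay embedding): point i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(pts, r):
    """Grassberger-Procaccia C(r): fraction of point pairs closer than r."""
    sub = pts[::10]                        # subsample to limit the pair count
    d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
    iu = np.triu_indices(len(sub), k=1)
    return (d[iu] < r).mean()

t = np.arange(4000) * 0.01
x = np.sin(2 * np.pi * 1.1 * t) + 0.5 * np.sin(2 * np.pi * 0.37 * t)  # toy series
pts = delay_embed(x)
```

The correlation dimension is then estimated from the slope of log C(r) versus log r over a scaling range, and the maximum Lyapunov exponent from the divergence rate of nearby trajectories in `pts`.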
Flexible sparse regularization
NASA Astrophysics Data System (ADS)
Lorenz, Dirk A.; Resmerita, Elena
2017-01-01
The seminal paper of Daubechies, Defrise and De Mol made clear that ℓp spaces with p ∈ [1,2) and p-powers of the corresponding norms are appropriate settings for dealing with the reconstruction of sparse solutions of ill-posed problems by regularization. It seems that the case p = 1 provides the best results in most situations compared to the cases p ∈ (1,2). An extensive literature also gives great credit to using ℓp spaces with p ∈ (0,1) together with the corresponding quasi-norms, although one has to tackle challenging numerical problems raised by the non-convexity of the quasi-norms. In any of these settings, whether superlinear, linear or sublinear, the question of how to choose the exponent p has been not only a numerical issue, but also a philosophical one. In this work we introduce a more flexible way of sparse regularization by varying exponents. We introduce the corresponding functional analytic framework, which leaves the setting of normed spaces but works with so-called F-norms. One curious result is that there are F-norms which generate the ℓ1 space but are strictly convex, while the ℓ1-norm is just convex.
Sparse cortical source localization using spatio-temporal atoms.
Korats, Gundars; Ranta, Radu; Le Cam, Steven; Louis-Dorr, Valérie
2015-01-01
This paper addresses the problem of sparse localization of cortical sources from scalp EEG recordings. Localization algorithms use a propagation model under spatial and/or temporal constraints, but their performance highly depends on the signal-to-noise ratio (SNR) of the data. In this work we propose a dictionary-based sparse localization method which uses a data-driven spatio-temporal dictionary to reconstruct the measurements using the Single Best Replacement (SBR) and Continuation Single Best Replacement (CSBR) algorithms. We tested and compared our method with the well-known MUSIC and RAP-MUSIC algorithms on simulated realistic data, for different noise levels. The results show that our method has a strong advantage over MUSIC-type methods in the case of synchronized sources.
Orthogonal Procrustes Analysis for Dictionary Learning in Sparse Linear Representation.
Grossi, Giuliano; Lanzarotti, Raffaella; Lin, Jianyi
2017-01-01
In the sparse representation model, the design of overcomplete dictionaries plays a key role in effectiveness and applicability across domains. Recent research has produced several dictionary learning approaches, and dictionaries learnt from data examples have been shown to significantly outperform structured ones, e.g. wavelet transforms. In this context, learning consists in adapting the dictionary atoms to a set of training signals in order to promote a sparse representation that minimizes the reconstruction error. Finding the best-fitting dictionary remains a very difficult task, leaving the question still open. A well-established heuristic for tackling this problem is an iterative alternating scheme, adopted for instance in the well-known K-SVD algorithm. Essentially, it consists in repeating two stages: the former promotes sparse coding of the training set and the latter adapts the dictionary to reduce the error. In this paper we present R-SVD, a new method that, while maintaining the alternating scheme, adopts Orthogonal Procrustes analysis to update the dictionary atoms, suitably arranged into groups. Comparative experiments on synthetic data prove the effectiveness of R-SVD with respect to well-known dictionary learning algorithms such as K-SVD, ILS-DLA and the online method OSDL. Moreover, experiments on natural data such as ECG compression, EEG sparse representation, and image modeling confirm R-SVD's robustness and wide applicability.
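The core Procrustes step behind such an update can be sketched in isolation: given a group of atoms P and a target Q, the orthogonal matrix minimizing ||RP − Q||_F has the closed form R = UVᵀ from the SVD of QPᵀ. The full R-SVD group arrangement and sparse-coding stage are omitted here, and the matrices are synthetic:

```python
import numpy as np

def procrustes_rotation(P, Q):
    """Solve min_R ||R @ P - Q||_F over orthogonal R (closed form via SVD)."""
    U, _, Vt = np.linalg.svd(Q @ P.T)
    return U @ Vt

rng = np.random.default_rng(0)
P = rng.standard_normal((5, 20))              # a 'group' of dictionary columns
R_true, _ = np.linalg.qr(rng.standard_normal((5, 5)))
Q = R_true @ P                                # target reachable by a pure rotation
R = procrustes_rotation(P, Q)
```

Because QPᵀ = R_true(PPᵀ) with PPᵀ positive definite, the polar factor is unique and the rotation is recovered exactly.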
Sparse Methods for Biomedical Data.
Ye, Jieping; Liu, Jun
2012-06-01
Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken a center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research efforts in the past decade due to its sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data.
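The biomarker-selection use case can be sketched with an ℓ1-penalized regression on synthetic 'gene expression' data. The sizes, active indices, and regularization weight are arbitrary, and scikit-learn's Lasso is used as a stand-in solver:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_genes = 100, 500              # far more 'genes' than samples
X = rng.standard_normal((n_samples, n_genes))
beta = np.zeros(n_genes)
beta[[3, 42, 117]] = [2.0, -1.5, 1.0]      # only three genes drive the outcome
y = X @ beta + 0.1 * rng.standard_normal(n_samples)

model = Lasso(alpha=0.1).fit(X, y)         # l1 penalty zeroes irrelevant genes
selected = np.flatnonzero(model.coef_)
```

Despite having five times more candidate features than samples, the ℓ1 penalty recovers the small relevant subset, which is exactly the biomarker-selection scenario described above.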
Reconstruction of signal in plastic scintillator of PET using Tikhonov regularization.
Raczynski, Lech
2015-08-01
The new concept of a Time-of-Flight Positron Emission Tomography (TOF-PET) detection system, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The Jagiellonian-PET (J-PET) detector improves the TOF resolution through the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on the Tikhonov regularization method, is presented. From Bayesian theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. The method is tested using signals registered by a single detection module of the J-PET detector, built from a 30 cm long plastic scintillator strip. It is shown that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction from 1.05 cm to 0.94 cm. Moreover, the result obtained is only slightly worse than that evaluated using the original raw signal, for which the spatial resolution under the same conditions is 0.93 cm.
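The flavor of Tikhonov-regularized waveform recovery from a handful of samples can be sketched generically. The pulse shape, observation points, and first-difference smoothness penalty below are illustrative assumptions; the J-PET scheme uses the detector's actual signal model and threshold samples:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 200)                    # ns, dense recovery grid
true = np.exp(-0.5 * ((t - 8.0) / 2.0) ** 2)       # stand-in detector pulse

# Sparse observations: the waveform is seen only through a few samples,
# mimicking a front-end that records a handful of values per pulse.
obs = np.array([40, 70, 90, 120])
A = np.zeros((obs.size, t.size))
A[np.arange(obs.size), obs] = 1.0
y = A @ true + 0.01 * rng.standard_normal(obs.size)

# Tikhonov-regularized recovery with a first-difference smoothness penalty:
#   x_hat = argmin ||A x - y||^2 + lam * ||D x||^2
D = np.diff(np.eye(t.size), n=1, axis=0)
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * (D.T @ D), A.T @ y)
```

The normal-equations solve interpolates the sparse samples with a smooth waveform; in a Bayesian reading, the inverse of the regularized matrix also gives the posterior covariance used for the recovery-error formula.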
NASA Astrophysics Data System (ADS)
Baraldi, Piero; Di Maio, Francesco; Turati, Pietro; Zio, Enrico
2015-08-01
In this work, we propose a modification of the traditional Auto-Associative Kernel Regression (AAKR) method which enhances the signal reconstruction robustness, i.e., the capability of reconstructing abnormal signals to the values expected in normal conditions. The modification is based on a new procedure for computing the similarity between the present measurements and the historical patterns used to perform the signal reconstruction. The underlying conjecture is that malfunctions causing variations of a small number of signals are more frequent than those causing variations of a large number of signals. The proposed method has been applied to real normal-condition data collected in an industrial plant for energy production, and its performance has been verified on synthetic and real malfunctions. The results show an improvement in the early detection of abnormal conditions and in the correct identification of the signals responsible for triggering the detection.
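The baseline AAKR reconstruction that the paper modifies can be sketched as follows (Gaussian kernel on Euclidean distance; bandwidth, history, and the abnormal query are illustrative). Note how a deviation in one signal also drags the reconstruction of the healthy signal, the spillover that motivates changing the similarity measure:

```python
import numpy as np

def aakr_reconstruct(X_hist, x_obs, h=0.3):
    """AAKR: expected normal-condition values as a kernel-weighted average of
    historical normal patterns, weights from the distance to the query."""
    d2 = ((X_hist - x_obs) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * h ** 2))
    return (w / w.sum()) @ X_hist

rng = np.random.default_rng(0)
s = rng.standard_normal(500)                       # normal operating history:
X_hist = np.c_[s, 0.9 * s] + 0.05 * rng.standard_normal((500, 2))  # 2 signals

x_normal = np.array([0.5, 0.45])                   # consistent observation
x_abnormal = np.array([0.5, 1.5])                  # signal 2 has drifted
rec_normal = aakr_reconstruct(X_hist, x_normal)
rec_abnormal = aakr_reconstruct(X_hist, x_abnormal)
```

For the normal query the reconstruction matches the observation; for the abnormal one, the residual on signal 2 flags the fault, but signal 1's reconstruction is also displaced along the historical correlation line.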
Camargo, Gabriel C; Erthal, Fernanda; Sabioni, Leticia; Penna, Filipe; Strecker, Ralph; Schmidt, Michaela; Zenge, Michael O; Lima, Ronaldo de S L; Gottlieb, Ilan
2017-05-01
Segmented cine imaging with a steady-state free-precession sequence (Cine-SSFP) is currently the gold-standard technique for measuring ventricular volumes and mass, but due to multiple breath-hold (BH) requirements, it is prone to misalignment of consecutive slices, time-consuming, and dependent on respiratory capacity. Real-time cine avoids those limitations, but the poor spatial and temporal resolution of conventional sequences has prevented its routine application. We sought to examine the accuracy and feasibility of a newly developed real-time sequence with aggressive under-sampling of k-space using sparse sampling and iterative reconstruction (Cine-RT). Stacks of short-axis cines were acquired covering both ventricles in a 1.5 T system using the gold-standard Cine-SSFP and Cine-RT. Acquisition parameters for Cine-SSFP were: acquisition matrix of 224 × 196, temporal resolution of 39 ms, retrospective gating, with an average of 8 heartbeats per slice and 1-2 slices per BH. For Cine-RT: acquisition matrix of 224 × 196, sparse sampling net acceleration factor of 11.3, temporal resolution of 41 ms, prospective gating, real-time acquisition of 1 heartbeat per slice, and all slices in one BH. LV contours were drawn at end diastole and end systole to derive LV volumes and mass. Forty-one consecutive patients (15 male; 41 ± 17 years) in sinus rhythm were successfully included. All images from Cine-SSFP and Cine-RT were considered to have excellent quality. Cine-RT-derived LV volumes and mass were slightly underestimated but strongly correlated with the gold-standard Cine-SSFP values. Inter- and intra-observer analyses presented similar results for both sequences. Cine-RT featuring sparse sampling and iterative reconstruction can achieve spatial and temporal resolution equivalent to Cine-SSFP, providing excellent image quality, with similar measurement precision and highly correlated, only slightly underestimated, volume and mass values. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Osman, Matthew; Das, Sarah B.; Marchal, Olivier; Evans, Matthew J.
2017-04-01
Methanesulfonic acid (MSA; CH3SO3H) in polar ice cores is a unique proxy of marine primary productivity, synoptic atmospheric transport, and regional sea ice behavior. However, MSA can be unstable within the ice column, leading to uncertainties surrounding the integrity of its paleoclimatic signal. Here, we use ice core records coupled with forward and inverse numerical models to investigate the post-depositional processes affecting the migration of MSA within the firn and ice column, and attempt to reconstruct the original signal in the ice column. The forward model, detailing the vertical diffusive transport of soluble impurities through supercooled liquid pathways, allows us to systematically assess the contribution of varying influences on the post-depositional migration of MSA. Our results show that two site-specific variables in particular, (i) snow accumulation rate and (ii) seasonal concentration gradients of Na+ (typically the most concentrated sea salt species), may be sufficient to reasonably predict the timing and magnitude of MSA migration within the ice column. However, at present the temporal accuracy of the forward MSA migration model remains limited by inadequate constraints on the diffusion coefficient of MSA, D_MS-. Specifically, we find that previous estimates of D_MS- are unable to reproduce, within significant uncertainty, the progressive phase alignment of the MSA and Na+ signals observed in real Antarctic ice cores. To attempt to correct for the effects of post-depositional migration, we combine recent high-resolution West Antarctic MSA data with sequential methods from optimal control theory (a Kalman filter and a related fixed-interval smoother) to reconstruct and provide uncertainty estimates on the original, pre-migrated MSA profile. We find that although the reconstructed MSA profile provides a reasonable estimate of the original MSA signal, the large uncertainties associated with this reconstructed signal cannot be objectively discriminated
NASA Astrophysics Data System (ADS)
Hitt, N. T.; Cobb, K. M.; Sayani, H. R.; Grothe, P. R.; Atwood, A. R.; O'Connor, G.; Chen, T.; Hagos, M. M.; Deocampo, D.; Edwards, R. L.; Cheng, H.; Lu, Y.; Thompson, D. M.
2016-12-01
Sea-surface temperature (SST) variability in the central tropical Pacific drives global-scale responses through atmospheric teleconnections, so the response of this region to anthropogenic forcing has important implications for regional climate in many areas. However, quantification of anthropogenic SST trends in the central tropical Pacific is complicated by the fact that instrumental SST observations in this region are extremely limited prior to 1950, with trends of opposite sign observed across the various gridded instrumental datasets (Deser et al., 2010). Researchers have turned to multi-century coral records to reconstruct ocean temperatures through time, but the paucity of such records prohibits the generation of uncertainty estimates. In this study, we use a large collection of U/Th-dated fossil corals to investigate a new ensemble approach to reconstructing temperature in the central Pacific over the late 20th century. Here we combine monthly resolved d18O and Sr/Ca from eight 5-14-year-long coral records from Christmas Island (2°N, 157°W) to quantify temperature and hydrological trends in this region from 1930 to present. We compare our fossil coral ensemble reconstruction to a long modern coral core from this site that extends back to 1940, as well as to gridded SST datasets. We also provide the first well-replicated coral d18O and Sr/Ca records across both the 1997/98 and 2015/16 El Niño events, comparing the strength of these two events in the context of the long-term temperature trends observed in our longer reconstruction. We conclude that the fossil coral ensemble approach provides a robust means of reconstructing 20th century climate trends. Deser et al., 2010, GRL, doi: 10.1029/2010GL043321
NASA Astrophysics Data System (ADS)
Li, Q.; Gasparini, N. M.; Straub, K. M.
2015-12-01
Changes in tectonics can affect erosion rates across a mountain belt, leading to non-steady sediment flux delivery to fluvial transport systems. The sediment flux signal produced from time-varying tectonics may eventually be recorded in a depositional basin. However, before the sediment flux from an erosional watershed is fed to the downstream transport system and preserved in sedimentary deposits, tectonic signals can be distorted or even destroyed as they are transformed into a sediment-flux signal that is exported out of a watershed. In this study, we use the Channel-Hillslope Integrated Landscape Development (CHILD) model to explore how the sediment flux delivered from a mountain watershed responds to non-steady rock uplift. We observe that (1) a non-linear relationship between the erosion response and tectonic perturbation can lead to a sediment-flux signal that is out of phase with the change in uplift rate; (2) in some cases in which the uplift perturbation is short, the sediment flux signal may contain no record of the change; (3) uplift rates interpreted from sediment flux at the outlet of a transient erosional landscape are likely to be underestimated. All these observations highlight the difficulty in accurately reconstructing tectonic history from sediment flux records. Results from this study will help to constrain what tectonic signals may be evident in the sediment flux delivered from an erosional system and therefore have the potential to be recorded in stratigraphy, ultimately improving our ability to interpret stratigraphy.
Performances of the signal reconstruction in the ATLAS Hadronic Tile Calorimeter
NASA Astrophysics Data System (ADS)
Meoni, E.
2013-08-01
The Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of ATLAS. It is a key detector for the reconstruction of hadrons, jets, tau leptons and missing transverse energy. TileCal is a sampling calorimeter using steel as an absorber and plastic scintillators as an active medium. The scintillators are read out by wavelength-shifting fibers coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The read-out system is designed to reconstruct the data in real time, fulfilling the tight time constraint imposed by the ATLAS first-level trigger rate (100 kHz). The signal amplitude and phase for each channel are measured using Optimal Filtering algorithms, both online and offline. We present the performance of these techniques on data collected in proton-proton collisions at a center-of-mass energy of 7 TeV. We show in particular the measurement of low amplitudes, close to the pedestal value, using as a probe high-transverse-momentum muons produced in the proton-proton collisions.
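The idea behind optimal filtering — reconstructing amplitude and phase as fixed linear combinations of the digitized samples — can be sketched minimally. The pulse shape below is a toy function, not the real TileCal shape, and the weights are built with a plain pseudo-inverse, i.e. assuming white unit-variance noise; the full optimal filter would use the measured noise covariance.

```python
import numpy as np

def of_weights(g, dg):
    """Weights reconstructing the amplitude (and amplitude*phase) as
    linear combinations of the samples, assuming white unit-variance
    noise. g: pulse-shape samples; dg: its time derivative."""
    M = np.column_stack([g, dg])
    W = np.linalg.pinv(M)          # W @ M = identity -> unbiased estimates
    return W[0], W[1]

# toy pulse shape sampled every 25 ns (illustrative, not TileCal's)
t = np.arange(-75.0, 101.0, 25.0)
tau = 50.0
g = np.where(t > -tau, (1 + t / tau) * np.exp(-t / tau), 0.0)
g /= g.max()
dg = np.gradient(g, 25.0)
a, b = of_weights(g, dg)

samples = 500.0 * g                # noiseless digitized pulse, amplitude 500
amp_est = a @ samples              # linear amplitude reconstruction
```

Because the estimate is a dot product with precomputed weights, it is cheap enough for the real-time constraint mentioned in the abstract.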
Roever, Christian; Bizouard, Marie-Anne; Christensen, Nelson; Dimmelmeier, Harald; Heng, Ik Siong; Meyer, Renate
2009-11-15
Presented in this paper is a technique that we propose for extracting the physical parameters of a rotating stellar core collapse from the observation of the associated gravitational wave signal from the collapse and core bounce. Data from interferometric gravitational wave detectors can be used to provide information on the mass of the progenitor model, precollapse rotation, and the nuclear equation of state. We use waveform libraries provided by the latest numerical simulations of rotating stellar core collapse models in general relativity, and from them create an orthogonal set of eigenvectors using principal component analysis. Bayesian inference techniques are then used to reconstruct the associated gravitational wave signal that is assumed to be detected by an interferometric detector. Posterior probability distribution functions are derived for the amplitudes of the principal component analysis eigenvectors, and the pulse arrival time. We show how the reconstructed signal and the principal component analysis eigenvector amplitude estimates may provide information on the physical parameters associated with the core collapse event.
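The principal-component step — turning a waveform catalogue into an orthogonal basis and estimating eigenvector amplitudes — can be sketched with a toy catalogue. The damped sinusoids below merely stand in for the numerical-relativity waveforms, and the Bayesian posterior sampling over amplitudes and arrival time is not reproduced; only the deterministic projection is shown.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256)
# toy waveform catalogue: damped sinusoids standing in for the
# simulated core-collapse signals (illustrative only)
catalogue = np.array([np.exp(-d * t) * np.sin(2 * np.pi * 10 * t)
                      for d in np.linspace(1.0, 9.0, 30)])

# principal component analysis via SVD of the mean-centred catalogue
mean = catalogue.mean(axis=0)
U, S, Vt = np.linalg.svd(catalogue - mean, full_matrices=False)
basis = Vt[:8]                           # leading orthonormal eigenvectors

# project a noisy "detected" signal onto the basis to estimate the
# eigenvector amplitudes, then reconstruct the waveform from them
signal = catalogue[12] + rng.normal(0.0, 0.01, t.size)
coeff = basis @ (signal - mean)          # eigenvector amplitude estimates
recon = mean + coeff @ basis
```

In the full method these amplitude estimates become parameters with posterior distributions rather than point projections.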
TreSpEx—Detection of Misleading Signal in Phylogenetic Reconstructions Based on Tree Information
Struck, Torsten H
2014-01-01
Phylogenies of species or genes are commonplace nowadays in many areas of comparative biological studies. However, phylogenetic reconstructions can be compromised by artificial signals such as paralogy, long-branch attraction, saturation, or conflict between different datasets. These signals might mislead the reconstruction even in phylogenomic studies employing hundreds of genes. Unfortunately, no program has allowed the detection of such effects in combination with integration into automatic processing pipelines. TreSpEx (Tree Space Explorer) now combines different approaches (including statistical tests), which utilize tree-based information like nodal support or patristic distances (PDs) to identify misleading signals. The program enables the parallel analysis of hundreds of trees and/or predefined gene partitions, and being command-line driven, it can be integrated into automatic processing pipelines. TreSpEx is implemented in Perl and supported on Linux, Mac OS X, and MS Windows. Source code, binaries, and additional material are freely available at http://www.annelida.de/research/bioinformatics/software.html. PMID:24701118
Sparse representation utilizing tight frame for phase retrieval
NASA Astrophysics Data System (ADS)
Shi, Baoshun; Lian, Qiusheng; Chen, Shuzhen
2015-12-01
We treat the phase retrieval (PR) problem of reconstructing the signal of interest from its Fourier magnitude. Since the Fourier phase information is lost, the problem is ill-posed. Several techniques address this problem by exploiting various priors such as non-negativity, support, and Fourier magnitude constraints. Recent methods exploiting sparsity have been developed to improve the reconstruction quality. However, previous algorithms utilizing a sparsity prior suffer either from low reconstruction quality at low oversampling factors or from sensitivity to noise. To address these issues, we propose a framework that exploits sparsity of the signal in the translation invariant Haar pyramid (TIHP) tight frame. Based on this sparsity prior, we formulate a sparse representation regularization term and incorporate it into the PR optimization problem. We propose an alternating iterative algorithm for solving the corresponding non-convex problem by dividing it into several subproblems. We give the optimal solution to each subproblem, and experimental simulations under noise-free and noisy scenarios indicate that our proposed algorithm obtains better reconstruction quality than conventional alternating projection methods, and even outperforms recent sparsity-based algorithms in terms of reconstruction quality.
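The flavour of sparsity-aided alternating projections can be sketched in a few lines. This toy uses a support constraint plus hard-thresholding in the signal domain as its crude sparsity prior, not the TIHP tight-frame regularization of the paper, and like all alternating-projection PR schemes it can stagnate in local minima; sizes and the sparsity level `k` are assumptions.

```python
import numpy as np

def phase_retrieval_sparse(mag, support, k, iters=500, seed=0):
    """Toy alternating-projection phase retrieval: alternately enforce
    the measured Fourier magnitude, a known support, and k-sparsity
    (hard-thresholding). Illustrative only; not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    n = mag.size
    x = rng.normal(size=n) * support
    for _ in range(iters):
        X = np.fft.fft(x)
        X = mag * np.exp(1j * np.angle(X))     # Fourier-magnitude projection
        x = np.real(np.fft.ifft(X)) * support  # support projection
        idx = np.argsort(np.abs(x))[:-k]       # keep only k largest entries
        x[idx] = 0.0
    return x

# demo: a 4-sparse signal supported on the first half of the window
support = np.zeros(64); support[:32] = 1.0
truth = np.zeros(64); truth[[3, 9, 17, 25]] = [2.0, -1.5, 1.0, 0.5]
mag = np.abs(np.fft.fft(truth))
x_rec = phase_retrieval_sparse(mag, support, k=4, iters=200)
```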
Mukherjee, Sayak; Stewart, David; Stewart, William; Lanier, Lewis L.
2017-01-01
Single-cell responses are shaped by the geometry of signalling kinetic trajectories carved in a multidimensional space spanned by signalling protein abundances. It is, however, challenging to assay a large number (more than 3) of signalling species in live-cell imaging, which makes it difficult to probe single-cell signalling kinetic trajectories in large dimensions. Flow and mass cytometry techniques can measure a large number (4 to more than 40) of signalling species but are unable to track single cells. Thus, cytometry experiments provide detailed time-stamped snapshots of single-cell signalling kinetics. Is it possible to use the time-stamped cytometry data to reconstruct single-cell signalling trajectories? Borrowing concepts of conserved and slow variables from non-equilibrium statistical physics, we develop an approach to reconstruct signalling trajectories using snapshot data by creating new variables that remain invariant or vary slowly during the signalling kinetics. We apply this approach to reconstruct trajectories using snapshot data obtained from in silico simulations, live-cell imaging measurements, and synthetic flow cytometry datasets. The application of invariants and slow variables to reconstruct trajectories provides a radically different way to track objects using snapshot data. The approach is likely to have implications for solving matching problems in a wide range of disciplines. PMID:28879015
Mourzina, Yulia; Steffen, Alfred; Kaliaguine, Dmitri; Wolfrum, Bernhard; Schulte, Petra; Böcker-Meffert, Simone; Offenhäusser, Andreas
2006-04-22
Functional coupling of reconstructed neuronal networks with microelectronic circuits has potential for the development of bioelectronic devices, pharmacological assays and medical engineering. Modulation of the signal processing properties of on-chip reconstructed neuronal networks is an important aspect in such applications. It may be achieved by controlling the biochemical environment, preferably with cellular resolution. In this work, we attempt to design cell-cell and cell-medium interactions in confined geometries with the aim to manipulate non-invasively the activity pattern of an individual neuron in neuronal networks for long-term modulation. Therefore, we have developed a biohybrid system in which neuronal networks are reconstructed on microstructured silicon chips and interfaced to a microfluidic system. A high degree of geometrical control over the network architecture and alignment of the network with the substrate features has been achieved by means of aligned microcontact printing. Localized non-invasive on-chip chemical stimulation of micropatterned rat cortical neurons within a network has been demonstrated with an excitatory neurotransmitter glutamate. Our system will be useful for the investigation of the influence of localized chemical gradients on network formation and long-term modulation.
Vigilance detection based on sparse representation of EEG.
Yu, Hongbin; Lu, Hongtao; Ouyang, Tian; Liu, Hongjun; Lu, Bao-Liang
2010-01-01
Electroencephalogram (EEG) based vigilance detection of people who engage in long, attention-demanding tasks such as monotonous monitoring or driving is a key field in the research of brain-computer interfaces (BCI). However, robust detection of human vigilance from EEG is very difficult due to the low-SNR nature of EEG signals. Recently, compressive sensing and sparse representation have become successful tools in the fields of signal reconstruction and machine learning. In this paper, we propose to apply sparse representation of EEG to the vigilance detection problem. We first use the continuous wavelet transform to extract the rhythm features of EEG data, and then apply the sparse representation method to the wavelet transform coefficients. We collected five subjects' EEG recordings in a simulated driving environment and applied the proposed method to detect the vigilance of the subjects. The experimental results show that the algorithm framework proposed in this paper can successfully estimate a driver's vigilance with an average accuracy of about 94.22%. We also compare our algorithm framework with other vigilance estimation methods using different feature extraction and classifier selection approaches; the results show that the proposed method has obvious advantages in classification accuracy.
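The classification step can be illustrated with generic sparse-representation classification: code a test feature vector over a dictionary of labelled training features (here with a small orthogonal matching pursuit), then assign the class whose atoms give the lowest reconstruction residual. The dictionary, dimensions, and sparsity level are illustrative; the paper's wavelet feature extraction is not reproduced.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of the
    column-normalised dictionary D to represent y."""
    idx, r, coef = [], y.copy(), np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def src_classify(D, labels, y, k=3):
    """Assign y to the class whose atoms yield the smallest residual."""
    x = omp(D, y, k)
    labels = np.asarray(labels)
    residuals = {c: np.linalg.norm(y - D @ np.where(labels == c, x, 0.0))
                 for c in set(labels)}
    return min(residuals, key=residuals.get)

# demo: two classes drawn from two random low-dimensional subspaces
rng = np.random.default_rng(0)
B0, B1 = rng.normal(size=(20, 2)), rng.normal(size=(20, 2))
D = np.column_stack([B0 @ rng.normal(size=(2, 5)),
                     B1 @ rng.normal(size=(2, 5))])
D /= np.linalg.norm(D, axis=0)
labels = [0] * 5 + [1] * 5
y = B0 @ np.array([1.0, -0.5])
y /= np.linalg.norm(y)
predicted = src_classify(D, labels, y)
```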
Fault feature extraction of rolling element bearings using sparse representation
NASA Astrophysics Data System (ADS)
He, Guolin; Ding, Kang; Lin, Huibin
2016-03-01
Influenced by factors such as speed fluctuation, rolling element sliding and periodic variation of the load distribution and impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is out of the load zone. The non-stationary characteristic and impulse-missing phenomenon reduce the effectiveness of the commonly used demodulation methods for rolling element bearing fault diagnosis. Based on sparse representation theory, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are identified directly from the fault signal by a correlation filtering method. This leads to a high similarity between atoms and defect-induced impulses, and also a sharp reduction in the redundancy of the dictionary. To improve the matching accuracy and calculation speed of sparse coefficient solving, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for fault diagnosis of rolling element bearings under large rolling element sliding and low signal-to-noise-ratio conditions.
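The core idea — matching pursuit over a dictionary of time-shifted second-order impulse responses — can be sketched as follows. The natural frequency, damping ratio, shift grid, and noise level are all assumed values; the correlation-filtering identification and segment-wise processing from the paper are not shown.

```python
import numpy as np

fs, n = 1000.0, 512
t = np.arange(n) / fs

def impulse_atom(shift, f_n=80.0, zeta=0.05):
    """Unit impulse response of a damped second-order system, delayed
    by `shift` samples and normalised to unit energy (parameters are
    illustrative assumptions)."""
    tau = t - shift / fs
    h = np.where(tau >= 0,
                 np.exp(-zeta * 2 * np.pi * f_n * tau) *
                 np.sin(2 * np.pi * f_n * np.sqrt(1 - zeta ** 2) * tau), 0.0)
    return h / np.linalg.norm(h)

shifts = np.arange(0, 400, 4)
D = np.column_stack([impulse_atom(s) for s in shifts])

# simulated fault signal: two defect-induced impacts plus noise
rng = np.random.default_rng(3)
sig = 3.0 * impulse_atom(80) + 2.0 * impulse_atom(240) + rng.normal(0, 0.05, n)

# matching pursuit: iteratively subtract the best-matching atom
residual = sig.copy()
picks = []
for _ in range(2):
    c = D.T @ residual
    j = int(np.argmax(np.abs(c)))
    picks.append(int(shifts[j]))
    residual = residual - c[j] * D[:, j]
```

The recovered shifts locate the impacts, from which a fault's characteristic repetition rate would be read off in practice.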
Gao, Yi; Bouix, Sylvain; Shenton, Martha; Tannenbaum, Allen
2014-01-01
In image segmentation, we are often interested in using certain quantities to characterize the object, and perform the classification based on them: mean intensity, gradient magnitude, responses to certain predefined filters, etc. Unfortunately, in many cases such quantities are not adequate to model complex textured objects. Along a different line of research, the sparse characteristic of natural signals has been recognized and studied in recent years. Therefore, how such sparsity can be utilized, in a non-parametric way, to model the object texture and assist the textural image segmentation process is studied in this work, and a segmentation scheme based on the sparse representation of the texture information is proposed. More explicitly, the texture is encoded by dictionaries constructed from the user initialization. Then, an active contour is evolved to optimize the fidelity of the representation provided by the dictionary of the target. In doing so, not only is a non-parametric texture modeling technique provided, but the sparsity of the representation also guarantees computational efficiency. The experiments are carried out on publicly available image data sets which contain a large variety of texture images, to analyze the user interaction and performance statistics, and to highlight the algorithm's capability of robustly extracting textured regions from an image. PMID:23799695
NASA Astrophysics Data System (ADS)
Kießling, Dominik
2017-03-01
The research infrastructure KM3NeT will comprise a multi cubic kilometer neutrino telescope that is currently being constructed in the Mediterranean Sea. Modules with optical and acoustic sensors are used in the detector. While the main purpose of the acoustic sensors is the position calibration of the detection units, they can be used as instruments for studies on acoustic neutrino detection, too. In this article, methods for signal classification and event reconstruction for acoustic neutrino detectors will be presented, which were developed using Monte Carlo simulations. For the signal classification the disk-like emission pattern of the acoustic neutrino signal is used. This approach improves the suppression of transient background by several orders of magnitude. Additionally, an event reconstruction is developed based on the signal classification. An overview of these algorithms will be presented and the efficiency of the classification will be discussed. The quality of the event reconstruction will also be presented.
Separation and reconstruction of BCG and EEG signals during continuous EEG and fMRI recordings
Xia, Hongjing; Ruan, Dan; Cohen, Mark S.
2014-01-01
Despite considerable effort to remove it, the ballistocardiogram (BCG) remains a major artifact in electroencephalographic data (EEG) acquired inside magnetic resonance imaging (MRI) scanners, particularly in continuous (as opposed to event-related) recordings. In this study, we have developed a new Direct Recording Prior Encoding (DRPE) method to extract and separate the BCG and EEG components from contaminated signals, and have demonstrated its performance by comparing it quantitatively to the popular Optimal Basis Set (OBS) method. Our modified recording configuration allows us to obtain representative bases of the BCG- and EEG-only signals. Further, we have developed an optimization-based reconstruction approach to maximally incorporate prior knowledge of the BCG/EEG subspaces, and of the signal characteristics within them. Both OBS and DRPE methods were tested with experimental data, and compared quantitatively using cross-validation. In the challenging continuous EEG studies, DRPE outperforms the OBS method by nearly sevenfold in separating the continuous BCG and EEG signals. PMID:25002836
NASA Astrophysics Data System (ADS)
Mantini, D.; Alleva, G.; Comani, S.
2005-10-01
Fetal magnetocardiography (fMCG) allows monitoring the fetal heart function through algorithms able to retrieve the fetal cardiac signal, but no standardized automatic model has become available so far. In this paper, we describe an automatic method that restores the fetal cardiac trace from fMCG recordings by means of a weighted summation of fetal components separated with independent component analysis (ICA) and identified through dedicated algorithms that analyse the frequency content and temporal structure of each source signal. Multichannel fMCG datasets of 66 healthy and 4 arrhythmic fetuses were used to validate the automatic method with respect to a classical procedure requiring the manual classification of fetal components by an expert investigator. ICA was run with input clusters of different dimensions to simulate various MCG systems. Detection rates, true negative and false positive component categorization, QRS amplitude, standard deviation and signal-to-noise ratio of reconstructed fetal signals, and real and per cent QRS differences between paired fetal traces retrieved automatically and manually were calculated to quantify the performances of the automatic method. Its robustness and reliability, particularly evident with the use of large input clusters, might increase the diagnostic role of fMCG during the prenatal period.
Virtual head rotation reveals a process of route reconstruction from human vestibular signals.
Day, Brian L; Fitzpatrick, Richard C
2005-09-01
The vestibular organs can feed perceptual processes that build a picture of our route as we move about in the world. However, raw vestibular signals do not define the path taken because, during travel, the head can undergo accelerations unrelated to the route and also be orientated in any direction to vary the signal. This study investigated the computational process by which the brain transforms raw vestibular signals for the purpose of route reconstruction. We electrically stimulated the vestibular nerves of human subjects to evoke a virtual head rotation fixed in skull co-ordinates and measure its perceptual effect. The virtual head rotation caused subjects to perceive an illusory whole-body rotation that was a cyclic function of head-pitch angle. They perceived whole-body yaw rotation in one direction with the head pitched forwards, the opposite direction with the head pitched backwards, and no rotation with the head in an intermediate position. A model based on vector operations and the anatomy and firing properties of semicircular canals precisely predicted these perceptions. In effect, a neural process computes the vector dot product between the craniocentric vestibular vector of head rotation and the gravitational unit vector. This computation yields the signal of body rotation in the horizontal plane that feeds our perception of the route travelled.
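The model's key computation — the vector dot product between the craniocentric (skull-fixed) rotation vector and the gravitational unit vector — can be written out directly. The axis conventions and the stimulation-evoked rotation axis below are illustrative assumptions, but the cyclic dependence on head pitch matches the behaviour described.

```python
import numpy as np

def perceived_yaw(omega_head, pitch):
    """Project a skull-fixed rotation vector onto the gravitational
    vertical to obtain the perceived whole-body yaw rate, for a head
    pitched by `pitch` radians (rotation about the y axis; axis
    conventions are illustrative)."""
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])            # skull -> earth coordinates
    g_hat = np.array([0.0, 0.0, 1.0])       # gravitational unit vector
    return (R @ omega_head) @ g_hat

# virtual head rotation evoked by stimulation: fixed along the skull x axis
omega = np.array([1.0, 0.0, 0.0])
```

With this convention the perceived yaw rate is a sinusoidal (hence cyclic) function of head pitch: zero with the head in the intermediate position and of opposite sign for forward versus backward pitch, as the abstract reports.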
Liu, Yipeng; De Vos, Maarten; Van Huffel, Sabine
2015-08-01
This paper deals with two problems in compressed sensing of multichannel EEG signals: some EEG signals have no good sparse representation, and single-channel processing is not computationally efficient. An optimization model with the L0 norm and Schatten-0 norm is proposed to enforce cosparsity and low-rank structures in the reconstructed multichannel EEG signals. Both convex relaxation and global consensus optimization with the alternating direction method of multipliers are used to compute the optimization model. The performance of multichannel EEG signal reconstruction is improved in terms of both accuracy and computational complexity. The proposed method is a better candidate than previous sparse signal recovery methods for compressed sensing of EEG signals. The proposed method enables successful compressed sensing of EEG signals even when the signals have no good sparse representation. Using compressed sensing would much reduce the power consumption of wireless EEG systems.
Rank Awareness in Group-Sparse Recovery of Multi-Echo MR Images
Majumdar, Angshul; Ward, Rabab
2013-01-01
This work addresses the problem of recovering multi-echo T1 or T2 weighted images from their partial K-space scans. Recent studies have shown that the best results are obtained when all the multi-echo images are reconstructed by simultaneously exploiting their intra-image spatial redundancy and inter-echo correlation. The aforesaid studies either stack the vectorised images (formed by row or columns concatenation) as columns of a Multiple Measurement Vector (MMV) matrix or concatenate them as a long vector. Owing to the inter-image correlation, the thus formed MMV matrix or the long concatenated vector is row-sparse or group-sparse respectively in a transform domain (wavelets). Consequently the reconstruction problem was formulated as a row-sparse MMV recovery or a group-sparse vector recovery. In this work we show that when the multi-echo images are arranged in the MMV form, the thus formed matrix is low-rank. We show that better reconstruction accuracy can be obtained when the information about rank-deficiency is incorporated into the row/group sparse recovery problem. Mathematically, this leads to a constrained optimization problem where the objective function promotes the signal's groups-sparsity as well as its rank-deficiency; the objective function is minimized subject to data fidelity constraints. The experiments were carried out on ex vivo and in vivo T2 weighted images of a rat's spinal cord. Results show that this method yields considerably superior results than state-of-the-art reconstruction techniques. PMID:23519348
Sparse Regression as a Sparse Eigenvalue Problem
NASA Technical Reports Server (NTRS)
Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai
2008-01-01
We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic 10^3× speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9] also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization.
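The forward-selection half of such greedy sparse least squares can be sketched naively: at each step add the column whose inclusion most reduces the residual sum of squares. This refits each candidate from scratch, deliberately omitting the partitioned-matrix-inverse updates that give the paper its speed-up; the data and sparsity level are illustrative.

```python
import numpy as np

def forward_sls(A, y, k):
    """Greedy forward selection for sparse least squares: grow the
    support one column at a time, each time adding the column that
    minimises the residual sum of squares. (Naive O(k*d) refitting;
    the paper's method achieves the same selections efficiently.)"""
    n, d = A.shape
    S = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(d):
            if j in S:
                continue
            cols = S + [j]
            coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            rss = float(np.sum((y - A[:, cols] @ coef) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        S.append(best_j)
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    return S, coef

# demo: recover a 2-sparse linear model from noisy observations
rng = np.random.default_rng(2)
A = rng.normal(size=(50, 10))
y = 2.0 * A[:, 3] - A[:, 7] + rng.normal(0.0, 0.01, 50)
S, coef = forward_sls(A, y, 2)
```

A backward-elimination pass over a larger initial support, as the abstract describes, can undo myopic early picks at roughly double the cost.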
Learning discriminative dictionary for group sparse representation.
Sun, Yubao; Liu, Qingshan; Tang, Jinhui; Tao, Dacheng
2014-09-01
In recent years, sparse representation has been widely used in object recognition applications. How to learn the dictionary is a key issue in sparse representation. A popular method is to use the l1 norm as the sparsity measurement of representation coefficients for dictionary learning. However, the l1 norm treats each atom in the dictionary independently, so the learned dictionary cannot well capture the multi-subspace structural information of the data. In addition, the learned subdictionary for each class usually shares some common atoms, which weakens the discriminative ability of the reconstruction error of each subdictionary. This paper presents a new dictionary learning model to improve sparse representation for image classification, which aims to learn a class-specific subdictionary for each class and a common subdictionary shared by all classes. The model is composed of a discriminative fidelity, a weighted group sparse constraint, and a subdictionary incoherence term. The discriminative fidelity encourages each class-specific subdictionary to sparsely represent the samples in the corresponding class. The weighted group sparse constraint term aims at capturing the structural information of the data. The subdictionary incoherence term is to make all subdictionaries as independent as possible. Because the common subdictionary represents features shared by all classes, we only use the reconstruction error of each class-specific subdictionary for classification. Extensive experiments are conducted on several public image databases, and the experimental results demonstrate the power of the proposed method compared with state-of-the-art methods.
Compressive sensing and entropy in seismic signals
NASA Astrophysics Data System (ADS)
Marinho, Eberton S.; Rocha, Tiago C.; Corso, Gilberto; Lucena, Liacir S.
2017-09-01
This work analyzes the correlation between the seismic signal entropy and the Compressive Sensing (CS) recovery index. The recovery index measures the quality of a signal reconstructed by the CS method. We analyze the performance of two CS algorithms: the ℓ1-MAGIC and the Fast Bayesian Compressive Sensing (BCS). We have observed a negative correlation between the performance of CS and seismic signal entropy. Signals with low entropy have small recovery index in their reconstruction by CS. The rationale behind our finding is: a sparse signal is easy to recover by CS and, besides, a sparse signal has low entropy. In addition, ℓ1-MAGIC shows a more significant correlation between entropy and CS performance than Fast BCS.
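The link between sparsity and entropy can be made concrete with a toy measure. The normalised-magnitude Shannon entropy below is one plausible choice (the paper's exact seismic-entropy measure may differ): a sparse signal concentrates its mass in a few coefficients and so scores markedly lower than a dense one.

```python
import numpy as np

def signal_entropy(x, eps=1e-12):
    """Shannon entropy (bits) of the normalised magnitude distribution
    of x -- one of several possible entropy measures for a signal."""
    p = np.abs(x) / (np.sum(np.abs(x)) + eps)
    p = p[p > eps]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(7)
sparse = np.zeros(256)
sparse[rng.choice(256, 8, replace=False)] = rng.normal(size=8)  # 8 spikes
dense = rng.normal(size=256)                                    # full support
```

An 8-sparse signal can score at most log2(8) = 3 bits under this measure, while a dense Gaussian signal sits near the log2(256) = 8-bit ceiling, consistent with the abstract's rationale that sparse (low-entropy) signals are the easy cases for CS.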
Onorati, Francesco; Barbieri, Riccardo; Mauri, Maurizio; Russo, Vincenzo; Mainardi, Luca
2013-01-01
Pupil dilation (PD) dynamics reflect the interactions of sympathetic and parasympathetic innervations in the iris muscle. Different pupillary responses have been observed with respect to emotionally characterized stimuli. Evidence of correlations between PD and respiration, heart rate variability (HRV), and blood pressure (BP) is present in the literature, making pupil dilation a candidate for estimating the activity state of the Autonomic Nervous System (ANS), in particular during stressful and/or emotionally characterized stimuli. The aim of this study is to investigate whether both slow and fast PD dynamics can be used to characterize different affective states. Two different frequency bands were considered: the classical autonomic band [0-0.45] Hz and a very high frequency (VHF) band [0.45-5] Hz. The pupil dilation signals from 13 normal subjects were recorded during a psychological protocol designed to evoke particular affective states. An elaborate reconstruction of the missing data (blink events and artifacts) was performed to obtain a more reliable signal, particularly in the VHF band. Results show a high correlation between the arousal of the event and the power characteristics of the signal, in all frequencies. In particular, for the "Anger" condition, we observe 10 indices out of 13 significantly different with respect to "Baseline" counterparts. These preliminary results suggest that both slow and fast oscillations of the PD can be used to characterize affective states.
Zhou, Haibin; Zhang, Yongmin; Han, Ruoyu; Jing, Yan; Wu, Jiawei; Liu, Qiaojue; Ding, Weidong; Qiu, Aici
2016-04-22
Underwater shock waves (SWs) generated by underwater electrical wire explosions (UEWEs) have been widely studied and applied. Precise measurement of this kind of SWs is important, but very difficult to accomplish due to their high peak pressure, steep rising edge and very short pulse width (on the order of tens of μs). This paper aims to analyze the signals obtained by two kinds of commercial piezoelectric pressure probes, and reconstruct the correct pressure waveform from the distorted one measured by the pressure probes. It is found that both PCB138 and Müller-plate probes can be used to measure the relative SW pressure value because of their good uniformities and linearities, but none of them can obtain precise SW waveforms. In order to approach to the real SW signal better, we propose a new multi-exponential pressure waveform model, which has considered the faster pressure decay at the early stage and the slower pressure decay in longer times. Based on this model and the energy conservation law, the pressure waveform obtained by the PCB138 probe has been reconstructed, and the reconstruction accuracy has been verified by the signals obtained by the Müller-plate probe. Reconstruction results show that the measured SW peak pressures are smaller than the real signal. The waveform reconstruction method is both reasonable and reliable.
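The proposed waveform model can be sketched as a weighted sum of exponential decays, which captures the faster early-stage pressure decay alongside the slower long-time tail. The peak pressure, weights, and time constants below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def multi_exp_pressure(t, p0, weights, taus):
    """Multi-exponential shock-wave pressure model: peak pressure p0
    times a weighted sum of exponential decays. With weights summing
    to 1, p0 is the t=0 peak. (Illustrative form; the paper's exact
    parameterisation may differ.)"""
    w = np.asarray(weights, dtype=float)
    tau = np.asarray(taus, dtype=float)
    decays = np.exp(-t[None, :] / tau[:, None])
    return p0 * np.sum(w[:, None] * decays, axis=0)

# illustrative microsecond-scale pulse: a fast and a slow component
t = np.linspace(0.0, 50e-6, 500)
p = multi_exp_pressure(t, 100e6, weights=[0.7, 0.3], taus=[2e-6, 15e-6])
```

Fitting such a model to the probe signal, constrained by energy conservation, is the step that lets the true peak pressure be reconstructed from the distorted measurement.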
lpNet: a linear programming approach to reconstruct signal transduction networks.
Matos, Marta R A; Knapp, Bettina; Kaderali, Lars
2015-10-01
With the widespread availability of high-throughput experimental technologies it has become possible to study hundreds to thousands of cellular factors simultaneously, such as coding or non-coding mRNA or protein concentrations. Still, extracting information about the underlying regulatory or signaling interactions from these data remains a difficult challenge. We present a flexible approach towards network inference based on linear programming. Our method reconstructs the interactions of factors from a combination of perturbation/non-perturbation and steady-state/time-series data. We show on both simulated and real data that our method is able to reconstruct the underlying networks quickly and efficiently, thus shedding new light on biological processes and, in particular, on disease mechanisms of action. We have implemented the approach as an R package available through Bioconductor. This R package is freely available under the GNU Public License (GPL-3) from bioconductor.org (http://bioconductor.org/packages/release/bioc/html/lpNet.html) and is compatible with most operating systems (Windows, Linux, Mac OS) and hardware architectures. bettina.knapp@helmholtz-muenchen.de Supplementary data are available at Bioinformatics online.
Wang, Nizhuan; Zeng, Weiming; Chen, Lei
2013-05-30
Independent component analysis (ICA) has been widely used on functional magnetic resonance imaging (fMRI) data to evaluate functional connectivity, under the assumption that the sources of functional networks are statistically independent. Recently, many researchers have demonstrated that sparsity is an effective assumption for fMRI signal separation. In this research, we present a sparse approximation coefficient-based ICA (SACICA) model to analyse fMRI data, a promising combination of sparse features with an ICA technique. The SACICA method consists of three procedures: first, a wavelet packet decomposition, which decomposes the fMRI data into wavelet tree nodes with different degrees of sparsity; second, formation of the sparse approximation coefficient set, in which an effective Lp norm is proposed to measure the sparsity of the distinct wavelet tree nodes; and last, ICA decomposition and reconstruction, which utilises the sparse approximation coefficient set of the fMRI data. The hybrid data experimental results demonstrated that the SACICA method exhibited stronger spatial source reconstruction ability on unsmoothed fMRI data and better detection sensitivity of the functional signal on smoothed fMRI data than the FastICA method. Furthermore, task-related experiments also revealed that SACICA was not only effective in discovering the functional networks but also exhibited better detection sensitivity of the visual-related functional signal. In addition, SACICA combined with Fast-FENICA, proposed by Wang et al. (2012), was demonstrated to conduct group analysis effectively on the resting-state data set.
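An Lp-norm sparsity score of the kind used to rank wavelet tree nodes can be sketched as below. The exact definition in the paper may differ; this version is normalized by the l2 norm so the score is scale-invariant, and smaller values indicate sparser coefficient sets:

```python
import numpy as np

def lp_sparsity(coeffs, p=0.5):
    """Illustrative Lp-based sparsity measure for wavelet coefficients:
    (sum |c_i|^p)^(1/p) / ||c||_2. Smaller means sparser; invariant to
    rescaling of the coefficients."""
    c = np.abs(np.ravel(coeffs)).astype(float)
    l2 = np.linalg.norm(c)
    if l2 == 0.0:
        return 0.0
    return (np.sum(c ** p)) ** (1.0 / p) / l2
```

In a SACICA-style pipeline one would compute this score for each wavelet packet node and keep the sparsest nodes for the subsequent ICA step.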
Sparse Bayesian Learning for DOA Estimation with Mutual Coupling
Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi
2015-01-01
Sparse Bayesian learning (SBL) has brought renewed attention to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike existing methods that use only stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise. PMID:26501284
Sparse stochastic processes and discretization of linear inverse problems.
Bostan, Emrah; Kamilov, Ulugbek S; Nilchian, Masih; Unser, Michael
2013-07-01
We present a novel statistically-based discretization paradigm and derive a class of maximum a posteriori (MAP) estimators for solving ill-conditioned linear inverse problems. We are guided by the theory of sparse stochastic processes, which specifies continuous-domain signals as solutions of linear stochastic differential equations. Accordingly, we show that the class of admissible priors for the discretized version of the signal is confined to the family of infinitely divisible distributions. Our estimators not only cover the well-studied methods of Tikhonov and l1-type regularizations as particular cases, but also open the door to a broader class of sparsity-promoting regularization schemes that are typically nonconvex. We provide an algorithm that handles the corresponding nonconvex problems and illustrate the use of our formalism by applying it to deconvolution, magnetic resonance imaging, and X-ray tomographic reconstruction problems. Finally, we compare the performance of estimators associated with models of increasing sparsity.
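The l1-type regularization that the authors recover as a particular case of their MAP framework is classically minimized by iterative soft thresholding. The sketch below is a generic ISTA solver for the convex special case, not the authors' algorithm for the nonconvex regularizers:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=300):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1,
    the l1-regularized MAP special case. Step size 1/L guarantees monotone
    decrease of the objective."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L                           # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold (prox)
    return x

def lasso_obj(A, y, x, lam):
    """Objective value, useful for monitoring convergence."""
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))
```

Nonconvex sparsity-promoting priors of the kind the paper advocates are often handled by replacing the soft-threshold step with the prox of the chosen penalty.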
An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner
NASA Astrophysics Data System (ADS)
Bergman, Elad; Yeredor, Arie; Nevo, Uri
2013-12-01
Unilateral NMR devices are used in various applications including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method, designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by the use of redundancy in the acquired MR signal and by the use of the noise characteristics. Both types of data were incorporated into a Weighted Least Squares estimation approach. The method performance was evaluated with a series of imaging acquisitions applied on phantoms. Images were extracted from each measurement with the proposed method and were compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion, with respect to gold-standard reference images. An integration of this method with further improvements may lead to a substantial reduction in imaging times, aiding the use of such scanners in imaging applications.
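The Weighted Least Squares principle the abstract invokes (down-weighting noisier samples) has a closed form. A minimal sketch of that estimator, not the authors' full reconstruction pipeline:

```python
import numpy as np

def wls(A, y, w):
    """Weighted least squares: x = argmin sum_i w_i * (y_i - (A x)_i)^2,
    solved via the normal equations (A^T W A) x = A^T W y. Weights w are
    typically chosen proportional to 1 / noise variance per sample."""
    Aw = A * w[:, None]                    # row-weighted design matrix W A
    return np.linalg.solve(A.T @ Aw, Aw.T @ y)
```

For a consistent (noise-free) system the estimate is independent of the weights; the weights matter exactly when the measurements disagree, which is where the noise model pays off.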
NASA Astrophysics Data System (ADS)
Konter, Oliver; Büntgen, Ulf; Carrer, Marco; Timonen, Mauri; Esper, Jan
2016-06-01
Age-related alteration in the sensitivity of tree-ring width (TRW) to climate variability has been reported for different forest species and environments. The resulting growth-climate response patterns are, however, often inconsistent and similar assessments using maximum latewood density (MXD) are still missing. Here, we analyze climate signal age effects (CSAE, age-related changes in the climate sensitivity of tree growth) in a newly aggregated network of 692 Pinus sylvestris L. TRW and MXD series from northern Fennoscandia. Although summer temperature sensitivity of TRW (rAll = 0.48) ranges below that of MXD (rAll = 0.76), it declines for both parameters as cambial age increases. Assessment of CSAE for individual series further reveals decreasing correlation values as a function of time. This declining signal strength remains temporally robust and negative for MXD, while age-related trends in TRW exhibit resilient meanderings of positive and negative trends. Although CSAE are significant and temporally variable in both tree-ring parameters, MXD is more suitable for the development of climate reconstructions. Our results indicate that sampling of young and old trees, and testing for CSAE, should become routine for TRW and MXD data prior to any paleoclimatic endeavor.
Adaptive feature extraction using sparse coding for machinery fault diagnosis
NASA Astrophysics Data System (ADS)
Liu, Haining; Liu, Chengliang; Huang, Yixiang
2011-02-01
In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description for the underlying principle of mammalian sensory systems in processing information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficients solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for vibration analysis. With the purpose of diagnosing the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals trying to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of vibration signals; sparse features are formulated in terms of activations of atoms. The multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
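The dictionary-learning/sparse-coding alternation described above can be sketched with a MOD-style update (least-squares dictionary fit after a simple thresholded coding step). This is illustrative of the two core problems the paper discusses, not its exact algorithm; the thresholding coder and all names here are assumptions:

```python
import numpy as np

def mod_learn(Y, n_atoms, k, n_iter=15, seed=0):
    """Alternate (1) k-sparse coding by keeping the k largest correlations
    per signal and (2) a MOD dictionary update D = argmin ||Y - D X||_F."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        C = D.T @ Y                                  # atom/signal correlations
        drop = np.argsort(-np.abs(C), axis=0)[k:]    # all but the k largest per column
        X = C.copy()
        np.put_along_axis(X, drop, 0.0, axis=0)      # k-sparse codes
        D = Y @ np.linalg.pinv(X)                    # MOD least-squares update
        D /= np.linalg.norm(D, axis=0) + 1e-12       # renormalize atoms
    return D, X
```

In the fault-diagnosis scheme, one such dictionary would be learned per fault class and the sparse activations over the merged dictionary would serve as features for the LDA classifier.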
Structured sparse priors for image classification.
Srinivas, Umamahesh; Suo, Yuanming; Dao, Minh; Monga, Vishal; Tran, Trac D
2015-06-01
Model-based compressive sensing (CS) exploits the structure inherent in sparse signals for the design of better signal recovery algorithms. This information about structure is often captured in the form of a prior on the sparse coefficients, with the Laplacian being the most common such choice (leading to l1-norm minimization). Recent work has exploited the discriminative capability of sparse representations for image classification by employing class-specific dictionaries in the CS framework. Our contribution is a logical extension of these ideas into structured sparsity for classification. We introduce the notion of discriminative class-specific priors in conjunction with class-specific dictionaries, specifically the spike-and-slab prior widely applied in Bayesian sparse regression. Significantly, the proposed framework takes the burden off the demand for abundant training image samples necessary for the success of sparsity-based classification schemes. We demonstrate this practical benefit of our approach in important applications, such as face recognition and object categorization.
Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun
2015-01-01
High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing methods (BTT) have become a promising way to monitor blade vibrations. However, synchronous vibrations are unsuitably monitored by uniform BTT sampling. Therefore, non-equally mounted probes have been used, which results in the non-uniformity of the sampling signal. Since under-sampling is an intrinsic drawback of BTT methods, how to analyze non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. Firstly, a mathematical model of a non-uniform BTT sampling process is built. It can be treated as the sum of certain uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Secondly, simultaneous equations of all interpolating functions in each sub-band are built and corresponding solutions are ultimately derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired from only two probes. PMID:25621612
Protein crystal structure from non-oriented, single-axis sparse X-ray data.
Wierman, Jennifer L; Lan, Ti-Yen; Tate, Mark W; Philipp, Hugh T; Elser, Veit; Gruner, Sol M
2016-01-01
X-ray free-electron lasers (XFELs) have inspired the development of serial femtosecond crystallography (SFX) as a method to solve the structure of proteins. SFX datasets are collected from a sequence of protein microcrystals injected across ultrashort X-ray pulses. The idea behind SFX is that diffraction from the intense, ultrashort X-ray pulses leaves the crystal before the crystal is obliterated by the effects of the X-ray pulse. The success of SFX at XFELs has catalyzed interest in analogous experiments at synchrotron-radiation (SR) sources, where data are collected from many small crystals and the ultrashort pulses are replaced by exposure times that are kept short enough to avoid significant crystal damage. The diffraction signal from each short exposure is so 'sparse' in recorded photons that the process of recording the crystal intensity is itself a reconstruction problem. Using the EMC algorithm, a successful reconstruction is demonstrated here in a sparsity regime where there are no Bragg peaks that conventionally would serve to determine the orientation of the crystal in each exposure. In this proof-of-principle experiment, a hen egg-white lysozyme (HEWL) crystal rotating about a single axis was illuminated by an X-ray beam from an X-ray generator to simulate the diffraction patterns of microcrystals from synchrotron radiation. Millions of these sparse frames, typically containing only ∼200 photons per frame, were recorded using a fast-framing detector. It is shown that reconstruction of three-dimensional diffraction intensity is possible using the EMC algorithm, even with these extremely sparse frames and without knowledge of the rotation angle. Further, the reconstructed intensity can be phased and refined to solve the protein structure using traditional crystallographic software. This suggests that synchrotron-based serial crystallography of micrometre-sized crystals can be practical with the aid of the EMC algorithm even in cases where the data are
Vectorized Sparse Elimination.
1984-03-01
Grids," Proc. 6th Symposium on Reservoir Simulation , New Orleans, Feb. 1-2, 1982, pp. 489-506. [51 Arya, S., and D. A. Calahan, "Optimal Scheduling of...of Computer Architecture on Direct Sparse Matrix Routines in Petroleum Reservoir Simulation ," Sparse Matrix Symposium, Fairfield Glade, TE, October
Visual tracking via robust multitask sparse prototypes
NASA Astrophysics Data System (ADS)
Zhang, Huanlong; Hu, Shiqiang; Yu, Junyang
2015-03-01
Sparse representation has been applied to an online subspace learning-based tracking problem. To handle partial occlusion effectively, some researchers introduce l1 regularization to principal component analysis (PCA) reconstruction. However, in these traditional tracking methods, the representation of each object observation is often viewed as an individual task, so the inter-relationship between PCA basis vectors is ignored. We propose a new online visual tracking algorithm with multitask sparse prototypes, which combines multitask sparse learning with PCA-based subspace representation. We first extend a visual tracking algorithm with sparse prototypes into the multitask learning framework to mine inter-relations between subtasks. Then, to avoid the problem that enforcing all subtasks to share the same structure may degrade tracking results, we impose group sparse constraints on the coefficients of PCA basis vectors and element-wise sparse constraints on the error coefficients, respectively. Finally, we show that the proposed optimization problem can be effectively solved using the accelerated proximal gradient method with fast convergence. Experimental results compared with the state-of-the-art tracking methods demonstrate that the proposed algorithm achieves favorable performance when the object undergoes partial occlusion, motion blur, and illumination changes.
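The group-sparse constraint on the PCA coefficients is typically enforced through the proximal operator of the l2,1 norm (block soft thresholding), which an accelerated proximal gradient method calls at every iteration. A minimal sketch of that building block, not the authors' full solver:

```python
import numpy as np

def prox_group_rows(X, tau):
    """Proximal operator of tau * sum_i ||X[i, :]||_2: shrink each row's
    l2 norm by tau, zeroing rows whose norm falls below tau. This is the
    row-group analogue of scalar soft thresholding."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Rows (here, coefficients of one PCA basis vector across all tasks) are kept or discarded together, which is exactly the shared-structure effect multitask sparse learning exploits.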
NASA Astrophysics Data System (ADS)
Wei, Deyun; Ran, Qiwen; Li, Yuanmin
2011-09-01
Linear canonical transforms (LCTs) are a family of integral transforms with wide application in optical, acoustical, electromagnetic, and other wave propagation problems. This paper addresses the problem of signal reconstruction from multichannel and periodic nonuniform samples in the LCT domain. Firstly, the multichannel sampling theorem (MST) for band-limited signals with the LCT is proposed based on multichannel system equations, which is a generalization of the well-known sampling theorem for the LCT. We consider the problem of reconstructing the signal from its samples, which are acquired using a multichannel sampling scheme. For this purpose, we propose two alternatives. The first scheme is based on the conventional Fourier series and the inverse LCT operation. The second is based on the conventional Fourier series and the inverse Fourier transform (FT) operation. Moreover, the classical Papoulis MST in the FT domain is shown to be a special case of the achieved results. Since the periodic nonuniformly sampled signal in the LCT has valuable applications, the reconstruction expression for the periodic nonuniformly sampled signal has then been obtained by using the derived MST and the specific space-shifting property of the LCT. Finally, potential applications of the MST are presented to show the advantage of the theory.
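For reference, one common convention for the LCT itself (normalization and sign conventions vary across the literature, so this is a representative form rather than necessarily the one used in the paper):

```latex
% LCT with parameter matrix M = \begin{pmatrix} a & b \\ c & d \end{pmatrix},
% ad - bc = 1, b \neq 0:
F_M(u) \;=\; \frac{1}{\sqrt{\,j 2\pi b\,}} \int_{-\infty}^{\infty}
  f(t)\,\exp\!\left[\frac{j}{2b}\left(a t^{2} - 2 u t + d u^{2}\right)\right] dt .
```

The ordinary Fourier transform corresponds (up to a constant phase factor) to the parameters (a, b, c, d) = (0, 1, -1, 0), which is why Papoulis's classical MST emerges as a special case.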
Pisharady, Pramod Kumar; Sotiropoulos, Stamatios N; Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Lenglet, Christophe
2017-06-29
We present a sparse Bayesian unmixing algorithm BusineX: Bayesian Unmixing for Sparse Inference-based Estimation of Fiber Crossings (X), for estimation of white matter fiber parameters from compressed (under-sampled) diffusion MRI (dMRI) data. BusineX combines compressive sensing with linear unmixing and introduces sparsity to the previously proposed multiresolution data fusion algorithm RubiX, resulting in a method for improved reconstruction, especially from data with lower number of diffusion gradients. We formulate the estimation of fiber parameters as a sparse signal recovery problem and propose a linear unmixing framework with sparse Bayesian learning for the recovery of sparse signals, the fiber orientations and volume fractions. The data is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible diffusion directions. Volume fractions of fibers along these directions define the dictionary weights. The proposed sparse inference, which is based on the dictionary representation, considers the sparsity of fiber populations and exploits the spatial redundancy in data representation, thereby facilitating inference from under-sampled q-space. The algorithm improves parameter estimation from dMRI through data-dependent local learning of hyperparameters, at each voxel and for each possible fiber orientation, that moderate the strength of priors governing the parameter variances. Experimental results on synthetic and in-vivo data show improved accuracy with a lower uncertainty in fiber parameter estimates. BusineX resolves a higher number of second and third fiber crossings. For under-sampled data, the algorithm is also shown to produce more reliable estimates.
Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju
2014-01-01
It is difficult for structural over-complete dictionaries such as the Gabor function and discriminative over-complete dictionary, which are learned offline and classified manually, to represent natural images with the goal of ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation on a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of infrared image by K-singular value decomposition (K-SVD) algorithm. Then the adaptive morphological over-complete dictionary is divided automatically into a target over-complete dictionary describing target signals, and a background over-complete dictionary embedding background by the criteria that the atoms in the target over-complete dictionary could be decomposed more sparsely based on a Gaussian over-complete dictionary than the one in the background over-complete dictionary. This discriminative over-complete dictionary can not only capture significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, yet couldn't be sparsely decomposed based on their opposite over-complete dictionary, so their residuals after reconstruction by the prescribed number of target and background atoms differ very visibly. Some experiments are included and the results show that this proposed approach could not only improve the sparsity more efficiently, but also enhance the performance of small target detection more effectively. PMID:24871988
Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju
2014-05-27
It is difficult for structural over-complete dictionaries such as the Gabor function and discriminative over-complete dictionary, which are learned offline and classified manually, to represent natural images with the goal of ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation on a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of infrared image by K-singular value decomposition (K-SVD) algorithm. Then the adaptive morphological over-complete dictionary is divided automatically into a target over-complete dictionary describing target signals, and a background over-complete dictionary embedding background by the criteria that the atoms in the target over-complete dictionary could be decomposed more sparsely based on a Gaussian over-complete dictionary than the one in the background over-complete dictionary. This discriminative over-complete dictionary can not only capture significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, yet couldn't be sparsely decomposed based on their opposite over-complete dictionary, so their residuals after reconstruction by the prescribed number of target and background atoms differ very visibly. Some experiments are included and the results show that this proposed approach could not only improve the sparsity more efficiently, but also enhance the performance of small target detection more effectively.
NASA Astrophysics Data System (ADS)
Wiegert, Jens; Hohmann, Steffen; Bertram, Matthias
2007-03-01
This paper presents a novel framework for the systematic assessment of the impact of scattered radiation in flat-detector based cone-beam CT. While it is well known that scattered radiation causes three different types of artifacts in reconstructed images (inhomogeneity artifacts such as cupping and streaks, degradation of contrast, and enhancement of noise), investigations in the literature quantify the impact of scatter mostly in terms of inhomogeneity artifacts only, giving little insight, e.g., into the visibility of low contrast lesions. Therefore, for this study a novel framework has been developed that, in addition to normal reconstruction of the CT (HU) number, allows for reconstruction of voxelized expectation values of three additional important characteristics of image quality: signal degradation, contrast reduction, and noise variances. The new framework has been applied to projection data obtained with voxelized Monte-Carlo simulations of clinical CT data sets of high spatial resolution. Using these data, the impact of scattered radiation was thoroughly studied for realistic and clinically relevant patient geometries of the head, thorax, and pelvis region. By means of spatially resolved reconstructions of contrast and noise propagation, the image quality of a scenario using standard antiscatter grids could be evaluated in great detail. Results show the spatially resolved contrast degradation and the spatially resolved expected standard deviation of the noise at any position in the reconstructed object. The new framework represents a general tool for analyzing image quality in reconstructed images.
Analog system for computing sparse codes
Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell
2010-08-24
A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units through (usually one-way) lateral inhibition to calculate coefficients representing an input in an over-complete dictionary.
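The thresholding-and-local-competition dynamics can be illustrated with a simple digital (Euler) simulation. The patented system is an analog circuit; the code below is a sketch of the underlying LCA principle with a soft-threshold activation, and the parameter names are illustrative:

```python
import numpy as np

def lca_sparse_code(Phi, s, lam=0.1, tau=10.0, dt=1.0, n_steps=500):
    """Euler simulation of LCA dynamics: each node's internal state u is
    driven by its match to the input s and inhibited by active neighbors
    through the lateral weights Phi.T @ Phi - I."""
    b = Phi.T @ s                              # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    soft = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
    u = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a = soft(u)                            # thresholded activations (the sparse code)
        u = u + (dt / tau) * (b - u - G @ a)   # competitive leaky-integrator dynamics
    return soft(u)
```

With a soft threshold these dynamics settle on an l1-regularized (lasso-like) sparse code; other threshold functions give the other sparsity metrics the description mentions.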
Online sparse representation for remote sensing compressed-sensed video sampling
NASA Astrophysics Data System (ADS)
Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li
2014-11-01
Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough for data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from sampling at a rate well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear, non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into key frames (K frames) and non-key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs), and each GOP consists of one K frame followed by several CS frames. Both are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique from the reconstructed key frames. The over-complete dictionary is trained by dictionary learning methods based on the SI; these methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by Peak Signal-to-Noise Ratio (PSNR), is compared with that of other online sparse representation algorithms. The simulation results show its advantages in reducing reconstruction time and its robustness in reconstruction performance when applied to remote sensing video reconstruction.
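Reconstructing a CS frame from its random measurements over a learned dictionary is a standard sparse recovery problem. A common greedy decoder is Orthogonal Matching Pursuit, sketched below as a stand-in for the sparse-land reconstruction step (the paper's own pipeline uses learned dictionaries and SI, which this sketch omits):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with
    y ~= Phi @ x. Each step picks the atom most correlated with the
    residual, then re-fits all selected atoms by least squares."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

Because the residual is re-orthogonalized against the selected atoms at every step, OMP never picks the same atom twice and recovers k-sparse signals exactly when the dictionary columns are orthonormal.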
NASA Technical Reports Server (NTRS)
Jankovsky, Amy L.; Fulton, Christopher E.; Binder, Michael P.; Maul, William A., III; Meyer, Claudia M.
1998-01-01
A real-time system for validating sensor health has been developed in support of the reusable launch vehicle program. This system was designed for use in a propulsion testbed as part of an overall effort to improve the safety, diagnostic capability, and cost of operation of the testbed. The sensor validation system was designed and developed at the NASA Lewis Research Center and integrated into a propulsion checkout and control system as part of an industry-NASA partnership, led by Rockwell International for the Marshall Space Flight Center. The system includes modules for sensor validation, signal reconstruction, and feature detection and was designed to maximize portability to other applications. Review of test data from initial integration testing verified real-time operation and showed the system to perform correctly on both hard and soft sensor failure test cases. This paper discusses the design of the sensor validation and supporting modules developed at LeRC and reviews results obtained from initial test cases.
Reconstructing the Nature of the First Cosmic Sources from the Anisotropic 21-cm Signal
NASA Astrophysics Data System (ADS)
Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad
2015-03-01
The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to x-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z ˜15 ), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from x-ray sources. This makes possible a model-independent reconstruction of the x-ray spectrum of the earliest sources of cosmic heating.
A unified treatment of some iterative algorithms in signal processing and image reconstruction
NASA Astrophysics Data System (ADS)
Byrne, Charles
2004-02-01
Let T be a (possibly nonlinear) continuous operator on a Hilbert space H. If, for some starting vector x, the orbit sequence {T^k x, k = 0, 1, ...} converges, then the limit z is a fixed point of T; that is, Tz = z. An operator N on a Hilbert space H is nonexpansive (ne) if, for each x and y in H, ‖Nx − Ny‖ ≤ ‖x − y‖. Even when N has fixed points, the orbit sequence {N^k x} need not converge; consider the example N = −I, where I denotes the identity operator. However, for any α ∈ (0, 1) the iterative procedure defined by x^{k+1} = (1 − α)x^k + αNx^k converges (weakly) to a fixed point of N whenever such points exist. This is the Krasnoselskii-Mann (KM) approach to finding fixed points of ne operators. A wide variety of iterative procedures used in signal processing and image reconstruction and elsewhere are special cases of the KM iterative procedure, for particular choices of the ne operator N. These include the Gerchberg-Papoulis method for bandlimited extrapolation, the SART algorithm of Anderson and Kak, the Landweber and projected Landweber algorithms, simultaneous and sequential methods for solving the convex feasibility problem, the ART and Cimmino methods for solving linear systems of equations, the CQ algorithm for solving the split feasibility problem and Dolidze's procedure for the variational inequality problem for monotone operators.
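The KM iteration is simple to state in code. Below is a minimal NumPy sketch, using the abstract's own example N = −I, for which the plain orbit oscillates but the averaged iteration converges to the fixed point 0.

```python
import numpy as np

def km_iterate(N, x0, alpha=0.5, n_iter=100):
    """Krasnoselskii-Mann iteration: x_{k+1} = (1 - alpha) x_k + alpha N(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = (1 - alpha) * x + alpha * N(x)
    return x

# N = -I is nonexpansive with the unique fixed point 0. The plain orbit
# x, -x, x, ... never converges, but the averaged KM iteration does.
fixed = km_iterate(lambda x: -x, np.array([1.0, -2.0]))
```

The particular algorithms listed in the abstract arise by substituting the appropriate nonexpansive operator N (a projection, a gradient step composed with a projection, etc.) into this same loop.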
Sparse recovery via convex optimization
NASA Astrophysics Data System (ADS)
Randall, Paige Alicia
This thesis considers the problem of estimating a sparse signal from a few (possibly noisy) linear measurements. In other words, we have y = Ax + z, where A is a measurement matrix with more columns than rows, x is a sparse signal to be estimated, z is a noise vector, and y is a vector of measurements. This setup arises frequently in many problems ranging from MRI to genomics to compressed sensing. We begin by relating our setup to an error correction problem over the reals, where a received encoded message is corrupted by a few arbitrary errors as well as smaller dense errors. We show that under suitable conditions on the encoding matrix and on the number of arbitrary errors, one is able to accurately recover the message. We next show that we are able to achieve oracle optimality for x, up to a log factor and a factor of sqrt{s}, when we require the matrix A to obey an incoherence property. The incoherence property is novel in that it allows the coherence of A to be as large as O(1/log n) and still allows sparsities as large as O(m/log n). This is in contrast to other existing results involving coherence, where the coherence can only be as large as O(1/sqrt{m}) to allow sparsities as large as O(sqrt{m}). We also do not make the common assumption that the matrix A obeys a restricted eigenvalue condition. We then show that we can recover a (non-sparse) signal from a few linear measurements when the signal has an exactly sparse representation in an overcomplete dictionary. We again only require that the dictionary obey an incoherence property. Finally, we introduce the method of l_1 analysis and show that it is guaranteed to give good recovery of a signal from a few measurements, when the signal can be well represented in a dictionary. We require that the combined measurement/dictionary matrix satisfy a uniform uncertainty principle, and we compare our results with the more standard l_1 synthesis approach. All our methods involve solving an l_1 minimization
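A standard way to solve the l_1 minimization in the setup y = Ax + z is proximal gradient descent (ISTA). The sketch below is a generic illustration of that solver on a toy sparse-recovery instance, not the thesis's algorithm; the dimensions and regularization weight are arbitrary choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# Toy instance: recover a 3-sparse length-100 signal from 40 noiseless
# random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

With noisy measurements, the same solver applies; only the choice of lam changes to match the noise level.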
Sparse Superpixel Unmixing for Hyperspectral Image Analysis
NASA Technical Reports Server (NTRS)
Castano, Rebecca; Thompson, David R.; Gilmore, Martha
2010-01-01
Software was developed that automatically detects minerals that are present in each pixel of a hyperspectral image. An algorithm based on sparse spectral unmixing with Bayesian Positive Source Separation is used to produce mineral abundance maps from hyperspectral images. A superpixel segmentation strategy enables efficient unmixing in an interactive session. The algorithm computes statistically likely combinations of constituents based on a set of possible constituent minerals whose abundances are uncertain. A library of source spectra from laboratory experiments or previous remote observations is used. A superpixel segmentation strategy improves analysis time by orders of magnitude, permitting incorporation into an interactive user session (see figure). Mineralogical search strategies can be categorized as supervised or unsupervised. Supervised methods use a detection function, developed on previous data by hand or statistical techniques, to identify one or more specific target signals. Purely unsupervised results are not always physically meaningful, and may ignore subtle or localized mineralogy since they aim to minimize reconstruction error over the entire image. This algorithm offers advantages of both methods, providing meaningful physical interpretations and sensitivity to subtle or unexpected minerals.
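The core unmixing computation, estimating nonnegative mineral abundances for a superpixel spectrum from a library of source spectra, can be sketched with plain nonnegative least squares; the paper's actual method is Bayesian Positive Source Separation, and the 4-band library below is made up for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical 4-band spectra of three library "minerals" (one per column).
library = np.array([[0.9, 0.1, 0.3],
                    [0.7, 0.2, 0.8],
                    [0.2, 0.9, 0.5],
                    [0.1, 0.8, 0.4]])

# A superpixel spectrum mixed from minerals 0 and 2 only.
true_abund = np.array([0.6, 0.0, 0.4])
pixel = library @ true_abund

# Nonnegative least squares recovers the abundances; the zero entry for
# the absent mineral is what makes the abundance map sparse.
abund, resid = nnls(library, pixel)
```

Running this per superpixel rather than per pixel is what yields the orders-of-magnitude speedup the abstract describes.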
Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations
Chaspari, Theodora; Tsiartas, Andreas; Tsilifis, Panagiotis; Narayanan, Shrikanth
2016-01-01
Parametric dictionaries can increase the ability of sparse representations to meaningfully capture and interpret the underlying signal information, such as that encountered in biomedical problems. Given a mapping function from the atom parameter space to the actual atoms, we propose a sparse Bayesian framework for learning the atom parameters, because of its ability to provide full posterior estimates, take uncertainty into account and generalize to unseen data. Inference is performed with Markov Chain Monte Carlo, which uses block sampling to generate the variables of the Bayesian problem. Since the parameterization of dictionary atoms results in posteriors that cannot be analytically computed, we use a Metropolis-Hastings-within-Gibbs framework, in which variables with closed-form posteriors are generated with the Gibbs sampler, while the remaining ones are generated with Metropolis-Hastings steps from appropriate candidate-generating densities. We further show that the corresponding Markov Chain is uniformly ergodic, ensuring its convergence to a stationary distribution independently of the initial state. Results on synthetic data and real biomedical signals indicate that our approach offers advantages in terms of signal reconstruction compared to previously proposed Steepest Descent and Equiangular Tight Frame methods. This paper demonstrates the ability of Bayesian learning to generate parametric dictionaries that can reliably represent the exemplar data and provides the foundation towards inferring the entire variable set of the sparse approximation problem for signal denoising, adaptation and other applications. PMID:28649173
Tseitlin, Mark; Eaton, Sandra S; Eaton, Gareth R
2011-04-01
Selection of the amplitude of magnetic field modulation for continuous wave electron paramagnetic resonance (EPR) often is a trade-off between sensitivity and resolution. Increasing the modulation amplitude improves the signal-to-noise ratio, S/N, at the expense of broadening the signal. Combining information from multiple harmonics of the field-modulated signal is proposed as a method to obtain the first derivative spectrum with minimal broadening and improved signal-to-noise. The harmonics are obtained by digital phase-sensitive detection of the signal at the modulation frequency and its integer multiples. Reconstruction of the first-derivative EPR line is done in the Fourier conjugate domain where each harmonic can be represented as the product of the Fourier transform of the 1st derivative signal with an analytical function. The analytical function for each harmonic can be viewed as a filter. The Fourier transform of the 1st derivative spectrum can be calculated from all available harmonics by solving an optimization problem with the goal of maximizing the S/N. Inverse Fourier transformation of the result produces the 1st derivative EPR line in the magnetic field domain. The use of modulation amplitude greater than linewidth improves the S/N, but does not broaden the reconstructed spectrum. The method works for an arbitrary EPR line shape, but is limited to the case when magnetization instantaneously follows the modulation field, which is known as the adiabatic approximation. Copyright © 2011 Elsevier Inc. All rights reserved.
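The first step of the method, digital phase-sensitive detection of the signal at the modulation frequency and its integer multiples, can be sketched as follows. The simulated detector output, harmonic amplitudes and frequencies are made up, and the low-pass filter is reduced to a simple average over an integer number of modulation periods; the Fourier-domain combination of harmonics is not implemented here.

```python
import numpy as np

# Simulated detector output whose harmonics of the modulation frequency
# carry the information; the amplitudes (2.0, 0.5) are arbitrary.
fs, f_mod = 1.0e7, 1.0e5                  # sample rate, modulation freq (Hz)
t = np.arange(0.0, 1.0e-3, 1.0 / fs)      # an integer number of mod periods
signal = (2.0 * np.cos(2 * np.pi * f_mod * t)
          + 0.5 * np.cos(2 * np.pi * 2 * f_mod * t))

def harmonic_amplitude(x, t, f_mod, n):
    """Digital phase-sensitive detection of the n-th harmonic (in phase)."""
    ref = np.cos(2 * np.pi * n * f_mod * t)
    return 2.0 * np.mean(x * ref)         # the average acts as the low-pass

h1 = harmonic_amplitude(signal, t, f_mod, 1)
h2 = harmonic_amplitude(signal, t, f_mod, 2)
```

Each detected harmonic would then be Fourier transformed and combined with the analytical filter functions described above to reconstruct the first-derivative line.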
Seydnejad, Saeid R
2016-02-01
Extracting the input signal of a neuron by analyzing its spike output is an important step toward understanding how external information is coded into discrete events of action potentials and how this information is exchanged between different neurons in the nervous system. Most of the existing methods analyze this decoding problem in a stochastic framework and use probabilistic metrics such as maximum-likelihood method to determine the parameters of the input signal assuming a leaky and integrate-and-fire (LIF) model. In this article, the input signal of the LIF model is considered as a combination of orthogonal basis functions. The coefficients of the basis functions are found by minimizing the norm of the observed spikes and those generated by the estimated signal. This approach gives rise to the deterministic reconstruction of the input signal and results in a simple matrix identity through which the coefficients of the basis functions and therefore the neuronal stimulus can be identified. The inherent noise of the neuron is considered as an additional factor in the membrane potential and is treated as the disturbance in the reconstruction algorithm. The performance of the proposed scheme is evaluated by numerical simulations, and it is shown that input signals with different characteristics can be well recovered by this algorithm.
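The deterministic core of the approach, expressing the stimulus as a combination of orthogonal basis functions and recovering the coefficients through a simple least-squares matrix identity, can be sketched as below. The cosine basis and coefficients are hypothetical, and the spike-generation and LIF-model details are omitted: the sketch shows only the coefficient-recovery step.

```python
import numpy as np

# Hypothetical stimulus: a combination of four orthogonal cosine basis
# functions with made-up coefficients c_true.
t = np.linspace(0.0, 1.0, 200, endpoint=False)
basis = np.stack([np.cos(2 * np.pi * k * t) for k in range(4)], axis=1)
c_true = np.array([0.5, 1.0, 0.0, -0.3])
stimulus = basis @ c_true

# Recover the coefficients by least squares, c = (B^T B)^{-1} B^T s,
# computed here via lstsq for numerical stability.
c_hat, *_ = np.linalg.lstsq(basis, stimulus, rcond=None)
```

In the paper, the quantity being fit is the observed spike train rather than the stimulus itself, with the neuron's intrinsic noise treated as a disturbance in the same identity.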
Temporal Super Resolution Enhancement of Echocardiographic Images Based on Sparse Representation.
Gifani, Parisa; Behnam, Hamid; Haddadi, Farzan; Sani, Zahra Alizadeh; Shojaeifard, Maryam
2016-01-01
A challenging issue for echocardiographic image interpretation is the accurate analysis of small transient motions of myocardium and valves during real-time visualization. A higher frame rate video may reduce this difficulty, and temporal super resolution (TSR) is useful for illustrating the fast-moving structures. In this paper, we introduce a novel framework that optimizes TSR enhancement of echocardiographic images by utilizing temporal information and sparse representation. The goal of this method is to increase the frame rate of echocardiographic videos, and therefore enable more accurate analyses of moving structures. For the proposed method, we first derived temporal information by extracting intensity variation time curves (IVTCs) assessed for each pixel. We then designed both low-resolution and high-resolution overcomplete dictionaries based on prior knowledge of the temporal signals and a set of prespecified known functions. The IVTCs can then be described as linear combinations of a few prototype atoms in the low-resolution dictionary. We used the Bayesian compressive sensing (BCS) sparse recovery algorithm to find the sparse coefficients of the signals. We extracted the sparse coefficients and the corresponding active atoms in the low-resolution dictionary to construct new sparse coefficients corresponding to the high-resolution dictionary. Using the estimated atoms and the high-resolution dictionary, a new IVTC with more samples was constructed. Finally, by placing the new IVTC signals in the original IVTC positions, we were able to reconstruct the original echocardiography video with more frames. The proposed method does not require training of low-resolution and high-resolution dictionaries, nor does it require motion estimation; it does not blur fast-moving objects, and does not have blocking artifacts.
Signal Reconstruction and Analysis Via New Techniques in Harmonic and Complex Analysis
2005-08-31
June 30th, 2005. FORWARD: We have used tools from the theory of harmonic analysis and number theory to extend existing theories and develop new approaches…likelihood estimates for the sparse data sets on which our methods work. We are also working on extending our work to multiply periodic processes. We…deconvolution and sampling to radial domains, exploiting coprime relationships among zero sets of Bessel functions. We have also discussed applications
NASA Astrophysics Data System (ADS)
Jesús Rubio, Maria; Sanchez, Guiomar; Saez, Alberto; Vázquez-Loureiro, David; Bao, Roberto; José Pueyo, Juan; Gómez-Paccard, Miriam; Gonçalves, Vitor; Raposeiro, Pedro M.; Francus, Pierre; Hernández, Armand; Margalef, Olga; Buchaca, Teresa; Pla, Sergi; Barreiro-Lostres, Fernando; Valero-Garcés, Blas L.; Giralt, Santiago
2013-04-01
A radiocarbon date at the base of this fine mixture indicates that the record covers the last ca. 650 cal. years B.P., which corresponds to the last recorded eruption. The dark brown layers are dominated by organic matter (low XRF signal and almost no diatoms), whereas the light brown facies are mainly made up of terrigenous particles (high XRF signal and a high content of benthic diatoms) and vascular plant macroremains. Bulk organic matter analyses have revealed that algae constitute the main component of the organic fraction. However, the organic matter in the dark layers is composed of C3 plant material, consistent with the clastic nature of this facies, deposited during flood events. Increased precipitation, governed by the negative phase of the NAO, together with the steep borders of the Sete Cidades crater, substantially increases erosion of the catchment and hence enhances the runoff that reaches Azul Lake, producing the flood events. Therefore, identifying, characterizing and counting the dark layers would allow reconstruction of the intensity and periodicity of the negative phase of the NAO climate mode.
Sparse regularization for force identification using dictionaries
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method, based on minimizing the l2-norm of the response residual, employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries (Db6 wavelets, Sym4 wavelets and cubic B-spline functions) can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
Evolving sparse stellar populations
NASA Astrophysics Data System (ADS)
Bruzual, Gustavo; Gladis Magris, C.; Hernández-Pérez, Fabiola
2017-03-01
We examine the role that stochastic fluctuations in the IMF and in the number of interacting binaries have on the spectro-photometric properties of sparse stellar populations as a function of age and metallicity.
Inverse polynomial reconstruction method in DCT domain
NASA Astrophysics Data System (ADS)
Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen
2012-12-01
The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article we derive a framework for inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on the minimum description length principle and cross-validation are devised to select the polynomial orders, as required by the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in the sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework achieve significant improvements over wavelet counterparts for this class of signals.
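The energy-compaction property the article builds on is easy to demonstrate: keep only the largest DCT coefficients of a piecewise smooth signal and invert. The sketch below (SciPy's orthonormal DCT-II, with a made-up toy signal) shows that idea only; the article's inverse polynomial reconstruction itself is not implemented here.

```python
import numpy as np
from scipy.fft import dct, idct

# Toy piecewise smooth signal: two smooth pieces with a jump at t = 0.5.
n = 256
t = np.linspace(0.0, 1.0, n)
x = np.where(t < 0.5, t**2, 1.0 - t)

# Keep only the K largest-magnitude DCT-II coefficients and invert.
c = dct(x, norm="ortho")
K = 40
c[np.argsort(np.abs(c))[:-K]] = 0.0
x_hat = idct(c, norm="ortho")

# The error is small overall but concentrated near the discontinuity --
# the spurious (Gibbs-type) behavior the article addresses.
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

The inverse polynomial reconstruction step would then map these few DCT coefficients to polynomial coefficients on each smooth piece, removing the oscillation at the jump.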
Structured sparse models for classification
NASA Astrophysics Data System (ADS)
Castrodad, Alexey
The main focus of this thesis is the modeling and classification of high-dimensional data using structured sparsity. Sparse models, in which data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. The success of sparse modeling is largely due to its ability to efficiently use the redundancy of the data and find its underlying structure. In a classification setting, we capitalize on this advantage to properly model and separate the structure of the classes. We design and validate modeling solutions to challenging problems arising in computer vision and remote sensing. We propose both supervised and unsupervised schemes for the modeling of human actions from motion imagery under a wide variety of acquisition conditions. In the supervised case, the main goal is to classify the human actions in the video given a predefined set of actions to learn from. In the unsupervised case, the main goal is to analyze the spatio-temporal dynamics of the individuals in the scene without having any prior information on the actions themselves. We also propose a model for remotely sensed hyperspectral imagery, where the main goal is to perform automatic spectral source separation and mapping at the subpixel level. Finally, we present a sparse model for sensor fusion to exploit the common structure and enforce collaboration of hyperspectral with LiDAR data for better mapping capabilities. In all these scenarios, we demonstrate that the data can be expressed as a combination of atoms from a class-structured dictionary. This data representation becomes essentially a "mixture of classes," and by directly exploiting the sparse codes, one can attain highly accurate classification performance with relatively unsophisticated classifiers.
Multichannel sparse spike inversion
NASA Astrophysics Data System (ADS)
Pereg, Deborah; Cohen, Israel; Vassiliou, Anthony A.
2017-10-01
In this paper, we address the problem of sparse multichannel seismic deconvolution. We introduce multichannel sparse spike inversion as an iterative procedure that deconvolves the seismic data and recovers the Earth's two-dimensional reflectivity image while taking into consideration the relations between spatially neighboring traces. We demonstrate the improved performance of the proposed algorithm and its robustness to noise, compared to a competitive single-channel algorithm, through simulations and real seismic data examples.
Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data
Chartrand, Rick
2009-01-01
Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
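A common way to realize nonconvex (l_p, p < 1) sparse recovery is iteratively reweighted least squares with a decreasing smoothing parameter. The sketch below is one generic variant of that idea, not the Fourier-based algorithm of the paper; the dimensions, p, and annealing schedule are arbitrary choices.

```python
import numpy as np

def irls_lp(A, y, p=0.5, n_outer=8, n_inner=10):
    """Smoothed l_p (p < 1) minimization subject to Ax = y, via iteratively
    reweighted least squares with an annealed smoothing parameter eps."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # minimum-l2 starting point
    eps = 1.0
    for _ in range(n_outer):
        for _ in range(n_inner):
            w = (x**2 + eps) ** (1.0 - p / 2.0)    # weights ~ |x|^(2-p)
            # weighted minimum-norm solution: x = W A^T (A W A^T)^{-1} y
            x = w * (A.T @ np.linalg.solve(A @ (w[:, None] * A.T), y))
        eps /= 10.0                                # anneal the smoothing
    return x

# Toy instance: a 3-sparse length-100 signal from 30 noiseless measurements,
# fewer than l1 minimization typically needs comfortably.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[10, 50, 90]] = [1.5, -1.0, 2.0]
y = A @ x_true
x_hat = irls_lp(A, y)
```

The annealing of eps is what lets the nonconvex objective avoid poor local minima, mirroring the continuation strategies used in the nonconvex CS literature.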
NASA Astrophysics Data System (ADS)
Khaninezhad, Mohammadreza M.; Jafarpour, Behnam
2014-07-01
Despite their apparent high dimensionality, spatially distributed hydraulic properties of geologic formations can often be compactly (sparsely) described in a properly designed basis. Hence, the estimation of high-dimensional subsurface flow properties from dynamic performance and monitoring data can be formulated and solved as a sparse reconstruction inverse problem. Recent advances in statistical signal processing, formalized under the compressed sensing paradigm, provide important guidelines on formulating and solving sparse inverse problems, primarily for linear models and using a deterministic framework. Given the uncertainty in describing subsurface physical properties, even after integration of the dynamic data, it is important to develop a practical sparse Bayesian inversion approach to enable uncertainty quantification. In this paper, we use sparse geologic dictionaries to compactly represent uncertain subsurface flow properties and develop a practical sparse Bayesian method for effective data integration and uncertainty quantification. The multi-Gaussian assumption that is widely used in classical probabilistic inverse theory is not appropriate for representing sparse prior models. Following the results presented by the compressed sensing paradigm, the Laplace (or double exponential) probability distribution is found to be more suitable for representing sparse parameters. However, combining Laplace priors with the frequently used Gaussian likelihood functions leads to neither a Laplace nor a Gaussian posterior distribution, which complicates the analytical characterization of the posterior. Here, we first express the form of the Maximum A-Posteriori (MAP) estimate for Laplace priors and then use the Monte-Carlo-based Randomize Maximum Likelihood (RML) method to generate approximate samples from the posterior distribution. The proposed Sparse RML (SpRML) approximate sampling approach can be used to assess the uncertainty in the calibrated model with a
An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.
Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi
2016-02-01
Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. The method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids overfitting. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from the original ictal and interictal training samples with an online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary, and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44%, with a false detection rate of 0.23/h and an average latency of -5.14 s, have been achieved with our proposed method.
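The final classification step, comparing reconstruction residuals over the two sub-dictionaries, can be sketched as follows. Plain least squares stands in for the paper's elastic-net sparse coding, and both sub-dictionaries are random placeholders rather than learned ones; sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random stand-ins for the learned seizure / nonseizure sub-dictionaries
# (64-dimensional feature vectors, 20 atoms each).
D_seiz = rng.standard_normal((64, 20))
D_nons = rng.standard_normal((64, 20))

def residual(D, s):
    """Norm of the part of s not explained by a least-squares fit over D."""
    coef, *_ = np.linalg.lstsq(D, s, rcond=None)
    return np.linalg.norm(s - D @ coef)

# A test sample lying in the span of the seizure sub-dictionary is assigned
# to the class whose sub-dictionary reconstructs it with smaller residual.
s = D_seiz @ rng.standard_normal(20)
label = "seizure" if residual(D_seiz, s) < residual(D_nons, s) else "nonseizure"
```

With sparse coding in place of least squares, the comparison is the same: the sub-dictionary trained on the sample's true class explains it with the smaller residual.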
Adaptive compressed sensing recovery utilizing the property of signal's autocorrelations.
Fu, Changjun; Ji, Xiangyang; Dai, Qionghai
2012-05-01
Perfect compressed sensing (CS) recovery can be achieved when a basis space is found that sparsely represents the original signal. However, due to the diversity of signals, there is no universal predetermined basis space that can sparsely represent all kinds of signals, which results in unsatisfactory performance. To improve the accuracy of the recovered signal, this paper proposes an adaptive-basis CS reconstruction algorithm that minimizes the rank of an accumulated matrix (MRAM), whose eigenvectors approximate the optimal basis for sparsely representing the original signal. The accumulated matrix is constructed to efficiently exploit the second-order statistical property of the signal's autocorrelations. Based on the theory of matrix completion, MRAM reconstructs the original signal from its random projections under the observation that the constructed accumulated matrix is of low rank for most natural signals, such as periodic signals and those coming from an autoregressive stationary process. Experimental results show that the proposed MRAM efficiently improves the reconstruction quality compared with existing algorithms.
X-ray computed tomography using curvelet sparse regularization
Wieczorek, Matthias; Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.
2015-04-15
Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method’s strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
Advanced spectral analysis of ionospheric waves observed with sparse arrays
NASA Astrophysics Data System (ADS)
Helmboldt, J. F.; Intema, H. T.
2014-02-01
This paper presents a case study from a single 6 h observing period to illustrate the application of techniques developed for interferometric radio telescopes to the spectral analysis of observations of ionospheric fluctuations with sparse arrays. We have adapted the deconvolution methods used for making high dynamic range images of cosmic sources with radio arrays to making comparably high dynamic range maps of spectral power of wavelike ionospheric phenomena. In the example presented here, we have used observations of the total electron content (TEC) gradient derived from Very Large Array (VLA) observations of synchrotron emission from two galaxy clusters at 330 MHz, as well as GPS-based TEC measurements from a sparse array of 33 receivers located within New Mexico near the VLA. We show that these techniques provide a significant improvement in signal-to-noise ratio (S/N) of detected wavelike structures by correcting for both measurement inaccuracies and wavefront distortions. This is especially true for the GPS data when combining all available satellite/receiver pairs, which probe a larger physical area and likely have a wider variety of measurement errors than in the single-satellite case. In this instance, we found that the peak S/N of the detected waves was improved by more than an order of magnitude. The data products generated by the deconvolution procedure also allow for a reconstruction of the fluctuations as a two-dimensional waveform/phase screen that can be used to correct for their effects.
Grassmannian sparse representations
NASA Astrophysics Data System (ADS)
Azary, Sherif; Savakis, Andreas
2015-05-01
We present Grassmannian sparse representations (GSR), a sparse representation Grassmann learning framework for efficient classification. Sparse representation classification offers a powerful approach for recognition in a variety of contexts. However, a major drawback of sparse representation methods is their computational performance and memory utilization for high-dimensional data. A Grassmann manifold is a smooth space whose points represent subspaces, with relationships between points defined through mappings of orthogonal matrices. Grassmann manifolds are well suited for computer vision problems because they promote high between-class discrimination and within-class clustering, while offering computational advantages by mapping each subspace onto a single point. The GSR framework combines Grassmannian kernels and sparse representations, including regularized least squares and least angle regression, to achieve high-accuracy recognition while overcoming the drawbacks of performance and dependence on high-dimensional data distributions. The effectiveness of GSR is demonstrated on computationally intensive multiview action sequences, three-dimensional action sequences, and face recognition datasets.
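The idea that each subspace maps to a single point, with distances defined through orthogonal matrices, can be made concrete with principal angles. This sketch is not the GSR framework, only a minimal illustration of Grassmann geometry: the distance between two subspaces is computed from the SVD of the product of their orthonormal bases.

```python
import numpy as np

def grassmann_dist(A, B):
    """Geodesic distance between the subspaces spanned by the columns of A and B,
    via principal angles obtained from the SVD of Qa^T Qb."""
    Qa, _ = np.linalg.qr(A)                      # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)
    s = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(s))          # norm of the principal-angle vector

I2 = np.eye(4)[:, :2]                            # a 2-D subspace of R^4
R = np.array([[0.6, -0.8], [0.8, 0.6]])          # rotate the basis within the subspace
J2 = np.eye(4)[:, 2:]                            # the orthogonal complement

d_rot = grassmann_dist(I2, I2 @ R)               # same subspace, different basis: 0
d_orth = grassmann_dist(I2, J2)                  # orthogonal subspaces: pi/2 per angle
```

Basis changes within a subspace leave the point on the manifold fixed, which is exactly why subspace-valued data compresses to single points.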
Sparse distributed memory overview
NASA Technical Reports Server (NTRS)
Raugh, Mike
1990-01-01
The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
Efficient convolutional sparse coding
Wohlberg, Brendt
2017-06-20
Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
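The frequency-domain linear solve at the heart of such methods can be illustrated on a drastically simplified single-filter problem (a stand-in for the full multi-dictionary ADMM step, which uses a Sherman-Morrison update per frequency). Under circular convolution, the regularized least-squares system diagonalizes in the Fourier domain and is solved per frequency in closed form, at O(N log N) total cost.

```python
import numpy as np

n = 128
d = np.zeros(n); d[0], d[1] = 1.0, 0.5           # short filter with non-vanishing spectrum
x_true = np.zeros(n); x_true[[10, 40, 90]] = [1.0, -2.0, 1.5]   # sparse coefficient map
s = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x_true)))    # s = d (*) x, circular

# Solve argmin_x ||d (*) x - s||^2 + rho*||x||^2 entirely in the frequency domain.
rho = 1e-6                                        # small quadratic term, as in an ADMM x-step
Df, Sf = np.fft.fft(d), np.fft.fft(s)
Xf = np.conj(Df) * Sf / (np.abs(Df) ** 2 + rho)   # per-frequency closed-form solution
x_hat = np.real(np.fft.ifft(Xf))
err = np.max(np.abs(x_hat - x_true))
```

The same diagonalization is what removes the O(M^3 N) cost of a spatial-domain solver: the big coupled system falls apart into N tiny independent ones.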
Multiple Sparse Representations Classification
Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik
2015-01-01
Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and it is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and
Characterizing heterogeneity among virus particles by stochastic 3D signal reconstruction
NASA Astrophysics Data System (ADS)
Xu, Nan; Gong, Yunye; Wang, Qiu; Zheng, Yili; Doerschuk, Peter C.
2015-09-01
In single-particle cryo electron microscopy, many electron microscope images, each of a single instance of a biological particle such as a virus or a ribosome, are measured, and the 3-D electron scattering intensity of the particle is reconstructed by computation. Because each instance of the particle is imaged separately, it should be possible to characterize the heterogeneity of the different instances of the particle as well as a nominal reconstruction of the particle. In this paper, such an algorithm is described and demonstrated on the bacteriophage Hong Kong 97. The algorithm is a statistical maximum likelihood estimator computed by an expectation maximization algorithm implemented in Matlab software.
BLIND COMPRESSED SENSING WITH SPARSE DICTIONARIES FOR ACCELERATED DYNAMIC MRI.
Lingala, Sajan Goud; Jacob, Mathews
2013-01-01
Several algorithms that model the voxel time series as a sparse linear combination of basis functions in a fixed dictionary were introduced to recover dynamic MRI data from undersampled Fourier measurements. We have recently demonstrated that the joint estimation of dictionary basis and the sparse coefficients from the k-space data results in improved reconstructions. In this paper, we investigate the use of additional priors on the learned basis functions. Specifically, we assume the basis functions to be sparse in pre-specified transform or operator domains. Our experiments show that this constraint enables the suppression of noisy basis functions, thus further improving the quality of the reconstructions. We demonstrate the usefulness of the proposed method through various reconstruction examples.
NASA Astrophysics Data System (ADS)
St-Jacques, J. M.; Cumming, B. F.; Smol, J. P.; Sauchyn, D.
2015-12-01
High-resolution proxy reconstructions are essential to assess the rate and magnitude of anthropogenic global warming. High-resolution pollen records are being critically examined for the production of accurate climate reconstructions of the last millennium, often as extensions of tree-ring records. Past climate inference from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have directly altered vegetation, and hence modern pollen deposition is a product of landscape disturbance and climate, unlike in the past, when climate-driven processes dominated. This could cause serious bias in pollen reconstructions. In the US Midwest, direct human impacts have greatly altered the vegetation and pollen rain since Euro-American settlement in the mid-19th century. Using instrumental climate data from the early 1800s from Fort Snelling (Minnesota), we assessed the bias from the conventional method of inferring climate from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The pre-settlement calibration set provides more accurate reconstructions of 19th century temperature than the modern set does. When both calibration sets are used to reconstruct temperatures since AD 1116 from a varve-dated pollen record from Lake Mina, Minnesota, the conventional method produces significant low-frequency (centennial-scale) signal attenuation and a positive bias of 0.8–1.7 °C, resulting in an overestimation of Little Ice Age temperature and an underestimation of anthropogenic warming. We also compared the pollen-inferred moisture reconstruction to a four-century tree-ring-inferred moisture record from Minnesota and the Dakotas, which shows that the tree-ring reconstruction is biased towards dry conditions and records wet periods relatively poorly, giving a false impression of regional aridity. The tree-ring chronology also suggests varve
Yao, Jincao; Yu, Huimin; Hu, Roland
2017-01-01
This paper introduces a new implicit-kernel-sparse-shape-representation-based object segmentation framework. Given an input object whose shape is similar to some of the elements in the training set, the proposed model can automatically find a cluster of implicit kernel sparse neighbors to approximately represent the input shape and guide the segmentation. A distance-constrained probabilistic definition together with a dualization energy term is developed to connect high-level shape representation and low-level image information. We theoretically prove that our model not only derives from two projected convex sets but is also equivalent to a sparse-reconstruction-error-based representation in the Hilbert space. Finally, a "wake-sleep"-based segmentation framework is applied to drive the evolutionary curve to recover the original shape of the object. We test our model on two public datasets. Numerical experiments on both synthetic images and real applications show the superior capabilities of the proposed framework.
Touchan, Ramzi; Woodhouse, Connie A.; Meko, David M.; Allen, Craig
2011-01-01
Drought is a recurring phenomenon in the American Southwest. Since the frequency and severity of hydrologic droughts and other hydroclimatic events are of critical importance to the ecology and rapidly growing human population of this region, knowledge of long-term natural hydroclimatic variability is valuable for resource managers and policy-makers. An October–June precipitation reconstruction for the period AD 824–2007 was developed from multi-century tree-ring records of Pseudotsuga menziesii (Douglas-fir), Pinus strobiformis (Southwestern white pine) and Pinus ponderosa (Ponderosa pine) for the Jemez Mountains in Northern New Mexico. Calibration and verification statistics for the period 1896–2007 show a high level of skill, and account for a significant portion of the observed variance (>50%) irrespective of which period is used to develop or verify the regression model. Split-sample validation supports our use of a reconstruction model based on the full period of reliable observational data (1896–2007). A recent segment of the reconstruction (2000–2006) emerges as the driest 7-year period sensed by the trees in the entire record. That this period was only moderately dry in precipitation anomaly likely indicates accentuated stress from other factors, such as warmer temperatures. Correlation field maps of actual and reconstructed October–June total precipitation, sea surface temperatures and 500-mb geopotential heights show characteristics that are similar to those indicative of El Niño–Southern Oscillation patterns, particularly with regard to ocean and atmospheric conditions in the equatorial and north Pacific. Our 1184-year reconstruction of hydroclimatic variability provides long-term perspective on current and 20th century wet and dry events in Northern New Mexico, is useful to guide expectations of future variability, aids sustainable water management, provides scenarios for drought planning and as inputs for hydrologic models under a
NASA Astrophysics Data System (ADS)
Kim, Hojin; Chen, Josephine; Wang, Adam; Chuang, Cynthia; Held, Mareike; Pouliot, Jean
2016-09-01
The compressed sensing (CS) technique has been employed to reconstruct CT/CBCT images from fewer projections, as it is designed to recover a sparse signal from highly under-sampled measurements. Since the CT image itself cannot be sparse, a variety of transforms were developed to make the image sufficiently sparse. The total-variation (TV) transform with the local image gradient in L1-norm was adopted in most cases. This approach, however, which utilizes very local information and penalizes the weight at a constant rate regardless of different degrees of spatial gradient, may not produce qualified reconstructed images from noise-contaminated CT projection data. This work presents a new non-local operator of total-variation (NLTV) to overcome the deficits stated above by utilizing a more global search and non-uniform weight penalization in reconstruction. To further improve the reconstructed results, a reweighted L1-norm that approximates the ideal sparse signal recovery of the L0-norm is incorporated into the NLTV reconstruction with additional iterates. This study tested the proposed reconstruction method (reweighted NLTV) on under-sampled projections of 4 objects in 5 experiments (1 digital phantom with low- and high-noise scenarios, 1 pelvic CT, and 2 CBCT images). We assessed its performance against the conventional TV, NLTV and reweighted TV transforms in tissue contrast, reconstruction accuracy, and imaging resolution by comparing the contrast-noise-ratio (CNR), normalized root-mean-square error (nRMSE), and profiles of the reconstructed images. Relative to the conventional NLTV, combining the reweighted L1-norm with NLTV further enhanced the CNRs by 2-4 times and improved reconstruction accuracy. Overall, except for the digital phantom with the low-noise simulation, our proposed algorithm produced the reconstructed image with the lowest nRMSEs and the highest CNRs for each experiment.
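The effect of the reweighted L1-norm can be seen in a minimal one-dimensional denoising toy (identity sensing operator, nothing like the full NLTV reconstruction): plain soft-thresholding shrinks every coefficient by the same amount, biasing large entries, while reweighting with weights inversely proportional to the current magnitudes penalizes small entries harder and large entries less, approximating L0 behaviour.

```python
import numpy as np

y = np.array([3.0, 0.05, -2.0, 0.0, 0.1])     # noisy observation of a sparse signal
lam, eps = 0.5, 0.1

# Plain L1 (soft-thresholding): every surviving entry is shrunk by lam.
x_l1 = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Reweighted L1: w_i ~ 1/|x_i| makes the effective threshold small for large
# entries and large for small ones, reducing the shrinkage bias.
x = y.copy()
for _ in range(5):
    w = 1.0 / (np.abs(x) + eps)
    x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)
```

Both estimates zero out the small entries, but the reweighted solution keeps the large coefficients much closer to their observed values.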
Sparse Exponential Family Principal Component Analysis.
Lu, Meng; Huang, Jianhua Z; Qian, Xiaoning
2016-12-01
We propose a Sparse exponential family Principal Component Analysis (SePCA) method suitable for any type of data following exponential family distributions, to achieve simultaneous dimension reduction and variable selection for better interpretation of the results. Because of the generality of exponential family distributions, the method can be applied to a wide range of applications, in particular when analyzing high dimensional next-generation sequencing data and genetic mutation data in genomics. The use of sparsity-inducing penalty helps produce sparse principal component loading vectors such that the principal components can focus on informative variables. By using an equivalent dual form of the formulated optimization problem for SePCA, we derive optimal solutions with efficient iterative closed-form updating rules. The results from both simulation experiments and real-world applications have demonstrated the superiority of our SePCA in reconstruction accuracy and computational efficiency over traditional exponential family PCA (ePCA), the existing Sparse PCA (SPCA) and Sparse Logistic PCA (SLPCA) algorithms.
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Simulations show that the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
SAR Image despeckling via sparse representation
NASA Astrophysics Data System (ADS)
Wang, Zhongmei; Yang, Xiaomei; Zheng, Liang
2014-11-01
SAR image despeckling is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, a new approach is proposed for removing multiplicative noise in SAR images, based on nonlocal sparse representation through dictionary learning and collaborative filtering. First, an image is divided into many patches, and clusters are formed by grouping log-similar image patches using Fuzzy C-means (FCM). For each cluster, an over-complete dictionary is computed using the K-SVD method, which iteratively updates the dictionary and the sparse coefficients. The patches belonging to the same cluster are then reconstructed by a sparse combination of the corresponding dictionary atoms. The reconstructed patches are finally collaboratively aggregated to build the denoised image. The experimental results show that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both objective evaluation indices (PSNR and ENL) and subjective visual perception.
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.
2015-01-01
Digital holography is a technique that includes recording of an interference pattern with a digital photosensor, processing of the obtained holographic data, and reconstruction of the object wavefront. Increasing the signal-to-noise ratio (SNR) of reconstructed digital holograms is especially important in such fields as image encryption, pattern recognition, and static and dynamic display of 3D scenes. In this paper, compensation of the photosensor light spatial noise portrait (LSNP) is proposed to increase the SNR of reconstructed digital holograms. To verify the proposed method, numerical experiments with computer-generated Fresnel holograms with a resolution of 512×512 elements were performed. Registration of shots with a Canon EOS 400D digital camera was simulated. It is shown that the frame-averaging method alone increases SNR only up to 4 times, with further increase limited by spatial noise. Applying the LSNP compensation method in conjunction with frame averaging yields a 10-fold SNR increase. This value was obtained for an LSNP measured with 20% error. With a more accurately measured LSNP, SNR can be increased up to 20 times.
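The core argument, that frame averaging removes only temporal noise while a fixed spatial-noise portrait must be subtracted separately, can be reproduced with a tiny synthetic experiment. This is a hedged illustration, not the paper's simulation: the "hologram" is a random image and the noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
clean = rng.uniform(0, 1, (64, 64))           # stand-in for the ideal recorded intensity
lsnp = 0.2 * rng.standard_normal((64, 64))    # fixed light spatial noise portrait of the sensor
frames = [clean + lsnp + 0.1 * rng.standard_normal((64, 64)) for _ in range(50)]

avg = np.mean(frames, axis=0)                 # averaging suppresses temporal noise only
compensated = avg - lsnp                      # subtracting the LSNP removes spatial noise too

def snr(est):
    """Ratio of signal energy to residual-error energy versus the clean image."""
    return np.linalg.norm(clean) / np.linalg.norm(est - clean)
```

After averaging, the residual error is dominated by the unchanged spatial noise, so the LSNP subtraction gives the large additional SNR gain the abstract reports.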
An infrared image super-resolution reconstruction method based on compressive sensing
NASA Astrophysics Data System (ADS)
Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei
2016-05-01
Limited by the properties of the infrared detector and camera lens, infrared images often lack detail and appear visually indistinct. The spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this thesis presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation method and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction problem in CS theory. In our work, the sparse transformation matrix is obtained through a difference operation on the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with a redundant dictionary obtained by sample training such as K-SVD. The experimental results show that our method achieves favorable performance and good stability with low algorithm complexity.
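The OMP step named in this abstract is a generic greedy solver, and a minimal version fits in a dozen lines. The sketch below recovers a synthetic sparse vector from random measurements; it is not the paper's SRR pipeline (no degradation model or difference-operator transform), and the sizes and seed are arbitrary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A to explain y."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))           # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)  # re-fit on the chosen support
        r = y - A[:, idx] @ coef                              # update the residual
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100)) / np.sqrt(60)   # random measurement matrix
x_true = np.zeros(100); x_true[[5, 37, 72]] = [2.0, -1.5, 1.0]
x_hat = omp(A, A @ x_true, k=3)                    # recover from 60 measurements
err = np.linalg.norm(x_hat - x_true)
```

With well-conditioned random measurements and a truly sparse signal, the greedy selection finds the exact support and the least-squares refit makes the recovery essentially exact.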
Neuromagnetic source reconstruction
Lewis, P.S.; Mosher, J.C.; Leahy, R.M.
1994-12-31
In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum-norm-based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
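The minimum-norm behaviour criticized above is easy to demonstrate on a toy inverse problem. This sketch is an illustration of the general principle, not of MEG physics: the "lead field" is a random matrix, and the regularized minimum-norm estimate fits the sensor data exactly yet spreads a single focal source across many locations.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((16, 200))            # toy lead field: 16 sensors, 200 sources
x_true = np.zeros(200); x_true[50] = 1.0      # a single focal source
b = A @ x_true                                # noiseless sensor measurements

# Minimum-norm estimate: among all x with A x = b, pick the smallest ||x||_2
# (equivalently, the MAP estimate under an isotropic Gaussian source prior).
lam = 1e-8                                    # tiny regularizer for numerical stability
x_mn = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(16), b)
```

The estimate reproduces the data but is dense rather than focal, which is exactly the nonphysical consequence of the implicit Gaussian prior.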
Popescu, Mihai; Popescu, Elena-Anda; Fitzgerald-Gustafson, Kathleen; Drake, William B; Lewine, Jeffrey D
2006-12-01
Previous attempts at unequivocal specification of signal strength in fetal magnetocardiographic (fMCG) recordings have used an equivalent current dipole (ECD) to estimate the cardiac vector at the peak of the averaged QRS complex. However, even though the magnitude of the fetal cardiac currents is anticipated to be relatively stable, ECD-based estimates of signal strength show substantial and unrealistic variation when comparing results from different time windows of the same recording session. The present study highlights the limitations of the ECD model, and proposes a new methodology for fetal cardiac source reconstruction. The proposed strategy relies on recursive subspace projections to estimate multiple dipoles that account for the distributed myocardial currents. The dipoles are reconstructed from spatio-temporal fMCG data, and are subsequently used to derive estimators of the cardiac vector over the entire QRS. The new method is evaluated with respect to simulated data derived from a model of ventricular depolarization, which was designed to account for the complexity of the fetal cardiac source configuration on the QRS interval. The results show that the present methodology overcomes the drawbacks of conventional ECD fitting by providing robust estimators of the cardiac vector. Additional evaluation with real fMCG data shows fetal cardiac vectors whose morphology closely resembles that obtained in adult MCG.
Tan, Cheng-Yang; /Fermilab
2011-02-01
A bootstrap algorithm for reconstructing the temporal signal from four of its fractional Fourier intensity spectra in the presence of noise is described. An optical arrangement is proposed which realises the bootstrap method for the measurement of ultrashort laser pulses. The measurement of short laser pulses which are less than 1 ps is an ongoing challenge in optical physics. One reason is that no oscilloscope exists today which can directly measure the time structure of these pulses, and so it becomes necessary to invent other techniques which indirectly provide the necessary information for temporal pulse reconstruction. One method called FROG (frequency resolved optical gating) has been in use since 1991 and is one of the popular methods for recovering these types of short pulses. The idea behind FROG is the use of multiple time-correlated pulse measurements in the frequency domain for the reconstruction. Multiple data sets are required because only intensity information is recorded and not phase, and thus by collecting multiple data sets, there are enough redundant measurements to yield the original time structure, but not necessarily uniquely (or even up to an arbitrary constant phase offset). The objective of this paper is to describe another method which is simpler than FROG. Instead of collecting many auto-correlated data sets, only two spectral intensity measurements of the temporal signal are needed in the absence of noise. The first can be from the intensity components of its usual Fourier transform and the second from its FrFT (fractional Fourier transform). In the presence of noise, a minimum of four measurements are required with the same FrFT order but with two different apertures. Armed with these two or four measurements, a unique solution up to a constant phase offset can be constructed.
Image super-resolution reconstruction based on regularization technique and guided filter
NASA Astrophysics Data System (ADS)
Huang, De-tian; Huang, Wei-qin; Gu, Pei-ting; Liu, Pei-zhong; Luo, Yan-min
2017-06-01
In order to improve the accuracy of sparse representation coefficients and the quality of reconstructed images, an improved image super-resolution algorithm based on sparse representation is presented. In the sparse coding stage, the autoregressive (AR) regularization and the non-local (NL) similarity regularization are introduced to improve the sparse coding objective function. A group of AR models which describe the image local structures are pre-learned from the training samples, and one or several suitable AR models can be adaptively selected for each image patch to regularize the solution space. Then, the image non-local redundancy is obtained by the NL similarity regularization to preserve edges. In the process of computing the sparse representation coefficients, the feature-sign search algorithm is utilized instead of the conventional orthogonal matching pursuit algorithm to improve the accuracy of the sparse coefficients. To restore image details further, a global error compensation model based on weighted guided filter is proposed to realize error compensation for the reconstructed images. Experimental results demonstrate that compared with Bicubic, L1SR, SISR, GR, ANR, NE + LS, NE + NNLS, NE + LLE and A + (16 atoms) methods, the proposed approach has remarkable improvement in peak signal-to-noise ratio, structural similarity and subjective visual perception.
Sparse inpainting and isotropy
Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V.; Marinucci, Domenico; Cammarota, Valentina; Wandelt, Benjamin D.
2014-01-01
Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.
Bayesian sparse channel estimation
NASA Astrophysics Data System (ADS)
Chen, Chulong; Zoltowski, Michael D.
2012-05-01
In Orthogonal Frequency Division Multiplexing (OFDM) systems, the technique used to estimate and track the time-varying multipath channel is critical to ensure reliable, high-data-rate communications. It is recognized that wireless channels often exhibit a sparse structure, especially in wideband and ultra-wideband systems. To exploit this sparse structure to reduce the number of pilot tones and increase the channel estimation quality, the application of compressed sensing to channel estimation is proposed. In this article, to make compressed channel estimation more feasible for practical applications, it is investigated from the perspective of Bayesian learning. Under the Bayesian learning framework, both the large-scale compressed sensing problem and the large time delay incurred in estimating the doubly selective channel over multiple consecutive OFDM symbols can be avoided. Simulation studies show a significant improvement in channel estimation MSE and reduced computing time compared to conventional compressed channel estimation techniques.
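As an illustrative aside: pilot-based sparse channel estimation of this kind can be sketched with orthogonal matching pursuit recovering a few channel taps from a partial DFT (pilot-tone) matrix. The dimensions and tap counts below are illustrative, and OMP stands in for the article's Bayesian learning approach.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A to explain y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(j)
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

# Sparse multipath channel: 3 taps in a length-64 impulse response,
# observed on 16 random pilot subcarriers (rows of the DFT matrix).
rng = np.random.default_rng(1)
n, taps = 64, 3
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
pilots = rng.choice(n, size=16, replace=False)
h = np.zeros(n, dtype=complex)
h[rng.choice(n, taps, replace=False)] = rng.standard_normal(taps)
y = F[pilots] @ h                           # pilot-tone observations
h_hat = omp(F[pilots], y, taps)
```

The estimate is k-sparse by construction, and each greedy step strictly reduces the residual on the pilot observations.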
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1988-01-01
Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.
Sparse matrix test collections
Duff, I.
1996-12-31
This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.
Yin, Junming; Chen, Xi; Xing, Eric P.
2016-01-01
We consider the problem of sparse variable selection in nonparametric additive models, with the prior knowledge of the structure among the covariates to encourage those variables within a group to be selected jointly. Previous works either study the group sparsity in the parametric setting (e.g., group lasso), or address the problem in the nonparametric setting without exploiting the structural information (e.g., sparse additive models). In this paper, we present a new method, called group sparse additive models (GroupSpAM), which can handle group sparsity in additive models. We generalize the ℓ1/ℓ2 norm to Hilbert spaces as the sparsity-inducing penalty in GroupSpAM. Moreover, we derive a novel thresholding condition for identifying the functional sparsity at the group level, and propose an efficient block coordinate descent algorithm for constructing the estimate. We demonstrate by simulation that GroupSpAM substantially outperforms the competing methods in terms of support recovery and prediction accuracy in additive models, and also conduct a comparative experiment on a real breast cancer dataset.
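As an illustrative aside: the ℓ1/ℓ2 group penalty used by GroupSpAM corresponds, in the simplest parametric (group-lasso) case, to a block soft-thresholding proximal operator that either zeroes a whole group or shrinks it radially. A minimal sketch, with illustrative values:

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2: shrink each group toward zero,
    dropping any group whose l2 norm falls below the threshold t."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:                        # group survives, shrunk radially
            out[g] = (1 - t / norm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
shrunk = group_soft_threshold(x, groups, t=1.0)
# first group (norm 5) is scaled by 0.8 -> [2.4, 3.2]; second group is zeroed
```

This all-or-nothing behaviour at the group level is what encourages covariates within a group to be selected jointly.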
NASA Astrophysics Data System (ADS)
Ni, Jiang Q.; Ho, Ka L.; Tse, Kai W.
1998-08-01
Conventional synthesis filters in subband systems lose their optimality when additive noise (due, for example, to signal quantization) disturbs the subband components. The multichannel representation of subband signals is combined with the statistical model of input signal to derive the multirate state-space model for the filter bank system with additive subband noises. Thus the signal reconstruction problem in subband systems can be formulated as the process of optimal state estimation in the equivalent multirate state-space model. Incorporated with the vector dynamic model, a 2D multirate state-space model suitable for 2D Kalman filtering is developed. The performance of the proposed 2D multirate Kalman filter can be further improved through adaptive segmentation of the object plane. The object plane is partitioned into disjoint regions based on their spatial activity, and different vector dynamical models are used to characterize the nonstationary object- plane distributions. Finally, computer simulations with the proposed 2D multirate Kalman filter give favorable results.
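As an illustrative aside: the predict/update cycle at the heart of any Kalman reconstruction, including the multirate variant above, can be shown in its simplest scalar form. This is a generic one-dimensional filter with a random-walk state model, not the paper's 2D multirate formulation; all parameters are illustrative.

```python
import random

def kalman_1d(measurements, q=1e-3, r=0.25):
    """Scalar Kalman filter (random-walk state, noisy observations)."""
    x, p = 0.0, 1.0                        # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                             # predict step (process noise q)
        k = p / (p + r)                    # Kalman gain
        x += k * (z - x)                   # correct with the innovation
        p *= 1.0 - k
        estimates.append(x)
    return estimates

random.seed(9)
truth = 2.0
zs = [truth + random.gauss(0.0, 0.5) for _ in range(300)]
est = kalman_1d(zs)
```

With measurement variance r matched to the noise, the estimate settles close to the underlying signal while the gain shrinks toward its steady-state value.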
A relation between algebraic and transform-based reconstruction technique in computed tomography
NASA Astrophysics Data System (ADS)
Kiefhaber, S.; Rosenbaum, M.; Sauer-Greff, W.; Urbansky, R.
2013-07-01
In this contribution a coherent relation between the algebraic and the transform-based reconstruction technique for computed tomography is introduced using the mathematical means of two-dimensional signal processing. There are two advantages arising from that approach. First, the algebraic reconstruction technique can now be used efficiently regarding memory usage without considerations concerning the handling of large sparse matrices. Second, the relation grants a more intuitive understanding as to the convergence characteristics of the iterative method. Besides the gain in theoretical insight these advantages offer new possibilities for application-specific fine tuning of reconstruction techniques.
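As an illustrative aside: the algebraic reconstruction technique referenced above is, at its core, Kaczmarz's method: cyclically projecting the current estimate onto the hyperplane defined by each ray equation. A toy sketch on a random consistent system (illustrative sizes, not a tomographic geometry):

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=200):
    """Algebraic reconstruction technique (Kaczmarz): cyclically project the
    estimate onto the hyperplane of each ray equation <a_i, x> = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(8)
x_true = rng.standard_normal(10)           # toy "image"
A = rng.standard_normal((30, 10))          # toy projection (system) matrix
b = A @ x_true                             # noiseless measurements
x_hat = kaczmarz(A, b)
```

Because the row projections are rank-one updates, the iteration never forms or stores A beyond one row at a time, which is why ART is memory-friendly compared to explicit sparse-matrix handling.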
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan
2017-04-01
Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate the atomic structure and chemistry of materials with atomic resolution. However, STEM's focused electron probe with a high current density causes electron-beam damage, including radiolysis and knock-on damage, when the focused probe is applied to electron-beam-sensitive materials. Therefore, it is highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities of operating STEM under a sparse acquisition scheme to reduce the electron dose have opened up. In this paper, we report our recent approach to implement sparse acquisition in STEM mode, executed by a random sparse scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5%, of randomly chosen unit sampling areas (i.e. electron probe positions), which correspond to pixels of a STEM image, within the region of interest (ROI) of the specimen is scanned with an electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for sampling down to 5% show consistency with the full STEM image acquired by the conventional scanning method. However, practical limitations of conventional STEM instruments, such as internal delays of the STEM control electronics and the continuous electron gun emission, currently hinder achieving the full potential of sparse-acquisition STEM in realizing the low-dose imaging conditions required for the investigation of beam-sensitive materials.
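As an illustrative aside: the sparse-acquisition idea (measure a random subset of probe positions, then inpaint) can be demonstrated with a toy random mask and a simple iterative diffusion fill. The diffusion loop is a crude stand-in for MBIR, and the sampling fraction and image below are illustrative.

```python
import numpy as np

def diffusion_inpaint(img, mask, n_iter=500):
    """Fill unmeasured pixels (mask == False) by repeatedly averaging the four
    neighbours, while keeping measured pixels fixed (a crude MBIR stand-in)."""
    out = np.where(mask, img, img[mask].mean())
    for _ in range(n_iter):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out = np.where(mask, img, avg)      # measured pixels stay exact
    return out

rng = np.random.default_rng(2)
truth = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))  # smooth "specimen"
mask = rng.random(truth.shape) < 0.15       # ~15% of probe positions measured
recon = diffusion_inpaint(truth, mask)
err = np.abs(recon - truth).mean()
```

For smooth content, even this naive fill recovers the image closely from a small fraction of pixels, which is the intuition behind dose reduction by sparse scanning.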
Adaptive Grouping Distributed Compressive Sensing Reconstruction of Plant Hyperspectral Data.
Xu, Ping; Liu, Junfeng; Xue, Lingyun; Zhang, Jingcheng; Qiu, Bo
2017-06-07
With the development of hyperspectral technology, establishing an effective spectral data compressive reconstruction method that can improve data storage and transmission while maintaining spectral information is critical for quantitative remote sensing research and application in vegetation. The spectral adaptive grouping distributed compressive sensing (AGDCS) algorithm is proposed, which enables distributed compressed sensing reconstruction of plant hyperspectral data. The spectral characteristics of hyperspectral data are analyzed and a joint sparse model is constructed. The spectral bands are adaptively grouped and the hyperspectral data are compressed and reconstructed on the basis of this grouping. The experimental results showed that, compared with orthogonal matching pursuit (OMP) and gradient projection for sparse reconstruction (GPSR), AGDCS can significantly improve the visual quality of image reconstruction in the spatial domain. At low sampling rates (below 0.2), the peak signal-to-noise ratio (PSNR) is 13.72 dB higher than that of OMP and 1.66 dB higher than that of GPSR. In the spectral domain, the average normalized root mean square error, mean absolute percentage error, and mean absolute error of AGDCS are 35.38%, 31.83%, and 33.33% lower than those of GPSR, respectively. Additionally, AGDCS achieves relatively high reconstruction efficiency.
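As an illustrative aside: the adaptive band-grouping step can be sketched as a greedy pass that starts a new group whenever a band's correlation with the current group's seed band drops below a threshold. The threshold and the synthetic two-cluster cube below are illustrative, not the paper's actual criterion.

```python
import numpy as np

def group_bands(cube, threshold=0.9):
    """Greedily group adjacent spectral bands whose correlation with the
    current group's first band exceeds a threshold."""
    n_bands = cube.shape[0]
    flat = cube.reshape(n_bands, -1)        # each band flattened to a vector
    groups, current = [], [0]
    for b in range(1, n_bands):
        r = np.corrcoef(flat[current[0]], flat[b])[0, 1]
        if r >= threshold:
            current.append(b)
        else:
            groups.append(current)
            current = [b]
    groups.append(current)
    return groups

# Synthetic 8-band cube: bands 0-3 share one spatial pattern, 4-7 another.
rng = np.random.default_rng(7)
base1, base2 = rng.standard_normal((2, 64))
cube = np.stack([base1 + 0.05 * rng.standard_normal(64) for _ in range(4)] +
                [base2 + 0.05 * rng.standard_normal(64) for _ in range(4)])
cube = cube.reshape(8, 8, 8)                # bands x height x width
groups = group_bands(cube)                  # -> [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Grouping highly correlated bands lets a joint sparse model share supports within each group, which is the mechanism AGDCS exploits.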
Joint sparse representation based automatic target recognition in SAR images
NASA Astrophysics Data System (ADS)
Zhang, Haichao; Nasrabadi, Nasser M.; Huang, Thomas S.; Zhang, Yanning
2011-06-01
In this paper, we introduce a novel joint sparse representation based automatic target recognition (ATR) method using multiple views, which not only handles multi-view ATR without knowledge of the pose but also exploits the correlations among the multiple views for a single joint recognition decision. We cast the problem as a multivariate regression model and recover the sparse representations for the multiple views simultaneously. Recognition is accomplished by classifying the target to the class that gives the minimum total reconstruction error accumulated across all the views. Extensive experiments have been carried out on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public database to evaluate the proposed method against several state-of-the-art methods, such as linear Support Vector Machine (SVM), kernel SVM, and a sparse representation based classifier. Experimental results demonstrate the effectiveness as well as the robustness of the proposed joint sparse representation ATR method.
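As an illustrative aside: the decision rule (assign the sample to the class with minimum total reconstruction error accumulated across views) can be sketched with plain per-class least squares standing in for the joint sparse coding step. Dictionaries and dimensions below are illustrative.

```python
import numpy as np

def classify_by_residual(views, class_dicts):
    """Assign the sample to the class whose dictionary best reconstructs all
    views, i.e. minimum total least-squares residual across views."""
    totals = []
    for D in class_dicts:
        r = 0.0
        for y in views:
            coef, *_ = np.linalg.lstsq(D, y, rcond=None)
            r += np.linalg.norm(y - D @ coef) ** 2
        totals.append(r)
    return int(np.argmin(totals))

rng = np.random.default_rng(3)
D0 = rng.standard_normal((30, 5))           # class-0 training samples as columns
D1 = rng.standard_normal((30, 5))           # class-1 training samples as columns
views = [D1 @ rng.standard_normal(5) for _ in range(3)]   # 3 views of a class-1 target
label = classify_by_residual(views, [D0, D1])
```

Because every view must be explained by the same class dictionary, accumulating residuals across views is what couples the multi-view evidence into one decision.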
SAR target recognition based on improved joint sparse representation
NASA Astrophysics Data System (ADS)
Cheng, Jian; Li, Lan; Li, Hongsheng; Wang, Feng
2014-12-01
In this paper, a SAR target recognition method is proposed based on the improved joint sparse representation (IJSR) model. The IJSR model can effectively combine multiple-view SAR images of the same physical target to improve recognition performance. The classification process contains two stages. In the first stage, convex relaxation is used to obtain support sample candidates via ℓ1-norm minimization. In the second stage, a low-rank matrix recovery strategy is introduced to find the final support samples and the corresponding sparse representation coefficient matrix. Finally, with the minimal reconstruction residual strategy, the SAR target is classified. Experimental results on the MSTAR database show that the recognition performance outperforms state-of-the-art methods, such as the joint sparse representation classification (JSRC) method and the sparse representation classification (SRC) method.
Multi-element array signal reconstruction with adaptive least-squares algorithms
NASA Technical Reports Server (NTRS)
Kumar, R.
1992-01-01
Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.
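As an illustrative aside: adaptively combining multiple feed outputs toward a reference can be sketched with a least-mean-squares (LMS) weight update, a simple stochastic-gradient relative of the adaptive least-squares algorithms evaluated above. The feed count, signals, and step size below are illustrative.

```python
import numpy as np

def lms_combine(feeds, reference, mu=0.05, n_epochs=50):
    """Adapt combining weights w so that w @ feeds tracks the reference
    signal (LMS stand-in for an adaptive least-squares combiner)."""
    n_feeds, n_samples = feeds.shape
    w = np.zeros(n_feeds)
    for _ in range(n_epochs):
        for t in range(n_samples):
            err = reference[t] - w @ feeds[:, t]
            w += mu * err * feeds[:, t]     # stochastic gradient step
    return w

rng = np.random.default_rng(4)
true_w = np.array([0.6, 0.3, 0.1])          # unknown optimal combiner
feeds = rng.standard_normal((3, 400))       # outputs of three focal-plane feeds
reference = true_w @ feeds                  # noiseless desired output
w_hat = lms_combine(feeds, reference)
```

In this noiseless setting the weights converge to the optimal combiner; with reflector deformations the same loop re-weights the feeds to recover most of the lost gain.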
Konte, Tilen; Terpitz, Ulrich; Plemenitaš, Ana
2016-01-01
The basidiomycetous fungus Wallemia ichthyophaga grows between 1.7 and 5.1 M NaCl and is the most halophilic eukaryote described to date. Like other fungi, W. ichthyophaga detects changes in environmental salinity mainly by the evolutionarily conserved high-osmolarity glycerol (HOG) signaling pathway. In Saccharomyces cerevisiae, the HOG pathway has been extensively studied in connection to osmotic regulation, with a valuable knock-out strain collection established. In the present study, we reconstructed the architecture of the HOG pathway of W. ichthyophaga in suitable S. cerevisiae knock-out strains, through heterologous expression of the W. ichthyophaga HOG pathway proteins. Compared to S. cerevisiae, where the Pbs2 (ScPbs2) kinase of the HOG pathway is activated via the SHO1 and SLN1 branches, the interactions between the W. ichthyophaga Pbs2 (WiPbs2) kinase and the W. ichthyophaga SHO1 branch orthologs are not conserved: as well as evidence of poor interactions between the WiSho1 Src-homology 3 (SH3) domain and the WiPbs2 proline-rich motif, the absence of a considerable part of the osmosensing apparatus in the genome of W. ichthyophaga suggests that the SHO1 branch components are not involved in HOG signaling in this halophilic fungus. In contrast, the conserved activation of WiPbs2 by the S. cerevisiae ScSsk2/ScSsk22 kinase and the sensitivity of W. ichthyophaga cells to fludioxonil, emphasize the significance of two-component (SLN1-like) signaling via Group III histidine kinase. Combined with protein modeling data, our study reveals conserved and non-conserved protein interactions in the HOG signaling pathway of W. ichthyophaga and therefore significantly improves the knowledge of hyperosmotic signal processing in this halophilic fungus. PMID:27379041
Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals
Stetter, Olav; Battaglia, Demian; Soriano, Jordi; Geisel, Theo
2012-01-01
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized
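As an illustrative aside: the core quantity, transfer entropy from a candidate driver Y to a target X, can be sketched for discrete time series by counting joint state frequencies with history length 1. This is the plain plug-in estimator, not the paper's generalized, state-conditioned variant; the driving pattern below is synthetic.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y):
    """TE(Y -> X) = sum p(x1, x0, y0) * log2( p(x1|x0,y0) / p(x1|x0) )
    for discrete series, with history length 1 (plug-in estimate, in bits)."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_xy[(x0, y0)]
        p_cond_self = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * math.log2(p_cond_full / p_cond_self)
    return te

random.seed(5)
y = [random.randint(0, 1) for _ in range(5000)]
x = [0] + y[:-1]                  # x copies y with a one-step lag: y drives x
z = [random.randint(0, 1) for _ in range(5000)]   # independent series
te_driver = transfer_entropy(x, y)   # should be near 1 bit
te_noise = transfer_entropy(x, z)    # should be near 0
```

A strong directed coupling shows up as a large TE from the driver while an unrelated series contributes only a small estimation bias, which is the contrast the reconstruction method thresholds on.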
Sepahvand, Majid; Abdali-Mohammadi, Fardin; Mardukhi, Farhad
2016-12-13
The development of sensors with the microelectromechanical systems technology expedites the emergence of new tools for human-computer interaction, such as inertial pens. These pens, which are used as writing tools, do not depend on a specific embedded hardware, and thus, they are inexpensive. Most of the available inertial pen character recognition approaches use the low-level features of inertial signals. This paper introduces a Persian/Arabic handwriting character recognition system for inertial-sensor-equipped pens. First, the motion trajectory of the inertial pen is reconstructed to estimate the position signals by using the theory of inertial navigation systems. The position signals are then used to extract high-level geometrical features. A new metric learning technique is then adopted to enhance the accuracy of character classification. To this end, a characteristic function is calculated for each character using a genetic programming algorithm. These functions form a metric kernel classifying all the characters. The experimental results show that the performance of the proposed method is superior to that of one of the state-of-the-art works in terms of recognizing Persian/Arabic handwriting characters.
Sparse representation for color image restoration.
Mairal, Julien; Elad, Michael; Sapiro, Guillermo
2008-01-01
Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper.
Compressive measurement and feature reconstruction method for autonomous star trackers
NASA Astrophysics Data System (ADS)
Yin, Hang; Yan, Ye; Song, Xin; Yang, Yueneng
2016-12-01
Compressive sensing (CS) theory provides a framework for signal reconstruction using a sub-Nyquist sampling rate. CS theory enables the reconstruction of a signal that is sparse or compressible from a small set of measurements. The current CS application in optical field mainly focuses on reconstructing the original image using optimization algorithms and conducts data processing in full-dimensional image, which cannot reduce the data processing rate. This study is based on the spatial sparsity of star image and proposes a new compressive measurement and reconstruction method that extracts the star feature from compressive data and directly reconstructs it to the original image for attitude determination. A pixel-based folding model that preserves the star feature and enables feature reconstruction is presented to encode the original pixel location into the superposed space. A feature reconstruction method is then proposed to extract the star centroid by compensating distortions and to decode the centroid without reconstructing the whole image, which reduces the sampling rate and data processing rate at the same time. The statistical results investigate the proportion of star distortion and false matching results, which verifies the correctness of the proposed method. The results also verify the robustness of the proposed method to a great extent and demonstrate that its performance can be improved by sufficient measurement in noise cases. Moreover, the result on real star images significantly ensures the correct star centroid estimation for attitude determination and confirms the feasibility of applying the proposed method in a star tracker.
Zheng, Jingyun; Liu, Yang; Hao, Zhixin
2015-01-01
We reconstructed the annual temperature anomaly series in Xinjiang during 1850–2001 based on three kinds of proxies, including 17 tree-ring width chronologies, one tree-ring δ13C series and two δ18O series of ice cores, and instrumental observation data. The low- and high-frequency signal decomposition for the raw temperature proxy data was obtained by a fast Fourier transform filter with a window size of 20 years, which was used to build a good relationship that explained the high variance between the temperature and the proxy data used for the reconstruction. The results showed that for 1850–2001, the temperature during most periods prior to the 1920s was lower than the mean temperature in the 20th century. Remarkable warming occurred in the 20th century at a rate of 0.85°C/100a, which was higher than that during the past 150 years. Two cold periods occurred before the 1870s and around the 1910s, and a relatively warm interval occurred around the 1940s. In addition, the temperature series showed a warming hiatus of approximately 20 years around the 1970s, and a rapid increase since the 1980s. PMID:26632814
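As an illustrative aside: the low/high-frequency split via an FFT filter with a 20-year window can be sketched by zeroing Fourier coefficients whose periods are shorter than the cutoff. The synthetic trend-plus-oscillation series below is illustrative, not the proxy data.

```python
import numpy as np

def fft_lowpass(series, cutoff_years=20.0, dt_years=1.0):
    """Split an annual series into low- and high-frequency components by
    zeroing FFT coefficients with periods shorter than the cutoff."""
    n = len(series)
    spec = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(n, d=dt_years)
    spec[freqs > 1.0 / cutoff_years] = 0.0  # keep periods >= cutoff_years
    low = np.fft.irfft(spec, n=n)
    return low, series - low

years = np.arange(1850, 2002)
slow = 0.01 * (years - 1850)                 # century-scale warming trend
fast = 0.3 * np.sin(2 * np.pi * years / 5.0) # 5-year oscillation
low, high = fft_lowpass(slow + fast)
```

The two components sum back to the original series exactly, and the mean (DC term) stays entirely in the low-frequency part, so trend and interannual variability can be calibrated against proxies separately.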
Stoyanov, Miroslav; Munster, Drayton
2013-09-20
Sparse Grids are the family of methods of choice for multidimensional integration and interpolation in low to moderate numbers of dimensions. The method extends a one-dimensional set of abscissas, weights and basis functions to multiple dimensions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command line interface via text files, and a MATLAB interface via the command line tool.
Protein crystal structure from non-oriented, single-axis sparse X-ray data
Wierman, Jennifer L.; Lan, Ti-Yen; Tate, Mark W.; Philipp, Hugh T.; Elser, Veit; Gruner, Sol M.
2016-01-01
X-ray free-electron lasers (XFELs) have inspired the development of serial femtosecond crystallography (SFX) as a method to solve the structure of proteins. SFX datasets are collected from a sequence of protein microcrystals injected across ultrashort X-ray pulses. The idea behind SFX is that diffraction from the intense, ultrashort X-ray pulses leaves the crystal before the crystal is obliterated by the effects of the X-ray pulse. The success of SFX at XFELs has catalyzed interest in analogous experiments at synchrotron-radiation (SR) sources, where data are collected from many small crystals and the ultrashort pulses are replaced by exposure times that are kept short enough to avoid significant crystal damage. The diffraction signal from each short exposure is so 'sparse' in recorded photons that the process of recording the crystal intensity is itself a reconstruction problem. Using the
Dictionary learning algorithms for sparse representation.
Kreutz-Delgado, Kenneth; Murray, Joseph F; Rao, Bhaskar D; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J
2003-02-01
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial "25 words or less"), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an overcomplete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error).
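As an illustrative aside: the alternation described above (a sparse coding step followed by a dictionary update) can be sketched with a crude correlation-thresholding coder standing in for FOCUSS and the MOD-style least-squares dictionary update. All sizes and the random training data below are illustrative.

```python
import numpy as np

def sparse_code(D, Y, k):
    """Crude sparse coder: per signal, keep the k atoms with the largest
    correlation magnitudes, then least-squares fit on that support."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    for i in range(Y.shape[1]):
        support = np.argsort(-np.abs(D.T @ Y[:, i]))[:k]
        coef, *_ = np.linalg.lstsq(D[:, support], Y[:, i], rcond=None)
        X[support, i] = coef
    return X

def learn_dictionary(Y, n_atoms, k=3, n_iter=20, seed=0):
    """Alternate sparse coding with the MOD dictionary update D = Y X^+."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = sparse_code(D, Y, k)
        D = Y @ np.linalg.pinv(X)           # minimizes ||Y - D X||_F over D
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, sparse_code(D, Y, k)

rng = np.random.default_rng(6)
Y = rng.standard_normal((16, 200))          # 200 training signals
D, X = learn_dictionary(Y, n_atoms=24)      # overcomplete: 24 atoms in R^16
rel_err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
```

The alternation is the same skeleton the paper builds on; FOCUSS-based coding and its priors replace the thresholding coder to obtain genuinely sparse, accurate representations.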
Ahn, Jin Hwan; Jeong, Hwa Jae; Lee, Yong Seuk; Park, Jai Hyung; Lee, Jin Ho; Ko, Taeg Su
2016-08-01
The purposes of this study were as follows: 1) to determine the correlation between the bending angle of the anterior cruciate ligament (ACL) graft at the femoral tunnel and the magnetic resonance imaging (MRI) signal intensity of the ACL graft and 2) to analyze the difference in the MRI signal intensity of the reconstructed ACL graft in different areas of the graft after single-bundle hamstring autograft ACL (SB ACL) reconstruction using an outside-in (OI) technique with bone-sparing retro-reaming. Thirty-eight patients who underwent SB ACL reconstruction with the hamstring tendon autograft using the OI technique were enrolled in this study. All patients were assessed using three-dimensional computed tomography (CT) to evaluate femoral tunnel factors, including tunnel placement, tunnel length, tunnel diameter, and femoral tunnel bending angle. At a mean of 6.3±0.8 months after surgery, 3.0-T MRI was used to evaluate the graft signal intensity using the signal/noise quotient for high-signal-intensity lesions. Among various femoral tunnel factors, only the femoral tunnel bending angle in the coronal plane was significantly (p=0.003) correlated with the signal/noise quotient of the femoral intraosseous graft. The femoral intraosseous graft had significantly (p=0.009) higher signal intensity than the other graft zones. Five cases (13.2%) showed high-signal-intensity zones around the femoral tunnel but not around the tibial tunnel. After ACL reconstruction using the OI technique, the graft bending angle was found to be significantly correlated with the femoral intraosseous graft signal intensity, indicating that increased signal intensity caused by acute graft bending might be related to the maturation of the graft. This was a retrospective comparative study with Level III evidence. Copyright © 2015 Elsevier B.V. All rights reserved.
Blind spectral unmixing based on sparse nonnegative matrix factorization.
Yang, Zuyuan; Zhou, Guoxu; Xie, Shengli; Ding, Shuxue; Yang, Jun-Mei; Zhang, Jun
2011-04-01
Nonnegative matrix factorization (NMF) is a widely used method for blind spectral unmixing (SU), which aims to obtain the endmembers and corresponding fractional abundances from only the collected mixed spectral data. The abundances may be sparse (i.e., the endmembers may have sparse distributions), and sparse NMF tends to yield a unique result, so it is intuitive and meaningful to constrain NMF with sparseness for solving SU. However, due to the abundance sum-to-one constraint in SU, the traditional sparseness measured by the L0/L1-norm is no longer an effective constraint. This paper proposes a novel sparseness measure (termed the S-measure) based on higher-order norms of the signal vector, which has a clear physical significance. Using the S-measure constraint (SMC), a gradient-based sparse NMF algorithm (termed NMF-SMC) is proposed for solving the SU problem, in which the learning rate is adaptively selected and the endmembers and abundances are simultaneously estimated. The proposed NMF-SMC requires no pure-pixel index assumption and no prior knowledge of the exact sparseness degree of the abundances. Moreover, it does not require dimension-reduction preprocessing, in which some useful information may be lost. Experiments based on synthetic mixtures and real-world images collected by the AVIRIS and HYDICE sensors are performed to evaluate the validity of the proposed method.
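As a concrete illustration of sparsity-constrained NMF for unmixing, the sketch below uses standard multiplicative updates with a plain L1 penalty on the abundances. This is only a generic stand-in: the paper's S-measure constraint is a different, higher-order-norm penalty, and the endmember/abundance data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_nmf(V, r, lam=0.01, iters=200, eps=1e-12):
    """Multiplicative-update NMF with an L1 sparsity penalty on H (generic)."""
    m, n = V.shape
    W = rng.random((m, r)) + eps   # endmembers (spectra)
    H = rng.random((r, n)) + eps   # abundances
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)  # L1 term in denominator
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        scale = np.maximum(W.sum(axis=0, keepdims=True), eps)
        W /= scale                  # normalize endmembers...
        H *= scale.T                # ...and compensate so W @ H is unchanged
    return W, H

# synthetic mixture: 3 endmembers, 100 spectral bands, 50 pixels
W_true = rng.random((100, 3))
H_true = rng.dirichlet(np.ones(3) * 0.3, size=50).T  # sparse-ish, sum-to-one
V = W_true @ H_true

W, H = sparse_nmf(V, r=3)
rel = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(rel)
```

The multiplicative form keeps both factors nonnegative by construction, which is why it is a common baseline for unmixing experiments.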
Knapp, Bettina; Kaderali, Lars
2013-01-01
Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance, in particular on larger networks, over state-of-the-art methods. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4(+) T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling cell cycle progression.
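One generic way to make network inference linear, in the spirit of the abstract (though not the authors' exact formulation), is to cast sparse recovery of interaction weights as a linear program: split the weights into positive and negative parts and minimize their L1 norm subject to the linear measurement constraints. The perturbation model and all sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# toy model: measured responses are linear in a sparse interaction vector
n_genes, n_experiments = 30, 20
w_true = np.zeros(n_genes)
w_true[rng.choice(n_genes, 3, replace=False)] = rng.standard_normal(3)
A = rng.standard_normal((n_experiments, n_genes))  # perturbation design
b = A @ w_true                                     # observed responses

# L1 minimization as an LP: w = u - v with u, v >= 0, minimize sum(u + v)
c = np.ones(2 * n_genes)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
w_hat = res.x[:n_genes] - res.x[n_genes:]
print(res.success, np.abs(w_hat).sum())
```

Because the true weight vector is itself feasible, the LP optimum is guaranteed to have L1 norm no larger than that of the ground truth, and the solve scales polynomially rather than enumerating topologies.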
Modified sparse regularization for electrical impedance tomography
Fan, Wenru; Xue, Qian; Wang, Huaxiang; Cui, Ziqiang; Sun, Benyuan; Wang, Qi
2016-03-15
Electrical impedance tomography (EIT) aims to estimate the electrical properties of the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, absence of radiation, non-invasiveness, and high speed. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem, so regularization techniques such as Tikhonov regularization are used to solve it. A sparse regularization based on the L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the computation time required to solve the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on a separable approximation algorithm is proposed, using an adaptive step size and a preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts.
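The contrast between Tikhonov and L1 regularization, and the role of an adaptive step size, can be shown on a toy linearized problem. Everything below is a schematic assumption: a random matrix stands in for the EIT sensitivity (Jacobian) matrix, and ISTA with backtracking stands in for the paper's separable-approximation algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# toy linearized problem: y = J x + noise, with a sparse conductivity change x
m, n = 60, 120
J = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in sensitivity matrix
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1.0, 2.0, 5)
y = J @ x_true + 0.01 * rng.standard_normal(m)

# Tikhonov (L2) reconstruction: smooths sharp changes
alpha = 0.1
x_l2 = np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ y)

# L1 reconstruction via ISTA with a backtracking (adaptive) step size
def ista(J, y, lam=0.02, iters=300):
    x = np.zeros(J.shape[1])
    t = 1.0
    for _ in range(iters):
        g = J.T @ (J @ x - y)
        f = 0.5 * np.sum((J @ x - y) ** 2)
        while True:  # shrink step until the quadratic model is an upper bound
            z = np.sign(x - t * g) * np.maximum(np.abs(x - t * g) - t * lam, 0.0)
            d = z - x
            if 0.5 * np.sum((J @ z - y) ** 2) <= f + g @ d + d @ d / (2 * t):
                break
            t *= 0.5
        x = z
    return x

x_l1 = ista(J, y)
print(np.linalg.norm(x_l1 - x_true), np.linalg.norm(x_l2 - x_true))
```

On this kind of sparse target, the L1 estimate recovers the sharp nonzero entries while the L2 estimate smears them, which is the behavior the abstract attributes to sparse regularization.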
Bauer, Sebastian; Berkels, Benjamin; Ettl, Svenja; Arold, Oliver; Hornegger, Joachim; Rumpf, Martin
2012-01-01
To manage respiratory motion in image-guided interventions, a novel sparse-to-dense registration approach is presented. We apply an emerging laser-based active triangulation (AT) sensor that delivers sparse but highly accurate 3-D measurements in real time. These sparse position measurements are registered with a dense reference surface extracted from planning data. Thereby, a dense displacement field is reconstructed that describes the 4-D deformation of the complete patient body surface and recovers a multi-dimensional respiratory signal for use in respiratory motion management. The method is validated on real data from an AT prototype and on synthetic data sampled from dense surface scans acquired with a structured-light scanner. In a study on 16 subjects, the proposed algorithm achieved a mean reconstruction accuracy of ±0.22 mm with respect to ground-truth data.
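The core sparse-to-dense idea — turn a handful of accurate surface measurements into a dense displacement field — can be sketched with a generic scattered-data interpolator. The surface, the displacement model, and the use of SciPy's RBFInterpolator are all illustrative assumptions, not the paper's registration pipeline.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)

# dense reference "surface": a 40x40 grid of (x, y) points
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
dense_pts = np.column_stack([gx.ravel(), gy.ravel()])

def true_disp(p, amp):
    """Smooth synthetic respiratory displacement (a bump on the chest wall)."""
    return amp * np.exp(-((p[:, 0] - 0.5) ** 2 + (p[:, 1] - 0.4) ** 2) / 0.1)

amp = 0.8  # breathing amplitude at this instant
idx = rng.choice(len(dense_pts), 60, replace=False)
sparse_pts = dense_pts[idx]                 # where the sparse sensor samples
sparse_disp = true_disp(sparse_pts, amp)    # sparse, accurate measurements

# sparse-to-dense: thin-plate-spline RBF interpolation over the whole surface
field = RBFInterpolator(sparse_pts, sparse_disp, smoothing=1e-6)
dense_disp = field(dense_pts)
err = np.abs(dense_disp - true_disp(dense_pts, amp)).mean()
print(err)
```

Because respiratory deformation is spatially smooth, a few dozen accurate samples suffice to reconstruct the dense field with small error in this synthetic setting.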
A non-iterative method for the electrical impedance tomography based on joint sparse recovery
NASA Astrophysics Data System (ADS)
Lee, Ok Kyun; Kang, Hyeonbae; Ye, Jong Chul; Lim, Mikyoung
2015-07-01
The purpose of this paper is to propose a non-iterative method for the inverse conductivity problem of recovering multiple small anomalies from boundary measurements. When small anomalies are buried in a conducting object, the electric potential values inside the object can be expressed by integrals of densities with a common sparse support on the locations of the anomalies. Based on this integral expression, we formulate the reconstruction of small anomalies as a joint sparse recovery problem and present an efficient non-iterative recovery algorithm. Furthermore, we also provide a slightly modified algorithm to reconstruct an extended anomaly. We validate the effectiveness of the proposed algorithm against the linearized method and the multiple signal classification algorithm by numerical simulations. This work is supported by the Korean Ministry of Education, Science and Technology through NRF grant No. NRF-2010-0017532 (to H K), the Korean Ministry of Science, ICT & Future Planning through NRF grant No. NRF-2013R1A1A3012931 (to M L), and the R&D Convergence Program of NST (National Research Council of Science & Technology) of Republic of Korea (Grant CAP-13-3-KERI) (to O K L and J C Y).
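Joint sparse recovery — several measurement vectors sharing one unknown support — can be illustrated with simultaneous orthogonal matching pursuit (SOMP). This generic solver is an assumption chosen for illustration; the paper's algorithm is tailored to the integral densities of the conductivity problem.

```python
import numpy as np

rng = np.random.default_rng(5)

def somp(A, Y, k):
    """Simultaneous OMP: pick k atoms that jointly explain all columns of Y."""
    R = Y.copy()
    support = []
    for _ in range(k):
        scores = np.linalg.norm(A.T @ R, axis=1)  # correlation over all vectors
        if support:
            scores[support] = -np.inf             # don't re-pick an atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ coef              # project out chosen atoms
    return sorted(support)

# 8 measurement vectors with a common 3-sparse support
m, n, k = 30, 50, 3
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)
S = sorted(rng.choice(n, k, replace=False).tolist())
X = np.zeros((n, 8))
X[S, :] = rng.standard_normal((k, 8))
Y = A @ X

print(somp(A, Y, k), S)
```

Aggregating correlations across the measurement vectors is what makes the joint formulation stronger than recovering each vector separately.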
NASA Astrophysics Data System (ADS)
Keller, J. Y.; Chabir, K.; Sauter, D.
2016-03-01
State estimation of stochastic discrete-time linear systems subject to unknown inputs or constant biases has been widely studied, but no work has been dedicated to the case where a disturbance switches between an unknown input and a constant bias. We show that such a disturbance can affect a networked control system subject to deception attacks and data losses on the control signals transmitted by the controller to the plant. This paper proposes to estimate the switching disturbance with an augmented-state version of the intermittent unknown input Kalman filter recently developed by the authors. Sufficient stochastic stability conditions are established when the arrival binary sequence of data losses follows a Bernoulli random process.
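To make the setting concrete, the sketch below simulates a scalar plant whose control packet is lost with Bernoulli probability and runs a standard Kalman filter that is told whether each packet arrived. This acknowledgment-based baseline is an assumption for illustration only; the paper's intermittent unknown input filter handles the harder case where the estimator must infer the missing or attacked input itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# scalar plant: x[k+1] = a x[k] + b u[k] + w,  measurement: y[k] = x[k] + v
a, b, q, r = 0.9, 1.0, 0.01, 0.04   # dynamics, input gain, noise variances
p_loss = 0.3                        # Bernoulli packet-loss probability

x, xhat, P = 0.0, 0.0, 1.0
errs = []
for k in range(500):
    u = np.sin(0.05 * k)                      # control signal sent by controller
    arrived = rng.random() >= p_loss          # Bernoulli arrival indicator
    u_applied = u if arrived else 0.0         # lost packet -> no actuation
    x = a * x + b * u_applied + np.sqrt(q) * rng.standard_normal()
    y = x + np.sqrt(r) * rng.standard_normal()

    # predict (estimator knows the arrival indicator: an assumption)
    xhat = a * xhat + b * u_applied
    P = a * P * a + q
    # update
    K = P / (P + r)
    xhat += K * (y - xhat)
    P = (1.0 - K) * P
    errs.append(x - xhat)

mse = float(np.mean(np.square(errs)))
print(mse)
```

Removing the arrival indicator from the estimator is exactly what turns the lost or attacked input into the switching disturbance the paper studies.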
NASA Astrophysics Data System (ADS)
Poiata, N.; Satriano, C.; Vilotte, J. P.; Bernard, P.; Obara, K.
2015-12-01
Seismic radiation associated with transient deformations along faults and subduction interfaces encompasses a variety of events, i.e., tectonic tremors, low-frequency earthquakes (LFEs), very low-frequency earthquakes (VLFs), and slow-slip events (SSEs), with a wide range of seismic moments and characteristic durations. Characterizing in space and time the complex sources of these slow earthquakes, and their relationship with background seismicity and large-earthquake generation, is of great importance for understanding the physics and mechanics of the processes of active deformation along plate interfaces. We present here first developments towards a methodology for: (1) extracting the different frequency and scale components of the observed tectonic tremor signal, using advanced time-frequency and time-scale signal representations such as a Gabor transform scheme based on, e.g., Wilson bases or Modified Discrete Cosine Transform (MDCT) bases; (2) reconstructing their corresponding potential sources in space and time, using the array method of Poiata et al. (2015). The methodology is assessed using a data