Science.gov

Sample records for sparse signal reconstruction

  1. Robust Methods for Sensing and Reconstructing Sparse Signals

    ERIC Educational Resources Information Center

    Carrillo, Rafael E.

    2012-01-01

    Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…

  2. Generation of Rayleigh-wave dispersion images from multichannel seismic data using sparse signal reconstruction

    NASA Astrophysics Data System (ADS)

    Mun, Songchol; Bao, Yuequan; Li, Hui

    2015-11-01

    The accurate estimation of dispersion curves has been a key issue for ensuring high quality in geophysical surface wave exploration. Many studies have been carried out on the generation of a high-resolution dispersion image from array measurements. In this study, the sparse signal representation and reconstruction techniques are employed to obtain the high resolution Rayleigh-wave dispersion image from seismic wave data. First, a sparse representation of the seismic wave data is introduced, in which the signal is assumed to be sparse in terms of wave speed. Then, the sparse signal is reconstructed by optimization using l1-norm regularization, which gives the signal amplitude spectrum as a function of wave speed. A dispersion image in the f-v domain is generated by arranging the sparse spectra for all frequency slices in the frequency range. Finally, to show the efficiency of the proposed approach, the Surfbar-2 field test data, acquired by B. Luke and colleagues at the University of Nevada Las Vegas, are analysed. By comparing the real-field dispersion image with the results from other methods, the high mode-resolving ability of the proposed approach is demonstrated, particularly for a case with strongly coherent modes.
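
As an illustrative aside, the l1-norm-regularized reconstruction used above can be sketched in a few lines of NumPy. The sketch below is a generic iterative soft-thresholding (ISTA) solver on synthetic data, not the authors' implementation; the problem sizes, the measurement matrix `A`, and the regularization weight `lam` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: recover a sparse vector x from undersampled
# linear measurements y = A @ x (all sizes are illustrative).
n, m, k = 128, 48, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# ISTA: iterative soft-thresholding for min 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))   # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The soft-threshold step is the proximal operator of the l1 penalty; with enough measurements the relative error settles at the order of the small regularization bias.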

  3. Atomic library optimization for pulse ultrasonic sparse signal decomposition and reconstruction

    NASA Astrophysics Data System (ADS)

    Song, Shoupeng; Li, Yingxue; Dogandžić, Aleksandar

    2016-02-01

    Compressive sampling of pulse ultrasonic NDE signals could bring significant savings in the data acquisition process. Sparse representation of these signals using an atomic library is key to their interpretation and reconstruction from compressive samples. However, the obstacles to the practical applicability of such representations are the large size of the atomic library and the computational complexity of the sparse decomposition and reconstruction. To help solve these problems, we develop a method for optimizing the parameter ranges of a traditional Gabor-atom library to match a real pulse ultrasonic signal in terms of correlation. As a result of this atomic-library optimization, the number of atoms is greatly reduced. Numerical simulations compare the proposed approach with the traditional method, showing that it is superior in both time efficiency and signal reconstruction energy error, even with a small-scale atomic library. The performance of the proposed method is also explored under different noise levels. Finally, we apply the proposed method to real pipeline ultrasonic testing data, and the results indicate that our reduced atomic library outperforms the traditional one.


  4. A reconstruction algorithm based on sparse representation for Raman signal processing under high background noise

    NASA Astrophysics Data System (ADS)

    Fan, X.; Wang, X.; Wang, X.; Xu, Y.; Que, J.; He, H.; Wang, X.; Tang, M.

    2016-02-01

    Background noise is one of the main interference sources in Raman spectroscopy measurement and imaging. In this paper, a sparse-representation-based algorithm is presented to process Raman signals under high background noise. In contrast with existing de-noising methods, the proposed method reconstructs the pure Raman signals by estimating the Raman peak information. The advantages of the proposed algorithm are its high anti-noise capacity and the low distortion of the pure Raman signal, both owing to its reconstruction principle. Meanwhile, the Batch-OMP algorithm is applied to accelerate the training of the sparse representation. The method is therefore well suited for adoption in Raman measurement or imaging instruments used to observe fast dynamic processes, where the scanning time must be shortened and the signal-to-noise ratio (SNR) of the raw tested signal is reduced. In both simulation and experiment, the de-noising result obtained by the proposed algorithm was better than those of the traditional Savitzky-Golay (S-G) filter and the fixed-threshold wavelet de-noising algorithm.

  5. A fast algorithm for reconstruction of spectrally sparse signals in super-resolution

    NASA Astrophysics Data System (ADS)

    Cai, Jian-Feng; Liu, Suhui; Xu, Weiyu

    2015-08-01

    We propose a fast algorithm to reconstruct spectrally sparse signals from a small number of randomly observed time-domain samples. Unlike conventional compressed sensing, where frequencies are discretized, we consider the super-resolution case in which the frequencies can take any values in the normalized continuous frequency domain [0, 1). We first convert the signal recovery problem into a low-rank Hankel matrix completion problem, for which we then propose an efficient feasible-point algorithm named the projected Wirtinger gradient algorithm (PWGA). The algorithm can be further accelerated by a scheme inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Numerical experiments illustrate the effectiveness of the proposed algorithm. Unlike earlier approaches, our algorithm can solve large-scale problems efficiently.
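
As a small aside, the low-rank Hankel structure that this approach exploits is easy to verify numerically. The sketch below (with illustrative sizes and frequencies, not the PWGA algorithm itself) builds a Hankel matrix from a signal with three continuous-valued spectral lines and checks its rank:

```python
import numpy as np

# A spectrally sparse signal: three complex sinusoids whose frequencies
# are arbitrary points of [0, 1), not bins of a DFT grid (values illustrative).
n = 64
t = np.arange(n)
freqs = [0.11, 0.27, 0.52]
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)

# The Hankel matrix built from the samples has rank equal to the
# number of spectral lines, which is what the completion step exploits.
H = np.array([[x[i + j] for j in range(n // 2)] for i in range(n // 2 + 1)])
rank = int(np.linalg.matrix_rank(H, tol=1e-8))
print(rank)   # 3
```

This is why missing samples can be filled in by low-rank matrix completion: the signal model, not a discrete frequency grid, constrains the matrix.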

  6. Decoupled 2D direction-of-arrival estimation based on sparse signal reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Cui, Xiaowei; Lu, Mingquan; Feng, Zhenming

    2015-12-01

    A new two-dimensional direction-of-arrival estimation algorithm called 2D-ℓ1-SVD (singular value decomposition), together with an improved version called enhanced-2D-ℓ1-SVD, is proposed in this paper. They are designed for rectangular arrays and can also be extended to rectangular arrays with faulty or missing elements. The key idea is to represent the direction of arrival with two decoupled angles and then estimate them successively. Two-dimensional direction finding can thus be achieved by applying one-dimensional sparse-reconstruction-based direction finding methods several times, instead of extending them directly to the two-dimensional situation. Performance analysis and simulation results reveal that the proposed method has a much lower computational complexity and similar statistical performance compared with the well-known ℓ1-SVD algorithm, which has several advantages over conventional direction finding techniques due to its use of sparse signal reconstruction. Moreover, 2D-ℓ1-SVD is more robust than ℓ1-SVD to the assumed number of sources.

  7. A Fast and Accurate Sparse Continuous Signal Reconstruction by Homotopy DCD with Non-Convex Regularization

    PubMed Central

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-01-01

    In recent years, various applications involving sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, two major defects need to be tackled in any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results and related analysis demonstrate the effectiveness of the proposed algorithms. PMID:24675758

  9. LOFAR sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.

    2015-03-01

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased-array interferometer with multiple antennas distributed across Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometry measurement and "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Accounting for the instrumental direction-dependent effects (DDE) of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results: We show that sparse reconstruction i) performs as well as CLEAN in recovering the flux of point sources; ii) performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions: Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and more realistic structures of extended sources (in simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.

  10. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to data perturbations, so an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds a stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
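
For readers unfamiliar with the greedy family that SOMP belongs to, a minimal plain Orthogonal Matching Pursuit (OMP) sketch on synthetic noiseless data is shown below. The stabilization step of SOMP is not reproduced, and the problem sizes and coefficients are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # refit on support
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m = 100, 60
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                       # unit-norm columns
x_true = np.zeros(n)
x_true[[5, 17, 60]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = omp(A, y, 3)
err = np.linalg.norm(x_hat - x_true)
```

On this well-conditioned noiseless example the exact support is typically recovered, after which the least-squares refit drives the residual to zero.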

  11. Sparse signal reconstruction from polychromatic X-ray CT measurements via mass attenuation discretization

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2014-02-01

    We propose a method for reconstructing sparse images from polychromatic X-ray computed tomography (CT) measurements via mass attenuation coefficient discretization. The material of the inspected object and the incident spectrum are assumed to be unknown. We rewrite the Lambert-Beer law in terms of integral expressions of mass attenuation and discretize the resulting integrals. We then present a penalized constrained least-squares optimization approach for reconstructing the underlying object from log-domain measurements, where an active-set approach is employed to estimate incident energy density parameters, and the nonnegativity and sparsity of the image density map are imposed using negative-energy and smooth ℓ1-norm penalty terms. We propose a two-step scheme for refining the mass attenuation discretization grid: using a higher sampling rate over the range with higher photon energy, and eliminating the discretization points that have little effect on the accuracy of the forward projection model. This refinement allows us to successfully handle the characteristic lines (Dirac impulses) in the incident energy density spectrum. We compare the proposed method with standard filtered backprojection, which ignores the polychromatic nature of the measurements and the sparsity of the image density map. Numerical simulations using both realistic simulated and real X-ray CT data are presented.

  13. A unified approach to sparse signal processing

    NASA Astrophysics Data System (ADS)

    Marvasti, Farokh; Amini, Arash; Haddadi, Farzan; Soltanolkotabi, Mahdi; Khalaj, Babak Hossein; Aldroubi, Akram; Sanei, Saeid; Chambers, Janathon

    2012-12-01

    A unified view of the area of sparse signal processing is presented in tutorial form by bringing together various fields in which the property of sparsity has been successfully exploited. For each of these fields, various algorithms and techniques, which have been developed to leverage sparsity, are described succinctly. The common potential benefits of significant reduction in sampling rate and processing manipulations through sparse signal processing are revealed. The key application domains of sparse signal processing are sampling, coding, spectral estimation, array processing, component analysis, and multipath channel estimation. In terms of the sampling process and reconstruction algorithms, linkages are made with random sampling, compressed sensing, and rate of innovation. The redundancy introduced by channel coding in finite and real Galois fields is then related to over-sampling with similar reconstruction algorithms. The error locator polynomial (ELP) and iterative methods are shown to work quite effectively for both sampling and coding applications. The methods of Prony, Pisarenko, and MUltiple SIgnal Classification (MUSIC) are next shown to be targeted at analyzing signals with sparse frequency domain representations. Specifically, the relations of the approach of Prony to an annihilating filter in rate of innovation and ELP in coding are emphasized; the Pisarenko and MUSIC methods are further improvements of the Prony method under noisy environments. The iterative methods developed for sampling and coding applications are shown to be powerful tools in spectral estimation. Such narrowband spectral estimation is then related to multi-source location and direction of arrival estimation in array processing. Sparsity in unobservable source signals is also shown to facilitate source separation in sparse component analysis; the algorithms developed in this area such as linear programming and matching pursuit are also widely used in compressed sensing. Finally…

  14. Compressed Sampling of Spectrally Sparse Signals Using Sparse Circulant Matrices

    NASA Astrophysics Data System (ADS)

    Xu, Guangjie; Wang, Huali; Sun, Lei; Zeng, Weijun; Wang, Qingguo

    2014-11-01

    Circulant measurement matrices, constructed from partial cyclic shifts of one generating sequence, are easier to implement in hardware than the widely used random measurement matrices; however, the diminished randomness makes them more sensitive to signal noise. Selecting a deterministic sequence with optimal periodic autocorrelation property (PACP) as the generating sequence would enhance the noise robustness of the circulant measurement matrix, but this kind of deterministic circulant matrix exists only at fixed periodic lengths. In fact, the choice of generating sequence does not affect the compressive performance of the circulant measurement matrix, only the subspace energy in spectrally sparse signals. Sparse circulant matrices, whose generating sequence is a sparse sequence, can keep the energy balance of the subspaces and have noise robustness similar to that of deterministic circulant matrices. In addition, sparse circulant matrices have no restriction on length and are therefore better suited to the compressed sampling of spectrally sparse signals at arbitrary dimensionality.
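
The construction described above is straightforward to sketch: generate one sparse sequence, form the circulant matrix from its cyclic shifts, and keep a subset of rows as the measurement matrix. The sizes, the number of nonzeros, and the random row-selection rule below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse generating sequence: a few +/-1 entries, zero elsewhere.
n, m, nnz = 64, 24, 8
g = np.zeros(n)
g[rng.choice(n, nnz, replace=False)] = rng.choice([-1.0, 1.0], nnz)

# Full circulant matrix from cyclic shifts of g, then keep m rows
# as the compressive measurement matrix.
C = np.array([np.roll(g, s) for s in range(n)])
Phi = C[rng.choice(n, m, replace=False)]
print(Phi.shape)   # (24, 64)
```

Because every row is a cyclic shift of the same sparse sequence, each measurement is a short, hardware-friendly inner product with only `nnz` taps.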

  15. Robust Reconstruction of Complex Networks from Sparse Data

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Di, Zengru

    2015-01-01

    Reconstructing complex networks from measurable data is a fundamental problem for understanding and controlling the collective dynamics of complex networked systems. However, a significant challenge arises when we attempt to decode structural information hidden in limited amounts of data accompanied by noise and in the presence of inaccessible nodes. Here, we develop a general framework for robust reconstruction of complex networks from sparse and noisy data. Specifically, we decompose the task of reconstructing the whole network into recovering the local structure centered at each node. The natural sparsity of complex networks thus converts local structure reconstruction into a sparse signal reconstruction problem that can be addressed using the lasso, a convex optimization method. We apply our method to evolutionary games, transportation, and communication processes taking place in a variety of model and real complex networks, finding that high reconstruction accuracy can be achieved universally from sparse data despite noise in the time series and missing data for some nodes. Our approach opens new routes to the network reconstruction problem and has potential applications in a wide range of fields.

  16. Compressed sensing sparse reconstruction for coherent field imaging

    NASA Astrophysics Data System (ADS)

    Bei, Cao; Xiu-Juan, Luo; Yu, Zhang; Hui, Liu; Ming-Lai, Chen

    2016-04-01

    Return signal processing and reconstruction play a pivotal role in coherent field imaging, with a significant influence on the quality of the reconstructed image. To reduce the required samples and accelerate the sampling process, we propose a sparse reconstruction scheme based on compressed sensing theory. By analyzing the sparsity of the received signal in the Fourier spectrum domain, we accomplish an effective random projection and then reconstruct the return signal from as few as 10% of the traditional samples, finally acquiring the target image precisely. The results of numerical simulations and practical experiments verify the correctness of the proposed method, providing an efficient processing approach for imaging fast-moving targets in the future. Project supported by the National Natural Science Foundation of China (Grant No. 61505248) and the Fund from the Chinese Academy of Sciences, the Light of "Western" Talent Cultivation Plan "Dr. Western Fund Project" (Grant No. Y429621213).

  17. Sparse decomposition learning based dynamic MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Zhu, Peifei; Zhang, Qieshi; Kamata, Sei-ichiro

    2015-02-01

    Dynamic MRI is widely used in many clinical exams, but slow data acquisition is a serious problem. The application of Compressed Sensing (CS) has demonstrated great potential to increase imaging speed. However, the performance of CS depends largely on the sparsity of the image sequence in the transform domain, where there is still much room for improvement. In this work, sparsity is exploited by the proposed Sparse Decomposition Learning (SDL) algorithm, a combination of low-rank-plus-sparse decomposition and Blind Compressed Sensing (BCS). With this decomposition, only the sparse component is modeled as a sparse linear combination of temporal basis functions. This makes the coefficients sparser and retains more detail of the dynamic components than learning the whole images. Reconstruction is performed on the undersampled data, where joint multicoil data consistency is enforced by incorporating Parallel Imaging (PI). The experimental results show that the proposed method decreases the Mean Square Error (MSE) by about 15-20% compared with other existing methods.

  18. Guided wavefield reconstruction from sparse measurements

    NASA Astrophysics Data System (ADS)

    Mesnil, Olivier; Ruzzene, Massimo

    2016-02-01

    Guided wave measurements are at the basis of several Non-Destructive Evaluation (NDE) techniques. Although sparse measurements of guided waves obtained using piezoelectric sensors can efficiently detect and locate defects, extensive information on the shape and subsurface location of defects can be extracted from full-field measurements acquired by Laser Doppler Vibrometers (LDV). Wavefield acquisition with LDVs is generally a slow operation, because the wave propagation to be recorded must be repeated for each point measurement and the initial conditions must be re-established between measurements. In this research, a Sparse Wavefield Reconstruction (SWR) process using Compressed Sensing is developed. The goal of this technique is to reduce the number of point measurements needed to apply NDE techniques by at least one order of magnitude, by extrapolating the knowledge of a few randomly chosen measured pixels over an over-sampled grid. To achieve this, the Lamb wave propagation equation is used to formulate a basis of shape functions in which the wavefield has a sparse representation, in order to comply with the Compressed Sensing requirements and use l1-minimization solvers. The main assumption of this reconstruction process is that every material point of the studied area is a potential source. The Compressed Sensing matrix is defined as the contribution that would have been received at a measurement location from each possible source, using the dispersion relations of the specimen computed with a Semi-Analytical Finite Element technique. The measurements are then processed through an l1-minimizer to find a minimum corresponding to the set of active sources and their corresponding excitation functions. This minimum represents the best combination of the parameters of the model matching the sparse measurements. Wavefields are then reconstructed using the propagation equation. The set of active sources found by minimization contains all the wave…

  19. Multiband signal reconstruction for random equivalent sampling

    NASA Astrophysics Data System (ADS)

    Zhao, Y. J.; Liu, C. J.

    2014-10-01

    Random equivalent sampling (RES) is a sampling approach that can be applied to capture high-speed repetitive signals with a sampling rate much lower than the Nyquist rate. However, the uneven random distribution of the time interval between the excitation pulse and the signal degrades the signal reconstruction performance. For sparse multiband signal sampling, a compressed sensing (CS) based signal reconstruction algorithm can tease out the band supports with overwhelming probability and reduce the impact of the uneven random distribution in RES. In this paper, a mathematical model of RES behavior is constructed in the frequency domain. Based on this model, the band supports of the signal can be determined. Experimental results demonstrate that, for a signal with unknown sparse multibands, the proposed CS-based signal reconstruction algorithm is feasible and outperforms the traditional RES signal reconstruction method.

  1. Sparse representation in speech signal processing

    NASA Astrophysics Data System (ADS)

    Lee, Te-Won; Jang, Gil-Jin; Kwon, Oh-Wook

    2003-11-01

    We review the sparse representation principle for processing speech signals. A transformation for encoding the speech signals is learned such that the resulting coefficients are as independent as possible. We use independent component analysis with an exponential prior to learn a statistical representation for speech signals. This representation leads to extremely sparse priors that can be used for encoding speech signals for a variety of purposes. We review applications of this method for speech feature extraction, automatic speech recognition and speaker identification. Furthermore, this method is also suited for tackling the difficult problem of separating two sounds given only a single microphone.

  2. Cervigram image segmentation based on reconstructive sparse representations

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris

    2010-03-01

    We propose an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changes in illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with sparse constraints and become more separable. The K-SVD algorithm is employed to find the sparse representations and corresponding dictionaries. The data can be reconstructed from its sparse representations and positive and/or negative dictionaries. Classification can then be achieved by comparing the reconstruction errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.

  3. Sparse image reconstruction on the sphere: implications of a new sampling theorem.

    PubMed

    McEwen, Jason D; Puy, Gilles; Thiran, Jean-Philippe; Vandergheynst, Pierre; Van De Ville, Dimitri; Wiaux, Yves

    2013-06-01

    We study the impact of sampling theorems on the fidelity of sparse image reconstruction on the sphere. We discuss how a reduction in the number of samples required to represent all information content of a band-limited signal acts to improve the fidelity of sparse image reconstruction, through both the dimensionality and sparsity of signals. To demonstrate this result, we consider a simple inpainting problem on the sphere and consider images sparse in the magnitude of their gradient. We develop a framework for total variation inpainting on the sphere, including fast methods to render the inpainting problem computationally feasible at high resolution. Recently a new sampling theorem on the sphere was developed, reducing the required number of samples by a factor of two for equiangular sampling schemes. Through numerical simulations, we verify the enhanced fidelity of sparse image reconstruction due to the more efficient sampling of the sphere provided by the new sampling theorem. PMID:23475360

  4. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a two-dimensional sparse representation of the data from an inverse synthetic aperture radar (ISAR) system is provided. The proposed sparse representation motivates the use of convex optimization to recover the image from far fewer samples than the Nyquist-Shannon sampling theorem requires, which increases the efficiency and decreases the computational cost of radar imaging.
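
    The convex program alluded to above is typically an ℓ1-regularized least-squares problem. As a hedged sketch (not the authors' ISAR processing chain), iterative soft-thresholding (ISTA) recovers a sparse scene from undersampled linear measurements; the measurement model and regularization weight below are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    """Iterative soft-thresholding for min_x 0.5*||A@x - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```

    In an ISAR setting, `A` would be the undersampled measurement operator relating the sparse scene to the received samples.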

  5. Time-frequency signature sparse reconstruction using chirp dictionary

    NASA Astrophysics Data System (ADS)

    Nguyen, Yen T. H.; Amin, Moeness G.; Ghogho, Mounir; McLernon, Des

    2015-05-01

    This paper considers local sparse reconstruction of the time-frequency signatures of windowed non-stationary radar returns. These signals can be considered instantaneously narrow-band, so their local time-frequency behavior can be recovered accurately from incomplete observations. The typically employed sinusoidal dictionary, however, induces competing requirements on the window length: exact recovery calls for more measurements, while sparsity calls for shorter windows. In this paper, we use a chirp dictionary at each window position to determine the signal's instantaneous frequency laws. This approach considerably mitigates the problems of the sinusoidal dictionary and enables the use of longer windows for accurate time-frequency representations. It also reduces the picket-fence effect by introducing a new parameter, the chirp rate α. Simulation examples are provided, demonstrating the superior performance of the local chirp dictionary over its sinusoidal counterpart.
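
    A chirp dictionary of the kind described here can be sketched directly: each atom is a complex chirp over the analysis window, parameterized by a start frequency f and a chirp rate α, and the best-matching atom reveals the local instantaneous-frequency law. The window length and parameter grids below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def chirp_dictionary(n, freqs, rates):
    """Columns are unit-norm chirp atoms exp(j*2*pi*(f*t + 0.5*alpha*t^2))."""
    t = np.arange(n)
    atoms = [np.exp(2j * np.pi * (f * t + 0.5 * a * t ** 2)) / np.sqrt(n)
             for f in freqs for a in rates]
    return np.column_stack(atoms)

def best_atom(y, n, freqs, rates):
    """Return the (f, alpha) pair whose atom correlates most strongly with y."""
    D = chirp_dictionary(n, freqs, rates)
    i = int(np.argmax(np.abs(D.conj().T @ y)))
    return freqs[i // len(rates)], rates[i % len(rates)]
```

    A sinusoidal dictionary is the special case `rates = [0.0]`; the extra chirp-rate axis is what lets longer windows stay sparse.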

  6. Sparse reconstruction of visual appearance for computer graphics and vision

    NASA Astrophysics Data System (ADS)

    Ramamoorthi, Ravi

    2011-09-01

    A broad range of problems in computer graphics rendering, appearance acquisition for graphics and vision, and imaging, involve sampling, reconstruction, and integration of high-dimensional (4D-8D) signals. For example, precomputation-based real-time rendering of glossy materials and intricate lighting effects like caustics, can involve (pre)-computing the response of the scene to different light and viewing directions, which is often a 6D dataset. Similarly, image-based appearance acquisition of facial details, car paint, or glazed wood, requires us to take images from different light and view directions. Even offline rendering of visual effects like motion blur from a fast-moving car, or depth of field, involves high-dimensional sampling across time and lens aperture. The same problems are also common in computational imaging applications such as light field cameras. In the past few years, computer graphics and computer vision researchers have made significant progress in subsequent analysis and compact factored or multiresolution representations for some of these problems. However, the initial full dataset must almost always still be acquired or computed by brute force. This is often prohibitively expensive, taking hours to days of computation and acquisition time, as well as being a challenge for memory usage and storage. For example, on the order of 10,000 megapixel images are needed for a 1 degree sampling of lights and views for high-frequency materials. We argue that dramatically sparser sampling and reconstruction of these signals is possible, before the full dataset is acquired or simulated. Our key idea is to exploit the structure of the data that often lies in lower-frequency, sparse, or low-dimensional spaces. Our framework will apply to a diverse set of problems such as sparse reconstruction of light transport matrices for relighting, sheared sampling and denoising for offline shadow rendering, time-coherent compressive sampling for appearance

  7. Beam hardening correction for sparse-view CT reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2015-03-01

    Beam hardening, caused by the polychromatic spectrum of the X-ray beam, may produce various artifacts in the reconstructed image and degrade image quality. These artifacts are further aggravated in sparse-view reconstruction because of the insufficient sampling data. Considering the advantages of total-variation (TV) minimization in CT reconstruction with sparse-view data, in this paper we propose a beam hardening correction method for sparse-view CT reconstruction based on Brabant's model. In this correction model, the attenuation coefficient of each voxel at the effective energy is modeled and estimated linearly, and the model can be applied within an iterative framework such as the simultaneous algebraic reconstruction technique (SART). By integrating the correction model into the forward projector of the algebraic reconstruction technique (ART), TV minimization can recover images when only a limited number of projections are available. The proposed method needs no prior information about the beam spectrum. Preliminary validation using Monte Carlo simulations indicates that the proposed method provides better reconstructed images from sparse-view projection data, with effective suppression of the artifacts caused by beam hardening. With appropriate modeling of other degrading effects, such as photon scattering, the proposed framework may offer a new route to low-dose CT imaging.
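
    The iterative framework the correction plugs into can be sketched in its uncorrected, monochromatic form. Below is a minimal SART loop for a dense toy system matrix `A` (rays x voxels, assumed strictly positive) and projection data `p`; the paper's beam-hardening correction would modify the forward projection `A @ x` inside the loop.

```python
import numpy as np

def sart(A, p, iters=500, relax=1.0):
    """Simultaneous algebraic reconstruction technique (SART):
    x <- x + relax * V @ A.T @ W @ (p - A @ x), where W and V hold the
    inverse row and column sums of A (no zero rows/columns assumed)."""
    row_inv = 1.0 / A.sum(axis=1)
    col_inv = 1.0 / A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + relax * col_inv * (A.T @ (row_inv * (p - A @ x)))
    return x
```

    For a consistent system the iteration drives the projection residual toward zero; the relaxation factor must stay in (0, 2) for convergence.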

  8. Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging

    SciTech Connect

    Sheen, David M.; Hall, Thomas E.

    2014-06-09

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  9. Multi-shell diffusion signal recovery from sparse measurements

    PubMed Central

    Rathi, Y.; Michailovich, O.; Laun, F.; Setsompop, K.; Grant, P. E.; Westin, C-F

    2014-01-01

    For accurate estimation of the ensemble average diffusion propagator (EAP), traditional multi-shell diffusion imaging (MSDI) approaches require acquisition of diffusion signals for a range of b-values. However, this makes the acquisition time too long for several types of patients, making it difficult to use in a clinical setting. In this work, we propose a new method for the reconstruction of diffusion signals in the entire q-space from highly under-sampled sets of MSDI data, thus reducing the scan time significantly. In particular, to sparsely represent the diffusion signal over multiple q-shells, we propose a novel extension to the framework of spherical ridgelets by accurately modeling the monotonically decreasing radial component of the diffusion signal. Further, we enforce the reconstructed signal to have smooth spatial regularity in the brain, by minimizing the total variation (TV) norm. We combine these requirements into a novel cost function and derive an optimal solution using the Alternating Directions Method of Multipliers (ADMM) algorithm. We use a physical phantom data set with known fiber crossing angle of 45° to determine the optimal number of measurements (gradient directions and b-values) needed for accurate signal recovery. We compare our technique with a state-of-the-art sparse reconstruction method (i.e., the SHORE method of (Cheng et al., 2010)) in terms of angular error in estimating the crossing angle, incorrect number of peaks detected, normalized mean squared error in signal recovery as well as error in estimating the return-to-origin probability (RTOP). Finally, we also demonstrate the behavior of the proposed technique on human in-vivo data sets. Based on these experiments, we conclude that using the proposed algorithm, at least 60 measurements (spread over three b-value shells) are needed for proper recovery of MSDI data in the entire q-space. PMID:25047866

  10. Accurate Sparse-Projection Image Reconstruction via Nonlocal TV Regularization

    PubMed Central

    Zhang, Yi; Zhang, Weihua; Zhou, Jiliu

    2014-01-01

    Sparse-projection image reconstruction is a useful approach to lowering the radiation dose; however, the incompleteness of the projection data degrades imaging quality. As a typical compressive sensing method, total variation has received great attention for this problem, but owing to its theoretical imperfection it produces blocky effects in smooth regions and blurs edges. To overcome this problem, in this paper we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. Qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared with other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structural information. PMID:24592168

  11. A Comparison of Methods for Ocean Reconstruction from Sparse Observations

    NASA Astrophysics Data System (ADS)

    Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.

    2014-12-01

    We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based and a moving-least-squares-based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both of the reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.
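
    The moving-least-squares idea — blending weighted local fits into a global field — reduces, at a single query point, to one weighted linear regression. A one-dimensional sketch with a Gaussian weighting kernel (an assumed choice; the paper tunes its distance measure and weighting with machine learning):

```python
import numpy as np

def mls_value(xq, xs, ys, h=0.5):
    """Evaluate a moving-least-squares local linear fit at query point xq."""
    w = np.exp(-((xs - xq) ** 2) / (2 * h ** 2))      # Gaussian weights by distance
    B = np.column_stack([np.ones_like(xs), xs - xq])  # local linear basis at xq
    coef = np.linalg.solve(B.T @ (w[:, None] * B), B.T @ (w * ys))
    return coef[0]  # fitted value at xq (the constant term of the local fit)
```

    Because the local basis is linear, any linear scalar field is reproduced exactly, regardless of the weighting; the flow-based variant would replace the Euclidean distance in `w` with a flow-informed one.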

  12. Reconstruction techniques for sparse multistatic linear array microwave imaging

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2014-06-01

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging-beam swept-frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  13. Sparse-Coding-Based Computed Tomography Image Reconstruction

    PubMed Central

    Yoon, Gang-Joon

    2013-01-01

    Computed tomography (CT) is a popular type of medical imaging that generates images of the internal structure of an object from projection scans taken at several angles. Numerous methods exist to reconstruct the original shape of the target object from scans, but they remain dependent on the number of angles and iterations. To overcome the drawbacks of iterative reconstruction approaches such as the algebraic reconstruction technique (ART), while keeping the recovery only mildly affected by random noise (a small ℓ2-norm error) and by the projection scans (a small ℓ1-norm error), we propose a medical image reconstruction methodology based on the properties of sparse coding, a powerful matrix factorization method in which each pixel is represented as a linear combination of a small number of basis vectors. PMID:23576898

  14. MR image super-resolution reconstruction using sparse representation, nonlocal similarity and sparse derivative prior.

    PubMed

    Zhang, Di; He, Jiazhong; Zhao, Yun; Du, Minghui

    2015-03-01

    In magnetic resonance (MR) imaging, image spatial resolution is determined by various instrumental limitations and physical considerations. This paper presents a new algorithm for producing a high-resolution version of a low-resolution MR image. The proposed method consists of two consecutive steps: (1) reconstructs a high-resolution MR image from a given low-resolution observation via solving a joint sparse representation and nonlocal similarity L1-norm minimization problem; and (2) applies a sparse derivative prior based post-processing to suppress blurring effects. Extensive experiments on simulated brain MR images and two real clinical MR image datasets validate that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both quantitative measures and visual perception. PMID:25638262

  15. Sparse Reconstruction for Bioluminescence Tomography Based on the Semigreedy Method

    PubMed Central

    Guo, Wei; Jia, Kebin; Zhang, Qian; Liu, Xueyan; Feng, Jinchao; Qin, Chenghu; Ma, Xibo; Yang, Xin; Tian, Jie

    2012-01-01

    Bioluminescence tomography (BLT) is a molecular imaging modality that can three-dimensionally resolve molecular processes in small animals in vivo. The ill-posed nature of the BLT problem means that its reconstruction admits nonunique solutions and is sensitive to noise. In this paper, we propose a sparse BLT reconstruction algorithm based on a semigreedy method. To reduce the ill-posedness and the computational cost, the optimal permissible source region is chosen automatically using an iterative search tree. The proposed method obtains fast and stable source reconstruction over the whole body and imposes the sparsity constraint without a regularization penalty term. Numerical simulations on a mouse atlas and in vivo mouse experiments were conducted to validate the effectiveness and potential of the method. PMID:22927887

  16. A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.

    PubMed

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested on the reconstruction of an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates a significant resolution improvement over B-scan (about 91% using real data). The proposed scheme also outperforms the traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700

  18. Statistical shape model reconstruction with sparse anomalous deformations: Application to intervertebral disc herniation.

    PubMed

    Neubert, Aleš; Fripp, Jurgen; Engstrom, Craig; Schwarz, Daniel; Weber, Marc-André; Crozier, Stuart

    2015-12-01

    Many medical image processing techniques rely on accurate shape modeling of anatomical features. The presence of shape abnormalities challenges traditional processing algorithms based on strong morphological priors. In this work, a sparse shape reconstruction from a statistical shape model is presented. It combines the advantages of traditional statistical shape models (defining a 'normal' shape space) and of the previously presented sparse shape composition (providing localized descriptors of anomalies). The algorithm was incorporated into our image segmentation and classification software. Evaluation was performed on simulated and clinical MRI data from 22 sciatica patients with intervertebral disc herniation, comprising 35 herniated and 97 normal discs. Moderate to high correlation (R=0.73) was achieved between simulated and detected herniations. The sparse reconstruction provided novel quantitative features describing the herniation morphology and MRI signal appearance in three dimensions (3D). The proposed descriptors of local disc morphology yielded a 3D segmentation accuracy of 1.07±1.00 mm (mean absolute vertex-to-vertex mesh distance over the posterior disc region) and improved intervertebral disc classification from 0.888 to 0.931 (area under the receiver operating characteristic curve). The results show that sparse shape reconstruction may improve computer-aided diagnosis of pathological conditions presenting local morphological alterations, as seen in intervertebral disc herniation. PMID:26060085

  19. Fast Forward Maximum entropy reconstruction of sparsely sampled data

    NASA Astrophysics Data System (ADS)

    Balsgart, Nicholas M.; Vosegaard, Thomas

    2012-10-01

    We present an analytical algorithm using fast Fourier transforms (FTs) for deriving the gradient needed in the iterative reconstruction of sparsely sampled datasets with the forward maximum entropy reconstruction (FM) procedure of Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it requires one FT and one evaluation of the entropy per missing datapoint to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, with only 75 out of 512 complex points sampled (15% sampling), lacks (512 - 75) × 2 = 874 points per ν2 slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm is ~450 times faster in this case, since it requires only two FTs. This reduces the computational time from several hours to less than a minute. Even more impressive time savings are achieved with 2D reconstructions of 3D datasets, where the original algorithm required days of CPU time on high-performance computing clusters, while the new algorithm needs only a few minutes of calculation on a regular laptop computer.
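
    The quoted saving is simple bookkeeping: one FT per missing real/imaginary point in the original algorithm versus two FTs in total for the analytical gradient. A sketch reproducing the abstract's numbers:

```python
# Gradient cost of forward maximum-entropy (FM) reconstruction, per nu2 slice.
n_complex, n_sampled = 512, 75          # indirect-dimension points: total vs sampled
missing = (n_complex - n_sampled) * 2   # real + imaginary part per missing point
fts_original = missing                  # original FM: one FT per missing datapoint
fts_new = 2                             # analytical gradient: two FTs in total
speedup = fts_original // fts_new
print(missing, speedup)                 # 874 missing points, 437-fold fewer FTs
```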

  20. Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging

    PubMed Central

    Yu, Xingjian; Chen, Shuhang; Hu, Zhenghui; Liu, Meng; Chen, Yunmei; Shi, Pengcheng; Liu, Huafeng

    2015-01-01

    In dynamic positron emission tomography (PET), an estimate of the radioactivity concentration is obtained from a series of sinogram frames whose durations range from 10 seconds to minutes, chosen under some criteria. So far, all the well-known reconstruction algorithms require known statistical properties of the data. This limits the speed of data acquisition; moreover, these algorithms cannot provide separate information about the structure and about the variation of shape and rate of metabolism, which plays a major role in improving the visualization of contrast in some diagnostic applications. This paper presents a novel low-rank-based activity map reconstruction scheme for emission sinograms of dynamic PET, termed SLCR (Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging). In this method, the stationary background is formulated as a low-rank component, while variations between successive frames are captured by a sparse component. The resulting nuclear-norm and l1-norm minimization problem can be efficiently solved by many recently developed numerical methods; here, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets. PMID:26540274

  1. Depth reconstruction from sparse samples: representation, algorithm, and sampling.

    PubMed

    Liu, Lee-Kang; Chan, Stanley H; Nguyen, Truong Q

    2015-06-01

    The rapid development of 3D technology and computer vision applications has motivated a thrust of methodologies for depth acquisition and estimation. However, existing hardware and software acquisition methods have limited performance due to poor depth precision, low resolution, and high computational cost. In this paper, we present a computationally efficient method to estimate dense depth maps from sparse measurements. There are three main contributions. First, we provide empirical evidence that depth maps can be encoded much more sparsely than natural images using common dictionaries, such as wavelets and contourlets. We also show that a combined wavelet-contourlet dictionary achieves better performance than using either dictionary alone. Second, we propose an alternating direction method of multipliers (ADMM) for depth map reconstruction. A multiscale warm start procedure is proposed to speed up the convergence. Third, we propose a two-stage randomized sampling scheme to optimally choose the sampling locations, thus maximizing the reconstruction performance for a given sampling budget. Experimental results show that the proposed method produces high-quality dense depth estimates, and is robust to noisy measurements. Applications to real data in stereo matching are demonstrated. PMID:25769151

  2. Recursive Recovery of Sparse Signal Sequences From Compressive Measurements: A Review

    NASA Astrophysics Data System (ADS)

    Vaswani, Namrata; Zhan, Jinchun

    2016-07-01

    In this article, we review the literature on design and analysis of recursive algorithms for reconstructing a time sequence of sparse signals from compressive measurements. The signals are assumed to be sparse in some transform domain or in some dictionary. Their sparsity patterns can change with time, although, in many practical applications, the changes are gradual. An important class of applications where this problem occurs is dynamic projection imaging, e.g., dynamic magnetic resonance imaging (MRI) for real-time medical applications such as interventional radiology, or dynamic computed tomography.

  3. Machinery vibration signal denoising based on learned dictionary and sparse representation

    NASA Astrophysics Data System (ADS)

    Guo, Liang; Gao, Hongli; Li, Jun; Huang, Haifeng; Zhang, Xiaochen

    2015-07-01

    Mechanical vibration signal denoising is an important problem in machine damage assessment and health monitoring. Wavelet transform and sparse reconstruction are powerful and practical methods; however, they are based on fixed basis functions or atoms. In this paper, a novel method is presented in which the atoms used to represent signals are learned from the raw signal, and, to satisfy the requirements of real-time signal processing, an online dictionary learning algorithm is adopted. Orthogonal matching pursuit is applied to select the best-matching columns of the dictionary, and the denoised signal is then computed from the sparse vector and the learned dictionary. A simulated signal and a real bearing fault signal are used to evaluate the improved performance of the proposed method through comparison with several kinds of denoising algorithms, and its computational efficiency is demonstrated with an illustrative runtime example. The results show that the proposed method outperforms current algorithms while remaining computationally efficient.
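
    The final reconstruction step — keep a sparse set of dictionary coefficients and resynthesize — can be sketched with a fixed orthonormal DCT dictionary standing in for the learned one (the paper learns its atoms online from the raw signal; the DCT here is only an illustrative substitute, and `k` is an assumed sparsity level):

```python
import numpy as np

def dct_dictionary(n):
    """Orthonormal DCT-II basis as columns (a fixed stand-in for a learned dictionary)."""
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    D = np.cos(np.pi * (2 * i + 1) * j / (2 * n))
    D[:, 0] *= np.sqrt(1.0 / n)
    D[:, 1:] *= np.sqrt(2.0 / n)
    return D

def denoise(y, D, k):
    """Keep the k largest-magnitude coefficients of y in D and resynthesize."""
    c = D.T @ y
    keep = np.argsort(np.abs(c))[-k:]
    sparse_c = np.zeros_like(c)
    sparse_c[keep] = c[keep]
    return D @ sparse_c
```

    Discarding the small coefficients removes the broadband noise energy they carry while retaining the few atoms that represent the underlying vibration.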

  4. Reconstructing spatially extended brain sources via enforcing multiple transform sparseness.

    PubMed

    Zhu, Min; Zhang, Wenbo; Dickens, Deanna L; Ding, Lei

    2014-02-01

    Accurate estimation of the location and extent of neuronal sources from EEG/MEG remains challenging. In the present study, a new source imaging method, variation- and wavelet-based sparse source imaging (VW-SSI), is proposed to better estimate cortical source locations and extents. VW-SSI uses L1-norm regularization with transform sparseness enforced in both the variation and wavelet domains. The performance of the proposed method is assessed with both simulated and experimental MEG data, obtained from a language task and a motor task. Compared to L2-norm regularizations, VW-SSI demonstrates significantly improved capability in reconstructing multiple extended cortical sources, with less spatial blurring and lower localization error. Through transform sparseness, VW-SSI overcomes the over-focusing problem of classic SSI methods, and with its two transformations it estimates MEG source locations and extents significantly better than SSI methods with single transformations. The present experimental results indicate that VW-SSI can successfully estimate neural sources (and their spatial coverage) that are located in nearby areas yet responsible for different functions, i.e., temporal cortical sources for auditory and language processing, and sources on the pre-bank and post-bank of the central sulcus; all other methods investigated in the present study fail to recover these phenomena. Precise estimation of cortical source locations and extents from EEG/MEG is of significance for applications in neuroscience and neurology. PMID:24103850

  5. Robust Simultaneous Registration and Segmentation with sparse error reconstruction.

    PubMed

    Ghosh, Pratim; Manjunath, B S

    2013-02-01

    We introduce a fast and efficient variational framework for Simultaneous Registration and Segmentation (SRS) applicable to a wide variety of image sequences. We demonstrate that a dense correspondence map (between consecutive frames) can be reconstructed correctly even in the presence of partial occlusion, shading, and reflections; the errors are handled efficiently by exploiting their sparse nature. In addition, the segmentation functional is reformulated using a dual Rudin-Osher-Fatemi (ROF) model for fast implementation, and nonparametric shape prior terms suited to this dual-ROF model are proposed. The efficacy of the proposed method is validated with extensive experiments on indoor and outdoor natural as well as biological image sequences, demonstrating higher accuracy and efficiency compared with various state-of-the-art methods. PMID:22547427

  6. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    PubMed Central

    Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi

    2015-01-01

    Reliable data transmission over a lossy communication link is expensive due to the overhead of error protection. For signals that have inherent sparse structures, compressive sensing (CS) can be applied to achieve efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, with very favorable results observed in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressive sensing for efficient sparse signal transmission over lossy links. PMID:26287195
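
    The burst-loss mitigation mentioned above relies on block interleaving: write samples row-wise into a matrix and transmit column-wise, so that consecutive lost samples map to widely spaced positions in the original signal, which looks like the random sampling CS expects. A minimal sketch (the depth and lengths are illustrative; the paper's packetization details differ):

```python
import numpy as np

def interleave(x, depth):
    """Reorder x so consecutive transmitted samples come from original
    positions spaced `depth` apart (len(x) must be divisible by depth)."""
    return x.reshape(-1, depth).T.reshape(-1)

def deinterleave(y, depth):
    """Invert interleave()."""
    return y.reshape(depth, -1).T.reshape(-1)
```

    After deinterleaving at the receiver, a burst of lost packets appears as scattered erasures, which the CS reconstruction treats as ordinary random undersampling.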

  7. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

    Localization of active neural source (ANS) from measurements on head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on reconstructed magnetic field from sparse noisy measurements for enhanced ANS localization by suppressing effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements through formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide a "smooth" reconstructed MFD with a decrease in unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of BC, parametric equivalent current dipole-based inverse estimation algorithm using reconstruction, and gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are directly used), and it was demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization. PMID:26358243

  8. Sparse reconstruction for direction-of-arrival estimation using multi-frequency co-prime arrays

    NASA Astrophysics Data System (ADS)

    BouDaher, Elie; Ahmad, Fauzia; Amin, Moeness G.

    2014-12-01

    In this paper, multi-frequency co-prime arrays are employed to perform direction-of-arrival (DOA) estimation with enhanced degrees of freedom (DOFs). Operation at multiple frequencies creates additional virtual elements in the difference co-array of the co-prime array corresponding to the reference frequency. Sparse reconstruction is then used to fully exploit the enhanced DOFs offered by the multi-frequency co-array, thereby increasing the number of resolvable sources. For the case where the sources have proportional spectra, the received signal vectors at the different frequencies are combined to form an equivalent single measurement vector model corresponding to the multi-frequency co-array. When the sources have nonproportional spectra, a group sparsity-based reconstruction approach is used to determine the direction of signal arrivals. Performance evaluation of the proposed multi-frequency approach is performed using numerical simulations for both cases of proportional and nonproportional source spectra.

  9. Sparse Reconstruction Techniques in Magnetic Resonance Imaging: Methods, Applications, and Challenges to Clinical Adoption.

    PubMed

    Yang, Alice C; Kretzler, Madison; Sudarski, Sonja; Gulani, Vikas; Seiberlich, Nicole

    2016-06-01

    The family of sparse reconstruction techniques, including the recently introduced compressed sensing framework, has been extensively explored to reduce scan times in magnetic resonance imaging (MRI). While there are many different methods that fall under the general umbrella of sparse reconstructions, they all rely on the idea that a priori information about the sparsity of MR images can be used to reconstruct full images from undersampled data. This review describes the basic ideas behind sparse reconstruction techniques, how they could be applied to improve MRI, and the open challenges to their general adoption in a clinical setting. The fundamental principles underlying different classes of sparse reconstruction techniques are examined, and the requirements that each makes on the undersampled data are outlined. Applications that could potentially benefit from the accelerations that sparse reconstructions could provide are described, and clinical studies using sparse reconstructions are reviewed. Lastly, technical and clinical challenges to widespread implementation of sparse reconstruction techniques, including optimization, reconstruction times, artifact appearance, and comparison with current gold standards, are discussed. PMID:27003227

  10. Comparison of reconstruction algorithms for sparse-array detection photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Chaudhary, G.; Roumeliotis, M.; Carson, J. J. L.; Anastasio, M. A.

    2010-02-01

    A photoacoustic tomography (PAT) imaging system based on a sparse 2D array of detector elements and an iterative image reconstruction algorithm has been proposed, which opens the possibility for high frame-rate 3D PAT. The efficacy of this PAT implementation is highly influenced by the choice of the reconstruction algorithm. In recent years, a variety of new reconstruction algorithms have been proposed for medical image reconstruction that have been motivated by the emerging theory of compressed sensing. These algorithms have the potential to accurately reconstruct sparse objects from highly incomplete measurement data, and therefore may be highly suited for sparse array PAT. In this context, a sparse object is one that is described by a relatively small number of voxel elements, such as typically arises in blood vessel imaging. In this work, we investigate the use of a gradient projection-based iterative reconstruction algorithm for image reconstruction in sparse-array PAT. The algorithm seeks to minimize an ℓ1-norm penalized least-squares cost function. By use of computer-simulation studies, we demonstrate that the gradient projection algorithm may further improve the efficacy of sparse-array PAT.

  11. Pulsed Terahertz Signal Reconstruction

    NASA Astrophysics Data System (ADS)

    Fletcher, J. R.; Swift, G. P.; Dai, DeChang; Chamberlain, J. M.; Upadhya, P. C.

    2007-12-01

    A procedure is outlined which can be used to determine the response of an experimental sample to a single, simple broadband frequency pulse in terahertz frequency time domain spectroscopy (TDS). The advantage that accrues from this approach is that oscillations and spurious signals (arising from a variety of sources in the TDS system or from ambient water vapor) can be suppressed. In consequence, small signals (arising from the interaction of the radiation with the sample) can be more readily observed in the presence of noise. Procedures for choosing key parameters and methods for eliminating further artifacts are described. In particular, the use of input functions which are based on the binomial distribution is described. These binomial functions are used to unscramble the sample response to a simple pulse: they have sufficient flexibility to allow for variations in the spectra of different terahertz sources, some of which have low frequency as well as high frequency cutoffs. The signal processing procedure is validated by simple reflection and transmission experiments using a gap between polytetrafluoroethylene (PTFE) plates to mimic a void within a larger material. It is shown that a resolution of 100μm is easily achievable in reflection geometry after signal processing.

  12. CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization

    PubMed Central

    Chen, Zijia; Zhou, Linghong

    2015-01-01

    Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections to reconstruct CT images, which is also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examination. To overcome the disadvantages of total variation (TV) minimization method, in this work we introduce a novel adaptive TpV regularization into sparse-projection image reconstruction and use FISTA technique to accelerate iterative convergence. The numerical experiments demonstrate that the proposed method suppresses noise and artifacts more efficiently, and preserves structure information better than other existing reconstruction methods. PMID:26089962

  13. Filtered gradient compressive sensing reconstruction algorithm for sparse and structured measurement matrices

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri H.; Arguello, Henry

    2016-05-01

    Compressive sensing state-of-the-art proposes random Gaussian and Bernoulli as measurement matrices. Nevertheless, often the design of the measurement matrix is subject to physical constraints, and therefore it is frequently not possible that the matrix follows a Gaussian or Bernoulli distribution. Examples of these limitations are the structured and sparse matrices of the compressive X-Ray and compressive spectral imaging systems. A standard algorithm for recovering sparse signals consists in minimizing an objective function that includes a quadratic error term combined with a sparsity-inducing regularization term. This problem can be solved using iterative algorithms for linear inverse problems. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity. However, current algorithms are slow for getting a high quality image reconstruction because they do not exploit the structured and sparsity characteristics of the compressive measurement matrices. This paper proposes the development of a gradient-based algorithm for compressive sensing reconstruction by including a filtering step that yields improved quality using fewer iterations. This algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual A^T y, where y is the measurement vector and A is the compressive measurement matrix. We show that the algorithm including the filtering step converges faster than the unfiltered version. We design various filters that are motivated by the structure of A^T y. Extensive simulation results using various sparse and structured matrices highlight the relative performance gain over the existing iterative process.

  14. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Turek, Javier S.; Elad, Michael; Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix have each a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622
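
    The MCA separation idea can be sketched on a toy problem, assuming a fixed DCT dictionary for the smooth component and the identity basis for the spiky component; the paper instead learns its dictionary adaptively from echo data, so all sizes, values, and the thresholding schedule below are illustrative assumptions:

    ```python
    import numpy as np

    # Toy sketch of morphological component analysis (MCA): an additive mix of
    # spikes (sparse in the identity basis) and a smooth wave (sparse in a DCT
    # basis) is separated by alternating thresholding with a decreasing
    # threshold. The fixed DCT dictionary stands in for the adaptive learned
    # dictionary used in the paper.
    n = 128
    idx = np.arange(n)
    D = np.cos(np.pi * (idx[:, None] + 0.5) * idx[None, :] / n)  # DCT-II atoms
    D /= np.linalg.norm(D, axis=0)            # orthonormal columns

    smooth = 1.2 * D[:, 3]                    # one low-frequency atom
    spikes = np.zeros(n)
    spikes[[20, 90]] = [2.0, -1.5]            # clutter-like spikes
    mix = smooth + spikes

    def hard(v, t):
        return np.where(np.abs(v) > t, v, 0.0)

    s_hat = np.zeros(n)                       # spike-component estimate
    c_hat = np.zeros(n)                       # DCT-coefficient estimate
    for t in np.linspace(1.5, 0.1, 30):       # decreasing threshold schedule
        c_hat = hard(D.T @ (mix - s_hat), t)  # explain residual by smooth atoms
        s_hat = hard(mix - D @ c_hat, t)      # explain the rest by spikes
    print(np.linalg.norm(s_hat - spikes))     # spikes separated from the mix
    ```

    Because each component is sparse in its own dictionary but not in the other's, the alternating thresholded projections gradually assign each part of the mix to the dictionary that represents it compactly.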

  15. Compressive sensing of sparse radio frequency signals using optical mixing.

    PubMed

    Valley, George C; Sefler, George A; Shaw, T Justin

    2012-11-15

    We demonstrate an optical mixing system for measuring properties of sparse radio frequency (RF) signals using compressive sensing (CS). Two types of sparse RF signals are investigated: (1) a signal that consists of a few 0.4 ns pulses in a 26.8 ns window and (2) a signal that consists of a few sinusoids at different frequencies. The RF is modulated onto the intensity of a repetitively pulsed, wavelength-chirped optical field, and time-wavelength-space mapping is used to map the optical field onto a 118-pixel, one-dimensional spatial light modulator (SLM). The SLM pixels are programmed with a pseudo-random bit sequence (PRBS) to form one row of the CS measurement matrix, and the optical throughput is integrated with a photodiode to obtain one value of the CS measurement vector. Then the PRBS is changed to form the second row of the mixing matrix and a second value of the measurement vector is obtained. This process is performed 118 times so that we can vary the dimensions of the CS measurement matrix from 1×118 to 118×118 (square). We use the penalized ℓ1-norm method with stopping parameter λ (also called basis pursuit denoising) to recover pulsed or sinusoidal RF signals as a function of the small dimension of the measurement matrix and stopping parameter. For a square matrix, we also find that penalized ℓ1-norm recovery performs better than conventional recovery using matrix inversion. PMID:23164876
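
    The penalized ℓ1-norm recovery used above can be sketched with a plain ISTA solver for min_x 0.5·||y − Ax||² + λ||x||₁; the Gaussian measurement matrix, sizes, and parameters below are illustrative assumptions, not the 118-pixel SLM system described in the abstract:

    ```python
    import numpy as np

    # Minimal sketch of penalized l1-norm recovery (basis pursuit denoising),
    #   min_x 0.5*||y - A x||^2 + lam*||x||_1,
    # solved with the iterative shrinkage-thresholding algorithm (ISTA).
    rng = np.random.default_rng(0)
    n, m = 64, 32                              # signal length, measurements
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, 3, replace=False)] = [2.0, -1.5, 1.0]
    y = A @ x_true                             # noiseless measurement vector

    def ista(A, y, lam=0.01, n_iter=2000):
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    x_hat = ista(A, y)
    print(np.linalg.norm(x_hat - x_true))      # small recovery error
    ```

    The stopping parameter λ trades data fidelity against sparsity: larger values suppress noise but bias the recovered amplitudes, which is why the abstract studies recovery as a function of λ.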

  16. Sparse signal representation and its applications in ultrasonic NDE.

    PubMed

    Zhang, Guang-Ming; Zhang, Cheng-Zhong; Harvey, David M

    2012-03-01

    Many sparse signal representation (SSR) algorithms have been developed in the past decade. The advantages of SSR, such as compact representations and super resolution, lead to state-of-the-art performance of SSR for processing ultrasonic non-destructive evaluation (NDE) signals. Choosing a suitable SSR algorithm and designing an appropriate overcomplete dictionary is key to success. After a brief review of sparse signal representation methods and the design of overcomplete dictionaries, this paper addresses the recent accomplishments of SSR for processing ultrasonic NDE signals. The advantages and limitations of SSR algorithms and various overcomplete dictionaries widely used in ultrasonic NDE applications are explored in depth. Their performance improvement compared to conventional signal processing methods in many applications, such as ultrasonic flaw detection and noise suppression, echo separation and echo estimation, and ultrasonic imaging, is investigated. Challenging issues met in practical ultrasonic NDE applications, for example the design of a good dictionary, are discussed. Representative experimental results are presented for demonstration. PMID:22040650

  17. Unbiased measurements of reconstruction fidelity of sparsely sampled magnetic resonance spectra.

    PubMed

    Wu, Qinglin; Coggins, Brian E; Zhou, Pei

    2016-01-01

    The application of sparse-sampling techniques to NMR data acquisition would benefit from reliable quality measurements for reconstructed spectra. We introduce a pair of noise-normalized measurements for differentiating inadequate modelling from overfitting. While the two measurements can be used jointly for methods that do not enforce exact agreement between the back-calculated time domain and the original sparse data, the cross-validation measure is applicable to all reconstruction algorithms. We show that the fidelity of reconstruction is sensitive to changes in these measurements and that model overfitting results in elevated measurement values and reduced spectral quality. PMID:27459896

  20. Reconstruction Method for Optical Tomography Based on the Linearized Bregman Iteration with Sparse Regularization

    PubMed Central

    Leng, Chengcai; Yu, Dongdong; Zhang, Shuang; An, Yu; Hu, Yifang

    2015-01-01

    Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at the cellular and molecular levels; it includes modalities such as bioluminescence tomography, fluorescence molecular tomography, and Cerenkov luminescence tomography. The inverse problem is ill-posed for these modalities, which causes nonunique solutions. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR). Considering the sparsity characteristics of the reconstructed sources, the sparsity can be regarded as a kind of a priori information, and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to further achieve fast and accurate reconstruction results. Experimental results in a numerical simulation and in an in vivo mouse demonstrate the effectiveness and potential of the proposed method. PMID:26421055

  2. Classification of transient signals using sparse representations over adaptive dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Myers, Kary L.; Pawley, Norma H.

    2011-06-01

    Automatic classification of broadband transient radio frequency (RF) signals is of particular interest in persistent surveillance applications. Because such transients are often acquired in noisy, cluttered environments, and are characterized by complex or unknown analytical models, feature extraction and classification can be difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Conventional representations using fixed (or analytical) orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They do not usually lead to sparse decompositions, and require separate feature selection algorithms, creating additional computational overhead. Pursuit-type decompositions over analytical, redundant dictionaries yield sparse representations by design, and work well for target signals in the same function class as the dictionary atoms. The pursuit search however has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. Our approach builds on the image analysis work of Mairal et al. (2008) to learn a discriminative dictionary for RF transients directly from data without relying on analytical constraints or additional knowledge about the signal characteristics. We then use a pursuit search over this dictionary to generate sparse classification features. We demonstrate that our learned dictionary is robust to unexpected changes in background content and noise levels. The target classification decision is obtained in almost real-time via a parallel, vectorized implementation.

  3. Reconstruction of Graph Signals Through Percolation from Seeding Nodes

    NASA Astrophysics Data System (ADS)

    Segarra, Santiago; Marques, Antonio G.; Leus, Geert; Ribeiro, Alejandro

    2016-08-01

    New schemes to recover signals defined in the nodes of a graph are proposed. Our focus is on reconstructing bandlimited graph signals, which are signals that admit a sparse representation in a frequency domain related to the structure of the graph. Most existing formulations focus on estimating an unknown graph signal by observing its value on a subset of nodes. By contrast, in this paper, we study the problem of reconstructing a known graph signal using as input a graph signal that is non-zero only for a small subset of nodes (seeding nodes). The sparse signal is then percolated (interpolated) across the graph using a graph filter. Graph filters are a generalization of classical time-invariant systems and represent linear transformations that can be implemented distributedly across the nodes of the graph. Three setups are investigated. In the first one, a single simultaneous injection takes place on several nodes in the graph. In the second one, successive value injections take place on a single node. The third one is a generalization where multiple nodes inject multiple signal values. For noiseless settings, conditions under which perfect reconstruction is feasible are given, and the corresponding schemes to recover the desired signal are specified. Scenarios leading to imperfect reconstruction, either due to insufficient or noisy signal value injections, are also analyzed. Moreover, connections with classical interpolation in the time domain are discussed. The last part of the paper presents numerical experiments that illustrate the results developed through synthetic graph signals and two real-world signal reconstruction problems: influencing opinions in a social network and inducing a desired brain state in humans.
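
    The seeding-node idea can be sketched on a toy graph: a signal injected at a single node is percolated by a polynomial graph filter H = Σ_l h_l S^l, and the filter taps are chosen so that the percolated signal matches a desired target. The path graph, seed node, and target values below are illustrative assumptions:

    ```python
    import numpy as np

    # Sketch of single-node seeding with a polynomial graph filter.
    # S is the graph shift operator (here, a path-graph adjacency matrix);
    # each power S^l spreads the seed one hop further across the graph.
    N = 6
    S = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # path graph

    seed = np.zeros(N)
    seed[0] = 1.0                       # single injection at node 0

    target = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0])   # desired graph signal

    # Krylov matrix: columns are S^l @ seed for l = 0..N-1. For this seed and
    # graph it is triangular with unit diagonal, hence invertible.
    K = np.column_stack([np.linalg.matrix_power(S, l) @ seed for l in range(N)])
    h = np.linalg.solve(K, target)      # filter taps for perfect reconstruction

    H = sum(c * np.linalg.matrix_power(S, l) for l, c in enumerate(h))
    print(np.allclose(H @ seed, target))  # True: target reached by percolation
    ```

    Because each application of S only exchanges values between neighbors, the filter can be implemented distributedly, which is the practical appeal of the percolation approach.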

  4. Wavefront reconstruction in phase-shifting interferometry via sparse coding of amplitude and absolute phase.

    PubMed

    Katkovnik, V; Bioucas-Dias, J

    2014-08-01

    Phase-shifting interferometry is a coherent optical method that combines high accuracy with high measurement speeds. This technique is therefore desirable in many applications such as the efficient industrial quality inspection process. However, despite its advantageous properties, the inference of the object amplitude and the phase, herein termed wavefront reconstruction, is not a trivial task owing to the Poissonian noise associated with the measurement process and to the 2π phase periodicity of the observation mechanism. In this paper, we formulate the wavefront reconstruction as an inverse problem, where the amplitude and the absolute phase are assumed to admit sparse linear representations in suitable sparsifying transforms (dictionaries). Sparse modeling is a form of regularization of inverse problems which, in the case of the absolute phase, is not available to the conventional wavefront reconstruction techniques, as only interferometric phase modulo-2π is considered therein. The developed sparse modeling of the absolute phase solves two different problems: accuracy of the interferometric (wrapped) phase reconstruction and simultaneous phase unwrapping. Based on this rationale, we introduce the sparse phase and amplitude reconstruction (SPAR) algorithm. SPAR takes into full consideration the Poissonian (photon counting) measurements and uses the data-adaptive block-matching 3D (BM3D) frames as a sparse representation for the amplitude and for the absolute phase. SPAR effectiveness is documented by comparing its performance with that of competitors in a series of experiments. PMID:25121537

  5. [Research on PPG Signal Reconstruction Based on Compressed Sensing].

    PubMed

    Zhang, Aihua; Ou, Jiqing; Chou, Yongxin; Yang, Bin

    2016-01-01

    In order to improve the storage and transmission efficiency of dynamic photoplethysmography (PPG) signals in the detection process and reduce the redundancy of signals, the modified adaptive matching pursuit (MAMP) algorithm was proposed according to the sparsity of the PPG signal. The proposed algorithm, which builds on the sparse adaptive matching pursuit (SAMP) reconstruction method, improves the accuracy of signal sparsity estimation by using both a variable step size and double threshold conditions. Experiments on simulated and actual PPG signals show that the modified algorithm estimates the sparsity of signals accurately and quickly, and has good anti-noise performance. Compared with SAMP and orthogonal matching pursuit (OMP), the algorithm reconstructs signals faster and more accurately. PMID:27197487
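
    For reference, the orthogonal matching pursuit (OMP) baseline mentioned above can be sketched in a few lines; the synthetic sparse signal and Gaussian measurement matrix below are illustrative assumptions rather than real PPG data:

    ```python
    import numpy as np

    # Minimal orthogonal matching pursuit (OMP): greedily pick the atom most
    # correlated with the residual, then re-fit all picked atoms by least
    # squares so the residual stays orthogonal to the chosen support.
    def omp(A, y, k):
        """Greedy recovery of a k-sparse x from y = A x."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))   # best-correlated atom
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef          # orthogonalized residual
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 100)) / np.sqrt(50)
    x_true = np.zeros(100)
    x_true[[5, 30, 77]] = [1.5, -2.0, 1.0]
    y = A @ x_true
    x_hat = omp(A, y, k=3)
    print(np.linalg.norm(x_hat - x_true))   # near-exact noiseless recovery
    ```

    Note that plain OMP needs the sparsity k as an input; the variable-step, double-threshold modification described in the abstract is aimed precisely at estimating that sparsity when it is unknown.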

  6. Simplified signal processing for impedance spectroscopy with spectrally sparse sequences

    NASA Astrophysics Data System (ADS)

    Annus, P.; Land, R.; Reidla, M.; Ojarand, J.; Mughal, Y.; Min, M.

    2013-04-01

    The classical method for measuring electrical bio-impedance involves excitation with a sinusoidal waveform. Sinusoidal excitation at fixed frequency points enables a wide variety of signal processing options, the most general being the Fourier transform. Multiplication with two quadrature waveforms at the desired frequency can easily be accomplished in both the analogue and digital domains; even the simplest quadrature square waves can be considered, which reduces the signal processing task in the analogue domain to synchronous switching followed by a low-pass filter, and in the digital domain requires only additions. So-called spectrally sparse excitation sequences (SSS), recently introduced into the bio-impedance measurement domain, are a very reasonable choice when simultaneous multifrequency excitation is required. They have many good properties, such as ease of generation and a good crest factor compared to similar multisinusoids. So far, a discrete or fast Fourier transform has typically been considered for the signal processing step. Simplified methods would nevertheless reduce the computational burden and enable simpler, less costly, and less energy-hungry signal processing platforms. The accuracy of measurement with SSS excitation using different waveforms for quadrature demodulation is compared in order to evaluate the feasibility of the simplified signal processing. A sigma-delta modulated sinusoid (binary signal) is considered a good alternative for synchronous demodulation.
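
    The simplified demodulation idea reduces to multiplying the measured signal by ±1 quadrature references and averaging, which the sketch below demonstrates for a single-tone response; the sample rate, excitation frequency, amplitude, and phase are illustrative assumptions:

    ```python
    import numpy as np

    # Sketch of synchronous demodulation with quadrature square waves: the
    # multiply reduces to sign flips, and the low-pass filter to an average.
    fs, f0, n = 10000.0, 100.0, 1000          # 10 full periods in the record
    t = (np.arange(n) + 0.5) / fs             # half-sample offset keeps the
                                              # square-wave edges off the grid
    signal = 2.3 * np.cos(2 * np.pi * f0 * t - 0.4)   # "measured" response

    sq_i = np.sign(np.cos(2 * np.pi * f0 * t))        # in-phase reference
    sq_q = np.sign(np.sin(2 * np.pi * f0 * t))        # quadrature reference

    i_comp = np.mean(signal * sq_i)
    q_comp = np.mean(signal * sq_q)

    # A unit square wave's fundamental has amplitude 4/pi, and mixing two
    # unit-amplitude tones contributes a factor 1/2, hence the pi/2 rescale.
    amp = np.hypot(i_comp, q_comp) * np.pi / 2
    phase = np.arctan2(-q_comp, i_comp)
    print(amp, phase)   # close to 2.3 and -0.4
    ```

    Because the references take only the values ±1, the multiplications become sign changes, which is what allows the analogue implementation to collapse to synchronous switching plus a low-pass filter.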

  7. Impact-force sparse reconstruction from highly incomplete and inaccurate measurements

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Chen, Xuefeng

    2016-08-01

    The classical l2-norm-based regularization methods applied for force reconstruction inverse problem require that the number of measurements should not be less than the number of unknown sources. Taking into account the sparse nature of impact-force in time domain, we develop a general sparse methodology based on minimizing l1-norm for solving the highly underdetermined model of impact-force reconstruction. A monotonic two-step iterative shrinkage/thresholding (MTWIST) algorithm is proposed to find the sparse solution to such an underdetermined model from highly incomplete and inaccurate measurements, which can be problematic with Tikhonov regularization. MTWIST is highly efficient for large-scale ill-posed problems since it mainly involves matrix-vector multiplies without matrix factorization. In sparsity frame, the proposed sparse regularization method can not only determine the actual impact location from many candidate sources but also simultaneously reconstruct the time history of impact-force. Simulation and experiment including single-source and two-source impact-force reconstruction are conducted on a simply supported rectangular plate and a shell structure to illustrate the effectiveness and applicability of MTWIST, respectively. Both the locations and force time histories of the single-source and two-source cases are accurately reconstructed from a single accelerometer, where the high noise level is considered in simulation and the primary noise in experiment is supposed to be colored noise. Meanwhile, the consecutive impact-forces reconstruction in a large-scale (greater than 10^4) sparse frame illustrates that MTWIST has advantages of computational efficiency and identification accuracy over Tikhonov regularization.

  8. Precise RFID localization in impaired environment through sparse signal recovery

    NASA Astrophysics Data System (ADS)

    Subedi, Saurav; Zhang, Yimin D.; Amin, Moeness G.

    2013-05-01

    Radio frequency identification (RFID) is a rapidly developing wireless communication technology for electronically identifying, locating, and tracking products, assets, and personnel. RFID has become one of the most important means to construct real-time locating systems (RTLS) that track and identify the location of objects in real time using simple, inexpensive tags and readers. The applicability and usefulness of RTLS techniques depend on their achievable accuracy. In particular, when multilateration-based localization techniques are exploited, the achievable accuracy primarily relies on the precision of the range estimates between a reader and the tags. Such range information can be obtained by using the received signal strength indicator (RSSI) and/or the phase difference of arrival (PDOA). In both cases, however, the accuracy is significantly compromised when the operation environment is impaired. In particular, multipath propagation significantly affects the measurement accuracy of both RSSI and phase information. In addition, because RFID systems are typically operated in short distances, RSSI and phase measurements are also coupled with the reader and tag antenna patterns, making accurate RFID localization very complicated and challenging. In this paper, we develop new methods to localize RFID tags or readers by exploiting sparse signal recovery techniques. The proposed method allows the channel environment and antenna patterns to be taken into account and be properly compensated at a low computational cost. As such, the proposed technique yields superior performance in challenging operation environments with the above-mentioned impairments.

  9. Direct reconstruction of enhanced signal in computed tomography perfusion

    NASA Astrophysics Data System (ADS)

    Li, Bin; Lyu, Qingwen; Ma, Jianhua; Wang, Jing

    2016-04-01

High imaging dose has been a concern in computed tomography perfusion (CTP), as repeated scans are performed at the same location of a patient. On the other hand, signal changes occur only in limited regions of CT images acquired at different time points. In this work, we propose a new reconstruction strategy that effectively utilizes the initial-phase high-quality CT to reconstruct later-phase CT acquired with a low-dose protocol. In the proposed strategy, the initial high-quality CT is treated as a base image, and the enhanced signal (ES) is reconstructed directly by minimizing the penalized weighted least-squares (PWLS) criterion. The proposed PWLS-ES strategy converts conventional CT reconstruction into a sparse signal reconstruction problem. Digital and anthropomorphic phantom studies were performed to evaluate the performance of the proposed PWLS-ES strategy. Both phantom studies show that the proposed PWLS-ES method outperforms the standard iterative CT reconstruction algorithm based on the same PWLS criterion according to various quantitative metrics, including root mean squared error (RMSE) and the universal quality index (UQI).

  10. Sparse reconstruction of liver cirrhosis from monocular mini-laparoscopic sequences

    NASA Astrophysics Data System (ADS)

    Marcinczak, Jan Marek; Painer, Sven; Grigat, Rolf-Rainer

    2015-03-01

Mini-laparoscopy is a technique used by clinicians to inspect the liver surface with ultra-thin laparoscopes. However, so far, no quantitative measurements based on mini-laparoscopic sequences are possible. This paper presents a Structure from Motion (SfM) based methodology to perform 3D reconstruction of liver cirrhosis from mini-laparoscopic videos. The approach combines state-of-the-art tracking, pose estimation, outlier rejection and global optimization to obtain a sparse reconstruction of the cirrhotic liver surface. Specular reflection segmentation is included in the reconstruction framework to increase the robustness of the reconstruction. The presented approach is evaluated on 15 endoscopic sequences using three cirrhotic liver phantoms. The median reconstruction accuracy ranges from 0.3 mm to 1 mm.

  11. Tomographic bioluminescence imaging reconstruction via a dynamically sparse regularized global method in mouse models.

    PubMed

    Liu, Kai; Tian, Jie; Qin, Chenghu; Yang, Xin; Zhu, Shouping; Han, Dong; Wu, Ping

    2011-04-01

Generally, the performance of tomographic bioluminescence imaging depends on several factors, such as regularization parameters and the initial guess of the source distribution. In this paper, a global-inexact-Newton-based reconstruction method, regularized by a dynamic sparse term, is presented for tomographic reconstruction. The proposed method achieves higher imaging reliability and efficiency. In vivo mouse experimental reconstructions were performed to validate the proposed method. Comparisons of the proposed method with other methods demonstrate its applicability over an entire region. Moreover, its reliable performance over a wide range of regularization parameters and initial unknown values was also investigated. Based on the in vivo experiment and a mouse atlas, the tolerance for optical-property mismatch was evaluated under both overestimation and underestimation. Additionally, the reconstruction efficiency was investigated with different sizes of mouse grids. We showed that this method is reliable for tomographic bioluminescence imaging in practical mouse experimental applications. PMID:21529085

  12. Two-stage sparse representation-based face recognition with reconstructed images

    NASA Astrophysics Data System (ADS)

    Cheng, Guangtao; Song, Zhanjie; Lei, Yang; Han, Xiuning

    2014-09-01

    In order to address the challenges that both the training and testing images are contaminated by random pixels corruption, occlusion, and disguise, a robust face recognition algorithm based on two-stage sparse representation is proposed. Specifically, noises in the training images are first eliminated by low-rank matrix recovery. Then, by exploiting the first-stage sparse representation computed by solving a new extended ℓ1-minimization problem, noises in the testing image can be successfully removed. After the elimination, feature extraction techniques that are more discriminative but are sensitive to noise can be effectively performed on the reconstructed clean images, and the final classification is accomplished by utilizing the second-stage sparse representation obtained by solving the reduced ℓ1-minimization problem in a low-dimensional feature space. Extensive experiments are conducted on publicly available databases to verify the superiority and robustness of our algorithm.

  13. System matrix analysis for sparse-view iterative image reconstruction in X-ray CT.

    PubMed

    Wang, Linyuan; Zhang, Hanming; Cai, Ailong; Li, Yongl; Yan, Bin; Li, Lei; Hu, Guoen

    2015-01-01

Iterative image reconstruction (IIR) with sparsity-exploiting methods, such as total variation (TV) minimization, used for investigations in compressive sensing (CS) claims potentially large reductions in sampling requirements. Quantifying this claim for computed tomography (CT) is non-trivial, as both the singularity of undersampled reconstruction and the view number sufficient for sparse-view reconstruction are ill-defined. In this paper, the singular value decomposition method is used to study the condition number and singularity of the system matrix and the regularized matrix. An estimation method for the empirical lower bound is proposed, which is helpful for estimating the number of projection views required for exact reconstruction. Simulation studies show that the singularity of the system matrices for different projection views is effectively reduced by regularization. Computing the condition number of a regularized matrix is necessary to provide a reference for evaluating the singularity and recovery potential of reconstruction algorithms using regularization. The empirical lower bound is helpful for estimating the projection view number required by a sparse reconstruction algorithm. PMID:25567402
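The effect of regularization on singularity described above can be checked directly with an SVD. This toy snippet uses a random matrix in place of a CT system matrix (sizes and regularization weight are illustrative assumptions): for a rank-deficient A, the regularized normal matrix A^T A + beta*I has condition number (s_max^2 + beta)/beta.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100))     # undersampled: 50 views, 100 unknowns

s = np.linalg.svd(A, compute_uv=False)
rank = int(np.sum(s > 1e-10))
print(rank)                            # at most 50 < 100, so A is singular

# Eigenvalues of A^T A + beta*I are s_i^2 + beta (and beta on the null space),
# so the condition number of the regularized matrix is (s_max^2 + beta)/beta.
beta = 1e-2
predicted = (s[0] ** 2 + beta) / beta
M = A.T @ A + beta * np.eye(100)
print(np.isclose(np.linalg.cond(M), predicted, rtol=1e-6))
```

The unregularized normal matrix here has infinite condition number; the closed-form bound makes explicit how beta trades singularity against fidelity.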

  14. Sparse asynchronous cortical generators can produce measurable scalp EEG signals.

    PubMed

    von Ellenrieder, Nicolás; Dan, Jonathan; Frauscher, Birgit; Gotman, Jean

    2016-09-01

We investigate to what degree the synchronous activation of a smooth patch of cortex is necessary for observing EEG scalp activity. We perform extensive simulations to compare the activity generated on the scalp by different models of cortical activation, based on intracranial EEG findings reported in the literature. The spatial activation is modeled as a cortical patch of constant activation or as random sets of small generators (0.1 to 3 cm² each) concentrated in a cortical region. Temporal activation models for the generation of oscillatory activity are either equal phase or random phase across the cortical patches. The results show that smooth or random spatial activation profiles produce scalp electric potential distributions with the same shape. Also, in the generation of oscillatory activity, multiple cortical generators with random phase produce scalp activity attenuated on average only 2 to 4 times compared to generators with equal phase. Sparse asynchronous cortical generators can produce measurable scalp EEG. This is a possible explanation for seemingly paradoxical observations of simultaneous disorganized intracranial activity and scalp EEG signals. Thus, the standard interpretation of scalp EEG might constitute an oversimplification of the underlying brain activity. PMID:27262240

  15. New shape models of asteroids reconstructed from sparse-in-time photometry

    NASA Astrophysics Data System (ADS)

    Durech, Josef; Hanus, Josef; Vanco, Radim; Oszkiewicz, Dagmara Anna

    2015-08-01

Asteroid physical parameters - the shape, the sidereal rotation period, and the spin-axis orientation - can be reconstructed by the lightcurve inversion method from disk-integrated photometry, either dense (classical lightcurves) or sparse in time. We will review our recent progress in asteroid shape reconstruction from sparse photometry. Finding a unique solution of the inverse problem is time consuming because the sidereal rotation period has to be found by scanning a wide interval of possible periods. This can be solved efficiently by splitting the period parameter space into small parts that are sent to the computers of volunteers and processed in parallel. We will show how this distributed-computing approach works with currently available sparse photometry processed in the framework of the project Asteroids@home. In particular, we will show results based on the Lowell Photometric Database. The method produces reliable asteroid models with a very low rate of false solutions, and the pipelines and codes can be applied directly to other sources of sparse photometry - Gaia data, for example. We will present the distribution of spin axes of hundreds of asteroids, discuss the dependence of spin obliquity on the size of an asteroid, and show examples of spin-axis distributions in asteroid families that confirm the Yarkovsky/YORP evolution scenario.
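The period scan described above can be sketched as follows. The step formula dP ~ P^2 / (2*T*q), with T the time span of the data and q an oversampling factor, is a standard heuristic for lightcurve period searches (neighbouring trial periods stay roughly in phase over the data span); the numbers and the chunking into work units are illustrative assumptions, not the Asteroids@home implementation:

```python
import numpy as np

def trial_periods(p_min, p_max, t_span, oversample=5):
    # Build the grid of trial sidereal periods with step dP ~ P^2/(2*T*q),
    # so the step grows quadratically with the period.
    periods = [p_min]
    while periods[-1] < p_max:
        p = periods[-1]
        periods.append(p + p * p / (2.0 * t_span * oversample))
    return np.array(periods)

grid = trial_periods(2.0, 100.0, t_span=24.0 * 365)  # hours, ~1 yr of data
chunks = np.array_split(grid, 500)  # independent work units for volunteers
print(len(grid), len(chunks))
```

Each chunk can be scanned on a different volunteer machine, and only the best-fit period per chunk needs to be returned, which is what makes the search embarrassingly parallel.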

  16. Supervised Single-Channel Speech Separation via Sparse Decomposition Using Periodic Signal Models

    NASA Astrophysics Data System (ADS)

    Nakashizuka, Makoto; Okumura, Hiroyuki; Iiguni, Youji

In this paper, we propose a method for supervised single-channel speech separation through sparse decomposition using periodic signal models. The proposed separation method employs sparse decomposition, which decomposes a signal into a set of periodic signals under a sparsity penalty. In order to achieve separation through sparse decomposition, the decomposed periodic signals have to be assigned to the corresponding sources. For this assignment, we introduce clustering using a K-means algorithm to group the decomposed periodic signals into as many clusters as there are speakers. After the clustering, each cluster is assigned to its corresponding speaker using previously learned codebooks. Through separation experiments, we compare our method with MaxVQ, which performs separation in the frequency-spectrum domain. The experimental results in terms of signal-to-distortion ratio show that the proposed sparse decomposition method is comparable to the frequency-domain approach and has a lower computational cost for the assignment of speech components.
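The clustering step can be illustrated with a toy K-means on one-dimensional features. Using fundamental frequency as the feature, and the numbers below, are assumptions for illustration only, not the paper's feature set:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign every point to its nearest center, then recompute the means.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels

# Fundamental-frequency features (Hz) of six decomposed periodic components,
# three from each of two speakers.
feats = np.array([[110.0], [118.0], [112.0], [220.0], [231.0], [225.0]])
labels = kmeans(feats, k=2)
print(labels)
```

With k set to the number of speakers, the components cluster by speaker; each cluster would then be matched against the learned codebooks.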

  17. Advances in thermographic signal reconstruction

    NASA Astrophysics Data System (ADS)

    Shepard, Steven M.; Frendberg Beemer, Maria

    2015-05-01

Since its introduction in 2001, the Thermographic Signal Reconstruction (TSR) method has emerged as one of the most widely used methods for enhancement and analysis of thermographic sequences, with applications extending beyond industrial NDT into biomedical research, art restoration and botany. The basic TSR process, in which a noise-reduced replica of each pixel time history is created, yields improvement over unprocessed image data that is sufficient for many applications. However, examination of the resulting logarithmic time derivatives of each TSR pixel replica provides significant insight into the physical mechanisms underlying the active thermography process. The deterministic and invariant properties of the derivatives have enabled the successful implementation of automated defect recognition and measurement systems. Unlike most approaches to analysis of thermography data, TSR does not depend on flaw-background contrast, so it can also be applied to characterization and measurement of the thermal properties of flaw-free samples. We present a summary of recent advances in TSR, a review of the underlying theory and examples of its implementation.

  18. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The introduction of a selective matrix helps improve reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).

  19. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography

    NASA Astrophysics Data System (ADS)

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-03-01

Diffuse optical tomography (DOT) is a relatively low-cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making reconstruction of high-quality images a critical challenge. Because of the inherent sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low-resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up.

  20. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography

    PubMed Central

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-01-01

Diffuse optical tomography (DOT) is a relatively low-cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making reconstruction of high-quality images a critical challenge. Because of the inherent sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low-resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up. PMID:26940661

  1. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography.

    PubMed

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-01-01

Diffuse optical tomography (DOT) is a relatively low-cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making reconstruction of high-quality images a critical challenge. Because of the inherent sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low-resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up. PMID:26940661

  2. Hybrid multilevel sparse reconstruction for a whole domain bioluminescence tomography using adaptive finite element.

    PubMed

    Yu, Jingjing; He, Xiaowei; Geng, Guohua; Liu, Fang; Jiao, L C

    2013-01-01

    Quantitative reconstruction of bioluminescent sources from boundary measurements is a challenging ill-posed inverse problem owing to the high degree of absorption and scattering of light through tissue. We present a hybrid multilevel reconstruction scheme by combining the ability of sparse regularization with the advantage of adaptive finite element method. In view of the characteristics of different discretization levels, two different inversion algorithms are employed on the initial coarse mesh and the succeeding ones to strike a balance between stability and efficiency. Numerical experiment results with a digital mouse model demonstrate that the proposed scheme can accurately localize and quantify source distribution while maintaining reconstruction stability and computational economy. The effectiveness of this hybrid reconstruction scheme is further confirmed with in vivo experiments. PMID:23533542

  3. Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies.

    PubMed

    Mahrous, Hesham; Ward, Rabab

    2016-01-01

This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning-BO (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multichannel case, known as the multi-measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multichannel signals by whitening the model in the temporal and the spatial domains. Our proposed method represents the multi-channel signal data as a vector constructed in a specific way, so that it has a better block-sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multichannel EEG signals, we modify the parameters of the BSBL-BO algorithm so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied to the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and MMV methods. It also achieves significantly lower compression errors, even at high compression ratios such as 10:1, on three different datasets. PMID:26861335
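BSBL-BO itself is a Bayesian method and is not reproduced here; the benefit of block sparsity, however, can be illustrated with a simple group-thresholding solver. Everything below (matrix sizes, block width, penalty weight) is an illustrative assumption:

```python
import numpy as np

def block_soft(v, t, blk):
    # Proximal operator of the group l2,1 penalty over contiguous blocks:
    # each block's norm is shrunk by t, or the whole block is zeroed.
    out = np.zeros_like(v)
    for s in range(0, len(v), blk):
        g = v[s:s + blk]
        n = np.linalg.norm(g)
        if n > t:
            out[s:s + blk] = (1.0 - t / n) * g
    return out

def block_ista(A, y, lam=0.5, blk=8, n_iter=2000):
    # ISTA iteration with the group prox in place of entrywise thresholding.
    L = np.linalg.norm(A, 2) ** 2      # step size 1/L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = block_soft(x - A.T @ (A @ x - y) / L, lam / L, blk)
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((40, 128))     # 40 compressed samples, length-128 signal
x_true = np.zeros(128)
x_true[16:24] = rng.standard_normal(8) # a single active block of width 8
y = A @ x_true
x_hat = block_ista(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Shrinking whole blocks together exploits the intra-block dependency, which is the structural idea the abstract's vector construction is designed to strengthen.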

  4. Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies

    PubMed Central

    Mahrous, Hesham; Ward, Rabab

    2016-01-01

This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning-BO (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multichannel case, known as the multi-measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multichannel signals by whitening the model in the temporal and the spatial domains. Our proposed method represents the multi-channel signal data as a vector constructed in a specific way, so that it has a better block-sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multichannel EEG signals, we modify the parameters of the BSBL-BO algorithm so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied to the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and MMV methods. It also achieves significantly lower compression errors, even at high compression ratios such as 10:1, on three different datasets. PMID:26861335

  5. Evaluation of iterative sparse object reconstruction from few projections for 3-D rotational coronary angiography.

    PubMed

    Hansis, Eberhard; Schäfer, Dirk; Dössel, Olaf; Grass, Michael

    2008-11-01

A 3-D reconstruction of the coronary arteries offers great advantages in the diagnosis and treatment of cardiovascular disease, compared to 2-D X-ray angiograms. Besides improved roadmapping, quantitative vessel analysis is possible. Due to the heart's motion, rotational coronary angiography typically provides only 5-10 projections for the reconstruction of each cardiac phase, which leads to a strongly undersampled reconstruction problem. Such an ill-posed problem can be approached with regularized iterative methods. The coronary arteries cover only a small fraction of the reconstruction volume. Therefore, the minimization of the ℓ1 norm of the reconstructed image, favoring spatially sparse images, is a suitable regularization. Additional problems are overlaid background structures and projection truncation, which can be alleviated by background reduction using a morphological top-hat filter. This paper quantitatively evaluates image reconstruction based on these ideas on software phantom data, in terms of reconstructed absorption coefficients and vessel radii. Results for different algorithms and different input data sets are compared. First results for electrocardiogram-gated reconstruction from clinical catheter-based rotational X-ray coronary angiography are presented. Excellent 3-D image quality can be achieved. PMID:18955171

  6. A Sparse Reconstruction Approach for Identifying Gene Regulatory Networks Using Steady-State Experiment Data

    PubMed Central

    Zhang, Wanhong; Zhou, Tong

    2015-01-01

Motivation: Identifying gene regulatory networks (GRNs), which consist of a large number of interacting units, has become a problem of paramount importance in systems biology. In many situations, causal interaction relationships among these units must be reconstructed from measured expression data and other a priori information. Though numerous classical methods have been developed to unravel the interactions of GRNs, these methods suffer from either high computational complexity or low estimation accuracy. Note that great similarities exist between identifying the genes that directly regulate a specific gene and reconstructing a sparse vector, which often amounts to determining the number, locations and magnitudes of the nonzero entries of an unknown vector by solving an underdetermined system of linear equations y = Φx. Based on these similarities, we propose a novel sparse reconstruction framework to identify the structure of a GRN, so as to increase the accuracy of causal regulation estimates as well as to reduce their computational complexity. Results: In this paper, a sparse reconstruction framework is proposed on the basis of steady-state experiment data to identify GRN structure. Different from traditional methods, the adopted approach is well suited to the large-scale underdetermined problem of inferring a sparse vector. We investigate how to combine noisy steady-state experiment data with a sparse reconstruction algorithm to identify causal relationships. The efficiency of this method is tested on an artificial linear network, a mitogen-activated protein kinase (MAPK) pathway network and the in silico networks of the DREAM challenges. The performance of the suggested approach is compared with two state-of-the-art algorithms, the widely adopted total least-squares (TLS) method and the available results from the DREAM project. Results show that, with a lower computational cost, the proposed method can
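The underdetermined system y = Φx above can be attacked with any ℓ1 solver. The following toy coordinate-descent lasso (dimensions and penalty weight are illustrative assumptions, not the authors' algorithm) shows how the nonzero entries of x, i.e. the candidate regulators of one target gene, are recovered:

```python
import numpy as np

def lasso_cd(Phi, y, lam=0.1, n_sweeps=200):
    # Cyclic coordinate descent for 0.5*||y - Phi x||^2 + lam*||x||_1.
    n = Phi.shape[1]
    x = np.zeros(n)
    col_sq = (Phi ** 2).sum(axis=0)
    r = y - Phi @ x                    # running residual
    for _ in range(n_sweeps):
        for j in range(n):
            r = r + Phi[:, j] * x[j]   # put coordinate j back into the residual
            rho = Phi[:, j] @ r
            x[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r = r - Phi[:, j] * x[j]
    return x

rng = np.random.default_rng(2)
Phi = rng.standard_normal((30, 120))   # 30 steady-state experiments, 120 genes
x_true = np.zeros(120)
x_true[[3, 47]] = [0.8, -1.2]          # two genes regulate the target gene
y = Phi @ x_true
x_hat = lasso_cd(Phi, y)
print(np.flatnonzero(np.abs(x_hat) > 0.05))
```

With far fewer experiments (30) than candidate genes (120), the sparse solver still pins down the two true regulators, which is the premise the framework builds on.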

  7. A new look at signal sparsity paradigm for low-dose computed tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Zhang, Hao; Moore, William; Liang, Zhengrong

    2016-03-01

Signal sparsity in the computed tomography (CT) image reconstruction field is routinely interpreted as sparse angular sampling around the patient body whose image is to be reconstructed. For clinical CT applications, the normal tissues may be known and treated as sparse signals, but the abnormalities inside the body are usually unknown signals and may not be treatable as sparse signals. Furthermore, the locations and structures of abnormalities are also usually unknown, and this uncertainty adds further challenges to interpreting signal sparsity for clinical applications. In this exploratory experimental study, we assume that once the projection data around the continuous body are discretized, regardless of the sampling rate, the image reconstruction of the continuous body from the discretized data becomes a sparse-signal problem. We hypothesize that a dense prior model describing the continuous body is a desirable choice for achieving an optimal solution for a given clinical task. We tested this hypothesis by adapting the total variation stroke (TVS) model to describe the continuous body signals and showing its gain over classic filtered backprojection (FBP) across a wide range of angular sampling rates. For the given clinical task of detecting lung nodules of size 5 mm and larger, a consistent improvement of TVS over FBP in nodule detection was observed by an experienced radiologist from low to high sampling rates. This experimental outcome concurs with the expectation of the TVS model. Further investigation into theoretical insights and task-dependent evaluations is needed.

  8. Cloud Removal from SENTINEL-2 Image Time Series Through Sparse Reconstruction from Random Samples

    NASA Astrophysics Data System (ADS)

    Cerra, D.; Bieniarz, J.; Müller, R.; Reinartz, P.

    2016-06-01

In this paper we propose a cloud removal algorithm for scenes within a Sentinel-2 satellite image time series, based on synthesis of the affected areas via sparse reconstruction. For this purpose, a cloud and cloud-shadow mask must be given. Compared with previous works, the process has a higher degree of automation. Several dictionaries, on the basis of which the data are reconstructed, are selected randomly from cloud-free areas around the cloud, and for each pixel the dictionary yielding the smallest reconstruction error in non-corrupted images is chosen for the restoration. The values beneath a cloudy area are therefore estimated by observing the spectral evolution in time of the non-corrupted pixels around it. The proposed restoration algorithm is fast and efficient, requires minimal supervision and yields results with low overall radiometric and spectral distortions.
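The reconstruction idea can be sketched as follows: fit coefficients that express the target pixel's time series in terms of cloud-free dictionary pixels on the uncorrupted dates, then synthesize the value under the cloud. Plain least squares stands in for the sparse solver here, and all dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
T, K = 12, 6                          # 12 acquisition dates, 6 dictionary pixels
D = rng.standard_normal((T, K))       # time series of cloud-free pixels
w_true = np.array([0.5, 0.0, 0.3, 0.0, 0.2, 0.0])
pixel = D @ w_true                    # target pixel follows the dictionary
cloudy = 7                            # index of the cloud-covered date

clear = np.arange(T) != cloudy        # fit only on uncorrupted dates
w, *_ = np.linalg.lstsq(D[clear], pixel[clear], rcond=None)
estimate = float(D[cloudy] @ w)      # synthesize the value under the cloud
print(abs(estimate - pixel[cloudy]))
```

Because the target pixel genuinely follows the dictionary in this toy setup, the fitted coefficients transfer exactly to the obscured date; real data would add noise and require the error-based dictionary selection described above.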

  9. Initial experience in primal-dual optimization reconstruction from sparse-PET patient data

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-03-01

There is interest in designing a PET system with fewer detectors due to cost concerns, while not significantly compromising the PET utility. Recently developed optimization-based algorithms, which have demonstrated potential clinical utility in image reconstruction from sparse CT data, may enable the design of such innovative PET systems. In this work, we investigate a PET configuration with a reduced number of detectors and carry out preliminary studies on patient data collected with such a sparse-PET configuration. We consider an optimization problem combining Kullback-Leibler (KL) data fidelity with an image TV constraint, and solve it by using a primal-dual optimization algorithm developed by Chambolle and Pock. Results show that advanced algorithms may enable the design of innovative PET configurations with a reduced number of detectors, while yielding potentially practical PET utility.

  10. Sparse deconvolution method for ultrasound images based on automatic estimation of reference signals.

    PubMed

    Jin, Haoran; Yang, Keji; Wu, Shiwei; Wu, Haiteng; Chen, Jian

    2016-04-01

Sparse deconvolution is widely used in the field of non-destructive testing (NDT) to improve temporal resolution. Generally, the reference signals involved in sparse deconvolution are measured from the reflection echoes of a standard plane block, which cannot accurately describe the acoustic properties at different spatial positions. The performance of sparse deconvolution therefore deteriorates due to deviations in the reference signals. Meanwhile, manual measurement of reference signals is inconvenient for automated ultrasonic NDT. To overcome these disadvantages, a modified sparse deconvolution based on automatic estimation of reference signals is proposed in this paper. By estimating the reference signals, the deviations are alleviated and the accuracy of sparse deconvolution is therefore improved. Based on the automatic estimation of reference signals, regional sparse deconvolution becomes achievable by decomposing the whole B-scan image into small regions of interest (ROIs), which significantly reduces the image dimensionality. Since the computation time of the proposed method has a power dependence on the signal length, this strategy significantly improves computational efficiency. The performance of the proposed method is demonstrated using immersion measurements of scattering targets and a steel block with side-drilled holes. The results verify that the proposed method maintains its vertical-resolution enhancement and noise-suppression capabilities in different scenarios. PMID:26773787

  11. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches.

  12. Multimodal exploitation and sparse reconstruction for guided-wave structural health monitoring

    NASA Astrophysics Data System (ADS)

    Golato, Andrew; Santhanam, Sridhar; Ahmad, Fauzia; Amin, Moeness G.

    2015-05-01

    The presence of multiple modes in guided-wave structural health monitoring has been usually considered a nuisance and a variety of methods have been devised to ensure the presence of a single mode. However, valuable information regarding the nature of defects can be gleaned by including multiple modes in image recovery. In this paper, we propose an effective approach for localizing defects in thin plates, which involves inversion of a multimodal Lamb wave based model by means of sparse reconstruction. We consider not only the direct symmetric and anti-symmetric fundamental propagating Lamb modes, but also the defect-spawned mixed modes arising due to asymmetry of defects. Model-based dictionaries for the direct and spawned modes are created, which take into account the associated dispersion and attenuation through the medium. Reconstruction of the region of interest is performed jointly across the multiple modes by employing a group sparse reconstruction approach. Performance validation of the proposed defect localization scheme is provided using simulated data for an aluminum plate.

  13. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
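As a rough illustration of the StOMP-with-non-negativity idea described above (not the authors' implementation: the Gaussian sensing matrix, threshold rule, stage count, and clipping-based non-negativity below are toy assumptions), each stage hard-thresholds residual correlations, grows the support, and refits by least squares:

```python
import numpy as np

def stomp_nonneg(A, y, n_stages=5, t=2.0):
    """Stagewise orthogonal matching pursuit with a non-negative refit."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x, r = np.zeros(n), y.copy()
    for _ in range(n_stages):
        c = A.T @ r                              # correlations with the residual
        sigma = np.linalg.norm(r) / np.sqrt(m)   # formal noise level of residual
        new = np.abs(c) > t * sigma              # hard threshold (StOMP stage)
        if not np.any(new & ~support):
            break
        support |= new
        # least-squares fit on the current support, clipped to enforce x >= 0
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = np.maximum(x_s, 0.0)
        r = y - A @ x
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [2.0, 1.5, 3.0]          # non-negative sparse "field"
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = stomp_nonneg(A, y)
print("support found:", np.nonzero(x_hat > 0.5)[0])
```

Clipping inside the refit is the simplest stand-in for the paper's iterative non-negativity procedure; it keeps the problem linear, which is the property the abstract emphasizes.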

  14. Total Variation-Stokes Strategy for Sparse-View X-ray CT Image Reconstruction

    PubMed Central

    Liu, Yan; Ma, Jianhua; Lu, Hongbing; Wang, Ke; Zhang, Hao; Moore, William

    2014-01-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image subject to data and/or other constraints, a piecewise-smooth X-ray computed tomography image can be reconstructed from sparse-view projection data. However, due to the piecewise constant assumption of the TV model, the reconstructed images are frequently reported to suffer from blocky or patchy artifacts. To eliminate this drawback, we present a total variation-stokes-projection onto convex sets (TVS-POCS) reconstruction method in this paper. The TVS model is derived by introducing isophote directions for the purpose of recovering possible missing information in the sparse-view data situation. Thus the desired consistencies along both the normal and tangent directions are preserved in the resulting images. Compared to previous TV-based image reconstruction algorithms, the consistencies preserved by the TVS-POCS method are expected to yield noticeable gains in eliminating patchy artifacts and preserving subtle structures. To evaluate the presented TVS-POCS method, both qualitative and quantitative studies were performed using digital phantom, physical phantom and clinical data experiments. The results reveal that the presented method can yield images with several noticeable gains, measured by the universal quality index and the full-width-at-half-maximum merit, as compared to its corresponding TV-based algorithms. In addition, the results further indicate that the TVS-POCS method approaches the gold standard result of filtered back-projection reconstruction in the full-view data case as theoretically expected, while most previous iterative methods may fail in the full-view case because of artificial textures in their results. PMID:24595347
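To see why plain TV favors piecewise-constant solutions (the source of the patchy artifacts several of these records discuss), the 1D sketch below denoises a blocky signal by smoothed-gradient descent on a TV-regularized objective. This is a generic TV illustration, not the TVS-POCS algorithm; the signal, noise level, smoothing epsilon, and step size are assumptions:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=1e-2, step=0.05, n_iter=2000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt(dx_i^2 + eps)."""
    x = y.copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)   # smoothed sign of neighbor differences
        g = np.zeros_like(x)
        g[:-1] -= w                    # g[k] = w[k-1] - w[k]: smoothed TV gradient
        g[1:] += w
        x -= step * ((x - y) + lam * g)
    return x

rng = np.random.default_rng(2)
truth = np.concatenate([np.zeros(40), 2.0 * np.ones(40), np.ones(40)])
y = truth + 0.3 * rng.standard_normal(truth.size)
x_hat = tv_denoise_1d(y)
print("error std before/after:",
      float(np.std(y - truth)), float(np.std(x_hat - truth)))
```

The recovered signal is nearly constant inside each segment while the jumps survive, which is the piecewise-constant bias that TVS-type and TGV-type regularizers set out to soften.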

  15. Data sinogram sparse reconstruction based on steering kernel regression and filtering strategies

    NASA Astrophysics Data System (ADS)

    Marquez, Miguel A.; Mojica, Edson; Arguello, Henry

    2016-05-01

    Computed tomography images are important in many applications, notably in medicine. Recently, compressed sensing-based acquisition strategies have been proposed in order to reduce the x-ray radiation dose. However, these methods lose critical information from the sinogram. In this paper, a method for reconstructing a sinogram from sparse measurements is proposed. The proposed approach takes advantage of the redundancy of similar patches in the sinogram and estimates a target pixel using a weighted average of its neighbors. Simulation results show that the proposed method obtains a gain of up to 2 dB with respect to an l1-minimization algorithm.

  16. Texture enhanced optimization-based image reconstruction (TxE-OBIR) from sparse projection views

    NASA Astrophysics Data System (ADS)

    Xie, Huiqiao; Niu, Tianye; Yang, Yi; Ren, Yi; Tang, Xiangyang

    2016-03-01

    Optimization-based image reconstruction (OBIR) has been proposed and investigated in recent years to reduce radiation dose in X-ray computed tomography (CT) by acquiring sparse projection views. However, OBIR usually generates images with a quite different noise texture compared to the reconstruction method widely used in the clinic (i.e., filtered back-projection, FBP). This may make radiologists/physicians less confident when making clinical decisions. Recognizing the fact that the X-ray photon noise statistics are relatively uniform across the detector cells, which is enabled by beam forming devices (e.g., bowtie filters), we propose and evaluate a novel and practical texture enhancement method in this work. In the texture enhanced optimization-based image reconstruction (TxE-OBIR), we first reconstruct a texture image with the FBP algorithm from a full set of synthesized noise projection views. Then, the TxE-OBIR image is generated by adding the texture image to the OBIR reconstruction. As confirmed qualitatively by visual inspection and quantitatively by noise power spectrum (NPS) evaluation, the proposed method can produce images with textures that are visually identical to those of the gold standard FBP images.

  17. Enhancement of multi-pass 3D circular SAR images using sparse reconstruction techniques

    NASA Astrophysics Data System (ADS)

    Ferrara, Matthew; Jackson, Julie A.; Austin, Christian

    2009-05-01

    This paper demonstrates image enhancement for wide-angle, multi-pass three-dimensional SAR applications. Without sufficient regularization, three-dimensional sparse-aperture imaging from realistic data-collection scenarios results in poor quality, low-resolution images. Sparsity-based image enhancement techniques may be used to resolve high-amplitude features in limited aspects of multi-pass imagery. Fusion of the enhanced images across multiple aspects in an approximate GLRT scheme results in a more informative view of the target. In this paper, we apply two sparse reconstruction techniques to measured data of a calibration top-hat and of a civilian vehicle observed in the AFRL publicly-released 2006 Circular SAR data set. First, we employ prominent-point autofocus in order to compensate for unknown platform motion and phase errors across multiple radar passes. Each sub-aperture of the autofocused phase history is digitally-spotlighted (spatially low-pass filtered) to eliminate contributions to the data due to features outside the region of interest, and then imaged with l1-regularized least squares and CoSaMP. The resulting sparse sub-aperture images are non-coherently combined to obtain a wide-angle, enhanced view of the target.

  18. A Fast Greedy Sparse Method of Current Sources Reconstruction for Ventricular Torsion Detection

    NASA Astrophysics Data System (ADS)

    Bing, Lu; Jiang, Shiqin; Chen, Mengpei; Zhao, Chen; Grönemeyer, D.; Hailer, B.; Van Leeuwen, P.

    2015-09-01

    A fast greedy sparse (FGS) method of cardiac equivalent current source reconstruction is developed for non-invasive detection and quantitative analysis of individual left ventricular torsion. The cardiac magnetic field inverse problem is solved based on a distributed source model. The analysis of real 61-channel magnetocardiogram (MCG) data demonstrates that one or two dominant current sources with larger strength can be identified efficiently by the FGS algorithm. Left ventricular torsion during systole is then examined on the basis of the x, y and z coordinate curves and angle changes of the reconstructed dominant current sources. The advantages of this method are that it is non-invasive and visualizable, with higher sensitivity and resolution. It may enable the clinical detection of cardiac systolic and ejection dysfunction.

  19. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

    Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
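The factored-system-matrix idea can be sketched with toy sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix whose product reproduces the full forward projection while storing far fewer values than a dense matrix. The sizes, the 3-pixel line model, and the tridiagonal blur kernels below are arbitrary assumptions, not a real scanner model:

```python
import numpy as np
import scipy.sparse as sp

n_pix, n_bins = 64, 96
rng = np.random.default_rng(3)

# Geometric projection: each detector bin integrates a few pixels (line model).
rows, cols = [], []
for i in range(n_bins):
    for j in rng.choice(n_pix, size=3, replace=False):
        rows.append(i)
        cols.append(j)
G = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n_bins, n_pix))

# Tridiagonal blurs stand in for detector response and image-space resolution loss.
B_sino = sp.diags([0.2, 0.6, 0.2], [-1, 0, 1], shape=(n_bins, n_bins)).tocsr()
B_img = sp.diags([0.1, 0.8, 0.1], [-1, 0, 1], shape=(n_pix, n_pix)).tocsr()

A = B_sino @ G @ B_img                  # factored system matrix, still sparse
x = rng.random(n_pix)                   # toy image
y = B_sino @ (G @ (B_img @ x))          # forward projection applied factor by factor
dense_count = n_bins * n_pix
sparse_count = B_sino.nnz + G.nnz + B_img.nnz
print("stored nonzeros, factored vs dense:", sparse_count, dense_count)
```

Applying the three factors in sequence gives the same result as the assembled matrix, which is why the factored form can replace the full system matrix in the forward and back projectors.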

  20. Sparse-view X-ray CT Reconstruction via Total Generalized Variation Regularization

    PubMed Central

    Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2014-01-01

    Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise constant assumption of TV minimization often leads to the appearance of noticeable patchy artifacts in reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme to retain the image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as “PWLS-TGV” for simplicity. Specifically, TGV regularization utilizes higher order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, which fully contribute to improving the image quality with sparse-view projection measurement. Subsequently, an alternating optimization algorithm was adopted to minimize the associative objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present PWLS-TGV method can achieve images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties. PMID:24842150

  1. Sparse-view x-ray CT reconstruction via total generalized variation regularization

    NASA Astrophysics Data System (ADS)

    Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2014-06-01

    Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise constant assumption of TV minimization often leads to the appearance of noticeable patchy artifacts in reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme to retain the image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as ‘PWLS-TGV’ for simplicity. Specifically, TGV regularization utilizes higher order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, which fully contribute to improving the image quality with sparse-view projection measurement. Subsequently, an alternating optimization algorithm was adopted to minimize the associative objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present PWLS-TGV method can achieve images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties.

  2. Sparse-view x-ray CT reconstruction via total generalized variation regularization.

    PubMed

    Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2014-06-21

    Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise constant assumption of TV minimization often leads to the appearance of noticeable patchy artifacts in reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme to retain the image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as 'PWLS-TGV' for simplicity. Specifically, TGV regularization utilizes higher order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, which fully contribute to improving the image quality with sparse-view projection measurement. Subsequently, an alternating optimization algorithm was adopted to minimize the associative objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted by using digital and physical phantoms. Experimental results show that the present PWLS-TGV method can achieve images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties. PMID:24842150

  3. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection.

    PubMed

    Su, Hai; Xing, Fuyong; Yang, Lin

    2016-06-01

    Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) a sparse reconstruction based approach to split touching cells; 2) an adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole-slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy with an F1 score of 0.96. PMID:26812706

  4. Recovery of sparse translation-invariant signals with continuous basis pursuit

    PubMed Central

    Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero

    2013-01-01

    We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms), and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem, in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to only represent translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case, in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation, and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality. PMID:24352562
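The first-order Taylor variant of CBP rests on the approximation f(t - tau) ~ f(t) - tau*f'(t), so a continuously shifted feature can be represented by a feature/derivative pair whose coefficient ratio encodes the sub-grid shift. The minimal single-event sketch below (a Gaussian waveform, noise-free data, and ordinary least squares in place of the constrained L1 solve are all assumptions for illustration) recovers a shift that no discrete dictionary atom sits on:

```python
import numpy as np

t = np.linspace(-10, 10, 2001)

def f(shift):
    """Unit-amplitude Gaussian feature centered at `shift`."""
    return np.exp(-0.5 * (t - shift) ** 2)

tau = 0.3                        # sub-grid (continuous) shift to recover
y = f(tau)                       # observed signal: a continuously shifted feature

# Dictionary pair: the feature on the grid and its derivative.
F = np.column_stack([f(0.0), np.gradient(f(0.0), t)])
c, *_ = np.linalg.lstsq(F, y, rcond=None)
tau_hat = -c[1] / c[0]           # from f(t - tau) ~ f(t) - tau * f'(t)
print("true and estimated shift:", tau, float(tau_hat))
```

The full CBP formulation additionally imposes sparsity on the primary coefficients and constrains the auxiliary ones so each pair really represents a translate; this sketch only shows why the interpolated pair beats the nearest discrete atom.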

  5. A classification-and-reconstruction approach for a single image super-resolution by a sparse representation

    NASA Astrophysics Data System (ADS)

    Fan, YingYing; Tanaka, Masayuki; Okutomi, Masatoshi

    2014-03-01

    A sparse representation is known as a very powerful tool for solving image reconstruction problems such as denoising and single-image super-resolution. In a sparse representation, it is assumed that an image patch can be approximated by a linear combination of a few bases selected from a given dictionary. A single overcomplete dictionary is usually learned from training patches. Most dictionary learning methods are concerned with building a general overcomplete dictionary on the assumption that its bases can represent everything. However, with a more appropriate dictionary, the sparse representation of a patch can obtain better results. In this paper, we propose a classification-and-reconstruction approach with multiple dictionaries. Before learning the reconstruction dictionaries, representative bases are used to classify all training patches from the database, and multiple reconstruction dictionaries are then learned from the classified patches respectively. In the reconstruction phase, each patch of the input image is classified and the adapted dictionary is selected for use. We demonstrate that the proposed classification-and-reconstruction approach outperforms the existing sparse representation with a single dictionary.

  6. Review of Sparse Representation-Based Classification Methods on EEG Signal Processing for Epilepsy Detection, Brain-Computer Interface and Cognitive Impairment

    PubMed Central

    Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao

    2016-01-01

    At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze EEG signals in epilepsy, cognitive impairment and brain-computer interface (BCI) applications, and have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods still have deficiencies in real-time performance and generalization ability, and depend on labeled samples in the analysis of EEG signals. This mini review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals. PMID:27458376
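The reconstruction-criterion classification that SRC methods share can be sketched in a few lines: represent a test signal with each class's dictionary and assign the class with the smallest reconstruction residual. Here ordinary least squares stands in for the sparse coding step, and the two synthetic sinusoidal "classes" are invented for illustration (real EEG dictionaries would be learned or fixed transform atoms):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 64)
# Class dictionaries: atoms clustered around two different dominant frequencies.
D = {0: np.column_stack([np.sin(2 * np.pi * (5 + d) * t) for d in (0, 0.5, 1)]),
     1: np.column_stack([np.sin(2 * np.pi * (11 + d) * t) for d in (0, 0.5, 1)])}

def src_classify(y):
    """Assign y to the class whose dictionary reconstructs it best."""
    residuals = {}
    for label, atoms in D.items():
        coef, *_ = np.linalg.lstsq(atoms, y, rcond=None)  # class-wise fit
        residuals[label] = np.linalg.norm(y - atoms @ coef)
    return min(residuals, key=residuals.get)

y_test = np.sin(2 * np.pi * 5.2 * t) + 0.1 * rng.standard_normal(64)
print("predicted class:", src_classify(y_test))
```

Replacing the least-squares fit with an L1-regularized solve gives the sparse coding used in actual SRC pipelines; the decision rule is unchanged.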

  7. Sparse Reconstruction Challenge for diffusion MRI: Validation on a physical phantom to determine which acquisition scheme and analysis method to use?

    PubMed

    Ning, Lipeng; Laun, Frederik; Gur, Yaniv; DiBella, Edward V R; Deslauriers-Gauthier, Samuel; Megherbi, Thinhinane; Ghosh, Aurobrata; Zucchelli, Mauro; Menegaz, Gloria; Fick, Rutger; St-Jean, Samuel; Paquette, Michael; Aranda, Ramon; Descoteaux, Maxime; Deriche, Rachid; O'Donnell, Lauren; Rathi, Yogesh

    2015-12-01

    Diffusion magnetic resonance imaging (dMRI) is the modality of choice for investigating in-vivo white matter connectivity and neural tissue architecture of the brain. The diffusion-weighted signal in dMRI reflects the diffusivity of water molecules in brain tissue and can be utilized to produce image-based biomarkers for clinical research. Due to the constraints on scanning time, a limited number of measurements can be acquired within a clinically feasible scan time. In order to reconstruct the dMRI signal from a discrete set of measurements, a large number of algorithms have been proposed in recent years in conjunction with varying sampling schemes, i.e., with varying b-values and gradient directions. Thus, it is imperative to compare the performance of these reconstruction methods on a single data set to provide appropriate guidelines to neuroscientists on making an informed decision while designing their acquisition protocols. For this purpose, the SPArse Reconstruction Challenge (SPARC) was held along with the workshop on Computational Diffusion MRI (at MICCAI 2014) to validate the performance of multiple reconstruction methods using data acquired from a physical phantom. A total of 16 reconstruction algorithms (9 teams) participated in this community challenge. The goal was to reconstruct single b-value and/or multiple b-value data from a sparse set of measurements. In particular, the aim was to determine an appropriate acquisition protocol (in terms of the number of measurements and b-values) and the analysis method to use for a neuroimaging study. The challenge did not focus on the accuracy of these methods in estimating model-specific measures such as fractional anisotropy (FA) or mean diffusivity, but on their accuracy in fitting the data. This paper presents several quantitative results pertaining to each reconstruction algorithm. The conclusions in this paper provide a valuable guideline for choosing a suitable algorithm and the corresponding acquisition scheme.

  8. Sparse Bayesian framework applied to 3D super-resolution reconstruction in fetal brain MRI

    NASA Astrophysics Data System (ADS)

    Becerra, Laura C.; Velasco Toledo, Nelson; Romero Castro, Eduardo

    2015-01-01

    Fetal Magnetic Resonance (FMR) is an imaging technique of growing importance, as it allows assessing brain development and thus making an early diagnosis of congenital abnormalities. Spatial resolution is limited by the short acquisition time and unpredictable fetal movements; in consequence, the resulting images are characterized by non-parallel projection planes composed of anisotropic voxels. The sparse Bayesian representation is a flexible strategy able to model complex relationships. Super-resolution is approached as a regression problem, whose main advantage is the capability to learn data relations from observations. Quantitative performance evaluation was carried out using synthetic images; the proposed method demonstrates better reconstruction quality than a standard interpolation approach. The presented method is a promising approach to improving the quality of information on the 3-D fetal brain structure, and hence to assessing brain development and making an early diagnosis of congenital abnormalities.

  9. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2015-03-01

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  10. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    SciTech Connect

    Gu, Renliang E-mail: ald@iastate.edu; Dogandžić, Aleksandar E-mail: ald@iastate.edu

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov’s proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  11. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  12. A sparse digital signal model for ultrasonic nondestructive evaluation of layered materials.

    PubMed

    Bochud, N; Gomez, A M; Rus, G; Peinado, A M

    2015-09-01

    Signal modeling has been proven to be a useful tool to characterize damaged materials under ultrasonic nondestructive evaluation (NDE). In this paper, we introduce a novel digital signal model for ultrasonic NDE of multilayered materials. This model borrows concepts from lattice filter theory and bridges them to the physics involved in the wave-material interactions. In particular, the proposed theoretical framework shows that any multilayered material can be characterized by a transfer function with sparse coefficients. The filter coefficients are linked to the physical properties of the material and are analytically obtained from them, whereas the sparse distribution arises naturally and does not rely on heuristic approaches. The developed model is first validated with experimental measurements obtained from multilayered media consisting of homogeneous solids. Then, the sparse structure of the obtained digital filter is exploited through a model-based inverse problem for damage identification in a carbon fiber-reinforced polymer (CFRP) plate. PMID:26092090

  13. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS)

    NASA Astrophysics Data System (ADS)

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI, it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -  f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames are included in modeling the state transition, while a coil-dependent noise statistic is employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
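
    The Kalman-smoother calibration above treats the calibration kernels as slowly drifting states. As a hedged, minimal illustration of that machinery (a textbook scalar filter/smoother, not the k  -  t SPARKS implementation), here is a random-walk Kalman filter followed by a Rauch-Tung-Striebel backward smoothing pass:

```python
import numpy as np

def kalman_smooth(y, q, r):
    """Scalar Kalman filter + Rauch-Tung-Striebel smoother.

    State model: x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (slow drift)
    Observation: y_t = x_t + v_t,      v_t ~ N(0, r)
    """
    n = len(y)
    xf = np.zeros(n); pf = np.zeros(n)   # filtered mean / variance
    xp = np.zeros(n); pp = np.zeros(n)   # predicted mean / variance
    x, p = y[0], r
    for t in range(n):
        xp[t], pp[t] = x, p + q          # predict
        k = pp[t] / (pp[t] + r)          # Kalman gain
        x = xp[t] + k * (y[t] - xp[t])   # update
        p = (1 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs = xf.copy()
    for t in range(n - 2, -1, -1):       # backward smoothing pass
        g = pf[t] / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs

rng = np.random.default_rng(1)
truth = np.cumsum(0.05 * rng.standard_normal(200))   # drifting coefficient
y = truth + 0.5 * rng.standard_normal(200)           # noisy measurements
smoothed = kalman_smooth(y, q=0.05**2, r=0.5**2)
```

    The smoother uses both past and future measurements, which is what makes the calibration robust to motion-induced drift over the entire time series.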

  14. NUFFT-Based Iterative Image Reconstruction via Alternating Direction Total Variation Minimization for Sparse-View CT

    PubMed Central

    Yan, Bin; Jin, Zhao; Zhang, Hanming; Li, Lei; Cai, Ailong

    2015-01-01

    Sparse-view imaging is a promising scanning method which can reduce the radiation dose in X-ray computed tomography (CT). Reconstruction algorithms for sparse-view imaging systems are therefore of significant importance. Spatial-domain iterative algorithms for CT image reconstruction suffer from low operational efficiency and high computational cost. A novel Fourier-based iterative reconstruction technique that utilizes the nonuniform fast Fourier transform is presented in this study, along with advanced total variation (TV) regularization, for sparse-view CT. Combined with the alternating direction method, the proposed approach shows excellent efficiency and a rapid convergence property. Numerical simulations and real data experiments were performed on a parallel-beam CT. Experimental results validate that the proposed method has higher computational efficiency and better reconstruction quality than conventional algorithms, such as the simultaneous algebraic reconstruction technique using TV and the alternating direction total variation minimization approach, within the same time duration. The proposed method appears to have extensive applications in X-ray CT imaging. PMID:26120355

  15. Efficient Sparse Coding in Early Sensory Processing: Lessons from Signal Recovery

    PubMed Central

    Lörincz, András; Palotai, Zsolt; Szirtes, Gábor

    2012-01-01

    Sensory representations are not only sparse, but often overcomplete: coding units significantly outnumber the input units. For models of neural coding, this overcompleteness poses a computational challenge for shaping the signal processing channels as well as for using the large and sparse representations in an efficient way. We argue that higher-level overcompleteness becomes computationally tractable by imposing sparsity on synaptic activity, and we also show that such structural sparsity can be facilitated by statistics-based decomposition of the stimuli into typical and atypical parts prior to sparse coding. Typical parts represent large-scale correlations, thus they can be significantly compressed. Atypical parts, on the other hand, represent local features and are the subjects of actual sparse coding. When applied to natural images, our decomposition-based sparse coding model can efficiently form overcomplete codes, and both center-surround and oriented filters are obtained, similar to those observed in the retina and the primary visual cortex, respectively. We therefore hypothesize that the proposed computational architecture can be seen as a coherent functional model of the first stages of sensory coding in early vision. PMID:22396629
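
    Imposing sparsity on activity in an overcomplete code, as argued above, is commonly formulated as l1-penalized coding. A minimal sketch using iterative soft thresholding (ISTA) over a random overcomplete dictionary; the data and names are illustrative assumptions, not the paper's model:

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=500):
    """Minimize 0.5*||y - D a||^2 + lam*||a||_1 by iterative
    soft thresholding with step 1/L, L = spectral norm of D squared."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft(a + D.T @ (y - D @ a) / L, lam / L)
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((32, 128))   # overcomplete: 128 units, 32 inputs
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(128)
a_true[[5, 50]] = [2.0, -1.5]
y = D @ a_true                       # stimulus built from two units
code = ista(D, y, lam=0.05)
```

    Despite the 4x overcompleteness, the l1 penalty keeps only a handful of units active while still reconstructing the input accurately.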

  16. Blind estimation of channel parameters and source components for EEG signals: a sparse factorization approach.

    PubMed

    Li, Yuanqing; Cichocki, Andrzej; Amari, Shun-Ichi

    2006-03-01

    In this paper, we use a two-stage sparse factorization approach for blindly estimating the channel parameters and then estimating source components for electroencephalogram (EEG) signals. EEG signals are assumed to be linear mixtures of source components, artifacts, etc. Therefore, a raw EEG data matrix can be factored into the product of two matrices, one of which represents the mixing matrix and the other the source component matrix. Furthermore, the components are sparse in the time-frequency domain, i.e., the factorization is a sparse factorization in the time-frequency domain. It is a challenging task to estimate the mixing matrix. Our extensive analysis and computational results, which were based on many sets of EEG data, not only provide firm evidence supporting the above assumption, but also prompt us to propose a new algorithm for estimating the mixing matrix. After the mixing matrix is estimated, the source components are estimated in the time-frequency domain using a linear programming method. In an example of the potential applications of our approach, we analyzed the EEG data that was obtained from a modified Sternberg memory experiment. Two almost uncorrelated components obtained by applying the sparse factorization method were selected for phase synchronization analysis. Several interesting findings were obtained, especially that memory-related synchronization and desynchronization appear in the alpha band, and that the strength of alpha band synchronization is related to memory performance. PMID:16566469
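
    The mixing-matrix estimation stage exploits the fact that, when sources are sparse, observed sample vectors concentrate along the directions of the mixing-matrix columns. The toy below is our own construction, not the authors' algorithm: it recovers a 2 x 2 mixing matrix by clustering normalized observations, under the stronger-than-necessary assumption that at most one source is active at each time instant.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
s = np.zeros((2, n))
active = rng.integers(0, 2, n)            # at most one active source per instant
s[active, np.arange(n)] = rng.standard_normal(n)
A = np.array([[0.8, 0.3], [0.6, -0.95]])
A /= np.linalg.norm(A, axis=0)            # unknown mixing matrix (unit columns)
x = A @ s                                 # observed EEG-like mixtures

# Each observed column lies (up to sign) along one mixing column, so
# normalizing and folding signs clusters points at the columns of A.
keep = np.linalg.norm(x, axis=0) > 0.1    # drop near-silent instants
u = x[:, keep] / np.linalg.norm(x[:, keep], axis=0)
u *= np.sign(u[0])                        # fold antipodal points together

# 2-means on the unit circle; initialize with two maximally separated points.
c = u[:, [0, int(np.argmin(u[:, 0] @ u))]].copy()
for _ in range(20):
    labels = np.argmax(c.T @ u, axis=0)   # assign by cosine similarity
    for k in range(2):
        m = u[:, labels == k].mean(axis=1)
        c[:, k] = m / np.linalg.norm(m)
A_hat = c                                 # estimated mixing directions
```

    With noisy, only-approximately-sparse data the cluster centers spread out, which is why the paper works in the time-frequency domain, where sparsity is much stronger.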

  17. Signal processing using sparse derivatives with applications to chromatograms and ECG

    NASA Astrophysics Data System (ADS)

    Ning, Xiaoran

    In this thesis, we investigate the sparsity existing in the derivative domain. In particular, we focus on signals that possess up to Mth-order (M > 0) sparse derivatives. Efforts are put into formulating proper penalty functions and optimization problems to capture properties related to sparse derivatives, and into searching for fast, computationally efficient solvers. The effectiveness of these algorithms is demonstrated in two real-world applications. In the first application, we provide an algorithm which jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks are modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized together with symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data. Promising results are obtained. In the second application, a novel electrocardiography (ECG) enhancement algorithm is designed, also based on sparse derivatives. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact and non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized. By solving the proposed convex l1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second- and third-order derivatives (differences) are sparse.
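
    A signal with a sparse first derivative is piecewise constant, and the corresponding l1 penalty is total variation. The sketch below is a generic TV denoiser via projected gradient on the dual problem, shown on a synthetic piecewise-constant signal; it illustrates the sparse-derivative idea, not the BEADS or ECG algorithms themselves.

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=500):
    """Solve min_x 0.5*||y - x||^2 + lam*||Dx||_1, with D the first
    difference, by projected gradient on the dual variable p (|p| <= 1).
    The primal solution is recovered as x = y - lam * D^T p."""
    p = np.zeros(len(y) - 1)
    for _ in range(n_iter):
        x = y + lam * np.diff(p, prepend=0, append=0)   # x = y - lam*D^T p
        p = np.clip(p + np.diff(x) / (4.0 * lam), -1.0, 1.0)
    return y + lam * np.diff(p, prepend=0, append=0)

# Piecewise-constant signal => its first derivative is sparse.
rng = np.random.default_rng(4)
truth = np.concatenate([np.zeros(100), 2 * np.ones(100), 0.5 * np.ones(100)])
y = truth + 0.3 * rng.standard_normal(300)
x_hat = tv_denoise_1d(y, lam=2.0)
```

    The step size 1/(4*lam) comes from the bound ||D||^2 <= 4 for the first-difference operator.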

  18. Image reconstruction for sparse-view CT and interior CT—introduction to compressed sensing and differentiated backprojection

    PubMed Central

    Suzuki, Taizo; Rashed, Essam A.

    2013-01-01

    New designs of future computed tomography (CT) scanners, called sparse-view CT and interior CT, have been considered in the CT community. Since these CTs measure only incomplete projection data, a key to putting these scanners to practical use is the development of advanced image reconstruction methods. After 2000, there was significant progress in this research area, briefly summarized as follows. In sparse-view CT, various image reconstruction methods using the compressed sensing (CS) framework have been developed towards reconstructing clinically feasible images from a reduced number of projections. In interior CT, several novel theoretical results on solution uniqueness and solution stability have been obtained thanks to the discovery of a new class of reconstruction methods called differentiated backprojection (DBP). In this paper, we mainly review this progress, including the mathematical principles of CS image reconstruction and DBP image reconstruction, for readers unfamiliar with this area. We also show some experimental results from our past research to demonstrate that this progress is not only theoretically elegant but also works in practical imaging situations. PMID:23833728

  19. Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.

    PubMed

    Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H

    2015-11-01

    Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness. PMID:26169322

  20. Adaptive Sparse Signal Processing for Discrimination of Satellite-based Radiofrequency (RF) Recordings of Lightning Events

    NASA Astrophysics Data System (ADS)

    Moody, D. I.; Smith, D. A.; Heavner, M.; Hamlin, T.

    2014-12-01

    Ongoing research at Los Alamos National Laboratory studies the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite, launched in 1997, provided a rich RF lightning database. Application of modern pattern recognition techniques to this dataset may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We extend sparse signal processing techniques to radiofrequency (RF) transient signals, and specifically focus on improved signature extraction using sparse representations in data-adaptive dictionaries. We present various processing options and classification results for on-board discharges, and discuss robustness and potential for capability development.

  1. A CT reconstruction approach from sparse projection with adaptive-weighted diagonal total-variation in biomedical application.

    PubMed

    Deng, Luzhen; Mi, Deling; He, Peng; Feng, Peng; Yu, Pengwei; Chen, Mianyi; Li, Zhichao; Wang, Jian; Wei, Biao

    2015-01-01

    Total Variation (TV) lacks directivity, as it uses only x- and y-direction gradient transforms as its sparse representation during the iteration process. This paper therefore introduces Adaptive-weighted Diagonal Total Variation (AwDTV), which uses diagonal-direction gradients to constrain the reconstructed image and adds associated weights, expressed as an exponential function and adaptively adjusted by the local image-intensity diagonal gradient, for the purpose of preserving edge details; the steepest descent method is then used to solve the optimization problem. Finally, two sets of numerical simulations were performed, and the results show that the proposed algorithm can reconstruct high-quality CT images from few-view projections, with lower Root Mean Square Error (RMSE) and higher Universal Quality Index (UQI) than the Algebraic Reconstruction Technique (ART) and a TV-based reconstruction method. PMID:26405935
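
    The AwDTV objective can be sketched in a few lines. The weight form exp(-|d|/delta) and the parameter delta below are illustrative assumptions standing in for the paper's exact exponential weighting:

```python
import numpy as np

def awdtv(u, delta=0.1):
    """Adaptive-weighted diagonal total variation of image u: the two
    diagonal differences, each weighted by exp(-|d|/delta) so that strong
    diagonal edges are penalized less (edge preservation)."""
    d1 = u[1:, 1:] - u[:-1, :-1]        # main-diagonal differences
    d2 = u[1:, :-1] - u[:-1, 1:]        # anti-diagonal differences
    w1 = np.exp(-np.abs(d1) / delta)    # small weight across strong edges
    w2 = np.exp(-np.abs(d2) / delta)
    return np.sum(w1 * np.abs(d1) + w2 * np.abs(d2))

u = np.zeros((32, 32))
u[:, 16:] = 1.0                         # image with a sharp vertical edge
val = awdtv(u)
```

    Because the weights collapse at large gradients, the sharp edge contributes almost nothing to the penalty, which is what keeps edges from being smoothed away during steepest descent.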

  2. SU-E-I-45: Reconstruction of CT Images From Sparsely-Sampled Data Using the Logarithmic Barrier Method

    SciTech Connect

    Xu, H

    2014-06-01

    Purpose: To develop and investigate whether the logarithmic barrier (LB) method can result in high-quality reconstructed CT images using sparsely-sampled noisy projection data. Methods: The objective function is typically formulated as the sum of the total variation (TV) and a data fidelity (DF) term with a parameter λ that governs the relative weight between them. Finding the optimized value of λ is a critical step for this approach to give satisfactory results. The proposed LB method avoids using λ by constructing the objective function as the sum of the TV and a log function whose argument is the DF term. Newton's method was used to solve the optimization problem. The algorithm was coded in MatLab2013b. Both the Shepp-Logan phantom and a patient lung CT image were used for demonstration of the algorithm. Measured data were simulated by calculating the projection data using the Radon transform. A Poisson noise model was used to account for the simulated detector noise. The iteration stopped when the difference between the current TV and the previous one was less than 1%. Results: The Shepp-Logan phantom reconstruction study shows that filtered back-projection (FBP) gives strong streak artifacts for 30 and 40 projections. Although visually the streak artifacts are less pronounced for 64 and 90 projections in FBP, the 1D pixel profiles indicate that FBP gives noisier reconstructed pixel values than LB does. A lung image reconstruction is presented. It shows that use of 64 projections gives satisfactory reconstructed image quality with regard to noise suppression and sharp edge preservation. Conclusion: This study demonstrates that the logarithmic barrier method can be used to reconstruct CT images from sparsely-sampled data. The number of projections around 64 gives a balance between over-smoothing of sharp demarcations and noise suppression. Future studies may extend to CBCT reconstruction and improvement of computation speed.
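
    The log-barrier construction can be illustrated on a scalar toy problem (minimize x^2 subject to x >= 1) solved by Newton's method; this is a didactic sketch of the LB idea only, not the abstract's CT objective:

```python
import math

def newton_barrier(mu, x0=2.0, n_iter=50):
    """Minimize x^2 subject to x >= 1 via the barrier function
    f(x) = x^2 - mu*log(x - 1), using damped Newton steps."""
    x = x0
    for _ in range(n_iter):
        g = 2 * x - mu / (x - 1)           # f'(x)
        h = 2 + mu / (x - 1) ** 2          # f''(x) > 0, so Newton descends
        step = g / h
        # Damp the step so the iterate stays strictly feasible (x > 1).
        while x - step <= 1:
            step *= 0.5
        x -= step
    return x

# Shrinking mu drives the barrier minimizer toward the constrained optimum x = 1.
xs = [newton_barrier(mu) for mu in (1.0, 1e-2, 1e-4)]
```

    In the LB reconstruction, the log term plays the same role: it enforces the data-fidelity constraint without requiring a hand-tuned weight λ.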

  3. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose

  4. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.043 75 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. 
For the four ultra-low dose levels

  5. Low-rank + sparse (L+S) reconstruction for accelerated dynamic MRI with separation of background and dynamic components

    NASA Astrophysics Data System (ADS)

    Otazo, Ricardo; Sodickson, Daniel K.; Candès, Emmanuel J.

    2013-09-01

    L+S matrix decomposition finds the low-rank (L) and sparse (S) components of a matrix M by solving the following convex optimization problem: min ‖L‖* + λ‖S‖1, subject to M = L + S, where ‖L‖* is the nuclear norm, or sum of singular values, of L and ‖S‖1 is the l1-norm, or sum of absolute values, of S. This work presents the application of the L+S decomposition to reconstruct incoherently undersampled dynamic MRI data as a superposition of a slowly or coherently changing background and sparse innovations. Feasibility of the method was tested in several accelerated dynamic MRI experiments including cardiac perfusion, time-resolved peripheral angiography and liver perfusion using Cartesian and radial sampling. The high acceleration and background separation enabled by L+S reconstruction promises to enhance spatial and temporal resolution and to enable background suppression without the need of subtraction or modeling.
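
    A hedged, minimal sketch of the L+S idea, alternating singular-value thresholding (the prox of the nuclear norm) with entrywise soft thresholding (the prox of the l1 norm). The parameter values and synthetic data are our assumptions; a real dynamic MRI reconstruction additionally interleaves a data-consistency step with the undersampled acquisition operator.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(X, t):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft(s, t)) @ Vt

def l_plus_s(M, lam_l, lam_s, n_iter=100):
    """Alternate proximal steps so that L absorbs the low-rank part
    of M and S absorbs the sparse residual."""
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, lam_l)
        S = soft(M - L, lam_s)
    return L, S

rng = np.random.default_rng(5)
# Synthetic "dynamic" data: rank-1 background plus sparse innovations.
background = np.outer(rng.standard_normal(40), rng.standard_normal(30))
innovations = np.zeros((40, 30))
idx = rng.choice(1200, size=20, replace=False)
innovations.flat[idx] = 5.0
M = background + innovations
L, S = l_plus_s(M, lam_l=1.0, lam_s=0.5)
```

    In the MRI setting the columns of M are the time frames, so the slowly changing background lands in L and contrast dynamics land in S.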

  6. Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

    2012-12-01

    We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in the regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Staggered Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. This technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. Thirty-five sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with about 5% error. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.

  7. Assimilating irregularly spaced sparsely observed turbulent signals with hierarchical Bayesian reduced stochastic filters

    SciTech Connect

    Brown, Kristen A.; Harlim, John

    2013-02-15

    In this paper, we consider a practical filtering approach for assimilating irregularly spaced, sparsely observed turbulent signals through a hierarchical Bayesian reduced stochastic filtering framework. The proposed hierarchical Bayesian approach consists of two steps, blending a data-driven interpolation scheme and the Mean Stochastic Model (MSM) filter. We examine the potential of using the deterministic piecewise linear interpolation scheme and the ordinary kriging scheme in interpolating irregularly spaced raw data to regularly spaced processed data and the importance of dynamical constraint (through MSM) in filtering the processed data on a numerically stiff state estimation problem. In particular, we test this approach on a two-layer quasi-geostrophic model in a two-dimensional domain with a small radius of deformation to mimic ocean turbulence. First, our numerical results suggest that the dynamical constraint becomes important when the observation noise variance is large. Second, we find that the filtered estimates with ordinary kriging are superior to those with linear interpolation when observation networks are not too sparse; such robust results are found from numerical simulations with many randomly simulated irregularly spaced observation networks, various observation time intervals, and observation error variances. Third, when the observation network is very sparse, we find that both the kriging and linear interpolations are comparable.
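
    The first step of the two-step approach, interpolating irregularly spaced raw observations onto a regular grid, can be sketched with the piecewise-linear variant (the kriging variant would replace np.interp with a covariance-based predictor); the synthetic signal below is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
# Irregularly spaced observation sites of a smooth signal.
sites = np.sort(rng.uniform(0, 2 * np.pi, 40))
obs = np.sin(sites) + 0.05 * rng.standard_normal(40)

# Interpolate the irregular raw data onto a regular grid; the regularly
# spaced processed data would then be passed to the MSM filter.
grid = np.linspace(0, 2 * np.pi, 128)
regular = np.interp(grid, sites, obs)
```

    Linear interpolation ignores the spatial covariance of the field, which is exactly the information kriging adds and why it wins for moderately dense networks.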

  8. Assimilating irregularly spaced sparsely observed turbulent signals with hierarchical Bayesian reduced stochastic filters

    NASA Astrophysics Data System (ADS)

    Brown, Kristen A.; Harlim, John

    2013-02-01

    In this paper, we consider a practical filtering approach for assimilating irregularly spaced, sparsely observed turbulent signals through a hierarchical Bayesian reduced stochastic filtering framework. The proposed hierarchical Bayesian approach consists of two steps, blending a data-driven interpolation scheme and the Mean Stochastic Model (MSM) filter. We examine the potential of using the deterministic piecewise linear interpolation scheme and the ordinary kriging scheme in interpolating irregularly spaced raw data to regularly spaced processed data and the importance of dynamical constraint (through MSM) in filtering the processed data on a numerically stiff state estimation problem. In particular, we test this approach on a two-layer quasi-geostrophic model in a two-dimensional domain with a small radius of deformation to mimic ocean turbulence. First, our numerical results suggest that the dynamical constraint becomes important when the observation noise variance is large. Second, we find that the filtered estimates with ordinary kriging are superior to those with linear interpolation when observation networks are not too sparse; such robust results are found from numerical simulations with many randomly simulated irregularly spaced observation networks, various observation time intervals, and observation error variances. Third, when the observation network is very sparse, we find that both the kriging and linear interpolations are comparable.

  9. Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.; Hamlin, Timothy D.; Light, Tess E.; Suszcynsky, David M.

    2013-05-01

    For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database, comprising five years of data recorded from its two RF payloads. While some classification work has been done previously on the FORTE RF database, application of modern pattern recognition techniques may advance lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification scenarios and future development.
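
    Feature extraction via matching pursuit over a dictionary can be sketched as follows. The dictionary here is random for illustration, whereas the paper learns it from FORTE data; names and sizes are our assumptions.

```python
import numpy as np

def matching_pursuit(D, y, n_atoms):
    """Greedy matching pursuit: repeatedly subtract the projection of the
    residual onto its best-matching atom (columns of D are unit-norm)."""
    residual = y.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        j = int(np.argmax(np.abs(corr)))
        coeffs[j] += corr[j]
        residual -= corr[j] * D[:, j]
    return coeffs, residual

rng = np.random.default_rng(7)
D = rng.standard_normal((100, 400))
D /= np.linalg.norm(D, axis=0)
# A "transient" composed of two dictionary atoms.
y = 3.0 * D[:, 10] - 2.0 * D[:, 250]
coeffs, residual = matching_pursuit(D, y, n_atoms=10)
```

    The resulting coefficient vector is sparse and concentrated on the atoms actually present in the transient, which is what makes it a compact classification feature.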

  10. Fast and efficient fully 3D PET image reconstruction using sparse system matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-10-01

    Statistically based iterative image reconstruction has been widely used in positron emission tomography (PET) imaging. The quality of reconstructed images depends on the accuracy of the system matrix that defines the mapping from the image space to the data space. However, an accurate system matrix is often associated with high computation cost and huge storage requirement. In this paper, we present a method to address this problem using sparse matrix factorization and graphics processor unit (GPU) acceleration. We factor the accurate system matrix into three highly sparse matrices: a sinogram blurring matrix, a geometric projection matrix and an image blurring matrix. The geometrical projection matrix is precomputed based on a simple line integral model, while the sinogram and image blurring matrices are estimated from point-source measurements. The resulting factored system matrix has far less nonzero elements than the original system matrix, which substantially reduces the storage and computation cost. The smaller matrix size also allows an efficient implementation of the forward and backward projectors on a GPU, which often has a limited memory space. Our experimental studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction, while achieving better performance than existing factorization methods.
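
    The storage argument can be sketched with SciPy sparse matrices (assuming scipy is available); the sizes, densities, and names below are arbitrary stand-ins for the sinogram-blur, geometric-projection, and image-blur factors, not the paper's actual system model:

```python
import numpy as np
from scipy import sparse

n_pix, n_det = 1000, 800
rng = np.random.default_rng(8)

# Factored system model: sinogram blur (B), geometric projection (G),
# image blur (R); each factor is highly sparse.
B = sparse.random(n_det, n_det, density=0.005, random_state=8, format="csr")
G = sparse.random(n_det, n_pix, density=0.01, random_state=8, format="csr")
R = sparse.random(n_pix, n_pix, density=0.005, random_state=8, format="csr")

x = rng.standard_normal(n_pix)
y_forward = B @ (G @ (R @ x))   # forward projection, applied factor by factor
y_backward = R.T @ (G.T @ (B.T @ rng.standard_normal(n_det)))  # backprojection

# Storing the three factors is far cheaper than the explicit product,
# which fills in as the sparsity patterns combine.
nnz_factored = B.nnz + G.nnz + R.nnz
A = (B @ G @ R).tocsr()
```

    Applying the factors sequentially also maps naturally onto a GPU with limited memory, since no factor needs the full product's storage.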

  11. Sparse-view computed tomography image reconstruction via a combination of L(1) and SL(0) regularization.

    PubMed

    Qi, Hongliang; Chen, Zijia; Guo, Jingyu; Zhou, Linghong

    2015-01-01

    Low-dose computed tomography reconstruction is an important issue in the medical imaging domain. Sparse-view scanning has been widely studied as a potential strategy. The compressed sensing (CS) method has shown great potential to reconstruct high-quality CT images from sparse-view projection data. Nonetheless, low-contrast structures tend to be blurred by the total variation (TV, L1-norm of the gradient image) regularization. Moreover, TV will produce blocky effects on smooth and edge regions. To overcome this limitation, this study proposes an iterative image reconstruction algorithm combining L1 regularization and smoothed L0 (SL0) regularization. SL0 is a smooth approximation of the L0 norm and mitigates the L0 norm's sensitivity to noise. To evaluate the proposed method, both qualitative and quantitative studies were conducted on a digital Shepp-Logan phantom and a real head phantom. Experimental comparative results have indicated that the proposed L1/SL0-POCS algorithm can effectively suppress noise and artifacts, as well as preserve more structural information compared to other existing methods. PMID:26405900

  12. Multiscale Transient Signal Detection: Localizing Transients in Geodetic Data Through Wavelet Transforms and Sparse Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Riel, B.; Simons, M.; Agram, P.

    2012-12-01

    Transients are a class of deformation signals on the Earth's surface that can be described as non-periodic accumulation of strain in the crust. Over seismically and volcanically active regions, these signals are often challenging to detect due to noise and other modes of deformation. Geodetic datasets that provide precise measurements of surface displacement over wide areas are ideal for exploiting both the spatial and temporal coherence of transient signals. We present an extension to the Multiscale InSAR Time Series (MInTS) approach for analyzing geodetic data by combining the localization benefits of wavelet transforms (localizing signals in space) with sparse optimization techniques (localizing signals in time). Our time parameterization approach allows us to reduce geodetic time series to sparse, compressible signals with very few non-zero coefficients corresponding to transient events. We first demonstrate the temporal transient detection by analyzing GPS data over the Long Valley caldera in California and along the San Andreas fault near Parkfield, CA. For Long Valley, we are able to resolve the documented 2002-2003 uplift event with greater temporal precision. Similarly for Parkfield, we model the postseismic deformation by specific integrated basis splines characterized by timescales that are largely consistent with postseismic relaxation times. We then apply our method to ERS and Envisat InSAR datasets consisting of over 200 interferograms for Long Valley and over 100 interferograms for Parkfield. The wavelet transforms reduce the impact of spatially correlated atmospheric noise common in InSAR data since the wavelet coefficients themselves are essentially uncorrelated. The spatial density and extended temporal coverage of the InSAR data allows us to effectively localize ground deformation events in both space and time with greater precision than has been previously accomplished.

  13. Subthreshold membrane responses underlying sparse spiking to natural vocal signals in auditory cortex

    PubMed Central

    Perks, Krista Eva; Gentner, Timothy Q.

    2015-01-01

    Natural acoustic communication signals, such as speech, are typically high-dimensional with a wide range of co-varying spectro-temporal features at multiple timescales. The synaptic and network mechanisms for encoding these complex signals are largely unknown. We are investigating these mechanisms in high-level sensory regions of the songbird auditory forebrain, where single neurons show sparse, object-selective spiking responses to conspecific songs. Using whole-cell in vivo patch clamp techniques in the caudal mesopallium and the caudal nidopallium of starlings, we examine song-driven subthreshold and spiking activity. We find that both the subthreshold and the spiking activity are reliable (i.e., the same song drives a similar response each time it is presented) and specific (i.e., responses to different songs are distinct). Surprisingly, however, the reliability and specificity of the subthreshold response was uniformly high regardless of when the cell spiked, even for song stimuli that drove no spikes. We conclude that despite a selective and sparse spiking response, high-level auditory cortical neurons are under continuous, non-selective, stimulus-specific synaptic control. To investigate the role of local network inhibition in this synaptic control, we then recorded extracellularly while pharmacologically blocking local GABAergic transmission. This manipulation modulated the strength and the reliability of stimulus-driven spiking, consistent with a role for local inhibition in regulating the reliability of network activity and the stimulus specificity of the subthreshold response in single cells. We discuss these results in the context of underlying computations that could generate sparse, stimulus-selective spiking responses, and models for hierarchical pooling. PMID:25728189

  14. Subthreshold membrane responses underlying sparse spiking to natural vocal signals in auditory cortex.

    PubMed

    Perks, Krista E; Gentner, Timothy Q

    2015-03-01

    Natural acoustic communication signals, such as speech, are typically high-dimensional with a wide range of co-varying spectro-temporal features at multiple timescales. The synaptic and network mechanisms for encoding these complex signals are largely unknown. We are investigating these mechanisms in high-level sensory regions of the songbird auditory forebrain, where single neurons show sparse, object-selective spiking responses to conspecific songs. Using whole-cell in vivo patch clamp techniques in the caudal mesopallium and the caudal nidopallium of starlings, we examine song-driven subthreshold and spiking activity. We find that both the subthreshold and the spiking activity are reliable (i.e. the same song drives a similar response each time it is presented) and specific (i.e. responses to different songs are distinct). Surprisingly, however, the reliability and specificity of the subthreshold response was uniformly high regardless of when the cell spiked, even for song stimuli that drove no spikes. We conclude that despite a selective and sparse spiking response, high-level auditory cortical neurons are under continuous, non-selective, stimulus-specific synaptic control. To investigate the role of local network inhibition in this synaptic control, we then recorded extracellularly while pharmacologically blocking local GABAergic transmission. This manipulation modulated the strength and the reliability of stimulus-driven spiking, consistent with a role for local inhibition in regulating the reliability of network activity and the stimulus specificity of the subthreshold response in single cells. We discuss these results in the context of underlying computations that could generate sparse, stimulus-selective spiking responses, and models for hierarchical pooling. PMID:25728189

  15. Adaptive sparse signal processing of on-orbit lightning data using learned dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, D. I.; Hamlin, T.; Light, T. E.; Loveland, R. C.; Smith, D. A.; Suszcynsky, D. M.

    2012-12-01

    For the past two decades, there has been an ongoing research effort at Los Alamos National Laboratory (LANL) to learn more about the Earth's radiofrequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Arguably the richest satellite lightning database ever recorded is that from the Fast On-orbit Recording of Transient Events (FORTE) satellite, which returned at least five years of data from its two RF payloads after launch in 1997. While some classification work has been done previously on the LANL FORTE RF database, application of modern pattern recognition techniques may further lightning research in the scientific community and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. We now develop and implement new event classification capability on the FORTE database using state-of-the-art adaptive signal processing combined with compressive sensing and machine learning techniques. The focus of our work is improved feature extraction using sparse representations in learned dictionaries. Extracting classification features from RF signals typically relies on knowledge of the application domain in order to find feature vectors unique to a signal class and robust against background noise. Conventional localized data representations for RF transients using analytical dictionaries, such as a short-time Fourier basis or wavelets, can be suitable for analyzing some types of signals, but not others. Instead, we learn RF dictionaries directly from data, without relying on analytical constraints or additional knowledge about the signal characteristics, using several established machine learning algorithms. Sparse classification features are extracted via matching pursuit search over the learned dictionaries, and used in conjunction with a statistical classifier to distinguish between lightning types. We present preliminary results of our work and discuss classification performance.

  16. Multilayer material characterization using thermographic signal reconstruction

    NASA Astrophysics Data System (ADS)

    Shepard, Steven M.; Beemer, Maria Frendberg

    2016-02-01

    Active thermography has become a well-established Nondestructive Testing (NDT) method for detection of subsurface flaws. In its simplest form, flaw detection is based on visual identification of contrast between a flaw and local intact regions in an IR image sequence of the surface temperature as the sample responds to thermal stimulation. However, additional information and insight can be obtained from the sequence, even in the absence of a flaw, through analysis of the logarithmic derivatives of individual pixel time histories using the Thermographic Signal Reconstruction (TSR) method. For example, the response of a flaw-free multilayer sample to thermal stimulation can be viewed as a simple transition between the responses of infinitely thick samples of the individual constituent layers over the lifetime of the thermal diffusion process. The transition is represented compactly and uniquely by the logarithmic derivatives, based on the ratio of thermal effusivities of the layers. A spectrum of derivative responses relative to thermal effusivity ratios allows prediction of the time scale and detectability of the interface, and measurement of the thermophysical properties of one layer if the properties of the other are known. A similar transition between steady diffusion states occurs for flat-bottom holes, based on the hole aspect ratio.

  17. On the estimation of brain signal entropy from sparse neuroimaging data

    PubMed Central

    Grandy, Thomas H.; Garrett, Douglas D.; Schmiedek, Florian; Werkle-Bergner, Markus

    2016-01-01

    Multi-scale entropy (MSE) has been recently established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of the MSE to some classes of neural signals is MSE's apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block-designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs. PMID:27020961
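The discontinuous-segment idea can be illustrated with a simplified sample-entropy estimator (our own sketch, not the authors' code): template matches are counted within each segment and then pooled, so no template ever straddles an acquisition gap.

```python
# Sample entropy pooled across discontinuous segments: counts of matching
# templates of length m and m+1 are accumulated per segment, and the
# entropy is -log of the pooled ratio.
import numpy as np

def segment_matches(seg, m, r):
    # count pairs of length-m templates matching within tolerance r
    n = len(seg) - m
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.max(np.abs(seg[i:i + m] - seg[j:j + m])) <= r:
                count += 1
    return count

def sampen_discontinuous(segments, m=2, r=0.2):
    # pool counts over segments so no template spans a gap between them
    A = sum(segment_matches(s, m + 1, r) for s in segments)
    B = sum(segment_matches(s, m, r) for s in segments)
    return -np.log(A / B)

# two short, non-contiguous recordings of the same process
segments = [np.sin(np.linspace(0, 10, 120)), np.sin(np.linspace(3, 13, 120))]
entropy = sampen_discontinuous(segments)
```

This is the single-scale building block; the multi-scale version applies it after coarse-graining each segment at the scale of interest.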

  18. Robust detection of premature ventricular contractions using sparse signal decomposition and temporal features.

    PubMed

    Manikandan, M Sabarimalai; Ramkumar, Barathram; Deshpande, Pranav S; Choudhary, Tilendra

    2015-12-01

    An automated noise-robust premature ventricular contraction (PVC) detection method is proposed based on sparse signal decomposition, temporal features, and decision rules. In this Letter, the authors exploit sparse expansion of electrocardiogram (ECG) signals on mixed dictionaries for simultaneously enhancing the QRS complex and reducing the influence of tall P and T waves, baseline wanders, and muscle artefacts. They further investigate a set of ten generalised temporal features combined with a decision-rule-based detection algorithm for discriminating PVC beats from non-PVC beats. The accuracy and robustness of the proposed method are evaluated using 47 ECG recordings from the MIT/BIH arrhythmia database. Evaluation results show that the proposed method achieves an average sensitivity of 89.69% and a specificity of 99.63%. Results further show that the proposed decision-rule-based algorithm with ten generalised features can accurately detect different patterns of PVC beats (uniform and multiform, couplets, triplets, and ventricular tachycardia) in the presence of other normal and abnormal heartbeats. PMID:26713158

  19. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor is relatively large because of the skylight background and the detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an overcomplete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the corresponding coefficients of each block are computed in the overcomplete dictionary. Since the coefficients of the noise and the target differ greatly, the target is extracted by applying a threshold to the coefficients. Experimental results show that the target can be well extracted, and the deviation, RMS and PV of the centroid are all smaller than those of the threshold-subtraction method.
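The spot-extraction pipeline can be sketched in a few lines (a toy 2D Gaussian dictionary and a synthetic sub-block; sizes, the threshold fraction, and all names are our illustrative choices, not the paper's):

```python
# Extract a Shack-Hartmann spot from background + noise by correlating a
# block with an overcomplete dictionary of 2D Gaussian atoms and keeping
# only the large coefficients.
import numpy as np

def gaussian_atom(size, cx, cy, s=1.5):
    # unit-norm 2D Gaussian centred at (cx, cy), flattened to a vector
    y, x = np.mgrid[:size, :size]
    g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s ** 2))
    return (g / np.linalg.norm(g)).ravel()

size = 8
# overcomplete dictionary: one atom per candidate spot centre
D = np.stack([gaussian_atom(size, cx, cy)
              for cx in range(size) for cy in range(size)], axis=1)

# synthetic block: one bright spot plus uniform background and noise
rng = np.random.default_rng(0)
block = (5.0 * gaussian_atom(size, 3, 4) + 0.3
         + 0.05 * rng.standard_normal(size * size))

coef = D.T @ block                  # correlation with each atom
keep = coef > 0.5 * coef.max()      # noise/background coefficients are small
spot = D[:, keep] @ coef[keep]      # reconstructed spot signal
```

The centroid would then be computed on `spot` rather than on the raw block, which is where the reduction in centroid error comes from.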

  20. An algorithm for extraction of periodic signals from sparse, irregularly sampled data

    NASA Technical Reports Server (NTRS)

    Wilcox, J. Z.

    1994-01-01

    Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
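A hedged sketch of the 'clean'-style loop described above (our simplified version: scan a grid of trial frequencies, least-squares-fit a sinusoid at the same irregular sample times, subtract the best match, and repeat):

```python
# CLEAN-style extraction of periodic components from irregularly sampled
# data: each pass finds the sinusoid that best matches the residual when
# evaluated at the actual sampling times, then subtracts it.
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 150))          # irregular sampling times
y = (2.0 * np.sin(2 * np.pi * 0.31 * t)
     + 0.8 * np.sin(2 * np.pi * 0.07 * t)
     + 0.1 * rng.standard_normal(t.size))

freqs = np.linspace(0.01, 0.5, 2000)
found, resid = [], y.copy()
for _ in range(2):                              # extract two components
    power = []
    for f in freqs:
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
        power.append(np.linalg.norm(A @ coef))  # fitted amplitude at f
    f = freqs[int(np.argmax(power))]
    A = np.column_stack([np.cos(2 * np.pi * f * t),
                         np.sin(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    resid -= A @ coef                           # subtract the best match
    found.append(f)
# found holds frequencies close to the true 0.31 and 0.07
```

Because the trial sinusoids are evaluated at the true sampling times, the spurious intermodulation peaks that plague a naive FFT of gap-filled data do not fool the matching step.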

  1. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
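The adaptive weighting can be illustrated compactly (a simplified AwTV evaluation; the exponential weight form follows the description above, but the parameter value and names are ours):

```python
# Adaptive-weighted TV: each neighbour difference is weighted by an
# exponential of the local gradient, so strong edges are penalized far
# less than small noise-like variations.
import numpy as np

def awtv(img, delta=0.005):
    dx = np.diff(img, axis=0)[:, :-1]
    dy = np.diff(img, axis=1)[:-1, :]
    wx = np.exp(-(dx / delta) ** 2)    # weight -> 0 across strong edges
    wy = np.exp(-(dy / delta) ** 2)
    return np.sqrt(wx * dx ** 2 + wy * dy ** 2).sum()

# a sharp edge: plain TV penalizes it, AwTV essentially does not
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0
plain_tv = (np.abs(np.diff(edge, axis=0)).sum()
            + np.abs(np.diff(edge, axis=1)).sum())
val = awtv(edge)   # near zero: the edge is preserved, not smoothed
```

In the reconstruction this quantity replaces plain TV inside the POCS loop; the sketch only shows why the exponential weights avoid the over-smoothing of edges described above.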

  2. Learning distance function for regression-based 4D pulmonary trunk model reconstruction estimated from sparse MRI data

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Tsymbal, Alexey; Ionasec, Razvan; Georgescu, Bogdan; Zhou, Shaohua K.; Hornegger, Joachim; Comaniciu, Dorin

    2011-03-01

    Congenital heart defect (CHD) is the most common birth defect and a frequent cause of death for children. Tetralogy of Fallot (ToF) is the most often occurring CHD, which affects in particular the pulmonary valve and trunk. Emerging interventional methods enable percutaneous pulmonary valve implantation, which constitutes an alternative to open heart surgery. While minimally invasive methods become common practice, imaging and non-invasive assessment tools become crucial components in the clinical setting. Cardiac computed tomography (CT) and cardiac magnetic resonance imaging (cMRI) are techniques with complementary properties and the ability to acquire multiple non-invasive and accurate scans required for advanced evaluation and therapy planning. In contrast to CT, which covers the full 4D information over the cardiac cycle, cMRI often acquires partial information, for example only one 3D scan of the whole heart in the end-diastolic phase and two 2D planes (long and short axes) over the whole cardiac cycle. The data acquired in this way is called sparse cMRI. In this paper, we propose a regression-based approach for the reconstruction of the full 4D pulmonary trunk model from sparse MRI. The reconstruction approach is based on learning a distance function between the sparse MRI which needs to be completed and the 4D CT data with the full information used as the training set. The distance is based on the intrinsic Random Forest similarity which is learnt for the corresponding regression problem of predicting coordinates of unseen mesh points. Extensive experiments performed on 80 cardiac CT and MR sequences demonstrated the average speed of 10 seconds and accuracy of 0.1053 mm mean absolute error for the proposed approach. Using the case retrieval workflow and local nearest neighbour regression with the learnt distance function appears to be competitive with respect to "black box" regression with immediate prediction of coordinates, while providing transparency to the

  3. Sparsely corrupted stimulated scattering signals recovery by iterative reweighted continuous basis pursuit.

    PubMed

    Wang, Kunpeng; Chai, Yi; Su, Chunxiao

    2013-08-01

    In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as a superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitude, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulation and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors. PMID:24007049

  4. Sparsely corrupted stimulated scattering signals recovery by iterative reweighted continuous basis pursuit

    NASA Astrophysics Data System (ADS)

    Wang, Kunpeng; Chai, Yi; Su, Chunxiao

    2013-08-01

    In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as a superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitude, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulation and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.
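The reweighted ℓ1 ingredient can be sketched as follows (our simplified version using weighted iterative soft-thresholding on a fixed dictionary; the continuous-basis-pursuit dictionary refinement step is omitted, and all parameter values are illustrative):

```python
# Reweighted l1-minimization: solve a weighted LASSO, then set each weight
# to 1/(|x_i| + eps) so that small coefficients are penalized more on the
# next pass, pushing the solution toward sparser supports.
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def reweighted_l1(A, y, lam=0.05, outer=4, inner=200, eps=0.1):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.ones_like(x)
    for _ in range(outer):
        for _ in range(inner):           # weighted ISTA iterations
            x = soft(x - A.T @ (A @ x - y) / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)      # reweight for the next pass
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # 40 measurements, 100 atoms
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]             # 3-sparse signal
y = A @ x_true
x_hat = reweighted_l1(A, y)
```

Each outer pass tightens the penalty on coefficients that stayed small, which is what lets reweighting outperform a single plain ℓ1 solve on sparse spike trains like these.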

  5. Cosparsity-based Stagewise Matching Pursuit algorithm for reconstruction of the cosparse signals

    NASA Astrophysics Data System (ADS)

    Wu, Di; Zhao, Yuxin; Wang, Wenwu; Hao, Yanling

    2015-12-01

    The cosparse analysis model has been introduced as an interesting alternative to the standard sparse synthesis model. Given a set of corrupted measurements, finding a signal belonging to this model is known as analysis pursuit, which is an important problem in analysis model based sparse representation. Several pursuit methods have already been proposed, such as the methods based on ℓ1-relaxation and greedy approaches based on the cosparsity of the signal. This paper presents a novel greedy-like algorithm, called Cosparsity-based Stagewise Matching Pursuit (CSMP), where the cosparsity of the target signal is estimated adaptively with a stagewise approach composed of forward and backward processes. In the forward process, the cosparsity is estimated and the signal is approximated, followed by the refinement of the cosparsity and the signal in the backward process. As a result, the target signal can be reconstructed without prior information of the cosparsity level. Experiments show that the performance of the proposed algorithm is comparable to those of the ℓ1-relaxation and Analysis Subspace Pursuit (ASP)/Analysis Compressive Sampling Matching Pursuit (ACoSaMP) in the noiseless case and better than that of Greedy Analysis Pursuit (GAP) in the noisy case.

  6. Range resolution improvement of eyesafe ladar testbed (ELT) measurements using sparse signal deconvolution

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Gunther, Jacob H.

    2014-06-01

    The Eyesafe Ladar Testbed (ELT) is an experimental ladar system with the capability of digitizing return laser pulse waveforms at 2 GHz. These waveforms can then be exploited off-line in the laboratory to develop signal processing techniques for noise reduction, range resolution improvement, and range discrimination between two surfaces of similar range interrogated by a single laser pulse. This paper presents the results of experiments with new deconvolution algorithms with the hoped-for gains of improving the range discrimination of the ladar system. The sparsity of ladar returns is exploited to solve the deconvolution problem in two steps. The first step is to estimate a point target response using a database of measured calibration data. This basic target response is used to construct a dictionary of target responses with different delays/ranges. Using this dictionary, ladar returns from a wide variety of surface configurations can be synthesized by taking linear combinations. A sparse linear combination matches the physical reality that ladar returns consist of the overlapping of only a few pulses. The dictionary construction process is a pre-processing step that is performed only once. The deconvolution step is performed by minimizing the error between the measured ladar return and the dictionary model while constraining the coefficient vector to be sparse. Other constraints such as the non-negativity of the coefficients are also applied. The results of the proposed technique are presented in the paper and are shown to compare favorably with previously investigated deconvolution techniques.
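The dictionary-and-sparse-fit step might be sketched as below (a synthetic Gaussian pulse and return; SciPy's non-negative least squares stands in for the sparsity-constrained solver, so this is an assumption-laden toy rather than the paper's algorithm):

```python
# Dictionary-based ladar deconvolution sketch: build a dictionary of
# time-shifted copies of a point-target response, then fit the return
# waveform with non-negative coefficients; large coefficients mark the
# ranges of the reflecting surfaces.
import numpy as np
from scipy.optimize import nnls

n = 200
pulse = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2)  # toy point response

# dictionary: the pulse delayed to every candidate range bin
D = np.zeros((n, n))
for k in range(n):
    for j, p in enumerate(pulse):
        i = k + j - 15
        if 0 <= i < n:
            D[i, k] = p

# two closely spaced surfaces interrogated by one laser pulse
truth = np.zeros(n)
truth[[80, 90]] = [1.0, 0.6]
y = D @ truth + 0.01 * np.random.default_rng(0).standard_normal(n)

coef, _ = nnls(D, y)   # non-negative fit; concentrates mass near bins 80, 90
```

Non-negativity alone already yields a fairly sparse fit here; the paper additionally constrains the coefficient vector to be sparse, which sharpens the separation of overlapping pulses further.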

  7. WISE data and sparse photometry used for shape reconstruction of asteroids

    NASA Astrophysics Data System (ADS)

    Ďurech, Josef; Hanuš, Josef; Alí-Lagoa, Victor M.; Delbo, Marco; Oszkiewicz, Dagmara A.

    2016-01-01

    Asteroid disk-integrated sparse-in-time photometry can be used for determination of shapes and spin states of asteroids by the lightcurve inversion method. To clearly distinguish the correct solution of the rotation period from other minima in the parameter space, data with good photometric accuracy are needed. We show that if the low-quality sparse photometry obtained from ground-based astrometric surveys is combined with data from the Wide-field Infrared Survey Explorer (WISE) satellite, the correct rotation period can be successfully derived. Although WISE observed in mid-IR wavelengths, we show that for the period and spin determination, these data can be modelled as reflected light. The absolute fluxes are not required since only relative variation of the flux over the rotation is sufficient to determine the period. We also discuss the potential of combining all WISE data with the Lowell photometric database to create physical models of thousands of asteroids.

  8. Signal discovery, limits, and uncertainties with sparse on/off measurements: an objective bayesian analysis

    SciTech Connect

    Knoetig, Max L.

    2014-08-01

    For decades researchers have studied the On/Off counting problem where a measured rate consists of two parts. One part is due to a signal process and the other is due to a background process, the magnitudes for both of which are unknown. While most frequentist methods are adequate for large number counts, they cannot be applied to sparse data. Here, I want to present a new objective Bayesian solution that only depends on three parameters: the number of events in the signal region, the number of events in the background region, and the ratio of the exposure for both regions. First, the probability of the counts only being due to background is derived analytically. Second, the marginalized posterior for the signal parameter is also derived analytically. With this two-step approach it is easy to calculate the signal's significance, strength, uncertainty, or upper limit in a unified way. This approach is valid without restrictions for any number count, including zero, and may be widely applied in particle physics, cosmic-ray physics, and high-energy astrophysics. In order to demonstrate the performance of this approach, I apply the method to gamma-ray burst data.

  9. Weak signal detection in hyperspectral imagery using sparse matrix transform (SMT) covariance estimation

    SciTech Connect

    Theiler, James P; Cao, Guangzhi; Bouman, Charles A

    2009-01-01

    Many detection algorithms in hyperspectral image analysis, from well-characterized gaseous and solid targets to deliberately uncharacterized anomalies and anomalous changes, depend on accurately estimating the covariance matrix of the background. In practice, the background covariance is estimated from samples in the image, and imprecision in this estimate can lead to a loss of detection power. In this paper, we describe the sparse matrix transform (SMT) and investigate its utility for estimating the covariance matrix from a limited number of samples. The SMT is formed by a product of pairwise coordinate (Givens) rotations, which can be efficiently estimated using greedy optimization. Experiments on hyperspectral data show that the estimate accurately reproduces even small eigenvalues and eigenvectors. In particular, we find that using the SMT to estimate the covariance matrix used in the adaptive matched filter leads to consistently higher signal-to-noise ratios.
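The SMT construction can be illustrated with a greedy Jacobi-style sketch (our simplified version, not the authors' code: each Givens rotation is chosen to zero the currently largest off-diagonal entry of the sample covariance):

```python
# Sparse matrix transform sketch: approximate the covariance
# eigendecomposition by a short product of Givens rotations, each picked
# greedily to decorrelate the most strongly correlated coordinate pair.
import numpy as np

def smt_covariance(S, n_rot):
    # apply n_rot greedy Givens rotations to (approximately) diagonalize S
    S = S.copy()
    p = S.shape[0]
    for _ in range(n_rot):
        off = np.abs(S - np.diag(np.diag(S)))
        i, j = np.unravel_index(np.argmax(off), off.shape)
        if off[i, j] == 0.0:
            break                       # already diagonal
        theta = 0.5 * np.arctan2(2 * S[i, j], S[i, i] - S[j, j])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(p)
        G[i, i] = G[j, j] = c
        G[i, j], G[j, i] = -s, s
        S = G.T @ S @ G                 # zeroes the (i, j) entry
    return S

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
S0 = A.T @ A / 30                       # sample covariance, p = 5
Sd = smt_covariance(S0, 20)             # off-diagonal energy largely removed
```

The number of rotations is the model-order knob: with few samples, stopping early regularizes the estimate, which is what lets the SMT reproduce small eigenvalues that a raw sample covariance gets badly wrong.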

  10. Adaptive sparse signal processing for discrimination of satellite-based radiofrequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2015-05-01

    For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.

  11. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. In this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint that prefers sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure-based regularization has the potential to balance structural a priori information with data-driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier to interpret for physicians. PMID:27203627
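A soft constraint that prefers group-level sparsity is typically realized with a group-lasso-type penalty, whose proximal operator shrinks each group's coefficients jointly so that whole groups (e.g. clustered mesh elements) switch off together. A minimal sketch, with hypothetical group index lists:

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    """Proximal operator of the group-lasso penalty t * sum_g ||x_g||_2.
    Each group's coefficient block is shrunk toward zero jointly; groups
    whose l2 norm falls below t are zeroed out entirely."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1 - t / norm) * x[g]   # shrink the whole block
    return out
```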

  12. Bottom-Up Visual Saliency Estimation With Deep Autoencoder-Based Sparse Reconstruction.

    PubMed

    Xia, Chen; Qi, Fei; Shi, Guangming

    2016-06-01

    Research on visual perception indicates that the human visual system is sensitive to center-surround (C-S) contrast in the bottom-up saliency-driven attention process. Different from the traditional contrast computation of feature difference, reconstruction-based models have emerged that estimate saliency starting from the original images themselves instead of seeking certain ad hoc features. However, in the existing reconstruction-based methods, the reconstruction parameters of each area are calculated independently without taking their global correlation into account. In this paper, inspired by the powerful feature learning and data reconstruction ability of deep autoencoders, we construct a deep C-S inference network and train it with data sampled randomly from the entire image to obtain a unified reconstruction pattern for the current image. In this way, global competition in the sampling and learning processes can be integrated into the nonlocal reconstruction and saliency estimation of each pixel, which achieves better detection results than models that consider local and global rarity separately. Moreover, by learning from the current scene, the proposed model can perform feature extraction and interaction simultaneously in an adaptive way, which yields better generalization to more types of stimuli. Experimental results show that, in accordance with different inputs, the network can learn distinct basic features for saliency modeling in its code layer. Furthermore, in a comprehensive evaluation on several benchmark data sets, the proposed method outperforms existing state-of-the-art algorithms. PMID:26800552

  13. Sparse Reconstruction for Temperature Distribution Using DTS Fiber Optic Sensors with Applications in Electrical Generator Stator Monitoring.

    PubMed

    Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos

    2016-01-01

    This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040
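Combining a few dictionary atoms to match the DTS readings is, in spirit, a sparse coding problem. A minimal orthogonal matching pursuit sketch (assuming unit-norm atoms; not necessarily the authors' exact solver) shows the mechanism:

```python
import numpy as np

def omp(D, y, n_atoms):
    """Orthogonal matching pursuit: greedily select the dictionary atom
    (column of D, assumed unit-norm) most correlated with the residual,
    then re-fit all selected coefficients by least squares."""
    residual, support = y.astype(float), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

Here `D` would hold the simulated hotspot templates and `y` the measured temperature profile; the nonzero entries of `x` indicate which hotspots are present and how strong they are.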

  14. Sparse matrix beamforming and image reconstruction for real-time 2D HIFU monitoring using Harmonic Motion Imaging for Focused Ultrasound (HMIFU) with in vitro validation

    PubMed Central

    Hou, Gary Y.; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E.

    2015-01-01

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a recently developed High-Intensity Focused Ultrasound (HIFU) treatment monitoring method. HMIFU utilizes an Amplitude-Modulated (fAM = 25 Hz) HIFU beam to induce a localized focal oscillatory motion, which is simultaneously estimated and imaged by a confocally-aligned imaging transducer. The feasibility of HMIFU has been previously shown in silico, in vitro, and in vivo in 1-D or 2-D monitoring of HIFU treatment. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operation with real-time feedback. In this study, the algorithm was implemented on a fully integrated, clinically relevant HMIFU system composed of a 93-element HIFU transducer (fcenter = 4.5 MHz) and a coaxially-aligned 64-element phased array (fcenter = 2.5 MHz) for displacement excitation and motion estimation, respectively. A single divergent transmit beam was used, while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphical user interface. The present work developed and implemented sparse matrix beamforming on a fully-integrated, clinically relevant system, which can stream displacement images at up to 15 Hz using GPU-based processing, a 100-fold increase in the rate of streaming displacement images compared to conventional CPU-based beamforming and reconstruction. The achieved feedback rate is also currently the fastest among the acoustic radiation force based HIFU imaging techniques, and the only approach that does not require interrupting the HIFU treatment. 
Results in phantom experiments showed reproducible displacement imaging, and monitoring of twenty-two in vitro HIFU treatments using the new 2D system showed a
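The core trick of casting delay-and-sum as a single precomputed sparse matrix can be sketched as follows. This is a simplified plane-wave geometry with unit apodization weights and an assumed fixed channel length; the actual HMIFU system geometry and weighting differ:

```python
import numpy as np
from scipy import sparse

N_SAMP = 2048  # RF samples per channel (assumed)

def das_matrix(elem_x, pix_x, pix_z, fs, c):
    """Precompute delay-and-sum beamforming as one sparse matrix B, so every
    frame is beamformed by a single product B @ rf, with rf flattened in
    (element, sample) order. Transmit delay assumes a plane wave (z / c)."""
    rows, cols, vals = [], [], []
    for p in range(len(pix_x)):
        for e in range(len(elem_x)):
            # round-trip time: plane-wave transmit + receive path to element e
            d = pix_z[p] / c + np.hypot(pix_x[p] - elem_x[e], pix_z[p]) / c
            s = int(round(d * fs))
            if s < N_SAMP:
                rows.append(p)
                cols.append(e * N_SAMP + s)
                vals.append(1.0)   # unit weight (no apodization)
    return sparse.csr_matrix((vals, (rows, cols)),
                             shape=(len(pix_x), len(elem_x) * N_SAMP))
```

Once `B` is built, per-frame beamforming is a single sparse matrix-vector product, which is what makes the GPU streaming rate possible.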

  15. Real time reconstruction of quasiperiodic multi parameter physiological signals

    NASA Astrophysics Data System (ADS)

    Ganeshapillai, Gartheeban; Guttag, John

    2012-12-01

    A modern intensive care unit (ICU) has automated analysis systems that depend on continuous uninterrupted real time monitoring of physiological signals such as electrocardiogram (ECG), arterial blood pressure (ABP), and photo-plethysmogram (PPG). These signals are often corrupted by noise, artifacts, and missing data. We present an automated learning framework for real time reconstruction of corrupted multi-parameter nonstationary quasiperiodic physiological signals. The key idea is to learn a patient-specific model of the relationships between signals, and then reconstruct corrupted segments using the information available in correlated signals. We evaluated our method on MIT-BIH arrhythmia data, a two-channel ECG dataset with many clinically significant arrhythmias, and on the CinC challenge 2010 data, a multi-parameter dataset containing ECG, ABP, and PPG. For each, we evaluated both the residual distance between the original signals and the reconstructed signals, and the performance of a heartbeat classifier on a reconstructed ECG signal. At an SNR of 0 dB, the average residual distance on the CinC data was roughly 3% of the energy in the signal, and on the arrhythmia database it was roughly 16%. The difference is attributable to the large amount of diversity in the arrhythmia database. Remarkably, despite the relatively high residual difference, the classification accuracy on the arrhythmia database was still 98%, indicating that our method restored the physiologically important aspects of the signal.

  16. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    PubMed

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) images can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm. PMID:23154621
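A minimal sketch of such an adaptive-weighted TV objective, with neighbor weights given by an exponential of the local intensity difference (the paper's exact weight form and neighborhood may differ in detail):

```python
import numpy as np

def awtv(u, delta):
    """Adaptive-weighted TV of a 2-D image: each neighbor difference du is
    weighted by w = exp(-(du / delta)^2), so strong edges (large du) receive
    small weights and are penalized less than smooth regions."""
    dx = np.diff(u, axis=0)
    dy = np.diff(u, axis=1)
    wx = np.exp(-(dx / delta) ** 2)
    wy = np.exp(-(dy / delta) ** 2)
    return np.sum(wx * np.abs(dx)) + np.sum(wy * np.abs(dy))
```

As `delta` grows, the weights approach 1 and the penalty reduces to ordinary anisotropic TV; a small `delta` leaves sharp edges nearly unpenalized.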

  17. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-01

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch-sparsity on a dictionary provides the regularization for our effort, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary can be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results from Monte Carlo simulations and real data.

  18. Sparse-view spectral CT reconstruction using spectral patch-based low-rank penalty.

    PubMed

    Kim, Kyungsang; Ye, Jong Chul; Worstell, William; Ouyang, Jinsong; Rakvongthai, Yothin; El Fakhri, Georges; Li, Quanzheng

    2015-03-01

    Spectral computed tomography (CT) is a promising technique with the potential for improving lesion detection, tissue characterization, and material decomposition. In this paper, we are interested in kVp switching-based spectral CT that alternates distinct kVp X-ray transmissions during gantry rotation. This system can acquire multiple X-ray energy transmissions without additional radiation dose. However, only sparse views are generated for each spectral measurement; and the spectra themselves are limited in number. To address these limitations, we propose a penalized maximum likelihood method using spectral patch-based low-rank penalty, which exploits the self-similarity of patches that are collected at the same position in spectral images. The main advantage is that the relatively small number of materials within each patch allows us to employ the low-rank penalty that is less sensitive to intensity changes while preserving edge directions. In our optimization formulation, the cost function consists of the Poisson log-likelihood for X-ray transmission and the nonconvex patch-based low-rank penalty. Since the original cost function is difficult to minimize directly, we propose an optimization method using separable quadratic surrogate and concave convex procedure algorithms for the log-likelihood and penalty terms, which results in an alternating minimization that provides a computational advantage because each subproblem can be solved independently. We performed computer simulations and a real experiment using a kVp switching-based spectral CT with sparse-view measurements, and compared the proposed method with conventional algorithms. We confirmed that the proposed method improves spectral images both qualitatively and quantitatively. Furthermore, our GPU implementation significantly reduces the computational cost. PMID:25532170
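Patch-based low-rank penalties like the one above are commonly handled through singular-value soft-thresholding of the stacked spectral patches; the sketch below shows that one step, not the paper's full surrogate/concave-convex optimization:

```python
import numpy as np

def lowrank_shrink(patch_stack, t):
    """Singular-value soft-thresholding of a stack of co-located spectral
    patches (rows = energy bins, cols = flattened patch pixels). Because a
    patch contains few materials, the stack is nearly low-rank, and
    shrinking small singular values suppresses noise while keeping edges."""
    U, s, Vt = np.linalg.svd(patch_stack, full_matrices=False)
    s = np.maximum(s - t, 0.0)   # soft-threshold the singular values
    return (U * s) @ Vt
```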

  19. Airborne gravimetry data sparse reconstruction via L1-norm convex quadratic programming

    NASA Astrophysics Data System (ADS)

    Yang, Ya-Peng; Wu, Mei-Ping; Tang, Gang

    2015-06-01

    In practice, airborne gravimetry is a sub-Nyquist sampling method because of the restrictions imposed by national boundaries, financial cost, and database size. In this study, we analyze the sparsity of airborne gravimetry data by using the discrete Fourier transform and propose a reconstruction method based on the theory of compressed sensing for large-scale gravity anomaly data. Consequently, the reconstruction of the gravity anomaly data is transformed into an L1-norm convex quadratic programming problem. We combine the preconditioned conjugate gradient algorithm (PCG) and the improved interior-point method (IPM) to solve the convex quadratic programming problem. Furthermore, a flight test was carried out with the homegrown strapdown airborne gravimeter SGA-WZ. Subsequently, we reconstructed the gravity anomaly data of the flight test and compared the proposed method with the linear interpolation method, which is commonly used in airborne gravimetry. The test results show that the PCG-IPM algorithm can reconstruct large-scale gravity anomaly data more accurately and effectively than the linear interpolation method.
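The same L1-regularized reconstruction problem can be illustrated with a much simpler solver than the paper's PCG-IPM combination; a minimal iterative soft-thresholding (ISTA) sketch:

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Solve min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 by iterative
    soft-thresholding (a simple stand-in for the PCG-IPM solver)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - b)) / L        # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return x
```

Here `A` would be the (sub-Nyquist) sampling operator composed with the sparsifying Fourier basis, `b` the measured gravity anomalies, and `x` the sparse coefficient vector.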

  20. Source Reconstruction for Spectrally-resolved Bioluminescence Tomography with Sparse A priori Information

    PubMed Central

    Lu, Yujie; Zhang, Xiaoqun; Douraghy, Ali; Stout, David; Tian, Jie; Chan, Tony F.; Chatziioannou, Arion F.

    2009-01-01

    Through restoration of the light source information in small animals in vivo, optical molecular imaging, such as fluorescence molecular tomography (FMT) and bioluminescence tomography (BLT), can depict biological and physiological changes observed using molecular probes. A priori information plays an indispensable role in tomographic reconstruction. As a type of a priori information, the sparsity characteristic of the light source has not been sufficiently considered to date. In this paper, we introduce a compressed sensing method to develop a new tomographic algorithm for spectrally-resolved bioluminescence tomography. This method uses the nature of the source sparsity to improve the reconstruction quality with a regularization implementation. Based on verification of the inverse crime, the proposed algorithm is validated with Monte Carlo-based synthetic data and the popular Tikhonov regularization method. Testing with different noise levels and single/multiple source settings at different depths demonstrates the improved performance of this algorithm. Experimental reconstruction with a mouse-shaped phantom further shows the potential of the proposed algorithm. PMID:19434138

  1. Detecting transient signals in geodetic time series using sparse estimation techniques

    NASA Astrophysics Data System (ADS)

    Riel, Bryan; Simons, Mark; Agram, Piyush; Zhan, Zhongwhen

    2014-06-01

    We present a new method for automatically detecting transient deformation signals from geodetic time series. We cast the detection problem as a least squares procedure where the design matrix corresponds to a highly overcomplete, nonorthogonal dictionary of displacement functions in time that resemble transient signals of various timescales. The addition of a sparsity-inducing regularization term to the cost function limits the total number of dictionary elements needed to reconstruct the signal. Sparsity-inducing regularization enhances interpretability of the resultant time-dependent model by localizing the dominant timescales and onset times of the transient signals. Transient detection can then be performed using convex optimization software, where detection sensitivity depends on the strength of the applied sparsity-inducing regularization. To assess uncertainties associated with estimation of the dictionary coefficients, we compare solutions with those found through a Bayesian inference approach to sample the full model space for each dictionary element. In addition to providing uncertainty bounds on the coefficients and confirming the optimization results, Bayesian sampling reveals trade-offs between dictionary elements that have nearly equal probability in modeling a transient signal. Thus, we can rigorously assess the probabilities of the occurrence of transient signals and their characteristic temporal evolution. The detection algorithm is applied to several synthetic time series and to real observed GPS time series for the Cascadia region. For the latter data set, we incorporate a spatial weighting scheme that self-adjusts to the local network density and filters for spatially coherent signals. The weighting allows for the automatic detection of repeating slow slip events.
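A toy version of such an overcomplete dictionary of transient displacement functions, paired with a single-atom correlation detector (a drastic simplification of the sparsity-regularized fit), might look like:

```python
import numpy as np

def transient_dictionary(n, timescales):
    """Overcomplete dictionary of transient templates: smoothed ramps
    1 / (1 + exp(-(t - t0) / tau)) for every onset t0 and timescale tau,
    demeaned and normalized to unit energy (a sketch of the design matrix,
    not the authors' exact basis functions)."""
    t = np.arange(n, dtype=float)
    atoms = []
    for tau in timescales:
        for t0 in range(n):
            a = 1.0 / (1.0 + np.exp(-(t - t0) / tau))
            a -= a.mean()                     # remove offset; code shape only
            atoms.append(a / np.linalg.norm(a))
    return np.column_stack(atoms)

def detect_onset(D, y, n, timescales):
    """Return (onset, timescale) of the single best-matching atom."""
    k = int(np.argmax(np.abs(D.T @ y)))
    return k % n, timescales[k // n]
```

In the full method, a sparsity-inducing penalty selects a small subset of these columns jointly rather than just the single best match.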

  2. Adaptive sparse signal processing of satellite-based radio frequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2014-05-01

    Ongoing research at Los Alamos National Laboratory studies the Earth's radio frequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Such impulsive events are dispersed through the ionosphere and appear as broadband nonlinear chirps at a receiver on-orbit. They occur in the presence of additive noise and structured clutter, making their classification challenging. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database. Application of modern pattern recognition techniques to this database may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. Conventional feature extraction techniques using analytical dictionaries, such as a short-time Fourier basis or wavelets, are not comprehensively suitable for analyzing the broadband RF pulses under consideration here. We explore an alternative approach based on non-analytical dictionaries learned directly from data, and extend two dictionary learning algorithms, K-SVD and Hebbian, for use with satellite RF data. Both algorithms allow us to learn features without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features, and discuss their performance in terms of event classification. We also use principal component analysis to analyze and compare the respective learned dictionary spaces to the real data space.

  3. Distributed Signal Decorrelation and Detection in Multi View Camera Networks Using the Vector Sparse Matrix Transform.

    PubMed

    Bachega, Leonardo R; Hariharan, Srikanth; Bouman, Charles A; Shroff, Ness B

    2015-12-01

    This paper introduces the vector sparse matrix transform (vector SMT), a new decorrelating transform suitable for performing distributed processing of high-dimensional signals in sensor networks. We assume that each sensor in the network encodes its measurements into vector outputs instead of scalar ones. The proposed transform iteratively decorrelates pairs of vector outputs until the full set of vectors is decorrelated. In our experiments, we simulate distributed anomaly detection by a network of cameras monitoring a spatial region. Each camera records an image of the monitored environment from its particular viewpoint and outputs a vector encoding the image. Our results, with both artificial and real data, show that the proposed vector SMT transform effectively decorrelates image measurements from the multiple cameras in the network while maintaining low overall communication energy consumption. Since it enables joint processing of the multiple vector outputs, our method provides significant improvements to anomaly detection accuracy when compared with the baseline case in which the images are processed independently. PMID:26415179

  4. Sparse sampling and reconstruction for electron and scanning probe microscope imaging

    SciTech Connect

    Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.

    2015-07-28

    Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.

  5. Light field reconstruction robust to signal dependent noise

    NASA Astrophysics Data System (ADS)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a major issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capture and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.

  6. Chaotic signal reconstruction with application to noise radar system

    NASA Astrophysics Data System (ADS)

    Liu, Lidong; Hu, Jinfeng; He, Zishu; Han, Chunlin; Li, Huiyong; Li, Jun

    2011-12-01

    Chaotic signals are potentially attractive in engineering applications, most of which require an accurate estimation of the actual chaotic signal from a noisy background. In this article, we present an improved symbolic dynamics-based method (ISDM) for accurately estimating the initial condition of a chaotic signal corrupted by noise. Then, a new method, called the piecewise estimation method (PEM), for chaotic signal reconstruction based on ISDM is proposed. The reconstruction performance using PEM is much better than that using the existing initial condition estimation methods. Next, PEM is applied in a noncoherent reception noise radar scheme and an improved noncoherent reception scheme is given. The simulation results show that the improved noncoherent scheme has better correlation performance and range resolution, especially at low signal-to-noise ratios (SNRs).

  7. Signal enhanced holographic fluorescence microscopy with guide-star reconstruction

    PubMed Central

    Jang, Changwon; Clark, David C.; Kim, Jonghyun; Lee, Byoungho; Kim, Myung K.

    2016-01-01

    We propose a signal enhanced guide-star reconstruction method for holographic fluorescence microscopy. In the late 2000s, incoherent digital holography began to be vigorously studied by several groups to overcome the limitations of conventional digital holography. The basic concept of incoherent digital holography is to acquire the complex hologram from incoherent light by utilizing the temporal coherency of a spatially incoherent light source. The advent of incoherent digital holography opened up new possibilities for holographic fluorescence microscopy (HFM), which was difficult to achieve with conventional digital holography. However, an important issue in HFM has been its low and noisy signal, which slows down the system speed and degrades the imaging quality. When guide-star reconstruction is adopted, the image reconstruction gives an improved result compared to the conventional propagation reconstruction method. The guide-star reconstruction method gives a higher imaging signal-to-noise ratio since the acquired complex point spread function provides optimal system-adaptive information and can restore a signal buried in noise more efficiently. We present a theoretical explanation and simulations as well as experimental results. PMID:27446653

  8. Signal enhanced holographic fluorescence microscopy with guide-star reconstruction.

    PubMed

    Jang, Changwon; Clark, David C; Kim, Jonghyun; Lee, Byoungho; Kim, Myung K

    2016-04-01

    We propose a signal enhanced guide-star reconstruction method for holographic fluorescence microscopy. In the late 2000s, incoherent digital holography began to be vigorously studied by several groups to overcome the limitations of conventional digital holography. The basic concept of incoherent digital holography is to acquire the complex hologram from incoherent light by utilizing the temporal coherency of a spatially incoherent light source. The advent of incoherent digital holography opened up new possibilities for holographic fluorescence microscopy (HFM), which was difficult to achieve with conventional digital holography. However, an important issue in HFM has been its low and noisy signal, which slows down the system speed and degrades the imaging quality. When guide-star reconstruction is adopted, the image reconstruction gives an improved result compared to the conventional propagation reconstruction method. The guide-star reconstruction method gives a higher imaging signal-to-noise ratio since the acquired complex point spread function provides optimal system-adaptive information and can restore a signal buried in noise more efficiently. We present a theoretical explanation and simulations as well as experimental results. PMID:27446653
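The advantage of reconstructing with a measured guide-star PSF rather than an analytic propagation kernel can be illustrated with a Wiener-type inverse filter; this is a real-valued sketch only, as the papers' complex-hologram processing is more involved:

```python
import numpy as np

def guidestar_reconstruct(hologram, psf, eps=1e-3):
    """Wiener-type reconstruction using a measured guide-star PSF: the
    regularized inverse filter conj(H) / (|H|^2 + eps) deblurs the hologram
    while suppressing frequencies where the PSF carries little signal."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(hologram)
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + eps)))
```

Because the measured PSF encodes the system's actual aberrations, dividing by it restores the object more faithfully than an idealized propagation kernel would.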

  9. A single-frame terahertz image super-resolution reconstruction method based on sparse representation theory

    NASA Astrophysics Data System (ADS)

    Li, Yue; Zhao, Yuan-meng; Deng, Chao; Zhang, Cunlin

    2014-11-01

    Terrorist attacks have made public safety a focus of national attention. Passive terahertz security instruments can help overcome some shortcomings of current security instruments. Terahertz waves have strong penetrating power and can pass through clothing without harming human bodies or the detected objects. However, in lab experiments, we found that the original terahertz images obtained by the passive terahertz technique were often too blurry to detect the objects of interest. Prior studies suggest that learning-based image super-resolution reconstruction (SRR) methods can solve this problem. To our knowledge, we applied the learning-based image SRR method for the first time to single-frame passive terahertz image processing. Experimental results showed that the processed passive terahertz images were clearer, and suspicious objects were easier to identify than in the original images. We also compared our method with three conventional methods, and ours showed a clear advantage over the others.

  10. Reconstruction of ocean circulation from sparse data using the adjoint method: LGM and the present

    NASA Astrophysics Data System (ADS)

    Kurahashi-Nakamura, T.; Losch, M. J.; Paul, A.; Mulitza, S.; Schulz, M.

    2010-12-01

    tailored to be used with a source-to-source compiler to generate exact and efficient adjoint model code. To mimic real geological data, we carried out verification experiments with artificial data (temperature and salinity) sampled from a simulation with the MITgcm, and we examined how well the original model ocean was reconstructed through the adjoint method with our model. Through these ‘identical twin experiments’, we evaluated the performance and usefulness of the model to obtain guidelines for future experiments with real data.

  11. Characterizing and differentiating task-based and resting state fMRI signals via two-stage sparse representations.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming

    2016-03-01

    A relatively underexplored question in fMRI is whether there are intrinsic differences in terms of signal composition patterns that can effectively characterize and differentiate task-based or resting state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were composed into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI/rsfMRI data across multiple subjects were composed into another big data matrix, which was further sparsely represented by a cross-subjects common dictionary and a weight matrix. This framework has been applied on the recently publicly released Human Connectome Project (HCP) fMRI data, and experimental results revealed that there are distinctive and descriptive atoms in the cross-subjects common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted, e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big data of tfMRI and rsfMRI signals across all subjects in the HCP Q1 release. PMID:25732072

  12. TH-E-17A-02: High-Pitch and Sparse-View Helical 4D CT Via Iterative Image Reconstruction Method Based On Tensor Framelet

    SciTech Connect

    Guo, M; Nam, H; Li, R; Xing, L; Gao, H

    2014-06-15

    Purpose: 4D CT is routinely performed during radiation therapy treatment planning of thoracic and abdominal cancers. Compared with the cine mode, the helical mode is advantageous in temporal resolution. However, a low pitch (∼0.1) is often required for 4D CT imaging instead of the standard pitch (∼1) used for static imaging, since standard image reconstruction based on analytic methods requires low-pitch scanning in order to satisfy the data sufficiency condition when reconstructing each temporal frame individually. In comparison, the flexible iterative method enables the reconstruction of all temporal frames simultaneously, so that the image similarity among frames can be utilized to perform high-pitch and sparse-view helical 4D CT imaging. The purpose of this work is to investigate this exciting possibility for faster imaging with lower dose. Methods: A key to high-pitch and sparse-view helical 4D CT imaging is the simultaneous reconstruction of all temporal frames using the prior that temporal frames are continuous along the temporal direction. In this work, such a prior is regularized through a sparsity transform based on the spatiotemporal tensor framelet (TF), a multilevel and high-order extension of the total variation transform. Moreover, GPU-based fast parallel computing of the X-ray transform and its adjoint, together with the split Bregman method, is utilized to solve the 4D image reconstruction problem efficiently and accurately. Results: Simulation studies based on 4D NCAT phantoms were performed with various pitches (i.e., 0.1, 0.2, 0.5, and 1) and sparse views (i.e., 400 views per rotation instead of the standard >2000 views per rotation), using a 3D iterative individual reconstruction method based on 3D TF and a 4D iterative simultaneous reconstruction method based on 4D TF, respectively. Conclusion: The proposed TF-based simultaneous 4D image reconstruction method enables high-pitch and sparse-view helical 4D CT with lower dose and faster speed.
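The simultaneous-reconstruction prior can be illustrated with the simplest member of the family that the tensor framelet generalizes: an l1 penalty on spatial and temporal finite differences (spatiotemporal total variation), sketched below as a stand-in for the TF transform:

```python
import numpy as np

def spatiotemporal_tv(frames):
    """l1 norm of temporal and spatial finite differences of a 3-D array
    (time, y, x). Penalizing this jointly across all frames is what lets
    simultaneous reconstruction exploit frame-to-frame similarity."""
    dt = np.diff(frames, axis=0)   # temporal differences
    dy = np.diff(frames, axis=1)   # spatial differences, rows
    dx = np.diff(frames, axis=2)   # spatial differences, columns
    return np.abs(dt).sum() + np.abs(dy).sum() + np.abs(dx).sum()
```

A motion sequence whose frames change slowly has a small temporal term, so undersampled views from neighboring frames can compensate for each other under this penalty.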

  13. Food Reconstruction Using Isotopic Transferred Signals (FRUITS): A Bayesian Model for Diet Reconstruction

    PubMed Central

    Fernandes, Ricardo; Millard, Andrew R.; Brabec, Marek; Nadeau, Marie-Josée; Grootes, Pieter

    2014-01-01

    Human and animal diet reconstruction studies that rely on tissue chemical signatures aim to estimate the relative intake of potential food groups. However, several sources of uncertainty need to be considered when handling such data. Bayesian mixing models provide a natural platform for handling diverse sources of uncertainty while allowing the user to contribute prior expert information. The Bayesian mixing model FRUITS (Food Reconstruction Using Isotopic Transferred Signals) was developed for use in diet reconstruction studies. FRUITS incorporates the capability to account for dietary routing, that is, the contribution of different food fractions (e.g., macronutrients) towards a dietary proxy signal measured in the consumer. FRUITS also provides relatively straightforward means for the introduction of prior information on the relative dietary contributions of food groups or food fractions. This type of prior may originate, for instance, from physiological or metabolic studies. FRUITS performance was tested using simulated data and data from a published controlled animal feeding experiment. The feeding experiment data were selected to exemplify the application of the novel capabilities incorporated into FRUITS, but also to illustrate some of the aspects that need to be considered when handling data within diet reconstruction studies. FRUITS accurately predicted dietary intakes, and more precise estimates were obtained for dietary scenarios in which expert prior information was included. FRUITS represents a useful tool for achieving accurate and precise food intake estimates in diet reconstruction studies across different scientific fields (e.g., ecology, forensics, archaeology, and dietary physiology). PMID:24551057

  14. Gearbox fault diagnosis using adaptive zero phase time-varying filter based on multi-scale chirplet sparse signal decomposition

    NASA Astrophysics Data System (ADS)

    Wu, Chunyan; Liu, Jian; Peng, Fuqiang; Yu, Dejie; Li, Rong

    2013-07-01

    When used for separating multi-component non-stationary signals, the adaptive time-varying filter (ATF) based on multi-scale chirplet sparse signal decomposition (MCSSD) generates phase shift and signal distortion. To overcome this drawback, the zero phase filter is introduced into the ATF, and a fault diagnosis method for speed-changing gearboxes is proposed. Firstly, the gear meshing frequency of each gearbox is estimated by chirplet path pursuit. Then, according to the estimated gear meshing frequencies, an adaptive zero phase time-varying filter (AZPTF) is designed to filter the original signal. Finally, the basis for fault diagnosis is acquired by applying envelope order analysis to the filtered signal. A signal consisting of two time-varying amplitude modulation and frequency modulation (AM-FM) components was analyzed by both the ATF and the AZPTF based on MCSSD. The simulation results show that the variances between the original signals and the signals filtered by the AZPTF based on MCSSD are 13.67 and 41.14, far smaller than the variances (323.45 and 482.86) between the original signals and the signals filtered by the ATF based on MCSSD. The experimental results on the vibration signals of gearboxes indicate that the vibration signals of two speed-changing gearboxes installed on one foundation bed can be separated effectively by the AZPTF. Based on the demodulation information of the vibration signal of each gearbox, fault diagnosis can be implemented. Both the simulation and the experimental examples prove that the proposed filter can extract a mono-component time-varying AM-FM signal from a multi-component time-varying AM-FM signal without distortion.
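
    The envelope analysis step above rests on the analytic signal: the Hilbert-transform envelope of an AM signal recovers its modulation. A minimal FFT-based sketch on a synthetic AM signal (the modulation and carrier frequencies are arbitrary assumptions, not the paper's gearbox data):

```python
import numpy as np

def envelope(x):
    """Envelope via the analytic signal: zero negative frequencies, double positive."""
    n = x.size
    X = np.fft.fft(x)
    w = np.zeros(n)
    w[0] = 1.0
    w[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        w[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * w))

t = np.arange(0.0, 1.0, 1e-3)                   # 1 s at 1 kHz sampling
am = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)      # slow modulating envelope
sig = am * np.cos(2 * np.pi * 50 * t)           # AM signal on a 50 Hz carrier
env = envelope(sig)
```

Because the modulation bandwidth is well below the carrier frequency (and the signal is periodic on the window), the recovered envelope matches the modulation essentially exactly.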

  15. Multidimensional filter bank signal reconstruction from multichannel acquisition.

    PubMed

    Law, Ka Lung; Do, Minh N

    2011-02-01

    We study the theory and algorithms for optimal multidimensional signal reconstruction from multichannel acquisition using a filter bank setup. Suppose that we have an N-channel convolution system, referred to as N analysis filters, in M dimensions. Instead of taking all the data and applying multichannel deconvolution, we first reduce the collected data set by an integer M×M uniform sampling matrix [Formula: see text], and then search for a synthesis polyphase matrix which could perfectly reconstruct any input discrete signal. First, we determine the existence of perfect reconstruction (PR) systems for a given set of finite-impulse response (FIR) analysis filters. Second, we present an efficient algorithm to find a sampling matrix with maximum sampling rate and to find an FIR PR synthesis polyphase matrix for a given set of FIR analysis filters. Finally, once a particular FIR PR synthesis polyphase matrix is found, we can characterize all FIR PR synthesis matrices, and then find an optimal one according to design criteria including robust reconstruction in the presence of noise. PMID:20729172

  16. Simultaneous Reconstruction of Multiple Signaling Pathways via the Prize-Collecting Steiner Forest Problem

    PubMed Central

    Tuncbag, Nurcan; Braunstein, Alfredo; Pagnani, Andrea; Huang, Shao-Shan Carol; Chayes, Jennifer; Borgs, Christian; Zecchina, Riccardo

    2013-01-01

    Signaling and regulatory networks are essential for cells to control processes such as growth, differentiation, and response to stimuli. Although many “omic” data sources are available to probe signaling pathways, these data are typically sparse and noisy. Thus, it has been difficult to use these data to discover the causes of disease and to propose new therapeutic strategies. We overcome these problems and use “omic” data to simultaneously reconstruct multiple pathways that are altered in a particular condition by solving the prize-collecting Steiner forest problem. To evaluate this approach, we use the well-characterized yeast pheromone response. We then apply the method to human glioblastoma data, searching for a forest of trees, each of which is rooted in a different cell-surface receptor. This approach discovers both overlapping and independent signaling pathways that are enriched in functionally and clinically relevant proteins, which could provide the basis for new therapeutic strategies. Although the algorithm was not provided with any information about the phosphorylation status of receptors, it identifies a small set of clinically relevant receptors among hundreds present in the interactome. PMID:23383998

  17. Adaptive multimode signal reconstruction from time–frequency representations

    PubMed Central

    Meignen, Sylvain; Oberlin, Thomas; Depalle, Philippe; Flandrin, Patrick

    2016-01-01

    This paper discusses methods for the adaptive reconstruction of the modes of multicomponent AM–FM signals by their time–frequency (TF) representation derived from their short-time Fourier transform (STFT). The STFT of an AM–FM component or mode spreads the information relative to that mode in the TF plane around curves commonly called ridges. An alternative view is to consider a mode as a particular TF domain termed a basin of attraction. Here we discuss two new approaches to mode reconstruction. The first determines the ridge associated with a mode by considering the location where the direction of the reassignment vector sharply changes, the technique used to determine the basin of attraction being directly derived from that used for ridge extraction. A second uses the fact that the STFT of a signal is fully characterized by its zeros (and then the particular distribution of these zeros for Gaussian noise) to deduce an algorithm to compute the mode domains. For both techniques, mode reconstruction is then carried out by simply integrating the information inside these basins of attraction or domains. PMID:26953184

  18. Adaptive multimode signal reconstruction from time-frequency representations.

    PubMed

    Meignen, Sylvain; Oberlin, Thomas; Depalle, Philippe; Flandrin, Patrick; McLaughlin, Stephen

    2016-04-13

    This paper discusses methods for the adaptive reconstruction of the modes of multicomponent AM-FM signals by their time-frequency (TF) representation derived from their short-time Fourier transform (STFT). The STFT of an AM-FM component or mode spreads the information relative to that mode in the TF plane around curves commonly called ridges. An alternative view is to consider a mode as a particular TF domain termed a basin of attraction. Here we discuss two new approaches to mode reconstruction. The first determines the ridge associated with a mode by considering the location where the direction of the reassignment vector sharply changes, the technique used to determine the basin of attraction being directly derived from that used for ridge extraction. A second uses the fact that the STFT of a signal is fully characterized by its zeros (and then the particular distribution of these zeros for Gaussian noise) to deduce an algorithm to compute the mode domains. For both techniques, mode reconstruction is then carried out by simply integrating the information inside these basins of attraction or domains. PMID:26953184

  19. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies

    PubMed Central

    Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker—single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, lim_{ε→0} N^(2)/N^(1) = φ_12(θ), compares the sample sizes required by different statistical tests as signals become sparse in sequencing data, i.e., as ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals (P_{N^(i)} < ε → 0). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol. PMID:27380176
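
    The Lancaster procedure maps each p-value to a chi-square quantile with a test-specific weight (its degrees of freedom) and sums the results; with all weights equal to 2 it reduces to Fisher's method, for which the even-degrees-of-freedom chi-square survival function has a closed form. A minimal stdlib sketch of that equal-weight special case (the p-values are illustrative):

```python
import math

def fisher_combine(pvals):
    """Equal-weight (df = 2) Lancaster combination, i.e. Fisher's method.

    Each p maps to the chi-square(2) quantile -2*ln(p); the sum is
    chi-square distributed with 2*len(pvals) degrees of freedom, whose
    survival function has a closed form for even df:
    sf(x, 2k) = exp(-x/2) * sum_{j<k} (x/2)^j / j!
    """
    stat = sum(-2.0 * math.log(p) for p in pvals)
    k = len(pvals)
    term, series = 1.0, 0.0
    for j in range(k):
        series += term                  # (stat/2)^j / j!
        term *= (stat / 2.0) / (j + 1)
    return math.exp(-stat / 2.0) * series

combined = fisher_combine([0.01, 0.20, 0.50])
```

For a single p-value the combination returns p itself, a useful sanity check; unequal weights would require general chi-square quantiles (e.g. from scipy.stats.chi2).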

  20. Reconstructing signals from noisy data with unknown signal and noise covariance.

    PubMed

    Oppermann, Niels; Robbers, Georg; Ensslin, Torsten A

    2011-10-01

    We derive a method to reconstruct Gaussian signals from linear measurements with Gaussian noise. This new algorithm is intended for applications in astrophysics and other sciences. The starting point of our considerations is the principle of minimum Gibbs free energy, which was previously used to derive a signal reconstruction algorithm handling uncertainties in the signal covariance. We extend this algorithm to simultaneously uncertain noise and signal covariances using the same principles in the derivation. The resulting equations are general enough to be applied in many different contexts. We demonstrate the performance of the algorithm by applying it to specific example situations and compare it to algorithms not allowing for uncertainties in the noise covariance. The results show that the method we suggest performs very well under a variety of circumstances and is indeed qualitatively superior to the other methods in cases where uncertainty in the noise covariance is present. PMID:22181098

  1. Sparse Sensing of Aerodynamic Loads on Insect Wings

    NASA Astrophysics Data System (ADS)

    Manohar, Krithika; Brunton, Steven; Kutz, J. Nathan

    2015-11-01

    We investigate how insects use sparse sensors on their wings to detect aerodynamic loading and wing deformation using a coupled fluid-structure model given a periodic flapping input motion. Recent observations suggest that insects collect sensor information about their wing deformation to inform control actions for maneuvering and rejecting gust disturbances. Given a small number of point measurements of the chordwise aerodynamic loads from the sparse sensors, we reconstruct the entire chordwise loading using sparse sensing, a signal processing technique that reconstructs a signal from a small number of measurements by l1-norm minimization of sparse modal coefficients in some basis. We compare reconstructions from sensors randomly sampled from probability distributions biased toward different regions along the wing chord. In this manner, we determine the preferred regions along the chord for sensor placement and for estimating chordwise loads to inform control decisions in flight.
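
    The l1-norm reconstruction described above can be sketched with a simple iterative soft-thresholding (ISTA) solver; this is a generic compressed-sensing illustration, not the authors' code, and the problem sizes, sensing matrix, and regularization weight are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 50, 100, 5                           # measurements, signal length, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = 3.0 * rng.standard_normal(k)
y = A @ x_true                                 # noiseless point measurements

# ISTA: minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient steps
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(5000):
    g = x + A.T @ (y - A @ x) / L              # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

recovered = set(np.argsort(np.abs(x))[-k:])    # indices of the k largest entries
```

With enough measurements relative to the sparsity level, the largest entries of the l1 solution coincide with the true support, which is the property the sensor-placement study relies on.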

  2. Joint surface reconstruction and 4D deformation estimation from sparse data and prior knowledge for marker-less respiratory motion tracking

    SciTech Connect

    Berkels, Benjamin; Rumpf, Martin; Bauer, Sebastian; Ettl, Svenja; Arold, Oliver; Hornegger, Joachim

    2013-09-15

    Purpose: The intraprocedural tracking of respiratory motion has the potential to substantially improve image-guided diagnosis and interventions. The authors have developed a sparse-to-dense registration approach that is capable of recovering the patient's external 3D body surface and estimating a 4D (3D + time) surface motion field from sparse sampling data and patient-specific prior shape knowledge. Methods: The system utilizes an emerging marker-less and laser-based active triangulation (AT) sensor that delivers sparse but highly accurate 3D measurements in real-time. These sparse position measurements are registered with a dense reference surface extracted from planning data. Thereby a dense displacement field is recovered, which describes the spatio-temporal 4D deformation of the complete patient body surface, depending on the type and state of respiration. It yields both a reconstruction of the instantaneous patient shape and a high-dimensional respiratory surrogate for respiratory motion tracking. The method is validated on a 4D CT respiration phantom and evaluated on both real data from an AT prototype and synthetic data sampled from dense surface scans acquired with a structured-light scanner. Results: In the experiments, the authors estimated surface motion fields with the proposed algorithm on 256 datasets from 16 subjects and in different respiration states, achieving a mean surface reconstruction accuracy of ±0.23 mm with respect to ground truth data, down from a mean initial surface mismatch of 5.66 mm. The 95th percentile of the local residual mesh-to-mesh distance after registration did not exceed 1.17 mm for any subject. On average, the total runtime of our proof of concept CPU implementation is 2.3 s per frame, outperforming related work substantially. Conclusions: In external beam radiation therapy, the approach holds potential for patient monitoring during treatment using the reconstructed surface, and for motion-compensated dose delivery using

  3. Energy efficient acquisition and reconstruction of EEG signals.

    PubMed

    Singh, W; Shukla, A; Deb, S; Majumdar, A

    2014-01-01

    In Wireless Body Area Networks (WBANs), energy consumption is dominated by sensing and communication. Previous Compressed Sensing (CS) based solutions to EEG tele-monitoring over WBANs could only reduce the communication cost. In this work, we propose a matrix completion based formulation that can also reduce the energy consumption for sensing. We compare our method against state-of-the-art CS based techniques and find that its reconstruction accuracy is significantly better, at considerably lower energy consumption. Our method is also tested for post-reconstruction signal classification, where it outperforms previous CS based techniques. At the heart of the system is an Analog to Information Converter (AIC) implemented in 65 nm CMOS technology. A pseudorandom clock generator enables random under-sampling and subsequent conversion by the 12-bit Successive Approximation Register Analog to Digital Converter (SAR ADC). The AIC achieves a sample rate of 0.5 kS/s, an ENOB of 9.54 bits, and consumes 108 nW from a 1 V supply. PMID:25570198
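
    The matrix-completion idea, recovering a structured array from a random subset of sensed entries, can be illustrated with a rank-constrained iterative SVD imputation; this sketch is not the paper's EEG-specific formulation, and the matrix size, rank, and sampling rate are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 12, 2                                   # matrix size and rank (assumed)
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # ground truth
mask = rng.random((n, n)) < 0.7                # ~70% of entries are "sensed"

# Hard-impute: alternate a rank-r SVD fit with re-imposing the observed entries
X = np.where(mask, M, 0.0)
for _ in range(500):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :r] * s[:r]) @ Vt[:r]        # best rank-r approximation of X
    X = np.where(mask, M, X_low)               # keep observed entries fixed

rel_err = np.linalg.norm(X_low - M) / np.linalg.norm(M)
```

Sensing only a fraction of the entries is what allows the acquisition-side energy saving: the unobserved entries are never measured, only inferred.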

  4. On signals faint and sparse: The ACICA algorithm for blind de-trending of exoplanetary transits with low signal-to-noise

    SciTech Connect

    Waldmann, I. P.

    2014-01-01

    Independent component analysis (ICA) has recently been shown to be a promising new path in data analysis and de-trending of exoplanetary time series signals. Such approaches do not require or assume any prior or auxiliary knowledge about the data or instrument in order to de-convolve the astrophysical light curve signal from instrument or stellar systematic noise. These methods are often known as 'blind-source separation' (BSS) algorithms. Unfortunately, all BSS methods suffer from an amplitude and sign ambiguity of their de-convolved components, which severely limits these methods in low signal-to-noise (S/N) observations where their scalings cannot be determined otherwise. Here we present a novel approach to calibrate ICA using sparse wavelet calibrators. The Amplitude Calibrated Independent Component Analysis (ACICA) allows for the direct retrieval of the independent components' scalings and the robust de-trending of low S/N data. Such an approach gives a unique and unprecedented insight into the underlying morphology of a data set, making this method a powerful tool for exoplanetary data de-trending and signal diagnostics.

  5. Application of linear graph embedding as a dimensionality reduction technique and sparse representation classifier as a post classifier for the classification of epilepsy risk levels from EEG signals

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sunil Kumar; Rajaguru, Harikumar

    2015-12-01

    The most common and frequently occurring neurological disorder is epilepsy and the main method useful for the diagnosis of epilepsy is electroencephalogram (EEG) signal analysis. Due to the length of EEG recordings, EEG signal analysis method is quite time-consuming when it is processed manually by an expert. This paper proposes the application of Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing the epileptic encephalographic signals and then it is classified using Sparse Representation Classifiers (SRC). SRC is used to analyze the classification of epilepsy risk levels from EEG signals and the parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.

  6. Getting a decent (but sparse) signal to the brain for users of cochlear implants.

    PubMed

    Wilson, Blake S

    2015-04-01

    The challenge in getting a decent signal to the brain for users of cochlear implants (CIs) is described. A breakthrough occurred in 1989 that later enabled most users to understand conversational speech with their restored hearing alone. Subsequent developments included stimulation in addition to that provided with a unilateral CI, either with electrical stimulation on both sides or with acoustic stimulation in combination with a unilateral CI, the latter for persons with residual hearing at low frequencies in either or both ears. Both types of adjunctive stimulation produced further improvements in performance for substantial fractions of patients. Today, the CI and related hearing prostheses are the standard of care for profoundly deaf persons and ever-increasing indications are now allowing persons with less severe losses to benefit from these marvelous technologies. The steps in achieving the present levels of performance are traced, and some possibilities for further improvements are mentioned. This article is part of a Special Issue entitled . PMID:25500178

  7. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    DOEpatents

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  8. Baseline Signal Reconstruction for Temperature Compensation in Lamb Wave-Based Damage Detection.

    PubMed

    Liu, Guoqiang; Xiao, Yingchun; Zhang, Hua; Ren, Gexue

    2016-01-01

    Temperature variations significantly affect Lamb wave propagation and can therefore severely limit Lamb wave-based damage detection. In order to mitigate the temperature effect, a temperature compensation method based on baseline signal reconstruction is developed for Lamb wave-based damage detection. The method reconstructs the baseline signal at the temperature of the current signal; in other words, it compensates the baseline signal to that temperature. The Hilbert transform is used to compensate the phase of the baseline signal, and orthogonal matching pursuit (OMP) is used to compensate its amplitude. Experiments were conducted on two composite panels to validate the effectiveness of the proposed method. Results show that the proposed method remains effective over temperature intervals of at least 18 °C centered on the baseline signal temperature, and can be applied to actual damage detection. PMID:27529245
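
    Orthogonal matching pursuit, used above for amplitude compensation, greedily selects dictionary atoms correlated with the residual and refits the coefficients by least squares at each step. A generic numpy sketch (the random dictionary and sparsity level are illustrative assumptions, not the paper's Lamb wave dictionary):

```python
import numpy as np

def omp(D, y, k):
    """Greedy k-sparse approximation of y over the columns of D."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Refit coefficients on all chosen atoms by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((40, 60))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.2]
x_rec = omp(D, D @ x_true, k=3)
```

The least-squares refit over the whole selected support is what distinguishes OMP from plain matching pursuit and makes exact recovery possible in the noiseless, well-conditioned case.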

  9. Reconstruction of complex signals using minimum Rényi information.

    PubMed

    Frieden, B R; Bajkova, A T

    1995-07-10

    An information divergence, such as Shannon mutual information, measures the distance between two probability-density functions (or images). A wide class of such measures, called α divergences, with desirable properties such as convexity over all space, was defined by Amari. Rényi's information Dα is an α divergence. Because of its convexity property, the minimum of Dα is easily attained. Minimization accomplishes minimum distance (maximum resemblance) between an unknown image and a known reference image. Such a biasing effect permits complex images, such as occur in inverse synthetic-aperture-radar imaging, to be well reconstructed. The algorithm permits complex amplitudes to replace the probabilities in the Rényi form. The bias image may be constructed as a smooth version of the linear, Fourier reconstruction of the data. Examples on simulated complex image data with and without noise indicate that the Rényi reconstruction approach permits superresolution in low-noise cases and higher fidelity than ordinary, linear reconstructions in higher-noise cases. PMID:21052233

  10. A Novel Reconstruction Framework for Time-Encoded Signals with Integrate-and-Fire Neurons.

    PubMed

    Florescu, Dorian; Coca, Daniel

    2015-09-01

    Integrate-and-fire (IF) neurons are time encoding machines that convert the amplitude of an analog signal into a nonuniform, strictly increasing sequence of spike times. Under certain conditions, the encoded signals can be reconstructed from the nonuniform spike time sequences using a time decoding machine. Time encoding and time decoding methods have been studied using the nonuniform sampling theory for band-limited spaces, as well as for generic shift-invariant spaces. This letter proposes a new framework for studying IF time encoding and decoding by reformulating the IF time encoding problem as a uniform sampling problem. This framework forms the basis for two new algorithms for reconstructing signals from spike time sequences. We demonstrate that the proposed reconstruction algorithms are faster, and thus better suited for real-time processing, while providing a similar level of accuracy compared to the standard reconstruction algorithm. PMID:26161820
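
    An integrate-and-fire encoder integrates the biased input until a threshold is crossed, emits a spike time, and resets; the t-transform then ties each interspike interval to an integral of the signal. A discretized sketch (the test signal, bias, and threshold values are arbitrary assumptions):

```python
import numpy as np

dt, T = 1e-4, 1.0
t = np.arange(0.0, T, dt)
x = 0.6 * np.sin(2 * np.pi * 5 * t)   # bounded analog input, |x| < b
b, theta = 1.0, 0.02                  # bias and firing threshold

# Encode: accumulate (x + b)*dt; spike and reset (keeping the remainder)
# whenever the running integral reaches theta
spikes, acc = [], 0.0
for ti, xi in zip(t, x):
    acc += (xi + b) * dt
    if acc >= theta:
        spikes.append(ti)
        acc -= theta
spikes = np.array(spikes)

# t-transform check: the integral of (x + b) over each interspike interval
# equals theta (up to discretization error)
ivals = [np.sum(x[(t > a) & (t <= c)] + b) * dt
         for a, c in zip(spikes[:-1], spikes[1:])]
```

The bias b keeps the integrand positive so the spike sequence is strictly increasing, which is the property the decoding theory relies on.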

  11. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  12. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging

    PubMed Central

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  13. New signal processing technique for density profile reconstruction using reflectometry

    SciTech Connect

    Clairet, F.; Bottereau, C.; Ricaud, B.; Briolle, F.; Heuraux, S.

    2011-08-15

    Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal to noise ratio of the phase measurement, adequate data analysis is required. A new data processing based on time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-1. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma.

  14. New signal processing technique for density profile reconstruction using reflectometry.

    PubMed

    Clairet, F; Ricaud, B; Briolle, F; Heuraux, S; Bottereau, C

    2011-08-01

    Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal to noise ratio of the phase measurement, adequate data analysis is required. A new data processing based on time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-1. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma. PMID:21895243

  15. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    SciTech Connect

    Ensslin, Torsten A.; Frommert, Mona

    2011-05-15

    The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with an unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or in case an additional scale-independent spectral smoothness prior can be adopted.
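
    All five approaches reduce to Wiener-filter operations, which per Fourier mode weight the data by S/(S + N) for signal spectrum S and noise spectrum N. A toy numpy sketch with known (rather than unknown) spectra shows the basic operation; the spectra and sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4096
k = np.fft.rfftfreq(n, d=1.0)
S = 100.0 / (1.0 + (k / 0.01) ** 2)    # assumed signal power spectrum (red)
N = np.ones_like(k)                     # white noise spectrum

def draw(P):
    """Draw a real field whose power spectrum is approximately P."""
    amp = np.sqrt(P * n / 2.0)
    coef = amp * (rng.standard_normal(P.size) + 1j * rng.standard_normal(P.size))
    return np.fft.irfft(coef, n)

signal, noise = draw(S), draw(N)
data = signal + noise

# Wiener filter: per-mode weight S / (S + N)
rec = np.fft.irfft(S / (S + N) * np.fft.rfft(data), n)

err_raw = np.mean((data - signal) ** 2)
err_rec = np.mean((rec - signal) ** 2)
```

Modes where S >> N pass almost unchanged while noise-dominated modes are suppressed, which is exactly the behavior that produces a perception threshold once S itself must be estimated from the data.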

  16. Reconstruction of the signal produced by a directional sound source from remote multi-microphone recordings.

    PubMed

    Guarato, Francesco; Hallam, John; Matsuo, Ikuo

    2011-09-01

    A mathematical method for reconstructing the signal produced by a directional sound source from knowledge of the same signal in the far field, i.e., microphone recordings, is developed. The key idea is to compute inverse filters that compensate for the directional filtering of the signal by the sound source directivity, using a least-square error optimization strategy. Previous work pointed out how strongly the method depends on the arrival times of the signal in the microphone recordings. Two strategies are used in this paper for calculating the time shifts that are afterward taken as inputs, together with source directivity, for the reconstruction. The method has been tested in a laboratory environment, where ground truth was available, with a Polaroid transducer as the source. The reconstructions are similar with both strategies. The performance of the method also depends on source orientation. PMID:21895106
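The least-square inverse-filtering idea can be sketched as a Tikhonov-regularized deconvolution in the frequency domain. This is a generic stand-in, not the paper's directivity model: the exponential filter `h`, the test sinusoid, and the regularization weight `lam` are illustrative assumptions.

```python
import numpy as np

n = 256
s = np.sin(2 * np.pi * 5 * np.arange(n) / n)    # source signal (assumed)
h = np.exp(-np.arange(n) / 10.0)                # stand-in directivity filter
h /= h.sum()
H = np.fft.fft(h)
y = np.fft.ifft(np.fft.fft(s) * H).real         # "far-field recording"

# Regularized inverse filter: G = H* / (|H|^2 + lam), lam avoids noise blow-up
lam = 1e-3
G = np.conj(H) / (np.abs(H) ** 2 + lam)
s_hat = np.fft.ifft(np.fft.fft(y) * G).real     # reconstructed source signal
```

For modes where `|H|` is large the filter acts as a plain inverse; where `|H|` is small, `lam` damps the gain instead of amplifying noise.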

  17. Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing

    PubMed Central

    Yoshida, Ryo; West, Mike

    2010-01-01

    We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loadings matrices, that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate “artificial” posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data. PMID:20890391

  18. Iterative Sparse Approximation of the Gravitational Potential

    NASA Astrophysics Data System (ADS)

    Telschow, R.

    2012-04-01

    In recent applications in the approximation of gravitational potential fields, several new challenges arise. We are concerned with a huge quantity of data (e.g., in the case of the Earth) or strongly irregularly distributed data points (e.g., in the case of the Juno mission to Jupiter), both of which push established approximation methods to their limits. Our novel method, a matching pursuit, iteratively chooses a best basis out of a large redundant family of trial functions to reconstruct the signal. It is independent of the data points, which makes it possible to take into account a much larger amount of data and, furthermore, to handle irregularly distributed data, since the algorithm is able to combine arbitrary spherical basis functions, i.e., global as well as local trial functions. This additionally results in a solution that is sparse in the sense that it features more basis functions where the signal has a higher local detail density. In summary, we obtain a method that reconstructs large quantities of data with a preferably low number of basis functions, combining global as well as several localizing functions into a sparse basis, and a solution that is locally adapted to the data density and also to the detail density of the signal.
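The best-basis selection described above is, at its core, a matching-pursuit loop: at each step, the atom most correlated with the current residual is added to the expansion. A minimal generic sketch (using an orthonormal toy dictionary rather than the spherical basis functions of the method):

```python
import numpy as np

def matching_pursuit(y, D, n_iter):
    """Greedy sparse approximation: repeatedly add the dictionary atom
    (column of D, assumed unit norm) most correlated with the residual."""
    residual = y.astype(float).copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))      # best-matching atom
        coefs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coefs, residual

# Toy check: with an orthonormal dictionary, a 3-sparse signal is
# recovered exactly in 3 iterations.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # orthonormal atoms
x_true = np.zeros(64)
x_true[[3, 17, 40]] = [2.0, -1.5, 0.7]
y = Q @ x_true
coefs, residual = matching_pursuit(y, Q, 3)
```

With a redundant (non-orthogonal) family, the loop still applies; convergence is then slower and atoms may be revisited.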

  19. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and represent the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  20. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos

    2002-01-01

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and represent the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  1. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos

    2002-01-01

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data and represent the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  2. Reconstruction of digital spectrum from periodic nonuniformly sampled signals in offset linear canonical transform domain

    NASA Astrophysics Data System (ADS)

    Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Jiang, Congmei; Li, Yi

    2015-08-01

    Periodic nonuniform sampling is a special case of nonuniform sampling. It arises in a broad range of applications due to imperfect timebase or random events. As the offset linear canonical transform (OLCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to explore the spectral analysis and reconstruction for periodic nonuniformly sampled signals in the OLCT domain. In this paper, we address the problem of spectral analysis and reconstruction for periodic nonuniformly sampled signals associated with the OLCT. First, detailed spectral analysis of periodic nonuniformly sampled one-dimensional signals is performed. By applying the results, a relationship between the discrete and continuous OLCT spectrum is deduced, and a method to reconstruct the digital spectrum from periodic nonuniformly sampled signals in the one-dimensional case is proposed. Then, we extend these results to the two-dimensional case. Finally, the simulation results are presented to show the advantage and effectiveness of the methods.

  3. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    PubMed Central

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computed tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above the intact skull and above skull defects, respectively, were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above the intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull-defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  4. Tomographic Reconstruction of Breast Characteristics Using Transmitted Ultrasound Signals

    NASA Astrophysics Data System (ADS)

    Sandhu, Gursharan; Li, Cuiping; Duric, Neb; Huang, Zhi-Feng

    2012-10-01

    X-ray mammography has been the standard technique for the detection of breast cancer. However, it uses ionizing radiation and can cause severe discomfort. It also has low spatial resolution and can be prone to misdiagnosis. Techniques such as X-ray CT and MRI alleviate some of these issues but are costly. Researchers at Karmanos Cancer Institute developed a tomographic ultrasound device that is able to reconstruct the reflectivity, attenuation, and sound speed characteristics of the breast. A patient places her breast into a ring array of transducers immersed in a water bath, and the device scans the breast to yield a 3D reconstruction. Our work focuses on improving algorithms for attenuation and sound speed imaging. Current time-of-flight tomography provides relatively low-resolution images. Improvements are made by considering diffraction effects, using the low-resolution image as a seed for the Born approximation. Ultimately, full waveform inversion will be used to obtain images with resolution comparable to MRI.

  5. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT

    PubMed Central

    Choudhary, Kumari Sonal; Rohatgi, Neha; Briem, Eirikur; Gudjonsson, Thorarinn; Gudmundsson, Steinn; Rolfsson, Ottar

    2016-01-01

    Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR) signalling, is, however, a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT-regulated metabolic genes. Signaling flux differences between the EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend. PMID:27253373

  6. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT.

    PubMed

    Choudhary, Kumari Sonal; Rohatgi, Neha; Halldorsson, Skarphedinn; Briem, Eirikur; Gudjonsson, Thorarinn; Gudmundsson, Steinn; Rolfsson, Ottar

    2016-06-01

    Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR) signalling, is, however, a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT-regulated metabolic genes. Signaling flux differences between the EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend. PMID:27253373

  7. Performance analysis of compressive ghost imaging based on different signal reconstruction techniques.

    PubMed

    Kang, Yan; Yao, Yin-Ping; Kang, Zhi-Hua; Ma, Lin; Zhang, Tong-Yi

    2015-06-01

    We present different signal reconstruction techniques for the implementation of compressive ghost imaging (CGI). The different techniques are validated on data collected from a pseudothermal-light ghost imaging experimental system. Experimental results show that the technique based on total variation minimization gives a high-quality reconstruction of the imaged object with lower time consumption. The different performances among these reconstruction techniques and their parameter settings are also analyzed. The conclusions thus offer valuable information to promote the implementation of CGI in real applications. PMID:26367039
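For orientation, the conventional correlation-based ghost-imaging reconstruction that compressive techniques improve upon can be sketched in a few lines; the one-dimensional object, pattern count, and uniform random patterns below are illustrative assumptions, not the experimental setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_pat = 32, 4000
T = np.zeros(n_pix)
T[10] = 1.0                              # toy 1-D transmissive object
S = rng.random((n_pat, n_pix))           # speckle/illumination patterns
I = S @ T                                # bucket-detector values, one per pattern

# Correlation reconstruction: G(x) = <(I - <I>) (S(x) - <S(x)>)>
G = (I - I.mean()) @ (S - S.mean(axis=0)) / n_pat
```

The estimate `G` is proportional to the object transmission; CGI replaces this ensemble average with a sparsity-regularized inversion that needs far fewer patterns.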

  8. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples. PMID:18495977
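The level-independent universal-threshold idea can be sketched with a hand-rolled Haar transform. This is a simplified stand-in: the paper uses the MODWT on photoacoustic data, while the test signal, noise level, and soft thresholding below are illustrative assumptions.

```python
import numpy as np

def haar_decompose(x):
    """Multi-level Haar DWT (input length must be a power of two)."""
    details, a = [], x.astype(float)
    while a.size > 1:
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2))   # detail (finest first)
        a = (even + odd) / np.sqrt(2)               # approximation
    return a, details

def haar_reconstruct(a, details):
    for d in reversed(details):                     # coarsest level first
        even, odd = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = np.empty(2 * a.size)
        a[0::2], a[1::2] = even, odd
    return a

def denoise(x):
    a, details = haar_decompose(x)
    sigma = np.median(np.abs(details[0])) / 0.6745  # noise scale from finest level
    t = sigma * np.sqrt(2 * np.log(x.size))         # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - t, 0) for d in details]
    return haar_reconstruct(a, details)

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 4.0, -2.0, 3.0], 64)        # piecewise-constant signal
noisy = clean + 0.5 * rng.standard_normal(256)
den = denoise(noisy)
```

The same threshold `t` is applied at every level, which is what "level-independent" universal thresholding means.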

  9. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    NASA Astrophysics Data System (ADS)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high-pressure water-jet on targets of different materials produces different mixtures of reflected sound. In order to accurately reconstruct the distribution of the reflected sound signals along the linear detecting line and to effectively separate the environmental noise, the mixed sound signals acquired by a linear microphone array were processed by independent component analysis (ICA). The basic principle of ICA and the FastICA algorithm are described in detail. A simulation experiment was designed. The environmental noise was simulated using band-limited white noise, and the reflected sound signal was simulated using a pulse signal. The attenuation of the reflected sound signal over different transmission distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals and were whitened and separated by ICA. The final results verified that environmental noise separation and reconstruction of the sound distribution along the detecting line can be realized effectively.
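A compact FastICA sketch (the symmetric variant with a tanh nonlinearity on pre-whitened data) shows the separation principle on synthetic mixtures. The source waveforms, mixing matrix, and iteration count are illustrative assumptions, not the settings of the experiment above.

```python
import numpy as np

def fastica(X, n_iter=100, seed=0):
    """Symmetric FastICA with a tanh nonlinearity.
    X: (n_components, n_samples) array of mixed signals."""
    n, m = X.shape
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(X @ X.T / m)
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.random.default_rng(seed).standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Xw)
        # Fixed-point update: w <- E[x g(w.x)] - E[g'(w.x)] w
        W_new = G @ Xw.T / m - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
        dw, Ew = np.linalg.eigh(W_new @ W_new.T)
        W = Ew @ np.diag(dw ** -0.5) @ Ew.T @ W_new
    return W @ Xw

t = np.linspace(0, 1, 2000)
s1 = np.sign(np.sin(2 * np.pi * 13 * t))    # pulse-like "reflection" source
s2 = np.sin(2 * np.pi * 7 * t)              # smooth "noise" source
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # assumed mixing (microphone array)
Y = fastica(A @ S)                          # recovered sources, up to order/sign
```

As usual for ICA, the recovered components come back in arbitrary order and with arbitrary sign and scale.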

  10. Structural Reconstruction of Protein-Protein Complexes Involved in Intracellular Signaling.

    PubMed

    Kirsch, Klára; Sok, Péter; Reményi, Attila

    2016-01-01

    Signaling complexes within the cell convert extracellular cues into physiological outcomes. Their assembly involves signaling enzymes, allosteric regulators and scaffold proteins; these often contain long stretches of disordered protein regions and display multi-domain architectures, and the binding affinity between individual components is low. These features are indispensable for their central roles as dynamic information processing hubs; on the other hand, they also make reconstruction of structurally homogeneous complex samples highly challenging. In the present chapter, we discuss protein machinery which influences extracellular signal reception, intracellular pathway activity, and cytoskeletal or transcriptional activity. PMID:27165334

  11. Reconstruction of the temporal signaling network in Salmonella-infected human cells

    PubMed Central

    Budak, Gungor; Eren Ozsoy, Oyku; Aydin Son, Yesim; Can, Tolga; Tuncbag, Nurcan

    2015-01-01

    Salmonella enterica is a bacterial pathogen that usually infects its host through food sources. Translocation of the pathogen proteins into the host cells leads to changes in the signaling mechanism, either by activating or inhibiting the host proteins. Given that the bacterial infection modifies the response network of the host, a more coherent view of the underlying biological processes and the signaling networks can be obtained by using a network modeling approach based on reverse-engineering principles. In this work, we have used a published temporal phosphoproteomic dataset of Salmonella-infected human cells and reconstructed the temporal signaling network of the human host by integrating the interactome and the phosphoproteomic dataset. We have combined two well-established network modeling frameworks, the Prize-collecting Steiner Forest (PCSF) approach and the Integer Linear Programming (ILP) based edge inference approach. The resulting network conserves information on temporality and direction of interactions while revealing hidden entities in the signaling, such as the SNARE binding, mTOR signaling, immune response, cytoskeleton organization, and apoptosis pathways. Targets of the Salmonella effectors in the host cells, such as CDC42, RHOA, 14-3-3δ, the Syntaxin family, and oxysterol-binding proteins, were included in the reconstructed signaling network although they were not present in the initial phosphoproteomic data. We believe that integrated approaches, such as the one presented here, have a high potential for the identification of clinical targets in infectious diseases, especially in Salmonella infections. PMID:26257716

  12. Investigation on magnetoacoustic signal generation with magnetic induction and its application to electrical conductivity reconstruction.

    PubMed

    Ma, Qingyu; He, Bin

    2007-08-21

    A theoretical study on magnetoacoustic signal generation with magnetic induction and its application to electrical conductivity reconstruction is conducted. An object with a concentric cylindrical geometry is located in a static magnetic field and a pulsed magnetic field. Driven by the Lorentz force generated by the static magnetic field, the magnetically induced eddy current produces acoustic vibration, and the propagated sound wave is received by a transducer around the object to reconstruct the corresponding electrical conductivity distribution of the object. A theory on magnetoacoustic waveform generation for a circularly symmetric model is provided as a forward problem. The explicit formulae and a quantitative algorithm for the electrical conductivity reconstruction are then presented as an inverse problem. Computer simulations were conducted to test the proposed theory and assess the performance of the inverse algorithms for a multi-layer cylindrical model. The present simulation results confirm the validity of the proposed theory and suggest the feasibility of reconstructing the electrical conductivity distribution based on the proposed theory of magnetoacoustic signal generation with magnetic induction. PMID:17671355

  13. Imaging correlography with sparse collecting apertures

    NASA Astrophysics Data System (ADS)

    Idell, Paul S.; Fienup, J. R.

    1987-01-01

    This paper investigates the possibility of implementing an imaging correlography system with sparse arrays of intensity detectors. The theory underlying the image formation process for imaging correlography is reviewed, emphasizing the spatial filtering effects that sparse collecting apertures have on the reconstructed imagery. Image recovery with sparse arrays of intensity detectors is then demonstrated through computer experiments in which laser speckle measurements are digitally simulated. It is shown that the quality of imagery reconstructed using this technique is visibly enhanced when appropriate filtering techniques are applied. A performance tradeoff between collecting array redundancy and the number of speckle pattern measurements is briefly discussed.

  14. Crack growth sparse pursuit for wind turbine blade

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Yang, Zhibo; Zhang, Han; Du, Zhaohui; Chen, Xuefeng

    2015-01-01

    One critical challenge to achieving reliable wind turbine blade structural health monitoring (SHM) is that composite laminates are anisotropic and hard to access. The typical pitch-catch PZT approach generally detects structural damage using both measured and baseline signals. However, the accuracy of imaging or tomography by delay-and-sum approaches based on these signals requires improvement in practice. Via a model of Lamb wave propagation and the establishment of a dictionary that corresponds to scatterers, a robust sparse reconstruction approach to structural health monitoring has attracted attention for its promising performance. This paper proposes a neighbor dictionary that identifies the first crack location through sparse reconstruction and then presents a growth sparse pursuit algorithm that can precisely pursue the extension of the crack. An experiment diagnosing a composite wind turbine blade with an artificial crack is performed, and it validates the proposed approach. The results give competitively accurate crack detection, with correct locations and extension length.
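The sparse-reconstruction step can be illustrated with generic orthogonal matching pursuit (OMP) over a random dictionary. This is not the paper's neighbor dictionary or growth pursuit: the Lamb-wave dictionary is replaced here by an assumed Gaussian one, and the sparsity level is an illustrative choice.

```python
import numpy as np

def omp(y, D, k):
    """Orthogonal matching pursuit: greedily select k atoms, re-fitting
    the coefficients of the selected set by least squares at every step."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = np.argmax(np.abs(D.T @ residual))   # atom best matching residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
D = rng.standard_normal((128, 256))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = np.zeros(256)
x_true[[5, 80, 200]] = [1.5, -2.0, 1.0]         # 3 "scatterers"
y = D @ x_true                                  # noiseless measurement
x_hat = omp(y, D, 3)
```

The re-fitting step is what distinguishes OMP from plain matching pursuit: the residual stays orthogonal to all atoms chosen so far, so no atom is selected twice.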

  15. Correlation Detection Based on the Reconstructed Excitation Signal of Electromagnetic Seismic Vibrator

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Jiang, T.; Xu, X.; Jia, H.

    2014-12-01

    The correlation detection method is generally used to detect seismic data of electromagnetic seismic vibrators, which are widely applied in shallow mineral prospecting. By analyzing field seismic data from electromagnetic and hydraulic seismic vibrators in a mining area, we find that when the underground medium is complex or the base-plate of the vibrator is poorly coupled with the ground, the electromagnetic vibrator data show a 9.30 m positioning error and false multiple waves relative to the hydraulic vibrator data. The paper analyzes the theoretical reason for the above problems by studying how the electromagnetic vibrator's signal is excited, then proposes a new method of correlation detection based on the reconstructed excitation signal (CDBRES). CDBRES includes the following steps. First, it extracts the direct wave signal from a seismometer near the base-plate of the electromagnetic vibrator. Next, it reconstructs the excitation signal according to the extracted direct wave. Then, it detects the seismic data using cross-correlation with the reconstructed excitation signal as a reference. Finally, it uses spectrum whitening to improve detection quality. We simulate with the ray-tracing method, and simulation results show that the reconstructed excitation signal is highly consistent with the ideal excitation signal; the correlation coefficient between them is up to 0.9869. And the signal of the electromagnetic vibrator is detected correctly with the CDBRES method. Then a field comparison experiment between the hydraulic vibrator MiniVib T15000 and the electromagnetic vibrator PHVS 500 was carried out near a copper and nickel deposit area. Their output forces are 30,000 N and 300 N, respectively. Despite the large difference in output force, the detection result of the PHVS 500 using the CDBRES method is still consistent with the MiniVib T15000. Relative to the MiniVib T15000, the positioning error of the PHVS 500 is only 0.93 m at a relatively high noise level. In addition, false multiple waves are invisible.
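The core of vibroseis-style correlation detection, compressing a long excitation sweep into an impulse at the reflection delay, can be sketched generically. The sweep parameters, delay, amplitude, and noise level below are illustrative assumptions, not the field settings.

```python
import numpy as np

fs, T = 1000.0, 0.5
t = np.arange(int(fs * T)) / fs
# Linear chirp sweeping 10 -> 100 Hz over T seconds (assumed excitation)
sweep = np.sin(2 * np.pi * (10 + (100 - 10) / (2 * T) * t) * t)

rng = np.random.default_rng(4)
delay = 120                                   # true arrival delay, in samples
trace = 0.05 * rng.standard_normal(2000)      # recorded trace: noise ...
trace[delay:delay + sweep.size] += 0.8 * sweep  # ... plus delayed, scaled sweep

# Cross-correlate the trace with the (reconstructed) excitation signal;
# the correlation peak marks the arrival time.
corr = np.correlate(trace, sweep, mode="full")
lag = np.argmax(corr) - (sweep.size - 1)
```

CDBRES differs from this baseline in that the reference is not the nominal sweep but an excitation signal reconstructed from the direct wave, which is exactly what the correlation reference should be when plate-ground coupling distorts the nominal sweep.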

  16. New direction of arrival estimation of coherent signals based on reconstructing matrix under unknown mutual coupling

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Li, Weixing; Zhang, Yue; Chen, Zengping

    2016-01-01

    A direction of arrival (DOA) estimation algorithm for coherent signals in the presence of unknown mutual coupling is proposed. A group of auxiliary sensors in a uniform linear array is used to eliminate the effects of mutual coupling on the orthogonality of the subspaces. Then, a Toeplitz matrix, whose rank is independent of the coherency between impinging signals, is reconstructed to eliminate the rank loss of the spatial covariance matrix. Therefore, the signal and noise subspaces can be estimated properly. This method can accurately estimate the DOAs of coherent signals under unknown mutual coupling without any iteration or calibration sources. It has a low computational burden and high accuracy. Simulation results demonstrate the effectiveness of the algorithm.
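The rank-restoration idea behind Toeplitz reconstruction can be demonstrated numerically: with two fully coherent sources the sample covariance is rank one, but a Toeplitz matrix built from the correlations with a reference sensor regains rank two. Array size, electrical angles, and the noiseless setting are illustrative assumptions, and the mutual-coupling compensation is omitted.

```python
import numpy as np

M, snapshots = 8, 500
rng = np.random.default_rng(5)

def steering(phi):
    """ULA steering vector for electrical angle phi = pi * sin(theta)."""
    return np.exp(1j * phi * np.arange(M))

a1, a2 = steering(0.5), steering(-0.9)
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
X = np.outer(a1 + a2, s)               # two fully coherent sources (same waveform)

R = X @ X.conj().T / snapshots         # sample covariance: rank one
r = R[:, 0]                            # correlations with reference sensor 0
T = np.empty((M, M), complex)          # Toeplitz reconstruction: T[i, j] = r[i - j]
for i in range(M):
    for j in range(M):
        T[i, j] = r[i - j] if i >= j else np.conj(r[j - i])

sR = np.linalg.svd(R, compute_uv=False)    # one dominant singular value
sT = np.linalg.svd(T, compute_uv=False)    # two dominant singular values
```

Because the rank of `T` matches the number of sources regardless of coherency, subspace methods such as MUSIC can then be applied to `T` instead of `R`.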

  17. Dynamic analysis of heartbeat rate signals of epileptics using multidimensional phase space reconstruction approach

    NASA Astrophysics Data System (ADS)

    Su, Zhi-Yuan; Wu, Tzuyin; Yang, Po-Hua; Wang, Yeng-Tseng

    2008-04-01

    The heartbeat rate signal provides an invaluable means of assessing the sympathetic-parasympathetic balance of the human autonomic nervous system and thus represents an ideal diagnostic mechanism for detecting a variety of disorders such as epilepsy, cardiac disease and so forth. The current study analyses the dynamics of the heartbeat rate signal of known epilepsy sufferers in order to obtain a detailed understanding of the heart rate pattern during a seizure event. In the proposed approach, the ECG signals are converted into heartbeat rate signals and the embedology theorem is then used to construct the corresponding multidimensional phase space. The dynamics of the heartbeat rate signal are then analyzed before, during and after an epileptic seizure by examining the maximum Lyapunov exponent and the correlation dimension of the attractors in the reconstructed phase space. In general, the results reveal that the heartbeat rate signal transits from an aperiodic, highly-complex behaviour before an epileptic seizure to a low dimensional chaotic motion during the seizure event. Following the seizure, the signal trajectories return to a highly-complex state, and the complex signal patterns associated with normal physiological conditions reappear.
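The phase-space reconstruction step rests on time-delay embedding: each scalar sample is mapped to a vector of delayed copies of the series. A generic sketch (the embedding dimension and delay below are arbitrary illustrative choices, not the values used in the study):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Map a scalar series to points [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = x.size - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A noiseless sine embeds to a closed loop in the reconstructed phase space.
x = np.sin(np.linspace(0, 20 * np.pi, 2000))
pts = delay_embed(x, dim=3, tau=5)
```

Quantities such as the maximum Lyapunov exponent and the correlation dimension are then computed from the geometry of the point cloud `pts`.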

  18. Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries

    PubMed Central

    Tsiartas, Andreas; Stein, Leah I.; Cermak, Sharon A.; Narayanan, Shrikanth S.

    2015-01-01

    Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slow varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures, that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries along with sparse decomposition methods to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features. PMID:25494494

  19. Information field theory for cosmological perturbation reconstruction and nonlinear signal analysis

    NASA Astrophysics Data System (ADS)

    Enßlin, Torsten A.; Frommert, Mona; Kitaura, Francisco S.

    2009-11-01

    We develop information field theory (IFT) as a means of Bayesian inference on spatially distributed signals, the information fields. A didactical approach is attempted. Starting from general considerations on the nature of measurements, signals, noise, and their relation to a physical reality, we derive the information Hamiltonian, the source field, propagator, and interaction terms. Free IFT reproduces the well-known Wiener-filter theory. Interacting IFT can be diagrammatically expanded, for which we provide the Feynman rules in position-, Fourier-, and spherical-harmonics space, and the Boltzmann-Shannon information measure. The theory should be applicable in many fields. However, here, two cosmological signal recovery problems are discussed in their IFT formulation. (1) Reconstruction of the cosmic large-scale structure matter distribution from discrete galaxy counts in incomplete galaxy surveys within a simple model of galaxy formation. We show that a Gaussian signal, which should resemble the initial density perturbations of the Universe, observed with a strongly nonlinear, incomplete and Poissonian-noise affected response, as the processes of structure and galaxy formation and observations provide, can be reconstructed by virtue of a response-renormalization flow equation. (2) We design a filter to detect local nonlinearities in the cosmic microwave background, which are predicted from some early-Universe inflationary scenarios, and expected due to measurement imperfections. This filter is the optimal Bayes’ estimator up to linear order in the nonlinearity parameter and can be used even to construct sky maps of nonlinearities in the data.

  20. Fast multi-dimensional NMR acquisition and processing using the sparse FFT.

    PubMed

    Hassanieh, Haitham; Mayzel, Maxim; Shi, Lixin; Katabi, Dina; Orekhov, Vladislav Yu

    2015-09-01

    Increasing the dimensionality of NMR experiments strongly enhances the spectral resolution and provides invaluable direct information about atomic interactions. However, the price tag is high: long measurement times and heavy requirements on computation power and data storage. We introduce the sparse fast Fourier transform as a new method of NMR signal collection and processing, which is capable of reconstructing high-quality spectra of large size and dimensionality with short measurement times, faster computations than the fast Fourier transform, and minimal storage for processing and handling of sparse spectra. The new algorithm is described and demonstrated for a 4D BEST-HNCOCA spectrum. PMID:26123316

  1. Reconstruction of signal in plastic scintillator of PET using Tikhonov regularization.

    PubMed

    Raczynski, Lech

    2015-08-01

    The new concept of Time of Flight Positron Emission Tomography (TOF-PET) detection system, which allows for single bed imaging of the whole human body, is currently under development at the Jagiellonian University. The Jagiellonian-PET (J-PET) detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with durations of a few nanoseconds is not feasible, a novel front-end electronics allowing for sampling in the voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on the Tikhonov regularization method, is presented. From Bayesian theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. The method is tested using signals registered by means of the single detection module of the J-PET detector built from a 30 cm long plastic scintillator strip. It is shown that using the recovered waveform of the signals, instead of samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction from 1.05 cm to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal. The spatial resolution calculated under these conditions is equal to 0.93 cm. PMID:26736869
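    The regularized recovery step can be illustrated with ordinary Tikhonov (ridge) least squares in closed form; the random sampling matrix below is a toy stand-in for the four-threshold sampling model, not the actual J-PET forward model:

```python
import numpy as np

def tikhonov(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam ||x||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50))        # under-determined sampling operator (toy)
x_true = np.sin(np.linspace(0, np.pi, 50))   # smooth "waveform" to recover
y = A @ x_true + 0.01 * rng.normal(size=20)  # few noisy samples
x_hat = tikhonov(A, y, lam=0.1)
print(x_hat.shape)  # (50,)
```

    The regularization term makes the under-determined system solvable; as the abstract notes, the same Bayesian view also yields the covariance of the estimate, from which a recovery-error formula follows.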

  2. Sparse cortical source localization using spatio-temporal atoms.

    PubMed

    Korats, Gundars; Ranta, Radu; Le Cam, Steven; Louis-Dorr, Valérie

    2015-08-01

    This paper addresses the problem of sparse localization of cortical sources from scalp EEG recordings. Localization algorithms use a propagation model under spatial and/or temporal constraints, but their performance depends highly on the data signal-to-noise ratio (SNR). In this work we propose a dictionary-based sparse localization method which uses a data-driven spatio-temporal dictionary to reconstruct the measurements using Single Best Replacement (SBR) and Continuation Single Best Replacement (CSBR) algorithms. We tested and compared our methods with the well-known MUSIC and RAP-MUSIC algorithms on simulated realistic data. Tests were carried out for different noise levels. The results show that our method has a strong advantage over MUSIC-type methods in case of synchronized sources. PMID:26737185

  3. Signal transformation in erosional landscapes: insights for reconstructing tectonic history from sediment flux records

    NASA Astrophysics Data System (ADS)

    Li, Q.; Gasparini, N. M.; Straub, K. M.

    2015-12-01

    Changes in tectonics can affect erosion rates across a mountain belt, leading to non-steady sediment flux delivery to fluvial transport systems. The sediment flux signal produced from time-varying tectonics may eventually be recorded in a depositional basin. However, before the sediment flux from an erosional watershed is fed to the downstream transport system and preserved in sedimentary deposits, tectonic signals can be distorted or even destroyed as they are transformed into a sediment-flux signal that is exported out of a watershed. In this study, we use the Channel-Hillslope Integrated Landscape Development (CHILD) model to explore how the sediment flux delivered from a mountain watershed responds to non-steady rock uplift. We observe that (1) a non-linear relationship between the erosion response and tectonic perturbation can lead to a sediment-flux signal that is out of phase with the change in uplift rate; (2) in some cases in which the uplift perturbation is short, the sediment flux signal may contain no record of the change; (3) uplift rates interpreted from sediment flux at the outlet of a transient erosional landscape are likely to be underestimated. All these observations highlight the difficulty in accurately reconstructing tectonic history from sediment flux records. Results from this study will help to constrain what tectonic signals may be evident in the sediment flux delivered from an erosional system and therefore have the potential to be recorded in stratigraphy, ultimately improving our ability to interpret stratigraphy.

  4. Bayesian reconstruction of gravitational wave burst signals from simulations of rotating stellar core collapse and bounce

    SciTech Connect

    Roever, Christian; Bizouard, Marie-Anne; Christensen, Nelson; Dimmelmeier, Harald; Heng, Ik Siong; Meyer, Renate

    2009-11-15

    Presented in this paper is a technique that we propose for extracting the physical parameters of a rotating stellar core collapse from the observation of the associated gravitational wave signal from the collapse and core bounce. Data from interferometric gravitational wave detectors can be used to provide information on the mass of the progenitor model, precollapse rotation, and the nuclear equation of state. We use waveform libraries provided by the latest numerical simulations of rotating stellar core collapse models in general relativity, and from them create an orthogonal set of eigenvectors using principal component analysis. Bayesian inference techniques are then used to reconstruct the associated gravitational wave signal that is assumed to be detected by an interferometric detector. Posterior probability distribution functions are derived for the amplitudes of the principal component analysis eigenvectors, and the pulse arrival time. We show how the reconstructed signal and the principal component analysis eigenvector amplitude estimates may provide information on the physical parameters associated with the core collapse event.
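    The principal-component step above, building an orthogonal eigenbasis from a waveform catalog and representing a detected signal by a few eigenvector amplitudes, can be sketched as follows. The synthetic "library" of damped oscillations is purely illustrative, not an actual core-collapse waveform catalog:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "waveform library": 100 catalog signals of 512 samples each.
t = np.linspace(0, 1, 512)
library = np.array([np.exp(-((t - 0.5) / w) ** 2) * np.sin(2 * np.pi * f * t)
                    for w, f in zip(rng.uniform(0.05, 0.2, 100),
                                    rng.uniform(5, 40, 100))])
library -= library.mean(axis=0)        # center before PCA

# Orthonormal eigenvectors via SVD; keep the leading 10 components.
U, s, Vt = np.linalg.svd(library, full_matrices=False)
basis = Vt[:10]                        # (10, 512), rows are orthonormal

# Represent one signal by its PCA amplitudes, then reconstruct.
signal = library[0]
amps = basis @ signal                  # the quantities given posteriors in the paper
recon = basis.T @ amps
```

    In the Bayesian step of the paper, posterior distributions are placed over amplitudes like `amps` (plus the pulse arrival time) rather than computing them by direct projection as here.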

  5. TreSpEx—Detection of Misleading Signal in Phylogenetic Reconstructions Based on Tree Information

    PubMed Central

    Struck, Torsten H

    2014-01-01

    Phylogenies of species or genes are commonplace nowadays in many areas of comparative biological studies. However, phylogenetic reconstructions can be confounded by artificial signals such as paralogy, long-branch attraction, saturation, or conflict between different datasets. These signals might eventually mislead the reconstruction even in phylogenomic studies employing hundreds of genes. Unfortunately, there has been no program allowing the detection of such effects in combination with an implementation into automatic process pipelines. TreSpEx (Tree Space Explorer) now combines different approaches (including statistical tests), which utilize tree-based information such as nodal support or patristic distances (PDs) to identify misleading signals. The program enables the parallel analysis of hundreds of trees and/or predefined gene partitions, and being command-line driven, it can be integrated into automatic process pipelines. TreSpEx is implemented in Perl and supported on Linux, Mac OS X, and MS Windows. Source code, binaries, and additional material are freely available at http://www.annelida.de/research/bioinformatics/software.html. PMID:24701118

  6. Towards robust topology of sparsely sampled data.

    PubMed

    Correa, Carlos D; Lindstrom, Peter

    2011-12-01

    Sparse, irregular sampling is becoming a necessity for reconstructing large and high-dimensional signals. However, the analysis of this type of data remains a challenge. One issue is the robust selection of neighborhoods, a crucial part of analytic tools such as topological decomposition, clustering and gradient estimation. When extracting the topology of sparsely sampled data, common neighborhood strategies such as k-nearest neighbors may lead to inaccurate results, either due to missing neighborhood connections, which introduce false extrema, or due to spurious connections, which conceal true extrema. Other neighborhoods, such as the Delaunay triangulation, are costly to compute and store even in relatively low dimensions. In this paper, we address these issues. We present two new types of neighborhood graphs: a variation on and a generalization of empty region graphs, which considerably improve the robustness of neighborhood-based analysis tools, such as topological decomposition. Our findings suggest that these neighborhood graphs lead to more accurate topological representations of low- and high-dimensional data sets at relatively low cost, both in terms of storage and computation time. We describe the implications of our work in the analysis and visualization of scalar functions, and provide general strategies for computing and applying our neighborhood graphs towards robust data analysis. PMID:22034302

  7. Separation and reconstruction of BCG and EEG signals during continuous EEG and fMRI recordings

    PubMed Central

    Xia, Hongjing; Ruan, Dan; Cohen, Mark S.

    2014-01-01

    Despite considerable effort to remove it, the ballistocardiogram (BCG) remains a major artifact in electroencephalographic data (EEG) acquired inside magnetic resonance imaging (MRI) scanners, particularly in continuous (as opposed to event-related) recordings. In this study, we have developed a new Direct Recording Prior Encoding (DRPE) method to extract and separate the BCG and EEG components from contaminated signals, and have demonstrated its performance by comparing it quantitatively to the popular Optimal Basis Set (OBS) method. Our modified recording configuration allows us to obtain representative bases of the BCG- and EEG-only signals. Further, we have developed an optimization-based reconstruction approach to maximally incorporate prior knowledge of the BCG/EEG subspaces, and of the signal characteristics within them. Both OBS and DRPE methods were tested with experimental data, and compared quantitatively using cross-validation. In the challenging continuous EEG studies, DRPE outperforms the OBS method by nearly sevenfold in separating the continuous BCG and EEG signals. PMID:25002836

  8. Acquisition and reconstruction of Raman and fluorescence signals for rat leg imaging

    NASA Astrophysics Data System (ADS)

    Demers, Jennifer-Lynn; Pogue, Brian; Leblond, Frederic; Esmonde-White, Francis; Okagbare, Paul; Morris, Michael

    2011-03-01

    Recovery of Raman or fluorescence signatures from within thin tissues benefits from model-based estimation of where the signal came from, especially if the signal passes through layers in which the absorption or scattering signatures distort the signal. Estimation of the signal strength requires appropriate normalization or model-based recovery, but the key to achieving good results is a good model of light transport. While diffusion models are routinely used for optical tomography of tissue, there is some thought that more precise radiation transport modeling is required for accurate estimation. However, diffusion is often used for small animal imaging because it is a practical approach that does not require knowledge of the scatter phase function at each point in the tissue. The question asked in this study is whether experimentally acquired data in small volumes such as a rodent leg can be accurately modeled and reconstructed using diffusion theory. This study uses leg geometries extracted from animal CT scans and liquid phantoms to study the diffusion approximations. The preliminary results show that under certain conditions the collected data follows the expected trend.

  9. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding and periodic variation of the load distribution and impact force on the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is out of the load zone. The non-stationary characteristic and impulse missing phenomenon reduce the effectiveness of the commonly used demodulation method on rolling element bearing fault diagnosis. Based on sparse representation theories, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are directly identified from the fault signal by the correlation filtering method. This leads to a high similarity between atoms and defect-induced impulses, and also a sharp reduction of the redundancy of the dictionary. To improve the matching accuracy and calculation speed of sparse coefficient solving, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearings under large rolling element sliding and low signal-to-noise ratio conditions.
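    The dictionary atoms described above, unit impulse responses of a damped second-order system, can be sketched as follows. The sampling rate, natural-frequency grid and damping ratios are illustrative values, not parameters identified from a real bearing signal:

```python
import numpy as np

def impulse_atom(fn, zeta, fs, n):
    """Unit-norm impulse response of a damped second-order system with
    natural frequency fn (Hz) and relative damping ratio zeta (0 < zeta < 1)."""
    t = np.arange(n) / fs
    wn = 2 * np.pi * fn
    wd = wn * np.sqrt(1 - zeta ** 2)      # damped natural frequency
    h = np.exp(-zeta * wn * t) * np.sin(wd * t)
    return h / np.linalg.norm(h)

fs, n = 12000, 1024
# Over-complete dictionary over a grid of frequencies and damping ratios.
D = np.column_stack([impulse_atom(fn, z, fs, n)
                     for fn in np.linspace(2000, 4000, 20)
                     for z in (0.02, 0.05, 0.1)])
print(D.shape)  # (1024, 60)
```

    Identifying fn and zeta from the signal first, as the paper does via correlation filtering, shrinks this grid to a handful of well-matched atoms before matching pursuit is run on each segment.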

  10. SparsePZ: Sparse Representation of Photometric Redshift PDFs

    NASA Astrophysics Data System (ADS)

    Carrasco Kind, Matias; Brunner, R. J.

    2015-11-01

    SparsePZ uses sparse basis representation to fully represent individual photometric redshift probability density functions (PDFs). This approach requires approximately half the parameters for the same multi-Gaussian fitting accuracy, and has the additional advantage that an entire PDF can be stored by using a 4-byte integer per basis function. Only 10-20 points per galaxy are needed to reconstruct both the individual PDFs and the ensemble redshift distribution, N(z), to an accuracy of 99.9 per cent when compared to the one built using the original PDFs computed with a resolution of δz = 0.01, reducing the required storage of 200 original values by a factor of 10-20. This basis representation can be directly extended to a cosmological analysis, thereby increasing computational performance without losing resolution or accuracy.

  11. Cylinder pressure reconstruction based on complex radial basis function networks from vibration and speed signals

    NASA Astrophysics Data System (ADS)

    Johnsson, Roger

    2006-11-01

    Methods to measure and monitor the cylinder pressure in internal combustion engines can contribute to reduced fuel consumption, noise and exhaust emissions. As direct measurements of the cylinder pressure are expensive and not suitable for vehicles on the road, indirect methods that estimate the cylinder pressure have great potential value. In this paper, a non-linear model based on complex radial basis function (RBF) networks is proposed for the reconstruction of in-cylinder pressure pulse waveforms. Input to the network is the Fourier transforms of both engine structure vibration and crankshaft speed fluctuation. The primary reason for the use of Fourier transforms is that different frequency regions of the signals are used for the reconstruction process. This approach also makes it easier to reduce the amount of information that is used as input to the RBF network. The complex RBF network was applied to measurements from a 6-cylinder ethanol-powered diesel engine over a wide range of running conditions. Prediction accuracy was validated by comparing a number of parameters between the measured and predicted cylinder pressure waveforms, such as maximum pressure, maximum rate of pressure rise and indicated mean effective pressure. The performance of the network was also evaluated for a number of untrained running conditions that differ in both speed and load from the trained ones. The results for the validation set were comparable to the trained conditions.

  12. Signal Analysis and Waveform Reconstruction of Shock Waves Generated by Underwater Electrical Wire Explosions with Piezoelectric Pressure Probes

    PubMed Central

    Zhou, Haibin; Zhang, Yongmin; Han, Ruoyu; Jing, Yan; Wu, Jiawei; Liu, Qiaojue; Ding, Weidong; Qiu, Aici

    2016-01-01

    Underwater shock waves (SWs) generated by underwater electrical wire explosions (UEWEs) have been widely studied and applied. Precise measurement of such SWs is important, but very difficult to accomplish due to their high peak pressure, steep rising edge and very short pulse width (on the order of tens of μs). This paper aims to analyze the signals obtained by two kinds of commercial piezoelectric pressure probes, and to reconstruct the correct pressure waveform from the distorted one measured by the probes. It is found that both PCB138 and Müller-plate probes can be used to measure the relative SW pressure value because of their good uniformities and linearities, but neither of them can obtain precise SW waveforms. In order to better approximate the real SW signal, we propose a new multi-exponential pressure waveform model, which accounts for the faster pressure decay at the early stage and the slower pressure decay at longer times. Based on this model and the energy conservation law, the pressure waveform obtained by the PCB138 probe has been reconstructed, and the reconstruction accuracy has been verified against the signals obtained by the Müller-plate probe. Reconstruction results show that the measured SW peak pressures are smaller than those of the real signal. The waveform reconstruction method is both reasonable and reliable. PMID:27110789
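    A multi-exponential pressure decay of the kind proposed, a fast early decay superposed on a slower tail, is just a sum of decaying exponentials. A minimal sketch (the amplitudes and time constants are illustrative, not fitted values from the paper):

```python
import numpy as np

def multi_exp_pressure(t, peaks, taus):
    """Multi-exponential shock-wave model: P(t) = sum_i A_i * exp(-t / tau_i)."""
    t = np.asarray(t)[:, None]
    return (np.asarray(peaks) * np.exp(-t / np.asarray(taus))).sum(axis=1)

t = np.linspace(0, 50e-6, 500)   # 50 microseconds
# Fast component (tau = 3 us) plus slow tail (tau = 20 us), amplitudes in MPa.
p = multi_exp_pressure(t, peaks=[30.0, 5.0], taus=[3e-6, 20e-6])
print(round(p[0], 1))  # 35.0  (peak pressure = sum of amplitudes at t = 0)
```

    Fitting the amplitudes and time constants of such a model to the probe output, under an energy-conservation constraint, is what lets the distorted measurement be corrected.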

  14. Sparsity: a ubiquitous but unexplored property of geophysical signals for multi-scale modeling and reconstruction

    NASA Astrophysics Data System (ADS)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.

    2012-04-01

    Many geophysical processes exhibit variability over a wide range of scales. Yet, in numerical modeling or remote sensing observations not all of this variability is explicitly resolved due to limitations in computational resources or sensor configurations. As a result, sub-grid scale parameterizations and downscaling/upscaling representations are essential. Such representations take advantage of scale invariance which has been theoretically or empirically documented in a wide range of geophysical processes, including precipitation, soil moisture, and topography. Here we present a new direction in the field of multi-scale analysis and reconstruction. It capitalizes on the fact that most geophysical signals are naturally redundant, due to spatial dependence and coherence over a range of scales, and thus when projected onto an appropriate space (e.g., Fourier or wavelet) only a few representation coefficients are non-zero; this property is called sparsity. The sparsity can serve as a priori knowledge to properly regularize the otherwise ill-posed inverse problem of creating information at scales smaller than resolved, which is at the heart of sub-grid scale and downscaling parameterizations. The same property of sparsity is also shown to play a revolutionary role in revisiting the problem of optimal estimation of non-Gaussian processes. Theoretical concepts are borrowed from the new field of compressive sampling and super-resolution, and the merits of the methodology are demonstrated using examples from precipitation downscaling, multi-scale data fusion and data assimilation.
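    Sparsity-regularized inversion of the kind advocated above is commonly solved by iterative soft thresholding (ISTA). A minimal sketch on a toy under-determined problem; the random observation operator and regularization weight are illustrative, not a real downscaling setup:

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100)) / np.sqrt(40)   # 40 observations of a 100-d signal
x_true = np.zeros(100)
x_true[[7, 42, 90]] = [2.0, -1.5, 1.0]         # sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print(np.count_nonzero(np.abs(x_hat) > 0.5))
```

    The l1 penalty is what injects the sparsity prior: it drives all but a few coefficients to zero, making the 40-observation inversion of a 100-dimensional signal well posed.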

  15. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses, which leads to dramatic 1000x speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) algorithm generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of the choice of regularization.

  16. Sparse-view image reconstruction in inverse-geometry CT (IGCT) for fast, low-dose, volumetric dental X-ray imaging

    NASA Astrophysics Data System (ADS)

    Hong, D. K.; Cho, H. S.; Oh, J. E.; Je, U. K.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Park, Y. O.; Choi, S. I.; Koo, Y. S.; Cho, H. M.

    2012-12-01

    As a new direction for computed tomography (CT) imaging, inverse-geometry CT (IGCT) has recently been introduced and is intended to overcome limitations of conventional cone-beam CT (CBCT) such as cone-beam artifacts, imaging dose, temporal resolution, scatter, cost, and so on. While the CBCT geometry consists of X-rays emanating from a small focal spot and collimated toward a larger detector, the IGCT geometry employs a large-area scanned source array with the X-ray beams collimated toward a smaller-area detector. In this research, we explored an effective IGCT reconstruction algorithm based on the total-variation (TV) minimization method and studied the feasibility of the IGCT geometry for potential applications to fast, low-dose volumetric dental X-ray imaging. We implemented the algorithm, performed systematic simulations, and evaluated the imaging characteristics quantitatively. Although much engineering and validation work is required to achieve clinical implementation, our preliminary results have demonstrated a potential for improved volumetric imaging with reduced dose.
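    The total-variation minimization at the core of such sparse-view reconstruction can be illustrated in 1-D. A minimal subgradient-descent sketch on a noisy step signal; the step size, regularization weight and iteration count are illustrative choices, and real CT reconstruction alternates this with a data-fidelity projection step:

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=4000, step=0.05):
    """1-D total-variation denoising by subgradient descent on
    0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|."""
    x = y.copy()
    for _ in range(n_iter):
        grad = x - y                     # gradient of the data term
        s = np.sign(np.diff(x))          # subgradient of the TV term
        grad[:-1] -= lam * s
        grad[1:] += lam * s
        x -= step * grad
    return x

rng = np.random.default_rng(6)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * rng.normal(size=100)
x = tv_denoise_1d(y, lam=0.5)
```

    TV regularization favors piecewise-constant solutions, which is why it preserves the edge at sample 50 while flattening the noise on either side; in CT the same prior suppresses streak artifacts from sparse-view data.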

  17. Climate signal age effects in boreal tree-rings: Lessons to be learned for paleoclimatic reconstructions

    NASA Astrophysics Data System (ADS)

    Konter, Oliver; Büntgen, Ulf; Carrer, Marco; Timonen, Mauri; Esper, Jan

    2016-06-01

    Age-related alteration in the sensitivity of tree-ring width (TRW) to climate variability has been reported for different forest species and environments. The resulting growth-climate response patterns are, however, often inconsistent, and similar assessments using maximum latewood density (MXD) are still missing. Here, we analyze climate signal age effects (CSAE, age-related changes in the climate sensitivity of tree growth) in a newly aggregated network of 692 Pinus sylvestris L. TRW and MXD series from northern Fennoscandia. Although summer temperature sensitivity of TRW (rAll = 0.48) ranges below that of MXD (rAll = 0.76), it declines for both parameters as cambial age increases. Assessment of CSAE for individual series further reveals decreasing correlation values as a function of time. This declining signal strength remains temporally robust and negative for MXD, while age-related trends in TRW exhibit resilient meanderings of positive and negative trends. Although CSAE are significant and temporally variable in both tree-ring parameters, MXD is more suitable for the development of climate reconstructions. Our results indicate that sampling of young and old trees, and testing for CSAE, should become routine for TRW and MXD data prior to any paleoclimatic endeavor.

  18. An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner

    NASA Astrophysics Data System (ADS)

    Bergman, Elad; Yeredor, Arie; Nevo, Uri

    2013-12-01

    Unilateral NMR devices are used in various applications including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method, designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by the use of redundancy in the acquired MR signal and by the use of the noise characteristics. Both types of data were incorporated into a Weighted Least Squares estimation approach. The method performance was evaluated with a series of imaging acquisitions applied on phantoms. Images were extracted from each measurement with the proposed method and were compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion with respect to gold-standard reference images. An integration of this method with further improvements may lead to a prominent reduction in imaging times, aiding the use of such scanners in imaging applications.
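    Weighted Least Squares estimation of the kind used above has the standard closed form x = (A'WA)^(-1) A'Wy with inverse-variance weights. A toy sketch (the model matrix and noise profile are illustrative, not the unilateral-scanner signal model):

```python
import numpy as np

def wls(A, y, noise_var):
    """Weighted least squares with per-sample noise variances (diagonal weights)."""
    W = np.diag(1.0 / np.asarray(noise_var))   # inverse-variance weighting
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

rng = np.random.default_rng(4)
A = rng.normal(size=(200, 5))                  # redundant measurements of 5 unknowns
x_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
noise_var = np.where(np.arange(200) < 100, 0.01, 1.0)  # second half is noisier
y = A @ x_true + rng.normal(size=200) * np.sqrt(noise_var)
x_hat = wls(A, y, noise_var)
```

    Down-weighting the noisier samples is exactly how knowledge of the noise characteristics is folded into the estimate; ordinary least squares would treat all 200 samples equally and give a noisier result.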

  19. Sparse principal component analysis in cancer research

    PubMed Central

    Hsu, Ying-Lin; Huang, Po-Yu; Chen, Dung-Tsa

    2015-01-01

    A critical challenging component in analyzing high-dimensional data in cancer research is how to reduce the dimension of data and how to extract relevant features. Sparse principal component analysis (PCA) is a powerful statistical tool that could help reduce data dimension and select important variables simultaneously. In this paper, we review several approaches for sparse PCA, including variance maximization (VM), reconstruction error minimization (REM), singular value decomposition (SVD), and probabilistic modeling (PM) approaches. A simulation study is conducted to compare PCA and the sparse PCAs. An example using a published gene signature in a lung cancer dataset is used to illustrate the potential application of sparse PCAs in cancer research. PMID:26719835

  20. Signal reconstruction of the slow wave and spike potential from electrogastrogram.

    PubMed

    Qin, Shujia; Ding, Wei; Miao, Lei; Xi, Ning; Li, Hongyi; Yang, Chunmin

    2015-01-01

    The gastric slow wave and the spike potential respectively represent the rhythm and the intensity of stomach motility. Because of the filtering effect of biological tissue, the electrogastrogram (EGG) cannot measure the spike potential on the abdominal surface in the time domain. Thus, the parameters of EGG currently adopted by clinical applications are only characteristics of the slow wave, such as the dominant frequency, the dominant power and the instability coefficients. The exclusion of spike potential analysis prevents EGG from serving as a diagnostic tool that comprehensively reveals the motility status of the stomach. To overcome this defect, this paper a) presents an EGG reconstruction method utilizing specified signal components decomposed by the discrete wavelet packet transform, and b) obtains a frequency band for the human gastric spike potential through fasting and postprandial cutaneous EGG experiments on twenty-five human volunteers. The results indicate the lower bound of the human gastric spike potential frequency is 0.96±0.20 Hz (58±12 cpm), and the upper bound is 1.17±0.23 Hz (70±14 cpm), neither of which has been reported before to the best of our knowledge. As an auxiliary validation of the proposed method, synchronous serosa-surface EGG acquisitions were carried out on two dogs. The frequency band results for the gastric spike potentials of the two dogs are 0.83-0.90 Hz (50-54 cpm) and 1.05-1.32 Hz (63-79 cpm), respectively. They lie in the reference range of 50-80 cpm proposed in previous literature, showing the feasibility of the reconstruction method in this paper. PMID:26405915
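    Reconstruction from selected wavelet components can be illustrated with a single-level Haar split in plain numpy; PyWavelets' WaveletPacket provides the full multi-level packet transform used in the paper. Here the detail band stands in for a selected frequency band, and the signal is synthetic:

```python
import numpy as np

def haar_split(x):
    """One-level Haar analysis: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_merge(a, d):
    """One-level Haar synthesis (exact inverse of haar_split)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(5)
x = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.3 * rng.normal(size=256)
a, d = haar_split(x)
# Keep only the high-frequency band: zero the approximation, then synthesize.
band = haar_merge(np.zeros_like(a), d)
print(np.allclose(haar_merge(a, d), x))  # True: perfect reconstruction
```

    Iterating such splits on both branches yields a wavelet packet tree, from which the components covering a target band (here, the spike-potential band) are retained before synthesis.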

  1. Balanced Sparse Model for Tight Frames in Compressed Sensing Magnetic Resonance Imaging

    PubMed Central

    Liu, Yunsong; Cai, Jian-Feng; Zhan, Zhifang; Guo, Di; Ye, Jing; Chen, Zhong; Qu, Xiaobo

    2015-01-01

    Compressed sensing has shown promise for accelerating magnetic resonance imaging. In this new technology, magnetic resonance images are usually reconstructed by enforcing their sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a new sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model recovers the solutions of all three models. The balanced model is found to perform comparably to the analysis model, and both achieve better results than the synthesis model regardless of the value of the balancing parameter. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed accelerated proximal gradient algorithm (APG) and the alternating direction method of multipliers for the balanced model (ADMM-B). PMID:25849209
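
    The synthesis model mentioned above seeks sparse coefficients x minimizing 0.5*||A x - y||_2^2 + lam*||x||_1. A minimal iterative soft-thresholding (ISTA) solver for this model is sketched below; it is a generic baseline for intuition, not the C-SALSA-B algorithm proposed in the paper, and the problem sizes are arbitrary:

```python
import numpy as np

def ista(A, y, lam, iters=1000):
    """Iterative soft-thresholding for the synthesis sparse model:
        min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1
    (illustrative sketch, not the paper's algorithm)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

    With a random Gaussian measurement matrix and a few-sparse ground truth, the iterates converge to a vector close to the true coefficients, which is the synthesis-model reconstruction the abstract describes.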

  2. A Non-Uniformly Under-Sampled Blade Tip-Timing Signal Reconstruction Method for Blade Vibration Monitoring

    PubMed Central

    Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun

    2015-01-01

    High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be adequately monitored by uniform BTT sampling, so non-equally mounted probes are used, which results in non-uniformly sampled signals. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a major challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired with only two probes. PMID:25621612

  3. Sparse Bayesian learning for DOA estimation with mutual coupling.

    PubMed

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-01-01

    Sparse Bayesian learning (SBL) has brought renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to an imperfect array manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student's t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to measurement noise. PMID:26501284
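
    For intuition, a bare-bones SBL recovery via EM is sketched below for a generic linear model. It omits the paper's mutual-coupling estimation, SVD reduction, and Student's t hierarchy, and assumes the noise variance is known; each coefficient gets a zero-mean Gaussian prior with its own variance gamma_j, updated by EM:

```python
import numpy as np

def sbl(A, y, noise_var=1e-4, iters=50):
    """Basic sparse Bayesian learning via EM (illustrative sketch).

    E-step: posterior mean/covariance of x given the hyperparameters gamma.
    M-step: gamma_j = E[x_j^2], which drives irrelevant coefficients to zero.
    """
    m, n = A.shape
    gamma = np.ones(n)
    for _ in range(iters):
        Sigma = np.linalg.inv(A.T @ A / noise_var + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / noise_var
        gamma = np.maximum(mu ** 2 + np.diag(Sigma), 1e-12)  # floor for stability
    return mu
```

    On a noise-free few-sparse problem the posterior mean concentrates on the true support, which is the sparsity-enforcing behavior the abstract relies on.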

  5. Reconstructing the nature of the first cosmic sources from the anisotropic 21-cm signal.

    PubMed

    Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad

    2015-03-13

    The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to x-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z∼15), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from x-ray sources. This makes possible a model-independent reconstruction of the x-ray spectrum of the earliest sources of cosmic heating. PMID:25815921

  6. Real-Time Sensor Validation, Signal Reconstruction, and Feature Detection for an RLV Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.; Fulton, Christopher E.; Binder, Michael P.; Maul, William A., III; Meyer, Claudia M.

    1998-01-01

    A real-time system for validating sensor health has been developed in support of the reusable launch vehicle program. This system was designed for use in a propulsion testbed as part of an overall effort to improve the safety, diagnostic capability, and cost of operation of the testbed. The sensor validation system was designed and developed at the NASA Lewis Research Center and integrated into a propulsion checkout and control system as part of an industry-NASA partnership, led by Rockwell International for the Marshall Space Flight Center. The system includes modules for sensor validation, signal reconstruction, and feature detection and was designed to maximize portability to other applications. Review of test data from initial integration testing verified real-time operation and showed the system to perform correctly on both hard and soft sensor failure test cases. This paper discusses the design of the sensor validation and supporting modules developed at LeRC and reviews results obtained from initial test cases.

  8. Protein crystal structure from non-oriented, single-axis sparse X-ray data.

    PubMed

    Wierman, Jennifer L; Lan, Ti-Yen; Tate, Mark W; Philipp, Hugh T; Elser, Veit; Gruner, Sol M

    2016-01-01

    X-ray free-electron lasers (XFELs) have inspired the development of serial femtosecond crystallography (SFX) as a method to solve the structure of proteins. SFX datasets are collected from a sequence of protein microcrystals injected across ultrashort X-ray pulses. The idea behind SFX is that diffraction from the intense, ultrashort X-ray pulses leaves the crystal before the crystal is obliterated by the effects of the X-ray pulse. The success of SFX at XFELs has catalyzed interest in analogous experiments at synchrotron-radiation (SR) sources, where data are collected from many small crystals and the ultrashort pulses are replaced by exposure times that are kept short enough to avoid significant crystal damage. The diffraction signal from each short exposure is so 'sparse' in recorded photons that the process of recording the crystal intensity is itself a reconstruction problem. Using the EMC algorithm, a successful reconstruction is demonstrated here in a sparsity regime where there are no Bragg peaks that conventionally would serve to determine the orientation of the crystal in each exposure. In this proof-of-principle experiment, a hen egg-white lysozyme (HEWL) crystal rotating about a single axis was illuminated by an X-ray beam from an X-ray generator to simulate the diffraction patterns of microcrystals from synchrotron radiation. Millions of these sparse frames, typically containing only ∼200 photons per frame, were recorded using a fast-framing detector. It is shown that reconstruction of three-dimensional diffraction intensity is possible using the EMC algorithm, even with these extremely sparse frames and without knowledge of the rotation angle. Further, the reconstructed intensity can be phased and refined to solve the protein structure using traditional crystallographic software. This suggests that synchrotron-based serial crystallography of micrometre-sized crystals can be practical with the aid of the EMC algorithm even in cases where the data are

  9. Sparse Representation for Infrared Dim Target Detection via a Discriminative Over-Complete Dictionary Learned Online

    PubMed Central

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-01-01

    Structural over-complete dictionaries such as the Gabor function, and discriminative over-complete dictionaries that are learned offline and classified manually, have difficulty representing natural images with ideal sparseness and enhancing the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then automatically divided into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding background, by the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary not only captures significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, yet cannot be sparsely decomposed over the opposite over-complete dictionary, so their residuals after reconstruction by a prescribed number of target and background atoms differ markedly. Experimental results show that the proposed approach not only improves sparsity more efficiently but also enhances the performance of small target detection more effectively. PMID:24871988
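
    The core decision rule, comparing reconstruction residuals over class-specific dictionaries, can be sketched generically. In this sketch the random dictionaries and the plain least-squares fit stand in for the K-SVD-trained dictionaries and sparse coding of the paper:

```python
import numpy as np

def classify_by_residual(x, D_target, D_background):
    """Assign a patch to whichever dictionary reconstructs it with the
    smaller least-squares residual (illustrative stand-in for the
    sparse-coding residual comparison described in the paper)."""
    residuals = []
    for D in (D_target, D_background):
        c, *_ = np.linalg.lstsq(D, x, rcond=None)  # best coefficients over D
        residuals.append(np.linalg.norm(x - D @ c))
    return "target" if residuals[0] < residuals[1] else "background"
```

    A patch synthesized from the target dictionary's atoms has a near-zero residual over that dictionary but a generically large one over the background dictionary, so the rule separates the two classes.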

  10. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition, and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an over-complete dictionary.
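
    A discrete-time simulation of LCA dynamics can be sketched as follows; the dictionary, threshold, and time constant are illustrative choices, not values from the patent:

```python
import numpy as np

def lca(Phi, s, lam=0.05, tau=10.0, steps=400):
    """Locally Competitive Algorithm (illustrative sketch).

    Leaky-integrator node states u are driven by Phi^T s and inhibited
    laterally by active neighbors through (Phi^T Phi - I); the thresholded
    states a form the sparse code."""
    b = Phi.T @ s                              # feed-forward drive
    G = Phi.T @ Phi - np.eye(Phi.shape[1])     # lateral inhibition weights
    u = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
        u += (b - u - G @ a) / tau             # Euler step of the node dynamics
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
```

    At steady state, only the nodes that win the inhibition compete remain active, yielding a sparse code whose reconstruction Phi @ a approximates the input.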

  11. Online sparse representation for remote sensing compressed-sensed video sampling

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Recently, the emerging theory of Compressed Sensing (CS) has brought a major breakthrough in data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples taken at a rate well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear, non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, a low-complexity technique suited to resource-limited sensors, the frames of an RS video sequence are divided into key frames (K frames) and non-key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs), and each GOP consists of one K frame followed by several CS frames. Both are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, side information (SI) is generated for the CS frames using the traditional motion-compensated interpolation (MCI) technique applied to the reconstructed key frames. An over-complete dictionary is trained on the SI by dictionary learning methods, including ICA-like, PCA, K-SVD, and MOD. Using these dictionaries, the CS frames are reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, evaluated by peak signal-to-noise ratio (PSNR), is compared with other online sparse representation algorithms. The simulation results show its advantages in reducing reconstruction time and its robustness in reconstruction performance when applied to remote sensing video reconstruction.

  12. Paleoenvironmental reconstruction of Lake Azul (Azores archipelago, Portugal) and its implications for the NAO signal.

    NASA Astrophysics Data System (ADS)

    Jesús Rubio, Maria; Sanchez, Guiomar; Saez, Alberto; Vázquez-Loureiro, David; Bao, Roberto; José Pueyo, Juan; Gómez-Paccard, Miriam; Gonçalves, Vitor; Raposeiro, Pedro M.; Francus, Pierre; Hernández, Armand; Margalef, Olga; Buchaca, Teresa; Pla, Sergi; Barreiro-Lostres, Fernando; Valero-Garcés, Blas L.; Giralt, Santiago

    2013-04-01

    A radiocarbon date at the base of this fine mixture indicates that the record spans the last ca 650 cal. years B.P., which corresponds to the last recorded eruption. The dark brown layers are dominated by organic matter (low XRF signal and almost no diatoms), whereas the light brown facies are mainly made up of terrigenous particles (high XRF signal and a high content of benthic diatoms) and vascular plant macroremains. Bulk organic matter analyses have revealed that algae constitute the main component of the organic fraction. However, the organic matter in the dark layers is composed of C3 plants, coherent with the clastic nature of this facies deposited during flood events. An increase in precipitation, ruled by the negative phase of the NAO, together with the steep borders of the Sete Cidades crater, prompts a substantial increase in erosion of the catchment and hence enhanced runoff reaching Lake Azul and the occurrence of flood events. Therefore, identifying, characterizing and counting the dark layers would make it possible to reconstruct the intensity and periodicity of the negative phase of the NAO climate mode.

  13. Sparse and redundant representations for inverse problems and recognition

    NASA Astrophysics Data System (ADS)

    Patel, Vishal M.

    Sparse and redundant representation of data enables the description of signals as linear combinations of a few atoms from a dictionary. In this dissertation, we study applications of sparse and redundant representations in inverse problems and object recognition. Furthermore, we propose two novel imaging modalities based on the recently introduced theory of Compressed Sensing (CS). This dissertation consists of four major parts. In the first part of the dissertation, we study a new type of deconvolution algorithm that is based on estimating the image from a shearlet decomposition. Shearlets provide a multi-directional and multi-scale decomposition that has been mathematically shown to represent distributed discontinuities such as edges better than traditional wavelets. We develop a deconvolution algorithm that allows the approximate inversion operator to be controlled on a multi-scale and multi-directional basis. Furthermore, we develop a method for the automatic determination of the threshold values for the noise shrinkage for each scale and direction, without explicit knowledge of the noise variance, using a generalized cross validation method. In the second part of the dissertation, we study a reconstruction method that recovers highly undersampled images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. Our method makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. We demonstrate by experiments that this new technique handles both random and restricted sampling scenarios more flexibly than its competitors. In the third part of the dissertation, we introduce a novel Synthetic Aperture Radar (SAR) imaging modality which can provide a high resolution map of the spatial distribution of targets and terrain using a significantly reduced number of needed

  14. Group sparsity based spectrum estimation of harmonic speech signals

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin D.; Wang, Ben

    2015-05-01

    Spectrum analysis of speech signals is important for their detection, recognition, and separation. Speech signals are nonstationary with time-varying frequencies which, when analyzed by Fourier analysis over a short time window, exhibit harmonic spectra, i.e., the fundamental frequencies are accompanied by multiple associated harmonic frequencies. With proper modeling, such harmonic signal components can be cast as group sparse and solved using group sparse signal reconstruction methods. In this case, all harmonic components contribute to effective signal detection and fundamental frequency estimation with improved reliability and spectrum resolution. The estimation of the fundamental frequency signature is implemented using the block sparse Bayesian learning technique, which is known to provide high-resolution spectrum estimations. Simulation results confirm the superiority of the proposed technique when compared to the conventional STFT-based methods.
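
    The "keep or kill a whole harmonic group" behavior central to group sparsity shows up in the proximal operator of the group penalty. The sketch below is a generic illustration of that operator (not the block sparse Bayesian learning method used in the paper); a group would collect a candidate fundamental together with its harmonics:

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of lam * sum_g ||x_g||_2: shrink each group's
    l2 norm by lam, so a fundamental and its harmonics are zeroed or
    retained together (illustrative sketch)."""
    out = np.zeros_like(x)
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > lam:
            out[g] = (1.0 - lam / norm_g) * x[g]  # shrink the whole group
        # else: the entire group is set to zero
    return out
```

    A group whose combined energy exceeds the threshold survives (shrunk), while a weak group is eliminated entirely, which is how harmonically related components reinforce each other in detection.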

  15. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data

    SciTech Connect

    Chartrand, Rick

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
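
    A simple nonconvex counterpart to l1 reconstruction, given here for intuition rather than as the Fourier-based lp algorithm of the paper, is iterative hard thresholding: after each gradient step, keep only the k largest entries, which directly enforces the l0 constraint:

```python
import numpy as np

def iht(A, y, k, iters=500):
    """Iterative hard thresholding (illustrative nonconvex sketch):
    gradient step on ||A x - y||^2, then keep the k largest entries."""
    L = np.linalg.norm(A, 2) ** 2            # step-size normalization
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L        # gradient step
        x = np.zeros_like(g)
        idx = np.argsort(np.abs(g))[-k:]     # support of the k largest entries
        x[idx] = g[idx]
    return x
```

    In the noise-free regime with a random Gaussian matrix, this recovers a k-sparse signal from far fewer measurements than unknowns, mirroring the abstract's point that nonconvex formulations can further reduce the required number of samples.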

  16. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  17. Sparse representation with kernels.

    PubMed

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Chia, Liang-Tien

    2013-02-01

    Recent research has shown the initial success of sparse coding (Sc) in solving many computer vision tasks. Motivated by the fact that the kernel trick can capture nonlinear similarity of features, which helps in finding a sparse representation of nonlinear features, we propose kernel sparse representation (KSR). Essentially, KSR is a sparse coding technique in a high-dimensional feature space mapped by an implicit mapping function. We apply KSR to feature coding in image classification, face recognition, and kernel matrix approximation. More specifically, by incorporating KSR into spatial pyramid matching (SPM), we develop KSRSPM, which achieves good performance for image classification. Moreover, KSR-based feature coding can be shown to be a generalization of the efficient match kernel and an extension of Sc-based SPM. We further show that our proposed KSR using a histogram intersection kernel (HIK) can be considered a soft assignment extension of HIK-based feature quantization in the feature coding process. Beyond feature coding, compared with sparse coding, KSR can learn more discriminative sparse codes and achieve higher accuracy for face recognition. Moreover, KSR can also be applied to kernel matrix approximation in large-scale learning tasks, where it demonstrates robustness, especially when a small fraction of the data is used. Extensive experimental results demonstrate promising results of KSR in image classification, face recognition, and kernel matrix approximation. All these applications prove the effectiveness of KSR in computer vision and machine learning tasks. PMID:23014744

  18. An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.

    PubMed

    Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi

    2016-02-01

    Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. This seizure detection method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from the original ictal and interictal training samples with an online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44%, with a false detection rate of 0.23/h and an average latency of -5.14 s, have been achieved with our proposed method. PMID:26542318
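
    The elastic net constraint combines an l1 and an l2 penalty on the coefficients. A minimal coordinate-descent solver for the generic elastic net problem is sketched below (an illustration of the penalty only, not the paper's seizure-detection pipeline; the penalty weights are arbitrary):

```python
import numpy as np

def elastic_net_cd(A, y, lam1, lam2, iters=100):
    """Naive coordinate descent for
        min_x 0.5*||A x - y||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2
    (illustrative sketch). Each coordinate update is soft-thresholded
    by lam1 and shrunk by lam2."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)
    r = y.copy()                               # running residual y - A x
    for _ in range(iters):
        for j in range(n):
            rho = A[:, j] @ r + col_sq[j] * x[j]
            new = np.sign(rho) * max(abs(rho) - lam1, 0.0) / (col_sq[j] + lam2)
            r += A[:, j] * (x[j] - new)        # keep the residual consistent
            x[j] = new
    return x
```

    The l1 term zeroes out irrelevant coefficients while the l2 term stabilizes the fit among correlated atoms, which is the over-fitting safeguard the abstract refers to.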

  19. Characterizing heterogeneity among virus particles by stochastic 3D signal reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Nan; Gong, Yunye; Wang, Qiu; Zheng, Yili; Doerschuk, Peter C.

    2015-09-01

    In single-particle cryo electron microscopy, many electron microscope images each of a single instance of a biological particle such as a virus or a ribosome are measured and the 3-D electron scattering intensity of the particle is reconstructed by computation. Because each instance of the particle is imaged separately, it should be possible to characterize the heterogeneity of the different instances of the particle as well as a nominal reconstruction of the particle. In this paper, such an algorithm is described and demonstrated on the bacteriophage Hong Kong 97. The algorithm is a statistical maximum likelihood estimator computed by an expectation maximization algorithm implemented in Matlab software.

  20. X-ray computed tomography using curvelet sparse regularization

    SciTech Connect

    Wieczorek, Matthias Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.

    2015-04-15

    Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method’s strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  1. Pollen reconstructions, tree-rings and early climate data from Minnesota, USA: a cautionary tale of bias and signal attenuation

    NASA Astrophysics Data System (ADS)

    St-Jacques, J. M.; Cumming, B. F.; Smol, J. P.; Sauchyn, D.

    2015-12-01

    High-resolution proxy reconstructions are essential to assess the rate and magnitude of anthropogenic global warming. High-resolution pollen records are being critically examined for the production of accurate climate reconstructions of the last millennium, often as extensions of tree-ring records. Past climate inference from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have directly altered vegetation, and hence modern pollen deposition is a product of landscape disturbance and climate, unlike in the past when climate-driven processes dominated. This could cause serious bias in pollen reconstructions. In the US Midwest, direct human impacts have greatly altered the vegetation and pollen rain since Euro-American settlement in the mid-19th century. Using instrumental climate data from the early 1800s from Fort Snelling (Minnesota), we assessed the bias from the conventional method of inferring climate from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The pre-settlement calibration set provides more accurate reconstructions of 19th-century temperature than the modern set does. When both calibration sets are used to reconstruct temperatures since AD 1116 from a varve-dated pollen record from Lake Mina, Minnesota, the conventional method produces significant low-frequency (centennial-scale) signal attenuation and a positive bias of 0.8-1.7 °C, resulting in an overestimation of Little Ice Age temperature and an underestimation of anthropogenic warming. We also compared the pollen-inferred moisture reconstruction to a four-century tree-ring-inferred moisture record from Minnesota and the Dakotas, which shows that the tree-ring reconstruction is biased towards dry conditions and records wet periods relatively poorly, giving a false impression of regional aridity. The tree-ring chronology also suggests varve

  2. Grassmannian sparse representations

    NASA Astrophysics Data System (ADS)

    Azary, Sherif; Savakis, Andreas

    2015-05-01

    We present Grassmannian sparse representations (GSR), a sparse representation Grassmann learning framework for efficient classification. Sparse representation classification offers a powerful approach for recognition in a variety of contexts. However, a major drawback of sparse representation methods is their computational performance and memory utilization for high-dimensional data. A Grassmann manifold is a space that promotes smooth surfaces where points represent subspaces and the relationship between points is defined by the mapping of an orthogonal matrix. Grassmann manifolds are well suited for computer vision problems because they promote high between-class discrimination and within-class clustering, while offering computational advantages by mapping each subspace onto a single point. The GSR framework combines Grassmannian kernels and sparse representations, including regularized least squares and least angle regression, to improve high accuracy recognition while overcoming the drawbacks of performance and dependencies on high dimensional data distributions. The effectiveness of GSR is demonstrated on computationally intensive multiview action sequences, three-dimensional action sequences, and face recognition datasets.

  3. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
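
    The associative-recall behaviour described above can be sketched with a minimal Kanerva-style memory: random binary hard locations, counter storage, and activation of all locations within a Hamming-distance radius of the cue. The sizes, radius, and noise level below are arbitrary demo choices, not the project's parameters.

```python
import numpy as np

class SparseDistributedMemory:
    """Minimal sparse distributed memory sketch (binary addresses, counters)."""
    def __init__(self, n_locations=2000, dim=256, radius=120, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, addr):
        # Hard locations within the Hamming-distance access radius of addr
        return np.sum(self.addresses != addr, axis=1) <= self.radius

    def write(self, addr, data):
        sel = self._active(addr)
        self.counters[sel] += np.where(data == 1, 1, -1)  # bipolar increments

    def read(self, addr):
        sel = self._active(addr)
        sums = self.counters[sel].sum(axis=0)
        return (sums > 0).astype(int)                     # majority vote per bit

# Autoassociative demo: store a pattern, recall it from a corrupted cue
rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, 256)
sdm = SparseDistributedMemory()
sdm.write(pattern, pattern)
cue = pattern.copy()
flip = rng.choice(256, 20, replace=False)
cue[flip] ^= 1                     # corrupt 20 of 256 bits
recalled = sdm.read(cue)
```

    Because the pattern is distributed over many hard locations, the read from the partially matching cue still recovers the stored pattern, which is the "retrieve from partial cues" property the abstract describes.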

  4. Millennial precipitation reconstruction for the Jemez Mountains, New Mexico, reveals changing drought signal

    USGS Publications Warehouse

    Touchan, R.; Woodhouse, C.A.; Meko, D.M.; Allen, C.

    2011-01-01

    Drought is a recurring phenomenon in the American Southwest. Since the frequency and severity of hydrologic droughts and other hydroclimatic events are of critical importance to the ecology and rapidly growing human population of this region, knowledge of long-term natural hydroclimatic variability is valuable for resource managers and policy-makers. An October-June precipitation reconstruction for the period AD 824-2007 was developed from multi-century tree-ring records of Pseudotsuga menziesii (Douglas-fir), Pinus strobiformis (Southwestern white pine) and Pinus ponderosa (Ponderosa pine) for the Jemez Mountains in Northern New Mexico. Calibration and verification statistics for the period 1896-2007 show a high level of skill, and account for a significant portion of the observed variance (>50%) irrespective of which period is used to develop or verify the regression model. Split-sample validation supports our use of a reconstruction model based on the full period of reliable observational data (1896-2007). A recent segment of the reconstruction (2000-2006) emerges as the driest 7-year period sensed by the trees in the entire record. That this period was only moderately dry in precipitation anomaly likely indicates accentuated stress from other factors, such as warmer temperatures. Correlation field maps of actual and reconstructed October-June total precipitation, sea surface temperatures and 500-mb geopotential heights show characteristics that are similar to those indicative of El Niño-Southern Oscillation patterns, particularly with regard to ocean and atmospheric conditions in the equatorial and north Pacific. Our 1184-year reconstruction of hydroclimatic variability provides long-term perspective on current and 20th century wet and dry events in Northern New Mexico, is useful to guide expectations of future variability, aids sustainable water management, provides scenarios for drought planning and as inputs for hydrologic models under a broader range of

  5. Protein crystal structure from non-oriented, single-axis sparse X-ray data

    PubMed Central

    Wierman, Jennifer L.; Lan, Ti-Yen; Tate, Mark W.; Philipp, Hugh T.; Elser, Veit; Gruner, Sol M.

    2016-01-01

    X-ray free-electron lasers (XFELs) have inspired the development of serial femtosecond crystallography (SFX) as a method to solve the structure of proteins. SFX datasets are collected from a sequence of protein microcrystals injected across ultrashort X-ray pulses. The idea behind SFX is that diffraction from the intense, ultrashort X-ray pulses leaves the crystal before the crystal is obliterated by the effects of the X-ray pulse. The success of SFX at XFELs has catalyzed interest in analogous experiments at synchrotron-radiation (SR) sources, where data are collected from many small crystals and the ultrashort pulses are replaced by exposure times that are kept short enough to avoid significant crystal damage. The diffraction signal from each short exposure is so ‘sparse’ in recorded photons that the process of recording the crystal intensity is itself a reconstruction problem. Using the EMC algorithm, a successful reconstruction is demonstrated here in a sparsity regime where there are no Bragg peaks that conventionally would serve to determine the orientation of the crystal in each exposure. In this proof-of-principle experiment, a hen egg-white lysozyme (HEWL) crystal rotating about a single axis was illuminated by an X-ray beam from an X-ray generator to simulate the diffraction patterns of microcrystals from synchrotron radiation. Millions of these sparse frames, typically containing only ∼200 photons per frame, were recorded using a fast-framing detector. It is shown that reconstruction of three-dimensional diffraction intensity is possible using the EMC algorithm, even with these extremely sparse frames and without knowledge of the rotation angle. Further, the reconstructed intensity can be phased and refined to solve the protein structure using traditional crystallographic software. This suggests that synchrotron-based serial crystallography of micrometre-sized crystals can be practical with the aid of the EMC algorithm even in cases where the data

  6. Structured Multifrontal Sparse Solver

    Energy Science and Technology Software Center (ESTSC)

    2014-05-01

    StruMF is an algebraic structured preconditioner for the iterative solution of large sparse linear systems. The preconditioner corresponds to a multifrontal variant of sparse LU factorization in which some dense blocks of the factors are approximated with low-rank matrices. It is algebraic in that it only requires the linear system itself, and the approximation threshold that determines the accuracy of individual low-rank approximations. Favourable rank properties are obtained using a block partitioning which is a refinement of the partitioning induced by nested dissection ordering.

  7. Increasing signal-to-noise ratio of reconstructed digital holograms by using light spatial noise portrait of camera's photosensor

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.

    2015-01-01

    Digital holography is a technique that includes recording of an interference pattern with a digital photosensor, processing of the obtained holographic data and reconstruction of the object wavefront. Increasing the signal-to-noise ratio (SNR) of reconstructed digital holograms is especially important in such fields as image encryption, pattern recognition, and static and dynamic display of 3D scenes. In this paper, compensation of the photosensor light spatial noise portrait (LSNP) to increase the SNR of reconstructed digital holograms is proposed. To verify the proposed method, numerical experiments with computer-generated Fresnel holograms with resolution equal to 512×512 elements were performed. Registration of shots with the digital camera Canon EOS 400D was simulated. It is shown that use of the averaging-over-frames method alone allows the SNR to be increased only up to 4 times, with further increase limited by spatial noise. Application of the LSNP compensation method in conjunction with the averaging-over-frames method allows for a 10 times SNR increase. This value was obtained for an LSNP measured with 20% error. If a more accurate LSNP is used, the SNR can be increased up to 20 times.
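
    The effect quantified above, in which frame averaging suppresses only temporal noise while a fixed spatial pattern remains until it is compensated, can be reproduced with a toy simulation. Everything here is synthetic: the noise levels are invented and the noise portrait is assumed perfectly known, unlike the 20%-error measured LSNP in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
true_img = rng.uniform(100, 200, size=(64, 64))  # stand-in for the ideal frame
lsnp = rng.normal(0, 5, size=(64, 64))           # fixed light spatial noise portrait
# Each shot carries the fixed pattern plus independent temporal noise
frames = [true_img + lsnp + rng.normal(0, 10, size=(64, 64)) for _ in range(50)]

avg = np.mean(frames, axis=0)   # averaging suppresses only the temporal noise
compensated = avg - lsnp        # LSNP compensation removes the fixed pattern too

def snr_db(estimate):
    """SNR of an estimate of true_img, in dB."""
    err = estimate - true_img
    return 10 * np.log10(np.mean(true_img ** 2) / np.mean(err ** 2))
```

    Running this shows the same qualitative ordering as the abstract: averaging improves on a single frame, and subtracting the spatial noise portrait improves on averaging alone.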

  8. Sparse representation of complex MRI images.

    PubMed

    Nandakumar, Hari Prasad; Ji, Jim

    2008-01-01

    Sparse representation of images acquired from Magnetic Resonance Imaging (MRI) has several potential applications. MRI is unique in that the raw images are complex. Complex wavelet transforms (CWT) can be used to produce more flexible signal representations than the Discrete Wavelet Transform (DWT). In this work, five different schemes using CWT or DWT are tested for sparse representation of MRI images which are in the form of complex values, separate real/imaginary parts, or separate magnitude/phase. The experimental results on real in-vivo MRI images show that an appropriate CWT, e.g., the dual-tree CWT (DTCWT), can achieve better sparsity than DWT with similar Mean Square Error. PMID:19162677

  9. A bootstrap algorithm for temporal signal reconstruction in the presence of noise from its fractional Fourier transformed intensity spectra

    SciTech Connect

    Tan, Cheng-Yang (Fermilab)

    2011-02-01

    A bootstrap algorithm for reconstructing the temporal signal from four of its fractional Fourier intensity spectra in the presence of noise is described. An optical arrangement is proposed which realises the bootstrap method for the measurement of ultrashort laser pulses. The measurement of short laser pulses which are less than 1 ps is an ongoing challenge in optical physics. One reason is that no oscilloscope exists today which can directly measure the time structure of these pulses, and so it becomes necessary to invent other techniques which indirectly provide the necessary information for temporal pulse reconstruction. One method called FROG (frequency resolved optical gating) has been in use since 1991 and is one of the popular methods for recovering these types of short pulses. The idea behind FROG is the use of multiple time-correlated pulse measurements in the frequency domain for the reconstruction. Multiple data sets are required because only intensity information is recorded and not phase, and thus by collecting multiple data sets, there are enough redundant measurements to yield the original time structure, but not necessarily uniquely (or even up to an arbitrary constant phase offset). The objective of this paper is to describe another method which is simpler than FROG. Instead of collecting many auto-correlated data sets, only two spectral intensity measurements of the temporal signal are needed in the absence of noise. The first can be from the intensity components of its usual Fourier transform and the second from its FrFT (fractional Fourier transform). In the presence of noise, a minimum of four measurements are required with the same FrFT order but with two different apertures. Armed with these two or four measurements, a unique solution up to a constant phase offset can be constructed.

  10. On ECG reconstruction using weighted-compressive sensing

    PubMed Central

    Kassim, Ashraf A.

    2014-01-01

    The potential of the new weighted-compressive sensing approach for efficient reconstruction of electrocardiograph (ECG) signals is investigated. This is motivated by the observation that ECG signals are hugely sparse in the frequency domain and the sparsity changes slowly over time. The underlying idea of this approach is to extract an estimated probability model for the signal of interest, and then use this model to guide the reconstruction process. The authors show that the weighted-compressive sensing approach is able to achieve reconstruction performance comparable with the current state-of-the-art discrete wavelet transform-based method, but with substantially less computational cost to enable it to be considered for use in the next generation of miniaturised wearable ECG monitoring devices. PMID:26609381
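
    The idea of guiding reconstruction with an estimated probability model can be sketched, very loosely and not as the authors' method, as weighted l1 minimization: frequency bins that the prior deems likely active receive smaller penalty weights. The solver below is plain iterative soft-thresholding, a real cosine basis stands in for the frequency domain, and the support prior is simply assumed known for the demo.

```python
import numpy as np

def weighted_ista(Phi, y, w, lam=0.05, n_iter=2000):
    """Weighted ISTA for min_x 0.5*||Phi x - y||^2 + lam * sum_i w_i * |x_i|."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = x - step * Phi.T @ (Phi @ x - y)    # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam * w, 0.0)
    return x

# A signal that is sparse in a real, DCT-like frequency basis
rng = np.random.default_rng(0)
N, M = 128, 40
n = np.arange(N)
C = np.cos(np.pi * np.outer(n + 0.5, n) / N)   # synthesis basis (cosine atoms)
coef = np.zeros(N)
coef[[5, 12]] = [3.0, -2.0]                    # two active frequency bins
signal = C @ coef
rows = rng.choice(N, M, replace=False)         # random time-domain samples
Phi, y = C[rows], signal[rows]

weights = np.ones(N)
weights[[5, 12]] = 0.1   # prior model (assumed known here) favours these bins
coef_hat = weighted_ista(Phi, y, weights)
```

    In the ECG setting the prior weights would instead be extracted from previously reconstructed windows, exploiting the slow change of the sparsity pattern over time.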

  11. On ECG reconstruction using weighted-compressive sensing.

    PubMed

    Zonoobi, Dornoosh; Kassim, Ashraf A

    2014-06-01

    The potential of the new weighted-compressive sensing approach for efficient reconstruction of electrocardiograph (ECG) signals is investigated. This is motivated by the observation that ECG signals are hugely sparse in the frequency domain and the sparsity changes slowly over time. The underlying idea of this approach is to extract an estimated probability model for the signal of interest, and then use this model to guide the reconstruction process. The authors show that the weighted-compressive sensing approach is able to achieve reconstruction performance comparable with the current state-of-the-art discrete wavelet transform-based method, but with substantially less computational cost to enable it to be considered for use in the next generation of miniaturised wearable ECG monitoring devices. PMID:26609381

  12. An infrared image super-resolution reconstruction method based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei

    2016-05-01

    Limited by the properties of infrared detectors and camera lenses, infrared images often lack detail and appear indistinct. The spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this paper presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation method and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction issue in CS theory. In our work, the sparse transformation matrix is obtained through a difference operation on the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with a redundant dictionary obtained by sample training such as K-SVD. The experimental results show that our method can achieve favorable performance and good stability with low algorithm complexity.
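
    The OMP step named above can be sketched in a generic compressive-sensing setting; random Gaussian measurements replace the paper's analytically derived infrared measurement matrix, and the sparsity level is assumed known.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of A to explain y."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit all selected coefficients jointly by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Demo: exact recovery of a 3-sparse vector from 40 noiseless measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)        # unit-norm atoms
x_true = np.zeros(100)
x_true[[3, 30, 77]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = omp(A, y, k=3)
```

    The joint least-squares re-fit after each selection is what distinguishes OMP from plain matching pursuit: already-selected atoms never reappear, because the residual is kept orthogonal to them.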

  13. Neuromagnetic source reconstruction

    SciTech Connect

    Lewis, P.S.; Mosher, J.C.; Leahy, R.M.

    1994-12-31

    In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum norm based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.

  14. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is shown to discover dominant modes of variation and to reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  15. Multilevel sparse functional principal component analysis

    PubMed Central

    Di, Chongzhi; Crainiceanu, Ciprian M.; Jank, Wolfgang S.

    2014-01-01

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is shown to discover dominant modes of variation and to reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  16. Robust Fringe Projection Profilometry via Sparse Representation.

    PubMed

    Budianto; Lun, Daniel P K

    2016-04-01

    In this paper, a robust fringe projection profilometry (FPP) algorithm using the sparse dictionary learning and sparse coding techniques is proposed. When reconstructing the 3D model of objects, traditional FPP systems often fail to perform if the captured fringe images have a complex scene, such as one with multiple and occluded objects. This introduces great difficulty to the phase unwrapping process of an FPP system, which can result in serious distortion in the final reconstructed 3D model. The proposed algorithm encodes the period order information, which is essential to phase unwrapping, into texture patterns and embeds them in the projected fringe patterns. When the encoded fringe image is captured, a modified morphological component analysis and a sparse classification procedure are performed to decode and identify the embedded period order information. It is then used to assist the phase unwrapping process to deal with the different artifacts in the fringe images. Experimental results show that the proposed algorithm can significantly improve the robustness of an FPP system. It performs equally well whether the fringe images contain a simple or a complex scene, or are affected by the ambient lighting of the working environment. PMID:26890867

  17. SAR Image despeckling via sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Zhongmei; Yang, Xiaomei; Zheng, Liang

    2014-11-01

    SAR image despeckling is an active research area in image processing due to its importance in improving image quality for object detection and classification. In this paper, a new approach is proposed for removing multiplicative noise in SAR images, based on nonlocal sparse representation using dictionary learning and collaborative filtering. First, an image is divided into many patches, and clusters are formed by grouping log-similar image patches using Fuzzy C-means (FCM). For each cluster, an over-complete dictionary is computed using the K-SVD method, which iteratively updates the dictionary and the sparse coefficients. The patches belonging to the same cluster are then reconstructed by a sparse combination of the corresponding dictionary atoms. The reconstructed patches are finally collaboratively aggregated to build the denoised image. The experimental results show that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both objective evaluation indices (PSNR and ENL) and subjective visual perception.

  18. A fatal shot with a signal flare--a crime reconstruction.

    PubMed

    Brozek-Mucha, Zuzanna

    2009-05-01

    A reconstruction of an incident of a fatal wounding of a football fan with a parachute flare was performed. Physical and chemical examinations of the victim's trousers and parts of a flare removed from the wound in his leg were performed by means of an optical microscope and a scanning electron microscope coupled with an energy dispersive X-ray spectrometer. Signs of burning were seen on the front upper part of the trousers, including a 35-40 mm circular hole with melted and charred edges. Postblast residue present on the surface of the trousers contained strontium, magnesium, potassium, and chlorine. Also the case files--the medical reports and the witnesses' testimonies--were thoroughly studied. It has been found that the evidence collected in the case supported the version of the victim being shot by another person from a distance. PMID:19432745

  19. Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals.

    PubMed

    Stetter, Olav; Battaglia, Demian; Soriano, Jordi; Geisel, Theo

    2012-01-01

    A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus by conditioning with respect to the global mean activity, we improve the performance of our method. This allows us to focus the analysis to specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. 
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures where we suggest that excitatory connections are characterized
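
    Transfer Entropy itself can be sketched in a far simpler setting than the paper's calcium-fluorescence pipeline: binary, binned activity with history length 1. The toy "network" below, where one series drives another with a one-step delay, is an invented example.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for binary sequences, with history length 1."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # counts of (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles_y = Counter(y[:-1])
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p = c / n
        cond_full = c / pairs_yx[(y0, x0)]              # p(y1 | y0, x0)
        cond_self = pairs_yy[(y1, y0)] / singles_y[y0]  # p(y1 | y0)
        te += p * np.log2(cond_full / cond_self)
    return te

# Toy network: y copies x with a one-step delay and 10% flips; z is independent
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
noise = rng.random(5000) < 0.1
y = np.empty(5000, dtype=int)
y[0] = 0
y[1:] = np.where(noise[1:], 1 - x[:-1], x[:-1])
z = rng.integers(0, 2, 5000)
```

    The driving series yields a clearly positive TE into its target while the independent series yields a value near zero, which is the directed-coupling signature the reconstruction method relies on (the paper's conditioning on network state and same-bin corrections are omitted here).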

  20. Reconstruction of the High-Osmolarity Glycerol (HOG) Signaling Pathway from the Halophilic Fungus Wallemia ichthyophaga in Saccharomyces cerevisiae

    PubMed Central

    Konte, Tilen; Terpitz, Ulrich; Plemenitaš, Ana

    2016-01-01

    The basidiomycetous fungus Wallemia ichthyophaga grows between 1.7 and 5.1 M NaCl and is the most halophilic eukaryote described to date. Like other fungi, W. ichthyophaga detects changes in environmental salinity mainly by the evolutionarily conserved high-osmolarity glycerol (HOG) signaling pathway. In Saccharomyces cerevisiae, the HOG pathway has been extensively studied in connection to osmotic regulation, with a valuable knock-out strain collection established. In the present study, we reconstructed the architecture of the HOG pathway of W. ichthyophaga in suitable S. cerevisiae knock-out strains, through heterologous expression of the W. ichthyophaga HOG pathway proteins. Compared to S. cerevisiae, where the Pbs2 (ScPbs2) kinase of the HOG pathway is activated via the SHO1 and SLN1 branches, the interactions between the W. ichthyophaga Pbs2 (WiPbs2) kinase and the W. ichthyophaga SHO1 branch orthologs are not conserved: as well as evidence of poor interactions between the WiSho1 Src-homology 3 (SH3) domain and the WiPbs2 proline-rich motif, the absence of a considerable part of the osmosensing apparatus in the genome of W. ichthyophaga suggests that the SHO1 branch components are not involved in HOG signaling in this halophilic fungus. In contrast, the conserved activation of WiPbs2 by the S. cerevisiae ScSsk2/ScSsk22 kinase and the sensitivity of W. ichthyophaga cells to fludioxonil, emphasize the significance of two-component (SLN1-like) signaling via Group III histidine kinase. Combined with protein modeling data, our study reveals conserved and non-conserved protein interactions in the HOG signaling pathway of W. ichthyophaga and therefore significantly improves the knowledge of hyperosmotic signal processing in this halophilic fungus. PMID:27379041

  1. Sparse inpainting and isotropy

    SciTech Connect

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V.; Marinucci, Domenico; Cammarota, Valentina; Wandelt, Benjamin D.

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  2. Sparse matrix test collections

    SciTech Connect

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.

  3. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.

  4. Sparse distributed memory

    SciTech Connect

    Kanerva, P.

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system. 63 refs.

  5. Optical sparse aperture imaging.

    PubMed

    Miller, Nicholas J; Dierking, Matthew P; Duncan, Bradley D

    2007-08-10

    The resolution of a conventional diffraction-limited imaging system is proportional to its pupil diameter. A primary goal of sparse aperture imaging is to enhance resolution while minimizing the total light collection area; the latter being desirable, in part, because of the cost of large, monolithic apertures. Performance metrics are defined and used to evaluate several sparse aperture arrays constructed from multiple, identical, circular subapertures. Subaperture piston and/or tilt effects on image quality are also considered. We selected arrays with compact nonredundant autocorrelations first described by Golay. We vary both the number of subapertures and their relative spacings to arrive at an optimized array. We report the results of an experiment in which we synthesized an image from multiple subaperture pupil fields by masking a large lens with a Golay array. For this experiment we imaged a slant edge feature of an ISO12233 resolution target in order to measure the modulation transfer function. We note the contrast reduction inherent in images formed through sparse aperture arrays and demonstrate the use of a Wiener-Helstrom filter to restore contrast in our experimental images. Finally, we describe a method to synthesize images from multiple subaperture focal plane intensity images using a phase retrieval algorithm to obtain estimates of subaperture pupil fields. Experimental results from synthesizing an image of a point object from multiple subaperture images are presented, and weaknesses of the phase retrieval method for this application are discussed. PMID:17694146

  6. Annual Temperature Reconstruction by Signal Decomposition and Synthesis from Multi-Proxies in Xinjiang, China, from 1850 to 2001

    PubMed Central

    Zheng, Jingyun; Liu, Yang; Hao, Zhixin

    2015-01-01

    We reconstructed the annual temperature anomaly series in Xinjiang during 1850–2001 based on three kinds of proxies, including 17 tree-ring width chronologies, one tree-ring δ13C series and two δ18O series of ice cores, and instrumental observation data. The low- and high-frequency signal decomposition for the raw temperature proxy data was obtained by a fast Fourier transform filter with a window size of 20 years, which was used to build a good relationship that explained the high variance between the temperature and the proxy data used for the reconstruction. The results showed that for 1850–2001, the temperature during most periods prior to the 1920s was lower than the mean temperature in the 20th century. Remarkable warming occurred in the 20th century at a rate of 0.85°C/100a, which was higher than that during the past 150 years. Two cold periods occurred before the 1870s and around the 1910s, and a relatively warm interval occurred around the 1940s. In addition, the temperature series showed a warming hiatus of approximately 20 years around the 1970s, and a rapid increase since the 1980s. PMID:26632814

  7. Central-north China precipitation as reconstructed from the Qing dynasty: Signal of the Antarctic Atmospheric Oscillation

    NASA Astrophysics Data System (ADS)

    Wang, Huijun; Fan, Ke

    2005-12-01

    Based on the long-term Central-north China precipitation (CNCP) time series reconstructed from the Qing Dynasty Official Document, the relationship between CNCP and the Antarctic Atmospheric Oscillation (AAO) in June-July is examined. The analysis yields a statistically significant negative correlation of -0.22. The signal of the AAO in CNCP is further studied through analyses of the atmospheric general circulation variability related to the AAO. It follows that AAO-related variability of convergence and convection over the tropical western Pacific can exert an impact on the circulation conditions and precipitation in north China (and, in fact, on precipitation in the Yangtze River Valley as well) through the atmospheric teleconnection known as the East Asia-Pacific (or Pacific-Japan) teleconnection wave pattern. There is also an AAO-connected wave train in the vorticity field of the upper troposphere over Eurasia, providing an anticyclonic circulation over central-north China that favors a decline of precipitation in the positive phase of the AAO.

  8. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat a color image pixel as a scalar, representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (K-QSVD; generalized K-means clustering for QSVD) method. It conducts sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. In this new color space, the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks owing to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain. PMID:25643407

  9. k-t Group sparse: a method for accelerating dynamic MRI.

    PubMed

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G

    2011-10-01

    Compressed sensing (CS) is a data-reduction technique that has been applied to speed up the acquisition in MRI. However, the use of this technique in dynamic MR applications has been limited in terms of the maximum achievable reduction factor. In general, noise-like artefacts and bad temporal fidelity are visible in standard CS MRI reconstructions when high reduction factors are used. To increase the maximum achievable reduction factor, additional or prior information can be incorporated in the CS reconstruction. Here, a novel CS reconstruction method is proposed that exploits the structure within the sparse representation of a signal by enforcing the support components to be in the form of groups. These groups act like a constraint in the reconstruction. The information about the support region can be easily obtained from training data in dynamic MRI acquisitions. The proposed approach was tested in two-dimensional cardiac cine MRI with both downsampled and undersampled data. Results show that higher acceleration factors (up to 9-fold), with improved spatial and temporal quality, can be obtained with the proposed approach in comparison to the standard CS reconstructions. PMID:21394781
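    The core idea, constraining the support to known groups, can be sketched outside the MRI setting: once the active groups are fixed (in the paper they are obtained from training data), reconstruction reduces to least squares on the grouped support. A minimal illustration with hypothetical sizes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Group-sparse reconstruction sketch: the support of the sparse
# representation is a union of known groups, so recovery is a
# least-squares fit restricted to the columns in those groups.
n, m = 128, 40
groups = [np.arange(g, g + 8) for g in range(0, n, 8)]   # 16 groups of 8
active = [groups[2], groups[9]]         # active groups (assumed known)
support = np.concatenate(active)

x0 = np.zeros(n)
x0[support] = rng.standard_normal(support.size)          # group-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)             # measurement matrix
y = A @ x0                                               # noiseless measurements

# restricted least squares on the grouped support
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
```

    With 40 measurements and only 16 active coefficients, the restricted system is overdetermined and the noiseless fit recovers the signal exactly, which is why the group constraint permits higher acceleration factors than unstructured sparsity.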

  10. Imaging method for downward-looking sparse linear array three-dimensional synthetic aperture radar based on reweighted atomic norm

    NASA Astrophysics Data System (ADS)

    Bao, Qian; Han, Kuoye; Lin, Yun; Zhang, Bingchen; Liu, Jianguo; Hong, Wen

    2016-01-01

    We propose an imaging algorithm for downward-looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) in the circumstance of a cross-track sparse and nonuniform array configuration. Considering the off-grid effect and the resolution improvement, the algorithm combines the pseudo-polar formatting algorithm, reweighted atomic norm minimization (RANM), and a parametric relaxation-based cyclic approach (RELAX) to improve the imaging performance with a reduced number of array antennas. RANM is employed in the cross-track imaging after pseudo-polar formatting of the DLSLA 3-D SAR echo signal, and the reconstructed results are then refined by RELAX. By taking advantage of the reweighted scheme, RANM can improve the resolution of atomic norm minimization, and it outperforms discretized compressive sensing schemes that suffer from the off-grid effect. The simulated and real data experiments of DLSLA 3-D SAR verify the performance of the proposed algorithm.

  11. Ultrawideband compressed sensing of arbitrary multi-tone sparse radio frequencies using spectrally encoded ultrafast laser pulses.

    PubMed

    Bosworth, Bryan T; Stroud, Jasper R; Tran, Dung N; Tran, Trac D; Chin, Sang; Foster, Mark A

    2015-07-01

    We demonstrate a photonic system for pseudorandom sampling of multi-tone sparse radio-frequency (RF) signals in an 11.95-GHz bandwidth using <1% of the measurements required for Nyquist sampling. Pseudorandom binary sequence (PRBS) patterns are modulated onto highly chirped laser pulses, encoding the patterns onto the optical spectra. The pulses are partially compressed to increase the effective sampling rate by 2.07×, modulated with the RF signal, and fully compressed yielding optical integration of the PRBS-RF inner product prior to photodetection. This yields a 266× reduction in the required electronic sampling rate. We introduce a joint-sparsity-based matching-pursuit reconstruction via bagging to achieve accurate recovery of tones at arbitrary frequencies relative to the reconstruction basis. PMID:26125363

  12. Input reconstruction for networked control systems subject to deception attacks and data losses on control signals

    NASA Astrophysics Data System (ADS)

    Keller, J. Y.; Chabir, K.; Sauter, D.

    2016-03-01

    State estimation of stochastic discrete-time linear systems subject to unknown inputs or constant biases has been widely studied but no work has been dedicated to the case where a disturbance switches between unknown input and constant bias. We show that such disturbance can affect a networked control system subject to deception attacks and data losses on the control signals transmitted by the controller to the plant. This paper proposes to estimate the switching disturbance from an augmented state version of the intermittent unknown input Kalman filter recently developed by the authors. Sufficient stochastic stability conditions are established when the arrival binary sequence of data losses follows a Bernoulli random process.

  13. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the K-SVD-based grayscale image denoising algorithm that appears in. This work puts forward ways for handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804

  14. Novel method for hit-position reconstruction using voltage signals in plastic scintillators and its application to Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Raczyński, L.; Moskal, P.; Kowalski, P.; Wiślicki, W.; Bednarski, T.; Białas, P.; Czerwiński, E.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kozik, T.; Krzemień, W.; Kubicz, E.; Molenda, M.; Moskal, I.; Niedźwiecki, Sz.; Pałka, M.; Pawlik-Niedźwiecka, M.; Rudy, Z.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Zieliński, M.; Zoń, N.

    2014-11-01

    Currently, inorganic scintillator detectors are used in all commercial Time of Flight Positron Emission Tomograph (TOF-PET) devices. The J-PET collaboration investigates the possibility of constructing a PET scanner from plastic scintillators, which would allow for single-bed imaging of the whole human body. This paper describes a novel method of hit-position reconstruction based on sampled signals and an example of an application of the method for a single module with a 30 cm long plastic strip, read out on both ends by Hamamatsu R4998 photomultipliers. A sampling scheme is introduced that generates a vector of samples of a PET event waveform with respect to four user-defined amplitudes. The experimental setup provides irradiation of a chosen position in the plastic scintillator strip with annihilation gamma quanta of energy 511 keV. A statistical test for a multivariate normal (MVN) distribution of the measured vectors at a given position is developed, and it is shown that signals sampled at four thresholds in the voltage domain are approximately normally distributed variables. With the presented method of analyzing vectors of waveform samples acquired at four thresholds, we obtain a spatial resolution of about 1 cm and a timing resolution of about 80 ps (σ).

  15. Dictionary Learning Algorithms for Sparse Representation

    PubMed Central

    Kreutz-Delgado, Kenneth; Murray, Joseph F.; Rao, Bhaskar D.; Engan, Kjersti; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

    Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial “25 words or less”), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations. Experiments were performed using synthetic data and natural images. For complete dictionaries, we demonstrate that our algorithms have improved performance over other independent component analysis (ICA) methods, measured in terms of signal-to-noise ratios of separated sources. In the overcomplete case, we show that the true underlying dictionary and sparse sources can be accurately recovered. In tests with natural images, learned overcomplete dictionaries are shown to have higher coding efficiency than complete dictionaries; that is, images encoded with an over-complete dictionary have both higher compression (fewer bits per pixel) and higher accuracy (lower mean square error). PMID:12590811

  16. Typical reconstruction performance for distributed compressed sensing based on ℓ2,1-norm regularized least square and Bayesian optimal reconstruction: influences of noise

    NASA Astrophysics Data System (ADS)

    Shiraki, Yoshifumi; Kabashima, Yoshiyuki

    2016-06-01

    A signal model called joint sparse model 2 (JSM-2), or the multiple measurement vector problem, in which all sparse signals share their support, is important for dealing with practical signal processing problems. In this paper, we investigate the typical reconstruction performance of noisy measurement JSM-2 problems for ℓ2,1-norm regularized least square reconstruction and the Bayesian optimal reconstruction scheme in terms of mean square error. Employing the replica method, we show that these schemes, which exploit the knowledge of the sharing of the signal support, can recover the signals more precisely as the number of channels increases. In addition, we compare the reconstruction performance of two different ensembles of observation matrices: one is composed of independent and identically distributed random Gaussian entries and the other is designed so that row vectors are orthogonal to one another. As reported for the single-channel case in earlier studies, our analysis indicates that the latter ensemble offers better performance than the former for the noisy JSM-2 problem. The results of numerical experiments with a computationally feasible approximation algorithm we developed for this study agree with the theoretical estimation.
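    The ℓ2,1-regularized least-squares scheme can be illustrated via its proximal operator, which shrinks each row of the multichannel signal matrix jointly so that all channels share a support. A small proximal-gradient sketch (this is not the replica analysis of the paper; the sizes, regularization weight, and step count are arbitrary):

```python
import numpy as np

def row_soft_threshold(X, lam):
    """Proximal operator of lam * ||X||_{2,1}: shrink each row's l2 norm."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * X

def ista_l21(A, Y, lam, steps=500):
    """Proximal gradient (ISTA) for min 0.5*||A X - Y||_F^2 + lam*||X||_{2,1}."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(steps):
        X = row_soft_threshold(X - A.T @ (A @ X - Y) / L, lam / L)
    return X

rng = np.random.default_rng(1)
n, m, k, c = 64, 32, 4, 3                # dim, measurements, row-sparsity, channels
A = rng.standard_normal((m, n)) / np.sqrt(m)
X0 = np.zeros((n, c))
X0[rng.choice(n, k, replace=False)] = rng.standard_normal((k, c))  # shared support
X_hat = ista_l21(A, A @ X0, lam=1e-3)
```

    The joint row shrinkage is what exploits the shared support: a row survives thresholding only if its energy summed across all channels is large enough, which matches the finding above that recovery improves as the number of channels increases.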

  17. Sparse Image Format

    Energy Science and Technology Software Center (ESTSC)

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.

  18. Sparse Image Format

    SciTech Connect

    Eads, Damian Ryan

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.
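    The tile-based storage idea described above can be sketched in a few lines. This is an illustrative reimplementation of the scheme as described, in Python rather than C, and does not reproduce the actual SIF file layout:

```python
import numpy as np

def encode_tiles(img, tile=8):
    """Keep full rasters only for non-uniform tiles; uniform tiles
    collapse to a single fill value (the SIF space-saving idea)."""
    H, W = img.shape
    stored, fills = {}, {}
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            t = img[i:i+tile, j:j+tile]
            if (t == t.flat[0]).all():
                fills[(i, j)] = t.flat[0]   # uniform: one value suffices
            else:
                stored[(i, j)] = t.copy()   # non-uniform: keep the raster
    return stored, fills

def decode_tiles(stored, fills, shape, tile=8):
    img = np.empty(shape, dtype=np.uint8)
    for (i, j), v in fills.items():
        img[i:i+tile, j:j+tile] = v
    for (i, j), t in stored.items():
        img[i:i+tile, j:j+tile] = t
    return img

img = np.zeros((32, 32), dtype=np.uint8)
img[10:14, 10:14] = 255                     # small feature in a flat background
stored, fills = encode_tiles(img)
restored = decode_tiles(stored, fills, img.shape)
```

    For this toy image, only one of the sixteen 8×8 tiles is non-uniform, so the encoding stores one tile raster plus fifteen scalars, and decoding reproduces the image exactly.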

  19. TASMANIAN Sparse Grids Module

    SciTech Connect

    Munster, Drayton; Stoyanov, Miroslav

    2013-09-20

    Sparse grids are the methods of choice for multidimensional integration and interpolation in low to moderate numbers of dimensions. The method is to extend a one-dimensional set of abscissas, weights, and basis functions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command line interface via text files, and a MATLAB interface via the command line tool.

  20. TASMANIAN Sparse Grids Module

    Energy Science and Technology Software Center (ESTSC)

    2013-09-20

    Sparse grids are the methods of choice for multidimensional integration and interpolation in low to moderate numbers of dimensions. The method is to extend a one-dimensional set of abscissas, weights, and basis functions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command line interface via text files, and a MATLAB interface via the command line tool.

  1. From 2D to 3D: novel nanostructured scaffolds to investigate signalling in reconstructed neuronal networks.

    PubMed

    Bosi, Susanna; Rauti, Rossana; Laishram, Jummi; Turco, Antonio; Lonardoni, Davide; Nieus, Thierry; Prato, Maurizio; Scaini, Denis; Ballerini, Laura

    2015-01-01

    Recreating 3D neuronal circuits in vitro will ultimately increase the relevance of results from cultured networks to whole-brain networks and will promote enabling technologies for neuro-engineering applications. Here we fabricate novel elastomeric scaffolds able to instruct the 3D growth of living primary neurons. Such systems allow investigation of the emerging activity, in terms of calcium signals, of small clusters of neurons as a function of the interplay between the 2D or 3D architecture and network dynamics. We report the ability of 3D geometry to improve functional organization and synchronization in small neuronal assemblies, and we propose a mathematical model of network dynamics that supports this result. Entrapping carbon nanotubes in the scaffolds remarkably boosted synaptic activity, allowing nanomaterial/cell interfacing to be exploited in a 3D growth support for the first time. Our 3D system represents a simple and reliable construct, able to improve the complexity of current tissue culture models. PMID:25910072

  2. Generation of dense statistical connectomes from sparse morphological data

    PubMed Central

    Egger, Robert; Dercksen, Vincent J.; Udvary, Daniel; Hege, Hans-Christian; Oberlaender, Marcel

    2014-01-01

    Sensory-evoked signal flow, at cellular and network levels, is primarily determined by the synaptic wiring of the underlying neuronal circuitry. Measurements of synaptic innervation, connection probabilities and subcellular organization of synaptic inputs are thus among the most active fields of research in contemporary neuroscience. Methods to measure these quantities range from electrophysiological recordings over reconstructions of dendrite-axon overlap at light-microscopic levels to dense circuit reconstructions of small volumes at electron-microscopic resolution. However, quantitative and complete measurements at subcellular resolution and mesoscopic scales to obtain all local and long-range synaptic in/outputs for any neuron within an entire brain region are beyond present methodological limits. Here, we present a novel concept, implemented within an interactive software environment called NeuroNet, which allows (i) integration of sparsely sampled (sub)cellular morphological data into an accurate anatomical reference frame of the brain region(s) of interest, (ii) up-scaling to generate an average dense model of the neuronal circuitry within the respective brain region(s) and (iii) statistical measurements of synaptic innervation between all neurons within the model. We illustrate our approach by generating a dense average model of the entire rat vibrissal cortex, providing the required anatomical data, and illustrate how to measure synaptic innervation statistically. Comparing our results with data from paired recordings in vitro and in vivo, as well as with reconstructions of synaptic contact sites at light- and electron-microscopic levels, we find that our in silico measurements are in line with previous results. PMID:25426033

  3. Group-based sparse representation for image restoration.

    PubMed

    Zhang, Jian; Zhao, Debin; Gao, Wen

    2014-08-01

    Traditional patch-based sparse representation modeling of natural images usually suffers from two problems. First, it has to solve a large-scale optimization problem with high computational complexity for dictionary learning. Second, each patch is considered independently in dictionary learning and sparse coding, which ignores the relationships among patches and results in inaccurate sparse coding coefficients. In this paper, instead of using the patch as the basic unit of sparse representation, we exploit the concept of a group, composed of nonlocal patches with similar structures, as the basic unit of sparse representation, and establish a novel sparse representation modeling of natural images called group-based sparse representation (GSR). The proposed GSR is able to sparsely represent natural images in the domain of the group, which enforces the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework. In addition, an effective self-adaptive dictionary learning method with low complexity is designed for each group, rather than learning the dictionary from natural images. To make GSR tractable and robust, a split Bregman-based technique is developed to efficiently solve the proposed GSR-driven ℓ0 minimization problem for image restoration. Extensive experiments on image inpainting, image deblurring, and image compressive sensing recovery demonstrate that the proposed GSR modeling outperforms many current state-of-the-art schemes in both peak signal-to-noise ratio and visual perception. PMID:24835225

  4. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction of EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time consumption for solving the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on separable approximation algorithm is proposed by using adaptive step-size and preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798

  5. Reconstruction of hyperspectral CHRIS/PROBA signal by the Earth Observation Land Data Assimilation System (EO-LDAS)

    NASA Astrophysics Data System (ADS)

    Chernetskiy, Maxim; Gobron, Nadine; Gomez-Dans, Jose; Lewis, Philip

    EO-LDAS is a system that allows one to interpret spectral observations of the land surface to provide an optimal estimate of the state of the Earth. It allows a consistent combination of observations from different sensors despite differences in spatial and spectral resolution and acquisition frequency. The system is based on a variational data assimilation (DA) scheme and uses physically-based radiative transfer models (RTM) to map from state to observation. In addition, the system takes into account observational uncertainty, prior information, and a model of the spatial/temporal evolution of the state. Such an approach is very useful for future satellite constellations as well as for the reanalysis of historical data. The main purpose of EO-LDAS is the retrieval of biophysical land variables; however, once the state is known after inverting some observations, the system can be used to forward-model and predict other observations. The main aim of this contribution is the validation of EO-LDAS by reconstructing the CHRIS/PROBA hyperspectral signal on the basis of MODIS 500 m, Landsat ETM+ and MISR full-resolution data over the Barrax site during the SPARC 2004 campaign. First, multispectral data were inverted by EO-LDAS in order to obtain a set of biophysical parameters, which were then used in forward mode to obtain full spectra over various fields covering the Barrax area. The reconstruction was performed using the same view/sun geometry as the initial PROBA scene. Single sets of spectra from MODIS, ETM+ and MISR were used, as well as the combinations MODIS-ETM+ and MISR-ETM+. In addition, the uncertainties of the output biophysical land parameters were considered in order to understand the real accuracy and applicability of combinations of different sensors. Finally, spatial and temporal regularisation models were applied to add extra constraints to the inversion. The proposed contribution demonstrates the capabilities of EO-LDAS for the reconstruction of hyperspectral bands on the basis of different

  6. A non-iterative method for the electrical impedance tomography based on joint sparse recovery

    NASA Astrophysics Data System (ADS)

    Lee, Ok Kyun; Kang, Hyeonbae; Ye, Jong Chul; Lim, Mikyoung

    2015-07-01

    The purpose of this paper is to propose a non-iterative method for the inverse conductivity problem of recovering multiple small anomalies from the boundary measurements. When small anomalies are buried in a conducting object, the electric potential values inside the object can be expressed by integrals of densities with a common sparse support on the location of anomalies. Based on this integral expression, we formulate the reconstruction problem of small anomalies as a joint sparse recovery and present an efficient non-iterative recovery algorithm of small anomalies. Furthermore, we also provide a slightly modified algorithm to reconstruct an extended anomaly. We validate the effectiveness of the proposed algorithm over the linearized method and the multiple signal classification algorithm by numerical simulations. This work is supported by the Korean Ministry of Education, Sciences and Technology through NRF grant No. NRF-2010-0017532 (to H K), the Korean Ministry of Science, ICT & Future Planning; through NRF grant No. NRF-2013R1A1A3012931 (to M L), the R&D Convergence Program of NST (National Research Council of Science & Technology) of Republic of Korea (Grant CAP-13-3-KERI) (to O K L and J C Y).

  7. Digital DC-Reconstruction of AC-Coupled Electrophysiological Signals with a Single Inverting Filter

    PubMed Central

    Schmid, Ramun; Leber, Remo; Schmid, Hans-Jakob; Generali, Gianluca

    2016-01-01

    Since the introduction of digital electrocardiographs, high-pass filters have been necessary for successful analog-to-digital conversion with a reasonable amplitude resolution. On the other hand, such high-pass filters may distort the diagnostically significant ST-segment of the ECG, which can result in a misleading diagnosis. We present an inverting filter that successfully undoes the effects of a 0.05 Hz single pole high-pass filter. The inverting filter has been tested on more than 1600 clinical ECGs with one-minute durations and produces a negligible mean RMS-error of 3.1×10⁻⁸ LSB. Alternative, less strong inverting filters have also been tested, as have different applications of the filters with respect to rounding of the signals after filtering. A design scheme for the alternative inverting filters has been suggested, based on the maximum strength of the filter. With the use of the suggested filters, it is possible to recover the original DC-coupled ECGs from AC-coupled ECGs, at least when a 0.05 Hz first order digital single pole high-pass filter is used for the AC-coupling. PMID:26938769
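    The invertibility claim can be sketched with one common discretization of a first-order 0.05 Hz high-pass (the paper's exact filter coefficients are not given here, so the coefficient below is an assumption): the AC-coupling recurrence is inverted algebraically by an integrating filter, recovering the input up to its unknown initial value.

```python
import numpy as np

fs, fc = 500.0, 0.05                     # sampling rate (Hz), high-pass cutoff (Hz)
a = 1.0 / (1.0 + 2 * np.pi * fc / fs)    # single-pole coefficient (one common choice)

def highpass(x):
    """First-order AC coupling: y[n] = a*(y[n-1] + x[n] - x[n-1])."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

def invert_highpass(y):
    """Algebraic inverse: x[n] = x[n-1] + y[n]/a - y[n-1].
    The inverse has a pole at z = 1 (an integrator), which is what
    restores the DC content removed by the AC coupling."""
    x = np.zeros_like(y)
    for n in range(1, len(y)):
        x[n] = x[n - 1] + y[n] / a - y[n - 1]
    return x

t = np.arange(0, 10, 1 / fs)
ecg_like = 0.2 + 0.5 * np.sin(2 * np.pi * 1.0 * t)   # signal with a DC offset
restored = invert_highpass(highpass(ecg_like))       # recovers input minus x[0]
```

    Because the inverse integrates, the reconstruction is exact up to the unknown initial sample, which is why amplitude rounding after the forward filter (discussed in the paper) is the practical limit on recovery.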

  8. Fringe filtering technique based on local signal reconstruction using noise subspace inflation

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2016-03-01

    A noise filtering technique is proposed to filter the fringe pattern recorded in an optical measurement set-up. A single fringe pattern carrying the information on the measurand is treated as a data matrix, which can be either complex or real valued. In the first approach, noise filtering is performed pixel-wise in a windowed data segment generated around each pixel. The singular value decomposition of an enhanced form of this data segment is performed to extract the signal component from the noisy background. This enhancement of the matrix has the effect of inflating the noise subspace, allowing it to accommodate the maximum amount of noise. In another, computationally efficient approach, the data matrix is divided into a number of small blocks and filtering is performed block-wise based on the same noise subspace inflation method. The proposed method has the important ability to identify spatially varying fringe density and regions of phase discontinuity. The performance of the proposed method is validated with numerical and experimental results.
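    The signal/noise subspace separation at the heart of the method can be sketched with a plain truncated SVD of a windowed data segment (the subspace-inflation enhancement itself is omitted here; the rank and noise level below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def svd_denoise(D, rank):
    """Project a data segment onto its dominant singular components,
    discarding the noise subspace spanned by the remaining ones."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

u = np.cos(np.linspace(0, 4 * np.pi, 64))
clean = np.outer(u, u)                      # rank-1 "fringe" segment
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
filtered = svd_denoise(noisy, rank=1)       # keep the signal subspace only
```

    In a locally coherent fringe region the signal concentrates in the few leading singular components, so truncation removes most of the noise energy while preserving the fringe; choosing the rank per window is where density variation and phase discontinuities matter.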

  9. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. This paper presents novel architectures for the Orthogonal Matching Pursuit (OMP) algorithm, one of the popular CS reconstruction algorithms. We show the implementation results of the proposed architectures on FPGA, ASIC and on a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity m. The algorithm is divided into three kernels. Each kernel is parallelized to reduce execution time, whereas efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. The implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, and 18 μs with the thresholding method. The ASIC implementation reconstructs the signal in 13 μs, while our custom many-core, operating at 1.18 GHz, takes 18.28 μs. Our results show that, compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC implementations perform 1.3x and 1.8x faster, respectively. The proposed many-core implementation also performs 3000x faster than the CPU and 2000x faster than the GPU.
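
    The OMP algorithm being accelerated can be sketched in NumPy in its textbook form (correlate with the residual, grow the support, re-solve least squares); this is a generic reference implementation, not the paper's hardware-optimized variant, and the 256/64 problem size below merely mirrors the demonstration case:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then least-squares refit on the support."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        # Kernel 1: correlations of all atoms with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        support.append(idx)
        # Kernel 2: least-squares solve restricted to the chosen atoms.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        # Kernel 3: residual update.
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

    The three loop bodies correspond to the three kernels the paper parallelizes: the atom correlations, the least-squares solve, and the residual update.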

  10. Reconstructing the Fastest Chemical and Electrical Signalling Responses to Microgravity Stress in Plants

    NASA Astrophysics Data System (ADS)

    Mugnai, Sergio; Pandolfi, Camilla; Masi, Elisa; Azzarello, Elisa; Voigt, Boris; Baluska, Frantisek; Volkmann, Dieter; Mancuso, Stefano

    Plants are particularly suited to studying the response of a living organism to gravity as they are extremely sensitive to its changes. Gravity perception is a well-studied phenomenon, but the chain of events related to signal transduction and transmission still suffers from a lack of information. Preliminary results obtained in previous parabolic flight campaigns (PFCs) by our lab show that microgravity (<0.05g), but not hypergravity (1.8g), repeatedly induced immediate (less than 1.5 s) oxygen bursts when maize roots experienced loss of gravity forces. Interestingly, these changes were located exclusively in the apex, not in the mature zone of the root. Ground experiments have also revealed the onset of strong and rapid electrical responses in maize root apices subjected to stress, which led to the hypothesis of an intrinsic capacity of the root apex to generate functional networks. Experiments during the 49th and 51st ESA PFCs were aimed at 1) finding out whether the different consumption of oxygen at root level recorded in the previous PFCs can lead to subsequent local emission of ROS in living root apices; 2) studying the spatio-temporal pattern of the neuronal network generated by roots under gravity-changing conditions; 3) evaluating the onset of synchronization events during gravity changes. Concerning the oxygen bursts, results indicate that they probably implicate a strong generation of ROS (such as nitric oxide) matching exactly the microgravity events, suggesting that the sensing mechanism is not only related to a general mechanical stress (i.e. the tensegrity model, present also during hypergravity), but can be specific to the microgravity event. To further investigate this hypothesis we studied the distributed/synchronized electrical activity of cells by the use of a Multi-Electrode Array (MEA). The main results obtained are: the root transition zone (TZ) showed a higher spike-rate activity compared to the mature zone (MZ). Also, microgravity appeared to

  11. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, trackers under the sparse representation framework tend to overemphasize the sparse representation and ignore the correlation of visual information. In addition, sparse coding methods encode each local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coding method, which takes into consideration the spatial neighborhood information of the image patch and the computational burden, is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of the transient and reconstructed appearance models. Finally, the most reliable tracker is obtained within a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well, with superior tracking accuracy and robustness.

  12. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
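
    The write/read mechanics Kanerva proposed can be illustrated with a toy autoassociative version: random binary hard locations, activation of every location within a Hamming radius of the cue, and bipolar counters. The dimensions and radius below are illustrative, not Kanerva's canonical 1000-bit parameters:

```python
import numpy as np

class SDM:
    """Toy Kanerva-style sparse distributed memory over binary vectors."""
    def __init__(self, n_locations=2000, dim=256, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, addr):
        # Activate every hard location within the Hamming radius of the cue.
        return np.count_nonzero(self.addresses != addr, axis=1) <= self.radius

    def write(self, addr, data):
        # Add the data bipolarly (+1 for a 1-bit, -1 for a 0-bit) to the
        # counters of all activated locations.
        self.counters[self._active(addr)] += 2 * data - 1

    def read(self, addr):
        # Sum counters over activated locations and threshold at zero.
        return (self.counters[self._active(addr)].sum(axis=0) > 0).astype(int)
```

    Because reads pool counters from many partially matching locations, a pattern written once can be retrieved from a noisy cue, which is the "partial match" behavior the abstract describes.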

  13. Percolation on Sparse Networks

    NASA Astrophysics Data System (ADS)

    Karrer, Brian; Newman, M. E. J.; Zdeborová, Lenka

    2014-11-01

    We study percolation on networks, which is used as a model of the resilience of networked systems such as the Internet to attack or failure and as a simple model of the spread of disease over human contact networks. We reformulate percolation as a message passing process and demonstrate how the resulting equations can be used to calculate, among other things, the size of the percolating cluster and the average cluster size. The calculations are exact for sparse networks when the number of short loops in the network is small, but even on networks with many short loops we find them to be highly accurate when compared with direct numerical simulations. By considering the fixed points of the message passing process, we also show that the percolation threshold on a network with few loops is given by the inverse of the leading eigenvalue of the so-called nonbacktracking matrix.
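
    The message-passing formulation can be sketched for bond percolation on a small graph: the message u_{i→j} is the probability that node i is not connected to the percolating cluster through neighbours other than j. The graph and occupation probability below are purely illustrative:

```python
def percolation_mp(adj, p, iters=200):
    """Message passing for bond percolation with occupation probability p.
    adj: dict mapping node -> list of neighbours.
    Returns the expected fraction of nodes in the percolating cluster."""
    # u[(i, j)]: probability that i is NOT connected to the percolating
    # cluster via its neighbours other than j.
    u = {(i, j): 0.0 for i in adj for j in adj[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in u:
            prod = 1.0
            for k in adj[i]:
                if k != j:
                    prod *= 1.0 - p + p * u[(k, i)]
            new[(i, j)] = prod
        u = new
    # Node i percolates unless all incoming messages say "not connected".
    S = 0.0
    for i in adj:
        prod = 1.0
        for k in adj[i]:
            prod *= 1.0 - p + p * u[(k, i)]
        S += 1.0 - prod
    return S / len(adj)
```

    On a ring the non-backtracking matrix has leading eigenvalue 1, so the predicted threshold is p_c = 1: for any p < 1 the fixed point gives a vanishing percolating cluster, while at p = 1 the whole ring is recovered.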

  14. Reconstructing direct and indirect interactions in networked public goods game

    PubMed Central

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-01-01

    Network reconstruction is a fundamental problem for understanding many complex systems with unknown interaction structures. In many complex systems, there are indirect interactions between two individuals without immediate connection but with common neighbors. Despite recent advances in network reconstruction, we continue to lack an approach for reconstructing complex networks with indirect interactions. Here we introduce a two-step strategy to resolve the reconstruction problem, where in the first step, we recover both direct and indirect interactions by employing the Lasso to solve a sparse signal reconstruction problem, and in the second step, we use matrix transformation and optimization to distinguish between direct and indirect interactions. The network structure corresponding to direct interactions can be fully uncovered. We exploit the public goods game occurring on complex networks as a paradigm for characterizing indirect interactions and test our reconstruction approach. We find that high reconstruction accuracy can be achieved for both homogeneous and heterogeneous networks, and a number of empirical networks in spite of insufficient data measurement contaminated by noise. Although a general framework for reconstructing complex networks with arbitrary types of indirect interactions is yet lacking, our approach opens new routes to separate direct and indirect interactions in a representative complex system. PMID:27444774
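
    The Lasso step of the two-step strategy can be sketched with a plain proximal-gradient (ISTA) solver; this is a generic Lasso implementation for illustration (in the paper the measurement matrix comes from the game dynamics, whereas here it is a random stand-in):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, y, lam=0.01, n_iter=5000):
    """ISTA for the Lasso: min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient step on the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

    With fewer measurements than unknowns, the l1 penalty still recovers the sparse interaction vector, which is what makes the approach workable under insufficient data.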

  15. Reconstructing direct and indirect interactions in networked public goods game.

    PubMed

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-01-01

    Network reconstruction is a fundamental problem for understanding many complex systems with unknown interaction structures. In many complex systems, there are indirect interactions between two individuals without immediate connection but with common neighbors. Despite recent advances in network reconstruction, we continue to lack an approach for reconstructing complex networks with indirect interactions. Here we introduce a two-step strategy to resolve the reconstruction problem, where in the first step, we recover both direct and indirect interactions by employing the Lasso to solve a sparse signal reconstruction problem, and in the second step, we use matrix transformation and optimization to distinguish between direct and indirect interactions. The network structure corresponding to direct interactions can be fully uncovered. We exploit the public goods game occurring on complex networks as a paradigm for characterizing indirect interactions and test our reconstruction approach. We find that high reconstruction accuracy can be achieved for both homogeneous and heterogeneous networks, and a number of empirical networks in spite of insufficient data measurement contaminated by noise. Although a general framework for reconstructing complex networks with arbitrary types of indirect interactions is yet lacking, our approach opens new routes to separate direct and indirect interactions in a representative complex system. PMID:27444774

  16. Reconstructing direct and indirect interactions in networked public goods game

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-07-01

    Network reconstruction is a fundamental problem for understanding many complex systems with unknown interaction structures. In many complex systems, there are indirect interactions between two individuals without immediate connection but with common neighbors. Despite recent advances in network reconstruction, we continue to lack an approach for reconstructing complex networks with indirect interactions. Here we introduce a two-step strategy to resolve the reconstruction problem, where in the first step, we recover both direct and indirect interactions by employing the Lasso to solve a sparse signal reconstruction problem, and in the second step, we use matrix transformation and optimization to distinguish between direct and indirect interactions. The network structure corresponding to direct interactions can be fully uncovered. We exploit the public goods game occurring on complex networks as a paradigm for characterizing indirect interactions and test our reconstruction approach. We find that high reconstruction accuracy can be achieved for both homogeneous and heterogeneous networks, and a number of empirical networks in spite of insufficient data measurement contaminated by noise. Although a general framework for reconstructing complex networks with arbitrary types of indirect interactions is yet lacking, our approach opens new routes to separate direct and indirect interactions in a representative complex system.

  17. 3D Super-Resolution Approach for Sparse Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, S.; Yilmaz, A.

    2015-08-01

    Laser scanner point clouds have been emerging in photogrammetry and computer vision to achieve high-level tasks such as object tracking, object recognition and scene understanding. However, low-cost laser scanners are noisy, sparse and prone to systematic errors. This paper proposes a novel 3D super-resolution approach to reconstruct the surface of the objects in the scene. The method works on sparse, unorganized point clouds and has superior performance over other surface recovery approaches. Since the proposed approach uses an anisotropic diffusion equation, it does not deteriorate the object boundaries and it preserves the topology of the object.

  18. Three-dimensional sparse-aperture moving-target imaging

    NASA Astrophysics Data System (ADS)

    Ferrara, Matthew; Jackson, Julie; Stuff, Mark

    2008-04-01

    If a target's motion can be determined, the problem of reconstructing a 3D target image becomes a sparse-aperture imaging problem. That is, the data lie on a random trajectory in k-space, which constitutes a sparse data collection that yields very low-resolution images if backprojection or other standard imaging techniques are used. This paper investigates two moving-target imaging algorithms: the first is a greedy algorithm based on the CLEAN technique, and the second is a version of Basis Pursuit Denoising. The two imaging algorithms are compared for a realistic moving-target motion history applied to an Xpatch-generated backhoe data set.
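
    The first of the two algorithms, the CLEAN-style greedy imager, can be sketched in 1D: repeatedly find the brightest residual peak, subtract a scaled, shifted copy of the point-spread function (PSF), and record the component. This is a generic Högbom-style CLEAN, not the authors' 3D implementation; the PSF and source positions are invented for illustration:

```python
import numpy as np

def clean(dirty, psf, gain=0.2, n_iter=1000, threshold=1e-8):
    """Hogbom-style CLEAN: iteratively deconvolve point sources from a
    'dirty' image formed by convolution with a known PSF."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    center = len(psf) // 2
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        amp = residual[peak]
        if abs(amp) < threshold:
            break
        components[peak] += gain * amp
        # Subtract the shifted, scaled PSF from the residual.
        for k, h in enumerate(psf):
            idx = peak + k - center
            if 0 <= idx < len(residual):
                residual[idx] -= gain * amp * h
    return components, residual
```

    With a loop gain below 1, well-separated sources are peeled off geometrically, which is the behavior the greedy sparse-aperture imager relies on.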

  19. Sub-Nyquist signal-reconstruction-free operational modal analysis and damage detection in the presence of noise

    NASA Astrophysics Data System (ADS)

    Gkoktsi, Kyriaki; Giaralis, Agathoklis; TauSiesakul, Bamrung

    2016-04-01

    Motivated by a need to reduce the energy consumption in wireless sensors for vibration-based structural health monitoring (SHM) associated with data acquisition and transmission, this paper puts forth a novel approach for undertaking operational modal analysis (OMA) and damage localization relying on compressed vibration measurements sampled at rates well below the Nyquist rate. Specifically, non-uniform deterministic sub-Nyquist multi-coset sampling of response acceleration signals in white-noise-excited linear structures is considered in conjunction with a power spectrum blind sampling/estimation technique which retrieves/samples the power spectral density matrix from arrays of sensors directly from the sub-Nyquist measurements (i.e., in the compressed domain), without signal reconstruction in the time domain and without posing any signal sparsity conditions. The frequency domain decomposition algorithm is then applied to the power spectral density matrix to extract natural frequencies and mode shapes as a standard OMA step. Further, the modal strain energy index (MSEI) is considered for damage localization based on the mode shapes extracted directly from the compressed measurements. The effectiveness and accuracy of the proposed approach is numerically assessed by considering simulated vibration data pertaining to a white-noise-excited simply supported beam, in its healthy state and in 3 damaged states, contaminated with Gaussian white noise. Good accuracy is achieved in estimating mode shapes (quantified in terms of the modal assurance criterion) and natural frequencies from an array of 15 multi-coset devices sampling at a rate 70% below the Nyquist rate, for SNRs as low as 10 dB. Damage localization of equal level/quality is also achieved by the MSEI applied to mode shapes derived from noisy sub-Nyquist (70% compression) and Nyquist measurements for all damaged states considered. Overall, the furnished numerical results demonstrate that the herein considered sub

  20. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    PubMed Central

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  1. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    SciTech Connect

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  2. Sparse deformable models with application to cardiac motion analysis.

    PubMed

    Yu, Yang; Zhang, Shaoting; Huang, Junzhou; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Deformable models have been widely used with success in medical image analysis. They combine bottom-up information derived from image appearance cues, with top-down shape-based constraints within a physics-based formulation. However, in many real world problems the observations extracted from the image data often contain gross errors, which adversely affect the deformation accuracy. To alleviate this issue, we introduce a new family of deformable models that are inspired from compressed sensing, a technique for efficiently reconstructing a signal based on its sparseness in some domain. In this problem, we employ sparsity to represent the outliers or gross errors, and combine it seamlessly with deformable models. The proposed new formulation is applied to the analysis of cardiac motion, using tagged magnetic resonance imaging (tMRI), where the automated tagging line tracking results are very noisy due to the poor image quality. Our new deformable models track the heart motion robustly, and the resulting strains are consistent with those calculated from manual labels. PMID:24683970

  3. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.

  4. Sparse Regulatory Networks

    PubMed Central

    James, Gareth M.; Sabatti, Chiara; Zhou, Nengfeng; Zhu, Ji

    2011-01-01

    In many organisms the expression levels of each gene are controlled by the activation levels of known “Transcription Factors” (TF). A problem of considerable interest is that of estimating the “Transcription Regulation Networks” (TRN) relating the TFs and genes. While the expression levels of genes can be observed, the activation levels of the corresponding TFs are usually unknown, greatly increasing the difficulty of the problem. Based on previous experimental work, it is often the case that partial information about the TRN is available. For example, certain TFs may be known to regulate a given gene or in other cases a connection may be predicted with a certain probability. In general, the biology of the problem indicates there will be very few connections between TFs and genes. Several methods have been proposed for estimating TRNs. However, they all suffer from problems such as unrealistic assumptions about prior knowledge of the network structure or computational limitations. We propose a new approach that can directly utilize prior information about the network structure in conjunction with observed gene expression data to estimate the TRN. Our approach uses L1 penalties on the network to ensure a sparse structure. This has the advantage of being computationally efficient as well as making many fewer assumptions about the network structure. We use our methodology to construct the TRN for E. coli and show that the estimate is biologically sensible and compares favorably with previous estimates. PMID:21625366

  5. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-05-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.

  7. Analysis of distortions in the velocity profiles of suspension flows inside a light-scattering medium upon their reconstruction from the optical coherence Doppler tomograph signal

    SciTech Connect

    Bykov, A V; Kirillin, M Yu; Priezzhev, A V

    2005-11-30

    Model signals from one and two plane flows of a particle suspension are obtained for an optical coherence Doppler tomograph (OCDT) by the Monte Carlo method. The optical properties of the particles mimic those of non-aggregating erythrocytes. The flows are considered in a stationary scattering medium with optical properties close to those of skin. It is shown that, as the depth of the flow increases, the flow velocity determined from the OCDT signal becomes smaller than the specified velocity and the reconstructed profile extends in the direction of the distant boundary, which is accompanied by a shift of its maximum. In the case of two flows, an increase in the velocity of the near-surface flow leads to overestimated velocities in the reconstructed profile of the second flow. Numerical simulations were performed using a multiprocessor parallel-architecture computer. (laser applications in medicine)

  8. Completeness for sparse potential scattering

    SciTech Connect

    Shen, Zhongwei

    2014-01-15

    The present paper is devoted to the scattering theory of a class of continuum Schrödinger operators with deterministic sparse potentials. We first establish the limiting absorption principle for both modified free resolvents and modified perturbed resolvents. This actually is a weak form of the classical limiting absorption principle. We then prove the existence and completeness of local wave operators, which, in particular, imply the existence of wave operators. Under additional assumptions on the sparse potential, we prove the completeness of wave operators. In the context of continuum Schrödinger operators with sparse potentials, this paper gives the first proof of the completeness of wave operators.

  9. A novel method for the line-of-response and time-of-flight reconstruction in TOF-PET detectors based on a library of synchronized model signals

    NASA Astrophysics Data System (ADS)

    Moskal, P.; Zoń, N.; Bednarski, T.; Białas, P.; Czerwiński, E.; Gajos, A.; Kamińska, D.; Kapłon, Ł.; Kochanowski, A.; Korcyl, G.; Kowal, J.; Kowalski, P.; Kozik, T.; Krzemień, W.; Kubicz, E.; Niedźwiecki, Sz.; Pałka, M.; Raczyński, L.; Rudy, Z.; Rundel, O.; Salabura, P.; Sharma, N. G.; Silarski, M.; Słomski, A.; Smyrski, J.; Strzelecki, A.; Wieczorek, A.; Wiślicki, W.; Zieliński, M.

    2015-03-01

    A novel method of hit-time and hit-position reconstruction in scintillator detectors is described. The method is based on a comparison of detector signals with results stored in a library of synchronized model signals registered for a set of well-defined positions of scintillation points. The hit position is reconstructed as the one corresponding to the library signal most similar to the measured signal, and the time of the interaction is determined as the relative time between the measured signal and the most similar one in the library. The degree of similarity between measured and model signals is defined as the distance between the points representing the measurement and model signals in a multi-dimensional measurement space. The novelty of the method also lies in the proposed way of synchronizing the model signals, which enables direct determination of the difference between the times of flight (TOF) of annihilation quanta from the annihilation point to the detectors. The introduced method was validated using experimental data obtained with the double-strip prototype of the J-PET detector and the 22Na isotope as a source of annihilation gamma quanta. The detector was built from plastic scintillator strips with dimensions of 5 mm×19 mm×300 mm, optically connected at both sides to photomultipliers, from which signals were sampled by means of a Serial Data Analyzer. Using the introduced method, spatial and TOF resolutions of about 1.3 cm (σ) and 125 ps (σ), respectively, were established.
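
    The reconstruction step described here is, at its core, a nearest-neighbour search over a library of synchronized model waveforms; a minimal sketch (the shifted toy pulse shapes below stand in for the measured model signals):

```python
import numpy as np

def reconstruct_hit(measured, library_signals, library_positions):
    """Return the known scintillation position whose library waveform lies
    closest (Euclidean distance in the multi-dimensional measurement space)
    to the measured waveform."""
    dists = np.linalg.norm(library_signals - measured, axis=1)
    return library_positions[int(np.argmin(dists))]
```

    In the real detector each library entry is a sampled, synchronized waveform pair from both photomultipliers; here a single Gaussian pulse whose arrival time shifts with position is used as a stand-in.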

  10. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    PubMed

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models can learn useful representations, but most of them need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by optimizing two objectives simultaneously, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective and that the proposed multiobjective model can learn useful sparse features. PMID:26340790
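    The compromise described above can be pictured with a toy Pareto-front computation: among candidate encodings scored by (reconstruction error, sparsity penalty), a multiobjective search keeps only non-dominated trade-offs. A minimal sketch with hypothetical scores, not the authors' evolutionary algorithm:

```python
def pareto_front(candidates):
    """Keep candidates not dominated by any other (both objectives minimized)."""
    return [c for c in candidates
            if not any(o != c and o[0] <= c[0] and o[1] <= c[1]
                       for o in candidates)]

# (reconstruction_error, sparsity_penalty) for four hypothetical encodings
cands = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.9), (0.6, 0.6)]
front = pareto_front(cands)  # (0.6, 0.6) is dominated by (0.5, 0.5)
```

The final model would then be chosen from this front automatically, rather than by fixing a sparsity constant in advance.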

  11. Sparse-based multispectral image encryption via ptychography

    NASA Astrophysics Data System (ADS)

    Rawat, Nitin; Shi, Yishi; Kim, Byoungho; Lee, Byung-Geun

    2015-12-01

    Recently, we proposed a model for securing a ptychography-based monochromatic image encryption system via the classical photon-counting imaging (PCI) technique. In this study, we examine a single-channel multispectral sparse-based photon-counting ptychography imaging (SMPI)-based cryptosystem. A ptychography-based cryptosystem creates a complex object wave field that can be reconstructed from a series of diffraction intensity patterns through an aperture movement. The PCI sensor records only a few complex Bayer-patterned samples, which are then utilized in the decryption process. The sparse sensing and nonlinear properties of the classical PCI system, together with the scanning probes, enlarge the key space, and such a combination therefore enhances the system's security. We demonstrate that the sparse samples carry adequate information for image decryption, as well as for information authentication by means of optical correlation.

  12. Automatic target recognition via sparse representations

    NASA Astrophysics Data System (ADS)

    Estabridis, Katia

    2010-04-01

    Automatic target recognition (ATR) based on the emerging theory of compressed sensing (CS) can considerably improve the accuracy, speed, and cost associated with these types of systems. An image-based ATR algorithm has been built upon this theory and can perform target detection and recognition in a low-dimensional space. Compressed dictionaries A are formed to include rotational information for a scale of interest. The algorithm seeks to identify y (the test sample) as a linear combination of the dictionary elements: y = Ax, where A ∈ R^(n×m) (n < m) and x is a sparse vector whose non-zero entries identify the input y. The signal x will be sparse with respect to the dictionary A as long as y is a valid target. The algorithm can reject clutter and background, which are part of the input image. The detection and recognition problems are solved by finding the sparse solution to the underdetermined system y = Ax via Orthogonal Matching Pursuit (OMP) and l1-minimization techniques. Visible and MWIR imagery collected by the Army Night Vision and Electronic Sensors Directorate (NVESD) was used to test the algorithm. Results show average detection and recognition rates above 95% for targets at ranges up to 3 km for both image modalities.
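    The greedy recovery step can be illustrated with matching pursuit, the simpler cousin of the OMP solver named above: repeatedly pick the dictionary atom most correlated with the residual and subtract its projection. A minimal sketch with a hypothetical unit-norm dictionary, not the paper's implementation:

```python
import math

def matching_pursuit(y, atoms, n_iter=2):
    """Greedily pick the unit-norm atom most correlated with the residual,
    record its coefficient, and subtract its projection."""
    residual = list(y)
    coeffs = {}
    for _ in range(n_iter):
        corr = [sum(a * r for a, r in zip(atom, residual)) for atom in atoms]
        best = max(range(len(atoms)), key=lambda j: abs(corr[j]))
        coeffs[best] = coeffs.get(best, 0.0) + corr[best]
        residual = [r - corr[best] * a for r, a in zip(residual, atoms[best])]
    return coeffs, residual

# Hypothetical dictionary of unit-norm atoms; y = 1.0*atom0 + 0.5*atom2
atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
         (1 / math.sqrt(2), 1 / math.sqrt(2), 0.0)]
y = [1.0, 0.0, 0.5]
coeffs, residual = matching_pursuit(y, atoms)
```

The non-zero entries of the recovered sparse code identify which dictionary elements (and hence which target pose) explain the input.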

  13. Learning joint intensity-depth sparse representations.

    PubMed

    Tosic, Ivana; Drewes, Sarah

    2014-05-01

    This paper presents a method for learning overcomplete dictionaries of atoms composed of two modalities that describe a 3D scene: 1) image intensity and 2) scene depth. We propose a novel joint basis pursuit (JBP) algorithm that finds related sparse features in the two modalities using conic programming, and we integrate it into a two-step dictionary learning algorithm. JBP differs from related convex algorithms in that it finds joint sparsity models with different atoms and different coefficient values for intensity and depth. This is crucial for recovering generative models in which the same sparse underlying causes (3D features) give rise to different signals (intensity and depth). We give a bound on the recovery error of the sparse coefficients obtained by JBP, and show numerically that JBP is superior to the group lasso algorithm. When applied to the Middlebury depth-intensity database, our learning algorithm converges to a set of related features, such as pairs of depth and intensity edges or image textures and depth slants. Finally, we show that JBP outperforms state-of-the-art methods on depth inpainting for time-of-flight and Microsoft Kinect 3D data. PMID:24723574

  14. Sparseness- and continuity-constrained seismic imaging

    NASA Astrophysics Data System (ADS)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR < 0 dB); (ii) use the sparseness and locality (both in position and angle) of directional basis functions (such as curvelets and contourlets) on the model: the reflectivity; and (iii) exploit the near invariance of these basis functions under the normal operator, i.e., the scattering-followed-by-imaging operator. Signal-to-noise ratio and the continuity along the imaged reflectors are significantly enhanced by formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grant 22R81254.]

  15. Super-sparsely view-sampled cone-beam CT by incorporating prior data.

    PubMed

    Abbas, Sajid; Min, Jonghwan; Cho, Seungryong

    2013-01-01

    Computed tomography (CT) is widely used in medicine for diagnostics and image-guided therapies, and is also popular in industrial applications for nondestructive testing. CT conventionally requires a large number of projections to produce volumetric images of a scanned object, because the conventional image reconstruction algorithm is based on filtered backprojection. This requirement may result in relatively high radiation dose to patients in medical CT unless the radiation dose at each view angle is reduced, and can entail expensive scanning time and effort in industrial CT applications. Sparse-view CT may provide a viable option to address both issues: high radiation dose and expensive scanning effort. However, image reconstruction from sparsely sampled data in CT is in general very challenging, and much effort has been made to develop algorithms for this image reconstruction problem. An image total-variation minimization algorithm inspired by compressive sensing theory has recently been developed; it exploits the sparseness of the image derivative magnitude and can reconstruct images from sparse-view data with a quality similar to that of images conventionally reconstructed from many views. In successive CT scans, a prior CT image of an object and its projection data may be readily available, and the current CT image may differ little from the prior image. Considering the sparseness of such a difference image between successive scans, reconstruction of the difference image may be achieved from very sparsely sampled data. In this work, we showed that one can further reduce the number of projections, resulting in a super-sparse scan, and still obtain a good-quality image reconstruction with the aid of prior data. Both numerical and experimental results are provided. PMID:23507853
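    The key observation, that the difference between successive scans is sparse and therefore easy to recover, can be sketched with soft thresholding, the proximal operator behind many l1- and TV-regularized solvers. The 1D "images" below are hypothetical, not the paper's data:

```python
def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink x toward zero by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

# Hypothetical prior and current 1D "images": one pixel truly changed,
# the rest is small noise, so the difference image is sparse.
prior   = [10.0, 12.0, 11.0, 10.0]
current = [10.1, 11.9, 14.0, 10.05]
diff = [c - p for c, p in zip(current, prior)]
sparse_diff = [soft_threshold(d, 0.2) for d in diff]   # noise suppressed
recon = [p + d for p, d in zip(prior, sparse_diff)]    # prior + sparse update
```

Because only the few genuinely changed pixels survive thresholding, far fewer measurements suffice to pin them down than would be needed for the full image.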

  16. Sparse Matrix for ECG Identification with Two-Lead Features

    PubMed Central

    Tseng, Kuo-Kun; Luo, Jiao; Wang, Wenmin; Haiting, Dong

    2015-01-01

    Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification that maps two-lead ECG signals onto a two-dimensional matrix and then employs a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods. PMID:25961074

  17. Spectrotemporal CT data acquisition and reconstruction at low dose

    SciTech Connect

    Clark, Darin P.; Badea, Cristian T.; Lee, Chang-Lung; Kirsch, David G.

    2015-11-15

    Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach that the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation of the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling.
The authors solved the 5D reconstruction

  18. Threaded Operations on Sparse Matrices

    SciTech Connect

    Sneed, Brett

    2015-09-01

    We investigate the use of sparse matrices and OpenMP multi-threading on linear algebra operations involving them. Several sparse matrix data structures are presented. Implementation of the multi-threading primarily occurs in the level-one and level-two BLAS functions used within the four algorithms investigated: the Power Method, Conjugate Gradient, Biconjugate Gradient, and Jacobi's Method. The benefits of launching threads once per high-level algorithm are explored.
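    As a rough illustration of the building blocks named above, here is a CSR (compressed sparse row) matrix-vector product driving the Power Method in plain Python; in the report's setting, the row loop is where OpenMP threads would typically be launched (a parallel-for inside the level-two BLAS kernel). The 2×2 matrix is a hypothetical example, not from the report:

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a square matrix A stored in CSR form."""
    y = []
    for row in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):   # nonzeros of this row
            s += data[k] * x[indices[k]]
        y.append(s)
    return y

def power_method(data, indices, indptr, n, iters=50):
    """Estimate the dominant eigenvalue by repeated normalized matvecs."""
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = csr_matvec(data, indices, indptr, x)
        lam = max(abs(v) for v in y)
        x = [v / lam for v in y]
    return lam

# A = [[2, 1], [1, 2]] in CSR form; its dominant eigenvalue is 3
data, indices, indptr = [2.0, 1.0, 1.0, 2.0], [0, 1, 0, 1], [0, 2, 4]
lam = power_method(data, indices, indptr, 2)
```

In C with OpenMP, the outer `for row in ...` loop would carry a `#pragma omp parallel for`, since each output row is independent; this matches the report's choice of threading the BLAS kernels rather than the high-level algorithm.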

  19. Feasibility study of sparse-angular sampling and sinogram interpolation in material decomposition with a photon-counting detector

    NASA Astrophysics Data System (ADS)

    Kim, Dohyeon; Jo, Byungdu; Park, Su-Jin; Kim, Hyemi; Kim, Hee-Joung

    2016-03-01

    Spectral computed tomography (SCT) is a promising technique for obtaining enhanced images with contrast agents and distinguishing different materials. We focused on developing an analytic reconstruction algorithm for the material decomposition technique with lower radiation exposure and shorter acquisition time. Sparse-angular sampling can reduce patient dose and scanning time for obtaining the reconstructed images. In this study, a sinogram interpolation method was used to improve the quality of material-decomposed images under sparse angular sampling. A prototype spectral CT system with a 64-pixel CZT-based photon-counting detector was used. The source-to-detector distance and the source-to-center-of-rotation distance were 1200 and 1015 mm, respectively. An x-ray spectrum at 90 kVp with a tube current of 110 μA was used. Two energy bins (23-33 keV and 34-44 keV) were set to obtain the two images for decomposed iodine and calcification. We used a PMMA phantom with a height of 50 mm and a radius of 17.5 mm. The phantom contained four materials: iodine, gadolinium, calcification, and liquid-state lipid. We evaluated the signal-to-noise ratio (SNR) of the materials to examine the significance of the sinogram interpolation method. The decomposed iodine and calcification images were obtained by a projection-based subtraction method using two energy bins with 36 projection data. The SNR in the decomposed images was improved by using the sinogram interpolation method, indicating that the signal of the decomposed material was increased and its noise was reduced. In conclusion, the sinogram interpolation method can be used in the material decomposition method with sparse-angular sampling.
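    The sinogram interpolation idea can be sketched as filling the missing angular views of one detector row by linear interpolation between the two nearest measured angles. The angle indices and values below are hypothetical, not the study's data:

```python
def interpolate_views(measured, n_views):
    """measured: {angle_index: value}. Fill the missing views of one
    sinogram row by linear interpolation between the nearest measured angles."""
    angles = sorted(measured)
    out = []
    for a in range(n_views):
        if a in measured:
            out.append(measured[a])
            continue
        lo = max(x for x in angles if x < a)   # nearest measured angle below
        hi = min(x for x in angles if x > a)   # nearest measured angle above
        w = (a - lo) / (hi - lo)
        out.append((1 - w) * measured[lo] + w * measured[hi])
    return out

# Every other view of a 7-view row is measured; the gaps are interpolated.
row = interpolate_views({0: 1.0, 2: 3.0, 4: 5.0, 6: 3.0}, 7)
```

Repeating this for every detector pixel produces a densified sinogram, which is then fed to the analytic reconstruction before material decomposition.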

  20. Reconstructing WIMP properties through an interplay of signal measurements in direct detection, Fermi-LAT, and CTA searches for dark matter

    NASA Astrophysics Data System (ADS)

    Roszkowski, Leszek; Sessolo, Enrico Maria; Trojanowski, Sebastian; Williams, Andrew J.

    2016-08-01

    We examine the projected ability to reconstruct the mass, scattering cross section, and annihilation cross section of dark matter in the new generation of large underground detectors, XENON-1T, SuperCDMS, and DarkSide-G2, in combination with diffuse gamma radiation from an expected 15 years of Fermi-LAT observation of 46 local spiral dwarf galaxies and the projected CTA sensitivity to a signal from the Galactic Center. To this end we consider several benchmark points spanning a wide range of WIMP masses, different annihilation final states, and event rates large enough to warrant detection in one or more experiments. As previously shown, below some 100 GeV only direct detection experiments will in principle be able to reconstruct the WIMP mass well. If a signal is also detected at Fermi-LAT, this may additionally help restrict σv and the allowed decay branching rates. In the intermediate range between some 100 GeV and a few hundred GeV, direct and indirect detection experiments can be used in complementarity to ameliorate the respective determinations, which in individual experiments can at best be rather poor, making WIMP reconstruction in this mass range very challenging. At large WIMP mass, ~1 TeV, CTA will have the ability to reconstruct the mass, annihilation cross section, and allowed decay branching rates to very good precision for the τ+τ‑ or purely leptonic final state, good precision for the W+W‑ case, and rather poor precision for bb̄. A substantial improvement can potentially be achieved by reducing the systematic uncertainties, increasing exposure, or by an additional measurement at Fermi-LAT that would help reconstruct the annihilation cross section and the allowed branching fractions to different final states.

  1. Algebraic reconstruction combined with the signal space separation method for the inverse magnetoencephalography problem with a dipole-quadrupole source

    NASA Astrophysics Data System (ADS)

    Nara, T.; Koiwa, K.; Takagi, S.; Oyama, D.; Uehara, G.

    2014-05-01

    This paper presents an algebraic reconstruction method for dipole-quadrupole sources using magnetoencephalography data. Compared to conventional methods based on the equivalent current dipole source model, our method can more accurately reconstruct two close, oppositely directed sources. Numerical simulations show that two sources on either side of the longitudinal fissure of the cerebrum are stably estimated. The method is verified using a quadrupolar source phantom composed of two isosceles-triangle coils with parallel bases.

  2. Iterative 4D cardiac micro-CT image reconstruction using an adaptive spatio-temporal sparsity prior

    NASA Astrophysics Data System (ADS)

    Ritschl, Ludwig; Sawall, Stefan; Knaup, Michael; Hess, Andreas; Kachelrieß, Marc

    2012-03-01

    Temporal-correlated image reconstruction, also known as 4D CT image reconstruction, is a big challenge in computed tomography. The reason for incorporating the temporal domain into the reconstruction is motion of the scanned object, which would otherwise lead to motion artifacts. The standard method for 4D CT image reconstruction is to extract single motion phases and reconstruct them separately. These reconstructions can suffer from undersampling artifacts due to the low number of projections used in each phase. There are different iterative methods that try to incorporate a priori knowledge to compensate for these artifacts. In this paper we follow this strategy. The cost function we use is a higher-dimensional cost function that accounts for the sparseness of the measured signal in the spatial and temporal directions. This leads to the definition of a higher-dimensional total variation. The method is validated using in vivo cardiac micro-CT mouse data. Additionally, we compare the results to phase-correlated reconstructions using the FDK algorithm and to a total-variation-constrained reconstruction in which the total variation term is defined only in the spatial domain. The reconstructed datasets show strong improvements in terms of artifact reduction and low-contrast resolution compared to the other methods. Thereby the temporal resolution of the reconstructed signal is not affected.

  3. Biomarker reconstructions of marine and terrestrial climate signals from marginal marine environments: new results from high-resolution archives

    NASA Astrophysics Data System (ADS)

    Bendle, J. A.; Moossen, H.; Jamieson, R.; Das, S. K.; Quillmann, U.; Jennings, A. E.; Andrews, J. T.; Howe, J.; Cage, A.; Austin, W. E.

    2010-12-01

    One of the key questions facing climate scientists, policy makers and the public today is how important natural variability is in explaining global warming. Sedimentary archives from marginal marine environments, such as fjordic (or sea-loch) environments, typically have higher sediment accumulation rates than deeper ocean sites and thus provide suitably expanded archives of the Holocene against which the 20th-century changes can be compared. Moreover, with suitable temporal resolution, the impact of Holocene rapid climate change episodes, such as the 8.2 kyr event, can be constrained. Since fjords bridge the land-ocean interface, palaeo-environmental records from fjordic environments provide a unique opportunity to study the link between marine and terrestrial climate. Here we present millennial- to centennial-scale, independent records of marine and terrestrial change in two fjordic cores: from Ísafjardardjúp, northwest Iceland (core MD99-2266; location: 66° 13' 77'' N, 23° 15' 93'' W; 106 m water depth) and from Loch Sunart, northwest Scotland (core MD-04 2832; location: 56° 40.19' N, 05° 52.21' W; 50 m water depth). The cores are extremely high resolution, with 1 cm of sediment representing <10 years of accumulation, and come from sites influenced by disparate branches of the North Atlantic Drift (i.e. the distal Gulf Stream), the Irminger and Shetland Currents. We reconstruct sea surface temperature (SST) and terrestrial mean annual air temperature (MAT) derived from alkenone and tetraether biomarkers (using the UK37' and MBT/CBT-MAT indices, respectively). Additional insights into terrestrial environmental change are derived from proxy records of soil pH (from the tetraether CBT proxy) and, in the case of MD99-2266, from higher plant wax distributions. The timing of the millennial-scale SST variability in the cores should give insights into the degree of phasing of millennial-scale climate variability between the western (Irminger Current) and eastern (SC

  4. Method and apparatus for distinguishing actual sparse events from sparse event false alarms

    DOEpatents

    Spalding, Richard E.; Grotbeck, Carter L.

    2000-01-01

    Remote sensing method and apparatus wherein sparse optical events are distinguished from false events. "Ghost" images of actual optical phenomena are generated using an optical beam splitter and optics configured to direct the split beams to a single sensor or segmented sensor. True optical signals are distinguished from false signals or noise based on whether the ghost image is present or absent. The invention obviates the need for dual-sensor systems to effect a false-target detection capability, thus significantly reducing system complexity and cost.

  5. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures in long-term EEG recordings employing log-Euclidean Gaussian kernel-based sparse representation (SR). Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs, and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which makes it faster and more efficient than traditional seizure detection methods. PMID:26906674

  6. Estimation of sparse null space functions for compressed sensing in SPECT

    NASA Astrophysics Data System (ADS)

    Mukherjee, Joyeeta Mitra; Sidky, Emil; King, Michael A.

    2014-03-01

    Compressed sensing (CS) [1] is a novel sensing (acquisition) paradigm that applies to discrete-to-discrete system models and asserts exact recovery of a sparse signal from far fewer measurements than the number of unknowns [1-2]. Successful applications of CS may be found in MRI [3, 4] and optical imaging [5]. Sparse reconstruction methods exploiting CS principles have been investigated for CT [6-8] to reduce radiation dose, and to gain imaging speed and image quality in optical imaging [9]. In this work the objective is to investigate the applicability of compressed sensing principles to a faster brain imaging protocol on a hybrid-collimator SPECT system. As a proof-of-principle we study the null space of the fan-beam collimator component of our system with regard to a particular imaging object. We illustrate the impact of object sparsity on the null space using pixel and Haar wavelet basis functions to represent a piecewise smooth phantom chosen as our object of interest.

  7. Reconstruction of extended Petri nets from time series data and its application to signal transduction and to gene regulatory networks

    PubMed Central

    2011-01-01

    Background Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful for combining data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model. PMID:21762503

  8. High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures

    PubMed Central

    Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando

    2011-01-01

    Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017

  9. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that this approach requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential, but have not yet developed a highly refined and efficient algorithm.
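    The crucial observation, that a piecewise-smooth object is sparse in a wavelet basis, can be checked in a few lines: a dense piecewise-constant vector collapses to two non-zero coefficients under an orthonormal Haar transform. A minimal sketch of that observation only, not the paper's preconditioner construction:

```python
import math

def haar(v):
    """Full orthonormal Haar transform of a length-2^k vector."""
    out = list(v)
    n = len(out)
    while n > 1:
        avg = [(out[2 * i] + out[2 * i + 1]) / math.sqrt(2) for i in range(n // 2)]
        det = [(out[2 * i] - out[2 * i + 1]) / math.sqrt(2) for i in range(n // 2)]
        out[:n] = avg + det   # coarse averages first, then detail coefficients
        n //= 2
    return out

smooth = [1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0]   # piecewise constant
coeffs = haar(smooth)
nonzero = [c for c in coeffs if abs(c) > 1e-12]      # only 2 of 8 survive
```

Applying the same transform to the rows and columns of a matrix with a piecewise-smooth inverse is what makes a sparse approximate inverse feasible in the wavelet basis.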

  10. Learning Sparse Representations of Depth

    NASA Astrophysics Data System (ADS)

    Tosic, Ivana; Olshausen, Bruno A.; Culpepper, Benjamin J.

    2011-09-01

    This paper introduces a new method for learning and inferring sparse representations of depth (disparity) maps. The proposed algorithm relaxes the usual assumption of the stationary noise model in sparse coding. This enables learning from data corrupted with spatially varying noise or uncertainty, typically obtained by laser range scanners or structured light depth cameras. Sparse representations are learned from the Middlebury database disparity maps and then exploited in a two-layer graphical model for inferring depth from stereo, by including a sparsity prior on the learned features. Since they capture higher-order dependencies in the depth structure, these priors can complement smoothness priors commonly used in depth inference based on Markov Random Field (MRF) models. Inference on the proposed graph is achieved using an alternating iterative optimization technique, where the first layer is solved using an existing MRF-based stereo matching algorithm, then held fixed as the second layer is solved using the proposed non-stationary sparse coding algorithm. This leads to a general method for improving solutions of state of the art MRF-based depth estimation algorithms. Our experimental results first show that depth inference using learned representations leads to state of the art denoising of depth maps obtained from laser range scanners and a time of flight camera. Furthermore, we show that adding sparse priors improves the results of two depth estimation methods: the classical graph cut algorithm by Boykov et al. and the more recent algorithm of Woodford et al.

  11. Precipitation reconstruction for the northwestern Chinese Altay since 1760 indicates the drought signals of the northern part of inner Asia.

    PubMed

    Chen, Feng; Yuan, Yujiang; Zhang, Tongwen; Shang, Huaming

    2016-03-01

    Based on the significant positive correlations between the regional tree-ring width chronology and local climate data, the total precipitation of the previous July to the current June was reconstructed back to AD 1760 for the northwestern Chinese Altay. The reconstruction model accounts for 40.7% of the actual precipitation variance during the calibration period from 1959 to 2013. Wet conditions prevailed during the periods 1764-1777, 1784-1791, 1795-1805, 1829-1835, 1838-1846, 1850-1862, 1867-1872, 1907-1916, 1926-1931, 1935-1943, 1956-1961, 1968-1973, 1984-1997, and 2002-2006. Dry episodes occurred during 1760-1763, 1778-1783, 1792-1794, 1806-1828, 1836-1837, 1847-1849, 1863-1866, 1873-1906, 1917-1925, 1932-1934, 1944-1955, 1962-1967, 1974-1983, 1998-2001, and 2007-2012. Spectral analysis of the precipitation reconstruction shows the existence of several cycles (15.3, 4.5, 3.1, 2.7, and 2.1 years). The significant correlations with the gridded precipitation dataset revealed that the reconstruction represents the precipitation variation over a large area of the northern part of inner Asia. A comparison with the precipitation reconstruction from the southern Chinese Altay supports the high level of confidence in the reconstruction for the northwestern Chinese Altay. Precipitation variation of the northwestern Chinese Altay is positively correlated with sea surface temperatures in the tropical oceans, suggesting a possible linkage of this precipitation variation to the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). The synoptic climatology analysis reveals a relationship between anomalous atmospheric circulation and extreme climate events in the northwestern Chinese Altay. PMID:26232944
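    A calibration of the kind described, regressing observed precipitation on a tree-ring width index and reporting explained variance (r², the quantity behind statements such as "accounts for 40.7% of the variance"), can be sketched with ordinary least squares. The chronology values and precipitation totals below are hypothetical:

```python
def ols(x, y):
    """Ordinary least squares fit y ≈ slope*x + intercept, with r^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy * sxy / (sxx * syy)   # fraction of variance accounted for
    return slope, intercept, r2

ring_index = [0.8, 1.0, 1.2, 0.9, 1.1]            # hypothetical chronology
precip_mm = [180.0, 210.0, 260.0, 200.0, 240.0]   # hypothetical July-June totals
slope, intercept, r2 = ols(ring_index, precip_mm)
```

Once calibrated over the instrumental period, the fitted line is applied to the pre-instrumental chronology to extend the precipitation series back in time.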

  12. A Novel Time-Varying Spectral Filtering Algorithm for Reconstruction of Motion Artifact Corrupted Heart Rate Signals During Intense Physical Activities Using a Wearable Photoplethysmogram Sensor.

    PubMed

    Salehizadeh, Seyed M A; Dao, Duy; Bolkhovsky, Jeffrey; Cho, Chae; Mendelson, Yitzhak; Chon, Ki H

    2015-01-01

    Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem. This is because strenuous and high intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing time-varying spectra of PPG and accelerometer data, those frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Process. Cup Database recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Process. Cup Database recorded from 11 subjects while performing forearm and upper arm exercise; and (3) the Chon Lab dataset, comprising 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs which were used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min, respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on only the treadmill experiment datasets (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be
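    The core SpaMA idea, comparing PPG and accelerometer spectra and discarding PPG peaks that coincide with motion peaks, can be illustrated on one analysis window with synthetic signals. The sampling rate, window length, and guard band below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

fs = 50.0                       # PPG/accelerometer sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
hr_hz, cadence_hz = 2.0, 2.6    # true heart rate (120 bpm) and step rate, illustrative
ppg = np.sin(2 * np.pi * hr_hz * t) + 0.8 * np.sin(2 * np.pi * cadence_hz * t)
acc = np.sin(2 * np.pi * cadence_hz * t)    # motion reference

win = int(8 * fs)               # one 8 s analysis window (SpaMA slides this in time)
freqs = np.fft.rfftfreq(win, 1 / fs)

def psd(x):
    seg = x[:win] * np.hanning(win)
    return np.abs(np.fft.rfft(seg)) ** 2

p_ppg, p_acc = psd(ppg), psd(acc)

# Suppress PPG bins near the dominant accelerometer peak, then take the
# strongest remaining PPG peak in a plausible heart-rate band as the HR.
motion_f = freqs[np.argmax(p_acc)]
p_clean = np.where(np.abs(freqs - motion_f) < 0.3, 0.0, p_ppg)
band = (freqs > 0.5) & (freqs < 3.5)
hr_est = freqs[band][np.argmax(p_clean[band])]
print(f"estimated HR: {hr_est * 60:.0f} beats/min")
```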
  14. Cluster-enhanced sparse approximation of overlapping ultrasonic echoes.

    PubMed

    Mor, Etai; Aladjem, Mayer; Azoulay, Amnon

    2015-02-01

    Ultrasonic pulse-echo methods have been used extensively in non-destructive testing of layered structures. In acoustic measurements on thin layers, the resulting echoes from two successive interfaces overlap in time, making it difficult to assess the individual echo parameters. Over the last decade, sparse approximation methods have been extensively used to address this issue. These methods employ a large dictionary of elementary functions (atoms) and attempt to select the smallest subset of atoms (sparsest approximation) that represent the ultrasonic signal accurately. In this paper, we propose the cluster-enhanced sparse approximation (CESA) method for estimating overlapping ultrasonic echoes. CESA is specifically adapted to deal with a large number of signals acquired during an ultrasonic scan. It incorporates two principal algorithms. The first is a clustering algorithm, which divides a set of signals comprising an ultrasonic scan into groups of signals that can be approximated by the same set of atoms. The second is a two-stage iterative algorithm, which alternates between update of the atoms associated with each cluster, and re-clustering of the signals according to the updated atoms. Because CESA operates on clusters of signals, it achieves improved results in terms of approximation error and computation time compared with conventional sparse methods, which operate on each signal separately. The superior ability of CESA to approximate highly overlapping ultrasonic echoes is demonstrated through simulation and experiments on adhesively bonded structures. PMID:25643086

  15. Photoplethysmograph signal reconstruction based on a novel motion artifact detection-reduction approach. Part II: Motion and noise artifact removal.

    PubMed

    Salehizadeh, S M A; Dao, Duy K; Chong, Jo Woon; McManus, David; Darling, Chad; Mendelson, Yitzhak; Chon, Ki H

    2014-11-01

    We introduce a new method to reconstruct motion and noise artifact (MNA) contaminated photoplethysmogram (PPG) data. A method to detect MNA corrupted data is provided in a companion paper. Our reconstruction algorithm is based on an iterative motion artifact removal (IMAR) approach, which utilizes the singular spectral analysis algorithm to remove MNA artifacts so that the most accurate estimates of uncorrupted heart rates (HRs) and arterial oxygen saturation (SpO2) values recorded by a pulse oximeter can be derived. Using both computer simulations and three different experimental data sets, we show that the proposed IMAR approach can reliably reconstruct MNA corrupted data segments, as the estimated HR and SpO2 values do not significantly deviate from the uncorrupted reference measurements. Comparison of the accuracy of reconstruction of the MNA corrupted data segments between our IMAR approach and the time-domain independent component analysis (TD-ICA) is made for all data sets as the latter method has been shown to provide good performance. For simulated data, there were no significant differences in the reconstructed HR and SpO2 values starting from 10 dB down to -15 dB for both white and colored noise contaminated PPG data using IMAR; for TD-ICA, significant differences were observed starting at 10 dB. Two experimental PPG data sets, created with contrived MNA by having subjects perform random forehead and rapid side-to-side finger movements, show that the performance of the IMAR approach was quite accurate, as non-significant differences in the reconstructed HR and SpO2 were found compared to non-contaminated reference values in most subjects. In comparison, the accuracy of the TD-ICA was poor as there were significant differences in reconstructed HR and SpO2 values in most subjects. For non-contrived MNA corrupted PPG data, which were collected with subjects performing walking and stair climbing tasks, the IMAR significantly
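    The singular spectral analysis (SSA) step at the heart of IMAR can be sketched as follows: embed the signal in a Hankel trajectory matrix, keep the leading singular components, and map back by diagonal averaging. The window length, rank, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, L = 300, 60                      # signal length and SSA window (assumed)
t = np.arange(n)
clean = np.sin(2 * np.pi * t / 50)          # pulsatile component
noisy = clean + 0.6 * rng.normal(size=n)    # noise-corrupted PPG stand-in

# Embed into a Hankel trajectory matrix and decompose
K = n - L + 1
X = np.column_stack([noisy[i:i + L] for i in range(K)])
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Keep the leading pair of components (a single oscillation occupies two)
rank = 2
Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Diagonal averaging maps the low-rank matrix back to a time series
recon = np.zeros(n)
count = np.zeros(n)
for j in range(K):
    recon[j:j + L] += Xr[:, j]
    count[j:j + L] += 1
recon /= count

rms = np.sqrt(np.mean((recon - clean) ** 2))
print("rms error:", rms)
```

    IMAR iterates this kind of decomposition to separate artifact components from the pulsatile signal; the sketch shows only a single SSA pass.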

  16. Partially sparse imaging of stationary indoor scenes

    NASA Astrophysics Data System (ADS)

    Ahmad, Fauzia; Amin, Moeness G.; Dogaru, Traian

    2014-12-01

    In this paper, we exploit the notion of partial sparsity for scene reconstruction associated with through-the-wall radar imaging of stationary targets under reduced data volume. Partial sparsity implies that the scene being imaged consists of a sparse part and a dense part, with the support of the latter assumed to be known. For the problem at hand, sparsity is represented by a few stationary indoor targets, whereas the high scene density is defined by exterior and interior walls. Prior knowledge of wall positions and extent may be available either through building blueprints or from prior surveillance operations. The contributions of the exterior and interior walls are removed from the data through the use of projection matrices, which are determined from wall- and corner-specific dictionaries. The projected data, with enhanced sparsity, are then processed using l1-norm reconstruction techniques. Numerical electromagnetic data are used to demonstrate the effectiveness of the proposed approach for imaging stationary indoor scenes using a reduced set of measurements.
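    The pipeline described, project the data onto the orthogonal complement of the known dense (wall) subspace, then solve an l1-regularized recovery for the sparse targets, can be sketched as below. The dimensions, the random matrices standing in for the wall dictionary and measurement operator, and the ISTA solver are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n_wall, n_scene = 80, 10, 120
A_wall = rng.normal(size=(m, n_wall))            # known wall-return dictionary
A = rng.normal(size=(m, n_scene)) / np.sqrt(m)   # measurement matrix for targets

x = np.zeros(n_scene)
x[[15, 70, 95]] = [1.0, -0.8, 0.6]               # three sparse targets
y = A @ x + A_wall @ rng.normal(size=n_wall)     # data = targets + wall clutter

# Project the data onto the orthogonal complement of the wall subspace
P = np.eye(m) - A_wall @ np.linalg.pinv(A_wall)
yp, Ap = P @ y, P @ A

# ISTA for the l1 problem: min 0.5*||Ap z - yp||^2 + lam*||z||_1
lam = 0.01
step = 1.0 / np.linalg.norm(Ap, 2) ** 2
z = np.zeros(n_scene)
for _ in range(1000):
    g = z - step * Ap.T @ (Ap @ z - yp)
    z = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print("recovered target support:", np.where(np.abs(z) > 0.25)[0])
```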

  17. Anisotropic interpolation of sparse generalized image samples.

    PubMed

    Bourquard, Aurélien; Unser, Michael

    2013-02-01

    Practical image-acquisition systems are often modeled as a continuous-domain prefilter followed by an ideal sampler, where generalized samples are obtained after convolution with the impulse response of the device. In this paper, our goal is to interpolate images from a given subset of such samples. We express our solution in the continuous domain, considering consistent resampling as a data-fidelity constraint. To make the problem well posed and ensure edge-preserving solutions, we develop an efficient anisotropic regularization approach that is based on an improved version of the edge-enhancing anisotropic diffusion equation. Following variational principles, our reconstruction algorithm minimizes successive quadratic cost functionals. To ensure fast convergence, we solve the corresponding sequence of linear problems by using multigrid iterations that are specifically tailored to their sparse structure. We conduct illustrative experiments and discuss the potential of our approach both in terms of algorithmic design and reconstruction quality. In particular, we present results that use as little as 2% of the image samples. PMID:22968212

  18. Robust Sparse Sensing Using Weather Radar

    NASA Astrophysics Data System (ADS)

    Mishra, K. V.; Kruger, A.; Krajewski, W. F.; Xu, W.

    2014-12-01

    The ability of a weather radar to detect weak echoes is limited by the presence of noise or unwanted echoes. Some of these unwanted signals originate externally to the radar system, such as cosmic noise, radome reflections, interference from co-located radars, and power transmission lines. The internal source of noise in a microwave radar receiver is mainly thermal. The thermal noise from various microwave devices in the radar receiver tends to lower the signal-to-noise ratio, thereby masking the weaker signals. Recently, the compressed sensing (CS) technique has emerged as a novel signal sampling paradigm that allows perfect reconstruction of sparse signals sampled at rates below the Nyquist rate. Many radar and remote sensing applications require efficient and rapid data acquisition. The application of CS to weather radars may allow for faster target update rates without compromising the accuracy of target information. In our previous work, we demonstrated recovery of an entire precipitation scene from its compressed-sensed version by using the matrix completion approach. In this study, we characterize the performance of such a CS-based weather radar in the presence of additive noise. We use a signal model where the precipitation signals form a low-rank matrix that is corrupted with (bounded) noise. Using recent advances in algorithms for matrix completion from few noisy observations, we reconstruct the precipitation scene with reasonable accuracy. We test and demonstrate our approach using the data collected by Iowa X-band Polarimetric (XPOL) weather radars.
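    Noisy matrix completion of a low-rank "scene" can be sketched with a simple iterative SVD-imputation scheme (not the authors' algorithm; the size, rank, sampling fraction, and noise level are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 60, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))      # low-rank "scene"
mask = rng.random((n, n)) < 0.4                            # 40% of entries observed
Y = np.where(mask, M + 0.05 * rng.normal(size=(n, n)), 0)  # noisy samples

# Iterative SVD imputation: alternate a rank-r projection with re-imposing
# the observed (noisy) entries.
X = Y.copy()
for _ in range(100):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U[:, :r] * s[:r]) @ Vt[:r]
    X[mask] = Y[mask]

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative reconstruction error: {rel_err:.3f}")
```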

  19. A scalable 2-D parallel sparse solver

    SciTech Connect

    Kothari, S.C.; Mitra, S.

    1995-12-01

    Scalability beyond a small number of processors, typically 32 or less, is known to be a problem for existing parallel general sparse (PGS) direct solvers. This paper presents a PGS direct solver for general sparse linear systems on distributed memory machines. The algorithm is based on the well-known sequential sparse algorithm Y12M. To achieve efficient parallelization, a 2-D scattered decomposition of the sparse matrix is used. The proposed algorithm is more scalable than existing parallel sparse direct solvers. Its scalability is evaluated on a 256 processor nCUBE2s machine using Boeing/Harwell benchmark matrices.
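    A 2-D scattered (cyclic) decomposition assigns entry (i, j) to process (i mod P_r, j mod P_c) on a logical process grid, so every row and column is spread across all processes. A minimal illustration of the mapping (grid size and matrix dimension are arbitrary):

```python
from collections import Counter

P_r, P_c = 4, 4           # logical 2-D process grid (16 "processors")
n = 12                    # matrix dimension (illustrative)

# 2-D scattered decomposition: entry (i, j) is owned by process
# (i mod P_r, j mod P_c).
owner = [[(i % P_r, j % P_c) for j in range(n)] for i in range(n)]

# Each process receives work from all over the matrix, which balances load
# even when the nonzeros of a sparse matrix are clustered.
load = Counter(p for row in owner for p in row)
print("entries per process:", set(load.values()))
```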

  20. Approximation and compression with sparse orthonormal transforms.

    PubMed

    Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel

    2015-08-01

    We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
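    The KLT baseline that SOTs extend can be illustrated on a synthetic ensemble of correlated "patches": the transform is the eigenbasis of the patch covariance, and its energy compaction is what makes it a natural reference point. The patch model below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic patch ensemble with strong correlation (a stand-in for image blocks)
n, N = 8, 2000
base = rng.normal(size=(N, 1))
patches = base @ np.ones((1, n)) + 0.3 * rng.normal(size=(N, n))

# KLT: eigenvectors of the patch covariance, sorted by decreasing eigenvalue
C = np.cov(patches, rowvar=False)
w, V = np.linalg.eigh(C)
V = V[:, ::-1]                      # principal component first

coef = patches @ V                  # transform coefficients
energy = (coef ** 2).mean(axis=0)
compaction = energy[0] / energy.sum()
print(f"energy in first KLT coefficient: {compaction:.1%}")
```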

  1. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering.

    PubMed

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary is composed of an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children. PMID:27199650
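    The two-stage idea, represent each voxel's time course over a dictionary of candidate time courses and then k-means-cluster the coefficient vectors, can be sketched on synthetic data. SDLC solves the joint optimization; the sketch below uses a fixed dictionary and plain least-squares codes as a simplified stand-in, with all sizes and signals assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_vox = 100, 60

# Two "functional regions": voxels driven by one of two time courses
d1 = np.sin(np.linspace(0, 8 * np.pi, T))
d2 = np.cos(np.linspace(0, 6 * np.pi, T))
labels_true = np.repeat([0, 1], n_vox // 2)
S = np.where(labels_true[:, None] == 0, d1, d2) + 0.3 * rng.normal(size=(n_vox, T))

# Dictionary of candidate time courses (the two generators plus distractors)
D = np.column_stack([d1, d2, rng.normal(size=(T, 4))])
D /= np.linalg.norm(D, axis=0)

# Representation coefficients per voxel (least squares as a simple stand-in
# for the sparse coding step)
codes = np.linalg.lstsq(D, S.T, rcond=None)[0].T      # (n_vox, n_atoms)

# k-means (k = 2) on the coefficient vectors
centers = codes[[0, n_vox - 1]].copy()
for _ in range(20):
    assign = np.argmin(((codes[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([codes[assign == k].mean(0) for k in range(2)])

agree = max(np.mean(assign == labels_true), np.mean(assign != labels_true))
print(f"cluster/label agreement: {agree:.2f}")
```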

  3. Analyzing Sparse Dictionaries for Online Learning With Kernels

    NASA Astrophysics Data System (ADS)

    Honeine, Paul

    2015-12-01

    Many signal processing and machine learning methods share essentially the same linear-in-the-parameter model, with as many parameters as available samples, as in kernel-based machines. Sparse approximation is essential in many disciplines, with new challenges emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature to quantify sparse dictionaries and to construct relevant ones; the most prolific are the distance, approximation, coherence, and Babel measures. In this paper, we analyze sparse dictionaries based on these measures. By conducting an eigenvalue analysis, we show that these sparsity measures share many properties, including the linear independence condition and inducing a well-posed optimization problem. Furthermore, we prove that there exists a quasi-isometry between the parameter (i.e., dual) space and the dictionary's induced feature space.
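    Two of the measures named above are easy to compute directly: the coherence is the largest absolute correlation between distinct unit-norm atoms, and the Babel measure mu1(k) is the worst-case cumulative coherence against any k atoms. A sketch on a random dictionary (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 20, 30
D = rng.normal(size=(d, m))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms

G = np.abs(D.T @ D)                   # absolute Gram matrix of the dictionary
np.fill_diagonal(G, 0.0)

# Coherence: largest pairwise correlation between distinct atoms
mu = G.max()

# Babel measure mu1(k): worst-case sum of the k largest correlations per atom
def babel(k):
    return np.sort(G, axis=1)[:, ::-1][:, :k].sum(axis=1).max()

print(f"coherence = {mu:.3f}, Babel mu1(3) = {babel(3):.3f}")
```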

  4. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of the ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method is able to achieve a substantial performance improvement over existing algorithms, including the conventional SBL method.

  5. Latent subspace sparse representation-based unsupervised domain adaptation

    NASA Astrophysics Data System (ADS)

    Shuai, Liu; Sun, Hao; Zhao, Fumin; Zhou, Shilin

    2015-12-01

    In this paper, we introduce and study a novel unsupervised domain adaptation (DA) algorithm, called latent subspace sparse representation based domain adaptation, which builds on the fact that source and target data lie in different but related low-dimensional subspaces. The key idea is that each point in a union of subspaces can be constructed as a combination of other points in the dataset. In this method, we propose to project the source and target data onto a common latent generalized subspace, which is a union of the subspaces of the source and target domains, and to learn the sparse representation in this latent generalized subspace. By employing minimum reconstruction error and maximum mean discrepancy (MMD) constraints, the structures of the source and target domains are preserved and the discrepancy between them is reduced, and both properties are reflected in the sparse representation. We then utilize the sparse representation to build a weighted graph that reflects the relationships of points from the different domains (source-source, source-target, and target-target) to predict the labels of the target domain. We also propose an efficient optimization method for the algorithm. Our method does not need to be combined with any classifier and therefore requires no separate training and testing procedures. Various experiments show that the proposed method performs better than competing state-of-the-art subspace-based domain adaptation methods.
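    The MMD constraint used above measures the distance between domains in a feature space; with a linear kernel it reduces to the distance between the domain means. A minimal sketch on synthetic shifted domains (distributions and dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(10)
src = rng.normal(0.0, 1.0, size=(100, 5))    # source-domain features
tgt = rng.normal(0.8, 1.0, size=(100, 5))    # shifted target-domain features

# Linear-kernel MMD: the distance between domain means in feature space.
# Minimizing this while learning the latent subspace pulls the domains together.
mmd = np.linalg.norm(src.mean(axis=0) - tgt.mean(axis=0))
print(f"MMD estimate: {mmd:.2f}")
```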

  6. Inferring sparse networks for noisy transient processes.

    PubMed

    Tran, Hoang M; Bukkapatnam, Satish T S

    2016-01-01

    Inferring causal structures of real world complex networks from measured time series signals remains an open issue. The current approaches are inadequate to discern between direct versus indirect influences (i.e., the presence or absence of a directed arc connecting two nodes) in the presence of noise, sparse interactions, as well as nonlinear and transient dynamics of real world processes. We report a sparse regression (referred to as the l1-min) approach with theoretical bounds on the constraints on the allowable perturbation to recover the network structure that guarantees sparsity and robustness to noise. We also introduce averaging and perturbation procedures to further enhance prediction scores (i.e., reduce inference errors), and the numerical stability of the l1-min approach. Extensive investigations have been conducted with multiple benchmark simulated genetic regulatory networks and Michaelis-Menten dynamics, as well as real world data sets from the DREAM5 challenge. These investigations suggest that our approach can improve significantly, often by five orders of magnitude, over previously reported methods for inferring the structure of dynamic networks, such as Bayesian network, network deconvolution, silencing, and modular response analysis methods, by optimizing for sparsity, transients, noise, and high dimensionality issues. PMID:26916813
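    The l1-min principle, recovering a sparse row of the interaction matrix from fewer perturbation experiments than nodes, can be sketched as a basis-pursuit linear program (the split x = u - v with u, v >= 0 is the standard LP reformulation). The network size, design matrix, and edge weights below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
n_nodes, n_pert = 30, 20          # more unknowns than perturbation experiments

# Sparse "network" row: a node is influenced by only a few others
a_true = np.zeros(n_nodes)
a_true[[3, 11, 22]] = [0.9, -0.7, 0.5]
Phi = rng.normal(size=(n_pert, n_nodes))    # perturbation design (assumed)
b = Phi @ a_true                             # noiseless node responses

# l1-min as a linear program: x = u - v with u, v >= 0, minimize sum(u + v)
c = np.ones(2 * n_nodes)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
a_hat = res.x[:n_nodes] - res.x[n_nodes:]

print("recovered edges:", np.where(np.abs(a_hat) > 1e-4)[0])
```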

  9. Genetic algorithms for minimal source reconstructions

    SciTech Connect

    Lewis, P.S.; Mosher, J.C.

    1993-12-01

    Under-determined linear inverse problems arise in applications in which signals must be estimated from insufficient data. In these problems the number of potentially active sources is greater than the number of observations. In many situations, it is desirable to find a minimal source solution. This can be accomplished by minimizing a cost function that accounts both for the compatibility of the solution with the observations and for its "sparseness". Minimizing functions of this form can be a difficult optimization problem. Genetic algorithms are a relatively new and robust approach to the solution of difficult optimization problems, providing a global framework that is not dependent on local continuity or on explicit starting values. In this paper, the authors describe the use of genetic algorithms to find minimal source solutions, using as an example a simulation inspired by the reconstruction of neural currents in the human brain from magnetoencephalographic (MEG) measurements.
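    A genetic algorithm for this kind of cost (data misfit plus a sparseness penalty) can be sketched by evolving binary support vectors: selection keeps the best supports, crossover and mutation explore new ones. The gain matrix, population sizes, and penalty weight below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 8, 20                     # fewer observations than potential sources
G = rng.normal(size=(m, n))      # lead-field / gain matrix (illustrative)
x_true = np.zeros(n)
x_true[[4, 13]] = [1.0, -1.5]
y = G @ x_true

def cost(support):
    """Data misfit of the best least-squares fit on `support`, plus sparseness."""
    idx = np.where(support)[0]
    if idx.size == 0:
        return float(np.linalg.norm(y) ** 2)
    coef, *_ = np.linalg.lstsq(G[:, idx], y, rcond=None)
    return float(np.linalg.norm(y - G[:, idx] @ coef) ** 2) + 0.1 * idx.size

# Genetic algorithm over binary support vectors (with elitist selection)
pop = rng.random((40, n)) < 0.15
for _ in range(80):
    fitness = np.array([cost(s) for s in pop])
    parents = pop[np.argsort(fitness)[:20]]        # selection
    cut = rng.integers(1, n, size=20)
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                     for i, c in enumerate(cut)])  # one-point crossover
    kids ^= rng.random(kids.shape) < 0.02          # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmin([cost(s) for s in pop])]
print("active sources:", np.where(best)[0])
```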

  10. Amesos2 Templated Direct Sparse Solver Package

    Energy Science and Technology Software Center (ESTSC)

    2011-05-24

    Amesos2 is a templated direct sparse solver package. Amesos2 provides interfaces to direct sparse solvers, rather than providing native solver capabilities. Amesos2 is a derivative work of the Trilinos package Amesos.

  11. Sparse spike coding : applications of neuroscience to the processing of natural images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.

    2008-04-01

    Although modern computers are sometimes superior to cognition in specialized tasks such as playing chess or browsing a large database, they cannot beat the efficiency of biological vision for such simple tasks as recognizing a relative or following an object in a complex background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We will illustrate this by showing that in a signal matching framework, a L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, measured as the reconstruction quality as a function of the coding length. This will correspond to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We will particularly focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
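    The Matching Pursuit loop that COMP modifies can be sketched as follows: each iteration fires the best-matching atom (the ArgMax step), emits a "spike" (atom index, coefficient), and cancels that atom's contribution from the residual, a form of lateral inhibition. The dictionary and signal below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 128, 256
D = rng.normal(size=(n, m))
D /= np.linalg.norm(D, axis=0)          # over-complete basis (m > n)
y = 1.5 * D[:, 10] - 1.0 * D[:, 200]    # 2-sparse "stimulus"

# Matching Pursuit: ArgMax selection followed by cancellation of redundancy
residual, spikes = y.copy(), []
for _ in range(10):
    corr = D.T @ residual
    k = int(np.argmax(np.abs(corr)))        # winner-take-all / ArgMax step
    spikes.append((k, corr[k]))             # emitted "spike"
    residual = residual - corr[k] * D[:, k] # lateral cancellation
    if np.linalg.norm(residual) < 1e-6 * np.linalg.norm(y):
        break

print("first two spikes:", [k for k, _ in spikes[:2]])
```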

  12. The bias and signal attenuation present in conventional pollen-based climate reconstructions as assessed by early climate data from Minnesota, USA.

    PubMed

    St Jacques, Jeannine-Marie; Cumming, Brian F; Sauchyn, David J; Smol, John P

    2015-01-01

    The inference of past temperatures from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have altered vegetation independent of changes to climate, and consequently modern pollen deposition is a product of landscape disturbance and climate, which is different from the dominance of climate-derived processes in the past. This problem could cause serious signal distortion in pollen-based reconstructions. In the north-central United States, direct human impacts have strongly altered the modern vegetation and hence the pollen rain since Euro-American settlement in the mid-19th century. Using instrumental temperature data from the early 1800s from Fort Snelling (Minnesota), we assessed the signal distortion and bias introduced by using the conventional method of inferring temperature from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The early post-settlement calibration set provides more accurate reconstructions of the 19th century instrumental record, with less bias, than the modern set does. When both modern and pre-industrial calibration sets are used to reconstruct past temperatures since AD 1116 from pollen counts from a varve-dated record from Lake Mina, Minnesota, the conventional inference method produces significant low-frequency (centennial-scale) signal attenuation and positive bias of 0.8-1.7°C, resulting in an overestimation of Little Ice Age temperature and likely an underestimation of the extent and rate of anthropogenic warming in this region. However, high-frequency (annual-scale) signal attenuation exists with both methods. Hence, we conclude that any past pollen spectra from before Euro-American settlement in this region should be interpreted using a pre-Euro-American settlement pollen set, paired to the earliest instrumental climate records. It remains to be explored how widespread this problem is.

  13. The Bias and Signal Attenuation Present in Conventional Pollen-Based Climate Reconstructions as Assessed by Early Climate Data from Minnesota, USA

    PubMed Central

    St. Jacques, Jeannine-Marie; Cumming, Brian F.; Sauchyn, David J.; Smol, John P.

    2015-01-01

    The inference of past temperatures from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have altered vegetation independent of changes to climate, and consequently modern pollen deposition is a product of landscape disturbance and climate, which is different from the dominance of climate-derived processes in the past. This problem could cause serious signal distortion in pollen-based reconstructions. In the north-central United States, direct human impacts have strongly altered the modern vegetation and hence the pollen rain since Euro-American settlement in the mid-19th century. Using instrumental temperature data from the early 1800s from Fort Snelling (Minnesota), we assessed the signal distortion and bias introduced by using the conventional method of inferring temperature from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The early post-settlement calibration set provides more accurate reconstructions of the 19th century instrumental record, with less bias, than the modern set does. When both modern and pre-industrial calibration sets are used to reconstruct past temperatures since AD 1116 from pollen counts from a varve-dated record from Lake Mina, Minnesota, the conventional inference method produces significant low-frequency (centennial-scale) signal attenuation and positive bias of 0.8-1.7°C, resulting in an overestimation of Little Ice Age temperature and likely an underestimation of the extent and rate of anthropogenic warming in this region. However, high-frequency (annual-scale) signal attenuation exists with both methods. Hence, we conclude that any past pollen spectra from before Euro-American settlement in this region should be interpreted using a pre-Euro-American settlement pollen set, paired to the earliest instrumental climate records. It remains to be explored how widespread this problem is.

  14. EPR Oximetry in Three Spatial Dimensions using Sparse Spin Distribution

    PubMed Central

    Som, Subhojit; Potter, Lee C.; Ahmad, Rizwan; Vikram, Deepti S.; Kuppusamy, Periannan

    2008-01-01

    A method is presented to use continuous wave electron paramagnetic resonance imaging for rapid measurement of oxygen partial pressure in three spatial dimensions. A particulate paramagnetic probe is employed to create a sparse distribution of spins in a volume of interest. Information encoding location and spectral linewidth is collected by varying the spatial orientation and strength of an applied magnetic gradient field. Data processing exploits the spatial sparseness of spins to detect voxels with nonzero spin and to estimate the spectral linewidth for those voxels. The parsimonious representation of spin locations and linewidths permits an order of magnitude reduction in data acquisition time, compared to four-dimensional tomographic reconstruction using traditional spectral-spatial imaging. The proposed oximetry method is experimentally demonstrated for a lithium octa-n-butoxy naphthalocyanine (LiNc-BuO) probe using an L-band EPR spectrometer. PMID:18538600

  15. Supervised nonparametric sparse discriminant analysis for hyperspectral imagery classification

    NASA Astrophysics Data System (ADS)

    Wu, Longfei; Sun, Hao; Ji, Kefeng

    2016-03-01

    Owing to the high spectral sampling, the spectral information in hyperspectral imagery (HSI) is often highly correlated and contains redundancy. Motivated by the recent success of sparsity-preserving dimensionality reduction (DR) techniques in both the computer vision and remote sensing image analysis communities, a novel supervised nonparametric sparse discriminant analysis (NSDA) algorithm is presented for HSI classification. The objective function of NSDA aims at preserving the within-class sparse reconstructive relationship for within-class compactness characterization while simultaneously maximizing the nonparametric between-class scatter to enhance the discriminative ability of the features in the projected space. Essentially, it seeks the optimal projection matrix to identify the underlying discriminative manifold structure of a multiclass dataset. Experimental results on one visualization dataset and three recorded HSI datasets demonstrate that NSDA outperforms several state-of-the-art feature extraction methods for HSI classification.

  16. Sparse Biclustering of Transposable Data

    PubMed Central

    Tan, Kean Ming

    2013-01-01

    We consider the task of simultaneously clustering the rows and columns of a large transposable data matrix. We assume that the matrix elements are normally distributed with a bicluster-specific mean term and a common variance, and perform biclustering by maximizing the corresponding log likelihood. We apply an ℓ1 penalty to the means of the biclusters in order to obtain sparse and interpretable biclusters. Our proposal amounts to a sparse, symmetrized version of k-means clustering. We show that k-means clustering of the rows and of the columns of a data matrix can be seen as special cases of our proposal, and that a relaxation of our proposal yields the singular value decomposition. In addition, we propose a framework for bi-clustering based on the matrix-variate normal distribution. The performances of our proposals are demonstrated in a simulation study and on a gene expression data set. This article has supplementary material online. PMID:25364221

  17. Sparse Biclustering of Transposable Data.

    PubMed

    Tan, Kean Ming; Witten, Daniela M

    2014-01-01

    We consider the task of simultaneously clustering the rows and columns of a large transposable data matrix. We assume that the matrix elements are normally distributed with a bicluster-specific mean term and a common variance, and perform biclustering by maximizing the corresponding log likelihood. We apply an ℓ1 penalty to the means of the biclusters in order to obtain sparse and interpretable biclusters. Our proposal amounts to a sparse, symmetrized version of k-means clustering. We show that k-means clustering of the rows and of the columns of a data matrix can be seen as special cases of our proposal, and that a relaxation of our proposal yields the singular value decomposition. In addition, we propose a framework for bi-clustering based on the matrix-variate normal distribution. The performances of our proposals are demonstrated in a simulation study and on a gene expression data set. This article has supplementary material online. PMID:25364221
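As a small illustration of the penalized estimate described above: with the row and column clusterings held fixed, the ℓ1-penalized Gaussian likelihood is maximized by soft-thresholding each bicluster's sample mean. The sketch below assumes a particular penalty scaling and omits the alternating update of the labels, so it is an illustrative fragment rather than the paper's full algorithm.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_bicluster_means(X, row_labels, col_labels, lam):
    # With clusterings fixed, the l1-penalized Gaussian likelihood is
    # maximized by soft-thresholding each bicluster's sample mean
    # (illustrative penalty scaling; the paper also alternates over labels).
    R, C = row_labels.max() + 1, col_labels.max() + 1
    M = np.zeros((R, C))
    for r in range(R):
        for c in range(C):
            block = X[np.ix_(row_labels == r, col_labels == c)]
            M[r, c] = soft_threshold(block.mean(), lam / block.size)
    return M
```

Biclusters whose mean falls below the threshold are set exactly to zero, which is what makes the recovered biclusters sparse and interpretable.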

  18. SAR imaging via iterative adaptive approach and sparse Bayesian learning

    NASA Astrophysics Data System (ADS)

    Xue, Ming; Santiago, Enrique; Sedehi, Matteo; Tan, Xing; Li, Jian

    2009-05-01

    We consider sidelobe reduction and resolution enhancement in synthetic aperture radar (SAR) imaging via an iterative adaptive approach (IAA) and a sparse Bayesian learning (SBL) method. The nonparametric weighted least squares based IAA algorithm is a robust and user parameter-free adaptive approach originally proposed for array processing. We show that it can be used to form enhanced SAR images as well. SBL has been used as a sparse signal recovery algorithm for compressed sensing. It has been shown in the literature that SBL is easy to use and can recover sparse signals more accurately than the ℓ1-based optimization approaches, which require delicate choice of the user parameter. We consider using a modified expectation maximization (EM) based SBL algorithm, referred to as SBL-1, which is based on a three-stage hierarchical Bayesian model. SBL-1 is not only more accurate than benchmark SBL algorithms, but also converges faster. SBL-1 is used to further enhance the resolution of the SAR images formed by IAA. Both IAA and SBL-1 are shown to be effective, requiring only a limited number of iterations, and have no need for polar-to-Cartesian interpolation of the SAR collected data. This paper characterizes the achievable performance of these two approaches by processing the complex backscatter data from both a sparse case study and a backhoe vehicle in free space with different aperture sizes.
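The IAA iteration the abstract refers to can be sketched at small scale for 1-D spectral estimation: build a covariance from the current power estimates, then re-estimate each amplitude by weighted least squares. This is a generic textbook-style rendering, not the paper's SAR implementation; the grid, initialization, and regularization constant are assumptions.

```python
import numpy as np

def iaa_spectrum(y, A, n_iter=10):
    # Iterative adaptive approach: the covariance R = A diag(p) A^H is built
    # from the current power estimates p, then each amplitude is re-estimated
    # by weighted least squares. No user-tuned parameter is required.
    n, K = A.shape
    p = np.abs(A.conj().T @ y) ** 2 / n ** 2          # matched-filter init
    for _ in range(n_iter):
        R = (A * p) @ A.conj().T + 1e-9 * np.eye(n)   # small diagonal load
        Ri = np.linalg.inv(R)
        num = A.conj().T @ (Ri @ y)                   # a_k^H R^-1 y
        den = np.einsum('ij,jk,ki->i', A.conj().T, Ri, A)  # a_k^H R^-1 a_k
        p = np.abs(num / den) ** 2
    return p
```

For a noiseless on-grid sinusoid the estimate peaks sharply at the true frequency bin, which is the sidelobe-suppression behavior the abstract exploits for SAR image formation.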

  19. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
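For readers unfamiliar with the Higher Criticism test mentioned above, the standard Donoho-Jin form of the statistic compares each sorted p-value with its uniform-null expectation. The snippet below is that textbook form, not the paper's extended variant; `alpha0` and the variance floor are assumptions.

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    # Compare each empirical quantile i/n with the sorted p-value p_(i),
    # standardized by the binomial standard deviation, and take the max
    # over the smallest alpha0 fraction of p-values.
    p = np.sort(np.asarray(pvals, float))
    n = p.size
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return hc[: max(1, int(alpha0 * n))].max()
```

A handful of very small p-values among otherwise uniform ones is enough to drive the statistic far above its null level, which is what makes the test suited to sparse alternatives.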

  20. Shape prior modeling using sparse representation and online dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) by a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming and sometimes infeasible to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts from constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes come, instead of re-constructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient. PMID:23286160
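The block-coordinate descent dictionary update mentioned above can be sketched in the style of Mairal et al.'s online dictionary learning, where two accumulated statistics summarize all past sparse codes so new shapes can be folded in without relearning from scratch. This is a minimal sketch under that assumed formulation, not the paper's exact updater; the function name and statistics layout are illustrative.

```python
import numpy as np

def update_dictionary(D, A, B, n_iter=10):
    # A = sum_t alpha_t alpha_t^T and B = sum_t y_t alpha_t^T accumulate
    # sufficient statistics of past sparse codes; each atom is then refit
    # in turn (block-coordinate descent) and renormalized.
    D = D.copy()
    for _ in range(n_iter):
        for j in range(D.shape[1]):
            if A[j, j] < 1e-12:
                continue  # atom unused so far; leave it unchanged
            u = (B[:, j] - D @ A[:, j]) / A[j, j] + D[:, j]
            D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D
```

Because A and B grow additively as new training shapes arrive, the update cost is independent of how many shapes have been seen, which is the scalability property the abstract emphasizes.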

  1. Finding communities in sparse networks

    PubMed Central

    Singh, Abhinav; Humphries, Mark D.

    2015-01-01

    Spectral algorithms based on matrix representations of networks are often used to detect communities, but classic spectral methods based on the adjacency matrix and its variants fail in sparse networks. New spectral methods based on non-backtracking random walks have recently been introduced that successfully detect communities in many sparse networks. However, the spectrum of non-backtracking random walks ignores hanging trees in networks that can contain information about their community structure. We introduce the reluctant backtracking operators that explicitly account for hanging trees as they admit a small probability of returning to the immediately previous node, unlike the non-backtracking operators that forbid an immediate return. We show that the reluctant backtracking operators can detect communities in certain sparse networks where the non-backtracking operators cannot, while performing comparably on benchmark stochastic block model networks and real world networks. We also show that the spectrum of the reluctant backtracking operator approximately optimises the standard modularity function. Interestingly, for this family of non- and reluctant-backtracking operators the main determinant of performance on real-world networks is whether or not they are normalised to conserve probability at each node. PMID:25742951
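The non-backtracking operator at the center of this abstract acts on directed edge pairs ("darts"): a walk may continue from dart u→v to dart v→w only if w differs from u. A dense construction for tiny graphs is sketched below (the reluctant variant would additionally admit the backtracking transition with a small probability); the function name is illustrative.

```python
import numpy as np

def non_backtracking_matrix(edges):
    # Darts are the two directed copies of each undirected edge.
    # B[(u->v), (v->w)] = 1 whenever the walk can continue without
    # immediately returning to the node it came from (w != u).
    darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    m = len(darts)
    B = np.zeros((m, m))
    for i, (u, v) in enumerate(darts):
        for j, (v2, w) in enumerate(darts):
            if v2 == v and w != u:
                B[i, j] = 1.0
    return B, darts
```

Community detection then inspects the leading eigenvectors of this 2m-by-2m matrix; on a cycle every dart has exactly one legal continuation, so each row of B sums to one.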

  2. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine-grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  3. Sparsity-constrained PET image reconstruction with learned dictionaries.

    PubMed

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging. PMID:27494441

  4. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.

  5. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
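The storage trade-off described above (space proportional to the number of nonzeros, not to the full grid) carries over directly to SciPy's sparse matrices, shown here as a rough Python analogue of the MATLAB design; the sizes and density are arbitrary.

```python
from scipy import sparse

# CSR keeps only the nonzero values plus two index arrays, so storage
# grows with nnz rather than with the full m*n grid -- the same design
# trade-off the MATLAB extension describes.
A = sparse.random(1000, 1000, density=0.001, format="csr", random_state=0)
dense_bytes = A.shape[0] * A.shape[1] * 8      # 8 bytes per float64 entry
sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
```

With one nonzero per thousand entries, the sparse representation here is hundreds of times smaller than the dense one, and arithmetic likewise touches only the stored nonzeros.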

  6. The least error method for sparse solution reconstruction

    NASA Astrophysics Data System (ADS)

    Bredies, K.; Kaltenbacher, B.; Resmerita, E.

    2016-09-01

    This work deals with a regularization method enforcing solution sparsity of linear ill-posed problems by appropriate discretization in the image space. Namely, we formulate the so called least error method in an ℓ 1 setting and perform the convergence analysis by choosing the discretization level according to an a priori rule, as well as two a posteriori rules, via the discrepancy principle and the monotone error rule, respectively. Depending on the setting, linear or sublinear convergence rates in the ℓ 1-norm are obtained under a source condition yielding sparsity of the solution. A part of the study is devoted to analyzing the structure of the approximate solutions and of the involved source elements.

  7. Sparse source configurations for asteroid tomography

    NASA Astrophysics Data System (ADS)

    Pursiainen, S.; Kaasalainen, M.

    2014-04-01

    The objective of our recent research has been to develop non-invasive imaging techniques for future planetary research and mining activities involving a challenging in situ environment and tight payload limits [1]. This presentation will deal in particular with an approach in which the internal relative permittivity ε_r or the refractive index n = √ε_r of an asteroid is to be recovered based on radio signal transmitted by a sparse set [2] of fixed or movable landers. To address important aspects of mission planning, we have analyzed different signal source configurations to find the minimal number of source positions needed for robust localization of anomalies, such as internal voids. Characteristic to this inverse problem are the large relative changes in signal speed caused by the high permittivity of typical asteroid minerals (e.g. basalt), leading to strong refractions and reflections of the signal. Finding an appropriate problem-specific signaling arrangement is an important pre-mission goal for successful in situ measurements. This presentation will include inversion results obtained with laboratory-recorded travel time data y of the form y_i = ∫_{C_i} n ds + g_i, in which n^δ denotes a perturbation of the refractive index n = n^δ + n_bg; g_i estimates the total noise due to different error sources; (y_bg)_i = ∫_{C_i} n_bg ds is an entry of the noiseless background data y_bg; and C_i is a signal path. Simulated time-evolution data will also be covered with respect to a potential u satisfying the wave equation ε_r ∂²u/∂t² + σ ∂u/∂t − Δu = f, where σ is a (latent) conductivity distribution and f is a source term. Special interest will be paid to inversion robustness regarding changes of the prior model and source positioning. Among other things, our analysis suggests that strongly refractive anomalies can be detected with three or four sources independently of their positioning.
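The travel-time data entries in this abstract are line integrals of the refractive index along signal paths. A toy forward model with straight rays and midpoint sampling is sketched below; real paths refract strongly in high-permittivity rock, so the straight-ray assumption and the function name are illustrative only.

```python
import numpy as np

def travel_time(n_grid, p0, p1, n_samples=200):
    # Midpoint-rule approximation of y = integral of n ds along the
    # straight segment from p0 to p1 through a 2-D refractive-index grid.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ts = (np.arange(n_samples) + 0.5) / n_samples
    pts = p0 + ts[:, None] * (p1 - p0)                 # sample points on ray
    idx = np.clip(pts.astype(int), 0, np.array(n_grid.shape) - 1)
    vals = n_grid[idx[:, 0], idx[:, 1]]                # nearest grid cells
    ds = np.linalg.norm(p1 - p0) / n_samples
    return vals.sum() * ds
```

Stacking one such integral per lander-to-lander path gives the linear system whose sparse perturbation n^δ the inversion recovers.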

  8. The sparseness of neuronal responses in ferret primary visual cortex.

    PubMed

    Tolhurst, David J; Smyth, Darragh; Thompson, Ian D

    2009-02-25

    Various arguments suggest that neuronal coding of natural sensory stimuli should be sparse (i.e., individual neurons should respond rarely but should respond reliably). We examined sparseness of visual cortical neurons in anesthetized ferret to flashed natural scenes. Response behavior differed widely between neurons. The median firing rate of 4.1 impulses per second was slightly higher than predicted from consideration of metabolic load. Thirteen percent of neurons (12 of 89) responded to <5% of the images, but one-half responded to >25% of images. Multivariate analysis of the range of sparseness values showed that 67% of the variance was accounted for by differing response patterns to moving gratings. Repeat presentation of images showed that response variance for natural images exaggerated sparseness measures; variance was scaled with mean response, but with a lower Fano factor than for the responses to moving gratings. This response variability and the "soft" sparse responses (Rehn and Sommer, 2007) raise the question of what constitutes a reliable neuronal response and imply parallel signaling by multiple neurons. We investigated whether the temporal structure of responses might be reliable enough to give additional information about natural scenes. Poststimulus time histogram shape was similar for "strong" and "weak" stimuli, with no systematic change in first-spike latency with stimulus strength. The variance of first-spike latency for repeat presentations of the same image was greater than the latency variance between images. In general, responses to flashed natural scenes do not seem compatible with a sparse encoding in which neurons fire rarely but reliably. PMID:19244512
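Two quantities from the analysis above are easy to make concrete: a lifetime sparseness index in the Treves-Rolls form (one standard choice; the paper may use a different normalization) and the Fano factor used to describe response variance relative to the mean.

```python
import numpy as np

def treves_rolls_sparseness(rates):
    # 0 for perfectly uniform responses across stimuli, 1 when a single
    # stimulus accounts for all of the neuron's firing.
    r = np.asarray(rates, float)
    a = r.mean() ** 2 / np.mean(r ** 2)
    return (1 - a) / (1 - 1 / r.size)

def fano_factor(counts):
    # Spike-count variance over mean across repeats of one stimulus;
    # values below 1 indicate sub-Poisson variability.
    c = np.asarray(counts, float)
    return c.var(ddof=1) / c.mean()
```

A neuron responding to very few images scores near 1 on the sparseness index, while the trial-to-trial Fano factor quantifies the response variability that the abstract notes can exaggerate such sparseness measures.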

  9. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.

    2006-02-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
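The signal-to-interference ratio used to score the algorithms above can be computed, in a simplified single-component form, by projecting the estimated trace onto the true source and treating the remainder as interference; ICA recovers sources only up to scale, so the projection makes the metric scaling-invariant. The exact averaging scheme in the study differs, and the function name is illustrative.

```python
import numpy as np

def sir_db(estimated, source):
    # Optimal scaling via projection; the residual after removing the
    # scaled source counts as interference.
    est = np.asarray(estimated, float)
    src = np.asarray(source, float)
    alpha = est @ src / (src @ src)
    signal = alpha * src
    interference = est - signal
    return 10 * np.log10((signal @ signal) / (interference @ interference))
```

An estimate that is a pure rescaling of the source has infinite SIR; small admixtures of other sources lower it, and the study's 20 dB threshold corresponds to interference power one hundred times below signal power.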

  10. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition of the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of the strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that the NMR spectrum consists of Lorentzian peaks and matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider certain modification of the algorithm by introducing the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
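For reference, the standard OMP that LPMP modifies looks as follows: greedily pick the atom most correlated with the residual, then refit all selected coefficients by least squares. LPMP replaces the discrete atoms of this sketch with parametric Lorentzian peaks; the code below is the textbook baseline, not the paper's algorithm.

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedy atom selection followed by a
    # least-squares refit on the accumulated support.
    support, x = [], np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

The strict-sparsity assumption the abstract criticizes is visible here: OMP succeeds when y truly is a combination of k atoms, which exponentially decaying NMR signals are not in any fixed discrete basis.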

  11. Phase-image-based sparse-gray-level data pages for holographic data storage.

    PubMed

    Das, Bhargab; Joseph, Joby; Singh, Kehar

    2009-10-01

    We propose a method for implementation of gray-scale sparse block modulation codes with a single spatial light modulator in phase mode for holographic data storage. Sparse data pages promise higher recording densities with reduced consumption of the dynamic range of the recording material and reduced interpixel cross talk. A balanced sparse-gray-level phase data page gives a homogenized Fourier spectrum that improves the interference efficiency between the signal and the reference beams. Construction rules for sparse three-gray-level phase data pages, readout methods, and interpixel cross talk are discussed extensively. We also explore theoretically the potential storage density improvement while using low-pass filtering and sparse-gray-level phase data pages for holographic storage, and demonstrate the trade-off between code rate, block length, and estimated capacity gain. PMID:19798361

  12. Sparse-view ultrasound diffraction tomography using compressed sensing with nonuniform FFT.

    PubMed

    Hua, Shaoyan; Ding, Mingyue; Yuchi, Ming

    2014-01-01

    Accurate reconstruction of the object from sparse-view sampling data is an appealing issue for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on compressed sensing framework for sparse-view UDT. Due to the piecewise uniform characteristics of anatomy structures, the total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is iteratively resolved by conjugate gradient with nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method that the main characteristics of the object can be properly presented with only 16 views. Compared to interpolation and multiband method, the proposed method can provide higher resolution and lower artifacts with the same view number. The robustness to noise and the computation complexity are also discussed. PMID:24868241
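The total-variation term added to the cost function above rewards piecewise-uniform images. A minimal anisotropic version is sketched below; the paper's cost and its conjugate-gradient/NUFFT machinery are not reproduced here.

```python
import numpy as np

def total_variation(img):
    # Anisotropic TV: sum of absolute neighbor differences, small for the
    # piecewise-uniform anatomy the reconstruction cost favors.
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

clean = np.zeros((8, 8))
clean[2:6, 2:6] = 1.0                       # piecewise-constant phantom
noisy = clean + np.random.default_rng(0).normal(scale=0.1, size=clean.shape)
```

Noise raises TV everywhere while a clean phantom pays only for its edges, which is why minimizing this term suppresses the streak artifacts of sparse-view sampling.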

  13. Multi-frame blind deconvolution using sparse priors

    NASA Astrophysics Data System (ADS)

    Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi

    2012-05-01

    In this paper, we propose a method for multi-frame blind deconvolution. Two sparse priors, i.e., the natural image gradient prior and an l1-norm based prior are used to regularize the latent image and point spread functions (PSFs) respectively. An alternating minimization approach is adopted to solve the resulted optimization problem. We use both gray scale blurred frames from a data set and some colored ones which are captured by a digital camera to verify the robustness of our approach. Experimental results show that the proposed method can accurately reconstruct PSFs with complex structures and the restored images are of high quality.

  14. Photoplethysmograph signal reconstruction based on a novel hybrid motion artifact detection-reduction approach. Part I: Motion and noise artifact detection.

    PubMed

    Chong, Jo Woon; Dao, Duy K; Salehizadeh, S M A; McManus, David D; Darling, Chad E; Chon, Ki H; Mendelson, Yitzhak

    2014-11-01

    Motion and noise artifacts (MNA) are a serious obstacle in utilizing photoplethysmogram (PPG) signals for real-time monitoring of vital signs. We present a MNA detection method which can provide a clean vs. corrupted decision on each successive PPG segment. For motion artifact detection, we compute four time-domain parameters: (1) standard deviation of peak-to-peak intervals, (2) standard deviation of peak-to-peak amplitudes, (3) standard deviation of systolic and diastolic interval ratios, and (4) mean standard deviation of pulse shape. We have adopted a support vector machine (SVM) which takes these parameters from clean and corrupted PPG signals and builds a decision boundary to classify them. We apply several distinct features of the PPG data to enhance classification performance. The algorithm we developed was verified on PPG data segments recorded by simulation, laboratory-controlled and walking/stair-climbing experiments, respectively, and we compared several well-established MNA detection methods to our proposed algorithm. All compared detection algorithms were evaluated in terms of motion artifact detection accuracy, heart rate (HR) error, and oxygen saturation (SpO2) error. For laboratory-controlled finger and forehead recorded PPG data and daily-activity movement data, our proposed algorithm gives 94.4, 93.4, and 93.7% accuracies, respectively. Significant reductions in HR and SpO2 errors (2.3 bpm and 2.7%) were noted when the artifacts that were identified by SVM-MNA were removed from the original signal than without (17.3 bpm and 5.4%). The accuracy and error values of our proposed method were significantly higher and lower, respectively, than all other detection methods. Another advantage of our method is its ability to provide highly accurate onset and offset detection times of MNAs. This capability is important for an automated approach to signal reconstruction of only those data points that need to be reconstructed, which is the subject of the companion paper (Part II).
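The first two of the four time-domain parameters listed above can be computed directly from detected pulse peaks; the sketch below does just that and leaves the remaining two features and the SVM stage aside. The function name and input layout are assumptions, not the paper's interface.

```python
import numpy as np

def mna_features(peak_times, peak_amps):
    # Feature (1): variability of peak-to-peak intervals.
    # Feature (2): variability of peak-to-peak amplitudes.
    # A clean, regular pulse train drives both toward zero; motion
    # artifacts inflate them.
    intervals = np.diff(np.asarray(peak_times, float))
    amps = np.asarray(peak_amps, float)
    return intervals.std(), amps.std()
```

These per-segment numbers are exactly the kind of inputs the SVM separates: clean segments cluster near the origin of the feature space while corrupted ones scatter away from it.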

  15. Sparsely Sampled Phase-Insensitive Ultrasonic Transducer Arrays

    NASA Technical Reports Server (NTRS)

    Johnston, Patrick H.

    1992-01-01

    Three methods of interpreting the outputs of a sparsely sampled two-dimensional array of receiving ultrasonic transducers used in transmission experiments were investigated. The methods are: description of the sampled beam in terms of the first few spatial moments of the sampled energy distribution; use of a signal-dependent cutoff to limit the extent of the effective receiver aperture; and use of spatial interpolation to increase the apparent sampling density during computation. The methods reduce errors in computations of the shapes of ultrasonic beams.
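The first method above, characterizing a sparsely sampled beam by low-order spatial moments, can be sketched as follows. The element positions and energies are hypothetical example values, not data from the paper:

```python
# Hedged sketch: first moment (centroid) and second central moment
# (RMS width) of a sparsely sampled 2-D energy distribution.
def beam_moments(positions, energies):
    """Energy-weighted centroid and RMS radius of the sampled beam."""
    total = sum(energies)
    cx = sum(x * e for (x, y), e in zip(positions, energies)) / total
    cy = sum(y * e for (x, y), e in zip(positions, energies)) / total
    var = sum(((x - cx) ** 2 + (y - cy) ** 2) * e
              for (x, y), e in zip(positions, energies)) / total
    return (cx, cy), var ** 0.5

# Symmetric 5-element cross of receivers with a centered beam:
pos = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
eng = [4.0, 1.0, 1.0, 1.0, 1.0]
center, width = beam_moments(pos, eng)
```

For this symmetric example the centroid lands at the array center, and the RMS width summarizes how the received energy spreads across the sparse aperture.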

  16. Dim moving target tracking algorithm based on particle discriminative sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Zhengzhou; Li, Jianing; Ge, Fengzeng; Shao, Wanxing; Liu, Bing; Jin, Gang

    2016-03-01

    The small dim moving target is usually submerged in strong noise, and its motion observability is degraded by numerous false alarms at low signal-to-noise ratio (SNR). A target tracking algorithm based on particle filter and discriminative sparse representation is proposed in this paper to cope with the uncertainty of dim moving target tracking. The weight of every particle is the crucial factor in ensuring the accuracy of dim target tracking for the particle filter (PF), which can achieve excellent performance even in non-linear and non-Gaussian motion. In the discriminative over-complete dictionary constructed from the image sequence, the target dictionary describes the target signal and the background dictionary embeds the background clutter. The difference between target particles and background particles is enhanced to a great extent, and the weight of every particle is then measured by means of the residual after reconstruction using the prescribed number of target atoms and their corresponding coefficients. The movement state of the dim moving target is then estimated and finally tracked by these weighted particles. Meanwhile, the subspace of the over-complete dictionary is updated online by a stochastic estimation algorithm. Some experiments are conducted, and the experimental results show the proposed algorithm could improve the performance of moving target tracking by enhancing the consistency between the posterior probability distribution and the moving target state.
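The residual-based particle weighting described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the "target dictionary" here is a single orthonormal atom, and the patches are made-up values.

```python
# Illustrative sketch: weight each particle by the residual left after
# reconstructing its image patch with a few target-dictionary atoms.
import math

def residual(patch, atoms, coeffs):
    """||patch - sum_k coeffs[k] * atoms[k]||^2."""
    recon = [sum(c * a[i] for c, a in zip(coeffs, atoms))
             for i in range(len(patch))]
    return sum((p - r) ** 2 for p, r in zip(patch, recon))

def particle_weights(patches, atoms, sigma=0.1):
    weights = []
    for patch in patches:
        # Coefficients via inner products (atoms assumed orthonormal).
        coeffs = [sum(p * ai for p, ai in zip(patch, a)) for a in atoms]
        weights.append(math.exp(-residual(patch, atoms, coeffs) / sigma ** 2))
    s = sum(weights)
    return [w / s for w in weights]   # normalized particle weights

# One target-like patch and one clutter patch, single target atom:
atoms = [[1.0, 0.0, 0.0, 0.0]]
w = particle_weights([[0.9, 0.0, 0.0, 0.0],    # lies in target subspace
                      [0.0, 0.5, 0.5, 0.5]],   # mostly clutter
                     atoms)
```

A patch well explained by the target atoms yields a small residual and hence a large weight, concentrating the posterior on target-like particles.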

  17. Sparse and Adaptive Diffusion Dictionary (SADD) for recovering intra-voxel white matter structure.

    PubMed

    Aranda, Ramon; Ramirez-Manzanares, Alonso; Rivera, Mariano

    2015-12-01

    In the analysis of Diffusion-Weighted Magnetic Resonance Images, multi-compartment models overcome the limitations of the well-known Diffusion Tensor model for fitting in vivo brain axonal orientations at voxels with fiber crossings, branching, kissing or bifurcations. Some successful multi-compartment methods are based on diffusion dictionaries. The diffusion dictionary-based methods assume that the observed Magnetic Resonance signal at each voxel is a linear combination of the fixed dictionary elements (dictionary atoms). The atoms are fixed along different orientations and diffusivity profiles. In this work, we present a sparse and adaptive diffusion dictionary method based on the Diffusion Basis Functions Model to estimate in vivo brain axonal fiber populations. Our proposal overcomes the following limitations of the diffusion dictionary-based methods: the limited angular resolution and the fixed shapes of the atom set. We propose to iteratively re-estimate the orientations and the diffusivity profile of the atoms independently at each voxel by using a simplified and easier-to-solve mathematical approach. As a result, we improve the fitting of the Diffusion-Weighted Magnetic Resonance signal. The advantages with respect to the former Diffusion Basis Functions method are demonstrated on the synthetic data set used in the 2012 HARDI Reconstruction Challenge and on in vivo human data. We demonstrate that the improvements obtained in the intra-voxel fiber structure estimations benefit brain research by allowing better tractography estimations to be obtained. Hence, these improvements result in a more accurate computation of brain connectivity patterns. PMID:26519793

  18. Selecting informative subsets of sparse supermatrices increases the chance to find correct trees

    PubMed Central

    2013-01-01

    Background Character matrices with extensive missing data are frequently used in phylogenomics with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. Drawbacks of these selections are their exclusive reliance on data coverage without consideration of actual signal in the data, so they might not deliver optimal data matrices in terms of potential phylogenetic signal. In order to circumvent this problem, we have developed a heuristic, implemented in software called mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage and (2) reduces supermatrices with a simple hill climbing procedure to submatrices with high total information content. We conducted simulation studies using matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10-30%. Results With these matrices, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. A selection of a data subset with the herein proposed approach increased the chance to recover correct partial trees more than 10-fold. The selection of data subsets with the herein proposed simple hill climbing procedure performed well whether considering the information content or just simple presence/absence information of genes. We also applied our approach to an empirical data set, addressing questions of vertebrate systematics. With this empirical dataset, selecting a data subset with high information content that supports a tree with high average bootstrap support was most successful when the information content of genes was considered. Conclusions Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis outperforming the
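The hill-climbing reduction described above can be sketched with a simple greedy rule: repeatedly drop the taxon or gene contributing the least information until the submatrix is sufficiently dense. This is a minimal illustration under assumed scoring, not the mare implementation; `info[t][g]` is a hypothetical per-cell information score with 0 marking missing data.

```python
# Hypothetical sketch of supermatrix reduction by greedy hill climbing.
def density(info, taxa, genes):
    filled = sum(1 for t in taxa for g in genes if info[t][g] > 0)
    return filled / (len(taxa) * len(genes))

def hill_climb(info, min_density=0.75):
    taxa = list(range(len(info)))
    genes = list(range(len(info[0])))
    while (density(info, taxa, genes) < min_density
           and len(taxa) > 2 and len(genes) > 2):
        # Drop whichever taxon or gene has the lowest mean information.
        worst_t = min(taxa, key=lambda t: sum(info[t][g] for g in genes))
        worst_g = min(genes, key=lambda g: sum(info[t][g] for t in taxa))
        t_score = sum(info[worst_t][g] for g in genes) / len(genes)
        g_score = sum(info[t][worst_g] for t in taxa) / len(taxa)
        if t_score <= g_score:
            taxa.remove(worst_t)
        else:
            genes.remove(worst_g)
    return taxa, genes

# 4 taxa x 3 genes; taxon 3 is mostly missing data.
info = [[1, 1, 1],
        [1, 1, 0],
        [1, 0, 1],
        [0, 0, 1]]
taxa, genes = hill_climb(info)
```

On this toy matrix the sparse taxon is removed first, after which the remaining submatrix exceeds the density target and the climb stops.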

  19. Encoding Cortical Dynamics in Sparse Features

    PubMed Central

    Khan, Sheraz; Lefèvre, Julien; Baillet, Sylvain; Michmizos, Konstantinos P.; Ganesan, Santosh; Kitzbichler, Manfred G.; Zetino, Manuel; Hämäläinen, Matti S.; Papadelis, Christos; Kenet, Tal

    2014-01-01

    Distributed cortical solutions of magnetoencephalography (MEG) and electroencephalography (EEG) exhibit complex spatial and temporal dynamics. The extraction of patterns of interest and dynamic features from these cortical signals has so far relied on the expertise of investigators. There is a definite need in both clinical and neuroscience research for a method that will extract critical features from high-dimensional neuroimaging data in an automatic fashion. We have previously demonstrated the use of optical flow techniques for evaluating the kinematic properties of motion fields projected on non-flat manifolds such as the cortical surface. We have further extended this framework to automatically detect features in the optical flow vector field by using the modified and extended 2-Riemannian Helmholtz–Hodge decomposition (HHD). Here, we applied these mathematical models on simulation and MEG data recorded from a healthy individual during a somatosensory experiment and a pediatric epilepsy patient during sleep. We tested whether our technique can automatically extract salient dynamical features of cortical activity. Simulation results indicated that we can precisely reproduce the simulated cortical dynamics with HHD, encode them in sparse features, and represent the propagation of brain activity between distinct cortical areas. Using HHD, we decoded the somatosensory N20 component into two HHD features and represented the dynamics of brain activity as a traveling source between two primary somatosensory regions. In the epilepsy patient, we displayed the propagation of the epileptic activity around the margins of a brain lesion. Our findings indicate that HHD measures computed from cortical dynamics can: (i) quantitatively assess the cortical dynamics in both the healthy and the diseased brain in terms of sparse features and dynamic brain activity propagation between distinct cortical areas, and (ii) facilitate a reproducible, automated analysis of experimental and clinical

  20. Encoding cortical dynamics in sparse features.

    PubMed

    Khan, Sheraz; Lefèvre, Julien; Baillet, Sylvain; Michmizos, Konstantinos P; Ganesan, Santosh; Kitzbichler, Manfred G; Zetino, Manuel; Hämäläinen, Matti S; Papadelis, Christos; Kenet, Tal

    2014-01-01

    Distributed cortical solutions of magnetoencephalography (MEG) and electroencephalography (EEG) exhibit complex spatial and temporal dynamics. The extraction of patterns of interest and dynamic features from these cortical signals has so far relied on the expertise of investigators. There is a definite need in both clinical and neuroscience research for a method that will extract critical features from high-dimensional neuroimaging data in an automatic fashion. We have previously demonstrated the use of optical flow techniques for evaluating the kinematic properties of motion fields projected on non-flat manifolds such as the cortical surface. We have further extended this framework to automatically detect features in the optical flow vector field by using the modified and extended 2-Riemannian Helmholtz-Hodge decomposition (HHD). Here, we applied these mathematical models on simulation and MEG data recorded from a healthy individual during a somatosensory experiment and a pediatric epilepsy patient during sleep. We tested whether our technique can automatically extract salient dynamical features of cortical activity. Simulation results indicated that we can precisely reproduce the simulated cortical dynamics with HHD, encode them in sparse features, and represent the propagation of brain activity between distinct cortical areas. Using HHD, we decoded the somatosensory N20 component into two HHD features and represented the dynamics of brain activity as a traveling source between two primary somatosensory regions. In the epilepsy patient, we displayed the propagation of the epileptic activity around the margins of a brain lesion. Our findings indicate that HHD measures computed from cortical dynamics can: (i) quantitatively assess the cortical dynamics in both the healthy and the diseased brain in terms of sparse features and dynamic brain activity propagation between distinct cortical areas, and (ii) facilitate a reproducible, automated analysis of experimental and clinical

  1. Recent Development of Dual-Dictionary Learning Approach in Medical Image Analysis and Reconstruction

    PubMed Central

    Wang, Bigong; Li, Liang

    2015-01-01

    As an implementation of compressive sensing (CS), the dual-dictionary learning (DDL) method provides an ideal means of restoring signals using two related dictionaries and sparse representation. It has been proven that this method performs well in medical image reconstruction with highly undersampled data, especially for multimodality imaging such as CT-MRI hybrid reconstruction. Because of its outstanding strengths, short signal acquisition time and low radiation dose, DDL has attracted broad interest in both academic and industrial fields. In this review article, we summarize the development history of DDL, survey the latest advances, and discuss its role in future directions and potential applications in medical imaging. Meanwhile, this paper points out that DDL is still at an initial stage, and further studies are necessary to improve this method, especially in dictionary training. PMID:26089956

  2. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; O'Reilly, Meaghan A.; Hynynen, Kullervo

    2013-07-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337-43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring techniques currently exist.
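The passive beamforming step in the abstract above amounts to back-propagating receiver traces by candidate travel-time delays and summing. The 1-D sketch below is a heavily simplified, hypothetical illustration of that delay-and-sum principle (no skull-specific aberration correction); geometry, sampling rate, and signals are made-up values.

```python
# Hedged 1-D sketch of passive delay-and-sum source mapping.
C = 1500.0   # assumed sound speed in water, m/s
FS = 1.0e6   # assumed sampling rate, Hz

def delay_and_sum(traces, receivers, candidates):
    """Coherent sum of receiver traces for each candidate source position."""
    image = []
    for x in candidates:
        acc = 0.0
        for trace, r in zip(traces, receivers):
            delay = abs(x - r) / C               # travel time to receiver
            idx = int(round(delay * FS))         # sample index at that delay
            if idx < len(trace):
                acc += trace[idx]
        image.append(acc)
    return image

# Synthetic point source at x = 0.05 m emitting a unit pulse at t = 0:
receivers = [0.0, 0.02, 0.08, 0.15]
n = 200
traces = []
for r in receivers:
    t_arr = int(round(abs(0.05 - r) / C * FS))
    trace = [0.0] * n
    trace[t_arr] = 1.0
    traces.append(trace)

img = delay_and_sum(traces, receivers, [0.0, 0.05, 0.10])
```

Only the candidate position coinciding with the true source aligns all four traces, producing a coherent peak; the CT-based corrections in the paper adjust the per-element delays (and amplitudes) for propagation through the skull.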

  3. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    PubMed Central

    Jones, Ryan M.; O’Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337–43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring technique currently exists. PMID:23807573

  4. Sparse-view proton computed tomography using modulated proton beams

    SciTech Connect

    Lee, Jiseoc; Kim, Changhwan; Cho, Seungryong; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong

    2015-02-15

    Purpose: Proton imaging that uses a modulated proton beam and an intensity detector allows a relatively fast image acquisition compared to the imaging approach based on a trajectory-tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The geometric straight-ray model assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that must be addressed for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Methods: Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally spaced over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. For improving the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. They implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Results: Objects of higher electron density have been reconstructed more accurately than those of lower density. The bone, for example, has been reconstructed

  5. Sparse Coding for Alpha Matting

    NASA Astrophysics Data System (ADS)

    Johnson, Jubin; Varnousfaderani, Ehsan Shahrian; Cholakkal, Hisham; Rajan, Deepu

    2016-07-01

    Existing color-sampling-based alpha matting methods use the compositing equation to estimate alpha at a pixel from pairs of foreground (F) and background (B) samples. The quality of the matte depends on the selected (F,B) pairs. In this paper, the matting problem is reinterpreted as a sparse coding of pixel features, wherein the sum of the codes gives the estimate of the alpha matte from a set of unpaired F and B samples. A non-parametric probabilistic segmentation provides a certainty measure on the pixel belonging to foreground or background, based on which a dictionary is formed for use in sparse coding. By removing the restriction to conform to (F,B) pairs, this method allows for better alpha estimation from multiple F and B samples. The same framework is extended to videos, where the requirement of temporal coherence is handled effectively. Here, the dictionary is formed by samples from multiple frames. A multi-frame graph model, as opposed to the single-image model used for image matting, is proposed that can be solved efficiently in closed form. Quantitative and qualitative evaluations on a benchmark dataset show that the proposed method outperforms the current state of the art in image and video matting.

  6. Sparse encoding of automatic visual association in hippocampal networks.

    PubMed

    Hulme, Oliver J; Skov, Martin; Chadwick, Martin J; Siebner, Hartwig R; Ramsøy, Thomas Z

    2014-11-15

    Intelligent action entails exploiting predictions about associations between elements of one's environment. The hippocampus and mediotemporal cortex are endowed with the network topology, physiology, and neurochemistry to automatically and sparsely code sensori-cognitive associations that can be reconstructed from single or partial inputs. Whilst acquiring fMRI data and performing an attentional task, participants were incidentally presented with a sequence of cartoon images. By assigning subjects a post-scan free-association task on the same images, we assayed the density of associations triggered by these stimuli. Using multivariate Bayesian decoding, we show that human hippocampal and temporal neocortical structures host sparse associative representations that are automatically triggered by visual input. Furthermore, as predicted theoretically, there was a significant increase in sparsity in the Cornu Ammonis subfields, relative to the entorhinal cortex. Remarkably, the sparsity of CA encoding correlated significantly with associative memory performance over subjects; elsewhere within the temporal lobe, the entorhinal, parahippocampal, perirhinal and fusiform cortices showed the highest model evidence for the sparse encoding of associative density. In the absence of reportability or attentional confounds, this charts a distribution of visual associative representations within hippocampal populations and their temporal lobe afferent fields, and demonstrates the viability of retrospective associative sampling techniques for assessing the form of reflexive associative encoding. PMID:25038440

  7. An Equivalence Between Sparse Approximation and Support Vector Machines.

    PubMed

    Girosi

    1998-07-28

    This article shows a relationship between two different approximation techniques: the support vector machine (SVM), proposed by V. Vapnik (1995), and a sparse approximation scheme that resembles the basis pursuit denoising algorithm (Chen, 1995; Chen, Donoho, and Saunders, 1995). SVM is a technique that can be derived from the structural risk minimization principle (Vapnik, 1982) and can be used to estimate the parameters of several different approximation schemes, including radial basis functions, algebraic and trigonometric polynomials, B-splines, and some forms of multilayer perceptrons. Basis pursuit denoising is a sparse approximation technique in which a function is reconstructed by using a small number of basis functions chosen from a large set (the dictionary). We show that if the data are noiseless, the modified version of basis pursuit denoising proposed in this article is equivalent to SVM in the following sense: if applied to the same data set, the two techniques give the same solution, which is obtained by solving the same quadratic programming problem. In the appendix, we present a derivation of the SVM technique within the framework of regularization theory, rather than statistical learning theory, establishing a connection between SVM, sparse approximation, and regularization theory. PMID:9698353
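Basis pursuit denoising, the sparse approximation scheme discussed above, minimizes ||y - Dc||²/2 + λ||c||₁ over coefficients c. A standard way to solve it is iterative soft thresholding (ISTA); the sketch below uses a toy three-atom dictionary in R², not anything from the article, and ISTA stands in for the quadratic-programming formulation the article actually analyzes.

```python
# Hedged sketch: basis pursuit denoising via iterative soft thresholding.
def soft(v, t):
    """Soft-thresholding (proximal operator of the l1 norm)."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista(D, y, lam=0.1, steps=500, step=0.4):
    m, n = len(D), len(D[0])
    c = [0.0] * n
    for _ in range(steps):
        r = [sum(D[i][j] * c[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]  # D^T r
        c = [soft(c[j] - step * g[j], step * lam) for j in range(n)]
    return c

# Overcomplete dictionary of 3 unit atoms in R^2; y lies along atom 0.
s = 0.70710678
D = [[1.0, 0.0, s],
     [0.0, 1.0, s]]
y = [1.0, 0.0]
c = ista(D, y)
```

Because y aligns with the first atom, the l1 penalty drives the other two coefficients to exactly zero, recovering a one-atom (sparse) representation with c₀ = 1 - λ = 0.9.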

  8. Learning feature representations with a cost-relevant sparse autoencoder.

    PubMed

    Längkvist, Martin; Loutfi, Amy

    2015-02-01

    There is an increasing interest in the machine learning community in automatically learning feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets, and the test error rate is compared to that of a standard sparse autoencoder and other methods, such as the denoising autoencoder and contractive autoencoder. PMID:25515941

  9. Dictionary learning method for joint sparse representation-based image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Qiheng; Fu, Yuli; Li, Haifeng; Zou, Jian

    2013-05-01

    Recently, sparse representation (SR) and joint sparse representation (JSR) have attracted a lot of interest in image fusion. SR models signals by sparse linear combinations of prototype signal atoms that make up a dictionary. JSR assumes that different signals from the various sensors of the same scene form an ensemble: these signals share a common sparse component, and each individual signal owns an innovation sparse component. JSR offers lower computational complexity compared with SR. First, for JSR-based image fusion, we give a new fusion rule. Then, motivated by the method of optimal directions (MOD), we propose a novel dictionary learning method for JSR (MODJSR) whose dictionary updating procedure is derived by employing the JSR structure once with singular value decomposition (SVD). MODJSR has lower complexity than the K-SVD algorithm, which is often used in previous JSR-based fusion algorithms. To capture image details more efficiently, we propose the generalized JSR, in which the signal ensemble depends on two dictionaries, and extend MODJSR to MODGJSR for this case. MODJSR/MODGJSR can simultaneously carry out dictionary learning, denoising, and fusion of noisy source images. Experiments are given to demonstrate the validity of MODJSR/MODGJSR for image fusion.

  10. Image fusion using sparse overcomplete feature dictionaries

    SciTech Connect

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  11. Mathematical strategies for filtering complex systems: Regularly spaced sparse observations

    SciTech Connect

    Harlim, J.; Majda, A.J.

    2008-05-01

    Real-time filtering of noisy turbulent signals through sparse observations on a regularly spaced mesh is a notoriously difficult and important prototype filtering problem. Simpler off-line test criteria are proposed here as guidelines for filter performance for these stiff multi-scale filtering problems in the context of linear stochastic partial differential equations with turbulent solutions. Filtering turbulent solutions of the stochastically forced dissipative advection equation through sparse observations is developed as a stringent test bed for filter performance with sparse regular observations. The standard ensemble transform Kalman filter (ETKF) has poor skill on the test bed and even suffers from filter divergence, surprisingly, at observable times with resonant mean forcing and a decaying energy spectrum in the partially observed signal. Systematic alternative filtering strategies are developed here, including the Fourier Domain Kalman Filter (FDKF) and various reduced filters called the Strongly Damped Approximate Filter (SDAF), Variance Strongly Damped Approximate Filter (VSDAF), and Reduced Fourier Domain Kalman Filter (RFDKF), which operate only on the primary Fourier modes associated with the sparse observation mesh while nevertheless incorporating into the approximate filter various features of the interaction with the remaining modes. It is shown that these much cheaper alternative filters have significant skill on the test bed of turbulent solutions which exceeds ETKF and in various regimes often exceeds FDKF, provided that the approximate filters are guided by the off-line test criteria. The skill of the various approximate filters depends on the energy spectrum of the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale in a precise fashion elucidated here.
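The Fourier-domain filtering idea above rests on the fact that linear stochastic PDE dynamics decouple mode by mode in Fourier space, so each observed mode can be filtered with an independent scalar Kalman filter. The sketch below shows one such scalar filter; the damping factor, noise variances, and observations are made-up illustrative values, not parameters from the paper.

```python
# Hedged sketch: scalar Kalman filter for a single Fourier mode with
# dynamics u_{k+1} = a*u_k + noise(sigma2) and observation y = u + noise(r2).
def scalar_kalman(obs, a, sigma2, r2, m0=0.0, p0=1.0):
    m, p, estimates = m0, p0, []
    for y in obs:
        m, p = a * m, a * a * p + sigma2     # forecast step
        k = p / (p + r2)                     # Kalman gain
        m, p = m + k * (y - m), (1 - k) * p  # analysis (update) step
        estimates.append(m)
    return estimates

# One damped mode (|a| < 1) observed with noise at four times:
est = scalar_kalman([1.0, 0.8, 0.7, 0.6], a=0.9, sigma2=0.05, r2=0.1)
```

Running one such filter per observed Fourier mode, with the unobserved aliased modes handled approximately, is the essence of the FDKF-style reduced filters the abstract describes.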

  12. Signals on Graphs: Uncertainty Principle and Sampling

    NASA Astrophysics Data System (ADS)

    Tsitsvero, Mikhail; Barbarossa, Sergio; Di Lorenzo, Paolo

    2016-09-01

    In many applications, the observations can be represented as a signal defined over the vertices of a graph. The analysis of such signals requires the extension of standard signal processing tools. In this work, first, we provide a class of graph signals that are maximally concentrated on the graph domain and on its dual. Then, building on this framework, we derive an uncertainty principle for graph signals and illustrate the conditions for the recovery of band-limited signals from a subset of samples. We show an interesting link between uncertainty principle and sampling and propose alternative signal recovery algorithms, including a generalization to frame-based reconstruction methods. After showing that the performance of signal recovery algorithms is significantly affected by the location of samples, we suggest and compare a few alternative sampling strategies. Finally, we provide the conditions for perfect recovery of a useful signal corrupted by sparse noise, showing that this problem is also intrinsically related to vertex-frequency localization properties.

  13. Effects of sparse sampling schemes on image quality in low-dose CT

    SciTech Connect

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-11-15

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT) that can potentially reduce the health risk related to radiation dose. Particularly, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on image quality. Methods: Data properties of several sampling schemes are analyzed with respect to CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes and simulated these schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with image quality similar to the reference image, and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic

  14. Optimization of the signal selection of exclusively reconstructed decays of B0 and B/s mesons at CDF-II

    SciTech Connect

    Doerr, Christian; /Karlsruhe U., EKP

    2006-06-01

    The work presented in this thesis is mainly focused on the application in a Δm_s measurement. Chapter 1 starts with a general theoretical introduction on the unitarity triangle, with a focus on the impact of a Δm_s measurement. Chapter 2 then describes the experimental setup, consisting of the Tevatron collider and the CDF II detector, that was used to collect the data. In chapter 3 the concept of parameter estimation using binned and unbinned maximum likelihood fits is laid out. In addition, an introduction to the NeuroBayes® neural network package is given. Chapter 4 outlines the analysis steps, walking the path from the trigger-level selection to fully reconstructed B meson candidates. In chapter 5 the concepts and formulas that form the ingredients of an unbinned maximum likelihood fit of Δm_s (Δm_d) from a sample of reconstructed B mesons are discussed. Chapter 6 then introduces the novel method of using neural networks to achieve an improved signal selection. First the method is developed, tested and validated using the decay B⁰ → Dπ, D → Kππ, and then applied to the kinematically very similar decay B_s → D_s π, D_s → φπ, φ → KK. Chapter 7 uses events selected by the neural network selection as input to an unbinned maximum likelihood fit and extracts the B⁰ lifetime and Δm_d. In addition, an amplitude scan and an unbinned maximum likelihood fit of Δm_s are performed, applying the neural network selection developed for the decay channel B_s → D_s π, D_s → φπ, φ → KK. Finally, chapter 8 summarizes and gives an outlook.

  15. Inverse sparse tracker with a locally weighted distance metric.

    PubMed

    Wang, Dong; Lu, Huchuan; Xiao, Ziyang; Yang, Ming-Hsuan

    2015-09-01

    Sparse representation has recently been extensively studied for visual tracking and generally yields more accurate tracking results than classic methods. In this paper, we propose a sparsity-based tracking algorithm featuring two components: 1) an inverse sparse representation formulation and 2) a locally weighted distance metric. In the inverse sparse representation formulation, the target template is reconstructed with particles, which enables the tracker to compute the weights of all particles by solving only one l1 optimization problem and thereby provides a very efficient model. This is in direct contrast to most previous sparse trackers, which entail solving one optimization problem for each particle. However, we notice that this formulation with the standard Euclidean distance metric is sensitive to partial noise such as occlusion and illumination changes. To address this, we design a locally weighted distance metric to replace the Euclidean one. Similar ideas of using local features appear in other works, but they are usually supported only by popular assumptions (e.g., that local models handle partial noise better than holistic models) without solid theoretical analysis. In this paper, we attempt to explain this explicitly from a mathematical viewpoint. On that basis, we further propose a method to assign local weights by exploiting temporal and spatial continuity. In the proposed method, appearance changes caused by partial occlusion and shape deformation are carefully considered, thereby facilitating accurate similarity measurement and model update. The experimental validation is conducted from two aspects: 1) self-validation of key components and 2) comparison with other state-of-the-art algorithms. Results over 15 challenging sequences show that the proposed tracking algorithm performs favorably against existing sparsity-based trackers and other state-of-the-art methods. PMID:25935033
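The inverse formulation described above amounts to a single lasso problem: reconstruct the target template t from the particle matrix P and derive all particle weights from the resulting sparse coefficients. A minimal sketch, assuming iterative soft thresholding (ISTA) as the l1 solver; the function name `ista_l1` and all dimensions are illustrative, not the paper's implementation:

```python
import numpy as np

def ista_l1(P, t, lam=0.01, iters=2000):
    """Solve min_c 0.5*||P c - t||^2 + lam*||c||_1 with iterative
    soft thresholding (proximal gradient)."""
    step = 1.0 / np.linalg.norm(P, 2) ** 2   # 1/L, L = Lipschitz constant
    c = np.zeros(P.shape[1])
    for _ in range(iters):
        z = c - step * (P.T @ (P @ c - t))   # gradient step on the smooth part
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink
    return c

# toy template (20-dim) reconstructed from 50 particle observations;
# only particles 3 and 17 actually contribute
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 50))
true_c = np.zeros(50)
true_c[[3, 17]] = [1.0, -0.5]
t = P @ true_c
c = ista_l1(P, t)
weights = np.abs(c) / np.abs(c).sum()   # one solve gives all particle weights
```

The efficiency claim in the abstract corresponds to this structure: one l1 solve produces the full weight vector, instead of one solve per particle.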

  16. Compact Structure Hashing via Sparse and Similarity Preserving Embedding.

    PubMed

    Ye, Renzhen; Li, Xuelong

    2016-03-01

    Over the past few years, fast approximate nearest neighbor (ANN) search has become desirable or essential, e.g., in huge databases, and many hashing-based ANN techniques have therefore been presented to return the nearest neighbors of a given query from huge databases. Hashing-based ANN techniques have become popular due to their low memory cost and favorable computational complexity. Recently, most hashing methods have recognized the importance of the relationships among the data and exploited different structures of the data to improve retrieval performance. However, a limitation of the aforementioned methods is that the sparse reconstructive relationship of the data is neglected. In this case, few methods can capture the discriminating power and local properties of the data for learning compact and effective hash codes. To take this crucial issue into account, this paper proposes a method named special structure-based hashing (SSBH). SSBH can preserve the underlying geometric information among the data, and exploit the prior information that there exists a sparse reconstructive relationship among the data, for learning compact and effective hash codes. Extensive experimental results demonstrate that SSBH is more robust and more effective than state-of-the-art hashing methods. PMID:25910267

  17. Effect of asymmetrical eddy currents on magnetic diagnosis signals for equilibrium reconstruction in the Sino-UNIted Spherical Tokamak.

    PubMed

    Jiang, Y Z; Tan, Y; Gao, Z; Wang, L

    2014-11-01

    The vacuum vessel of the Sino-UNIted Spherical Tokamak was split into two insulated hemispheres, both of which were insulated from the central cylinder. The eddy currents flowing in the vacuum vessel become asymmetrical due to this discontinuity. A 3D finite element model was applied to study the eddy currents. The modeling results indicated that when the poloidal field (PF) was applied, the induced eddy currents would flow in the toroidal direction in the center of the hemispheres and would be forced to turn to the poloidal and radial directions at the insulated slit. Since the eddy currents converged on the top and bottom of the vessel, the current densities there tended to be much higher than those in the equatorial plane. Moreover, the eddy currents on the top and bottom of the vacuum vessel had the same direction when current flowed in the PF coils. These features resulted in the signals on the top and bottom flux loops leading the PF waveforms in phase. PMID:25430380

  18. Effect of asymmetrical eddy currents on magnetic diagnosis signals for equilibrium reconstruction in the Sino-UNIted Spherical Tokamak

    NASA Astrophysics Data System (ADS)

    Jiang, Y. Z.; Tan, Y.; Gao, Z.; Wang, L.

    2014-11-01

    The vacuum vessel of the Sino-UNIted Spherical Tokamak was split into two insulated hemispheres, both of which were insulated from the central cylinder. The eddy currents flowing in the vacuum vessel become asymmetrical due to this discontinuity. A 3D finite element model was applied to study the eddy currents. The modeling results indicated that when the poloidal field (PF) was applied, the induced eddy currents would flow in the toroidal direction in the center of the hemispheres and would be forced to turn to the poloidal and radial directions at the insulated slit. Since the eddy currents converged on the top and bottom of the vessel, the current densities there tended to be much higher than those in the equatorial plane. Moreover, the eddy currents on the top and bottom of the vacuum vessel had the same direction when current flowed in the PF coils. These features resulted in the signals on the top and bottom flux loops leading the PF waveforms in phase.

  19. Energy-based scheme for reconstruction of piecewise constant signals observed in the movement of molecular machines.

    PubMed

    Rosskopf, Joachim; Paul-Yuan, Korbinian; Plenio, Martin B; Michaelis, Jens

    2016-08-01

    Analyzing the physical and chemical properties of single DNA-based molecular machines such as polymerases and helicases requires tracking stepping motion on the length scale of base pairs. Although high-resolution instruments capable of reaching that limit have been developed, individual steps are oftentimes hidden by experimental noise, which complicates data processing. Here we present an effective two-step algorithm which detects steps in a high-bandwidth signal by minimizing an energy-based model (energy-based step finder, EBS). First, an efficient convex denoising scheme is applied, which allows compression to tuples of amplitudes and plateau lengths. Second, a combinatorial clustering algorithm formulated on a graph is used to assign steps to the tuple data while accounting for prior information. Performance of the algorithm was tested on Poissonian stepping data simulated on the basis of published kinetics data of RNA polymerase II (pol II). Comparison to existing step-finding methods shows that EBS is superior in speed while providing competitive step-detection results, especially in challenging situations. Moreover, the capability to detect backtracked intervals in experimental data of pol II, as well as stepping behavior of the Phi29 DNA packaging motor, is demonstrated. PMID:27627346

  20. Sparse representation for vehicle recognition

    NASA Astrophysics Data System (ADS)

    Monnig, Nathan D.; Sakla, Wesam

    2014-06-01

    The Sparse Representation for Classification (SRC) algorithm has been demonstrated to be a state-of-the-art algorithm for facial recognition applications. Wright et al. demonstrate that under certain conditions, the SRC algorithm classification performance is agnostic to choice of linear feature space and highly resilient to image corruption. In this work, we examined the SRC algorithm performance on the vehicle recognition application, using images from the semi-synthetic vehicle database generated by the Air Force Research Laboratory. To represent modern operating conditions, vehicle images were corrupted with noise, blurring, and occlusion, with representation of varying pose and lighting conditions. Experiments suggest that linear feature space selection is important, particularly in the cases involving corrupted images. Overall, the SRC algorithm consistently outperforms a standard k nearest neighbor classifier on the vehicle recognition task.

  1. Sparse and stable Markowitz portfolios

    PubMed Central

    Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace

    2009-01-01

    We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers the no-short-positions portfolios as a special case, but also allows for a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio. PMID:19617537
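The penalized objective described above (a least-squares fit of the target return plus an l1 penalty on the weights) can be sketched with cyclic coordinate descent. This is not the authors' implementation: the sum-to-one budget constraint is handled here by a crude final rescaling rather than the paper's equality-constrained formulation, and all names and parameter values are illustrative.

```python
import numpy as np

def sparse_portfolio(R, rho, tau, iters=200):
    """Sketch: minimize ||rho*1 - R w||^2 + tau*||w||_1 by cyclic coordinate
    descent, then rescale so the weights sum to one (a simplification of the
    equality-constrained formulation in the paper)."""
    T, N = R.shape
    y = np.full(T, rho)                  # target: constant return rho each period
    w = np.zeros(N)
    col_sq = (R ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(N):
            r_j = y - R @ w + R[:, j] * w[j]           # residual excluding asset j
            z = R[:, j] @ r_j
            w[j] = np.sign(z) * max(abs(z) - tau / 2.0, 0.0) / col_sq[j]
    s = w.sum()
    return w / s if abs(s) > 1e-12 else w

# toy returns: 250 periods, 10 assets; larger tau prunes positions
rng = np.random.default_rng(2)
R = rng.normal(loc=0.001, scale=0.02, size=(250, 10))
w_dense = sparse_portfolio(R, rho=0.001, tau=0.0)     # no penalty: all assets active
w_sparse = sparse_portfolio(R, rho=0.001, tau=2e-4)   # penalty prunes positions
```

Increasing tau drives more weights to exactly zero, which is the sparsity-versus-stability trade-off the abstract describes.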

  2. Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    PubMed Central

    Petrantonakis, Panagiotis C.; Poirazi, Panayiota

    2015-01-01

    Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG, such as its excitatory and inhibitory connectivity diagram, can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of a few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect via mossy cells, is shown to enhance the performance of IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776

  3. A sparse Bayesian framework for conditioning uncertain geologic models to nonlinear flow measurements

    NASA Astrophysics Data System (ADS)

    Li, Lianlin; Jafarpour, Behnam

    2010-09-01

    We present a Bayesian framework for reconstructing hydraulic properties of rock formations from nonlinear dynamic flow data by imposing sparsity on the distribution of the parameters in a sparse transform basis through a Laplace prior distribution. Sparse representation of the subsurface flow properties in a compression transform basis (where a compact representation is often possible) lends itself to a natural regularization approach, i.e., sparsity regularization, which has recently been exploited in solving ill-posed subsurface flow inverse problems. The Bayesian estimation approach presented here allows for a probabilistic treatment of the sparse reconstruction problem and has its roots in machine learning and the recently introduced relevance vector machine algorithm for linear inverse problems. We formulate the Bayesian sparse reconstruction algorithm and apply it to nonlinear subsurface inverse problems where solution sparsity in a discrete cosine transform is assumed. The probabilistic description of solution sparsity, as opposed to deterministic regularization, allows for quantification of the estimation uncertainty and avoids the need to specify a regularization parameter. Several numerical experiments from multiphase subsurface flow applications are presented to illustrate the performance of the proposed method and compare it with a regular Bayesian estimation approach that does not impose solution sparsity. While the examples are derived from subsurface flow modeling, the proposed framework can be applied to nonlinear inverse problems in other imaging applications, including geophysical and medical imaging and electromagnetic inverse problems.

  4. A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning.

    PubMed

    Yang, Jiachen; Ding, Zhiyong; Guo, Fei; Wang, Huogen; Hughes, Nick

    2015-11-01

    In this paper, we investigate the problem of optimizing multivariate performance measures, and propose a novel algorithm for it. Different from traditional machine learning methods, which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function corresponding to a multivariate performance measure can be minimized. We propose to map the tuple of data points to a tuple of sparse codes via a dictionary, and then apply a linear function to compare a sparse code against a given candidate class label. To learn the dictionary, sparse codes, and parameters of the linear function, we propose a joint optimization problem in which both the reconstruction error and sparsity of the sparse codes, and the upper bound of the complex loss function, are minimized. Moreover, the upper bound of the loss function is approximated by the sparse codes and the linear function parameters. To optimize this problem, we develop an iterative algorithm based on gradient descent to learn the sparse codes and hyper-predictor parameters alternately. Experimental results on several benchmark data sets show the advantage of the proposed methods over other state-of-the-art algorithms. PMID:26291045

  5. Approximate Orthogonal Sparse Embedding for Dimensionality Reduction.

    PubMed

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Yang, Jian; Zhang, David

    2016-04-01

    Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing sparsity or L1-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are discovered. The optimal sparse embeddings derived from the proposed framework can be computed by iterating the modified elastic net and singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework performs better than existing subspace learning algorithms, particularly in the case of small sample sizes. PMID:25955995

  6. Finding One Community in a Sparse Graph

    NASA Astrophysics Data System (ADS)

    Montanari, Andrea

    2015-10-01

    We consider a random sparse graph with bounded average degree, in which a subset of vertices has higher connectivity than the background. In particular, the average degree inside this subset is larger than outside (but still bounded). Given a realization of such a graph, we aim at identifying the hidden subset of vertices. This can be regarded as a model for the problem of finding a tightly knit community in a social network, or a cluster in a relational dataset. In this paper we present two sets of contributions: (i) We use the cavity method from spin glass theory to derive an exact phase diagram for the reconstruction problem. In particular, as the difference in edge probability increases, the problem undergoes two phase transitions, a static one and a dynamic one. (ii) We establish rigorous bounds on the dynamic phase transition and prove that, above a certain threshold, a local algorithm (belief propagation) correctly identifies most of the hidden set. Below the same threshold, no local algorithm can achieve this goal; however, in this regime the subset can be identified by exhaustive search. For small hidden sets and large average degree, the phase transition for local algorithms takes an intriguingly simple form: local algorithms succeed with high probability for deg_in − deg_out > √(deg_out/e) and fail for deg_in − deg_out < √(deg_out/e), where deg_in and deg_out are the average degrees inside and outside the community. We argue that spectral algorithms are also ineffective in the latter regime. It is an open problem whether any polynomial-time algorithm might succeed for deg_in − deg_out < √(deg_out/e).

  7. Wide field of view multifocal scanning microscopy with sparse sampling

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Wu, Jigang

    2016-02-01

    We propose to use sparsely sampled line scans with a sparsity-based reconstruction method to obtain images in a wide field of view (WFOV) multifocal scanning microscope. In the WFOV microscope, we used a holographically generated irregular focus grid to scan the sample in one dimension and then reconstructed the sample image from line scans by measuring the transmission of the foci through the sample during scanning. The line scans were randomly spaced, with average spacing larger than the Nyquist sampling requirement, and the image was recovered with sparsity-based reconstruction techniques. With this scheme, the amount of acquired data can be significantly reduced and the restriction of equally spaced foci positions can be removed, implying simpler experimental requirements. We built a prototype system and demonstrated the effectiveness of the reconstruction by recovering microscopic images of a U.S. Air Force target and an onion skin cell microscope slide with 40, 60, and 80% missing data with respect to the Nyquist sampling requirement.

  8. Gridding and fast Fourier transformation on non-uniformly sparse sampled multidimensional NMR data.

    PubMed

    Jiang, Bin; Jiang, Xianwang; Xiao, Nan; Zhang, Xu; Jiang, Ling; Mao, Xi-an; Liu, Maili

    2010-05-01

    For multidimensional NMR methods, non-uniform sparse sampling of the indirect dimensions can dramatically shorten the acquisition time of the experiments. However, non-uniformly sampled NMR data cannot be processed directly using the fast Fourier transform (FFT). We show that non-uniformly sampled NMR data can be reconstructed onto a Cartesian grid with the gridding method that has been widely applied in MRI, and subsequently processed using the FFT. The proposed gridding-FFT (GFFT) method increases the processing speed sharply compared with the previously proposed non-uniform Fourier transform, and may speed up the application of non-uniform sparse sampling approaches. PMID:20236843
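A minimal one-dimensional sketch of the gridding-FFT idea: bin the non-uniformly sampled time-domain points onto a uniform Cartesian grid, compensate for the sampling density, then apply the FFT. The MRI literature typically grids with a convolution kernel (e.g. Kaiser-Bessel); nearest-neighbour binning is used here as a simplifying assumption, and all names are illustrative.

```python
import numpy as np

def grid_and_fft(t_samples, values, n_grid, dwell):
    """Nearest-neighbour gridding of non-uniform time-domain samples onto a
    uniform grid (averaging duplicates as crude density compensation),
    followed by an FFT."""
    grid = np.zeros(n_grid, dtype=complex)
    counts = np.zeros(n_grid)
    idx = np.clip(np.round(t_samples / dwell).astype(int), 0, n_grid - 1)
    np.add.at(grid, idx, values)
    np.add.at(counts, idx, 1.0)
    nonzero = counts > 0
    grid[nonzero] /= counts[nonzero]
    return np.fft.fft(grid)

# toy FID: single resonance at 50 Hz, 128 of 256 dwell times retained at random
rng = np.random.default_rng(1)
n, dwell = 256, 1e-3
t = np.sort(rng.choice(np.arange(n) * dwell, size=128, replace=False))
fid = np.exp(2j * np.pi * 50 * t)
spec = grid_and_fft(t, fid, n, dwell)
peak_hz = np.fft.fftfreq(n, dwell)[np.argmax(np.abs(spec))]
```

Despite half the samples being missing, the dominant spectral peak still appears at the resonance frequency; the missing points add noise-like sidelobes rather than shifting the peak.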

  9. Gridding and fast Fourier transformation on non-uniformly sparse sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jiang, Xianwang; Xiao, Nan; Zhang, Xu; Jiang, Ling; Mao, Xi-an; Liu, Maili

    2010-05-01

    For multidimensional NMR methods, non-uniform sparse sampling of the indirect dimensions can dramatically shorten the acquisition time of the experiments. However, non-uniformly sampled NMR data cannot be processed directly using the fast Fourier transform (FFT). We show that non-uniformly sampled NMR data can be reconstructed onto a Cartesian grid with the gridding method that has been widely applied in MRI, and subsequently processed using the FFT. The proposed gridding-FFT (GFFT) method increases the processing speed sharply compared with the previously proposed non-uniform Fourier transform, and may speed up the application of non-uniform sparse sampling approaches.

  10. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
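The columnwise construction described above can be sketched as follows: for each column m_j of the approximate inverse M, run a few minimal-residual steps on min ||e_j − A m_j||_2 and truncate to the largest-magnitude entries. This dense toy (hypothetical name `approx_inverse`) stands in for the sparse-mode iteration the abstract describes.

```python
import numpy as np

def approx_inverse(A, n_iter=5, keep=5):
    """Columnwise approximate inverse: a few residual-minimizing descent
    steps per column, with hard truncation to `keep` entries to enforce
    sparsity of M."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        e_j = np.eye(n)[j]
        m = np.zeros(n)
        r = e_j - A @ m
        for _ in range(n_iter):
            g = A.T @ r                        # descent direction
            Ag = A @ g
            alpha = (r @ Ag) / (Ag @ Ag)       # minimal-residual step length
            m = m + alpha * g
            m[np.argsort(np.abs(m))[:-keep]] = 0.0   # keep largest entries
            r = e_j - A @ m
        M[:, j] = m
    return M

# diagonally dominant test matrix: preconditioning should shrink ||I - A M||
A = 4 * np.eye(20) - np.eye(20, k=1) - np.eye(20, k=-1)
M = approx_inverse(A)
```

The truncation step is what keeps M sparse; for matrices whose inverse entries decay away from the diagonal, only a few entries per column are needed for a useful preconditioner.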

  11. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is placed on Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) of sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for the implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques, in which the dominant singular values and corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications, which require approximate pseudo-inverses of large sparse Jacobian matrices.
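As a toy stand-in for the Lanczos- and subspace-iteration-based methods discussed above, power iteration on AᵀA recovers the largest singular triplet; the real methods extend this idea to several triplets at once with far better convergence behavior.

```python
import numpy as np

def top_singular_triplet(A, iters=200):
    """Power iteration on A^T A: returns (u, sigma, v) with A v ~= sigma u."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A.T @ (A @ v)        # one step of power iteration on A^T A
        v /= np.linalg.norm(v)
    sigma = np.linalg.norm(A @ v)
    u = (A @ v) / sigma
    return u, sigma, v

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 20))     # dense toy; the paper's matrices are sparse
u, s, v = top_singular_triplet(A)
```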

  12. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long (1000-bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech recognition and scene analysis, in signal detection and verification, and in adaptive control of automated equipment; in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. Major design issues faced in building such memories were resolved. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.

  13. Sparse distributed memory prototype: Principles of operation

    NASA Technical Reports Server (NTRS)

    Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip

    1988-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.
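The write/read mechanism described above can be sketched directly: random hard-location addresses, counter updates at every location within a Hamming radius of the write address, and majority-vote readout over the same neighbourhood. The parameters here (2000 locations, radius 117) are illustrative toy values, not the prototype's.

```python
import numpy as np

class SparseDistributedMemory:
    """Toy sparse distributed memory over 256-bit words."""
    def __init__(self, m_locations=2000, n_bits=256, radius=117, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(m_locations, n_bits))
        self.counters = np.zeros((m_locations, n_bits), dtype=int)
        self.radius = radius

    def _active(self, addr):
        """Hard locations within Hamming radius of addr."""
        return (self.addresses != addr).sum(axis=1) <= self.radius

    def write(self, addr, word):
        # +1 to counters where the word bit is 1, -1 where it is 0
        self.counters[self._active(addr)] += 2 * np.asarray(word) - 1

    def read(self, addr):
        # majority vote over the counters of all active locations
        return (self.counters[self._active(addr)].sum(axis=0) > 0).astype(int)

# write a word, then recall it from a nearby (10-bit-flipped) address
rng = np.random.default_rng(4)
addr = rng.integers(0, 2, 256)
word = rng.integers(0, 2, 256)
mem = SparseDistributedMemory()
mem.write(addr, word)
noisy = addr.copy()
noisy[rng.choice(256, size=10, replace=False)] ^= 1
recalled = mem.read(noisy)
```

Because the write and read neighbourhoods overlap heavily whenever the read address is within a few bit flips of the write address, the majority vote reproduces the stored word, which is the "sensitivity to similarity" property described in the abstract.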

  14. Sparse coding for layered neural networks

    NASA Astrophysics Data System (ADS)

    Katayama, Katsuki; Sakata, Yasuo; Horiguchi, Tsuyoshi

    2002-07-01

    We investigate the storage capacity of two types of fully connected layered neural networks with sparse coding when binary patterns are embedded into the networks by a Hebbian learning rule. One of them is a layered network in which the transfer function of even layers is different from that of odd layers. The other is a layered network with intra-layer connections, in which the transfer function of inter-layer connections is different from that of intra-layer connections, and inter-layer and intra-layer neurons are updated alternately. We derive recursion relations for order parameters by means of the signal-to-noise ratio method, and then apply the self-control threshold method proposed by Dominguez and Bollé to both layered networks with monotonic transfer functions. We find that the critical storage capacity α_C is about 0.11|a ln a|⁻¹ (a ≪ 1) for both layered networks, where a is the neuronal activity. It turns out that the basin of attraction is larger for both layered networks when the self-control threshold method is applied.

  15. Data-driven adaptation of a union of sparsifying transforms for blind compressed sensing MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Ravishankar, Saiprasad; Bresler, Yoram

    2015-09-01

    Compressed Sensing has been demonstrated to be a powerful tool for magnetic resonance imaging (MRI), where it enables accurate recovery of images from highly undersampled k-space measurements by exploiting the sparsity of the images or image patches in a transform domain or dictionary. In this work, we focus on blind compressed sensing, where the underlying sparse signal model is a priori unknown, and propose a framework to simultaneously reconstruct the underlying image as well as the unknown model from highly undersampled measurements. Specifically, our model is that the patches of the underlying MR image(s) are approximately sparse in a transform domain. We also extend this model to a union of transforms model that is better suited to capture the diversity of features in MR images. The proposed block coordinate descent type algorithms for blind compressed sensing are highly efficient. Our numerical experiments demonstrate the superior performance of the proposed framework for MRI compared to several recent image reconstruction methods. Importantly, the learning of a union of sparsifying transforms leads to better image reconstructions than a single transform.

  16. Near-field acoustic holography using sparse regularization and compressive sampling principles.

    PubMed

    Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi

    2012-09-01

    Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly fewer measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization. PMID:22978881

  17. Sparse Spectrotemporal Coding of Sounds

    NASA Astrophysics Data System (ADS)

    Klein, David J.; König, Peter; Körding, Konrad P.

    2003-12-01

    Recent studies of biological auditory processing have revealed that sophisticated spectrotemporal analyses are performed by central auditory systems of various animals. The analysis is typically well matched with the statistics of relevant natural sounds, suggesting that it produces an optimal representation of the animal's acoustic biotope. We address this topic using simulated neurons that learn an optimal representation of a speech corpus. As input, the neurons receive a spectrographic representation of sound produced by a peripheral auditory model. The output representation is deemed optimal when the responses of the neurons are maximally sparse. Following optimization, the simulated neurons are similar to real neurons in many respects. Most notably, a given neuron only analyzes the input over a localized region of time and frequency. In addition, multiple subregions either excite or inhibit the neuron, together producing selectivity to spectral and temporal modulation patterns. This suggests that the brain's solution is particularly well suited for coding natural sound; therefore, it may prove useful in the design of new computational methods for processing speech.

  18. Resistant multiple sparse canonical correlation.

    PubMed

    Coleman, Jacob; Replogle, Joseph; Chandler, Gabriel; Hardin, Johanna

    2016-04-01

    Canonical correlation analysis (CCA) is a multivariate technique that takes two datasets and forms the most highly correlated possible pairs of linear combinations between them. Each subsequent pair of linear combinations is orthogonal to the preceding pair, meaning that new information is gleaned from each pair. By looking at the magnitude of coefficient values, we can find out which variables can be grouped together, thus better understanding multiple interactions that are otherwise difficult to compute or grasp intuitively. CCA appears to have quite powerful applications to high-throughput data, as we can use it to discover, for example, relationships between gene expression and gene copy number variation. One of the biggest problems of CCA is that the number of variables (often upwards of 10,000) makes biological interpretation of linear combinations nearly impossible. To limit variable output, we have employed a method known as sparse canonical correlation analysis (SCCA), while adding estimation which is resistant to extreme observations or other types of deviant data. In this paper, we have demonstrated the success of resistant estimation in variable selection using SCCA. Additionally, we have used SCCA to find multiple canonical pairs for extended knowledge about the datasets at hand. Again, using resistant estimators provided more accurate estimates than standard estimators in the multiple canonical correlation setting. R code is available and documented at https://github.com/hardin47/rmscca. PMID:26963062

  19. Sparse Bayesian infinite factor models

    PubMed Central

    Bhattacharya, A.; Dunson, D. B.

    2011-01-01

    We focus on sparse modelling of high-dimensional covariance matrices using Bayesian latent factor models. We propose a multiplicative gamma process shrinkage prior on the factor loadings which allows introduction of infinitely many factors, with the loadings increasingly shrunk towards zero as the column index increases. We use our prior on a parameter-expanded loading matrix to avoid the order dependence typical in factor analysis models and develop an efficient Gibbs sampler that scales well as data dimensionality increases. The gain in efficiency is achieved by the joint conjugacy property of the proposed prior, which allows block updating of the loadings matrix. We propose an adaptive Gibbs sampler for automatically truncating the infinite loading matrix through selection of the number of important factors. Theoretical results are provided on the support of the prior and truncation approximation bounds. A fast algorithm is proposed to produce approximate Bayes estimates. Latent factor regression methods are developed for prediction and variable selection in applications with high-dimensional correlated predictors. Operating characteristics are assessed through simulation studies, and the approach is applied to predict survival times from gene expression data. PMID:23049129

  20. Enhancing Scalability of Sparse Direct Methods

    SciTech Connect

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia,Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-07-23

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on SciDAC applications, including fusion simulation (CEMM) and accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have focused on new techniques to overcome the scalability bottlenecks of direct methods, in both time and memory. These include parallelizing the symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly parallel petascale computers.

  1. Sparse High Dimensional Models in Economics

    PubMed Central

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2010-01-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635

  2. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis for palmprint images to extract feature matrixes, which are assembled into an overcomplete dictionary in sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residual between testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method has better robustness against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate. PMID:27386281
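Plain orthogonal matching pursuit — the basic greedy algorithm underlying the paper's subspace variant, sketched here in its simplest form (this is not the subspace version itself, and the dictionary is synthetic):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of dictionary D to represent y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))          # atom most correlated with residual
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs               # re-fit on the chosen support
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                              # unit-norm atoms
x_true = np.zeros(256)
x_true[[10, 100, 200]] = [2.0, -1.0, 1.5]
y = D @ x_true                                              # "test image" as sparse combination
x_hat = omp(D, y, k=3)
```

In the classification setting of the abstract, the residual after fitting on each class's sub-dictionary would be compared to pick the class with the smallest reconstruction error.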

  3. Online learning and generalization of parts-based image representations by non-negative sparse autoencoders.

    PubMed

    Lemme, Andre; Reinhart, René Felix; Steil, Jochen Jakob

    2012-09-01

    We present an efficient online learning scheme for non-negative sparse coding in autoencoder neural networks. It comprises a novel synaptic decay rule that ensures non-negative weights in combination with an intrinsic self-adaptation rule that optimizes sparseness of the non-negative encoding. We show that non-negativity constrains the space of solutions such that overfitting is prevented and very similar encodings are found irrespective of the network initialization and size. We benchmark the novel method on real-world datasets of handwritten digits and faces. The autoencoder yields higher sparseness and lower reconstruction errors than related offline algorithms based on matrix factorization. It generalizes to new inputs both accurately and without costly computations, which is fundamentally different from the classical matrix factorization approaches. PMID:22706093

  4. Interpolating Sparse Scattered Oceanographic Data Using Flow Information

    NASA Astrophysics Data System (ADS)

    Streletz, G. J.; Gebbie, G.; Spero, H. J.; Kreylos, O.; Kellogg, L. H.; Hamann, B.

    2012-12-01

    We present a novel approach for interpolating sparse scattered data in the presence of a flow field. In order to visualize a scalar field representing a physical quantity such as salinity, temperature, or nutrient concentration in the ocean, the individual measured values of the quantity of interest typically are first converted into a representation of the scalar field on a regular grid. If the measured values are located at a number of scattered sites, then the reconstruction process will be scattered data interpolation. Scattered data interpolation itself is a well-known problem space for which many methods exist, including methods involving radial basis functions, statistical approaches such as optimal interpolation, and grid-based methods such as Laplace interpolation. However, the quality of the reconstruction result obtained using such methods depends upon having a sufficient density of sample points as input. For cases involving sparse scattered data - such as is the case when using measurements from benthic foraminifera in deep sea sedimentary cores as a proxy for the physical properties of the past ocean - the standard methods may not produce acceptable results. However, if the scalar field is associated with a known (or partially known) flow field, then the flow field information can be used to enhance the interpolation method in order to compensate for the sparsity of the available scalar field samples. Our hypothesis is that scalar field values should be more highly correlated along streamlines of the flow field than across such streamlines. We have investigated and tested such augmented, flow-field-aware scattered data interpolation methods. In particular, we have modified standard scattered data interpolation methods to use non-Euclidean distance pseudometrics, which we have constructed by employing various relative weightings of "distance-along-streamlines" versus "distance-from-streamlines." 
We have tested the resulting methods by applying them to
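A minimal sketch of the flow-aware pseudometric idea described above, assuming a locally constant unit flow direction and illustrative weights (the actual streamline-based distances and weightings in the study are more involved):

```python
import math

def flow_distance(p, q, flow, w_along=1.0, w_across=4.0):
    """Anisotropic pseudometric: decompose p - q into components along and
    across a (locally constant) unit flow direction and weight them differently,
    so points along a streamline count as "closer" than points across it."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    along = dx * flow[0] + dy * flow[1]       # projection onto the flow direction
    across = -dx * flow[1] + dy * flow[0]     # perpendicular component
    return math.hypot(w_along * along, w_across * across)

def idw(p, samples, flow, power=2.0):
    """Inverse-distance weighting using the flow-aware pseudometric."""
    num = den = 0.0
    for q, value in samples:
        d = flow_distance(p, q, flow)
        if d < 1e-12:
            return value
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Samples in a horizontal flow: values correlate along x, vary across y
flow = (1.0, 0.0)
samples = [((0.0, 0.0), 1.0), ((4.0, 0.0), 1.0), ((0.5, 1.0), 5.0)]
est = idw((2.0, 0.0), samples, flow)
```

Because the cross-stream sample is penalized by `w_across`, the estimate at (2, 0) stays close to the value shared by the two on-streamline samples.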

  5. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measuring strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which greatly reduces the amount of sampled data. To handle the 2D reconstruction problem, the usual approach converts the 2D problem into a 1D one via the Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we introduce the 2D-SL0 algorithm into the image reconstruction. It is proved that 2D-SL0 achieves results equivalent to other 1D reconstruction methods while significantly reducing computational complexity and memory usage. Moreover, simulation results demonstrate the effectiveness and feasibility of our method.
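The Kronecker-product equivalence mentioned above can be checked numerically. The sizes below are arbitrary; the point is that the vectorized 1D dictionary (here 48x192) grows multiplicatively while the 2D factors stay small, which is why 2D algorithms such as 2D-SL0 that work with the factors directly are cheaper:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 16))    # range-dimension sensing matrix
B = rng.standard_normal((6, 12))    # azimuth-dimension sensing matrix
X = rng.standard_normal((16, 12))   # 2D scene (sparse in practice)

# 2D measurement model: operate with the small factors A and B directly
Y = A @ X @ B.T

# Equivalent 1D model after vectorization: vec(Y) = (B kron A) vec(X),
# with column-major (Fortran-order) vectorization
K = np.kron(B, A)                   # 48 x 192 -- much larger than A and B combined
y_vec = K @ X.reshape(-1, order="F")
```

Both models produce identical measurements, but the Kronecker form stores and multiplies a matrix whose size is the product of the factor sizes.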

  6. LASER APPLICATIONS IN MEDICINE: Analysis of distortions in the velocity profiles of suspension flows inside a light-scattering medium upon their reconstruction from the optical coherence Doppler tomograph signal

    NASA Astrophysics Data System (ADS)

    Bykov, A. V.; Kirillin, M. Yu; Priezzhev, A. V.

    2005-11-01

    Model signals from one and two plane flows of a particle suspension are obtained for an optical coherence Doppler tomograph (OCDT) by the Monte Carlo method. The optical properties of the particles mimic those of non-aggregating erythrocytes. The flows are considered in a stationary scattering medium with optical properties close to those of skin. It is shown that, as the flow position depth increases, the flow velocity determined from the OCDT signal becomes smaller than the specified velocity and the reconstructed profile extends in the direction of the distant boundary, which is accompanied by a shift of its maximum. In the case of two flows, an increase in the velocity of the near-surface flow leads to overestimated velocity values in the reconstructed profile of the second flow. Numerical simulations were performed using a multiprocessor parallel-architecture computer.

  7. Modeling human performance with low light sparse color imagers

    NASA Astrophysics Data System (ADS)

    Haefner, David P.; Reynolds, Joseph P.; Cha, Jae; Hodgkin, Van

    2011-05-01

    Reflective-band sensors are often signal-to-noise limited in low light conditions. Any additional filtering to obtain spectral information further reduces the signal-to-noise ratio, greatly affecting range performance. Modern sensors, such as the sparse color filter CCD, circumvent this additional degradation by reducing the number of pixels affected by filters and distributing the color information. As color sensors become more prevalent in the warfighter arsenal, the performance of the sensor-soldier system must be quantified. While field performance testing ultimately validates the success of a sensor, accurately modeling sensor performance greatly reduces development time and cost, allowing the best technology to reach the soldier the fastest. Modeling of sensors requires accounting for how the signal is affected through the modulation transfer function (MTF) and noise of the system. For the modeling of these new sensors, the MTF and noise for each color band must be characterized, and the appropriate sampling and blur must be applied. We show how sparse-array color filter sensors may be modeled and how a soldier's performance with such a sensor may be predicted. This general approach to modeling color sensors can be extended to incorporate all types of low light color sensors.

  8. Flexible Multilayer Sparse Approximations of Matrices and Applications

    NASA Astrophysics Data System (ADS)

    Le Magoarou, Luc; Gribonval, Remi

    2016-06-01

    The computational cost of many signal processing and machine learning techniques is often dominated by the cost of applying certain linear operators to high-dimensional vectors. This paper introduces an algorithm aimed at reducing the complexity of applying linear operators in high dimension by approximately factorizing the corresponding matrix into few sparse factors. The approach relies on recent advances in non-convex optimization. It is first explained and analyzed in detail and then demonstrated experimentally on various problems including dictionary learning for image denoising, and the approximation of large matrices arising in inverse problems.

  9. CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Magnotti, Gaetano

    2010-01-01

    The dual-pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with the number of species than a conventional library. The method was demonstrated by fitting synthetic "experimental" spectra containing 4 resonant species (N2, O2, H2 and CO2), both with and without noise, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least-squares fitting of the signal, as opposed to unweighted least-squares fitting of the signal or of its square root, was shown to produce the least random error and to minimize bias error in the fitted parameters.
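The weighted least-squares idea can be illustrated on a generic linear model with signal-dependent noise. This is a hedged sketch of the estimator only; the actual CARS fitting is a nonlinear fit over temperature and species concentrations, and the model, weights, and noise level below are assumptions:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Solve min_b sum_i w_i * (y_i - x_i . b)^2 by rescaling rows and using OLS."""
    sw = np.sqrt(w)
    b, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return b

# Linear model whose noise standard deviation grows with the signal level
rng = np.random.default_rng(3)
x = np.linspace(1.0, 10.0, 200)
X = np.column_stack([np.ones_like(x), x])
sigma = 0.1 * x                                   # heteroscedastic noise
y = 2.0 + 0.5 * x + sigma * rng.standard_normal(x.size)

# Weighting by inverse variance down-weights the noisiest points
b_wls = weighted_least_squares(X, y, 1.0 / sigma**2)
```

With inverse-variance weights, the high-signal (high-noise) points no longer dominate the fit, which is the same reason weighted fitting reduced error in the spectral fits above.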

  10. Sparse and Compositionally Robust Inference of Microbial Ecological Networks

    PubMed Central

    Kurtz, Zachary D.; Müller, Christian L.; Miraldi, Emily R.; Littman, Dan R.; Blaser, Martin J.; Bonneau, Richard A.

    2015-01-01

    16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) is required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies. SPIEC
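The compositional-data transformation step that pipelines like this build on can be sketched with a centered log-ratio (CLR) transform. The pseudo-count of 0.5 used to handle zero counts is an assumption for illustration, not a value taken from the paper:

```python
import math

def clr(counts, pseudo=0.5):
    """Centered log-ratio transform of one sample's OTU counts.
    A pseudo-count (assumed 0.5 here) handles zeros; the result sums to ~0,
    which removes the unit-sum constraint that makes raw proportions dependent."""
    logs = [math.log(c + pseudo) for c in counts]
    mean_log = sum(logs) / len(logs)
    return [v - mean_log for v in logs]

sample = [120, 30, 0, 50]   # raw OTU counts for one sample
z = clr(sample)
```

After the CLR transform, sparse inverse-covariance methods can be applied to the transformed data without the spurious correlations induced by the unit-sum constraint.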

  12. Finding Nonoverlapping Substructures of a Sparse Matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2005-08-11

    Many applications of scientific computing rely on computations on sparse matrices. The design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the low compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far from peak performance on a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d}, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping dense blocks in a sparse matrix, which has not previously been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm that runs in time linear in the number of nonzeros in the matrix. This extended abstract focuses on our results for 2x2 dense blocks. However, we show that our results can be generalized to arbitrarily sized dense blocks and to many other oriented substructures, which can be exploited to improve the memory performance of sparse matrix operations.
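A naive greedy baseline for the nonoverlapping 2x2 dense-block problem — explicitly not the paper's 2/3-approximation algorithm, just a sketch of what the problem asks for:

```python
def greedy_2x2_blocks(nz):
    """Greedily select nonoverlapping, fully dense 2x2 blocks from a set of
    nonzero coordinates. A simple first-fit baseline with no approximation
    guarantee (the paper's algorithm is more careful)."""
    nz = set(nz)
    used, blocks = set(), []
    for (i, j) in sorted(nz):
        cells = {(i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)}
        if cells <= nz and not (cells & used):   # fully dense and nonoverlapping
            blocks.append((i, j))
            used |= cells
    return blocks

# A small sparse matrix given by its nonzero coordinates
nonzeros = [(0, 0), (0, 1), (1, 0), (1, 1),      # dense 2x2 block at (0, 0)
            (2, 2), (2, 3), (3, 2), (3, 3),      # dense 2x2 block at (2, 2)
            (5, 5)]                              # isolated entry
blocks = greedy_2x2_blocks(nonzeros)
```

Entries covered by the selected blocks would go into A{sub d} for dense-kernel processing, with the leftovers (here the isolated entry) staying in A{sub s}.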

  13. Analysis operator learning and its application to image reconstruction.

    PubMed

    Hawe, Simon; Kleinsteuber, Martin; Diepold, Klaus

    2013-06-01

    Exploiting a priori known structural information lies at the core of many image reconstruction methods that can be stated as inverse problems. The synthesis model, which assumes that images can be decomposed into a linear combination of very few atoms of some dictionary, is now a well-established tool for the design of image reconstruction algorithms. An interesting alternative is the analysis model, where the signal is multiplied by an analysis operator and the outcome is assumed to be sparse. This approach has only recently gained increasing interest. The quality of reconstruction methods based on an analysis model depends critically on the choice of a suitable operator. In this paper, we present an algorithm for learning an analysis operator from training images. Our method is based on l(p)-norm minimization on the set of full-rank matrices with normalized columns. We carefully introduce the employed conjugate gradient method on manifolds and explain the underlying geometry of the constraints. Moreover, we compare our approach to state-of-the-art methods for image denoising, inpainting, and single-image super-resolution. Our numerical results show competitive performance of our general approach in all presented applications compared to the specialized state-of-the-art techniques. PMID:23412611

  14. Evolutionary reconstruction of networks

    NASA Astrophysics Data System (ADS)

    Ipsen, Mads; Mikhailov, Alexander S.

    2002-10-01

    Can a graph specifying the pattern of connections of a dynamical network be reconstructed from statistical properties of a signal generated by such a system? In this model study, we present a Metropolis algorithm for reconstruction of graphs from their Laplacian spectra. Through a stochastic process of mutations and selection, evolving test networks converge to a reference graph. Applying the method to several examples of random graphs, clustered graphs, and small-world networks, we show that the proposed stochastic evolution allows exact reconstruction of relatively small networks and yields good approximations in the case of large sizes.
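A toy version of such a Metropolis reconstruction from a Laplacian spectrum can be sketched as follows. The mutation scheme (single edge flips), temperature, and step count are illustrative assumptions, and the example graph is a 5-cycle:

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the graph Laplacian L = D - A."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(L)

def spectral_cost(adj, target_spec):
    return float(np.sum((laplacian_spectrum(adj) - target_spec) ** 2))

def metropolis_reconstruct(target_spec, n, steps=3000, temp=0.05, seed=4):
    """Evolve a graph toward a target Laplacian spectrum via edge-flip
    mutations accepted with the Metropolis rule."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=float)
    cost = spectral_cost(adj, target_spec)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        adj[i, j] = adj[j, i] = 1.0 - adj[i, j]       # flip one edge (mutation)
        new_cost = spectral_cost(adj, target_spec)
        if new_cost <= cost or rng.random() < np.exp((cost - new_cost) / temp):
            cost = new_cost                           # accept
        else:
            adj[i, j] = adj[j, i] = 1.0 - adj[i, j]   # revert (selection)
    return adj, cost

# Reference graph: a 5-cycle; try to recover a graph matching its spectrum
n = 5
ref = np.zeros((n, n))
for k in range(n):
    ref[k, (k + 1) % n] = ref[(k + 1) % n, k] = 1.0
target = laplacian_spectrum(ref)
adj, cost = metropolis_reconstruct(target, n)
```

For graphs this small the stochastic evolution typically drives the spectral distance close to zero, mirroring the exact reconstruction of small networks reported above.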

  15. Visual recognition and inference using dynamic overcomplete sparse learning.

    PubMed

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter. PMID:17650062

  16. Coronagraph-integrated wavefront sensing with a sparse aperture mask

    NASA Astrophysics Data System (ADS)

    Subedi, Hari; Zimmerman, Neil T.; Kasdin, N. Jeremy; Cavanagh, Kathleen; Riggs, A. J. Eldorado

    2015-07-01

    Stellar coronagraph performance is highly sensitive to optical aberrations. In order to effectively suppress starlight for exoplanet imaging applications, low-order wavefront aberrations entering a coronagraph, such as tip-tilt, defocus, and coma, must be determined and compensated. Previous authors have established the utility of pupil-plane masks (both nonredundant/sparse-aperture and generally asymmetric aperture masks) for wavefront sensing (WFS). Here, we show how a sparse aperture mask (SAM) can be integrated with a coronagraph to measure low-order differential phase aberrations. Starlight rejected by the coronagraph's focal plane stop is collimated to a relay pupil, where the mask forms an interference fringe pattern on a subsequent detector. Our numerical Fourier propagation models show that the information encoded in the fringe intensity distortions is sufficient to accurately discriminate and estimate Zernike phase modes extending from tip-tilt up to radial degree n=5, with amplitude up to λ/20 RMS. The SAM sensor can be integrated with both Lyot and shaped pupil coronagraphs at no detriment to the science beam quality. We characterize the reconstruction accuracy and the performance under low flux/short exposure time conditions, and place it in context of other coronagraph WFS schemes.

  17. Channeled spectropolarimetry using iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Dennis J.; LaCasse, Charles F.; Craven, Julia M.

    2016-05-01

    Channeled spectropolarimeters (CSP) measure the polarization state of light as a function of wavelength. Conventional Fourier reconstruction suffers from noise, assumes the channels are band-limited, and requires uniformly spaced samples. To address these problems, we propose an iterative reconstruction algorithm. We develop a mathematical model of CSP measurements and minimize a cost function based on this model. We simulate a measured spectrum using example Stokes parameters, from which we compare conventional Fourier reconstruction and iterative reconstruction. Importantly, our iterative approach can reconstruct signals that contain more bandwidth, an advancement over Fourier reconstruction. Our results also show that iterative reconstruction mitigates noise effects, processes non-uniformly spaced samples without interpolation, and more faithfully recovers the ground truth Stokes parameters. This work offers a significant improvement to Fourier reconstruction for channeled spectropolarimetry.

  18. W-band sparse synthetic aperture for computational imaging.

    PubMed

    Venkatesh, S; Viswanathan, N; Schurig, D

    2016-04-18

    We present a sparse synthetic-aperture, active imaging system at W-band (75 - 110 GHz), which uses sub-harmonic mixer modules. The system employs mechanical scanning of the receiver module position and a fixed transmitter module. A vector network analyzer provides the back-end detection. A full-wave forward model allows accurate construction of the image transfer matrix. We solve the inverse problem to reconstruct scenes using the least-squares technique. We demonstrate far-field, diffraction-limited imaging of 2D and 3D objects, achieving a cross-range resolution of 3 mm and a depth-range resolution of 4 mm. Furthermore, we develop an information-based metric to evaluate the performance of a given image transfer matrix for noise-limited, computational imaging systems. We use this metric to find the optimal gain of the radiating element for a given range, both theoretically and experimentally in our system. PMID:27137270
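The least-squares inverse step can be sketched with a synthetic image transfer matrix. The sizes, conditioning, and noise level below are assumptions for illustration, not the paper's calibrated full-wave forward model:

```python
import numpy as np

# Forward model: measurements g = H f + noise, with H the image transfer matrix
rng = np.random.default_rng(8)
n_pix, n_meas = 25, 60
H = rng.standard_normal((n_meas, n_pix))   # assumed well-conditioned (more measurements than pixels)
f_true = np.zeros(n_pix)
f_true[[6, 12, 18]] = [1.0, 0.8, 0.6]      # a simple 5x5 scene, flattened
g = H @ f_true + 0.01 * rng.standard_normal(n_meas)

# Least-squares reconstruction of the scene from the measurements
f_hat, *_ = np.linalg.lstsq(H, g, rcond=None)
```

When H has more well-conditioned rows than unknowns, the least-squares solution recovers the scene up to the noise level; an information-based metric like the one described above would quantify how much a given H can recover.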

  19. Predicting Homogeneous Pilus Structure from Monomeric Data and Sparse Constraints.

    PubMed

    Xiao, Ke; Shu, Chuanjun; Yan, Qin; Sun, Xiao

    2015-01-01

    Type IV pili (T4P) and T2SS (Type II Secretion System) pseudopili are filaments extending beyond microbial surfaces, comprising homologous subunits called "pilins." In this paper, we present a new approach to predicting pseudo-atomic models of pili that combines ambiguous symmetric constraints with sparse distance information obtained from experiments, relying neither on electron microscopy (EM) maps nor on accurate a priori symmetry details. The approach was validated by the reconstruction of the gonococcal (GC) pilus from Neisseria gonorrhoeae, the type IVb toxin-coregulated pilus (TCP) from Vibrio cholerae, and the pseudopilus of the pullulanase T2SS (the PulG pilus) from Klebsiella oxytoca. In addition, analyses of computational errors showed that subunits should be treated cautiously, as they are slightly flexible and not strictly rigid bodies. A global sampling over a wider range was also implemented and implied that a pilus might have more than one, but fewer than many, possible intact conformations. PMID:26064954

  20. Image reconstruction for single detector rosette scanning systems based on compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Uzeler, Hande; Cakir, Serdar; Aytaç, Tayfun

    2016-02-01

    Compressive sensing (CS) is a signal processing technique that enables a signal with a sparse representation in a known basis to be reconstructed from measurements obtained below the Nyquist rate. Single-detector image reconstruction applications using CS have been shown to give promising results. In this study, we investigate the application of CS theory to single-detector infrared (IR) rosette scanning systems, which suffer from low performance compared to costly focal plane array (FPA) detectors. The single-detector pseudoimaging rosette scanning system scans the scene with a specific pattern and performs processing to estimate the target location without forming an image. In this context, this generation of scanning systems may be improved by utilizing the samples obtained by the rosette scanning pattern in conjunction with the CS framework. For this purpose, we consider surface-to-air engagement scenarios using IR images containing aerial targets and flares. The IR images have been reconstructed from samples obtained with the rosette scanning pattern and other baseline sampling strategies. It has been shown that the proposed scheme exhibits good reconstruction performance, and that imaging performance comparable to that of a large-format FPA can be achieved using a single IR detector with a rosette scanning pattern.

  1. Reconstructive compounding for IVUS palpography.

    PubMed

    Danilouchkine, Mikhail G; Mastik, Frits; van der Steen, Antonius F W

    2009-12-01

    This study proposes a novel algorithm for luminal strain reconstruction from sparse, irregularly sampled strain measurements. It is based on the normalized convolution (NC) algorithm. The novel extension comprises a multilevel scheme, which takes into account the variable sampling density of the available strain measurements during the cardiac cycle. The proposed algorithm was applied to restore luminal strain values in intravascular ultrasound (IVUS) palpography. The procedure of reconstructing and averaging the strain values acquired during one cardiac cycle forms a technique termed reconstructive compounding. The accuracy of strain reconstruction was initially tested on the luminal strain map computed from 3 in vivo IVUS pullbacks. High-quality strain restoration was observed after systematically removing up to 90% of the initial elastographic measurements. The restored distributions accurately reproduced the original strain patterns, and the error did not exceed 5%. The experimental validation of the reconstructive compounding technique was performed on 8 in vivo IVUS pullbacks. It demonstrated that the relative decrease in the number of invalid strain estimates amounts to 92.05 +/- 6.03% and 99.17 +/- 0.92% for the traditional and reconstructive strain compounding schemes, respectively. In conclusion, implementation of the reconstructive compounding scheme boosts the diagnostic value of IVUS palpography. PMID:20040400

  2. The application of a sparse, distributed memory to the detection, identification and manipulation of physical objects

    NASA Technical Reports Server (NTRS)

    Kanerva, P.

    1986-01-01

    To determine the relation of the sparse, distributed memory to other architectures, a broad review of the literature was made. The memory is called a pattern memory because it works with large patterns of features (high-dimensional vectors). A pattern is stored in a pattern memory by distributing it over a large number of storage elements and by superimposing it over other stored patterns. A pattern is retrieved by mathematical or statistical reconstruction from the distributed elements. Three pattern memories are discussed.
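The store-by-superposition and retrieve-by-reconstruction mechanism described above can be sketched in a toy implementation (the dimensions, activation radius, and counter scheme below are illustrative assumptions in the spirit of Kanerva's design, not his exact architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits, n_locs, radius = 256, 2000, 112   # address width, hard locations, activation radius

addresses = rng.integers(0, 2, (n_locs, n_bits))   # fixed random hard locations
counters = np.zeros((n_locs, n_bits), dtype=int)   # distributed storage elements

def activated(addr):
    # hard locations within Hamming distance `radius` of the address
    dist = np.sum(addresses != addr, axis=1)
    return dist <= radius

def write(addr, data):
    # superimpose the bipolar (+1/-1) pattern onto every activated location
    act = activated(addr)
    counters[act] += np.where(data == 1, 1, -1)

def read(addr):
    # statistically reconstruct the pattern by summing activated counters
    act = activated(addr)
    return (counters[act].sum(axis=0) >= 0).astype(int)

pattern = rng.integers(0, 2, n_bits)
write(pattern, pattern)            # autoassociative store
noisy = pattern.copy()
noisy[:10] ^= 1                    # corrupt 10 address bits
recalled = read(noisy)
print(np.sum(recalled != pattern)) # bit errors after recall from a noisy address
```

Even though the pattern is smeared across many counters and addressed with a corrupted cue, the summed reconstruction recovers it, which is the distributed-storage property the abstract refers to.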

  3. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  4. Sinogram denoising via simultaneous sparse representation in learned dictionaries.

    PubMed

    Karimi, Davood; Ward, Rabab K

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. PMID:27055224

  5. Reduction and Mastopexy of the Reconstructed Breast: Special Considerations in Free Flap Reconstruction

    PubMed Central

    Zafar, Sarosh N.; Ellsworth, Warren A.

    2015-01-01

    Autologous breast reconstruction is capable of creating a breast that closely resembles a natural breast. Reduction and mastopexy in this type of reconstruction yields several challenges to the reconstructive surgeon. Revision surgery is common to achieve symmetry; however, reduction, mastopexy, and other revision techniques are sparse in the current literature. Often, these techniques are passed from mentor to student during plastic surgery training or are learned with experience in managing one's own patients. Reviewing anatomical principles unique to this subset of patients is essential. We must also consider factors unique to this group including the effects of delayed reconstruction, radiation, skin paddle size, and flap volume. In this article, the authors describe some of the common principles used by experienced reconstructive surgeons to perform reduction and mastopexy in autologous breast reconstruction to achieve a natural, aesthetically pleasing breast reconstruction. In addition, they have included several case examples to further illustrate these principles. PMID:26528087

  6. Detect signals of interdecadal climate variations from an enhanced suite of reconstructed precipitation products since 1850 using the historical station data from Global Historical Climatology Network and the dynamical patterns derived from Global Precipitation Climatology Project

    NASA Astrophysics Data System (ADS)

    Shen, S. S.

    2015-12-01

    This presentation describes the detection of interdecadal climate signals in newly reconstructed precipitation data from 1850-present. Examples are the precipitation signatures of the East Asian Monsoon (EAM), the Pacific Decadal Oscillation (PDO), and the Atlantic Multidecadal Oscillation (AMO). The new reconstruction dataset is an enhanced edition of a suite of global precipitation products reconstructed by Spectral Optimal Gridding of Precipitation Version 1.0 (SOGP 1.0). The maximum temporal coverage is 1850-present and the spatial coverage is quasi-global (75S-75N). This enhanced version has three different temporal resolutions (5-day, monthly, and annual) and two different spatial resolutions (2.5 deg and 5.0 deg). It also has a friendly Graphical User Interface (GUI). SOGP uses a multivariate regression method based on an empirical orthogonal function (EOF) expansion. The Global Precipitation Climatology Project (GPCP) precipitation data from 1981-2010 are used to calculate the EOFs. The Global Historical Climatology Network (GHCN) gridded data are used to calculate the regression coefficients for the reconstructions. The sampling errors of the reconstruction are analyzed according to the number of EOF modes used in the reconstruction. Our reconstructed 1900-2011 time series of the global average annual precipitation shows a 0.024 (mm/day)/100a trend, which is very close to the trend derived from the mean of 25 models of the CMIP5 (Coupled Model Intercomparison Project Phase 5). Our reconstruction has been validated against GPCP data after 1979. Our reconstruction successfully displays the 1877 El Nino (see the attached figure), which is considered a validation before 1900. Our precipitation products are publicly available online, including digital data, precipitation animations, computer codes, readme files, and the user manual. This work is a joint effort of San Diego State University (Sam Shen, Gregori Clarke, Christian Junjinger, Nancy Tafolla, Barbara Sperberg, and

  7. Sparse subspace clustering: algorithm, theory, and applications.

    PubMed

    Elhamifar, Ehsan; Vidal, René

    2013-11-01

    Many real-world problems deal with collections of high-dimensional data, such as images, videos, text and web documents, and DNA microarray data. Often, such high-dimensional data lie close to low-dimensional structures corresponding to the several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering. PMID:24051734
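The key idea, representing each point as a sparse combination of the *other* points, can be sketched minimally (an illustrative ISTA lasso solver on two random subspaces; the paper's full algorithm adds spectral clustering and robustness terms):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_per = 6, 15
U1 = np.linalg.qr(rng.standard_normal((d, 2)))[0]   # basis of subspace 1
U2 = np.linalg.qr(rng.standard_normal((d, 2)))[0]   # basis of subspace 2
X = np.hstack([U1 @ rng.standard_normal((2, n_per)),
               U2 @ rng.standard_normal((2, n_per))])   # columns are points

def lasso_ista(A, y, lam=0.05, iters=800):
    # l1-regularized least squares via iterative soft thresholding
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

n = X.shape[1]
C = np.zeros((n, n))
for j in range(n):
    others = np.delete(np.arange(n), j)
    C[others, j] = lasso_ista(X[:, others], X[:, j])    # no self-representation

W = np.abs(C) + np.abs(C).T                             # symmetric affinity
within = W[:n_per, :n_per].sum() + W[n_per:, n_per:].sum()
across = 2 * W[:n_per, n_per:].sum()
print(within, across)   # within-subspace affinity dominates
```

The sparse coefficients concentrate on points from the same subspace, so the resulting affinity matrix is (approximately) block diagonal, which is what makes the subsequent spectral clustering step work.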

  8. Task-based optimization of image reconstruction in breast CT

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection-view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
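The Hotelling observer figure of merit used above has a compact generic form, SNR^2 = ds^T K^-1 ds, where ds is the mean image difference between the two hypotheses and K the image covariance. A minimal numerical sketch (toy image and covariance, not the paper's breast-CT models):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 16                                   # tiny "image" of 16 pixels
ds = np.zeros(n)
ds[7] = 1.0                              # mean signal difference between hypotheses
B = rng.standard_normal((n, n))
K = B @ B.T / n + 0.1 * np.eye(n)        # a valid (symmetric positive definite) covariance

w = np.linalg.solve(K, ds)               # Hotelling template K^-1 ds
snr2 = ds @ w                            # HO SNR^2 = ds^T K^-1 ds
print(np.sqrt(snr2))                     # HO SNR for this task
```

Because K is positive definite, the SNR is always real and positive; in the paper this scalar is what is compared across view counts and Hanning-window widths.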

  9. Dictionary construction in sparse methods for image restoration

    SciTech Connect

    Wohlberg, Brendt

    2010-01-01

    Sparsity-based methods have achieved very good performance in a wide variety of image restoration problems, including denoising, inpainting, super-resolution, and source separation. These methods are based on the assumption that the image to be reconstructed may be represented as a superposition of a few known components, and the appropriate linear combination of components is estimated by solving an optimization problem such as Basis Pursuit De-Noising (BPDN). Considering that K-SVD constructs a dictionary optimized for mean performance over a training set, it is not too surprising that better performance can be achieved by selecting a custom dictionary for each individual block to be reconstructed. The nearest-neighbor dictionary construction can be understood geometrically as a method for estimating the local projection onto the manifold of image blocks, whereas the K-SVD dictionary makes more sense within a source-coding framework (it is presented as a generalization of the k-means algorithm for constructing a VQ codebook) and is therefore, it could be argued, less appropriate in principle for reconstruction problems. One can, of course, motivate the use of K-SVD in reconstruction applications on practical grounds, as it avoids the computational expense of constructing a different dictionary for each block to be denoised. Since the performance of the nearest-neighbor dictionary decreases once the dictionary becomes sufficiently large, this method is also superior to the approach of using the entire training set as a dictionary (which can likewise be understood within the image-block manifold model). In practical terms, the trade-off is between the computational cost of a nearest-neighbor search (which can be performed very efficiently) and the increased cost of the sparse optimization.

  10. Finding nonoverlapping substructures of a sparse matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2004-08-09

    Many applications of scientific computing rely on computations on sparse matrices, so the design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the high compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far from peak performance on a modern processor. Alternative data structures have been proposed, which split the original matrix A into A{sub d} and A{sub s}, so that A{sub d} contains all dense blocks of a specified size in the matrix, and A{sub s} contains the remaining entries. This enables the use of dense matrix kernels on the entries of A{sub d}, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping rectangular dense blocks in a sparse matrix, which has not been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm for 2x2 blocks that runs in time linear in the number of nonzeros in the matrix. We discuss alternatives to rectangular blocks, such as diagonal blocks and cross blocks, and present complexity analysis and approximation algorithms.
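For intuition, claiming fully dense nonoverlapping 2x2 blocks can be sketched with a plain greedy scan (this simple greedy is only illustrative; it is not the paper's 2/3-approximation algorithm):

```python
# Greedy scan over a sparse 0/1 occupancy pattern: claim a 2x2 block only
# when all four cells are nonzero and none of them is already claimed,
# so the chosen blocks never overlap.
def greedy_2x2_blocks(nz, n_rows, n_cols):
    """nz: set of (i, j) positions of nonzero entries."""
    used = set()
    blocks = []
    for i in range(n_rows - 1):
        for j in range(n_cols - 1):
            cells = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
            if all(c in nz for c in cells) and not any(c in used for c in cells):
                blocks.append((i, j))      # record top-left corner
                used.update(cells)
    return blocks

nz = {(0, 0), (0, 1), (1, 0), (1, 1),      # one dense 2x2 block at (0, 0)
      (2, 2), (2, 3), (3, 2), (3, 3),      # another at (2, 2)
      (0, 3), (4, 0)}                      # stray nonzeros
print(greedy_2x2_blocks(nz, 5, 5))         # -> [(0, 0), (2, 2)]
```

A greedy choice like this can be suboptimal in general, which is exactly why the paper develops an approximation algorithm with a provable 2/3 guarantee.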

  11. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which lead to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress has been made in applying sparse representation theory to the feature extraction of fault information, the theory also suffers inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved by the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural-clustering sparse dictionary learning technique, which utilizes k-nearest-neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further ensure sufficient sparsity of the feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental rig of aircraft engine bearings.

  12. Two-Dimensional Pattern-Coupled Sparse Bayesian Learning via Generalized Approximate Message Passing

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Zhang, Lizao; Li, Hongbin

    2016-06-01

    We consider the problem of recovering two-dimensional (2-D) block-sparse signals with unknown cluster patterns. Two-dimensional block-sparse patterns arise naturally in many practical applications such as foreground detection and inverse synthetic aperture radar imaging. To exploit the block-sparse structure, we introduce a 2-D pattern-coupled hierarchical Gaussian prior model to characterize the statistical pattern dependencies among neighboring coefficients. Unlike the conventional hierarchical Gaussian prior model, where each coefficient is associated independently with a unique hyperparameter, the pattern-coupled prior for each coefficient involves not only its own hyperparameter but also its immediate neighboring hyperparameters. Thus the sparsity patterns of neighboring coefficients are related to each other, and the hierarchical model has the potential to encourage 2-D structured-sparse solutions. An expectation-maximization (EM) strategy is employed to obtain the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. In addition, the generalized approximate message passing (GAMP) algorithm is embedded into the EM framework to efficiently compute an approximation of the posterior distribution of hidden variables, which results in a significant reduction in computational complexity. Numerical results are provided to illustrate the effectiveness of the proposed algorithm.

  13. Removing sparse noise from hyperspectral images with sparse and low-rank penalties

    NASA Astrophysics Data System (ADS)

    Tariyal, Snigdha; Aggarwal, Hemant Kumar; Majumdar, Angshul

    2016-03-01

    In diffraction grating, there are at times defective pixels on the focal plane array; this results in horizontal lines of corrupted pixels in some channels. Since only a few such pixels exist, the corruption/noise is sparse. Studies on sparse noise removal from hyperspectral images are scarce. To remove such sparse noise, a prior work exploited the interband spectral correlation along with intraband spatial redundancy to yield a sparse representation in transform domains. We improve upon that technique. The intraband spatial redundancy is modeled as a sparse set of transform coefficients, and the interband spectral correlation is modeled as a rank-deficient matrix. The resulting optimization problem is solved using the split Bregman technique. Comparative experimental results show that our proposed approach is better than the previous one.
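The sparse-plus-low-rank decomposition idea can be sketched with generic alternating shrinkage (an illustrative solver with made-up data and thresholds; the paper solves its specific model with the split Bregman technique):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 40, 30, 2
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low-rank "clean" data
S_true = np.zeros((m, n))
mask = rng.random((m, n)) < 0.05
S_true[mask] = 10 * rng.standard_normal(mask.sum())                 # sparse corruption
M = L_true + S_true                                                 # observed data

def svd_shrink(X, tau):
    # singular value soft thresholding: the prox of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

L = np.zeros_like(M)
S = np.zeros_like(M)
for _ in range(100):
    L = svd_shrink(M - S, tau=1.0)                       # low-rank update
    R = M - L
    S = np.sign(R) * np.maximum(np.abs(R) - 1.0, 0.0)    # sparse update (soft threshold)

print(np.linalg.norm(L - L_true) / np.linalg.norm(L_true))  # relative error of the low-rank part
```

The low-rank factor plays the role of the interband spectral correlation and the sparse factor the stripe-like noise; alternating the two shrinkage operators separates them far better than treating M as clean data.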

  14. Sparse sampling: theory, methods and an application in neuroscience.

    PubMed

    Oñativia, Jon; Dragotti, Pier Luigi

    2015-02-01

    The current methods used to convert analogue signals into discrete-time sequences have been deeply influenced by the classical Shannon-Whittaker-Kotelnikov sampling theorem. This approach restricts the class of signals that can be sampled and perfectly reconstructed to bandlimited signals. During the last few years, a new framework has emerged that overcomes these limitations and extends sampling theory to a broader class of signals named signals with finite rate of innovation (FRI). Instead of characterising a signal by its frequency content, FRI theory describes it in terms of the innovation parameters per unit of time. Bandlimited signals are thus a subset of this more general definition. In this paper, we provide an overview of this new framework and present the tools required to apply this theory in neuroscience. Specifically, we show how to monitor and infer the spiking activity of individual neurons from two-photon imaging of calcium signals. In this scenario, the problem is reduced to reconstructing a stream of decaying exponentials. PMID:25452206

  15. Guaranteed Blind Sparse Spikes Deconvolution via Lifting and Convex Optimization

    NASA Astrophysics Data System (ADS)

    Chi, Yuejie

    2016-06-01

    Neural recordings, returns from radars and sonars, and images in astronomy and single-molecule microscopy can be modeled as a linear superposition of a small number of scaled and delayed copies of a band-limited or diffraction-limited point spread function, which is either determined by nature or designed by the user; in other words, we observe the convolution between a point spread function and a sparse spike signal with unknown amplitudes and delays. While it is of great interest to accurately resolve the spike signal from as few samples as possible, the problem is severely ill-posed when the point spread function is not known a priori. This paper proposes a convex optimization framework to simultaneously estimate the point spread function and the spike signal, by mildly constraining the point spread function to lie in a known low-dimensional subspace. By applying the lifting trick, we obtain an underdetermined linear system over an ensemble of signals with joint spectral sparsity, to which atomic norm minimization is applied. Under mild randomness assumptions on the low-dimensional subspace as well as a separation condition on the spike signal, we prove that the proposed algorithm, dubbed AtomicLift, is guaranteed to recover the spike signal up to a scaling factor as soon as the number of samples is large enough. The extension of AtomicLift to handle noisy measurements is also discussed. Numerical examples are provided to validate the effectiveness of the proposed approaches.

  16. Sparse Recovery Analysis of High-Resolution Climate Data

    NASA Astrophysics Data System (ADS)

    Archibald, R.

    2013-12-01

    The field of compressed sensing is vast and currently very active, with new results, methods, and algorithms appearing almost daily. The first notions of compressed sensing began with Prony's method, which was designed by the French mathematician Gaspard Riche de Prony to extract signal information from a limited number of measurements. Since then, sparsity has been used empirically in a variety of applications, including geology and geophysics, spectroscopy, signal processing, radio astronomy, and medical ultrasound. High-resolution climate studies performed on large scale high performance computing have been producing large amounts of data that can benefit from unique mathematical methods for analysis. This work demonstrates how sparse recovery and L1 regularization can be used effectively on large datasets from high-resolution climate studies.
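Prony's method mentioned above can be demonstrated in its textbook form: the poles of a sum of decaying exponentials are recovered from a short run of uniform samples via linear prediction (a generic illustration, not the climate-data pipeline):

```python
import numpy as np

c = np.array([1.0, 0.6])                   # amplitudes
z = np.array([0.9, 0.7])                   # true poles (decay rates)
t = np.arange(12)
x = (c[:, None] * z[:, None] ** t).sum(axis=0)   # samples x[t] = sum_k c_k z_k^t

# Any such 2-term signal satisfies the linear recurrence
#   x[t] = a1 * x[t-1] + a2 * x[t-2],
# whose characteristic polynomial z^2 - a1*z - a2 has the poles as roots.
A = np.column_stack([x[1:-1], x[:-2]])     # [x[t-1], x[t-2]] rows
b = x[2:]
a = np.linalg.lstsq(A, b, rcond=None)[0]   # fit the prediction coefficients
poles = np.roots([1.0, -a[0], -a[1]])      # roots of z^2 - a1*z - a2
print(np.sort(poles))                      # ~ [0.7, 0.9]
```

Only 12 samples are used, far fewer than a Fourier analysis of such slowly decaying signals would require; this parametric flavor of recovery is the historical root of the sparse-recovery ideas the abstract describes.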

  17. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but it is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal, in the sense that convergence is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically exhibit piecewise smooth variation. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.

  18. Dynamic Stochastic Superresolution of sparsely observed turbulent systems

    SciTech Connect

    Branicki, M.; Majda, A.J.

    2013-05-15

    of the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale in a fashion elucidated here. The DSS technique exploiting a simple Gaussian closure of the nonlinear stochastic forecast model emerges as the most suitable trade-off between the superresolution skill and computational complexity associated with estimating the cross-correlations between the aliasing modes of the sparsely observed turbulent signal. Such techniques offer a promising and efficient approach to constraining unresolved turbulent fluxes through stochastic superparameterization and a subsequent improvement in coarse-grained filtering and prediction of the next generation atmosphere–ocean system (AOS) models.

  19. Native ultrametricity of sparse random ensembles

    NASA Astrophysics Data System (ADS)

    Avetisov, V.; Krapivsky, P. L.; Nechaev, S.

    2016-01-01

    We investigate the eigenvalue density in ensembles of large sparse Bernoulli random matrices. Analyzing in detail the spectral density of ensembles of linear subgraphs, we discuss its ultrametric nature and show that near the spectrum boundary, the tails of the spectral density exhibit a Lifshitz singularity typical of Anderson localization. We draw attention to an intriguing connection between the spectral density and the Dedekind η-function. We conjecture that ultrametricity emerges in rare-event statistics and is inherent in generic complex sparse systems.
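The spectral density of such ensembles is easy to explore numerically (an illustrative experiment, not the paper's analysis): at low density the spectrum is dominated by small disconnected subgraphs, producing sharp atoms such as eigenvalue 0 from isolated vertices and +/-1 from isolated edges.

```python
import numpy as np

rng = np.random.default_rng(4)
N, p = 400, 1.0 / 400        # average degree ~ 1: the very sparse regime
A = np.triu(rng.random((N, N)) < p, k=1).astype(float)
A = A + A.T                  # symmetric sparse Bernoulli (adjacency) matrix
eig = np.linalg.eigvalsh(A)

atom0 = np.mean(np.abs(eig) < 1e-8)                 # weight of the eigenvalue-0 atom
atom1 = np.mean(np.abs(np.abs(eig) - 1.0) < 1e-8)   # weight of the +/-1 atoms
print(atom0, atom1)          # both atoms carry a visible fraction of the spectrum
```

These atoms come from linear subgraphs (isolated vertices, edges, short chains), the building blocks whose spectra the abstract analyzes.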

  20. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.

  1. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  2. Contrast adaptive total p-norm variation minimization approach to CT reconstruction for artifact reduction in reduced-view brain perfusion CT

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Won; Kim, Jong-Hyo

    2011-03-01

    Perfusion CT (PCT) examinations are increasingly used for the diagnosis of acute brain diseases such as hemorrhage and infarction, because the functional map images they produce, such as regional cerebral blood flow (rCBF), regional cerebral blood volume (rCBV), and mean transit time (MTT), may provide critical information in the emergency work-up of patient care. However, a typical PCT scans the same slices several tens of times after injection of contrast agent, which leads to a much increased radiation dose and is an inevitable and growing concern for radiation-induced cancer risk. Reducing the number of projection views in combination with a total variation (TV) minimization reconstruction technique is regarded as an option for radiation reduction. However, reconstruction artifacts due to an insufficient number of X-ray projections become problematic, especially when high contrast enhancement signals are present or patient motion occurs. In this study, we present a novel reconstruction technique using contrast-adaptive TpV minimization that can reduce reconstruction artifacts effectively by using different p-norms for high-contrast and low-contrast objects. In the proposed method, high-contrast components are first reconstructed using thresholded projection data and a low p-norm total variation to reflect sparseness in both the projection and reconstruction spaces. Next, the projection data are modified to contain only low-contrast objects by creating projection data of the reconstructed high-contrast components and subtracting them from the original projection data. Then, the low-contrast projection data are reconstructed using a relatively high p-norm TV minimization technique and combined with the reconstructed high-contrast component images to produce the final reconstructed images.
    The proposed algorithm was applied to a numerical phantom and a clinical data set of a brain PCT exam, and the resulting images were compared with those using filtered back projection (FBP) and conventional TV
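
The core idea above, penalizing the p-th power of gradient magnitudes with a smaller p where gradients should be sparser, can be sketched as follows. This is an illustrative gradient-descent sketch with invented parameters (`lam`, `step`, the smoothing constant `eps`), not the authors' reconstruction pipeline; it denoises an image directly rather than reconstructing from projection data:

```python
import numpy as np

def tpv(u, p, eps=1e-2):
    """Total p-variation: sum over pixels of (|grad u|^2 + eps)^(p/2)."""
    gx = np.roll(u, -1, axis=1) - u
    gy = np.roll(u, -1, axis=0) - u
    return np.sum((gx**2 + gy**2 + eps) ** (p / 2))

def tpv_denoise(f, p, lam=0.1, step=0.1, iters=200, eps=1e-2):
    """Minimize 0.5*||u - f||^2 + lam * TpV_p(u) by smoothed gradient descent."""
    u = f.copy()
    for _ in range(iters):
        gx = np.roll(u, -1, axis=1) - u
        gy = np.roll(u, -1, axis=0) - u
        w = (gx**2 + gy**2 + eps) ** (p / 2 - 1)   # edge weights from p-norm
        px, py = w * gx, w * gy
        # adjoint of the forward-difference gradient (negative divergence)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * p * div)
    return u

rng = np.random.default_rng(0)
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0   # piecewise-constant phantom
noisy = truth + 0.1 * rng.standard_normal(truth.shape)
smooth = tpv_denoise(noisy, p=0.8)
```

Setting p = 1 recovers classic TV; p < 1 penalizes small gradients relatively more, promoting the sparser gradient fields appropriate for high-contrast components.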

  3. Infrared image recognition based on structure sparse and atomic sparse parallel

    NASA Astrophysics Data System (ADS)

    Wu, Yalu; Li, Ruilong; Xu, Yi; Wang, Liping

    2015-12-01

    Exploiting the redundancy of an overcomplete dictionary can effectively capture the structural features of an image, achieving an effective representation of the image. However, the commonly used atomic sparse representation disregards the structure of the dictionary and includes unrelated non-zero terms in the computation; although structured sparsity considers the structural features of the dictionary, the majority of coefficients within the blocks may be non-zero, which can affect identification efficiency. To address the disadvantages of these two sparse representations, a weighted parallel combination of atomic sparsity and structured sparsity is proposed, and the recognition efficiency is improved by adaptive computation of the optimal weights. The atomic sparse representation and the structured sparse representation are computed respectively, and the optimal weights are calculated by an adaptive method, as follows: training on a small portion of the identification samples, the recognition rate is calculated while increasing the weights by a fixed step size under the constraint between the weights. With the recognition rate as the Z axis and the two weight values as the X and Y axes, the resulting points can be plotted in a three-dimensional coordinate system; by locating the highest recognition rate, the optimal weights can be obtained. Simulation experiments show that the optimal weights obtained by the adaptive method give a better recognition rate; the weights are computed adaptively from only a few samples, are suitable for parallel recognition computation, and can effectively improve the recognition rate for infrared images.
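
The weight-scanning procedure described above can be illustrated on synthetic data. Everything here is hypothetical (the per-class score matrices, noise levels, and step size are invented for the sketch); a real system would fuse scores produced by the atomic and structured sparse models:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 5                      # samples, classes
labels = rng.integers(0, k, n)

def noisy_scores(noise):
    """Hypothetical per-class match scores: informative but noisy."""
    s = rng.normal(0, noise, (n, k))
    s[np.arange(n), labels] += 1.0  # true class gets a score boost
    return s

atomic_scores = noisy_scores(1.2)   # stand-in for atomic sparse representation
struct_scores = noisy_scores(0.9)   # stand-in for structured sparse representation

def recognition_rate(w):
    """Fuse the two score sets with weights (w, 1-w) and score accuracy."""
    fused = w * atomic_scores + (1 - w) * struct_scores
    return np.mean(fused.argmax(axis=1) == labels)

# scan the weight by a fixed step size (the two weights are constrained to sum to 1)
ws = np.arange(0.0, 1.01, 0.05)
rates = [recognition_rate(w) for w in ws]
w_opt = ws[int(np.argmax(rates))]
```

The grid of (w, 1-w, recognition rate) points corresponds to the three-dimensional plot described in the abstract; the optimal weights are read off at the peak.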

  4. Equilibrium reconstruction in the START tokamak

    NASA Astrophysics Data System (ADS)

    Appel, L. C.; Bevir, M. K.; Walsh, M. J.

    2001-02-01

    The computation of magnetic equilibria in the START spherical tokamak is more difficult than in more conventional large aspect ratio tokamaks. This difficulty arises partly from the use of induction compression to generate high-current plasma, as this precludes positioning magnetic diagnostics close to the outboard side of the plasma. In addition, the effect of a conducting wall with a high, but finite, conductivity must be included. A method is presented for obtaining plasma equilibrium reconstructions based on the EFIT code. New constraints are used to relate isoflux surface locations deduced from radial profile measurements of electron temperature. A model of flux diffusion through the vessel wall is developed. It is shown that neglecting flux diffusion in the vessel wall can lead to a significant underestimate in the calculation of the plasma βt. Using a relatively sparse set of magnetic signals, βt can be obtained to within a fractional error of +/-10%. Using constraints to relate isoflux surface locations, the principle involved in determining the internal q profile is demonstrated.

  5. Social biases determine spatiotemporal sparseness of ciliate mating heuristics.

    PubMed

    Clark, Kevin B

    2012-01-01

    Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate's initial subjective bias, responsiveness, or preparedness, as defined by Stevens' Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The present
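
Metric entropy, used above to quantify the spatiotemporal sparseness of behavioral heuristics, can be estimated for a symbol sequence from the per-symbol Shannon entropy of length-m blocks. A minimal sketch (the sequences are invented examples, not ciliate data):

```python
import math
from collections import Counter

def block_entropy(seq, m):
    """Shannon entropy (bits) of the empirical distribution of length-m blocks."""
    blocks = [tuple(seq[i:i + m]) for i in range(len(seq) - m + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def metric_entropy(seq, m):
    """Entropy-rate estimate: block entropy normalized per symbol."""
    return block_entropy(seq, m) / m

# a highly ordered (sparse) action sequence vs. a more varied one
ordered = [0, 1] * 50
varied  = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1] * 10
```

Lower metric entropy indicates a more ordered, sparser pattern of action; the ordered sequence above scores well below the varied one.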

  6. Improving synthesis and analysis prior blind compressed sensing with low-rank constraints for dynamic MRI reconstruction.

    PubMed

    Majumdar, Angshul

    2015-01-01

    In blind compressed sensing (BCS), both the sparsifying dictionary and the sparse coefficients are estimated simultaneously during signal recovery. A recent study adopted the BCS framework for recovering dynamic MRI sequences from under-sampled K-space measurements; the results were promising. Previous works in dynamic MRI reconstruction showed that recovery accuracy can be improved by incorporating low-rank penalties into the standard compressed sensing (CS) optimization framework. Our work is motivated by these studies, and we improve upon the basic BCS framework by incorporating low-rank penalties into the optimization problem. The resulting optimization problem has not been solved before; hence we derive a Split Bregman type technique to solve it. Experiments were carried out on real dynamic contrast enhanced MRI sequences. Results show that, with our proposed improvement, the reconstruction accuracy is better than that of BCS and other state-of-the-art dynamic MRI recovery algorithms. PMID:25179137
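
Low-rank penalties of the kind described above are typically handled inside splitting schemes (such as Split Bregman) through a singular value thresholding (SVT) step, the proximal operator of the nuclear norm. A minimal sketch on a synthetic Casorati-style matrix (frames as columns); the data and the threshold `tau` are invented for illustration:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * ||X||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0)[:, None] * Vt)

rng = np.random.default_rng(2)
# rank-3 "dynamic sequence": 20 frames of 64 voxels, stacked as columns
low_rank = rng.normal(size=(64, 3)) @ rng.normal(size=(3, 20))
noisy = low_rank + 0.1 * rng.normal(size=low_rank.shape)
den = svt(noisy, tau=2.0)   # shrinks noise singular values to exactly zero
```

In a full Split Bregman iteration this step would alternate with the sparse-coding and data-consistency updates; here it is shown in isolation to make the low-rank mechanism concrete.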

  7. Facial expression recognition with facial parts based sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. Understanding facial expressions is a basic requirement in the development of next-generation human-computer interaction systems. Research shows that the intrinsic facial features always hide in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with a linear combination equality constraint. Experimental results show that sparse representation is efficient for facial expression recognition, and the sparse representation classifier obtains much higher recognition accuracy than the other compared methods.
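
A sparse representation classifier of the kind described above can be sketched in a few lines: solve an l1-regularized least-squares problem over a dictionary whose columns are training samples, then assign the class whose coefficients best reconstruct the test sample. This sketch uses ISTA on a toy two-class dictionary; all data and parameters are invented, and the paper's equality constraint is relaxed here to a penalized form:

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Solve min 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

def src_classify(A, class_ids, y, lam=0.1):
    """Sparse representation classifier: pick the class with smallest residual."""
    x = ista(A, y, lam)
    best, best_r = None, np.inf
    for c in np.unique(class_ids):
        xc = np.where(class_ids == c, x, 0.0)   # keep only class-c coefficients
        r = np.linalg.norm(y - A @ xc)
        if r < best_r:
            best, best_r = c, r
    return best

# toy dictionary: columns are training samples, five per class, two classes
rng = np.random.default_rng(3)
protos = rng.normal(size=(30, 2))               # one prototype per class
A = np.column_stack([protos[:, [c]] + 0.05 * rng.normal(size=(30, 5))
                     for c in (0, 1)])
A /= np.linalg.norm(A, axis=0)                  # unit-norm atoms
class_ids = np.array([0] * 5 + [1] * 5)
y = protos[:, 0] + 0.05 * rng.normal(size=30)   # a noisy class-0 test sample
```

The class-wise residual rule is what makes the sparse code discriminative: a test sample is explained almost entirely by atoms from its own class.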

  8. Second SIAM conference on sparse matrices: Abstracts. Final technical report

    SciTech Connect

    1996-12-31

    This report contains abstracts on the following topics: invited and long presentations (IP1 & LP1); sparse matrix reordering & graph theory I; sparse matrix tools & environments I; eigenvalue computations I; iterative methods & acceleration techniques I; applications I; parallel algor