Science.gov

Sample records for sparse signal reconstruction

  1. Robust Methods for Sensing and Reconstructing Sparse Signals

    ERIC Educational Resources Information Center

    Carrillo, Rafael E.

    2012-01-01

    Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…

  2. Generation of Rayleigh-wave dispersion images from multichannel seismic data using sparse signal reconstruction

    NASA Astrophysics Data System (ADS)

    Mun, Songchol; Bao, Yuequan; Li, Hui

    2015-11-01

    The accurate estimation of dispersion curves has been a key issue for ensuring high quality in geophysical surface wave exploration. Many studies have been carried out on the generation of a high-resolution dispersion image from array measurements. In this study, sparse signal representation and reconstruction techniques are employed to obtain a high-resolution Rayleigh-wave dispersion image from seismic wave data. First, a sparse representation of the seismic wave data is introduced, in which the signal is assumed to be sparse in terms of wave speed. Then, the sparse signal is reconstructed by optimization using l1-norm regularization, which gives the signal amplitude spectrum as a function of wave speed. A dispersion image in the f-v domain is generated by arranging the sparse spectra for all frequency slices in the frequency range. Finally, to show the efficiency of the proposed approach, the Surfbar-2 field test data, acquired by B. Luke and colleagues at the University of Nevada Las Vegas, are analysed. By comparing the real-field dispersion image with the results from other methods, the high mode-resolving ability of the proposed approach is demonstrated, particularly for a case with strongly coherent modes.
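
    To make the per-frequency-slice recovery step concrete, the sketch below solves the l1-regularized least-squares problem for a single frequency slice with a plane-wave dictionary over a wave-speed grid, using plain ISTA (iterative soft thresholding). The frequency, receiver offsets, speed grid, and regularization weight are hypothetical stand-ins, not values from the paper.

    ```python
    import numpy as np

    def ista_l1(Phi, d, lam, n_iter=300):
        """Recover a sparse spectrum s from d ~ Phi @ s via l1-regularized
        least squares, solved with ISTA (works for complex data)."""
        L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
        s = np.zeros(Phi.shape[1], dtype=complex)
        for _ in range(n_iter):
            g = Phi.conj().T @ (Phi @ s - d)         # gradient of the data-fit term
            z = s - g / L
            mag = np.maximum(np.abs(z) - lam / L, 0.0)
            s = mag * np.exp(1j * np.angle(z))       # complex soft threshold
        return s

    # Hypothetical setup: one frequency slice f, receiver offsets x, wave-speed grid v.
    f = 20.0                                         # Hz (assumed)
    x = np.linspace(0.0, 47.0, 48)                   # receiver offsets in m (assumed)
    v_grid = np.linspace(100.0, 1000.0, 400)         # candidate wave speeds in m/s
    Phi = np.exp(-2j * np.pi * f * np.outer(x, 1.0 / v_grid))  # plane-wave dictionary
    true_s = np.zeros(v_grid.size, dtype=complex)
    true_s[120] = 1.0                                # one dominant mode
    d = Phi @ true_s + 0.01 * (np.random.randn(48) + 1j * np.random.randn(48))
    spectrum = np.abs(ista_l1(Phi, d, lam=0.05))     # one column of the f-v image
    ```

    Stacking such spectra over all frequency slices yields the f-v dispersion image described above.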

  3. Sparse reconstruction of blade tip-timing signals for multi-mode blade vibration monitoring

    NASA Astrophysics Data System (ADS)

    Lin, Jun; Hu, Zheng; Chen, Zhong-Sheng; Yang, Yong-Min; Xu, Hai-Long

    2016-12-01

    Severe blade vibrations may reduce the useful life of high-speed blades. Nowadays, non-contact measurement using blade tip-timing (BTT) technology is becoming promising in blade vibration monitoring. However, blade tip-timing signals are typically under-sampled. How to extract characteristic features of unknown multi-mode blade vibrations by analyzing these under-sampled signals becomes a big challenge. In this paper, a novel BTT analysis method for reconstructing unknown multi-mode blade vibration signals is proposed. The method consists of two key steps. First, a sparse representation (SR) mathematical model for sparse blade tip-timing signals is built. Second, a multi-mode blade vibration reconstruction algorithm is proposed to solve this SR problem. Experiments are carried out to validate the feasibility of the proposed method. The main advantage of this method is its ability to reconstruct unknown multi-mode blade vibration signals with high accuracy. The minimal requirements on the number of probes are also presented to provide guidelines for BTT system design.

  4. Atomic library optimization for pulse ultrasonic sparse signal decomposition and reconstruction

    NASA Astrophysics Data System (ADS)

    Song, Shoupeng; Li, Yingxue; Dogandžić, Aleksandar

    2016-02-01

    Compressive sampling of pulse ultrasonic NDE signals could bring significant savings in the data acquisition process. Sparse representation of these signals using an atomic library is key to their interpretation and reconstruction from compressive samples. However, the obstacles to practical applicability of such representations are the large size of the atomic library and the computational complexity of the sparse decomposition and reconstruction. To help solve these problems, we develop a method for optimizing the ranges of parameters of the traditional Gabor-atom library to match a real pulse ultrasonic signal in terms of correlation. As a result of atomic-library optimization, the number of atoms is greatly reduced. Numerical simulations compare the proposed approach with the traditional method. Simulation results show that both the time efficiency and the signal reconstruction energy error of the proposed approach are superior to those of the traditional one, even with a small-scale atomic library. The performance of the proposed method is also explored under different noise levels. Finally, we apply the proposed method to real pipeline ultrasonic testing data, and the results indicate that our reduced atomic library outperforms the traditional library.
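
    A minimal sketch of the library-reduction idea, assuming a standard real Gabor atom of the form exp(-pi((t-u)/s)^2)cos(v(t-u)+w): restricting the scale and frequency grids shrinks the dictionary by an order of magnitude. The parameter grids below are hypothetical; the paper chooses the ranges by correlating atoms against measured pulses, which is not reproduced here.

    ```python
    import numpy as np
    from itertools import product

    def gabor_atom(n, scale, shift, freq, phase):
        """Discrete Gabor atom: Gaussian envelope times a cosine, unit l2 norm."""
        t = np.arange(n)
        g = np.exp(-np.pi * ((t - shift) / scale) ** 2) * np.cos(freq * (t - shift) + phase)
        norm = np.linalg.norm(g)
        return g / norm if norm > 0 else g

    def build_library(n, scales, shifts, freqs, phases):
        """Stack one atom per parameter combination into a dictionary matrix."""
        atoms = [gabor_atom(n, s, u, f, p)
                 for s, u, f, p in product(scales, shifts, freqs, phases)]
        return np.column_stack(atoms)

    n = 256
    # Wide, untuned parameter ranges -> a large library.
    full = build_library(n, scales=np.geomspace(2, 128, 7),
                         shifts=np.arange(0, n, 8),
                         freqs=np.linspace(0.1, np.pi, 20), phases=[0, np.pi / 2])
    # Ranges narrowed (hypothetically) around a transducer's pulse characteristics.
    reduced = build_library(n, scales=np.geomspace(16, 48, 3),
                            shifts=np.arange(0, n, 8),
                            freqs=np.linspace(0.6, 1.0, 5), phases=[0, np.pi / 2])
    print(full.shape[1], "atoms vs", reduced.shape[1], "after optimization")
    ```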

  5. A reconstruction algorithm based on sparse representation for Raman signal processing under high background noise

    NASA Astrophysics Data System (ADS)

    Fan, X.; Wang, X.; Wang, X.; Xu, Y.; Que, J.; He, H.; Wang, X.; Tang, M.

    2016-02-01

    Background noise is one of the main interference sources in Raman spectroscopy measurement and imaging. In this paper, a sparse representation based algorithm is presented to process Raman signals under high background noise. In contrast with existing de-noising methods, the proposed method reconstructs the pure Raman signals by estimating the Raman peak information. The advantages of the proposed algorithm are its high noise tolerance and the low attenuation of the pure Raman signal, both owing to its reconstruction principle. Meanwhile, the Batch-OMP algorithm is applied to accelerate the training of the sparse representation. It is therefore well suited to Raman measurement or imaging instruments that observe fast dynamic processes, where the scanning time has to be shortened and the signal-to-noise ratio (SNR) of the raw tested signal is reduced. In both simulation and experiment, the de-noising result obtained by the proposed algorithm was better than that of the traditional Savitzky-Golay (S-G) filter and the fixed-threshold wavelet de-noising algorithm.
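
    For reference, the following is a generic Orthogonal Matching Pursuit routine of the kind that Batch-OMP accelerates (Batch-OMP gains its speed essentially by precomputing inner products such as the dictionary Gram matrix when many signals are coded against the same dictionary; that optimization is omitted here). The dictionary and test signal are synthetic.

    ```python
    import numpy as np

    def omp(D, y, k):
        """Orthogonal Matching Pursuit: greedily select k atoms of D, re-fitting
        the coefficients on the current support by least squares at each step."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coef
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
    x0 = np.zeros(256); x0[[10, 50, 200]] = [1.0, -0.7, 0.5]
    y = D @ x0 + 0.01 * rng.standard_normal(64)
    x_hat = omp(D, y, k=3)                               # recovers the three atoms
    ```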

  6. A Fast and Accurate Sparse Continuous Signal Reconstruction by Homotopy DCD with Non-Convex Regularization

    PubMed Central

    Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong

    2014-01-01

    In recent years, various applications regarding sparse continuous signal recovery, such as source localization, radar imaging, and communication channel estimation, have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major defects that need to be tackled in any practical utilization. The first is the off-grid problem, caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which makes conventional CS reconstruction methods degrade considerably. The second is the urgent demand for low-complexity algorithms, especially when real-time implementation is required. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with a grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758

  7. LOFAR sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Garsden, H.; Girard, J. N.; Starck, J. L.; Corbel, S.; Tasse, C.; Woiselle, A.; McKean, J. P.; van Amesfoort, A. S.; Anderson, J.; Avruch, I. M.; Beck, R.; Bentum, M. J.; Best, P.; Breitling, F.; Broderick, J.; Brüggen, M.; Butcher, H. R.; Ciardi, B.; de Gasperin, F.; de Geus, E.; de Vos, M.; Duscha, S.; Eislöffel, J.; Engels, D.; Falcke, H.; Fallows, R. A.; Fender, R.; Ferrari, C.; Frieswijk, W.; Garrett, M. A.; Grießmeier, J.; Gunst, A. W.; Hassall, T. E.; Heald, G.; Hoeft, M.; Hörandel, J.; van der Horst, A.; Juette, E.; Karastergiou, A.; Kondratiev, V. I.; Kramer, M.; Kuniyoshi, M.; Kuper, G.; Mann, G.; Markoff, S.; McFadden, R.; McKay-Bukowski, D.; Mulcahy, D. D.; Munk, H.; Norden, M. J.; Orru, E.; Paas, H.; Pandey-Pommier, M.; Pandey, V. N.; Pietka, G.; Pizzo, R.; Polatidis, A. G.; Renting, A.; Röttgering, H.; Rowlinson, A.; Schwarz, D.; Sluman, J.; Smirnov, O.; Stappers, B. W.; Steinmetz, M.; Stewart, A.; Swinbank, J.; Tagger, M.; Tang, Y.; Tasse, C.; Thoudam, S.; Toribio, C.; Vermeulen, R.; Vocks, C.; van Weeren, R. J.; Wijnholds, S. J.; Wise, M. W.; Wucknitz, O.; Yatawatta, S.; Zarka, P.; Zensus, A.

    2015-03-01

    Context. The LOw Frequency ARray (LOFAR) radio telescope is a giant digital phased array interferometer with multiple antennas distributed in Europe. It provides discrete sets of Fourier components of the sky brightness. Recovering the original brightness distribution with aperture synthesis forms an inverse problem that can be solved by various deconvolution and minimization methods. Aims: Recent papers have established a clear link between the discrete nature of radio interferometry measurement and the "compressed sensing" (CS) theory, which supports sparse reconstruction methods to form an image from the measured visibilities. Empowered by proximal theory, CS offers a sound framework for efficient global minimization and sparse data representation using fast algorithms. Combined with instrumental direction-dependent effects (DDE) in the scope of a real instrument, we developed and validated a new method based on this framework. Methods: We implemented a sparse reconstruction method in the standard LOFAR imaging tool and compared the photometric and resolution performance of this new imager with that of CLEAN-based methods (CLEAN and MS-CLEAN) on simulated and real LOFAR data. Results: We show that sparse reconstruction i) performs as well as CLEAN in recovering the flux of point sources; ii) performs much better on extended objects (the root mean square error is reduced by a factor of up to 10); and iii) provides a solution with an effective angular resolution 2-3 times better than the CLEAN images. Conclusions: Sparse recovery gives correct photometry on high-dynamic-range and wide-field images and more realistic structures for extended sources (in simulated and real LOFAR datasets). This sparse reconstruction method is compatible with modern interferometric imagers that handle DDE corrections (A- and W-projections) required for current and future instruments such as LOFAR and SKA.

  8. Sparse signal reconstruction from polychromatic X-ray CT measurements via mass attenuation discretization

    SciTech Connect

    Gu, Renliang; Dogandžić, Aleksandar

    2014-02-18

    We propose a method for reconstructing sparse images from polychromatic X-ray computed tomography (CT) measurements via mass attenuation coefficient discretization. The material of the inspected object and the incident spectrum are assumed to be unknown. We rewrite Lambert-Beer's law in terms of integral expressions of mass attenuation and discretize the resulting integrals. We then present a penalized constrained least-squares optimization approach for reconstructing the underlying object from log-domain measurements, where an active set approach is employed to estimate incident energy density parameters and the nonnegativity and sparsity of the image density map are imposed using negative-energy and smooth ℓ1-norm penalty terms. We propose a two-step scheme for refining the mass attenuation discretization grid by using a higher sampling rate over the range with higher photon energy, and eliminating the discretization points that have little effect on the accuracy of the forward projection model. This refinement allows us to successfully handle the characteristic lines (Dirac impulses) in the incident energy density spectrum. We compare the proposed method with the standard filtered backprojection, which ignores the polychromatic nature of the measurements and the sparsity of the image density map. Numerical simulations using both realistic simulated and real X-ray CT data are presented.

  9. Convolutional Sparse Coding for Trajectory Reconstruction.

    PubMed

    Zhu, Yingying; Lucey, Simon

    2015-03-01

    Trajectory basis Non-Rigid Structure from Motion (NRSfM) refers to the process of reconstructing the 3D trajectory of each point of a non-rigid object from just their 2D projected trajectories. Reconstruction relies on two factors: (i) the condition of the composed camera and trajectory basis matrix, and (ii) whether the trajectory basis has enough degrees of freedom to model the 3D point trajectory. These two factors are inherently conflicting. Employing a trajectory basis with small capacity has the positive characteristic of reducing the likelihood of an ill-conditioned system (when composed with the camera) during reconstruction. However, it also increases the likelihood that the basis will not be able to fully model the object's "true" 3D point trajectories. In this paper we draw upon a well-known result centering around the Restricted Isometry Property (RIP) condition for sparse signal reconstruction. RIP allows us to relax the requirement that the full trajectory basis composed with the camera matrix must be well conditioned. Further, we propose a strategy for learning an over-complete basis using convolutional sparse coding from naturally occurring point trajectory corpora, to increase the likelihood that the RIP condition holds for a broad class of point trajectories and camera motions. Finally, we propose an l1-inspired objective for trajectory reconstruction that is able to "adaptively" select the smallest sub-matrix from an over-complete trajectory basis that balances (i) and (ii). We present more practical 3D reconstruction results compared to the current state of the art in trajectory basis NRSfM.

  10. Compressed Sensing for Reconstructing Sparse Quantum States

    NASA Astrophysics Data System (ADS)

    Rudinger, Kenneth; Joynt, Robert

    2014-03-01

    Compressed sensing techniques have been successfully applied to quantum state tomography, enabling the efficient determination of states that are nearly pure, i.e., of low rank. We show how compressed sensing may be used even when the states to be reconstructed are full rank. Instead, the necessary requirement is that the states be sparse in some known basis (e.g. the Pauli basis). Physical systems at high temperatures in thermal equilibrium are important examples of such states. Using this method, we are able to demonstrate that, as for classical signals, compressed sensing for quantum states exhibits the Donoho-Tanner phase transition. This method will be useful for determining the Hamiltonians of artificially constructed quantum systems whose purpose is to simulate condensed-matter models, as it requires many fewer measurements than standard tomographic procedures demand. This work was supported in part by ARO, DOD (W911NF-09-1-0439) and NSF (CCR-0635355).

  11. Robust Reconstruction of Complex Networks from Sparse Data

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Di, Zengru

    2015-01-01

    Reconstructing complex networks from measurable data is a fundamental problem for understanding and controlling the collective dynamics of complex networked systems. However, a significant challenge arises when we attempt to decode structural information hidden in limited amounts of data accompanied by noise and in the presence of inaccessible nodes. Here, we develop a general framework for robust reconstruction of complex networks from sparse and noisy data. Specifically, we decompose the task of reconstructing the whole network into recovering the local structure centered at each node. The natural sparsity of complex networks then converts local structure recovery into a sparse signal reconstruction problem that can be addressed using the lasso, a convex optimization method. We apply our method to evolutionary games, transportation, and communication processes taking place in a variety of model and real complex networks, finding that universally high reconstruction accuracy can be achieved from sparse data despite noise in the time series and missing data for some nodes. Our approach opens new routes to the network reconstruction problem and has potential applications in a wide range of fields.
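
    The node-by-node lasso idea can be sketched in a few lines: each node's observed response is regressed, with an l1 penalty, on the states of all other nodes, and the nonzero coefficients mark its links. The linear dynamics below are a hypothetical stand-in for the game/transport dynamics treated in the paper.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n_nodes, n_samples = 50, 40                  # fewer samples than nodes
    A_true = (rng.random((n_nodes, n_nodes)) < 0.06).astype(float)
    np.fill_diagonal(A_true, 0.0)

    # Hypothetical observations: each node's response is a noisy linear mix
    # of its neighbors' states.
    X = rng.standard_normal((n_samples, n_nodes))
    Y = X @ A_true.T + 0.05 * rng.standard_normal((n_samples, n_nodes))

    A_hat = np.zeros_like(A_true)
    for i in range(n_nodes):                     # one lasso problem per node
        model = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
        model.fit(X, Y[:, i])
        A_hat[i] = model.coef_
    recovered = (np.abs(A_hat) > 0.1).astype(float)
    print("misidentified links:", int(np.sum(recovered != A_true)))
    ```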

  12. Neural process reconstruction from sparse user scribbles.

    PubMed

    Roberts, Mike; Jeong, Won-Ki; Vázquez-Reina, Amelio; Unger, Markus; Bischof, Horst; Lichtman, Jeff; Pfister, Hanspeter

    2011-01-01

    We present a novel semi-automatic method for segmenting neural processes in large, highly anisotropic EM (electron microscopy) image stacks. Our method takes advantage of sparse scribble annotations provided by the user to guide a 3D variational segmentation model, thereby allowing our method to enforce 3D geometric constraints on the segmentation in a globally optimal manner. Moreover, we leverage a novel algorithm for propagating segmentation constraints through the image stack via optimal volumetric pathways, thereby allowing our method to compute highly accurate 3D segmentations from very sparse user input. We evaluate our method by reconstructing 16 neural processes in a 1024 x 1024 x 50 nanometer-scale EM image stack of a mouse hippocampus. We demonstrate that, on average, our method is 68% more accurate than previous state-of-the-art semi-automatic methods. PMID:22003670

  13. Guided wavefield reconstruction from sparse measurements

    NASA Astrophysics Data System (ADS)

    Mesnil, Olivier; Ruzzene, Massimo

    2016-02-01

    Guided wave measurements are at the basis of several Non-Destructive Evaluation (NDE) techniques. Although sparse measurements of guided waves obtained using piezoelectric sensors can efficiently detect and locate defects, extensive information on the shape and subsurface location of defects can be extracted from full-field measurements acquired by Laser Doppler Vibrometers (LDV). Wavefield acquisition with LDVs is generally slow because the wave propagation to be recorded must be repeated for each point measurement and the initial conditions must be restored between measurements. In this research, a Sparse Wavefield Reconstruction (SWR) process using Compressed Sensing is developed. The goal of this technique is to reduce the number of point measurements needed to apply NDE techniques by at least one order of magnitude, by extrapolating the knowledge of a few randomly chosen measured pixels over an over-sampled grid. To achieve this, the Lamb wave propagation equation is used to formulate a basis of shape functions in which the wavefield has a sparse representation, in order to comply with the Compressed Sensing requirements and use l1-minimization solvers. The main assumption of this reconstruction process is that every material point of the studied area is a potential source. The Compressed Sensing matrix is defined as the contribution that would have been received at a measurement location from each possible source, using the dispersion relations of the specimen computed with a Semi-Analytical Finite Element technique. The measurements are then processed through an l1-minimizer to find a minimum corresponding to the set of active sources and their corresponding excitation functions. This minimum represents the best combination of the parameters of the model matching the sparse measurements. Wavefields are then reconstructed using the propagation equation. The set of active sources found by minimization contains all the wave…

  14. An adaptive hierarchical sensing scheme for sparse signals

    NASA Astrophysics Data System (ADS)

    Schütze, Henry; Barth, Erhardt; Martinetz, Thomas

    2014-02-01

    In this paper, we present Adaptive Hierarchical Sensing (AHS), a novel adaptive hierarchical sensing algorithm for sparse signals. For a given but unknown signal with a sparse representation in an orthogonal basis, the sensing task is to identify its non-zero transform coefficients by performing only a few measurements. A measurement is simply the inner product of the signal and a particular measurement vector. During sensing, AHS partially traverses a binary tree and performs one measurement per visited node. AHS is adaptive in the sense that after each measurement a decision is made whether the entire subtree of the current node is further traversed or omitted, depending on the measurement value. In order to acquire an N-dimensional signal that is K-sparse, AHS performs O(K log N/K) measurements. With AHS, the signal is easily reconstructed by a basis transform without the need to solve an optimization problem. When sensing full-size images, AHS can compete with a state-of-the-art compressed sensing approach in terms of reconstruction performance versus number of measurements. Additionally, we simulate the sensing of image patches by AHS and investigate the impact of the choice of the sparse coding basis as well as the impact of the tree composition.
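
    The tree traversal is easy to sketch. In this simplified version, a node covering a block of coefficients measures the signal against the normalized sum of that block's basis vectors and descends only if the magnitude exceeds a threshold; a leaf measurement is the transform coefficient itself. Note that coefficients of opposite sign inside a block could cancel in the aggregated measurement; the thresholds and refinements of the actual AHS algorithm are not reproduced.

    ```python
    import numpy as np

    def ahs(signal, B, threshold):
        """Adaptive hierarchical sensing (sketch) over an orthonormal basis B."""
        n = B.shape[1]
        coeffs = np.zeros(n)
        n_meas = 0

        def visit(lo, hi):
            nonlocal n_meas
            m = B[:, lo:hi].sum(axis=1) / np.sqrt(hi - lo)  # aggregated vector
            val = float(signal @ m)                          # one measurement
            n_meas += 1
            if hi - lo == 1:
                coeffs[lo] = val             # leaf: this is the coefficient itself
            elif abs(val) >= threshold:      # descend only into promising subtrees
                mid = (lo + hi) // 2
                visit(lo, mid)
                visit(mid, hi)

        visit(0, n)
        return coeffs, n_meas

    rng = np.random.default_rng(2)
    n = 256
    B = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random orthonormal basis
    c_true = np.zeros(n)
    c_true[rng.choice(n, 5, replace=False)] = rng.uniform(1, 2, 5)  # positive: no cancellation
    x = B @ c_true                                     # K-sparse signal
    c_hat, used = ahs(x, B, threshold=0.05)
    print(used, "measurements; exact:", np.allclose(B @ c_hat, x))
    ```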

  15. Cervigram image segmentation based on reconstructive sparse representations

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoting; Huang, Junzhou; Wang, Wei; Huang, Xiaolei; Metaxas, Dimitris

    2010-03-01

    We propose an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changing illumination and specular reflection, the color and texture features in optical images often overlap with each other and are not linearly separable. By leveraging sparse representations, the data can be transformed to higher dimensions with sparse constraints and become more separable. The K-SVD algorithm is employed to find sparse representations and corresponding dictionaries. The data can be reconstructed from its sparse representations and positive and/or negative dictionaries. Classification can be achieved by comparing the reconstruction errors. In the experiments we applied our method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general methods, our approach showed lower space and time complexity and higher sensitivity.

  16. Sparse representation for the ISAR image reconstruction

    NASA Astrophysics Data System (ADS)

    Hu, Mengqi; Montalbo, John; Li, Shuxia; Sun, Ligang; Qiao, Zhijun G.

    2016-05-01

    In this paper, a sparse representation of the data from an inverse synthetic aperture radar (ISAR) system is provided in two dimensions. The proposed sparse representation motivates the use of convex optimization to recover the image with far fewer samples than required by the Nyquist-Shannon sampling theorem, which increases the efficiency and decreases the cost of calculation in radar imaging.

  17. Beam hardening correction for sparse-view CT reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2015-03-01

    Beam hardening, which is caused by the polychromatic spectrum of the X-ray beam, may produce various artifacts in the reconstructed image and degrade image quality. These artifacts are further aggravated in sparse-view reconstruction due to insufficient sampling data. Considering the advantages of total-variation (TV) minimization in CT reconstruction with sparse-view data, in this paper we propose a beam hardening correction method for sparse-view CT reconstruction based on Brabant's modeling. In this correction model, the attenuation coefficient of each voxel at the effective energy is modeled and estimated linearly, and the model can be applied in an iterative framework such as the simultaneous algebraic reconstruction technique (SART). By integrating the correction model into the forward projector of the algebraic reconstruction technique (ART), TV minimization can recover images when only a limited number of projections are available. The proposed method does not need prior information about the beam spectrum. Preliminary validation using Monte Carlo simulations indicates that the proposed method can provide better reconstructed images from sparse-view projection data, with effective suppression of artifacts caused by beam hardening. With appropriate modeling of other degrading effects such as photon scattering, the proposed framework may provide a new way toward low-dose CT imaging.

  18. Sparse image reconstruction for molecular imaging.

    PubMed

    Ting, Michael; Raich, Raviv; Hero, Alfred O

    2009-06-01

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at subatomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence, so this paper does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
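
    The paper's hybrid rule itself is not reproduced here, but the iterative thresholding framework it plugs into is easy to sketch: the special case below uses plain soft thresholding (the lasso case) to deconvolve a sparse 1-D image blurred by a hypothetical Gaussian psf.

    ```python
    import numpy as np

    def soft(z, t):
        """Soft thresholding: the proximal map of the l1 penalty."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    rng = np.random.default_rng(4)
    n = 512
    psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)   # assumed Gaussian psf
    psf /= psf.sum()
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.uniform(0.5, 2.0, 8)
    y = np.convolve(x_true, psf, mode="same") + 0.01 * rng.standard_normal(n)

    lam = 0.02
    L = np.sum(np.abs(psf)) ** 2        # bounds ||H||^2 for a convolution operator
    x = np.zeros(n)
    for _ in range(500):                # iterative thresholding
        r = y - np.convolve(x, psf, mode="same")           # residual
        x = soft(x + np.convolve(r, psf[::-1], mode="same") / L, lam / L)
    ```

    Swapping `soft` for a hard or hybrid rule changes the estimator while keeping the same iteration, which is the generalization the paper studies.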

  19. Reconstruction Techniques for Sparse Multistatic Linear Array Microwave Imaging

    SciTech Connect

    Sheen, David M.; Hall, Thomas E.

    2014-06-09

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. In this paper, a sparse multi-static array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated and measured imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  20. Smoothed l0 Norm Regularization for Sparse-View X-Ray CT Reconstruction

    PubMed Central

    Li, Ming; Peng, Chengtao; Guan, Yihui; Xu, Pin

    2016-01-01

    Low-dose computed tomography (CT) reconstruction is a challenging problem in medical imaging. To complement the standard filtered back-projection (FBP) reconstruction, sparse regularization reconstruction is gaining more and more research attention, as it promises to reduce radiation dose, suppress artifacts, and improve noise properties. In this work, we present an iterative reconstruction approach using an improved smoothed l0 (SL0) norm regularization, which approximates the l0 norm by a family of continuous functions to fully exploit the sparseness of the image gradient. Due to the excellent sparse representation of the reconstruction signal, the desired tissue details are preserved in the resulting images. To evaluate the performance of the proposed SL0 regularization method, we reconstruct the simulated dataset acquired from the Shepp-Logan phantom and a clinical head slice image. Additional experimental verification is also performed with two real datasets from scanned animal experiments. Compared to the referenced FBP reconstruction and the total variation (TV) regularization reconstruction, the results clearly reveal that the presented method has characteristic strengths. In particular, it improves reconstruction quality by reducing noise while preserving anatomical features. PMID:27725935
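
    For orientation, here is a textbook smoothed-l0 iteration on a generic underdetermined system: the l0 norm is approximated by n - sum(exp(-x_i^2 / 2 sigma^2)) and minimized over {x : Ax = y} for a decreasing sequence of sigma. The paper applies SL0 to the image gradient inside a CT reconstruction loop, which this sketch does not attempt.

    ```python
    import numpy as np

    def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.7, mu=2.0, inner=5):
        """Smoothed-l0 (sketch): gradient steps on the smooth l0 surrogate,
        projected back onto the affine constraint set {x : Ax = y}."""
        A_pinv = np.linalg.pinv(A)
        x = A_pinv @ y                               # minimum-l2 feasible start
        sigma = 2.0 * np.max(np.abs(x))
        while sigma > sigma_min:
            for _ in range(inner):
                delta = x * np.exp(-x ** 2 / (2 * sigma ** 2))
                x = x - mu * delta                   # push small entries toward zero
                x = x - A_pinv @ (A @ x - y)         # restore feasibility
            sigma *= sigma_decrease                  # sharpen the l0 approximation
        return x

    rng = np.random.default_rng(5)
    A = rng.standard_normal((40, 120))
    x0 = np.zeros(120)
    x0[rng.choice(120, 6, replace=False)] = rng.standard_normal(6)
    x_hat = sl0(A, A @ x0)
    print("max error:", float(np.max(np.abs(x_hat - x0))))
    ```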

  21. Moving target detection for frequency agility radar by sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Quan, Yinghui; Li, YaChao; Wu, Yaojun; Ran, Lei; Xing, Mengdao; Liu, Mengqi

    2016-09-01

    Frequency agility radar, with its carrier frequency varied randomly from pulse to pulse, exhibits superior performance against electromagnetic interference compared to conventional fixed carrier frequency pulse-Doppler radar. A novel moving target detection (MTD) method is proposed for estimating the target velocity in frequency agility radar, based on pulses within a coherent processing interval, using sparse reconstruction. A hardware implementation of the orthogonal matching pursuit algorithm is executed on a Xilinx Virtex-7 Field Programmable Gate Array (FPGA) to perform the sparse optimization. Finally, a series of experiments are performed to evaluate the performance of the proposed MTD method for frequency agility radar systems.

  22. A Comparison of Methods for Ocean Reconstruction from Sparse Observations

    NASA Astrophysics Data System (ADS)

    Streletz, G. J.; Kronenberger, M.; Weber, C.; Gebbie, G.; Hagen, H.; Garth, C.; Hamann, B.; Kreylos, O.; Kellogg, L. H.; Spero, H. J.

    2014-12-01

    We present a comparison of two methods for developing reconstructions of oceanic scalar property fields from sparse scattered observations. Observed data from deep sea core samples provide valuable information regarding the properties of oceans in the past. However, because the locations of sample sites are distributed on the ocean floor in a sparse and irregular manner, developing a global ocean reconstruction is a difficult task. Our methods include a flow-based and a moving least squares-based approximation method. The flow-based method augments the process of interpolating or approximating scattered scalar data by incorporating known flow information. The scheme exploits this additional knowledge to define a non-Euclidean distance measure between points in the spatial domain. This distance measure is used to create a reconstruction of the desired scalar field on the spatial domain. The resulting reconstruction thus incorporates information from both the scattered samples and the known flow field. The second method does not assume a known flow field, but rather works solely with the observed scattered samples. It is based on a modification of the moving least squares approach, a weighted least squares approximation method that blends local approximations into a global result. The modifications target the selection of data used for these local approximations and the construction of the weighting function. The definition of distance used in the weighting function is crucial for this method, so we use a machine learning approach to determine a set of near-optimal parameters for the weighting. We have implemented both reconstruction methods and have tested them using several sparse oceanographic datasets. Based upon these studies, we discuss the advantages and disadvantages of each method and suggest possible ways to combine aspects of both methods in order to achieve an overall high-quality reconstruction.

  23. Reconstruction techniques for sparse multistatic linear array microwave imaging

    NASA Astrophysics Data System (ADS)

    Sheen, David M.; Hall, Thomas E.

    2014-06-01

    Sequentially-switched linear arrays are an enabling technology for a number of near-field microwave imaging applications. Electronically sequencing along the array axis followed by mechanical scanning along an orthogonal axis allows dense sampling of a two-dimensional aperture in near real-time. The Pacific Northwest National Laboratory (PNNL) has developed this technology for several applications including concealed weapon detection, ground-penetrating radar, and non-destructive inspection and evaluation. These techniques form three-dimensional images by scanning a diverging beam swept frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images. Recently, a sparse multi-static array technology has been developed that reduces the number of antennas required to densely sample the linear array axis of the spatial aperture. This allows a significant reduction in the cost and complexity of the linear-array-based imaging system. The sparse array has been specifically designed to be compatible with Fourier-Transform-based image reconstruction techniques; however, there are limitations to the use of these techniques, especially for extreme near-field operation. In the extreme near-field of the array, back-projection techniques have been developed that account for the exact location of each transmitter and receiver in the linear array and the 3-D image location. In this paper, the sparse array technique will be described along with associated Fourier-Transform-based and back-projection-based image reconstruction algorithms. Simulated imaging results are presented that show the effectiveness of the sparse array technique along with the merits and weaknesses of each image reconstruction approach.

  24. MR image super-resolution reconstruction using sparse representation, nonlocal similarity and sparse derivative prior.

    PubMed

    Zhang, Di; He, Jiazhong; Zhao, Yun; Du, Minghui

    2015-03-01

    In magnetic resonance (MR) imaging, image spatial resolution is determined by various instrumental limitations and physical considerations. This paper presents a new algorithm for producing a high-resolution version of a low-resolution MR image. The proposed method consists of two consecutive steps: (1) reconstructs a high-resolution MR image from a given low-resolution observation via solving a joint sparse representation and nonlocal similarity L1-norm minimization problem; and (2) applies a sparse derivative prior based post-processing to suppress blurring effects. Extensive experiments on simulated brain MR images and two real clinical MR image datasets validate that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both quantitative measures and visual perception.

  25. A sparse reconstruction algorithm for ultrasonic images in nondestructive testing.

    PubMed

    Guarneri, Giovanni Alfredo; Pipa, Daniel Rodrigues; Neves Junior, Flávio; de Arruda, Lúcia Valéria Ramos; Zibetti, Marcelo Victor Wüst

    2015-01-01

    Ultrasound imaging systems (UIS) are essential tools in nondestructive testing (NDT). In general, the quality of images depends on two factors: system hardware features and image reconstruction algorithms. This paper presents a new image reconstruction algorithm for ultrasonic NDT. The algorithm reconstructs images from A-scan signals acquired by an ultrasonic imaging system with a monostatic transducer in pulse-echo configuration. It is based on regularized least squares using an l1 regularization norm. The method is tested on the reconstruction of an image of a point-like reflector, using both simulated and real data. The resolution of the reconstructed image is compared with four traditional ultrasonic imaging reconstruction algorithms: B-scan, SAFT, ω-k SAFT and regularized least squares (RLS). The method demonstrates significant resolution improvement over B-scan (about 91% using real data). The proposed scheme also outperforms traditional algorithms in terms of signal-to-noise ratio (SNR). PMID:25905700

  26. Recursive Recovery of Sparse Signal Sequences From Compressive Measurements: A Review

    NASA Astrophysics Data System (ADS)

    Vaswani, Namrata; Zhan, Jinchun

    2016-07-01

    In this article, we review the literature on design and analysis of recursive algorithms for reconstructing a time sequence of sparse signals from compressive measurements. The signals are assumed to be sparse in some transform domain or in some dictionary. Their sparsity patterns can change with time, although, in many practical applications, the changes are gradual. An important class of applications where this problem occurs is dynamic projection imaging, e.g., dynamic magnetic resonance imaging (MRI) for real-time medical applications such as interventional radiology, or dynamic computed tomography.

  27. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    PubMed

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving low-dose computed tomography (CT) scanning. However, the image quality of conventional filtered back-projection (FBP) then degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under the scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we propose a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrate that the proposed TV_MP with appropriate control parameters can ensure not only a higher signal-to-noise ratio (SNR) in the reconstructed image, but also better resolution than the original TV method.
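
    The action of the auxiliary vector m can be sketched in isolation: in each outer iteration, the current image is drawn toward its own local median. The relaxation weight below is hypothetical, and the full TV_MP alternating optimization is not reproduced.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def median_prior_step(img, beta=0.3):
        """Draw each voxel toward its 3x3 local median (the auxiliary vector m).
        beta is a hypothetical relaxation weight between 0 and 1."""
        m = median_filter(img, size=3)
        return (1.0 - beta) * img + beta * m
    ```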

  28. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors, by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension allows the reconstruction of brain structures besides the cortical surface and facilitates the use of more realistic volumetric head models with more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrate the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter of each subject, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head model and an extended 4-layered head model including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each reconstruction, using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches, for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows the introduction of more complex head models and volumetric source priors in future studies.

  29. Multi-view TWRI scene reconstruction using a joint Bayesian sparse approximation model

    NASA Astrophysics Data System (ADS)

    Tang, V. H.; Bouzerdoum, A.; Phung, S. L.; Tivive, F. H. C.

    2015-05-01

    This paper addresses the problem of scene reconstruction in conjunction with wall-clutter mitigation for compressed multi-view through-the-wall radar imaging (TWRI). We consider the problem where the scene behind the wall is illuminated from different vantage points using a different set of frequencies at each antenna. First, a joint Bayesian sparse recovery model is employed to estimate the antenna signal coefficients simultaneously, by exploiting the sparsity and inter-signal correlations among antenna signals. Then, a subspace-projection technique is applied to suppress the signal coefficients related to the wall returns. Furthermore, a multi-task linear model is developed to relate the target coefficients to the image of the scene. The composite image is reconstructed using a joint Bayesian sparse framework, taking into account the inter-view dependencies. Experimental results are presented which demonstrate the effectiveness of the proposed approach for multi-view imaging of indoor scenes using a reduced set of measurements at each view.

  30. Efficient Sparse Signal Transmission over a Lossy Link Using Compressive Sensing

    PubMed Central

    Wu, Liantao; Yu, Kai; Cao, Dongyu; Hu, Yuhen; Wang, Zhi

    2015-01-01

    Reliable data transmission over a lossy communication link is expensive due to overheads for error protection. For signals that have inherent sparse structures, compressive sensing (CS) can be applied to facilitate efficient sparse signal transmission over lossy communication links without data compression or error protection. The natural packet loss in the lossy link is modeled as a random sampling process of the transmitted data, and the original signal is reconstructed from the lossy transmission results using a CS-based reconstruction method at the receiving end. The impact of packet length on transmission efficiency under different channel conditions is discussed, and interleaving is incorporated to mitigate the impact of burst data loss. Extensive simulations and experiments have been conducted and compared to the traditional automatic repeat request (ARQ) interpolation technique, with very favorable results in terms of both the accuracy of the reconstructed signals and the transmission energy consumption. Furthermore, the packet length effect provides useful insights for using compressed sensing for efficient sparse signal transmission via lossy links. PMID:26287195
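
    The loss-as-sampling idea can be sketched as follows: treat each surviving sample as a row of a sampling operator, and recover the transform coefficients of a DCT-sparse signal by l1 minimization (here plain ISTA). This toy treats each sample as its own packet and omits the interleaving step; all parameters are hypothetical.

    ```python
    import numpy as np
    from scipy.fft import idct

    rng = np.random.default_rng(6)
    n = 256
    c = np.zeros(n)
    c[rng.choice(n, 10, replace=False)] = rng.uniform(1, 3, 10)
    x = idct(c, norm="ortho")                  # transmitted signal, sparse in DCT

    keep = rng.random(n) > 0.4                 # 40% loss as random sampling
    y = x[keep]                                # what actually arrives

    Psi = idct(np.eye(n), axis=0, norm="ortho")   # DCT synthesis matrix (x = Psi @ c)
    A = Psi[keep, :]                              # sampling-then-synthesis operator
    L = np.linalg.norm(A, 2) ** 2
    lam = 0.01
    c_hat = np.zeros(n)
    for _ in range(500):                       # ISTA for the l1-regularized problem
        z = c_hat - A.T @ (A @ c_hat - y) / L
        c_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    x_hat = idct(c_hat, norm="ortho")
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
    ```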

  31. Effects of reconstructed magnetic field from sparse noisy boundary measurements on localization of active neural source.

    PubMed

    Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin

    2016-01-01

    Localization of active neural sources (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate their localization. This paper presents a novel computational method based on magnetic field reconstruction from sparse noisy measurements for enhanced ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire set of measurements provide a "smooth" reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of the BC, a parametric equivalent current dipole-based inverse estimation algorithm using the reconstruction, and gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are used directly), and it is demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization. PMID:26358243

  32. Sparse reconstruction for direction-of-arrival estimation using multi-frequency co-prime arrays

    NASA Astrophysics Data System (ADS)

    BouDaher, Elie; Ahmad, Fauzia; Amin, Moeness G.

    2014-12-01

    In this paper, multi-frequency co-prime arrays are employed to perform direction-of-arrival (DOA) estimation with enhanced degrees of freedom (DOFs). Operation at multiple frequencies creates additional virtual elements in the difference co-array of the co-prime array corresponding to the reference frequency. Sparse reconstruction is then used to fully exploit the enhanced DOFs offered by the multi-frequency co-array, thereby increasing the number of resolvable sources. For the case where the sources have proportional spectra, the received signal vectors at the different frequencies are combined to form an equivalent single measurement vector model corresponding to the multi-frequency co-array. When the sources have nonproportional spectra, a group sparsity-based reconstruction approach is used to determine the direction of signal arrivals. Performance evaluation of the proposed multi-frequency approach is performed using numerical simulations for both cases of proportional and nonproportional source spectra.

  33. MR image reconstruction of sparsely sampled 3D k-space data by projection-onto-convex sets.

    PubMed

    Peng, Haidong; Sabati, Mohammad; Lauzon, Louis; Frayne, Richard

    2006-07-01

    In many rapid three-dimensional (3D) magnetic resonance (MR) imaging applications, such as when following a contrast bolus in the vasculature using a moving table technique, the desired k-space data cannot be fully acquired due to scan time limitations. One solution to this problem is to sparsely sample the data space. Typically, the central zone of k-space is fully sampled, but the peripheral zone is partially sampled. We have experimentally evaluated the application of the projection-onto-convex sets (POCS) and zero-filling (ZF) algorithms for the reconstruction of sparsely sampled 3D k-space data. Both a subjective assessment (by direct image visualization) and an objective analysis [using standard image quality parameters such as global and local performance error and signal-to-noise ratio (SNR)] were employed. Compared to ZF, the POCS algorithm was found to be a powerful and robust method for reconstructing images from sparsely sampled 3D k-space data, a practical strategy for greatly reducing scan time. The POCS algorithm reconstructed a faithful representation of the true image and improved image quality with regard to global and local performance error, with respect to the ZF images. SNR, however, was superior to ZF only when more than 20% of the data were sparsely sampled. POCS-based methods show potential for reconstructing fast 3D MR images obtained by sparse sampling.
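
    A minimal POCS loop of this kind alternates two projections: onto the set of images consistent with the measured k-space samples, and onto an image-domain convex set (here: real and nonnegative). The sampling pattern below, a fully sampled central zone plus a randomly sampled periphery, mirrors the scheme described above; all sizes and thresholds are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 128
    img = np.zeros((n, n))
    img[40:88, 40:88] = 1.0
    img[56:72, 56:72] = 2.0                        # toy object

    k_full = np.fft.fft2(img)
    mask = rng.random((n, n)) < 0.25               # sparse peripheral sampling
    freq = np.abs(np.fft.fftfreq(n))
    center = (freq[:, None] < 0.08) & (freq[None, :] < 0.08)
    mask |= center                                 # fully sampled central zone
    k_meas = k_full * mask

    x = np.fft.ifft2(k_meas)                       # zero-filled starting image
    for _ in range(50):
        x = np.maximum(x.real, 0.0)                # project: real, nonnegative image
        k = np.fft.fft2(x)
        k[mask] = k_meas[mask]                     # project: data consistency
        x = np.fft.ifft2(k)
    recon = x.real
    ```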

  34. Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.

    PubMed

    Ding, Lei; Yuan, Han

    2013-04-01

    Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach that combines a novel sparse electromagnetic source imaging method, variation-based cortical current density (VB-SCCD), with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed normalizing EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstruction using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data can accurately probe the spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance.

  35. A Novel 2-D Coherent DOA Estimation Method Based on Dimension Reduction Sparse Reconstruction for Orthogonal Arrays

    PubMed Central

    Wang, Xiuhong; Mao, Xingpeng; Wang, Yiming; Zhang, Naitong; Li, Bo

    2016-01-01

    Based on sparse representations, the problem of two-dimensional (2-D) direction of arrival (DOA) estimation is addressed in this paper. A novel sparse 2-D DOA estimation method, called Dimension Reduction Sparse Reconstruction (DRSR), is proposed, with pairing by Spatial Spectrum Reconstruction of Sub-Dictionary (SSRSD). By utilizing the angle decoupling method, which transforms the 2-D estimation into two independent one-dimensional (1-D) estimations, the high computational complexity induced by a large 2-D redundant dictionary is greatly reduced. Furthermore, a new angle matching scheme, SSRSD, which is less sensitive to the sparse reconstruction error and has a higher pair-matching probability, is introduced. The proposed method can be applied to any type of orthogonal array without requiring a large number of snapshots or a priori knowledge of the number of signals. The theoretical analyses and simulation results show that the DRSR-SSRSD method performs well for coherent signals, with performance approaching the Cramer–Rao bound (CRB) even under single-snapshot and low signal-to-noise ratio (SNR) conditions. PMID:27649191

  36. Filtered gradient compressive sensing reconstruction algorithm for sparse and structured measurement matrices

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri H.; Arguello, Henry

    2016-05-01

    Compressive sensing state-of-the-art proposes random Gaussian and Bernoulli measurement matrices. Nevertheless, the design of the measurement matrix is often subject to physical constraints, and therefore it is frequently not possible for the matrix to follow a Gaussian or Bernoulli distribution. Examples of these limitations are the structured and sparse matrices of compressive X-ray and compressive spectral imaging systems. A standard algorithm for recovering sparse signals consists in minimizing an objective function that includes a quadratic error term combined with a sparsity-inducing regularization term. This problem can be solved using iterative algorithms for linear inverse problems. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity. However, current algorithms are slow to reach high-quality image reconstructions because they do not exploit the structured and sparse characteristics of the compressive measurement matrices. This paper proposes a gradient-based algorithm for compressive sensing reconstruction that includes a filtering step, yielding improved quality in fewer iterations. The algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual A^T y, where y is the measurement vector and A is the compressive measurement matrix. We show that the algorithm including the filtering step converges faster than the unfiltered version. We design various filters that are motivated by the structure of A^T y. Extensive simulation results using various sparse and structured matrices highlight the relative performance gain over the existing iterative process.
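
    One way to realize the filtering idea, as a sketch only: run the usual thresholded gradient iteration, but smooth the backprojected residual before the update. The box filter below is a placeholder; the paper designs its filters from the structure of A^T y, which is not reproduced here.

    ```python
    import numpy as np

    def filtered_ista(A, y, lam, kernel, n_iter=300):
        """ISTA variant with a filtering step applied to the backprojected
        residual, illustrating the idea of converging toward a filtered
        version of A^T y. The kernel choice here is hypothetical."""
        L = np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (y - A @ x)                       # backprojected residual
            g = np.convolve(g, kernel, mode="same")     # filtering step
            z = x + g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return x

    rng = np.random.default_rng(9)
    A = rng.standard_normal((60, 200))
    x0 = np.zeros(200)
    x0[rng.choice(200, 5, replace=False)] = rng.uniform(1, 2, 5)
    x_hat = filtered_ista(A, A @ x0, lam=0.05, kernel=np.ones(3) / 3)
    ```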

  20. Extracting sparse signals from high-dimensional data: A statistical mechanics approach

    NASA Astrophysics Data System (ADS)

    Ramezanali, Mohammad

    Sparse reconstruction algorithms aim to retrieve high-dimensional sparse signals from a limited amount of measurements under suitable conditions. As the number of variables goes to infinity, these algorithms exhibit sharp phase transition boundaries where sparse retrieval breaks down. Several sparse reconstruction algorithms are formulated as optimization problems, and a few of the prominent ones have been analyzed in the literature by statistical mechanical methods. The function to be optimized plays the role of energy. The treatment involves finite-temperature replica mean-field theory followed by the zero-temperature limit. Although this approach has been successful in reproducing the algorithmic phase transition boundaries, the replica trick and the non-trivial zero-temperature limit obscure the underlying reasons for the failure of the algorithms. In this thesis, we employ the "cavity method" to give an alternative derivation of the phase transition boundaries, working directly in the zero-temperature limit. This approach provides insight into the origin of the different terms in the mean-field self-consistency equations. The cavity method naturally generates a local susceptibility, which leads to an identity that clearly indicates the existence of two phases. The identity also gives us a novel route to the known parametric expressions for the phase boundary of the Basis Pursuit algorithm and to new ones for the Elastic Net. These transitions being continuous (second order), we explore the scaling laws and critical exponents that are uniquely determined by the nature of the distribution of the density of the nonzero components of the sparse signal. Not only is the phase boundary of the Elastic Net different from that of the Basis Pursuit, we show that the critical behaviors of the two algorithms belong to different universality classes.
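
    For reference, the two convex programs whose phase boundaries are contrasted above can be written in their standard forms (generic notation, not taken from the thesis itself):

    \[
    \text{Basis Pursuit:}\quad \min_{x}\ \|x\|_{1}\ \ \text{s.t.}\ \ y = Ax,
    \qquad
    \text{Elastic Net:}\quad \min_{x}\ \tfrac{1}{2}\,\|y - Ax\|_{2}^{2} + \lambda_{1}\|x\|_{1} + \tfrac{\lambda_{2}}{2}\,\|x\|_{2}^{2}.
    \]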

  1. Universal Collaboration Strategies for Signal Detection: A Sparse Learning Approach

    NASA Astrophysics Data System (ADS)

    Khanduri, Prashant; Kailkhura, Bhavya; Thiagarajan, Jayaraman J.; Varshney, Pramod K.

    2016-10-01

    This paper considers the problem of high dimensional signal detection in a large distributed network whose nodes can collaborate with their one-hop neighboring nodes (spatial collaboration). We assume that only a small subset of nodes communicate with the Fusion Center (FC). We design optimal collaboration strategies which are universal for a class of deterministic signals. By establishing the equivalence between the collaboration strategy design problem and sparse PCA, we solve the problem efficiently and evaluate the impact of collaboration on detection performance.
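
    Since the paper reduces collaboration design to sparse PCA, the following numpy sketch of a truncated power iteration (a standard sparse-PCA heuristic, not the authors' solver; all problem sizes are toy assumptions) shows what the underlying computation looks like.

```python
import numpy as np

def truncated_power_iteration(S, k, n_iter=200, seed=0):
    """Approximate leading k-sparse eigenvector of a covariance matrix S:
    power iteration with hard truncation to the k largest entries."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(S.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        z = S @ x
        z[np.argsort(np.abs(z))[:-k]] = 0.0   # keep only the k largest entries
        x = z / np.linalg.norm(z)
    return x

# Toy demo: the first 5 coordinates share a strong common factor
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 30))
X[:, :5] += 3.0 * rng.standard_normal((500, 1))
v = truncated_power_iteration(np.cov(X, rowvar=False), k=5)
print("selected coordinates:", np.nonzero(v)[0])   # expected: 0..4
```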

  2. Clutter Mitigation in Echocardiography Using Sparse Signal Separation

    PubMed Central

    Turek, Javier S.; Elad, Michael; Yavneh, Irad

    2015-01-01

    In ultrasound imaging, clutter artifacts degrade images and may cause inaccurate diagnosis. In this paper, we apply a method called Morphological Component Analysis (MCA) for sparse signal separation with the objective of reducing such clutter artifacts. The MCA approach assumes that the two signals in the additive mix each have a sparse representation under some dictionary of atoms (a matrix), and separation is achieved by finding these sparse representations. In our work, an adaptive approach is used for learning the dictionary from the echo data. MCA is compared to Singular Value Filtering (SVF), a Principal Component Analysis- (PCA-) based filtering technique, and to a high-pass Finite Impulse Response (FIR) filter. Each filter is applied to a simulated hypoechoic lesion sequence, as well as experimental cardiac ultrasound data. MCA is demonstrated in both cases to outperform the FIR filter and obtain results comparable to the SVF method in terms of contrast-to-noise ratio (CNR). Furthermore, MCA shows a lower impact on tissue sections while removing the clutter artifacts. In experimental heart data, MCA achieves clutter mitigation with an average CNR improvement of 1.33 dB. PMID:26199622
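
    A minimal numpy sketch of the MCA idea with two fixed orthonormal dictionaries (identity for spiky content, DCT for smooth content). These stand in for the learned echo-data dictionaries of the paper; the alternating thresholded updates with a decreasing threshold are the standard MCA pattern.

```python
import numpy as np

hard = lambda x, t: x * (np.abs(x) > t)       # hard thresholding

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the atoms)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def mca(s, D1, D2, n_iter=100, t_max=3.0, t_min=0.3):
    """Split s into D1@a1 + D2@a2 (each part sparse in its own dictionary)
    by alternating thresholded updates with a decreasing threshold."""
    a1 = np.zeros(D1.shape[1]); a2 = np.zeros(D2.shape[1])
    for i in range(n_iter):
        t = t_max + (t_min - t_max) * i / (n_iter - 1)
        a1 = hard(D1.T @ (s - D2 @ a2), t)    # orthonormal D1: exact analysis
        a2 = hard(D2.T @ (s - D1 @ a1), t)
    return D1 @ a1, D2 @ a2

n = 256
C = dct_matrix(n)
spikes = np.zeros(n); spikes[[30, 90, 200]] = [2.0, -1.5, 1.0]  # spiky component
tone = 4.0 * C[5]                                               # smooth DCT atom
s1_hat, s2_hat = mca(spikes + tone, np.eye(n), C.T)
print("spike component error:", np.linalg.norm(s1_hat - spikes))
print("tone  component error:", np.linalg.norm(s2_hat - tone))
```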

  3. Super-resolution and reconstruction of sparse sub-wavelength images.

    PubMed

    Gazit, Snir; Szameit, Alexander; Eldar, Yonina C; Segev, Mordechai

    2009-12-21

    We show that, in contrast to popular belief, sub-wavelength information can be recovered from the far-field of an optical image, thereby overcoming the loss of information embedded in decaying evanescent waves. The only requirement is that the image is known to be sparse: a specific but very general and widespread property of signals that occur almost everywhere in nature. The reconstruction method relies on newly developed compressed sensing techniques, which we adapt to optical super-resolution and sub-wavelength imaging. Our approach exhibits robustness to noise and imperfections. We provide an experimental proof-of-principle by demonstrating image recovery at a spatial resolution 5 times higher than the finest resolution defined by a spatial filter. The technique is general and can be extended beyond optical microscopy, for example to atomic force microscopes, scanning-tunneling microscopes, and other imaging systems.

  4. Unbiased measurements of reconstruction fidelity of sparsely sampled magnetic resonance spectra

    NASA Astrophysics Data System (ADS)

    Wu, Qinglin; Coggins, Brian E.; Zhou, Pei

    2016-07-01

    The application of sparse-sampling techniques to NMR data acquisition would benefit from reliable quality measurements for reconstructed spectra. We introduce a pair of noise-normalized measurements for differentiating inadequate modelling from overfitting. While the two measurements can be used jointly for methods that do not enforce exact agreement between the back-calculated time domain and the original sparse data, the cross-validation measure is applicable to all reconstruction algorithms. We show that the fidelity of reconstruction is sensitive to changes in these measurements and that model overfitting results in elevated measurement values and reduced spectral quality.

  5. Unbiased measurements of reconstruction fidelity of sparsely sampled magnetic resonance spectra

    PubMed Central

    Wu, Qinglin; Coggins, Brian E.; Zhou, Pei

    2016-01-01

    The application of sparse-sampling techniques to NMR data acquisition would benefit from reliable quality measurements for reconstructed spectra. We introduce a pair of noise-normalized measurements for differentiating inadequate modelling from overfitting. While the two measurements can be used jointly for methods that do not enforce exact agreement between the back-calculated time domain and the original sparse data, the cross-validation measure is applicable to all reconstruction algorithms. We show that the fidelity of reconstruction is sensitive to changes in these measurements and that model overfitting results in elevated measurement values and reduced spectral quality. PMID:27459896

  6. MAP Support Detection for Greedy Sparse Signal Recovery Algorithms in Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Lee, Namyoon

    2016-10-01

    A reliable support detection is essential for a greedy algorithm to reconstruct a sparse signal accurately from compressed and noisy measurements. This paper proposes a novel support detection method for greedy algorithms, referred to as "maximum a posteriori (MAP) support detection". Unlike existing support detection methods that identify support indices with the largest correlation value in magnitude per iteration, the proposed method selects them with the largest likelihood ratios computed under the true and null support hypotheses, by simultaneously exploiting the distributions of the sensing matrix, the sparse signal, and the noise. Leveraging this technique, MAP-Matching Pursuit (MAP-MP) is first presented to show the advantages of the proposed support detection method, and a sufficient condition for perfect signal recovery is derived for the case when the sparse signal is binary. Subsequently, a set of iterative greedy algorithms, called MAP-generalized Orthogonal Matching Pursuit (MAP-gOMP), MAP-Compressive Sampling Matching Pursuit (MAP-CoSaMP), and MAP-Subspace Pursuit (MAP-SP), are presented to demonstrate the applicability of the proposed support detection method to existing greedy algorithms. Empirical results show that the proposed greedy algorithms with highly reliable support detection can be better, faster, and easier to implement than basis pursuit via linear programming.
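
    For orientation, this is the baseline greedy loop the MAP rule plugs into: a minimal numpy Orthogonal Matching Pursuit, whose `argmax` correlation line is the support detection step that the paper replaces with a likelihood-ratio test. The demo problem sizes are arbitrary assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit with the conventional correlation-based
    support detection (the baseline the MAP rule improves upon)."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))        # support detection step
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                # re-fit and update residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, sorted(support)

rng = np.random.default_rng(0)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
idx = rng.choice(n, k, replace=False)
x_true = np.zeros(n)
x_true[idx] = rng.standard_normal(k) + np.sign(rng.standard_normal(k))
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat, sup = omp(A, y, k)
print("true support:", sorted(idx.tolist()), "found:", sup)
```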

  7. Reconstruction Method for Optical Tomography Based on the Linearized Bregman Iteration with Sparse Regularization.

    PubMed

    Leng, Chengcai; Yu, Dongdong; Zhang, Shuang; An, Yu; Hu, Yifang

    2015-01-01

    Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at the cellular and molecular levels; it includes different modalities such as bioluminescence tomography, fluorescence molecular tomography, and Cerenkov luminescence tomography. The inverse problem is ill-posed for the above modalities, which causes nonunique solutions. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR). Considering the sparsity characteristics of the reconstructed sources, sparsity can be regarded as a kind of a priori information, and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to solve the sparse regularization problem so as to further achieve fast and accurate reconstruction results. Experimental results in a numerical simulation and an in vivo mouse demonstrate the effectiveness and potential of the proposed method. PMID:26421055
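
    A minimal numpy sketch of a linearized Bregman iteration on a generic linear system, assuming the standard shrinkage form of the algorithm; the BLT-specific forward model and weighting are not modeled here.

```python
import numpy as np

def linearized_bregman(A, y, mu=5.0, delta=1.0, n_iter=1000):
    """Linearized Bregman iteration for sparse recovery: in the limit it
    solves min mu*||x||_1 + (1/(2*delta))*||x||_2^2  s.t.  A x = y."""
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    step = 1.0 / (delta * np.linalg.norm(A, 2) ** 2)   # keeps the iteration stable
    for _ in range(n_iter):
        v += step * (A.T @ (y - A @ x))                # Bregman (dual) update
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrinkage
    return x

# Toy demo on a random sparse problem
rng = np.random.default_rng(0)
m, n, k = 60, 200, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = linearized_bregman(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```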

  8. Sparse reconstruction of breast MRI using homotopic L0 minimization in a regional sparsified domain.

    PubMed

    Wong, Alexander; Mishra, Akshaya; Fieguth, Paul; Clausi, David A

    2013-03-01

    The use of MRI for early breast examination and screening of asymptomatic women has become increasingly popular, given its ability to provide detailed tissue characteristics that cannot be obtained using other imaging modalities such as mammography and ultrasound. Recent application-oriented developments in compressed sensing theory have shown that certain types of magnetic resonance images are inherently sparse in particular transform domains, and as such can be reconstructed with a high level of accuracy from highly undersampled k-space data below Nyquist sampling rates using homotopic L0 minimization schemes, which holds great potential for significantly reducing acquisition time. An important consideration in the use of such homotopic L0 minimization schemes is the choice of sparsifying transform. In this paper, a regional differential sparsifying transform is investigated for use within a homotopic L0 minimization framework for reconstructing breast MRI. By taking local regional characteristics into account, the regional differential sparsifying transform can better account for signal variations and fine details that are characteristic of breast MRI than the popular finite differential transform, while still maintaining strong structure fidelity. Experimental results show that good breast MRI reconstruction accuracy can be achieved compared to existing methods.
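
    Homotopic L0 schemes replace the l0 "norm" with a smooth surrogate that is gradually sharpened. The sketch below follows the smoothed-l0 (SL0) pattern on a generic underdetermined system; the regional differential sparsifying transform of the paper is not modeled, and all parameter values are assumptions.

```python
import numpy as np

def sl0(A, y, sigma_decrease=0.7, n_sigma=12, n_inner=3, mu=2.0):
    """Smoothed-l0: maximize sum(exp(-x^2 / (2*sigma^2))) subject to A x = y,
    driving sigma -> 0 (a homotopy from a smooth surrogate toward l0)."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                        # minimum-norm feasible start
    sigma = 2.0 * np.abs(x).max()
    for _ in range(n_sigma):
        for _ in range(n_inner):
            d = x * np.exp(-x**2 / (2 * sigma**2))   # gradient of the surrogate
            x = x - mu * d                           # push entries toward zero
            x = x - A_pinv @ (A @ x - y)             # project back onto A x = y
        sigma *= sigma_decrease                      # sharpen the surrogate
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 150, 6
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = sl0(A, A @ x_true)
print("max abs error:", np.abs(x_hat - x_true).max())
```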

  9. Reconstruction of sparse data generated by repeated data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-04-01

    The l1-norm reconstruction techniques have enabled exact data reconstruction with high probability from 'k-sparse' data. This paper presents a complementary technique that furthers this reconstruction by truncating the data in its decomposed state. The truncation utilizes a transformation of the eigenvectors of the covariance matrix and prioritizes the vectors equally, without regard to the energy levels associated with their eigenvalues. This method presents two primary advantages in data representation: first, the data is naturally represented in only a few terms, components of each of the vectors; and second, the complete set of features is represented, albeit the fidelity of the representation may be altered. This investigation provides a means of dealing with issues associated with high-energy fading of small-signal data features. One may think of the current technique as a methodical way to inject sparsity into the data while considering the key features represented in the eigenvectors of the covariance matrix of the data.

  10. Classification of transient signals using sparse representations over adaptive dictionaries

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Brumby, Steven P.; Myers, Kary L.; Pawley, Norma H.

    2011-06-01

    Automatic classification of broadband transient radio frequency (RF) signals is of particular interest in persistent surveillance applications. Because such transients are often acquired in noisy, cluttered environments, and are characterized by complex or unknown analytical models, feature extraction and classification can be difficult. We propose a fast, adaptive classification approach based on non-analytical dictionaries learned from data. Conventional representations using fixed (or analytical) orthogonal dictionaries, e.g., Short Time Fourier and Wavelet Transforms, can be suboptimal for classification of transients, as they provide a rigid tiling of the time-frequency space, and are not specifically designed for a particular signal class. They do not usually lead to sparse decompositions, and require separate feature selection algorithms, creating additional computational overhead. Pursuit-type decompositions over analytical, redundant dictionaries yield sparse representations by design, and work well for target signals in the same function class as the dictionary atoms. The pursuit search however has a high computational cost, and the method can perform poorly in the presence of realistic noise and clutter. Our approach builds on the image analysis work of Mairal et al. (2008) to learn a discriminative dictionary for RF transients directly from data without relying on analytical constraints or additional knowledge about the signal characteristics. We then use a pursuit search over this dictionary to generate sparse classification features. We demonstrate that our learned dictionary is robust to unexpected changes in background content and noise levels. The target classification decision is obtained in almost real-time via a parallel, vectorized implementation.

  11. Reconstruction of Graph Signals Through Percolation from Seeding Nodes

    NASA Astrophysics Data System (ADS)

    Segarra, Santiago; Marques, Antonio G.; Leus, Geert; Ribeiro, Alejandro

    2016-08-01

    New schemes to recover signals defined in the nodes of a graph are proposed. Our focus is on reconstructing bandlimited graph signals, which are signals that admit a sparse representation in a frequency domain related to the structure of the graph. Most existing formulations focus on estimating an unknown graph signal by observing its value on a subset of nodes. By contrast, in this paper, we study the problem of reconstructing a known graph signal using as input a graph signal that is non-zero only for a small subset of nodes (seeding nodes). The sparse signal is then percolated (interpolated) across the graph using a graph filter. Graph filters are a generalization of classical time-invariant systems and represent linear transformations that can be implemented distributedly across the nodes of the graph. Three setups are investigated. In the first one, a single simultaneous injection takes place on several nodes in the graph. In the second one, successive value injections take place on a single node. The third one is a generalization where multiple nodes inject multiple signal values. For noiseless settings, conditions under which perfect reconstruction is feasible are given, and the corresponding schemes to recover the desired signal are specified. Scenarios leading to imperfect reconstruction, either due to insufficient or noisy signal value injections, are also analyzed. Moreover, connections with classical interpolation in the time domain are discussed. The last part of the paper presents numerical experiments that illustrate the results developed through synthetic graph signals and two real-world signal reconstruction problems: influencing opinions in a social network and inducing a desired brain state in humans.
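
    A toy numpy illustration of the seeding idea (graph, taps, and sizes are all assumed values): a single seeding node injects a value, and a low-order polynomial graph filter percolates it across the graph. Each power of the shift operator S only exchanges values between one-hop neighbors, which is what makes the filter implementable distributedly.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
S = np.triu((rng.random((N, N)) < 0.35).astype(float), 1)
S = S + S.T                                   # undirected adjacency, no self-loops

seed = np.zeros(N); seed[0] = 1.0             # injection at the seeding node
h = [1.0, 0.5, 0.25]                          # filter taps: H = h0*I + h1*S + h2*S^2
x = sum(c * np.linalg.matrix_power(S, l) @ seed for l, c in enumerate(h))
print(np.round(x, 3))                         # percolated (interpolated) signal
```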

  12. Sparse angular CT reconstruction using non-local means based iterative-correction POCS.

    PubMed

    Huang, Jing; Ma, Jianhua; Liu, Nan; Zhang, Hua; Bian, Zhaoying; Feng, Yanqiu; Feng, Qianjin; Chen, Wufan

    2011-04-01

    In divergent-beam computed tomography (CT), sparse angular sampling frequently leads to conspicuous streak artifacts. In this paper, we propose a novel non-local means (NL-means) based iterative-correction projection onto convex sets (POCS) algorithm, named NLMIC-POCS, for effective and robust sparse angular CT reconstruction. The motivation for NLMIC-POCS is that the NL-means filtered image can provide an acceptable a priori solution for the subsequent POCS iterative reconstruction. The NLMIC-POCS algorithm has been tested on simulated and real phantom data. The experimental results show that the presented NLMIC-POCS algorithm can significantly improve the image quality of sparse angular CT reconstruction, suppressing streak artifacts and preserving the edges of the image.

  13. Simplified signal processing for impedance spectroscopy with spectrally sparse sequences

    NASA Astrophysics Data System (ADS)

    Annus, P.; Land, R.; Reidla, M.; Ojarand, J.; Mughal, Y.; Min, M.

    2013-04-01

    The classical method for measuring electrical bio-impedance involves excitation with a sinusoidal waveform. Sinusoidal excitation at fixed frequency points enables a wide variety of signal processing options, the most general of them being the Fourier transform. Multiplication with two quadrature waveforms at the desired frequency can be easily accomplished both in the analogue and in the digital domain; even the simplest quadrature square waves can be considered, which reduces the signal processing task in the analogue domain to synchronous switching followed by a low-pass filter, and in the digital domain requires only additions. So-called spectrally sparse excitation sequences (SSS), which have recently been introduced into the bio-impedance measurement domain, are a very reasonable choice when simultaneous multifrequency excitation is required. They have many good properties, such as ease of generation and a good crest factor compared to similar multisinusoids. So far, the discrete or fast Fourier transform has typically been considered for the signal processing step. Simplified methods would nevertheless reduce the computational burden and enable simpler, less costly, and less energy-hungry signal processing platforms. The accuracy of the measurement with SSS excitation when using different waveforms for quadrature demodulation is compared in order to evaluate the feasibility of simplified signal processing. A sigma-delta modulated sinusoid (a binary signal) is considered to be a good alternative for synchronous demodulation.
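
    The square-wave simplification is easy to see numerically. The sketch below (a generic single-tone example, not the paper's measurement setup) demodulates a noisy sinusoid with sine references and with sign (square-wave) references; the latter needs only sign flips and additions, at the cost of a known 4/pi amplitude scale factor.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 10_000.0, 100.0                        # sampling rate, excitation frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)                   # integer number of periods
resp = 0.8 * np.sin(2 * np.pi * f0 * t + 0.6) + 0.05 * rng.standard_normal(t.size)

def demod(sig, ref_i, ref_q):
    """Synchronous detection: correlate with quadrature references, then average."""
    I, Q = np.mean(sig * ref_i), np.mean(sig * ref_q)
    return 2 * np.hypot(I, Q), np.arctan2(Q, I)

# Sinusoidal references (full multiplications): expect amplitude 0.8, phase 0.6
print(demod(resp, np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)))
# Square-wave references (synchronous switching): phase is preserved, but the
# amplitude comes out scaled by 4/pi and must be corrected by pi/4
print(demod(resp, np.sign(np.sin(2 * np.pi * f0 * t)),
                  np.sign(np.cos(2 * np.pi * f0 * t))))
```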

  14. Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.

    PubMed

    Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M

    2014-02-10

    Schemes for X-ray imaging single protein molecules using new x-ray sources, like x-ray free electron lasers (XFELs), require processing many frames of data that are obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Due to the small size of the molecules and short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, much too low to be processed using standard methods. One approach to process the data is to use statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009) which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse frames of data (below 10^-2 photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly-oriented object. This extends the work by Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single molecule reconstruction problem.

  15. Impact-force sparse reconstruction from highly incomplete and inaccurate measurements

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Chen, Xuefeng

    2016-08-01

    The classical l2-norm-based regularization methods applied to the force reconstruction inverse problem require that the number of measurements be no less than the number of unknown sources. Taking into account the sparse nature of impact-force in the time domain, we develop a general sparse methodology based on minimizing the l1-norm for solving the highly underdetermined model of impact-force reconstruction. A monotonic two-step iterative shrinkage/thresholding (MTWIST) algorithm is proposed to find the sparse solution to such an underdetermined model from highly incomplete and inaccurate measurements, which can be problematic with Tikhonov regularization. MTWIST is highly efficient for large-scale ill-posed problems since it mainly involves matrix-vector multiplications without matrix factorization. In the sparsity framework, the proposed sparse regularization method can not only determine the actual impact location from many candidate sources but also simultaneously reconstruct the time history of the impact-force. Simulation and experiment, including single-source and two-source impact-force reconstruction, are conducted on a simply supported rectangular plate and a shell structure to illustrate the effectiveness and applicability of MTWIST. Both the locations and force time histories of the single-source and two-source cases are accurately reconstructed from a single accelerometer, where a high noise level is considered in simulation and the primary noise in the experiment is assumed to be colored noise. Meanwhile, consecutive impact-force reconstruction in a large-scale (greater than 10^4) sparse framework illustrates that MTWIST has advantages in computational efficiency and identification accuracy over Tikhonov regularization.

  16. Study on adaptive compressed sensing & reconstruction of quantized speech signals

    NASA Astrophysics Data System (ADS)

    Yunyun, Ji; Zhen, Yang

    2012-12-01

    Compressed sensing (CS) has been a rising focus in recent years for its simultaneous sampling and compression of sparse signals. Speech signals can be considered approximately sparse or compressible in some domains due to their natural characteristics, so applying compressed sensing to speech signals holds great promise. This paper is involved in three aspects. Firstly, the sparsity and sparsifying matrix for speech signals are analyzed, and a kind of adaptive sparsifying matrix based on the long-term prediction of voiced speech signals is constructed. Secondly, a CS matrix called the two-block diagonal (TBD) matrix is constructed for speech signals based on the existing block diagonal matrix theory; its performance is shown empirically to be superior to that of the dense Gaussian random matrix when the sparsifying matrix is the DCT basis. Finally, we consider the quantization effect on the projections. Two corollaries about the impact of adaptive and nonadaptive quantization on reconstruction performance with two different matrices, the TBD matrix and the dense Gaussian random matrix, are derived. We find that adaptive quantization and the TBD matrix are two effective ways to mitigate the quantization effect on the reconstruction of speech signals in the CS framework.

  17. Learning signaling network structures with sparsely distributed data.

    PubMed

    Sachs, Karen; Itani, Solomon; Carlisle, Jennifer; Nolan, Garry P; Pe'er, Dana; Lauffenburger, Douglas A

    2009-02-01

    Flow cytometric measurement of signaling protein abundances has proved particularly useful for elucidation of signaling pathway structure. The single cell nature of the data ensures a very large dataset size, providing a statistically robust dataset for structure learning. Moreover, the approach is easily scaled to many conditions in high throughput. However, the technology suffers from a dimensionality constraint: at the cutting edge, only about 12 protein species can be measured per cell, far from sufficient for most signaling pathways. Because the structure learning algorithm (in practice) requires that all variables be measured together simultaneously, this restricts structure learning to the number of variables that constitute the flow cytometer's upper dimensionality limit. To address this problem, we present here an algorithm that enables structure learning for sparsely distributed data, allowing structure learning beyond the measurement technology's upper dimensionality limit for simultaneously measurable variables. The algorithm assesses pairwise (or n-wise) dependencies, constructs "Markov neighborhoods" for each variable based on these dependencies, measures each variable in the context of its neighborhood, and performs structure learning using a constrained search.

  18. Precise RFID localization in impaired environment through sparse signal recovery

    NASA Astrophysics Data System (ADS)

    Subedi, Saurav; Zhang, Yimin D.; Amin, Moeness G.

    2013-05-01

    Radio frequency identification (RFID) is a rapidly developing wireless communication technology for electronically identifying, locating, and tracking products, assets, and personnel. RFID has become one of the most important means to construct real-time locating systems (RTLS) that track and identify the location of objects in real time using simple, inexpensive tags and readers. The applicability and usefulness of RTLS techniques depend on their achievable accuracy. In particular, when multilateration-based localization techniques are exploited, the achievable accuracy primarily relies on the precision of the range estimates between a reader and the tags. Such range information can be obtained by using the received signal strength indicator (RSSI) and/or the phase difference of arrival (PDOA). In both cases, however, the accuracy is significantly compromised when the operation environment is impaired. In particular, multipath propagation significantly affects the measurement accuracy of both RSSI and phase information. In addition, because RFID systems are typically operated in short distances, RSSI and phase measurements are also coupled with the reader and tag antenna patterns, making accurate RFID localization very complicated and challenging. In this paper, we develop new methods to localize RFID tags or readers by exploiting sparse signal recovery techniques. The proposed method allows the channel environment and antenna patterns to be taken into account and be properly compensated at a low computational cost. As such, the proposed technique yields superior performance in challenging operation environments with the above-mentioned impairments.
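
    For context, a common frequency-domain PDOA ranging model for backscatter tags (a textbook form, not a result of this paper): a tag at range r returns a carrier at frequency f_i with round-trip phase phi_i, and two carriers yield a range estimate, valid while the phase difference stays within one 2*pi wrap:

    \[
    \phi_i = \frac{4\pi f_i r}{c} \ (\mathrm{mod}\ 2\pi), \qquad
    \hat{r} = \frac{c\,(\phi_2 - \phi_1)}{4\pi\,(f_2 - f_1)}.
    \]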

  19. Direct reconstruction of enhanced signal in computed tomography perfusion

    NASA Astrophysics Data System (ADS)

    Li, Bin; Lyu, Qingwen; Ma, Jianhua; Wang, Jing

    2016-04-01

    High imaging dose has been a concern in computed tomography perfusion (CTP) as repeated scans are performed at the same location of a patient. On the other hand, signal changes only occur at limited regions in CT acquired at different time points. In this work, we propose a new reconstruction strategy by effectively utilizing the initial phase high-quality CT to reconstruct the later phase CT acquired with a low-dose protocol. In the proposed strategy, initial high-quality CT is considered as a base image and enhanced signal (ES) is reconstructed directly by minimizing the penalized weighted least-square (PWLS) criterion. The proposed PWLS-ES strategy converts the conventional CT reconstruction into a sparse signal reconstruction problem. Digital and anthropomorphic phantom studies were performed to evaluate the performance of the proposed PWLS-ES strategy. Both phantom studies show that the proposed PWLS-ES method outperforms the standard iterative CT reconstruction algorithm based on the same PWLS criterion according to various quantitative metrics including root mean squared error (RMSE) and the universal quality index (UQI).
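
    The PWLS criterion referred to above has the generic form below (standard notation; that the ES variant applies this data term to difference data on top of the base image is an inference from the abstract, not a quoted formula):

    \[
    \hat{x} = \arg\min_{x}\ (y - Ax)^{\mathrm{T}} W\,(y - Ax) + \beta\,R(x),
    \]

    where y is the measured projection data, A the system matrix, W a diagonal statistical weighting, R a sparsity-promoting penalty, and beta the regularization strength.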

  20. Sparse reconstruction of liver cirrhosis from monocular mini-laparoscopic sequences

    NASA Astrophysics Data System (ADS)

    Marcinczak, Jan Marek; Painer, Sven; Grigat, Rolf-Rainer

    2015-03-01

    Mini-laparoscopy is a technique which is used by clinicians to inspect the liver surface with ultra-thin laparoscopes. However, so far no quantitative measures based on mini-laparoscopic sequences are possible. This paper presents a Structure from Motion (SfM) based methodology to do 3D reconstruction of liver cirrhosis from mini-laparoscopic videos. The approach combines state-of-the-art tracking, pose estimation, outlier rejection and global optimization to obtain a sparse reconstruction of the cirrhotic liver surface. Specular reflection segmentation is included into the reconstruction framework to increase the robustness of the reconstruction. The presented approach is evaluated on 15 endoscopic sequences using three cirrhotic liver phantoms. The median reconstruction accuracy ranges from 0.3 mm to 1 mm.

  1. Two-stage sparse representation-based face recognition with reconstructed images

    NASA Astrophysics Data System (ADS)

    Cheng, Guangtao; Song, Zhanjie; Lei, Yang; Han, Xiuning

    2014-09-01

    In order to address the challenge that both the training and testing images may be contaminated by random pixel corruption, occlusion, and disguise, a robust face recognition algorithm based on two-stage sparse representation is proposed. Specifically, noise in the training images is first eliminated by low-rank matrix recovery. Then, by exploiting the first-stage sparse representation, computed by solving a new extended ℓ1-minimization problem, noise in the testing image can be successfully removed. After this elimination, feature extraction techniques that are more discriminative but sensitive to noise can be performed effectively on the reconstructed clean images, and the final classification is accomplished by utilizing the second-stage sparse representation, obtained by solving the reduced ℓ1-minimization problem in a low-dimensional feature space. Extensive experiments are conducted on publicly available databases to verify the superiority and robustness of our algorithm.

  2. System matrix analysis for sparse-view iterative image reconstruction in X-ray CT.

    PubMed

    Wang, Linyuan; Zhang, Hanming; Cai, Ailong; Li, Yongl; Yan, Bin; Li, Lei; Hu, Guoen

    2015-01-01

    Iterative image reconstruction (IIR) with sparsity-exploiting methods, such as total variation (TV) minimization, used for investigations in compressive sensing (CS), claims potentially large reductions in sampling requirements. Quantifying this claim for computed tomography (CT) is non-trivial, as both the singularity of undersampled reconstruction and the sufficient view number for sparse-view reconstruction are ill-defined. In this paper, the singular value decomposition method is used to study the condition number and singularity of the system matrix and the regularized matrix. An estimation method for the empirical lower bound is proposed, which is helpful for estimating the number of projection views required for exact reconstruction. Simulation studies show that the singularity of the system matrices for different projection views is effectively reduced by regularization. Computing the condition number of a regularized matrix is necessary to provide a reference for evaluating the singularity and recovery potential of reconstruction algorithms using regularization. The empirical lower bound is helpful for estimating the projection view number with a sparse reconstruction algorithm. PMID:25567402
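
    A minimal numpy illustration of the analysis described above: for an undersampled system the normal-equation matrix is singular, and Tikhonov regularization shifts its spectrum away from zero, giving a finite condition number. The matrix sizes and the regularization strength are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))     # 40 projection "views", 100 unknowns

def condition_number(M):
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] / s[-1]

# Unregularized normal matrix A^T A has rank at most 40 < 100: singular
s = np.linalg.svd(A.T @ A, compute_uv=False)
print("smallest singular value of A^T A:", s[-1])          # numerically ~0

# Tikhonov regularization shifts every singular value up by lam
lam = 1e-2
print("cond(A^T A + lam*I):", condition_number(A.T @ A + lam * np.eye(100)))
```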

  3. Analysis of projection geometry for few-view reconstruction of sparse objects.

    PubMed

    Henri, C J; Collins, D L; Peters, T M

    1993-01-01

    In this paper certain projections are examined as to why they are better than others when used to reconstruct sparse objects from a small number of projections. At the heart of this discussion is the notion of "consistency," which is defined as the agreement between the object's 3-D structure and its appearance in each image. It is hypothesized that after two or more projections have been obtained, it is possible to predict how well a subsequent view will perform in terms of resolving ambiguities in the object reconstructed from only the first few views. The prediction is based on simulating views of the partial reconstruction and exploiting consistency to estimate the effectiveness of a given projection. Here some freedom is presumed to acquire arbitrary, as opposed to predetermined, views of the object. The principles underlying this approach are outlined, and experiments are performed to illustrate its use in reconstructing a realistic 3-D model. Reflecting an interest in reconstructing cerebral vasculature from angiographic projections, the experiments employ simulations based on a 3-D wireframe model derived from an internal carotid arteriogram. It is found that for such an object, the predictions can be improved significantly by introducing a correction to account for the degree to which the object possesses some symmetry in shape. For objects sufficiently sparse, this correction is less important. It is concluded that when the number of projections is limited, it may be possible to favorably affect the reconstruction process in this manner.

  4. Compressed sensing techniques for arbitrary frequency-sparse signals in structural health monitoring

    NASA Astrophysics Data System (ADS)

    Duan, Zhongdong; Kang, Jie

    2014-03-01

    Structural health monitoring requires collecting large numbers of samples and sometimes high-frequency vibration data for detecting damage to structures. The high cost of collecting these data is a major challenge. The recently proposed compressive sensing method enables a potentially large reduction in sampling, and it is a way to meet this challenge. Compressed sensing theory requires sparse signals, meaning that the signals can be well approximated as a linear combination of just a few elements from a known discrete basis or dictionary. The signal of structural vibration can be decomposed into a linear combination of a few sinusoids in the DFT domain. Unfortunately, in most cases the frequencies of the decomposed sinusoids are arbitrary in that domain and may not lie precisely on the discrete DFT basis or dictionary. In this case the signal loses its sparsity, which degrades recovery performance significantly. One way to improve the sparsity of the signal is to increase the size of the dictionary, but there exists a tradeoff: a closely spaced DFT dictionary will increase the coherence between the elements in the dictionary, which in turn decreases recovery performance. In this work we introduce three approaches for recovering signals with arbitrary frequencies. The first approach is continuous basis pursuit (CBP), which reconstructs a continuous basis by introducing interpolation steps. The second approach is semidefinite programming (SDP), which searches for the sparsest signal on a continuous basis without establishing any dictionary, enabling very high recovery precision. The third approach is spectral iterative hard thresholding (SIHT), which is based on a redundant DFT dictionary and a restricted union-of-subspaces signal model, inhibiting closely spaced sinusoids. The three approaches are studied by numerical simulation. The structural vibration signal is simulated by a finite element model, and compressed measurements of the signal are taken to perform

  5. New shape models of asteroids reconstructed from sparse-in-time photometry

    NASA Astrophysics Data System (ADS)

    Durech, Josef; Hanus, Josef; Vanco, Radim; Oszkiewicz, Dagmara Anna

    2015-08-01

    Asteroid physical parameters - the shape, the sidereal rotation period, and the spin axis orientation - can be reconstructed by the lightcurve inversion method from disk-integrated photometry that is either dense (classical lightcurves) or sparse in time. We will review our recent progress in asteroid shape reconstruction from sparse photometry. Finding a unique solution of the inverse problem is time-consuming because the sidereal rotation period has to be found by scanning a wide interval of possible periods. This can be solved efficiently by splitting the period parameter space into small parts that are sent to the computers of volunteers and processed in parallel. We will show how this distributed-computing approach works with currently available sparse photometry processed in the framework of the project Asteroids@home. In particular, we will show results based on the Lowell Photometric Database. The method produces reliable asteroid models with a very low rate of false solutions, and the pipelines and codes can be applied directly to other sources of sparse photometry - Gaia data, for example. We will present the distribution of spin axes of hundreds of asteroids, discuss the dependence of the spin obliquity on the size of an asteroid, and show examples of the spin-axis distribution in asteroid families that confirm the Yarkovsky/YORP evolution scenario.

  6. Sparse asynchronous cortical generators can produce measurable scalp EEG signals.

    PubMed

    von Ellenrieder, Nicolás; Dan, Jonathan; Frauscher, Birgit; Gotman, Jean

    2016-09-01

    We investigate to what degree the synchronous activation of a smooth patch of cortex is necessary for observing EEG scalp activity. We perform extensive simulations to compare the activity generated on the scalp by different models of cortical activation, based on intracranial EEG findings reported in the literature. The spatial activation is modeled as a cortical patch of constant activation or as random sets of small generators (0.1 to 3 cm^2 each) concentrated in a cortical region. Temporal activation models for the generation of oscillatory activity have either equal phase or random phase across the cortical patches. The results show that smooth and random spatial activation profiles produce scalp electric potential distributions with the same shape. Also, in the generation of oscillatory activity, multiple cortical generators with random phase produce scalp activity attenuated on average only 2 to 4 times compared to generators with equal phase. Sparse asynchronous cortical generators can thus produce measurable scalp EEG. This is a possible explanation for seemingly paradoxical observations of simultaneous disorganized intracranial activity and scalp EEG signals. Thus, the standard interpretation of scalp EEG might constitute an oversimplification of the underlying brain activity. PMID:27262240

  7. Sparse regularization-based reconstruction for bioluminescence tomography using a multilevel adaptive finite element method.

    PubMed

    He, Xiaowei; Hou, Yanbin; Chen, Duofang; Jiang, Yuchuan; Shen, Man; Liu, Junting; Zhang, Qitan; Tian, Jie

    2011-01-01

    Bioluminescence tomography (BLT) is a promising tool for studying physiological and pathological processes at the cellular and molecular levels. In most clinical or preclinical practices, fine discretization is needed to recover sources with acceptable resolution when solving BLT with the finite element method (FEM). Nevertheless, uniformly fine meshes would produce large datasets, and overly fine meshes might aggravate the ill-posedness of BLT. Additionally, accurate quantitative information on density and power has not previously been obtained simultaneously. In this paper, we present a novel multilevel sparse reconstruction method based on an adaptive FEM framework. In this method, the permissible source region gradually shrinks with adaptive local mesh refinement. By using sparse reconstruction with l1 regularization on multilevel adaptive meshes, simultaneous recovery of density and power as well as accurate source localization can be achieved. Experimental results for a heterogeneous phantom and a mouse atlas model demonstrate its effectiveness and potential in the application of quantitative BLT.

  8. Robust Cell Detection and Segmentation in Histopathological Images Using Sparse Reconstruction and Stacked Denoising Autoencoders

    PubMed Central

    Su, Hai; Xing, Fuyong; Kong, Xiangfei; Xie, Yuanpu; Zhang, Shaoting; Yang, Lin

    2016-01-01

    Computer-aided diagnosis (CAD) is a promising tool for accurate and consistent diagnosis and prognosis. Cell detection and segmentation are essential steps for CAD. These tasks are challenging due to variations in cell shapes, touching cells, and cluttered backgrounds. In this paper, we present a cell detection and segmentation algorithm using sparse reconstruction with trivial templates and a stacked denoising autoencoder (sDAE). The sparse reconstruction handles shape variations by representing a testing patch as a linear combination of shapes in the learned dictionary. Trivial templates are used to model the touching parts. The sDAE, trained with the original data and their structured labels, is used for cell segmentation. To the best of our knowledge, this is the first study to apply sparse reconstruction and an sDAE with structured labels to cell detection and segmentation. The proposed method is extensively tested on two data sets containing more than 3000 cells obtained from brain tumor and lung cancer images. Our algorithm achieves the best performance compared with other state-of-the-art methods.

  9. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography

    NASA Astrophysics Data System (ADS)

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-03-01

    Diffuse optical tomography (DOT) is a relatively low cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making reconstruction of high-quality images a critical challenge. Because of the nature of sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low-resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up.

  10. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    NASA Astrophysics Data System (ADS)

    Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai

    2016-03-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).

  11. Dimensionality Reduction Based Optimization Algorithm for Sparse 3-D Image Reconstruction in Diffuse Optical Tomography

    PubMed Central

    Bhowmik, Tanmoy; Liu, Hanli; Ye, Zhou; Oraintara, Soontorn

    2016-01-01

    Diffuse optical tomography (DOT) is a relatively low cost and portable imaging modality for reconstruction of optical properties in a highly scattering medium, such as human tissue. The inverse problem in DOT is highly ill-posed, making reconstruction of high-quality images a critical challenge. Because of the nature of sparsity in DOT, sparsity regularization has been utilized to achieve high-quality DOT reconstruction. However, conventional approaches using sparse optimization are computationally expensive and have no selection criteria to optimize the regularization parameter. In this paper, a novel algorithm, Dimensionality Reduction based Optimization for DOT (DRO-DOT), is proposed. It reduces the dimensionality of the inverse DOT problem by reducing the number of unknowns in two steps and thereby makes the overall process fast. First, it constructs a low-resolution voxel basis based on the sensing-matrix properties to find an image support. Second, it reconstructs the sparse image inside this support. To compensate for the reduced sensitivity with increasing depth, depth compensation is incorporated in DRO-DOT. An efficient method to optimally select the regularization parameter is proposed for obtaining a high-quality DOT image. DRO-DOT is also able to reconstruct high-resolution images even with a limited number of optodes in a spatially limited imaging set-up. PMID:26940661

  12. Hybrid Multilevel Sparse Reconstruction for a Whole Domain Bioluminescence Tomography Using Adaptive Finite Element

    PubMed Central

    Yu, Jingjing; He, Xiaowei; Geng, Guohua; Liu, Fang; Jiao, L. C.

    2013-01-01

    Quantitative reconstruction of bioluminescent sources from boundary measurements is a challenging ill-posed inverse problem owing to the high degree of absorption and scattering of light through tissue. We present a hybrid multilevel reconstruction scheme by combining the ability of sparse regularization with the advantage of adaptive finite element method. In view of the characteristics of different discretization levels, two different inversion algorithms are employed on the initial coarse mesh and the succeeding ones to strike a balance between stability and efficiency. Numerical experiment results with a digital mouse model demonstrate that the proposed scheme can accurately localize and quantify source distribution while maintaining reconstruction stability and computational economy. The effectiveness of this hybrid reconstruction scheme is further confirmed with in vivo experiments. PMID:23533542

  13. Advances in thermographic signal reconstruction

    NASA Astrophysics Data System (ADS)

    Shepard, Steven M.; Frendberg Beemer, Maria

    2015-05-01

    Since its introduction in 2001, the Thermographic Signal Reconstruction (TSR) method has emerged as one of the most widely used methods for the enhancement and analysis of thermographic sequences, with applications extending beyond industrial NDT into biomedical research, art restoration, and botany. The basic TSR process, in which a noise-reduced replica of each pixel time history is created, yields improvement over unprocessed image data that is sufficient for many applications. However, examination of the resulting logarithmic time derivatives of each TSR pixel replica provides significant insight into the physical mechanisms underlying the active thermography process. The deterministic and invariant properties of the derivatives have enabled the successful implementation of automated defect recognition and measurement systems. Unlike most approaches to the analysis of thermography data, TSR does not depend on flaw-background contrast, so it can also be applied to the characterization and measurement of thermal properties of flaw-free samples. We present a summary of recent advances in TSR, a review of the underlying theory, and examples of its implementation.
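
    The core of TSR is a low-order polynomial fit of each pixel's cooling curve in the log-log domain, from which logarithmic derivatives are computed. A minimal single-pixel numpy sketch under the textbook 1-D assumption that the surface temperature decays as t^(-1/2) after a flash (the synthetic data and fit order are assumptions, not the paper's processing chain):

```python
import numpy as np

def tsr_derivatives(t, T, degree=6):
    """Fit a low-order polynomial to ln(T) vs ln(t) (the TSR 'replica')
    and return the replica plus first and second logarithmic derivatives."""
    lt, lT = np.log(t), np.log(T)
    p = np.polynomial.Polynomial.fit(lt, lT, degree)
    return np.exp(p(lt)), p.deriv(1)(lt), p.deriv(2)(lt)

# Synthetic cooling curve: semi-infinite solid after a flash, T ~ t^(-1/2),
# i.e. a straight line of slope -0.5 in the log-log domain
t = np.linspace(0.05, 5.0, 400)
T = t ** -0.5 + 0.01 * np.random.default_rng(0).standard_normal(t.size)
T = np.clip(T, 1e-6, None)                  # keep the logarithm well-defined
replica, d1, d2 = tsr_derivatives(t, T)
print("mean first log-derivative (expect about -0.5):", float(d1.mean()))
```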

  14. Evaluation of iterative sparse object reconstruction from few projections for 3-D rotational coronary angiography.

    PubMed

    Hansis, Eberhard; Schäfer, Dirk; Dössel, Olaf; Grass, Michael

    2008-11-01

    A 3-D reconstruction of the coronary arteries offers great advantages in the diagnosis and treatment of cardiovascular disease, compared to 2-D X-ray angiograms. Besides improved roadmapping, quantitative vessel analysis is possible. Due to the heart's motion, rotational coronary angiography typically provides only 5-10 projections for the reconstruction of each cardiac phase, which leads to a strongly undersampled reconstruction problem. Such an ill-posed problem can be approached with regularized iterative methods. The coronary arteries cover only a small fraction of the reconstruction volume. Therefore, the minimization of the L1 norm of the reconstructed image, favoring spatially sparse images, is a suitable regularization. Additional problems are overlaid background structures and projection truncation, which can be alleviated by background reduction using a morphological top-hat filter. This paper quantitatively evaluates image reconstruction based on these ideas on software phantom data, in terms of reconstructed absorption coefficients and vessel radii. Results for different algorithms and different input data sets are compared. First results for electrocardiogram-gated reconstruction from clinical catheter-based rotational X-ray coronary angiography are presented. Excellent 3-D image quality can be achieved. PMID:18955171

  15. Block Sparse Compressed Sensing of Electroencephalogram (EEG) Signals by Exploiting Linear and Non-Linear Dependencies

    PubMed Central

    Mahrous, Hesham; Ward, Rabab

    2016-01-01

    This paper proposes a compressive sensing (CS) method for multi-channel electroencephalogram (EEG) signals in Wireless Body Area Network (WBAN) applications, where the battery life of sensors is limited. For the single EEG channel case, known as the single measurement vector (SMV) problem, the Block Sparse Bayesian Learning-BO (BSBL-BO) method has been shown to yield good results. This method exploits the block sparsity and the intra-correlation (i.e., the linear dependency) within the measurement vector of a single channel. For the multichannel case, known as the multi-measurement vector (MMV) problem, the Spatio-Temporal Sparse Bayesian Learning (STSBL-EM) method has been proposed. This method learns the joint correlation structure in the multichannel signals by whitening the model in the temporal and the spatial domains. Our proposed method represents the multi-channel signal data as a vector that is constructed in a specific way, so that it has a better block sparsity structure than the conventional representation obtained by stacking the measurement vectors of the different channels. To reconstruct the multichannel EEG signals, we modify the parameters of the BSBL-BO algorithm so that it can exploit not only the linear but also the non-linear dependency structures in a vector. The modified BSBL-BO is then applied to the vector with the better sparsity structure. The proposed method is shown to significantly outperform existing SMV and MMV methods. It also shows significantly lower compression errors, even at high compression ratios such as 10:1, on three different datasets. PMID:26861335

  16. A Sparse Reconstruction Approach for Identifying Gene Regulatory Networks Using Steady-State Experiment Data

    PubMed Central

    Zhang, Wanhong; Zhou, Tong

    2015-01-01

    Motivation: Identifying gene regulatory networks (GRNs), which consist of a large number of interacting units, has become a problem of paramount importance in systems biology. Situations exist extensively in which causal interacting relationships among these units must be reconstructed from measured expression data and other a priori information. Though numerous classical methods have been developed to unravel the interactions of GRNs, these methods either have high computational complexity or low estimation accuracy. Note that great similarities exist between the identification of genes that directly regulate a specific gene and sparse vector reconstruction, which often relates to determining the number, location, and magnitude of nonzero entries of an unknown vector by solving an underdetermined system of linear equations y = Φx. Based on these similarities, we propose a novel sparse reconstruction framework to identify the structure of a GRN, so as to increase the accuracy of causal regulation estimates and to reduce their computational complexity. Results: In this paper, a sparse reconstruction framework is proposed on the basis of steady-state experiment data to identify GRN structure. Different from traditional methods, this approach is adopted because it is well suited to the large-scale underdetermined problem of inferring a sparse vector. We investigate how to combine noisy steady-state experiment data and a sparse reconstruction algorithm to identify causal relationships. The efficiency of this method is tested on an artificial linear network, a mitogen-activated protein kinase (MAPK) pathway network, and the in silico networks of the DREAM challenges. The performance of the suggested approach is compared with two state-of-the-art algorithms, the widely adopted total least-squares (TLS) method and the available results from the DREAM project. Actual results show that, with a lower computational cost, the proposed method can

  17. Some Factors Affecting Time Reversal Signal Reconstruction

    NASA Astrophysics Data System (ADS)

    Prevorovsky, Z.; Kober, J.

    Time reversal (TR) ultrasonic signal processing is now broadly used in a variety of applications, including the NDE/NDT field. TR processing is used, e.g., for S/N ratio enhancement, reciprocal transducer calibration, and the location, identification, and reconstruction of unknown sources. The TR procedure in conjunction with nonlinear elastic wave spectroscopy (NEWS) is also useful for the sensitive detection of defects (the presence of nonlinearity). To enlarge the possibilities of the acoustic emission (AE) method, we proposed using the TR signal reconstruction ability to transfer detected AE signals from a structure with an AE source onto a similar remote model of the structure (real or numerical), which allows easier source analysis under laboratory conditions. Though TR signal reconstruction is robust to system variations, some small differences and changes influence the space-time TR focus and reconstruction quality. Experiments were performed on metallic parts of both simple and complicated geometry to examine the effects of small changes in temperature or configuration (body shape, dimensions, transducer placement, etc.) on TR reconstruction quality. The results of these experiments are discussed in this paper. Considering the mathematical similarity between TR and Coda Wave Interferometry (CWI), prediction of the signal reconstruction quality was possible using only the direct propagation. The results show how factors such as temperature or stress changes may deteriorate the TR reconstruction quality. It is also shown that the reconstruction quality is sometimes not enhanced by using a longer TR signal (the S/N ratio may decrease).

  18. From molecular model to sparse representation of chromatographic signals with an unknown number of peaks.

    PubMed

    Bertholon, F; Harant, O; Foan, L; Vignoud, S; Jutten, C; Grangeat, P

    2015-08-01

    Analysis of a fluid mixture using a chromatographic system is a standard technique for many biomedical applications, such as in-vitro diagnostics of body fluids or air and water quality assessment. The analysis is often dedicated to a set of target molecules or biomarkers. However, due to the complexity of the fluid, the number of mixture components is often larger than the list of targeted molecules. In order to make the analysis as exhaustive as possible, and to take possible interferences into account, it is important to identify and quantify all components contained in the chromatographic signal. The signal processing therefore aims to reconstruct a list of an unknown number of components and their relative concentrations. We address this question as a problem of sparse representation of a chromatographic signal. The representation is based on a stochastic forward model describing the transport of molecules in the chromatography column as a molecular random walk. We investigate three methods: two probabilistic Bayesian approaches, one parametric and one non-parametric, and a deterministic approach based on a parsimonious decomposition over a dictionary. We examine the performance of these three approaches on an experimental case dedicated to the analysis of mixtures of polycyclic aromatic hydrocarbon (PAH) micro-pollutants in a methanol solution, in two cases of high and low signal-to-noise ratio (SNR).

  19. Cloud Removal from SENTINEL-2 Image Time Series Through Sparse Reconstruction from Random Samples

    NASA Astrophysics Data System (ADS)

    Cerra, D.; Bieniarz, J.; Müller, R.; Reinartz, P.

    2016-06-01

    In this paper we propose a cloud removal algorithm for scenes within a Sentinel-2 satellite image time series, based on synthesis of the affected areas via sparse reconstruction. For this purpose, a cloud and cloud-shadow mask must be given. With respect to previous works, the process has an increased degree of automation. Several dictionaries, on the basis of which the data are reconstructed, are selected randomly from cloud-free areas around the cloud, and for each pixel the dictionary yielding the smallest reconstruction error in non-corrupted images is chosen for the restoration. The values beneath a cloudy area are therefore estimated by observing the spectral evolution in time of the non-corrupted pixels around it. The proposed restoration algorithm is fast and efficient, requires minimal supervision and yields results with low overall radiometric and spectral distortions.

  20. A new look at signal sparsity paradigm for low-dose computed tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Zhang, Hao; Moore, William; Liang, Zhengrong

    2016-03-01

    Signal sparsity in the computed tomography (CT) image reconstruction field is routinely interpreted as sparse angular sampling around the patient body whose image is to be reconstructed. For clinical CT applications, while normal tissues may be known and treated as sparse signals, the abnormalities inside the body are usually unknown signals and may not be treated as sparse. Furthermore, the locations and structures of abnormalities are also usually unknown, and this uncertainty adds further challenges to interpreting signal sparsity for clinical applications. In this exploratory experimental study, we assume that once the projection data around the continuous body are discretized, regardless of the sampling rate, the image reconstruction of the continuous body from the discretized data becomes a sparse signal problem. We hypothesize that a dense prior model describing the continuous body is a desirable choice for achieving an optimal solution for a given clinical task. We tested this hypothesis by adapting the total variation-Stokes (TVS) model to describe the continuous body signals and showing its gain over classic filtered backprojection (FBP) across a wide range of angular sampling rates. For the given clinical task of detecting lung nodules of size 5 mm and larger, a consistent improvement of TVS over FBP in nodule detection was observed by an experienced radiologist from low to high sampling rates. This experimental outcome concurs with the expectation of the TVS model. Further investigation of theoretical insights and task-dependent evaluations is needed.

  1. Hologram-reconstruction signal enhancement

    NASA Technical Reports Server (NTRS)

    Mezrich, R. S.

    1977-01-01

    The principle of heterodyne detection is used to combine the object beam and the reconstructed virtual-image beam. All light valves in the page composer are opened, and the virtual-image beam is allowed to interfere with the light from the valves.

  2. Initial experience in primal-dual optimization reconstruction from sparse-PET patient data

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-03-01

    There is interest in designing PET systems with a reduced number of detectors due to cost concerns, while not significantly compromising PET utility. Recently developed optimization-based algorithms, which have demonstrated potential clinical utility in image reconstruction from sparse CT data, may enable the design of such innovative PET systems. In this work, we investigate a PET configuration with a reduced number of detectors, and carry out preliminary studies on patient data collected with such a sparse-PET configuration. We consider an optimization problem combining Kullback-Leibler (KL) data fidelity with an image TV constraint, and solve it using a primal-dual optimization algorithm developed by Chambolle and Pock. Results show that advanced algorithms may enable the design of innovative PET configurations with a reduced number of detectors, while yielding potential practical PET utility.
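
    The sketch below illustrates the kind of primal-dual iteration the abstract refers to: Chambolle-Pock applied to a Kullback-Leibler data term with total variation and non-negativity, using closed-form proximal steps for both dual blocks. It is a simplified stand-in, not the authors' implementation: the TV constraint is replaced by a TV penalty with weight lam, and a small random non-negative matrix stands in for the PET system matrix.

        import numpy as np

        def grad2d(u):
            """Forward-difference image gradient (zero at the far boundary)."""
            gx = np.diff(u, axis=0, append=u[-1:, :])
            gy = np.diff(u, axis=1, append=u[:, -1:])
            return gx, gy

        def grad2d_adj(gx, gy):
            """Adjoint of grad2d (negative discrete divergence)."""
            d = np.zeros_like(gx)
            d[1:, :] += gx[:-1, :]; d[:-1, :] -= gx[:-1, :]
            d[:, 1:] += gy[:, :-1]; d[:, :-1] -= gy[:, :-1]
            return d

        def cp_kl_tv(A, y, shape, lam=0.1, n_iter=300):
            """Chambolle-Pock iterations for
                min_{x >= 0}  KL(A x; y) + lam * TV(x).
            (The paper poses TV as a constraint; a penalty is used here
            so that both dual proximal steps stay closed-form.)"""
            n = A.shape[1]
            L = np.sqrt(np.linalg.norm(A, 2) ** 2 + 8.0)  # ||[A; grad]|| bound
            tau = sigma = 1.0 / L
            x = xbar = np.zeros(n)
            p = np.zeros(A.shape[0])
            qx = qy = np.zeros(shape)
            for _ in range(n_iter):
                # Dual step, KL block: closed-form prox of the conjugate.
                pt = p + sigma * (A @ xbar)
                p = 0.5 * (1.0 + pt - np.sqrt((pt - 1.0) ** 2 + 4.0 * sigma * y))
                # Dual step, TV block: project onto the lam-ball (isotropic).
                gx, gy = grad2d(xbar.reshape(shape))
                qx, qy = qx + sigma * gx, qy + sigma * gy
                nrm = np.maximum(1.0, np.sqrt(qx**2 + qy**2) / lam)
                qx, qy = qx / nrm, qy / nrm
                # Primal step with non-negativity.
                x_new = np.clip(x - tau * (A.T @ p + grad2d_adj(qx, qy).ravel()),
                                0.0, None)
                xbar = 2.0 * x_new - x
                x = x_new
            return x.reshape(shape)

        # Toy sparse-PET surrogate: random non-negative projector, Poisson data.
        rng = np.random.default_rng(0)
        shape, m = (16, 16), 120
        x_true = np.zeros(shape); x_true[4:9, 5:11] = 4.0
        A = rng.random((m, shape[0] * shape[1])) * 0.1
        y = rng.poisson(A @ x_true.ravel()).astype(float)
        x_rec = cp_kl_tv(A, y, shape)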

  3. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGESBeta

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and model them using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct, and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also

  4. An algorithm for inverse synthetic aperture imaging lidar based on sparse signal representation

    NASA Astrophysics Data System (ADS)

    Ren, X. Z.; Sun, X. M.

    2014-12-01

    In practical applications of inverse synthetic aperture imaging lidar, the issue of sparse aperture data arises when continuous measurements are impossible or when the data collected during some periods are invalid. Hence, the imaging results obtained by traditional methods suffer from high sidelobes. Considering the sparse structure of the actual target space in high-frequency radar applications, a novel imaging method based on sparse signal representation is proposed in this paper. First, the range image is acquired by traditional pulse compression of the optical heterodyne process. Then, a redundant dictionary is constructed from the sparse azimuth sampling positions and the signal form after range compression. Finally, the imaging results are obtained by solving an ill-posed problem using sparse regularization. Simulation results confirm the effectiveness of the proposed method.

  5. Sparse deconvolution method for ultrasound images based on automatic estimation of reference signals.

    PubMed

    Jin, Haoran; Yang, Keji; Wu, Shiwei; Wu, Haiteng; Chen, Jian

    2016-04-01

    Sparse deconvolution is widely used in the field of non-destructive testing (NDT) to improve temporal resolution. Generally, the reference signals involved in sparse deconvolution are measured from the reflection echoes of a standard plane block, which cannot accurately describe the acoustic properties at different spatial positions. The performance of sparse deconvolution therefore deteriorates due to deviations in the reference signals. Moreover, manual measurement of reference signals is inconvenient for automated ultrasonic NDT. To overcome these disadvantages, a modified sparse deconvolution based on automatic estimation of the reference signals is proposed in this paper. By estimating the reference signals, the deviations are alleviated and the accuracy of sparse deconvolution is therefore improved. Based on the automatic estimation of reference signals, regional sparse deconvolution becomes achievable by decomposing the whole B-scan image into small regions of interest (ROI), which significantly reduces the image dimensionality. Since the computation time of the proposed method has a power-law dependence on the signal length, this strategy improves computational efficiency significantly. The performance of the proposed method is demonstrated using immersion measurements of scattering targets and a steel block with side-drilled holes. The results verify that the proposed method maintains its vertical-resolution enhancement and noise-suppression capabilities in different scenarios.
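
    The core optimization in sparse deconvolution can be sketched as follows: build a convolution (Toeplitz) matrix from a reference pulse and solve an l1-penalized least-squares problem. The reference-estimation step that is the paper's actual contribution is not reproduced; here the pulse is simply assumed known, ISTA is used as a generic solver, and all signals are synthetic.

        import numpy as np
        from scipy.linalg import toeplitz

        def conv_matrix(h, n):
            """Toeplitz matrix A with A @ x == np.convolve(h, x) (full)."""
            col = np.r_[h, np.zeros(n - 1)]
            row = np.r_[h[0], np.zeros(n - 1)]
            return toeplitz(col, row)

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        # Reference pulse (Gaussian-modulated sinusoid, illustrative values).
        t = np.arange(-1.0, 1.0, 0.02)
        h = np.exp(-(t / 0.25) ** 2) * np.cos(2 * np.pi * 4 * t)

        # Sparse reflectivity sequence and noisy A-scan.
        rng = np.random.default_rng(0)
        n = 300
        x_true = np.zeros(n); x_true[[60, 140, 150, 230]] = (1.0, 0.7, -0.5, 0.9)
        A = conv_matrix(h, n)
        y = A @ x_true + 0.02 * rng.standard_normal(A.shape[0])

        # ISTA: x <- soft(x - (1/L) A^T (A x - y), lam / L).
        L = np.linalg.norm(A, 2) ** 2
        lam, x = 0.1, np.zeros(n)
        for _ in range(400):
            x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
        print("recovered reflector positions:", np.flatnonzero(np.abs(x) > 0.1))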

  6. Optimization-based image reconstruction from sparse-view data in offset-detector CBCT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan

    2013-01-01

    The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.

  8. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGESBeta

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion, or of hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. First, we extend an existing sparse reconstruction method, stagewise orthogonal matching pursuit (StOMP), to incorporate prior information on the target field. Second, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
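
    A minimal sketch of the two StOMP adaptations described above is given below, under simplifying assumptions: the prior enters only as a mean field whose departures are estimated, and non-negativity is imposed by a final clipping step rather than the paper's iterative scheme. The threshold rule follows the usual StOMP formal-noise heuristic and assumes roughly normalized columns.

        import numpy as np

        def stomp_nonneg(Phi, y, x_prior, t=2.0, n_stages=10):
            """Stagewise OMP estimating departures from a prior mean, with a
            crude non-negativity fix-up (simplified from the paper's scheme)."""
            m, n = Phi.shape
            dx = np.zeros(n)
            support = np.zeros(n, bool)
            r = y - Phi @ x_prior                 # residual w.r.t. prior model
            for _ in range(n_stages):
                c = Phi.T @ r                     # matched filter
                thresh = t * np.linalg.norm(r) / np.sqrt(m)   # formal noise level
                new = (np.abs(c) > thresh) & ~support
                if not new.any():
                    break
                support |= new                    # stagewise support growth
                coef, *_ = np.linalg.lstsq(Phi[:, support], y - Phi @ x_prior,
                                           rcond=None)
                dx[:] = 0.0
                dx[support] = coef
                r = y - Phi @ (x_prior + dx)
            return np.clip(x_prior + dx, 0.0, None)   # non-negative emissions

        # Toy usage: 200 observations of a 500-pixel non-negative field.
        rng = np.random.default_rng(0)
        Phi = rng.standard_normal((200, 500))
        x_true = np.zeros(500); x_true[rng.choice(500, 12, False)] = rng.random(12) * 3
        x_est = stomp_nonneg(Phi, Phi @ x_true + 0.01 * rng.standard_normal(200),
                             x_prior=np.zeros(500))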

  9. Total variation-stokes strategy for sparse-view X-ray CT image reconstruction.

    PubMed

    Liu, Yan; Liang, Zhengrong; Ma, Jianhua; Lu, Hongbing; Wang, Ke; Zhang, Hao; Moore, William

    2014-03-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image subject to data and/or other constraints, a piecewise-smooth X-ray computed tomography image can be reconstructed from sparse-view projection data. However, due to the piecewise-constant assumption of the TV model, the reconstructed images are frequently reported to suffer from blocky or patchy artifacts. To eliminate this drawback, we present a total variation-Stokes-projection onto convex sets (TVS-POCS) reconstruction method in this paper. The TVS model is derived by introducing isophote directions for the purpose of recovering possible missing information in the sparse-view data situation, so the desired consistencies along both the normal and tangent directions are preserved in the resulting images. Compared to previous TV-based image reconstruction algorithms, the consistencies preserved by the TVS-POCS method are expected to yield noticeable gains in eliminating patchy artifacts and preserving subtle structures. To evaluate the presented TVS-POCS method, both qualitative and quantitative studies were performed using digital phantom, physical phantom and clinical data experiments. The results reveal that the presented method can yield images with several noticeable gains, measured by the universal quality index and the full-width-at-half-maximum merit, compared to the corresponding TV-based algorithms. In addition, the results indicate that the TVS-POCS method approaches the gold-standard result of filtered back-projection reconstruction in the full-view data case, as theoretically expected, while most previous iterative methods may fail in the full-view case because of artificial textures in their results.

  10. Data sinogram sparse reconstruction based on steering kernel regression and filtering strategies

    NASA Astrophysics Data System (ADS)

    Marquez, Miguel A.; Mojica, Edson; Arguello, Henry

    2016-05-01

    Computed tomography images are important in many applications, such as medicine. Recently, compressed sensing-based acquisition strategies have been proposed in order to reduce the X-ray radiation dose. However, these methods lose critical information from the sinogram. In this paper, a method for reconstructing a sinogram from sparse measurements is proposed. The proposed approach takes advantage of the redundancy of similar patches in the sinogram, and estimates a target pixel using a weighted average of its neighbors. Simulation results show that the proposed method obtains a gain of up to 2 dB with respect to an l1-minimization algorithm.
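
    The neighbor-weighting idea can be sketched as below. Full steering kernel regression shapes the weights according to the local gradient covariance; for brevity this sketch uses the isotropic (non-steering) special case, a zeroth-order Gaussian-weighted average over the known samples around each missing sinogram entry. All names and parameters are illustrative.

        import numpy as np

        def kernel_inpaint(sino, mask, radius=3, h=1.5):
            """Fill missing sinogram entries (mask == False) with a Gaussian-
            weighted average of known neighbors, the isotropic special case
            of (zeroth-order) steering kernel regression."""
            out = sino.copy()
            H, W = sino.shape
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            w0 = np.exp(-(xx**2 + yy**2) / (2.0 * h**2))   # isotropic kernel
            for i, j in zip(*np.where(~mask)):
                i0, i1 = max(i - radius, 0), min(i + radius + 1, H)
                j0, j1 = max(j - radius, 0), min(j + radius + 1, W)
                patch = sino[i0:i1, j0:j1]
                # Zero out weights on unknown samples; renormalize the rest.
                w = w0[i0 - i + radius:i1 - i + radius,
                       j0 - j + radius:j1 - j + radius] * mask[i0:i1, j0:j1]
                if w.sum() > 0:
                    out[i, j] = (w * patch).sum() / w.sum()
            return out

        # Toy usage: random sinogram with roughly 30% of entries missing.
        rng = np.random.default_rng(0)
        sino = rng.random((90, 128))
        mask = rng.random(sino.shape) > 0.3      # True where a sample exists
        filled = kernel_inpaint(sino * mask, mask)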

  11. Texture enhanced optimization-based image reconstruction (TxE-OBIR) from sparse projection views

    NASA Astrophysics Data System (ADS)

    Xie, Huiqiao; Niu, Tianye; Yang, Yi; Ren, Yi; Tang, Xiangyang

    2016-03-01

    Optimization-based image reconstruction (OBIR) has been proposed and investigated in recent years to reduce radiation dose in X-ray computed tomography (CT) by acquiring sparse projection views. However, OBIR usually generates images with a noise texture quite different from that of the reconstruction method widely used in the clinic (filtered back-projection, FBP). This may make radiologists/physicians less confident when making clinical decisions. Recognizing that the X-ray photon noise statistics are relatively uniform across the detector cells, which is enabled by beam-forming devices (e.g., bowtie filters), we propose and evaluate a novel and practical texture enhancement method in this work. In texture enhanced optimization-based image reconstruction (TxE-OBIR), we first reconstruct a texture image with the FBP algorithm from a full set of synthesized noise projection views. The TxE-OBIR image is then generated by adding the texture image to the OBIR reconstruction. As confirmed qualitatively by visual inspection and quantitatively by noise power spectrum (NPS) evaluation, the proposed method can produce images with textures that are visually identical to those of the gold-standard FBP images.
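
    The two synthesis steps lend themselves to a very short sketch, given any OBIR result. Below, skimage's iradon stands in for the clinical FBP implementation, white Gaussian views stand in for the synthesized noise projections (leaning on the uniform-noise-statistics argument above), and obir_image and the noise level sigma are placeholders.

        import numpy as np
        from skimage.transform import iradon   # FBP stand-in (skimage >= 0.19)

        rng = np.random.default_rng(0)

        obir_image = np.zeros((128, 128))      # placeholder for any OBIR result
        sigma = 0.5                            # assumed detector noise level

        # 1) Synthesize a full set of noise-only projection views (uniform
        #    Gaussian statistics across detector cells, per the bowtie argument).
        theta = np.linspace(0.0, 180.0, 360, endpoint=False)
        noise_sino = sigma * rng.standard_normal((obir_image.shape[0], theta.size))

        # 2) FBP-reconstruct the noise sinogram into a texture image, then
        #    add it to the OBIR reconstruction.
        texture = iradon(noise_sino, theta=theta, filter_name='ramp')
        txe_obir = obir_image + texture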

  12. High-efficiency imaging through scattering media in noisy environments via sparse image reconstruction

    NASA Astrophysics Data System (ADS)

    Wu, Tengfei; Shao, Xiaopeng; Gong, Changmei; Li, Huijuan; Liu, Jietao

    2015-11-01

    High-efficiency imaging through highly scattering media is urgently desired for various applications. Imaging speed and imaging quality, which together determine imaging efficiency, are two unavoidable indices for any optical imaging area. Based on random-walk analysis in statistical optics, the elements of a transmission matrix (TM) obey a Gaussian distribution. Instead of dealing with the large amounts of data contained in the TM and speckle pattern, imaging can be achieved with only a small fraction of the data via sparse representation. We give a detailed mathematical analysis of the element distribution of the TM of a scattering imaging system and study the imaging method of sparse image reconstruction (SIR). More specifically, we focus on analyzing the optimum sampling rates for imaging targets of different structures, which significantly influence both imaging speed and imaging quality. Results show that an optimum sampling rate exists at any noise level provided the target can be sparsely represented, and that searching for this optimum sampling rate effectively balances imaging quality against imaging speed, maximizing imaging efficiency. This work is helpful for practical applications of imaging through highly scattering media with the SIR method.
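
    The sampling-rate trade-off can be reproduced in a few lines: draw a Gaussian TM as the abstract suggests, measure a sparse target at several sampling rates, and recover it with a generic l1 solver (ISTA here; the record does not name the solver). All sizes and parameters are illustrative.

        import numpy as np

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(A, y, lam, n_iter=300):
            """Generic l1 solver (ISTA) for y = A @ x with sparse x."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = soft(x - (A.T @ (A @ x - y)) / L, lam / L)
            return x

        rng = np.random.default_rng(1)
        n, k, noise = 256, 8, 0.01                 # pixels, sparsity, noise level
        x = np.zeros(n); x[rng.choice(n, k, False)] = 1.0    # sparse target
        for rate in (0.1, 0.2, 0.4):               # sampling rate = m / n
            m = int(rate * n)
            T = rng.standard_normal((m, n)) / np.sqrt(m)     # Gaussian TM rows
            y = T @ x + noise * rng.standard_normal(m)       # measured speckle
            x_hat = ista(T, y, lam=0.05)
            err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
            print(f"sampling rate {rate:.1f}: relative error {err:.3f}")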

  13. A Fast Greedy Sparse Method of Current Sources Reconstruction for Ventricular Torsion Detection

    NASA Astrophysics Data System (ADS)

    Bing, Lu; Jiang, Shiqin; Chen, Mengpei; Zhao, Chen; Grönemeyer, D.; Hailer, B.; Van Leeuwen, P.

    2015-09-01

    A fast greedy sparse (FGS) method for reconstructing cardiac equivalent current sources is developed for non-invasive detection and quantitative analysis of individual left-ventricular torsion. The cardiac magnetic field inverse problem is solved based on a distributed source model. Analysis of real 61-channel magnetocardiogram (MCG) data demonstrates that one or two dominant current sources of larger strength can be identified efficiently by the FGS algorithm. Left-ventricle torsion during systole is then examined on the basis of the x, y and z coordinate curves and angle changes of the reconstructed dominant current sources. The advantages of this method are that it is non-invasive and visual, with higher sensitivity and resolution. It may enable the clinical detection of cardiac systolic and ejection dysfunction.

  14. Fast iterative image reconstruction using sparse matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-03-01

    Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem by using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line-integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation due to the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including X-ray CT, PET, and SPECT.
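
    The storage argument is easy to make concrete. In the sketch below, three random sparse matrices stand in for the estimated blurring factors and the precomputed geometric projector (the estimation itself is not reproduced); a forward projection is then just three sparse matrix-vector products, and the nonzero counts can be compared against a dense system matrix.

        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(0)
        n_pix, n_lor = 64 * 64, 3000       # image voxels, lines of response

        def random_sparse(m, n, density):
            """Random non-negative sparse matrix (stand-in for a factor)."""
            return sp.random(m, n, density=density, random_state=rng,
                             data_rvs=lambda s: rng.random(s)).tocsr()

        B_sino = random_sparse(n_lor, n_lor, 0.001)  # sinogram blur (detector)
        G      = random_sparse(n_lor, n_pix, 0.002)  # geometric projector
        B_img  = random_sparse(n_pix, n_pix, 0.001)  # image blur (LOR comp.)

        x = rng.random(n_pix)                        # current image estimate
        fwd = B_sino @ (G @ (B_img @ x))             # forward: 3 sparse mat-vecs
        back = B_img.T @ (G.T @ (B_sino.T @ fwd))    # matched back projection

        # Storage comparison vs. an explicit dense system matrix.
        nnz = B_sino.nnz + G.nnz + B_img.nnz
        print(f"factored nnz: {nnz}  vs dense entries: {n_lor * n_pix}")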

  15. Sparse-view x-ray CT reconstruction via total generalized variation regularization

    NASA Astrophysics Data System (ADS)

    Niu, Shanzhou; Gao, Yang; Bian, Zhaoying; Huang, Jing; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2014-06-01

    Sparse-view CT reconstruction algorithms via total variation (TV) optimize the data iteratively on the basis of a noise- and artifact-reducing model, resulting in significant radiation dose reduction while maintaining image quality. However, the piecewise-constant assumption of TV minimization often leads to noticeable patchy artifacts in the reconstructed images. To obviate this drawback, we present a penalized weighted least-squares (PWLS) scheme that retains image quality by incorporating the new concept of total generalized variation (TGV) regularization. We refer to the proposed scheme as 'PWLS-TGV' for simplicity. Specifically, TGV regularization utilizes higher-order derivatives of the objective image, and the weighted least-squares term considers data-dependent variance estimation, both of which contribute to improving the image quality with sparse-view projection measurements. An alternating optimization algorithm was then adopted to minimize the associated objective function. To evaluate the PWLS-TGV method, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present PWLS-TGV method achieves images with several noticeable gains over the original TV-based method in terms of accuracy and resolution properties.

  16. MR image reconstruction algorithms for sparse k-space data: a Java-based integration.

    PubMed

    de Beer, R; Coron, A; Graveron-Demilly, D; Lethmate, R; Nastase, S; van Ormondt, D; Wajer, F T A W

    2002-11-01

    We have worked on multi-dimensional magnetic resonance imaging (MRI) data acquisition and related image reconstruction methods that aim at reducing the MRI scan time. To achieve this scan-time reduction we have combined the approach of 'increasing the speed' of k-space acquisition with that of 'deliberately omitting' the acquisition of k-space trajectories (sparse sampling). Today we have a whole range of (sparse) sampling distributions and related reconstruction methods. In the context of a European Union Training and Mobility of Researchers project, we have decided to integrate all methods into one coordinating software system. This system meets the requirements that it is highly structured in an object-oriented manner using the Unified Modeling Language and the Java programming environment, that it uses the client-server approach, that it allows multi-client communication sessions with facilities for sharing data, and that it is a true distributed computing system with guaranteed reliability using core activities of the Java Jini package.

  17. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection.

    PubMed

    Su, Hai; Xing, Fuyong; Yang, Lin

    2016-06-01

    Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer-aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) a sparse reconstruction based approach to split touching cells; and 2) an adaptive dictionary learning method used to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole-slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy, with an F1 score of 0.96.

  18. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined from the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational cost. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm.
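
    The alternating two-stage structure is shown schematically below: a Kaczmarz (ART) sweep plus non-negativity clipping as the POCS stage, followed by a few TV steepest-descent steps whose size is tied to the magnitude of the POCS update. The paper's automatic rules for the ART relaxation and the domain-dependent step adaptation are condensed here into fixed illustrative constants.

        import numpy as np

        def tv_grad(u, eps=1e-8):
            """Gradient of the (smoothed) isotropic TV of image u."""
            gx = np.diff(u, axis=0, append=u[-1:, :])
            gy = np.diff(u, axis=1, append=u[:, -1:])
            nrm = np.sqrt(gx**2 + gy**2 + eps)
            px, py = gx / nrm, gy / nrm
            d = np.zeros_like(u)                   # adjoint of the gradient
            d[1:, :] += px[:-1, :]; d[:-1, :] -= px[:-1, :]
            d[:, 1:] += py[:, :-1]; d[:, :-1] -= py[:, :-1]
            return d

        def art_sweep(A, x, y, relax):
            """One Kaczmarz (ART) pass over all measurement rows."""
            for i in range(A.shape[0]):
                a = A[i]
                x += relax * (y[i] - a @ x) / (a @ a) * a
            return x

        def asd_pocs(A, y, shape, n_outer=20, n_tv=10, relax=0.8):
            x = np.zeros(A.shape[1])
            for _ in range(n_outer):
                x_prev = x.copy()
                x = art_sweep(A, x, y, relax)      # POCS: data fidelity
                x = np.clip(x, 0.0, None)          # POCS: non-negativity
                # Tie the TV step size to the size of the POCS update.
                dpocs = np.linalg.norm(x - x_prev)
                u = x.reshape(shape)
                for _ in range(n_tv):
                    g = tv_grad(u)
                    u = u - 0.2 * dpocs * g / (np.linalg.norm(g) + 1e-12)
                x = u.ravel()
            return x.reshape(shape)

        # Toy usage: 32x32 blocky phantom, underdetermined random system.
        shape = (32, 32)
        rng = np.random.default_rng(0)
        A = rng.standard_normal((400, shape[0] * shape[1]))
        phantom = np.zeros(shape); phantom[8:20, 10:22] = 1.0
        rec = asd_pocs(A, A @ phantom.ravel(), shape)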

  19. Progressive Magnetic Resonance Image Reconstruction Based on Iterative Solution of a Sparse Linear System

    PubMed Central

    Fahmy, Ahmed S.; Gabr, Refaat E.; Heberlein, Keith; Hu, Xiaoping P.

    2006-01-01

    Image reconstruction from nonuniformly sampled spatial-frequency-domain data is an important problem that arises in computed imaging. Current reconstruction techniques suffer from limitations in their model and implementation. In this paper, we present a new reconstruction method that is based on solving a system of linear equations using an efficient iterative approach. Image pixel intensities are related to the measured frequency-domain data through a set of linear equations. Although the system matrix is too dense and large to solve by direct inversion in practice, a simple orthogonal transformation applied to the rows of this matrix converts it into a sparse one, up to a chosen level of energy preservation. The transformed system is subsequently solved using the conjugate gradient method. This method is applied to reconstruct images of a numerical phantom as well as magnetic resonance images from experimental spiral imaging data. The results support the theory and demonstrate that the computational load of this method is similar to that of standard gridding, illustrating its practical utility.
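
    The transform-sparsify-solve pipeline can be sketched in a few lines. The paper's particular row transformation is not reproduced; below, an orthogonal DCT applied down the columns serves as an example transform, entries are thresholded to retain 99.9% of the energy, and SciPy's conjugate gradient solves the resulting sparse normal equations. The stand-in encoding matrix is built with smooth columns so the transform actually compacts energy; all sizes are illustrative.

        import numpy as np
        import scipy.sparse as sp
        from scipy.fft import dct
        from scipy.sparse.linalg import cg

        rng = np.random.default_rng(0)
        m, n = 400, 256
        # Stand-in dense encoding matrix with smooth columns (random walks),
        # so an orthogonal DCT of the rows concentrates the energy.
        E = np.cumsum(rng.standard_normal((m, n)), axis=0)
        x_true = rng.standard_normal(n)
        y = E @ x_true

        # Orthogonal transform Q applied to the system: Q E and Q y.
        QE = dct(E, axis=0, norm='ortho')
        Qy = dct(y, norm='ortho')

        # Keep the largest entries up to 99.9% of the total energy.
        flat = np.sort(np.abs(QE).ravel())[::-1]
        energy = np.cumsum(flat**2) / np.sum(flat**2)
        thresh = flat[np.searchsorted(energy, 0.999)]
        S = sp.csr_matrix(np.where(np.abs(QE) >= thresh, QE, 0.0))

        # Solve the sparse transformed system S x = Q y via CG on the
        # normal equations (S^T S is symmetric positive semi-definite).
        x_hat, info = cg(S.T @ S, S.T @ Qy)
        res = np.linalg.norm(S @ x_hat - Qy) / np.linalg.norm(Qy)
        print(f"kept {S.nnz} of {m * n} entries; info={info}; rel. residual {res:.2e}")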

  20. Recovery of sparse translation-invariant signals with continuous basis pursuit

    PubMed Central

    Ekanadham, Chaitanya; Tranchina, Daniel; Simoncelli, Eero

    2013-01-01

    We consider the problem of decomposing a signal into a linear combination of features, each a continuously translated version of one of a small set of elementary features. Although these constituents are drawn from a continuous family, most current signal decomposition methods rely on a finite dictionary of discrete examples selected from this family (e.g., shifted copies of a set of basic waveforms) and apply sparse optimization methods to select and solve for the relevant coefficients. Here, we generate a dictionary that includes auxiliary interpolation functions that approximate translates of features via adjustment of their coefficients. We formulate a constrained convex optimization problem in which the full set of dictionary coefficients represents a linear approximation of the signal, the auxiliary coefficients are constrained so as to represent only translated features, and sparsity is imposed on the primary coefficients using an L1 penalty. The basis pursuit denoising (BP) method may be seen as a special case in which the auxiliary interpolation functions are omitted, and we thus refer to our methodology as continuous basis pursuit (CBP). We develop two implementations of CBP for a one-dimensional translation-invariant source, one using a first-order Taylor approximation and another using a form of trigonometric spline. We examine the tradeoff between sparsity and signal reconstruction accuracy in these methods, demonstrating empirically that trigonometric CBP substantially outperforms Taylor CBP, which in turn offers substantial gains over ordinary BP. In addition, the CBP bases can generally achieve equally good or better approximations with much coarser sampling than BP, leading to a reduction in dictionary dimensionality.

  1. Reconstruction of sparse connectivity in neural networks from spike train covariances

    NASA Astrophysics Data System (ADS)

    Pernice, Volker; Rotter, Stefan

    2013-03-01

    The inference of causation from correlation is in general highly problematic. Correspondingly, it is difficult to infer the existence of physical synaptic connections between neurons from correlations in their activity. Covariances in neural spike trains and their relation to network structure have been the subject of intense research, both experimentally and theoretically. The influence of recurrent connections on covariances can be characterized directly in linear models, where connectivity in the network is described by a matrix of linear coupling kernels. However, as indirect connections also give rise to covariances, the inverse problem of inferring network structure from covariances can generally not be solved unambiguously. Here we study to what degree this ambiguity can be resolved if the sparseness of neural networks is taken into account. To reconstruct a sparse network, we determine the minimal set of linear couplings consistent with the measured covariances by minimizing the L1 norm of the coupling matrix under appropriate constraints. Contrary to intuition, after stochastic optimization of the coupling matrix, the resulting estimate of the underlying network is directed, despite the fact that a symmetric matrix of count covariances is used for inference. The performance of the new method is best if connections are neither exceedingly sparse nor too dense, and it is easily applicable to networks of a few hundred nodes. Full coupling kernels can be obtained from the matrix of full covariance functions. We apply our method to networks of leaky integrate-and-fire neurons in an asynchronous-irregular state, where spike train covariances are well described by a linear model.

  2. Sparse Reconstruction Challenge for diffusion MRI: Validation on a physical phantom to determine which acquisition scheme and analysis method to use?

    PubMed

    Ning, Lipeng; Laun, Frederik; Gur, Yaniv; DiBella, Edward V R; Deslauriers-Gauthier, Samuel; Megherbi, Thinhinane; Ghosh, Aurobrata; Zucchelli, Mauro; Menegaz, Gloria; Fick, Rutger; St-Jean, Samuel; Paquette, Michael; Aranda, Ramon; Descoteaux, Maxime; Deriche, Rachid; O'Donnell, Lauren; Rathi, Yogesh

    2015-12-01

    Diffusion magnetic resonance imaging (dMRI) is the modality of choice for investigating in-vivo white matter connectivity and neural tissue architecture of the brain. The diffusion-weighted signal in dMRI reflects the diffusivity of water molecules in brain tissue and can be utilized to produce image-based biomarkers for clinical research. Due to the constraints on scanning time, only a limited number of measurements can be acquired within a clinically feasible scan time. In order to reconstruct the dMRI signal from a discrete set of measurements, a large number of algorithms have been proposed in recent years, in conjunction with varying sampling schemes, i.e., with varying b-values and gradient directions. It is therefore imperative to compare the performance of these reconstruction methods on a single data set to provide appropriate guidelines so that neuroscientists can make an informed decision when designing their acquisition protocols. For this purpose, the SPArse Reconstruction Challenge (SPARC) was held along with the workshop on Computational Diffusion MRI (at MICCAI 2014) to validate the performance of multiple reconstruction methods using data acquired from a physical phantom. A total of 16 reconstruction algorithms (9 teams) participated in this community challenge. The goal was to reconstruct single-b-value and/or multiple-b-value data from a sparse set of measurements. In particular, the aim was to determine an appropriate acquisition protocol (in terms of the number of measurements and b-values) and the analysis method to use for a neuroimaging study. The challenge did not dwell on the accuracy of these methods in estimating model-specific measures such as fractional anisotropy (FA) or mean diffusivity, but on the accuracy of these methods in fitting the data. This paper presents several quantitative results pertaining to each reconstruction algorithm. The conclusions in this paper provide a valuable guideline for choosing a suitable algorithm and the corresponding

  3. Review of Sparse Representation-Based Classification Methods on EEG Signal Processing for Epilepsy Detection, Brain-Computer Interface and Cognitive Impairment

    PubMed Central

    Wen, Dong; Jia, Peilei; Lian, Qiusheng; Zhou, Yanhong; Lu, Chengbiao

    2016-01-01

    At present, sparse representation-based classification (SRC) has become an important approach in electroencephalograph (EEG) signal analysis, in which the data are sparsely represented on the basis of a fixed or learned dictionary and classified according to reconstruction criteria. SRC methods have been used to analyze EEG signals in epilepsy, cognitive impairment and brain-computer interface (BCI) applications, and have made rapid progress, including improvements in computational accuracy, efficiency and robustness. However, these methods still have deficiencies in real-time performance, generalization ability and their dependence on labeled samples when analyzing EEG signals. This mini-review describes the advantages and disadvantages of SRC methods in EEG signal analysis, with the expectation that these methods can provide better tools for analyzing EEG signals.

  6. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2015-03-01

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  8. Sparse Bayesian framework applied to 3D super-resolution reconstruction in fetal brain MRI

    NASA Astrophysics Data System (ADS)

    Becerra, Laura C.; Velasco Toledo, Nelson; Romero Castro, Eduardo

    2015-01-01

    Fetal magnetic resonance (FMR) imaging is becoming increasingly important, as it allows assessment of brain development and thus early diagnosis of congenital abnormalities. Spatial resolution is limited by the short acquisition time and unpredictable fetal movements; consequently, the resulting images are characterized by non-parallel projection planes composed of anisotropic voxels. Sparse Bayesian representation is a flexible strategy able to model complex relationships. Super-resolution is approached as a regression problem, whose main advantage is the capability to learn data relations from observations. Quantitative performance evaluation was carried out using synthetic images; the proposed method demonstrates better reconstruction quality than a standard interpolation approach. The presented method is thus a promising approach for improving the quality of information on 3-D fetal brain structure.

  9. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for the reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral-domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods.

  10. Sparse electrocardiogram signals recovery based on solving a row echelon-like form of system.

    PubMed

    Cai, Pingmei; Wang, Guinan; Yu, Shiwei; Zhang, Hongjuan; Ding, Shuxue; Wu, Zikai

    2016-02-01

    The study of biology and medicine in noisy environments is an evolving direction in biological data analysis. Among such studies, the analysis of electrocardiogram (ECG) signals in a noisy environment is a challenging direction in personalized medicine. Due to their periodic character, ECG signals can be roughly regarded as sparse biomedical signals. This study proposes a two-stage recovery algorithm for sparse biomedical signals in the time domain. In the first stage, the concentration subspaces are found in advance. By exploiting these subspaces, the mixing matrix is then estimated accurately. In the second stage, based on the number of active sources at each time point, the time points are divided into different layers. Next, by constructing certain transformation matrices, these time points form a row echelon-like system. The sources at each layer can then be solved explicitly by the corresponding matrix operations. Notably, all these operations are conducted under a weak sparsity condition, namely that the number of active sources is less than the number of observations. Experimental results show that the proposed method performs better on the sparse ECG signal recovery problem.

  11. A sparse digital signal model for ultrasonic nondestructive evaluation of layered materials.

    PubMed

    Bochud, N; Gomez, A M; Rus, G; Peinado, A M

    2015-09-01

    Signal modeling has proven to be a useful tool for characterizing damaged materials under ultrasonic nondestructive evaluation (NDE). In this paper, we introduce a novel digital signal model for ultrasonic NDE of multilayered materials. This model borrows concepts from lattice filter theory and bridges them to the physics involved in the wave-material interactions. In particular, the proposed theoretical framework shows that any multilayered material can be characterized by a transfer function with sparse coefficients. The filter coefficients are linked to the physical properties of the material and are obtained analytically from them, so the sparse structure arises naturally rather than relying on heuristic approaches. The developed model is first validated with experimental measurements obtained from multilayered media consisting of homogeneous solids. The sparse structure of the obtained digital filter is then exploited through a model-based inverse problem for damage identification in a carbon fiber-reinforced polymer (CFRP) plate.
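
    The physical origin of the sparsity can be illustrated with a toy forward model: each interface between layers contributes one filter tap, at a lag fixed by the two-way travel time and with an amplitude fixed by the impedance mismatch. The sketch below keeps first-order echoes only (no multiples), which is a simplification of the lattice-filter model; all material values are invented for illustration.

        import numpy as np

        fs = 50e6                              # sampling rate, Hz (assumed)
        c = (2730.0, 1540.0, 2730.0)           # layer wave speeds, m/s (assumed)
        d = (2e-3, 1e-3)                       # layer thicknesses, m (assumed)
        Z = (4.0e6, 1.5e6, 4.0e6)              # acoustic impedances, rayl (assumed)

        # Sparse filter taps: one reflection per interface, at a lag set by
        # the cumulative two-way travel time (first-order echoes only).
        n_taps = 4096
        h = np.zeros(n_taps)
        t, amp = 0.0, 1.0
        for i in range(len(d)):
            r = (Z[i + 1] - Z[i]) / (Z[i + 1] + Z[i])  # interface reflectivity
            t += 2 * d[i] / c[i]                       # two-way delay in layer i
            h[int(round(t * fs))] = amp * r
            amp *= (1 - r**2)                          # round-trip transmission loss

        # Transducer pulse (Gaussian-modulated sinusoid) and simulated A-scan.
        tt = np.arange(-2e-6, 2e-6, 1 / fs)
        pulse = np.exp(-(tt / 4e-7) ** 2) * np.cos(2 * np.pi * 5e6 * tt)
        ascan = np.convolve(h, pulse, mode='same')
        print("nonzero filter taps:", np.count_nonzero(h), "out of", n_taps)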

  12. Evaluation of image collection requirements for 3D reconstruction using phototourism techniques on sparse overhead data

    NASA Astrophysics Data System (ADS)

    Ontiveros, Erin; Salvaggio, Carl; Nilosek, David; Raqueño, Nina; Faulring, Jason

    2012-06-01

    Phototourism is a burgeoning field that uses collections of ground-based photographs to construct a three-dimensional model of a tourist site using computer vision techniques. These techniques capitalize on the extensive overlap among the various visitor-acquired images, from which a three-dimensional point cloud can be generated. From there, a facetized version of the structure can be created. Remotely sensed data tend to focus on nadir or near-nadir imagery while trying to minimize overlap, in order to achieve the greatest possible ground coverage during a data collection. A workflow is being developed at the Digital Imaging and Remote Sensing (DIRS) Group at the Rochester Institute of Technology (RIT) that takes these phototourism techniques, which typically use dense coverage of a small object or region, and applies them to remotely sensed imagery, which involves sparse data coverage of a large area. In addition, RIT has planned and executed a high-overlap image collection, using the RIT WASP system, to study the requirements for such three-dimensional reconstruction efforts. While the collection was extensive, the intention was to find the minimum number of images and the frame overlap needed to generate quality point clouds. This paper discusses the image data collection effort and what it means to generate and evaluate a quality point cloud for reconstruction purposes.

  13. Image reconstruction from sparse data in synchrotron-radiation-based microtomography

    PubMed Central

    Xia, D.; Xiao, X.; Bian, J.; Han, X.; Sidky, E. Y.; De Carlo, F.; Pan, X.

    2011-01-01

    Synchrotron-radiation-based micro-computed tomography (SR-μCT) is a powerful tool for yielding 3D structural information of high spatial and contrast resolution about a specimen preserved in its natural state. Currently, a large number of projection views is required for yielding SR-μCT images without significant artifacts using existing algorithms. When a wet biological specimen is imaged, synchrotron X-ray radiation from a large number of projection views can result in significant structural deformation within the specimen. A possible approach to reducing imaging time and specimen deformation is to decrease the number of projection views. In this work, using reconstruction algorithms developed recently for medical computed tomography (CT), we investigate and demonstrate image reconstruction from sparse-view data acquired in SR-μCT. Numerical results of our study suggest that images of practical value can be obtained from data acquired at a number of projection views significantly lower than those used currently in a typical SR-μCT imaging experiment.

  14. Dynamic Filtering of Time-Varying Sparse Signals via ℓ _1 Minimization

    NASA Astrophysics Data System (ADS)

    Charles, Adam S.; Balavoine, Aurele; Rozell, Christopher J.

    2016-11-01

    Despite the importance of sparse signal models and the increasing prevalence of high-dimensional streaming data, there are relatively few algorithms for dynamic filtering of time-varying sparse signals. Of the existing algorithms, fewer still provide strong performance guarantees. This paper examines two algorithms for dynamic filtering of sparse signals that are based on efficient l1 optimization methods. We first present an analysis of a simple algorithm (BPDN-DF) that works well when the system dynamics are known exactly. We then introduce a novel second algorithm (RWL1-DF) that is more computationally complex than BPDN-DF but performs better in practice, especially when the system dynamics model is inaccurate. Robustness to model inaccuracy is achieved by using a hierarchical probabilistic data model and propagating higher-order statistics from the previous estimate (akin to Kalman filtering) in the sparse inference process. We demonstrate the properties of these algorithms on both simulated data and natural video sequences. Taken together, the algorithms presented in this paper represent the first strong performance analysis of dynamic filtering algorithms for time-varying sparse signals, as well as state-of-the-art performance in this emerging application.
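
    Of the two algorithms, BPDN-DF is simple enough to sketch: at each time step the standard BPDN objective is augmented with a quadratic term pulling the estimate toward the dynamics prediction. The sketch below solves that objective with plain ISTA (the paper's particular solver is not reproduced), and the dynamics, dimensions and weights are illustrative.

        import numpy as np

        def soft(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def bpdn_df_step(A, y_t, x_prev, F, lam=0.05, kappa=0.5, n_iter=200):
            """One time step of BPDN dynamic filtering:
                min_x 0.5||y_t - A x||^2 + lam ||x||_1
                      + 0.5 kappa ||x - F x_prev||^2,
            solved with ISTA (the quadratic dynamics term simply augments
            the smooth part of the objective)."""
            x_pred = F @ x_prev                       # dynamics prediction
            L = np.linalg.norm(A, 2) ** 2 + kappa     # Lipschitz constant
            x = x_pred.copy()
            for _ in range(n_iter):
                grad = A.T @ (A @ x - y_t) + kappa * (x - x_pred)
                x = soft(x - grad / L, lam / L)
            return x

        # Toy tracking run: a static sparse state under known dynamics F.
        rng = np.random.default_rng(0)
        n, m = 128, 40
        F = np.eye(n)                                 # simplest dynamics model
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        x = np.zeros(n); x[[7, 50, 90]] = (1.0, -1.0, 0.5)
        x_hat = np.zeros(n)
        for t in range(5):
            y_t = A @ x + 0.01 * rng.standard_normal(m)
            x_hat = bpdn_df_step(A, y_t, x_hat, F)
        print("support found:", np.flatnonzero(np.abs(x_hat) > 0.2))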

  15. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS)

    NASA Astrophysics Data System (ADS)

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k - t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI, it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to ensure accurate calibration of the coil sensitivity. In this work, we propose a novel accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k - t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k - t SPARKS incorporates Kalman-smoother self-calibration in k - t space and sparse signal recovery in x - f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k - t space. In the Kalman-smoother calibration, motion-induced uncertainties over all time frames are included in modeling the state transition, while a coil-dependent noise statistic is employed in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k - t SPARKS yields a higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  16. Accelerated dynamic cardiac MRI exploiting sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS).

    PubMed

    Park, Suhyung; Park, Jaeseok

    2015-05-01

    Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k  -  t space and coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to guarantee accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k  -  t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k  -  t SPARKS incorporates Kalman-smoother self-calibration in k  -  t space and sparse signal recovery in x  -   f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k  -  t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire set of time frames were included in modeling the state transition, while a coil-dependent noise statistic was used in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k  -  t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.

  17. 3D high-density localization microscopy using hybrid astigmatic/ biplane imaging and sparse image reconstruction.

    PubMed

    Min, Junhong; Holden, Seamus J; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2014-11-01

    Localization microscopy achieves nanoscale spatial resolution by iterative localization of sparsely activated molecules, which generally leads to a long acquisition time. By implementing advanced algorithms to treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging since PSFs are far more similar along the axial dimension than the lateral dimensions. Here, we present a new, high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed via simulation and real data of microtubules. Furthermore, we also successfully demonstrated fluorescent-protein-based live cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing fast dynamics of the endoplasmic reticulum.

  18. Signal processing using sparse derivatives with applications to chromatograms and ECG

    NASA Astrophysics Data System (ADS)

    Ning, Xiaoran

    In this thesis, we investigate the sparsity that exists in the derivative domain. In particular, we focus on signals that possess up to Mth-order (M > 0) sparse derivatives. Effort is put into formulating proper penalty functions and optimization problems that capture properties related to sparse derivatives, and into searching for fast, computationally efficient solvers. The effectiveness of these algorithms is demonstrated in two real-world applications. In the first application, we provide an algorithm that jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks are modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized alongside symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data, with promising results. In the second application, a novel electrocardiography (ECG) enhancement algorithm is designed, also based on sparse derivatives. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example, morphological changes due to motion artifact and non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized. By solving the proposed convex l1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second and third-order derivatives (differences) are sparse

  19. Rapid 3D dynamic arterial spin labeling with a sparse model-based image reconstruction.

    PubMed

    Zhao, Li; Fielden, Samuel W; Feng, Xue; Wintermark, Max; Mugler, John P; Meyer, Craig H

    2015-11-01

    Dynamic arterial spin labeling (ASL) MRI measures the perfusion bolus at multiple observation times and yields accurate estimates of cerebral blood flow in the presence of variations in arterial transit time. ASL has intrinsically low signal-to-noise ratio (SNR) and is sensitive to motion, so that extensive signal averaging is typically required, leading to long scan times for dynamic ASL. The goal of this study was to develop an accelerated dynamic ASL method with improved SNR and robustness to motion using a model-based image reconstruction that exploits the inherent sparsity of dynamic ASL data. The first component of this method is a single-shot 3D turbo spin echo spiral pulse sequence accelerated using a combination of parallel imaging and compressed sensing. This pulse sequence was then incorporated into a dynamic pseudo continuous ASL acquisition acquired at multiple observation times, and the resulting images were jointly reconstructed enforcing a model of potential perfusion time courses. Performance of the technique was verified using a numerical phantom and it was validated on normal volunteers on a 3-Tesla scanner. In simulation, a spatial sparsity constraint improved SNR and reduced estimation errors. Combined with a model-based sparsity constraint, the proposed method further improved SNR, reduced estimation error and suppressed motion artifacts. Experimentally, the proposed method resulted in significant improvements, with scan times as short as 20s per time point. These results suggest that the model-based image reconstruction enables rapid dynamic ASL with improved accuracy and robustness.

  20. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    PubMed

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted l1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have recently been proposed for signal reconstruction at a lower computational complexity compared to the optimal l1 minimization, while maintaining good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches, which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves higher reconstruction accuracy at significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to l1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples. PMID:27672448
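
    The selection-and-pruning loop that the abstract describes can be sketched generically. The code below is a CoSaMP-flavoured stand-in, selecting a reduced set of large correlations, fitting by least squares on the merged support, then pruning back to k entries; it is not the authors' exact RMP selection rule.

    ```python
    import numpy as np

    def greedy_recover(y, A, k, n_iter=20):
        """Greedy sparse recovery with per-iteration pruning (a sketch)."""
        m, n = A.shape
        x = np.zeros(n)
        support = np.array([], dtype=int)
        for _ in range(n_iter):
            corr = np.abs(A.T @ (y - A @ x))
            # select a reduced set of the strongest correlations
            support = np.union1d(support, np.argsort(corr)[-k:]).astype(int)
            z, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            # prune: keep only the k largest coefficients, then re-fit
            support = support[np.argsort(np.abs(z))[-k:]]
            zk, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x = np.zeros(n)
            x[support] = zk
        return x
    ```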

  1. SU-E-I-45: Reconstruction of CT Images From Sparsely-Sampled Data Using the Logarithmic Barrier Method

    SciTech Connect

    Xu, H

    2014-06-01

    Purpose: To develop and investigate whether the logarithmic barrier (LB) method can result in high-quality reconstructed CT images using sparsely-sampled noisy projection data. Methods: The objective function is typically formulated as the sum of the total variation (TV) and a data fidelity (DF) term with a parameter λ that governs the relative weight between them. Finding the optimized value of λ is a critical step for this approach to give satisfactory results. The proposed LB method avoids using λ by constructing the objective function as the sum of the TV and a log function whose argument is the DF term. Newton's method was used to solve the optimization problem. The algorithm was coded in MATLAB 2013b. Both the Shepp-Logan phantom and a patient lung CT image were used for demonstration of the algorithm. Measured data were simulated by calculating the projection data using the Radon transform. A Poisson noise model was used to account for the simulated detector noise. The iteration stopped when the difference between the current TV and the previous one was less than 1%. Results: The Shepp-Logan phantom reconstruction study shows that filtered back-projection (FBP) gives strong streak artifacts for 30 and 40 projections. Although visually the streak artifacts are less pronounced for 64 and 90 projections in FBP, the 1D pixel profiles indicate that FBP gives noisier reconstructed pixel values than LB does. A lung image reconstruction is presented. It shows that use of 64 projections gives satisfactory reconstructed image quality with regard to noise suppression and sharp edge preservation. Conclusion: This study demonstrates that the logarithmic barrier method can be used to reconstruct CT images from sparsely-sampled data. Using around 64 projections balances noise suppression against over-smoothing of sharp demarcations. Future studies may extend to CBCT reconstruction and improvements in computation speed.
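
    The construction described in the abstract, the TV plus a logarithm whose argument involves the data-fidelity term rather than a weighted sum, is easy to write down. The sketch below assumes the barrier form TV(x) - log(eps_df - DF(x)) for a data-error bound eps_df; the paper's exact barrier, its Newton solver, and the stopping rule are not reproduced here.

    ```python
    import numpy as np

    def smoothed_tv(img, eps=1e-8):
        """Isotropic total variation with a small smoothing constant."""
        gx = np.diff(img, axis=0, append=img[-1:, :])
        gy = np.diff(img, axis=1, append=img[:, -1:])
        return np.sum(np.sqrt(gx**2 + gy**2 + eps))

    def barrier_objective(x, A, b, eps_df):
        """TV(x) - log(eps_df - ||Ax - b||^2): a log-barrier objective
        that needs no relative-weight parameter between TV and fidelity.
        """
        df = np.sum((A @ x.ravel() - b) ** 2)
        if df >= eps_df:
            return np.inf  # outside the barrier's feasible domain
        return smoothed_tv(x) - np.log(eps_df - df)
    ```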

  2. Application of a non-convex smooth hard threshold regularizer to sparse-view CT image reconstruction

    NASA Astrophysics Data System (ADS)

    Rose, Sean; Sidky, Emil Y.; Pan, Xiaochuan

    2015-03-01

    In this work, we apply non-convex, sparsity-exploiting regularization techniques to image reconstruction in computed tomography (CT). We modify the well-known total variation (TV) penalty to use a non-convex smooth hard threshold (SHT) penalty as opposed to the typical l1 norm. The SHT penalty is different from the lp (p < 1) norms in that it is bounded above and has a bounded gradient as its argument approaches the zero vector. We propose a re-weighting scheme utilizing the Chambolle-Pock (CP) algorithm in an attempt to solve a data-error constrained optimization problem utilizing the SHT penalty and call the resulting algorithm SHTCP. We then demonstrate the algorithm on sparse-view reconstruction of a simulated breast phantom with noiseless and noisy data and compare the converged images to those generated by a CP algorithm solving the analogous data-error constrained problem utilizing the TV. We demonstrate that SHTCP allows for more accurate reconstruction in the case of sparse-view noisy data and, in the case of noiseless data, allows for accurate reconstruction from fewer views than its TV counterpart.
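
    The abstract pins down the SHT penalty only by two properties: it is bounded above and its gradient remains bounded as the argument approaches zero, unlike the lp (p < 1) norms whose gradients blow up there. A hypothetical penalty with exactly these properties is sketched below; the authors' actual functional form may differ.

    ```python
    import numpy as np

    def sht_like_penalty(x, tau=1.0):
        """A bounded, non-convex sparsity penalty (illustrative form only).

        Each entry contributes tau^2 * (1 - exp(-(x/tau)^2)): roughly
        quadratic near zero, saturating at tau^2 for large |x|.
        """
        return np.sum(tau**2 * (1.0 - np.exp(-(x / tau) ** 2)))

    def sht_like_grad(x, tau=1.0):
        """Gradient of the illustrative penalty; bounded for all x."""
        return 2.0 * x * np.exp(-(x / tau) ** 2)
    ```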

  3. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels

  4. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels

  5. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose
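
    The MBIR cost used in these three studies is described as penalized weighted least squares with an approximate TV penalty. A generic sketch of such a cost, with diagonal statistical weights and a smoothed isotropic TV, follows; the system operator A, the weights w, and beta are problem-specific placeholders.

    ```python
    import numpy as np

    def pwls_tv_cost(x, A, y, w, beta, eps=1e-8):
        """PWLS cost: (y - Ax)^T W (y - Ax) + beta * TV(x).

        x is a 2D image; A acts on its raveled vector; w holds the
        diagonal of the statistical weighting matrix W.
        """
        r = y - A @ x.ravel()
        data_term = np.sum(w * r * r)
        gx = np.diff(x, axis=0, append=x[-1:, :])
        gy = np.diff(x, axis=1, append=x[:, -1:])
        return data_term + beta * np.sum(np.sqrt(gx**2 + gy**2 + eps))
    ```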

  6. Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

    2012-12-01

    We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. This technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. Thirty-five sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate. Sandia
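
    StOMP, the solver named above, admits a compact rendering: each stage thresholds the residual correlations at a multiple of the formal noise level, admits every surviving coordinate into the support at once, and re-fits by least squares. The threshold rule and stage count below are typical textbook choices, not those tuned in the study.

    ```python
    import numpy as np

    def stomp(y, A, n_stages=10, t=2.5):
        """Stagewise Orthogonal Matching Pursuit (sketch)."""
        m, n = A.shape
        x = np.zeros(n)
        support = np.zeros(n, dtype=bool)
        r = y.copy()
        for _ in range(n_stages):
            c = A.T @ r
            sigma = np.linalg.norm(r) / np.sqrt(m)   # formal noise level
            hits = np.abs(c) > t * sigma
            if not hits.any():
                break               # nothing crosses the threshold
            support |= hits
            z, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x = np.zeros(n)
            x[support] = z
            r = y - A @ x
        return x
    ```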

  7. A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes

    PubMed Central

    2011-01-01

    Background: A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods: We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine-scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results: Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single-view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better

  8. Assimilating irregularly spaced sparsely observed turbulent signals with hierarchical Bayesian reduced stochastic filters

    SciTech Connect

    Brown, Kristen A.; Harlim, John

    2013-02-15

    In this paper, we consider a practical filtering approach for assimilating irregularly spaced, sparsely observed turbulent signals through a hierarchical Bayesian reduced stochastic filtering framework. The proposed hierarchical Bayesian approach consists of two steps, blending a data-driven interpolation scheme and the Mean Stochastic Model (MSM) filter. We examine the potential of using the deterministic piecewise linear interpolation scheme and the ordinary kriging scheme in interpolating irregularly spaced raw data to regularly spaced processed data, and the importance of the dynamical constraint (through MSM) in filtering the processed data, on a numerically stiff state estimation problem. In particular, we test this approach on a two-layer quasi-geostrophic model in a two-dimensional domain with a small radius of deformation to mimic ocean turbulence. First, our numerical results suggest that the dynamical constraint becomes important when the observation noise variance is large. Second, we find that the filtered estimates with ordinary kriging are superior to those with linear interpolation when observation networks are not too sparse; such robust results are found from numerical simulations with many randomly simulated irregularly spaced observation networks, various observation time intervals, and observation error variances. Third, when the observation network is very sparse, we find that the kriging and linear interpolations are comparable.
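
    For reference, the ordinary kriging interpolator compared above solves a small linear system whose weights are constrained to sum to one. A 1D sketch with a Gaussian covariance model follows; the covariance family and its parameters are assumptions, since the abstract does not fix them.

    ```python
    import numpy as np

    def ordinary_kriging(xs, ys, xq, length=1.0, var=1.0, nugget=1e-8):
        """Ordinary kriging predictions at query locations xq.

        xs, ys : observed locations and values (1D arrays).
        """
        def cov(a, b):
            d = a[:, None] - b[None, :]
            return var * np.exp(-(d / length) ** 2)

        n = len(xs)
        # augmented system enforcing sum-to-one weights
        K = np.empty((n + 1, n + 1))
        K[:n, :n] = cov(xs, xs) + nugget * np.eye(n)
        K[n, :n] = K[:n, n] = 1.0
        K[n, n] = 0.0
        k = np.vstack([cov(xs, xq), np.ones(len(xq))])
        w = np.linalg.solve(K, k)   # kriging weights + Lagrange multiplier
        return w[:n, :].T @ ys
    ```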

  9. Fast and efficient fully 3D PET image reconstruction using sparse system matrix factorization with GPU acceleration

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Qi, Jinyi

    2011-10-01

    Statistically based iterative image reconstruction has been widely used in positron emission tomography (PET) imaging. The quality of reconstructed images depends on the accuracy of the system matrix that defines the mapping from the image space to the data space. However, an accurate system matrix is often associated with high computation cost and huge storage requirements. In this paper, we present a method to address this problem using sparse matrix factorization and graphics processing unit (GPU) acceleration. We factor the accurate system matrix into three highly sparse matrices: a sinogram blurring matrix, a geometric projection matrix and an image blurring matrix. The geometric projection matrix is precomputed based on a simple line integral model, while the sinogram and image blurring matrices are estimated from point-source measurements. The resulting factored system matrix has far fewer nonzero elements than the original system matrix, which substantially reduces the storage and computation cost. The smaller matrix size also allows an efficient implementation of the forward and backward projectors on a GPU, which often has a limited memory space. Our experimental studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction, while achieving better performance than existing factorization methods.
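
    The factorization is straightforward to exploit in code: the three sparse factors are stored separately and applied in sequence, and the back-projector is just the transposes in reverse order. A minimal sketch with placeholder matrix contents:

    ```python
    import numpy as np
    import scipy.sparse as sp

    class FactoredProjector:
        """Forward/back projection with P ~= B_sino @ G @ B_img."""

        def __init__(self, B_sino, G, B_img):
            self.B_sino, self.G, self.B_img = B_sino, G, B_img

        def forward(self, x):
            # image blur, geometric projection, then sinogram blur
            return self.B_sino @ (self.G @ (self.B_img @ x))

        def backward(self, y):
            # adjoint: transposed factors applied in reverse order
            return self.B_img.T @ (self.G.T @ (self.B_sino.T @ y))

    # tiny usage with random placeholder factors
    n_vox, n_bins = 100, 60
    proj = FactoredProjector(
        sp.random(n_bins, n_bins, density=0.05, format="csr"),
        sp.random(n_bins, n_vox, density=0.05, format="csr"),
        sp.random(n_vox, n_vox, density=0.05, format="csr"),
    )
    sino = proj.forward(np.ones(n_vox))
    ```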

  10. Sparse-view computed tomography image reconstruction via a combination of L(1) and SL(0) regularization.

    PubMed

    Qi, Hongliang; Chen, Zijia; Guo, Jingyu; Zhou, Linghong

    2015-01-01

    Low-dose computed tomography reconstruction is an important issue in the medical imaging domain. Sparse-view scanning has been widely studied as a potential strategy. The compressed sensing (CS) method has shown great potential to reconstruct high-quality CT images from sparse-view projection data. Nonetheless, low-contrast structures tend to be blurred by the total variation (TV, L1-norm of the gradient image) regularization. Moreover, TV produces blocky effects in smooth and edge regions. To overcome this limitation, this study proposes an iterative image reconstruction algorithm that combines L1 regularization and smoothed L0 (SL0) regularization. SL0 is a smooth approximation of the L0 norm and avoids the L0 norm's sensitivity to noise. To evaluate the proposed method, both qualitative and quantitative studies were conducted on a digital Shepp-Logan phantom and a real head phantom. Experimental comparative results indicate that the proposed L1/SL0-POCS algorithm can effectively suppress noise and artifacts, as well as preserve more structural information compared to other existing methods.
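
    The smoothed L0 approximation mentioned above is commonly built from a Gaussian family: each entry contributes exp(-x_i^2 / (2 sigma^2)), which approaches an indicator of x_i = 0 as sigma shrinks, so the sum approximates the number of zero entries. A sketch assuming this standard form:

    ```python
    import numpy as np

    def smoothed_l0(x, sigma):
        """Approximate ||x||_0 by n - sum_i exp(-x_i^2 / (2 sigma^2));
        exact in the limit sigma -> 0, differentiable for sigma > 0."""
        return x.size - np.sum(np.exp(-x**2 / (2.0 * sigma**2)))

    def smoothed_l0_grad(x, sigma):
        """Gradient of the smoothed L0 measure with respect to x."""
        return (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))
    ```

    In SL0-type schemes, sigma is decreased over a schedule while the measure is minimized, gradually tightening the approximation.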

  11. Multiscale Transient Signal Detection: Localizing Transients in Geodetic Data Through Wavelet Transforms and Sparse Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Riel, B.; Simons, M.; Agram, P.

    2012-12-01

    Transients are a class of deformation signals on the Earth's surface that can be described as non-periodic accumulation of strain in the crust. Over seismically and volcanically active regions, these signals are often challenging to detect due to noise and other modes of deformation. Geodetic datasets that provide precise measurements of surface displacement over wide areas are ideal for exploiting both the spatial and temporal coherence of transient signals. We present an extension to the Multiscale InSAR Time Series (MInTS) approach for analyzing geodetic data by combining the localization benefits of wavelet transforms (localizing signals in space) with sparse optimization techniques (localizing signals in time). Our time parameterization approach allows us to reduce geodetic time series to sparse, compressible signals with very few non-zero coefficients corresponding to transient events. We first demonstrate the temporal transient detection by analyzing GPS data over the Long Valley caldera in California and along the San Andreas fault near Parkfield, CA. For Long Valley, we are able to resolve the documented 2002-2003 uplift event with greater temporal precision. Similarly for Parkfield, we model the postseismic deformation by specific integrated basis splines characterized by timescales that are largely consistent with postseismic relaxation times. We then apply our method to ERS and Envisat InSAR datasets consisting of over 200 interferograms for Long Valley and over 100 interferograms for Parkfield. The wavelet transforms reduce the impact of spatially correlated atmospheric noise common in InSAR data since the wavelet coefficients themselves are essentially uncorrelated. The spatial density and extended temporal coverage of the InSAR data allows us to effectively localize ground deformation events in both space and time with greater precision than has been previously accomplished.

  12. Subthreshold membrane responses underlying sparse spiking to natural vocal signals in auditory cortex

    PubMed Central

    Perks, Krista Eva; Gentner, Timothy Q.

    2015-01-01

    Natural acoustic communication signals, such as speech, are typically high-dimensional with a wide range of co-varying spectro-temporal features at multiple timescales. The synaptic and network mechanisms for encoding these complex signals are largely unknown. We are investigating these mechanisms in high-level sensory regions of the songbird auditory forebrain, where single neurons show sparse, object-selective spiking responses to conspecific songs. Using whole-cell in-vivo patch clamp techniques in the caudal mesopallium and the caudal nidopallium of starlings, we examine song-driven subthreshold and spiking activity. We find that both the subthreshold and the spiking activity are reliable (i.e., the same song drives a similar response each time it is presented) and specific (i.e., responses to different songs are distinct). Surprisingly, however, the reliability and specificity of the subthreshold response was uniformly high regardless of when the cell spiked, even for song stimuli that drove no spikes. We conclude that despite a selective and sparse spiking response, high-level auditory cortical neurons are under continuous, non-selective, stimulus-specific synaptic control. To investigate the role of local network inhibition in this synaptic control, we then recorded extracellularly while pharmacologically blocking local GABA-ergic transmission. This manipulation modulated the strength and the reliability of stimulus-driven spiking, consistent with a role for local inhibition in regulating the reliability of network activity and the stimulus specificity of the subthreshold response in single cells. We discuss these results in the context of underlying computations that could generate sparse, stimulus-selective spiking responses, and models for hierarchical pooling. PMID:25728189

  13. Robust detection of premature ventricular contractions using sparse signal decomposition and temporal features.

    PubMed

    Manikandan, M Sabarimalai; Ramkumar, Barathram; Deshpande, Pranav S; Choudhary, Tilendra

    2015-12-01

    An automated noise-robust premature ventricular contraction (PVC) detection method is proposed based on sparse signal decomposition, temporal features, and decision rules. In this Letter, the authors exploit a sparse expansion of electrocardiogram (ECG) signals on mixed dictionaries for simultaneously enhancing the QRS complex and reducing the influence of tall P and T waves, baseline wander, and muscle artefacts. They further investigate a set of ten generalised temporal features combined with a decision-rule-based detection algorithm for discriminating PVC beats from non-PVC beats. The accuracy and robustness of the proposed method are evaluated using 47 ECG recordings from the MIT/BIH arrhythmia database. Evaluation results show that the proposed method achieves an average sensitivity of 89.69% and a specificity of 99.63%. Results further show that the proposed decision-rule-based algorithm with ten generalised features can accurately detect different patterns of PVC beats (uniform and multiform, couplets, triplets, and ventricular tachycardia) in the presence of other normal and abnormal heartbeats.

  14. On the estimation of brain signal entropy from sparse neuroimaging data

    PubMed Central

    Grandy, Thomas H.; Garrett, Douglas D.; Schmiedek, Florian; Werkle-Bergner, Markus

    2016-01-01

    Multi-scale entropy (MSE) has recently been established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of MSE to some classes of neural signals is MSE's apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block-designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in the sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs. PMID:27020961
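
    The computational core of this approach, counting entropy template matches within each segment and pooling the counts so that no template spans a recording gap, can be sketched directly. The code below implements sample entropy pooled over discontinuous segments; a full MSE curve would additionally coarse-grain each segment at every time scale, omitted here for brevity.

    ```python
    import numpy as np

    def sampen_segments(segments, m=2, r=0.2):
        """Sample entropy pooled across discontinuous segments.

        segments : list of 1D arrays, assumed normalized so that the
        tolerance r is comparable across segments.
        """
        def match_count(T):
            # Chebyshev distances between all template pairs
            d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
            return np.sum(d <= r) - len(T)   # exclude self-matches

        A = B = 0
        for x in segments:
            N = len(x)
            if N <= m + 1:
                continue                     # segment too short to use
            B += match_count(np.array([x[i:i + m] for i in range(N - m)]))
            A += match_count(np.array([x[i:i + m + 1]
                                       for i in range(N - m - 1)]))
        return -np.log(A / B) if A > 0 and B > 0 else np.inf
    ```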

  15. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor is relatively large owing to the skylight background and detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an overcomplete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the corresponding coefficients of each block are computed in the overcomplete dictionary. Since the coefficients of the noise and the target differ greatly, the target is extracted by applying a threshold to the coefficients. Experimental results show that the target can be well extracted and that the deviation, RMS and PV of the centroid are all smaller than those of the threshold-subtraction method.
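
    A minimal version of this pipeline, building an overcomplete dictionary of two-dimensional Gaussian spots, computing block coefficients against it, and thresholding them, might look as follows; block size, spot widths, and the reconstruction step are illustrative choices.

    ```python
    import numpy as np

    def gaussian_atom(size, cx, cy, sigma):
        """Unit-norm 2D Gaussian spot on a size x size block."""
        yy, xx = np.mgrid[0:size, 0:size]
        g = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * sigma**2))
        return g / np.linalg.norm(g)

    def build_spot_dictionary(size=8, sigmas=(1.0, 1.5, 2.0)):
        """Overcomplete dictionary: every center position, several widths."""
        atoms = [gaussian_atom(size, cx, cy, s).ravel()
                 for s in sigmas for cx in range(size) for cy in range(size)]
        return np.column_stack(atoms)

    def extract_spot(block, D, thresh):
        """Keep atoms with large coefficients; noise spreads over small
        coefficients and is discarded by the threshold."""
        corr = D.T @ block.ravel()
        keep = np.abs(corr) >= thresh
        if not keep.any():
            return np.zeros_like(block)
        coef, *_ = np.linalg.lstsq(D[:, keep], block.ravel(), rcond=None)
        return (D[:, keep] @ coef).reshape(block.shape)
    ```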

  16. A Tool for Alignment and Averaging of Sparse Fluorescence Signals in Rod-Shaped Bacteria.

    PubMed

    Goudsmits, Joris M H; van Oijen, Antoine M; Robinson, Andrew

    2016-04-26

    Fluorescence microscopy studies have shown that many proteins localize to highly specific subregions within bacterial cells. Analyzing the spatial distribution of low-abundance proteins within cells is highly challenging because information obtained from multiple cells needs to be combined to provide well-defined maps of protein locations. We present (to our knowledge) a novel tool for fast, automated, and user-impartial analysis of fluorescent protein distribution across the short axis of rod-shaped bacteria. To demonstrate the strength of our approach in extracting spatial distributions and visualizing dynamic intracellular processes, we analyzed sparse fluorescence signals from single-molecule time-lapse images of individual Escherichia coli cells. In principle, our tool can be used to provide information on the distribution of signal intensity across the short axis of any rod-shaped object. PMID:27119631

  17. An algorithm for extraction of periodic signals from sparse, irregularly sampled data

    NASA Technical Reports Server (NTRS)

    Wilcox, J. Z.

    1994-01-01

    Temporal gaps in discrete sampling sequences produce spurious Fourier components at the intermodulation frequencies of an oscillatory signal and the temporal gaps, thus significantly complicating spectral analysis of such sparsely sampled data. A new fast Fourier transform (FFT)-based algorithm has been developed, suitable for spectral analysis of sparsely sampled data with a relatively small number of oscillatory components buried in background noise. The algorithm's principal idea has its origin in the so-called 'clean' algorithm used to sharpen images of scenes corrupted by atmospheric and sensor aperture effects. It identifies as the signal's 'true' frequency that oscillatory component which, when passed through the same sampling sequence as the original data, produces a Fourier image that is the best match to the original Fourier space. The algorithm has generally met with success in trials with simulated data with a low signal-to-noise ratio, including those of a type similar to hourly residuals for Earth orientation parameters extracted from VLBI data. For eight oscillatory components in the diurnal and semidiurnal bands, all components with an amplitude-noise ratio greater than 0.2 were successfully extracted for all sequences and duty cycles (greater than 0.1) tested; the amplitude-noise ratios of the extracted signals were as low as 0.05 for high duty cycles and long sampling sequences. When, in addition to these high frequencies, strong low-frequency components are present in the data, the low-frequency components are generally eliminated first, by employing a version of the algorithm that searches for non-integer multiples of the discrete FFT minimum frequency.
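
    The 'clean'-style iteration described here, finding the oscillatory component that best reproduces the residual through the actual gappy sampling and subtracting it, can be sketched with a direct least-squares scan over a candidate frequency grid. This is a simplified stand-in for the FFT-based implementation in the paper.

    ```python
    import numpy as np

    def clean_extract(t, y, freqs, n_components=8, gain=1.0):
        """Extract oscillatory components from irregular samples (t, y).

        Each pass fits a*cos + b*sin at every candidate frequency using
        the same gappy sampling as the data, subtracts the best one, and
        records its (frequency, amplitude, phase).
        """
        resid = y.astype(float).copy()
        found = []
        for _ in range(n_components):
            best = None
            for f in freqs:
                M = np.column_stack([np.cos(2 * np.pi * f * t),
                                     np.sin(2 * np.pi * f * t)])
                ab, *_ = np.linalg.lstsq(M, resid, rcond=None)
                power = np.sum((M @ ab) ** 2)  # energy captured at f
                if best is None or power > best[0]:
                    best = (power, f, ab, M)
            _, f, ab, M = best
            resid -= gain * (M @ ab)           # CLEAN-style subtraction
            # a*cos(th) + b*sin(th) = amp * cos(th - phase)
            found.append((f, np.hypot(ab[0], ab[1]),
                          np.arctan2(ab[1], ab[0])))
        return found
    ```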

  18. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) images can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.

  19. Adaptive-weighted total variation minimization for sparse data toward low-dose x-ray computed tomography image reconstruction.

    PubMed

    Liu, Yan; Ma, Jianhua; Fan, Yi; Liang, Zhengrong

    2012-12-01

    Previous studies have shown that by minimizing the total variation (TV) of the to-be-estimated image with some data and other constraints, piecewise-smooth x-ray computed tomography (CT) images can be reconstructed from sparse-view projection data without introducing notable artifacts. However, due to the piecewise constant assumption for the image, a conventional TV minimization algorithm often suffers from over-smoothness on the edges of the resulting image. To mitigate this drawback, we present an adaptive-weighted TV (AwTV) minimization algorithm in this paper. The presented AwTV model is derived by considering the anisotropic edge property among neighboring image voxels, where the associated weights are expressed as an exponential function and can be adaptively adjusted by the local image-intensity gradient for the purpose of preserving the edge details. Inspired by the previously reported TV-POCS (projection onto convex sets) implementation, a similar AwTV-POCS implementation was developed to minimize the AwTV subject to data and other constraints for the purpose of sparse-view low-dose CT image reconstruction. To evaluate the presented AwTV-POCS algorithm, both qualitative and quantitative studies were performed by computer simulations and phantom experiments. The results show that the presented AwTV-POCS algorithm can yield images with several notable gains, in terms of noise-resolution tradeoff plots and full-width at half-maximum values, as compared to the corresponding conventional TV-POCS algorithm.
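
    The defining feature of the AwTV penalty above is the exponential weight driven by the local intensity gradient, so strong edges are penalized less. A simplified 2D sketch follows (the paper's voxel-wise neighborhood discretization is reduced to aligned forward differences):

    ```python
    import numpy as np

    def awtv(img, delta=0.1, eps=1e-12):
        """Adaptive-weighted TV with weights w = exp(-(grad/delta)^2)."""
        gx = np.diff(img, axis=0)[:, :-1]   # align the two gradient grids
        gy = np.diff(img, axis=1)[:-1, :]
        wx = np.exp(-(gx / delta) ** 2)
        wy = np.exp(-(gy / delta) ** 2)
        return np.sum(np.sqrt(wx * gx**2 + wy * gy**2 + eps))
    ```

    Small delta lets the weights decay quickly at edges (preserving them), while large delta recovers ordinary TV behavior.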

  20. Learning distance function for regression-based 4D pulmonary trunk model reconstruction estimated from sparse MRI data

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Tsymbal, Alexey; Ionasec, Razvan; Georgescu, Bogdan; Zhou, Shaohua K.; Hornegger, Joachim; Comaniciu, Dorin

    2011-03-01

    Congenital heart defect (CHD) is the most common birth defect and a frequent cause of death for children. Tetralogy of Fallot (ToF) is the most frequently occurring CHD and affects in particular the pulmonary valve and trunk. Emerging interventional methods enable percutaneous pulmonary valve implantation, which constitutes an alternative to open heart surgery. While minimally invasive methods become common practice, imaging and non-invasive assessment tools become crucial components in the clinical setting. Cardiac computed tomography (CT) and cardiac magnetic resonance imaging (cMRI) are techniques with complementary properties and the ability to acquire multiple non-invasive and accurate scans required for advanced evaluation and therapy planning. In contrast to CT, which covers the full 4D information over the cardiac cycle, cMRI often acquires partial information, for example only one 3D scan of the whole heart in the end-diastolic phase and two 2D planes (long and short axes) over the whole cardiac cycle. The data acquired in this way are called sparse cMRI. In this paper, we propose a regression-based approach for the reconstruction of the full 4D pulmonary trunk model from sparse MRI. The reconstruction approach is based on learning a distance function between the sparse MRI, which needs to be completed, and the 4D CT data with the full information used as the training set. The distance is based on the intrinsic Random Forest similarity which is learnt for the corresponding regression problem of predicting coordinates of unseen mesh points. Extensive experiments performed on 80 cardiac CT and MR sequences demonstrated an average processing time of 10 seconds and an accuracy of 0.1053 mm mean absolute error for the proposed approach. Using the case retrieval workflow and local nearest neighbour regression with the learnt distance function appears to be competitive with respect to "black box" regression with immediate prediction of coordinates, while providing transparency to the

  1. Sparsely corrupted stimulated scattering signals recovery by iterative reweighted continuous basis pursuit

    SciTech Connect

    Wang, Kunpeng; Chai, Yi; Su, Chunxiao

    2013-08-15

    In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as a superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitudes, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulations and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.
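
    The reweighted ℓ1 component alternates an ordinary weighted sparse solve with a weight update that penalizes large coefficients less. The sketch below uses ISTA for the inner solve and the common update w_i = 1/(|x_i| + eps); the continuous-basis-pursuit dictionary refinement that the paper interleaves is not shown.

    ```python
    import numpy as np

    def reweighted_l1(y, A, n_outer=5, lam=0.1, eps=0.1, n_inner=200):
        """Iterative reweighted l1 recovery (generic sketch)."""
        n = A.shape[1]
        w = np.ones(n)
        x = np.zeros(n)
        L = 2.0 * np.linalg.norm(A, 2) ** 2   # ISTA step size
        for _ in range(n_outer):
            for _ in range(n_inner):          # weighted BPDN via ISTA
                v = x - 2.0 * A.T @ (A @ x - y) / L
                x = np.sign(v) * np.maximum(np.abs(v) - lam * w / L, 0.0)
            w = 1.0 / (np.abs(x) + eps)       # reweight for the next pass
        return x
    ```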

  2. Cosparsity-based Stagewise Matching Pursuit algorithm for reconstruction of the cosparse signals

    NASA Astrophysics Data System (ADS)

    Wu, Di; Zhao, Yuxin; Wang, Wenwu; Hao, Yanling

    2015-12-01

    The cosparse analysis model has been introduced as an interesting alternative to the standard sparse synthesis model. Given a set of corrupted measurements, finding a signal belonging to this model is known as analysis pursuit, which is an important problem in analysis-model-based sparse representation. Several pursuit methods have already been proposed, such as methods based on l1-relaxation and greedy approaches based on the cosparsity of the signal. This paper presents a novel greedy-like algorithm, called Cosparsity-based Stagewise Matching Pursuit (CSMP), where the cosparsity of the target signal is estimated adaptively with a stagewise approach composed of forward and backward processes. In the forward process, the cosparsity is estimated and the signal is approximated, followed by the refinement of the cosparsity and the signal in the backward process. As a result, the target signal can be reconstructed without prior information on the cosparsity level. Experiments show that the performance of the proposed algorithm is comparable to those of l1-relaxation and Analysis Subspace Pursuit (ASP)/Analysis Compressive Sampling Matching Pursuit (ACoSaMP) in the noiseless case and better than that of Greedy Analysis Pursuit (GAP) in the noisy case.

  3. Range resolution improvement of eyesafe ladar testbed (ELT) measurements using sparse signal deconvolution

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Gunther, Jacob H.

    2014-06-01

    The Eyesafe Ladar Test-bed (ELT) is an experimental ladar system with the capability of digitizing return laser pulse waveforms at 2 GHz. These waveforms can then be exploited off-line in the laboratory to develop signal processing techniques for noise reduction, range resolution improvement, and range discrimination between two surfaces of similar range interrogated by a single laser pulse. This paper presents the results of experiments with new deconvolution algorithms aimed at improving the range discrimination of the ladar system. The sparsity of ladar returns is exploited to solve the deconvolution problem in two steps. The first step is to estimate a point target response using a database of measured calibration data. This basic target response is used to construct a dictionary of target responses with different delays/ranges. Using this dictionary, ladar returns from a wide variety of surface configurations can be synthesized by taking linear combinations. A sparse linear combination matches the physical reality that ladar returns consist of the overlapping of only a few pulses. The dictionary construction process is a pre-processing step that is performed only once. The deconvolution step is performed by minimizing the error between the measured ladar return and the dictionary model while constraining the coefficient vector to be sparse. Other constraints, such as the non-negativity of the coefficients, are also applied. The results of the proposed technique are presented in the paper and are shown to compare favorably with previously investigated deconvolution techniques.
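
    The two-step structure, a dictionary of delayed copies of the calibrated pulse followed by a sparse non-negative fit, is simple to sketch. Below, the dictionary holds the pulse at every integer sample delay and the fit is a projected ISTA; the solver and constraint details in the paper may differ.

    ```python
    import numpy as np

    def shifted_pulse_dictionary(pulse, n):
        """Atoms are the calibrated pulse at every integer delay."""
        p = len(pulse)
        D = np.zeros((n, n - p + 1))
        for k in range(n - p + 1):
            D[k:k + p, k] = pulse
        return D

    def nonneg_sparse_deconv(y, D, lam=0.1, n_iter=500):
        """min ||y - Dx||^2 + lam * sum(x) subject to x >= 0, via
        projected ISTA; x gives pulse amplitudes per delay/range bin."""
        x = np.zeros(D.shape[1])
        L = 2.0 * np.linalg.norm(D, 2) ** 2
        for _ in range(n_iter):
            grad = 2.0 * D.T @ (D @ x - y)
            x = np.maximum(x - (grad + lam) / L, 0.0)
        return x
    ```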

  4. Signal discovery, limits, and uncertainties with sparse on/off measurements: an objective bayesian analysis

    SciTech Connect

    Knoetig, Max L.

    2014-08-01

    For decades researchers have studied the On/Off counting problem, where a measured rate consists of two parts. One part is due to a signal process and the other is due to a background process, the magnitudes of both of which are unknown. While most frequentist methods are adequate for large number counts, they cannot be applied to sparse data. Here, I present a new objective Bayesian solution that depends on only three parameters: the number of events in the signal region, the number of events in the background region, and the ratio of the exposure for both regions. First, the probability of the counts being due to background alone is derived analytically. Second, the marginalized posterior for the signal parameter is also derived analytically. With this two-step approach it is easy to calculate the signal's significance, strength, uncertainty, or upper limit in a unified way. This approach is valid without restrictions for any number count, including zero, and may be widely applied in particle physics, cosmic-ray physics, and high-energy astrophysics. In order to demonstrate the performance of this approach, I apply the method to gamma-ray burst data.
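
    Although the paper derives the marginal posterior analytically, the underlying on/off model is easy to evaluate numerically as a cross-check: on-region counts are Poisson in s + alpha*b, off-region counts are Poisson in b, and b is integrated out. The sketch below uses flat priors for simplicity rather than the paper's objective priors.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import poisson

    def signal_posterior(s, n_on, n_off, alpha, b_max=1e3):
        """Unnormalized marginal posterior density of the signal rate s,
        with the background rate b integrated out under flat priors."""
        def integrand(b):
            return poisson.pmf(n_on, s + alpha * b) * poisson.pmf(n_off, b)
        val, _ = quad(integrand, 0.0, b_max)
        return val

    # normalize over a grid of signal strengths
    s_grid = np.linspace(0.0, 20.0, 201)
    post = np.array([signal_posterior(s, n_on=5, n_off=2, alpha=0.2)
                     for s in s_grid])
    post /= np.trapz(post, s_grid)
    ```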

  5. Sparse representation of MER signals for localizing the Subthalamic Nucleus in Parkinson's disease surgery.

    PubMed

    Vargas Cardona, Hernán Darío; Álvarez, Mauricio A; Orozco, Álvaro A

    2014-01-01

    Deep brain stimulation (DBS) of the Subthalamic Nucleus (STN) is the best method for treating advanced Parkinson's disease (PD), leading to striking improvements in motor function and quality of life of PD patients. During DBS, online analysis of microelectrode recording (MER) signals is a powerful tool to locate the STN. Therapeutic outcomes depend on a precise positioning of a stimulator device in the target area. In this paper, we show how a sparse representation of MER signals allows the extraction of discriminant features, improving the accuracy in identification of the STN. We apply three techniques for over-complete representation of signals: Method of Frames (MOF), Best Orthogonal Basis (BOB) and Basis Pursuit (BP). All the techniques are compared to classical methods for signal processing like the Wavelet Transform (WT), and a more sophisticated method known as adaptive Wavelet with lifting schemes (AW-LS). We apply each processing method to two real databases and evaluate its performance with simple supervised classifiers. Classification outcomes for MOF, BOB and BP clearly outperform WT and AW-LS in all classifiers for both databases, reaching accuracy values over 98%.

  6. Weak signal detection in hyperspectral imagery using sparse matrix transform (SMT) covariance estimation

    SciTech Connect

    Theiler, James P; Cao, Guangzhi; Bouman, Charles A

    2009-01-01

    Many detection algorithms in hyperspectral image analysis, from well-characterized gaseous and solid targets to deliberately uncharacterized anomalies and anomalous changes, depend on accurately estimating the covariance matrix of the background. In practice, the background covariance is estimated from samples in the image, and imprecision in this estimate can lead to a loss of detection power. In this paper, we describe the sparse matrix transform (SMT) and investigate its utility for estimating the covariance matrix from a limited number of samples. The SMT is formed by a product of pairwise coordinate (Givens) rotations, which can be efficiently estimated using greedy optimization. Experiments on hyperspectral data show that the estimate accurately reproduces even small eigenvalues and eigenvectors. In particular, we find that using the SMT to estimate the covariance matrix used in the adaptive matched filter leads to consistently higher signal-to-noise ratios.
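
    The greedy pairwise-rotation construction can be sketched as a Jacobi-style sweep: repeatedly pick the most correlated coordinate pair of the working covariance and apply the Givens rotation that decorrelates it, accumulating the rotations into E so that S ~= E diag(d) E^T. Pair selection and the stopping rule are simplified relative to the paper.

    ```python
    import numpy as np

    def smt_covariance(S, n_rotations):
        """Greedy Givens-rotation (SMT-style) covariance factorization."""
        p = S.shape[0]
        E = np.eye(p)
        C = S.copy()
        for _ in range(n_rotations):
            # pick the pair with the largest squared correlation
            corr2 = C**2 / np.outer(np.diag(C), np.diag(C))
            np.fill_diagonal(corr2, 0.0)
            i, j = np.unravel_index(np.argmax(corr2), corr2.shape)
            # Givens angle that zeroes C[i, j] (Jacobi rotation)
            theta = 0.5 * np.arctan2(-2.0 * C[i, j], C[i, i] - C[j, j])
            G = np.eye(p)
            c, s = np.cos(theta), np.sin(theta)
            G[i, i] = G[j, j] = c
            G[i, j], G[j, i] = s, -s
            C = G.T @ C @ G
            E = E @ G
        return E, np.diag(C)
    ```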

  7. Adaptive sparse signal processing for discrimination of satellite-based radiofrequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2015-05-01

    For over two decades, Los Alamos National Laboratory programs have included an active research effort utilizing satellite observations of terrestrial lightning to learn more about the Earth's RF background. The FORTE satellite provided a rich satellite lightning database, which has been previously used for some event classification, and remains relevant for advancing lightning research. Lightning impulses are dispersed as they travel through the ionosphere, appearing as nonlinear chirps at the receiver on orbit. The data processing challenge arises from the combined complexity of the lightning source model, the propagation medium nonlinearities, and the sensor artifacts. We continue to develop modern event classification capability on the FORTE database using adaptive signal processing combined with compressive sensing techniques. The focus of our work is improved feature extraction using sparse representations in overcomplete analytical dictionaries. We explore two possible techniques for detecting lightning events, and showcase the algorithms on a few representative data examples. We present preliminary results of our work and discuss future development.

  8. Sparse Reconstruction for Temperature Distribution Using DTS Fiber Optic Sensors with Applications in Electrical Generator Stator Monitoring

    PubMed Central

    Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos

    2016-01-01

    This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings of fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040

  11. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-01

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch-based sparsity on a dictionary provides the regularization, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, the dictionary can either be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results on Monte Carlo simulations and real data.
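
    For orientation, the unpenalized core of such a framework is the classical Poisson MLEM update; the paper's contribution is to add a dictionary-sparsity penalty to this likelihood, which the sketch below deliberately omits. A is an assumed system matrix and y the measured counts.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Plain maximum-likelihood EM for Poisson emission data y ~ Poisson(Ax):
    x <- x * A^T(y / Ax) / A^T 1. No sparsity penalty included here."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)
        x *= (A.T @ (y / proj)) / np.maximum(sens, eps)
    return x
```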

  12. Improved sparse reconstruction for fluorescence molecular tomography with L1/2 regularization

    PubMed Central

    Guo, Hongbo; Yu, Jingjing; He, Xiaowei; Hou, Yuqing; Dong, Fang; Zhang, Shuling

    2015-01-01

    Fluorescence molecular tomography (FMT) is a promising imaging technique that allows in vivo visualization of molecular-level events associated with disease progression and treatment response. Accurate and efficient 3D reconstruction algorithms will facilitate the wide use of FMT in preclinical research. Here, we utilize L1/2-norm regularization for improving FMT reconstruction. To efficiently solve the nonconvex L1/2-norm penalized problem, we transform it into a weighted L1-norm minimization problem and employ a homotopy-based iterative reweighting algorithm to recover small fluorescent targets. Both simulations on a heterogeneous mouse model and in vivo experiments demonstrated that the proposed L1/2-norm method outperformed the comparative L1-norm reconstruction methods in terms of location accuracy, spatial resolution and quantitation of fluorescent yield. Furthermore, simulation analysis showed the robustness of the proposed method under different levels of measurement noise and numbers of excitation sources. PMID:26137370
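
    The transformation to a weighted L1 problem can be sketched directly. Assuming a generic linear forward model A (standing in for the FMT system matrix), the outer loop below reweights each coefficient by 1/(sqrt(|x_i|)+eps) and re-solves a weighted-l1 problem by proximal gradient; the paper's homotopy schedule is not reproduced.

```python
import numpy as np

def weighted_l1_ista(A, y, w, lam, L, n_iter=200, x0=None):
    """Proximal gradient for min 0.5||Ax - y||^2 + lam * sum(w_i |x_i|)."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def l_half_via_reweighting(A, y, lam, outer=10, eps=1e-3):
    """Handle the nonconvex L1/2 penalty as a sequence of weighted-l1 solves,
    with weights w_i = 1 / (sqrt(|x_i|) + eps)."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        w = 1.0 / (np.sqrt(np.abs(x)) + eps)
        x = weighted_l1_ista(A, y, w, lam, L, x0=x)
    return x
```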

  13. Real time reconstruction of quasiperiodic multi parameter physiological signals

    NASA Astrophysics Data System (ADS)

    Ganeshapillai, Gartheeban; Guttag, John

    2012-12-01

    A modern intensive care unit (ICU) has automated analysis systems that depend on continuous uninterrupted real time monitoring of physiological signals such as electrocardiogram (ECG), arterial blood pressure (ABP), and photo-plethysmogram (PPG). These signals are often corrupted by noise, artifacts, and missing data. We present an automated learning framework for real time reconstruction of corrupted multi-parameter nonstationary quasiperiodic physiological signals. The key idea is to learn a patient-specific model of the relationships between signals, and then reconstruct corrupted segments using the information available in correlated signals. We evaluated our method on MIT-BIH arrhythmia data, a two-channel ECG dataset with many clinically significant arrhythmias, and on the CinC challenge 2010 data, a multi-parameter dataset containing ECG, ABP, and PPG. For each, we evaluated both the residual distance between the original signals and the reconstructed signals, and the performance of a heartbeat classifier on a reconstructed ECG signal. At an SNR of 0 dB, the average residual distance on the CinC data was roughly 3% of the energy in the signal, and on the arrhythmia database it was roughly 16%. The difference is attributable to the large amount of diversity in the arrhythmia database. Remarkably, despite the relatively high residual difference, the classification accuracy on the arrhythmia database was still 98%, indicating that our method restored the physiologically important aspects of the signal.

  14. Sparse-view spectral CT reconstruction using spectral patch-based low-rank penalty.

    PubMed

    Kim, Kyungsang; Ye, Jong Chul; Worstell, William; Ouyang, Jinsong; Rakvongthai, Yothin; El Fakhri, Georges; Li, Quanzheng

    2015-03-01

    Spectral computed tomography (CT) is a promising technique with the potential for improving lesion detection, tissue characterization, and material decomposition. In this paper, we are interested in kVp switching-based spectral CT that alternates distinct kVp X-ray transmissions during gantry rotation. This system can acquire multiple X-ray energy transmissions without additional radiation dose. However, only sparse views are generated for each spectral measurement; and the spectra themselves are limited in number. To address these limitations, we propose a penalized maximum likelihood method using spectral patch-based low-rank penalty, which exploits the self-similarity of patches that are collected at the same position in spectral images. The main advantage is that the relatively small number of materials within each patch allows us to employ the low-rank penalty that is less sensitive to intensity changes while preserving edge directions. In our optimization formulation, the cost function consists of the Poisson log-likelihood for X-ray transmission and the nonconvex patch-based low-rank penalty. Since the original cost function is difficult to minimize directly, we propose an optimization method using separable quadratic surrogate and concave convex procedure algorithms for the log-likelihood and penalty terms, which results in an alternating minimization that provides a computational advantage because each subproblem can be solved independently. We performed computer simulations and a real experiment using a kVp switching-based spectral CT with sparse-view measurements, and compared the proposed method with conventional algorithms. We confirmed that the proposed method improves spectral images both qualitatively and quantitatively. Furthermore, our GPU implementation significantly reduces the computational cost.
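
    The workhorse behind a patch-based low-rank penalty is singular-value soft-thresholding, applied to a matrix whose columns are the same patch taken from each spectral image. The sketch below shows only that proximal step; embedding it in the paper's surrogate-based penalized-likelihood iteration is omitted, and the threshold tau is an assumed tuning parameter.

```python
import numpy as np

def patch_lowrank_prox(P, tau):
    """Singular-value soft-thresholding of a patch-across-energies matrix P
    (pixels_per_patch x n_energy_bins): the prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```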

  15. Airborne gravimetry data sparse reconstruction via L1-norm convex quadratic programming

    NASA Astrophysics Data System (ADS)

    Yang, Ya-Peng; Wu, Mei-Ping; Tang, Gang

    2015-06-01

    In practice, airborne gravimetry is a sub-Nyquist sampling method because of the restrictions imposed by national boundaries, financial cost, and database size. In this study, we analyze the sparsity of airborne gravimetry data by using the discrete Fourier transform and propose a reconstruction method based on the theory of compressed sensing for large-scale gravity anomaly data. The reconstruction of the gravity anomaly data is thus transformed into an L1-norm convex quadratic programming problem, which we solve by combining the preconditioned conjugate gradient algorithm (PCG) with an improved interior-point method (IPM). Furthermore, a flight test was carried out with the homegrown strapdown airborne gravimeter SGA-WZ. We reconstructed the gravity anomaly data from the flight test and compared the proposed method with the linear interpolation method commonly used in airborne gravimetry. The test results show that the PCG-IPM algorithm can reconstruct large-scale gravity anomaly data more accurately and effectively than linear interpolation.
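
    The quadratic-programming formulation can be made concrete with the standard positive/negative split x = u - v, u, v >= 0, which turns the l1 term into a linear one. The sketch below solves the resulting bound-constrained QP with a generic quasi-Newton solver rather than the paper's PCG-IPM combination; A is an assumed matrix mapping Fourier coefficients (where the anomaly is taken to be sparse) to the flight-line samples.

```python
import numpy as np
from scipy.optimize import minimize

def l1_qp_reconstruct(A, y, lam):
    """min 0.5||A(u - v) - y||^2 + lam * 1^T (u + v), subject to u, v >= 0."""
    n = A.shape[1]

    def fun_grad(z):
        u, v = z[:n], z[n:]
        r = A @ (u - v) - y
        g = A.T @ r
        return 0.5 * r @ r + lam * z.sum(), np.concatenate([g + lam, -g + lam])

    res = minimize(fun_grad, np.zeros(2 * n), jac=True, method="L-BFGS-B",
                   bounds=[(0.0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]          # the reconstructed coefficients x
```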

  16. Reconstruction of ECG signals in presence of corruption.

    PubMed

    Ganeshapillai, Gartheeban; Liu, Jessica F; Guttag, John

    2011-01-01

    We present an approach to identifying and reconstructing corrupted regions in a multi-parameter physiological signal. The method, which uses information in correlated signals, is specifically designed to preserve clinically significant aspects of the signals. We use template matching to jointly segment the multi-parameter signal, morphological dissimilarity to estimate the quality of the signal segment, similarity search using features on a database of templates to find the closest match, and time-warping to reconstruct the corrupted segment with the matching template. In experiments carried out on the MIT-BIH Arrhythmia Database, a two-parameter database with many clinically significant arrhythmias, our method improved the classification accuracy of the beat type by more than 7 times on a signal corrupted with white Gaussian noise, and increased the similarity to the original signal, as measured by the normalized residual distance, by more than 2.5 times. PMID:22255158

  18. Dual energy CT with one full scan and a second sparse-view scan using structure preserving iterative reconstruction (SPIR)

    NASA Astrophysics Data System (ADS)

    Wang, Tonghe; Zhu, Lei

    2016-09-01

    Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme that requires one full scan and a second sparse-view scan, for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on the second CT image since the DECT scans are performed on the same object. The second CT image is reconstructed from reduced projections by an iterative algorithm that minimizes the total variation of the difference between the image and its filtered version under a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix, we refer to the algorithm as structure preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR) and prior-image-constrained-compressed-sensing (PICCS). SPIR with a second 10-view scan reduces the image noise STD by one order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves on TVR for the reconstruction accuracy of a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR at 50- and 20-view scans on spatial resolution, with the frequency at a modulation transfer function value of 10% higher by an average factor of 4. Compared with the 20-view scan PICCS result, the SPIR image has 7 times lower noise STD with similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an

  19. Source Reconstruction for Spectrally-resolved Bioluminescence Tomography with Sparse A priori Information

    PubMed Central

    Lu, Yujie; Zhang, Xiaoqun; Douraghy, Ali; Stout, David; Tian, Jie; Chan, Tony F.; Chatziioannou, Arion F.

    2009-01-01

    Through restoration of the light source information in small animals in vivo, optical molecular imaging, such as fluorescence molecular tomography (FMT) and bioluminescence tomography (BLT), can depict biological and physiological changes observed using molecular probes. A priori information plays an indispensable role in tomographic reconstruction. As a type of a priori information, the sparsity characteristic of the light source has not been sufficiently considered to date. In this paper, we introduce a compressed sensing method to develop a new tomographic algorithm for spectrally-resolved bioluminescence tomography. This method uses the nature of the source sparsity to improve the reconstruction quality with a regularization implementation. Based on verification of the inverse crime, the proposed algorithm is validated with Monte Carlo-based synthetic data and the popular Tikhonov regularization method. Testing with different noise levels and single/multiple source settings at different depths demonstrates the improved performance of this algorithm. Experimental reconstruction with a mouse-shaped phantom further shows the potential of the proposed algorithm. PMID:19434138

  20. Sparse matrix beamforming and image reconstruction for 2-D HIFU monitoring using harmonic motion imaging for focused ultrasound (HMIFU) with in vitro validation.

    PubMed

    Hou, Gary Y; Provost, Jean; Grondin, Julien; Wang, Shutao; Marquet, Fabrice; Bunting, Ethan; Konofagou, Elisa E

    2014-11-01

    Harmonic motion imaging for focused ultrasound (HMIFU) utilizes an amplitude-modulated HIFU beam to induce a localized focal oscillatory motion that is simultaneously estimated. The objective of this study is to develop and show the feasibility of a novel fast beamforming algorithm for image reconstruction using GPU-based sparse-matrix operations with real-time feedback. The algorithm was implemented on a fully integrated, clinically relevant HMIFU system. A single divergent transmit beam was used, while fast beamforming was implemented using a GPU-based delay-and-sum method and a sparse-matrix operation. Axial HMI displacements were then estimated from the RF signals using a 1-D normalized cross-correlation method and streamed to a graphical user interface with frame rates up to 15 Hz, a 100-fold increase compared to conventional CPU-based processing. The real-time feedback rate does not require interrupting the HIFU treatment. Phantom experiments showed reproducible HMI images, and monitoring of 22 in vitro HIFU treatments using the new 2-D system demonstrated reproducible displacement imaging with a consistent average focal displacement decrease of 46.7±14.6% during lesion formation. Complementary focal temperature monitoring also indicated average rates of displacement increase and decrease with focal temperature of 0.84±1.15%/°C and 2.03±0.93%/°C, respectively. These results reinforce the capability of HMIFU to estimate and monitor stiffness-related changes in real time. Ongoing studies include clinical translation of the presented system for monitoring HIFU treatment of breast and pancreatic tumors.
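
    The sparse-matrix trick is that, for fixed geometry, delay-and-sum is a linear map from the raveled RF data to the image, so it can be precomputed once as a sparse matrix and applied per frame as a single multiply. The sketch below builds such an operator with nearest-sample delays and a transmit assumed to diverge from the array origin; apodization, interpolation, and the GPU port are omitted.

```python
import numpy as np
from scipy.sparse import csr_matrix

def das_operator(elem_x, pix_x, pix_z, fs, c, n_samp):
    """Sparse delay-and-sum matrix B so that image = B @ rf.ravel(),
    with rf of shape (n_elements, n_samp)."""
    rows, cols, vals = [], [], []
    for p in range(len(pix_x)):
        d_tx = np.hypot(pix_x[p], pix_z[p])          # transmit path (origin)
        for e in range(len(elem_x)):
            d_rx = np.hypot(pix_x[p] - elem_x[e], pix_z[p])  # receive path
            samp = int(round((d_tx + d_rx) / c * fs))
            if samp < n_samp:
                rows.append(p)
                cols.append(e * n_samp + samp)
                vals.append(1.0)
    return csr_matrix((vals, (rows, cols)),
                      shape=(len(pix_x), len(elem_x) * n_samp))
```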

  1. Sparse sampling and reconstruction for electron and scanning probe microscope imaging

    DOEpatents

    Anderson, Hyrum; Helms, Jovana; Wheeler, Jason W.; Larson, Kurt W.; Rohrer, Brandon R.

    2015-07-28

    Systems and methods for conducting electron or scanning probe microscopy are provided herein. In a general embodiment, the systems and methods for conducting electron or scanning probe microscopy with an undersampled data set include: driving an electron beam or probe to scan across a sample and visit a subset of pixel locations of the sample that are randomly or pseudo-randomly designated; determining actual pixel locations on the sample that are visited by the electron beam or probe; and processing data collected by detectors from the visits of the electron beam or probe at the actual pixel locations and recovering a reconstructed image of the sample.

  2. Adaptive sparse signal processing of satellite-based radio frequency (RF) recordings of lightning events

    NASA Astrophysics Data System (ADS)

    Moody, Daniela I.; Smith, David A.

    2014-05-01

    Ongoing research at Los Alamos National Laboratory studies the Earth's radio frequency (RF) background utilizing satellite-based RF observations of terrestrial lightning. Such impulsive events are dispersed through the ionosphere and appear as broadband nonlinear chirps at a receiver on-orbit. They occur in the presence of additive noise and structured clutter, making their classification challenging. The Fast On-orbit Recording of Transient Events (FORTE) satellite provided a rich RF lightning database. Application of modern pattern recognition techniques to this database may further lightning research in the scientific community, and potentially improve on-orbit processing and event discrimination capabilities for future satellite payloads. Conventional feature extraction techniques using analytical dictionaries, such as a short-time Fourier basis or wavelets, are not comprehensively suitable for analyzing the broadband RF pulses under consideration here. We explore an alternative approach based on non-analytical dictionaries learned directly from data, and extend two dictionary learning algorithms, K-SVD and Hebbian, for use with satellite RF data. Both algorithms allow us to learn features without relying on analytical constraints or additional knowledge about the expected signal characteristics. We then use a pursuit search over the learned dictionaries to generate sparse classification features, and discuss their performance in terms of event classification. We also use principal component analysis to analyze and compare the respective learned dictionary spaces to the real data space.
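
    For reference, the distinctive K-SVD step is the rank-1 update of each atom against the residual of the signals that use it. The sketch below shows one update sweep under the assumption that a pursuit stage (such as OMP) has already produced sparse codes X; initialization, dead-atom replacement, and the Hebbian variant are omitted.

```python
import numpy as np

def ksvd_atom_sweep(D, X, Y):
    """One K-SVD sweep. D: dictionary (n x K), X: sparse codes (K x N),
    Y: training signals (n x N). Updates D and X in place and returns them."""
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]       # signals whose code uses atom k
        if users.size == 0:
            continue
        # residual of those signals with atom k's contribution removed
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]                 # best rank-1 fit: new atom ...
        X[k, users] = s[0] * Vt[0]        # ... and its updated coefficients
    return D, X
```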

  3. Light field reconstruction robust to signal dependent noise

    NASA Astrophysics Data System (ADS)

    Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai

    2014-11-01

    Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise remains a significant issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We built a prototype by modifying an off-the-shelf camera for data capture and proved the concept. The effectiveness of this method is validated with experiments on the real captured data.

  4. A single-frame terahertz image super-resolution reconstruction method based on sparse representation theory

    NASA Astrophysics Data System (ADS)

    Li, Yue; Zhao, Yuan-meng; Deng, Chao; Zhang, Cunlin

    2014-11-01

    Terrorist attacks have made public safety a focus of national attention. Passive terahertz security instruments can help overcome some shortcomings of current security instruments. Terahertz waves have strong penetrating power and can pass through clothing without harming human bodies or the detected objects. However, in lab experiments, we found that the original terahertz images obtained by the passive terahertz technique were often too blurry to detect objects of interest. Prior studies suggest that learning-based image super-resolution reconstruction (SRR) can address this problem. To our knowledge, we applied the learning-based image SRR method for the first time to single-frame passive terahertz image processing. Experimental results showed that the processed passive terahertz images were clearer, making suspicious objects easier to identify than in the original images. We also compared our method with three conventional methods, over which it showed a clear advantage.

  5. Chaotic signal reconstruction with application to noise radar system

    NASA Astrophysics Data System (ADS)

    Liu, Lidong; Hu, Jinfeng; He, Zishu; Han, Chunlin; Li, Huiyong; Li, Jun

    2011-12-01

    Chaotic signals are potentially attractive in engineering applications, most of which require an accurate estimate of the actual chaotic signal from a noisy background. In this article, we present an improved symbolic dynamics-based method (ISDM) for accurately estimating the initial condition of a chaotic signal corrupted by noise. We then propose a new method for chaotic signal reconstruction based on ISDM, called the piecewise estimation method (PEM). The reconstruction performance of PEM is much better than that of existing initial-condition estimation methods. PEM is then applied in a noncoherent-reception noise radar scheme, yielding an improved noncoherent reception scheme. The simulation results show that the improved scheme has better correlation performance and range resolution, especially at low signal-to-noise ratios (SNRs).

  6. Signal enhanced holographic fluorescence microscopy with guide-star reconstruction.

    PubMed

    Jang, Changwon; Clark, David C; Kim, Jonghyun; Lee, Byoungho; Kim, Myung K

    2016-04-01

    We propose a signal-enhanced guide-star reconstruction method for holographic fluorescence microscopy. In the late 2000s, incoherent digital holography began to be vigorously studied by several groups to overcome the limitations of conventional digital holography. The basic concept of incoherent digital holography is to acquire a complex hologram from incoherent light by utilizing the temporal coherency of a spatially incoherent light source. The advent of incoherent digital holography opened the new possibility of holographic fluorescence microscopy (HFM), which was difficult to achieve with conventional digital holography. However, HFM suffers from a low and noisy signal, which slows down the system and degrades imaging quality. When guide-star reconstruction is adopted, the image reconstruction gives an improved result compared to the conventional propagation reconstruction method. The guide-star reconstruction method yields a higher imaging signal-to-noise ratio because the acquired complex point spread function provides optimal system-adaptive information and can restore a signal buried in noise more efficiently. We present theoretical explanation and simulation as well as experimental results. PMID:27446653

  8. Adaptive filter for reconstruction of stereo audio signals

    NASA Astrophysics Data System (ADS)

    Cisowski, Krzysztof

    2004-05-01

    The paper presents a new approach to the reconstruction of impulsively disturbed stereo audio signals. The problems of restoring large blocks of missing samples are outlined, and existing methods for removing the covariance defect are discussed. A model of the stereophonic signal is defined, and a Kalman filter appropriate for this model is introduced. Modifications of the filter leading to a new method for reconstructing blocks of missing samples are discussed. The projection-based algorithm recovers samples of the left (or right) stereo channel using additional information contained in the undistorted samples of the other channel.

  9. Characterizing and differentiating task-based and resting state fMRI signals via two-stage sparse representations.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming

    2016-03-01

    A relatively underexplored question in fMRI is whether there are intrinsic differences in signal composition patterns that can effectively characterize and differentiate task-based or resting-state fMRI (tfMRI or rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were composed into a big data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from both tfMRI and rsfMRI data across multiple subjects were composed into another big data matrix, which was further sparsely represented by a cross-subjects common dictionary and a weight matrix. This framework has been applied to the recently released Human Connectome Project (HCP) fMRI data, and experimental results revealed that there are distinctive and descriptive atoms in the cross-subjects common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted; e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big data of tfMRI and rsfMRI signals across all subjects in the HCP Q1 release. PMID:25732072

  10. Signal Reconstruction and Performance of the ATLAS Hadronic calorimeter

    NASA Astrophysics Data System (ADS)

    ATLAS Collaboration

    2014-03-01

    The Tile Calorimeter (TileCal) of the ATLAS experiment is the hadronic calorimeter designed for energy reconstruction of hadrons, jets, tau particles and missing transverse energy. The latest results on calibration, signal reconstruction and performance of the TileCal detector using pp collision data are presented, including studies of the TileCal response to single isolated charged particles and of the noise description with increasing pile-up. In addition, TileCal upgrade plans are discussed.

  11. Reconstructing for joint angles on the shoulder and elbow from non-invasive electroencephalographic signals through electromyography

    PubMed Central

    Choi, Kyuwan

    2013-01-01

    In this study, the cortical activities over 2240 vertices on the brain were first estimated from 64-channel electroencephalography (EEG) signals using hierarchical Bayesian estimation while 5 subjects performed continuous arm-reaching movements. From the estimated cortical activities, a sparse linear regression method selected only the features useful for reconstructing the electromyography (EMG) signals and estimated the EMG signals of 9 arm muscles. A modular artificial neural network then estimated four joint angles from the estimated EMG signals of the 9 muscles, with one module for movement control and the other for posture control. The joint angles estimated with this method have a correlation coefficient (CC) of 0.807 (±0.10) and a normalized root-mean-square error (nRMSE) of 0.176 (±0.29) with respect to the actual joint angles. PMID:24167469

  12. Single-channel blind separation using L₁-sparse complex non-negative matrix factorization for acoustic signals.

    PubMed

    Parathai, P; Woo, W L; Dlay, S S; Gao, Bin

    2015-01-01

    An innovative method of single-channel blind source separation is proposed. The proposed method is a complex-valued non-negative matrix factorization with probabilistically optimal L1-norm sparsity. This preserves the phase information of the source signals and enforces the inherent structures of the temporal codes to be optimally sparse, resulting in a more meaningful parts-based factorization. An efficient algorithm with closed-form expressions for computing the parameters of the model, including the sparsity, has been developed. Real-time acoustic mixtures recorded from a single channel are used to verify the effectiveness of the proposed method. PMID:25618092

  13. Simultaneous Reconstruction of Multiple Signaling Pathways via the Prize-Collecting Steiner Forest Problem

    PubMed Central

    Tuncbag, Nurcan; Braunstein, Alfredo; Pagnani, Andrea; Huang, Shao-Shan Carol; Chayes, Jennifer; Borgs, Christian; Zecchina, Riccardo

    2013-01-01

    Signaling and regulatory networks are essential for cells to control processes such as growth, differentiation, and response to stimuli. Although many “omic” data sources are available to probe signaling pathways, these data are typically sparse and noisy. Thus, it has been difficult to use these data to discover the causes of disease and to propose new therapeutic strategies. We overcome these problems and use “omic” data to simultaneously reconstruct multiple pathways that are altered in a particular condition by solving the prize-collecting Steiner forest problem. To evaluate this approach, we use the well-characterized yeast pheromone response. We then apply the method to human glioblastoma data, searching for a forest of trees, each of which is rooted in a different cell-surface receptor. This approach discovers both overlapping and independent signaling pathways that are enriched in functionally and clinically relevant proteins, which could provide the basis for new therapeutic strategies. Although the algorithm was not provided with any information about the phosphorylation status of receptors, it identifies a small set of clinically relevant receptors among hundreds present in the interactome. PMID:23383998

  14. Signal reconstruction performance with the ATLAS Hadronic Tile Calorimeter

    NASA Astrophysics Data System (ADS)

    Klimek, Pawel; ATLAS Tile Calorimeter Group

    2012-12-01

    The Tile Calorimeter (TileCal) is the central section of the hadronic calorimeter of ATLAS. It is a key detector for the reconstruction of hadrons, jets, taus and missing transverse energy. TileCal is a sampling calorimeter with steel as absorber and scintillators as active medium. The scintillators are read out by wavelength-shifting fibers coupled to photomultiplier tubes (PMTs). The analogue signals from the PMTs are amplified, shaped and digitized by sampling the signal every 25 ns. The readout system is designed to reconstruct the data in real time, fulfilling the tight constraints imposed by the ATLAS first-level trigger rate (100 kHz). The signal amplitude and phase for each channel are measured using Optimal Filtering techniques, both online and offline. We present the performance of these techniques on data collected in proton-proton collisions at a center-of-mass energy of 7 TeV, and address the measurement performance in a high pile-up environment and on various physics and calibration signals.
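
    The Optimal Filtering amplitude estimate admits a compact linear-algebra sketch: choose weights that minimize the noise variance subject to unit response to the nominal pulse shape and zero response to its time derivative, which gives first-order insensitivity to phase. The pulse shape, its derivative, and the noise covariance are assumed inputs here; pedestal handling and the online fixed-point details are omitted.

```python
import numpy as np

def optimal_filter_weights(g, gdot, C):
    """Weights w minimizing w^T C w with w.g = 1 and w.gdot = 0; the amplitude
    estimate for digitized samples s is then simply w @ s."""
    G = np.column_stack([g, gdot])        # constraint matrix
    CiG = np.linalg.solve(C, G)           # C^{-1} G
    w = CiG @ np.linalg.solve(G.T @ CiG, np.array([1.0, 0.0]))
    return w
```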

  15. Adaptive multimode signal reconstruction from time–frequency representations

    PubMed Central

    Meignen, Sylvain; Oberlin, Thomas; Depalle, Philippe; Flandrin, Patrick

    2016-01-01

    This paper discusses methods for the adaptive reconstruction of the modes of multicomponent AM-FM signals from their time-frequency (TF) representation derived from the short-time Fourier transform (STFT). The STFT of an AM-FM component, or mode, spreads the information relative to that mode in the TF plane around curves commonly called ridges. An alternative view is to consider a mode as a particular TF domain termed a basin of attraction. Here we discuss two new approaches to mode reconstruction. The first determines the ridge associated with a mode by considering the location where the direction of the reassignment vector changes sharply, the technique used to determine the basin of attraction being directly derived from that used for ridge extraction. The second uses the fact that the STFT of a signal is fully characterized by its zeros (and by the particular distribution of these zeros for Gaussian noise) to deduce an algorithm that computes the mode domains. For both techniques, mode reconstruction is then carried out by simply integrating the information inside these basins of attraction or domains. PMID:26953184

  16. Low-dose X-ray computed tomography image reconstruction with a combined low-mAs and sparse-view protocol.

    PubMed

    Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua

    2014-06-16

    To realize low-dose imaging in X-ray computed tomography (CT) examinations, lowering the milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data; then, to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy is termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge-detail preservation.

  17. An Optimal Bahadur-Efficient Method in Detection of Sparse Signals with Applications to Pathway Analysis in Sequencing Association Studies

    PubMed Central

    Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui

    2016-01-01

    Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, lim_{ε→0} N^(2)/N^(1) = φ_12(θ), compares the sample sizes required by different statistical tests when signals become sparse in sequencing data, i.e., as ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals (P_{N^(i)} < ε → 0). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol. PMID:27380176
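
    For the independent-test case, the Lancaster procedure has a two-line form: each p-value is mapped to a chi-square quantile with its own weight as degrees of freedom, and the sum is referred to a chi-square with the total degrees of freedom (Fisher's method is the special case of all weights equal to 2). The correlated version used in the paper adjusts this null for dependence between genes and is not reproduced here.

```python
import numpy as np
from scipy import stats

def lancaster(pvals, weights):
    """Combined p-value for independent tests via the Lancaster procedure."""
    pvals = np.asarray(pvals, dtype=float)
    weights = np.asarray(weights, dtype=float)
    t = stats.chi2.isf(pvals, df=weights).sum()   # weighted quantile transform
    return stats.chi2.sf(t, df=weights.sum())
```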

  19. Reconstructing signals from noisy data with unknown signal and noise covariance.

    PubMed

    Oppermann, Niels; Robbers, Georg; Ensslin, Torsten A

    2011-10-01

    We derive a method to reconstruct Gaussian signals from linear measurements with Gaussian noise. This new algorithm is intended for applications in astrophysics and other sciences. The starting point of our considerations is the principle of minimum Gibbs free energy, which was previously used to derive a signal reconstruction algorithm handling uncertainties in the signal covariance. We extend this algorithm to simultaneously uncertain noise and signal covariances using the same principles in the derivation. The resulting equations are general enough to be applied in many different contexts. We demonstrate the performance of the algorithm by applying it to specific example situations and compare it to algorithms not allowing for uncertainties in the noise covariance. The results show that the method we suggest performs very well under a variety of circumstances and is indeed qualitatively superior to the other methods in cases where uncertainty in the noise covariance is present.
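
    The known-covariance baseline that this work generalizes is the classical Wiener filter; sketching it clarifies what becomes uncertain in the paper's setting. For data d = R s + n with signal covariance S and noise covariance N (all assumed known here), the posterior mean is m = (S^{-1} + R^T N^{-1} R)^{-1} R^T N^{-1} d.

```python
import numpy as np

def wiener_filter(R, d, S, N):
    """Posterior-mean reconstruction of a Gaussian signal from d = R s + n,
    assuming known signal covariance S and noise covariance N."""
    D = np.linalg.inv(S) + R.T @ np.linalg.solve(N, R)  # information matrix
    j = R.T @ np.linalg.solve(N, d)                     # information source
    return np.linalg.solve(D, j)
```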

  20. Sparse Sensing of Aerodynamic Loads on Insect Wings

    NASA Astrophysics Data System (ADS)

    Manohar, Krithika; Brunton, Steven; Kutz, J. Nathan

    2015-11-01

    We investigate how insects use sparse sensors on their wings to detect aerodynamic loading and wing deformation, using a coupled fluid-structure model with periodically flapping input motion. Recent observations suggest that insects collect sensor information about their wing deformation to inform control actions for maneuvering and rejecting gust disturbances. Given a small number of point measurements of the chordwise aerodynamic loads from the sparse sensors, we reconstruct the entire chordwise loading using sparse sensing, a signal-processing technique that reconstructs a signal from a small number of measurements via l1-norm minimization of sparse modal coefficients in some basis. We compare reconstructions from sensors randomly sampled from probability distributions biased toward different regions along the wing chord. In this manner, we determine the preferred regions along the chord for sensor placement and for estimating chordwise loads to inform control decisions in flight.
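
    The reconstruction step described above is classical basis pursuit, which becomes a linear program after splitting the coefficients into positive and negative parts. In the sketch below, Theta = C @ Psi is the assumed product of a row-selection measurement matrix C (the sensor locations) and a modal basis Psi for the chordwise load; both names are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Theta, y):
    """min ||s||_1 subject to Theta @ s = y, as an LP in (u, v), s = u - v."""
    n = Theta.shape[1]
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([Theta, -Theta]), b_eq=y,
                  bounds=[(0.0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]   # sparse modal coefficients; load = Psi @ s
```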

  1. Adaptive sparse reconstruction with joint parametric estimation for high-speed uniformly moving targets in coincidence imaging radar

    NASA Astrophysics Data System (ADS)

    Zha, Guofeng; Wang, Hongqiang; Yang, Zhaocheng; Cheng, Yongqiang; Qin, Yuliang

    2016-04-01

    As a complementary imaging technology, coincidence imaging radar (CIR) achieves high resolution for stationary or low-speed targets under the assumption that the effect of original-position mismatch can be ignored. For high-speed targets that move from the original imaging cell to other cells during imaging, it is inaccurate to reconstruct the target using the previous imaging plane. We focus on the recovery problem for high-speed moving targets in the CIR system based on an intrapulse frequency random modulation signal within a single pulse. The effects of the motion on imaging performance are analyzed. Because the basis matrix in the CIR imaging equation is determined by the unknown velocity of the moving target, the target image and the basis matrix must be estimated jointly. We propose an adaptive joint parametric estimation recovery algorithm based on the Tikhonov regularization method that adaptively updates the target velocity and basis matrix while synchronously recovering the target image; the target velocity and image are thus obtained iteratively. Simulation results demonstrate the efficiency of the proposed algorithm.
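
    A toy rendering of the joint estimation loop, with the adaptive velocity update simplified to a grid search: for each candidate velocity the velocity-dependent basis matrix is rebuilt, a Tikhonov-regularized image is solved, and the candidate with the smallest residual wins. build_A, the velocity grid, and lam are all assumed inputs; the paper updates the velocity adaptively rather than exhaustively.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Regularized solve x = (A^T A + lam I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def joint_velocity_image(build_A, y, v_grid, lam):
    """Return the (velocity, image) pair minimizing the data residual."""
    best = None
    for v in v_grid:
        A = build_A(v)                    # velocity-dependent basis matrix
        x = tikhonov(A, y, lam)
        r = np.linalg.norm(A @ x - y)
        if best is None or r < best[0]:
            best = (r, v, x)
    return best[1], best[2]
```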

  2. Joint surface reconstruction and 4D deformation estimation from sparse data and prior knowledge for marker-less Respiratory motion tracking

    SciTech Connect

    Berkels, Benjamin; Rumpf, Martin; Bauer, Sebastian; Ettl, Svenja; Arold, Oliver; Hornegger, Joachim

    2013-09-15

    Purpose: The intraprocedural tracking of respiratory motion has the potential to substantially improve image-guided diagnosis and interventions. The authors have developed a sparse-to-dense registration approach that is capable of recovering the patient's external 3D body surface and estimating a 4D (3D + time) surface motion field from sparse sampling data and patient-specific prior shape knowledge. Methods: The system utilizes an emerging marker-less and laser-based active triangulation (AT) sensor that delivers sparse but highly accurate 3D measurements in real-time. These sparse position measurements are registered with a dense reference surface extracted from planning data. Thereby a dense displacement field is recovered, which describes the spatio-temporal 4D deformation of the complete patient body surface, depending on the type and state of respiration. It yields both a reconstruction of the instantaneous patient shape and a high-dimensional respiratory surrogate for respiratory motion tracking. The method is validated on a 4D CT respiration phantom and evaluated on both real data from an AT prototype and synthetic data sampled from dense surface scans acquired with a structured-light scanner. Results: In the experiments, the authors estimated surface motion fields with the proposed algorithm on 256 datasets from 16 subjects and in different respiration states, achieving a mean surface reconstruction accuracy of ±0.23 mm with respect to ground truth data, down from a mean initial surface mismatch of 5.66 mm. The 95th percentile of the local residual mesh-to-mesh distance after registration did not exceed 1.17 mm for any subject. On average, the total runtime of our proof-of-concept CPU implementation is 2.3 s per frame, outperforming related work substantially. Conclusions: In external beam radiation therapy, the approach holds potential for patient monitoring during treatment using the reconstructed surface, and for motion-compensated dose delivery using

  3. Energy efficient acquisition and reconstruction of EEG signals.

    PubMed

    Singh, W; Shukla, A; Deb, S; Majumdar, A

    2014-01-01

    In Wireless Body Area Networks (WBANs), energy consumption is dominated by sensing and communication. Previous Compressed Sensing (CS) based solutions to EEG tele-monitoring over WBANs could only reduce the communication cost. In this work, we propose a matrix completion based formulation that can also reduce the energy consumption for sensing. We test our method against state-of-the-art CS based techniques and find that the reconstruction accuracy of our method is significantly better, at considerably lower energy consumption. Our method is also tested for post-reconstruction signal classification, where it outperforms previous CS based techniques. At the heart of the system is an Analog to Information Converter (AIC) implemented in 65 nm CMOS technology. The pseudorandom clock generator enables random under-sampling and subsequent conversion by the 12-bit Successive Approximation Register Analog to Digital Converter (SAR ADC). The AIC achieves a sample rate of 0.5 kS/s, an ENOB of 9.54 bits, and consumes 108 nW from a 1 V power supply. PMID:25570198

  4. On signals faint and sparse: The ACICA algorithm for blind de-trending of exoplanetary transits with low signal-to-noise

    SciTech Connect

    Waldmann, I. P.

    2014-01-01

    Independent component analysis (ICA) has recently been shown to be a promising new path in the data analysis and de-trending of exoplanetary time series signals. Such approaches do not require or assume any prior or auxiliary knowledge about the data or instrument in order to de-convolve the astrophysical light curve signal from instrument or stellar systematic noise. These methods are often known as 'blind-source separation' (BSS) algorithms. Unfortunately, all BSS methods suffer from an amplitude and sign ambiguity of their de-convolved components, which severely limits these methods in low signal-to-noise (S/N) observations where their scalings cannot be determined otherwise. Here we present a novel approach that calibrates ICA using sparse wavelet calibrators. The Amplitude Calibrated Independent Component Analysis (ACICA) allows for the direct retrieval of the independent components' scalings and the robust de-trending of low S/N data. Such an approach gives us a unique and unprecedented insight into the underlying morphology of a data set, making this method a powerful tool for exoplanetary data de-trending and signal diagnostics.

  5. Reconstruction of Signaling Networks Regulating Fungal Morphogenesis by Transcriptomics▿ †

    PubMed Central

    Meyer, Vera; Arentshorst, Mark; Flitter, Simon J.; Nitsche, Benjamin M.; Kwon, Min Jin; Reynaga-Peña, Cristina G.; Bartnicki-Garcia, Salomon; van den Hondel, Cees A. M. J. J.; Ram, Arthur F. J.

    2009-01-01

    Coordinated control of hyphal elongation and branching is essential for sustaining mycelial growth of filamentous fungi. In order to study the molecular machinery ensuring polarity control in the industrial fungus Aspergillus niger, we took advantage of the temperature-sensitive (ts) apical-branching ramosa-1 mutant. We show here that this strain serves as an excellent model system to study critical steps of polar growth control during mycelial development and report for the first time a transcriptomic fingerprint of apical branching for a filamentous fungus. This fingerprint indicates that several signal transduction pathways, including TORC2, phospholipid, calcium, and cell wall integrity signaling, concertedly act to control apical branching. We furthermore identified the genetic locus affected in the ramosa-1 mutant by complementation of the ts phenotype. Sequence analyses demonstrated that a single amino acid exchange in the RmsA protein is responsible for induced apical branching of the ramosa-1 mutant. Deletion experiments showed that the corresponding rmsA gene is essential for the growth of A. niger, and complementation analyses with Saccharomyces cerevisiae evidenced that RmsA serves as a functional equivalent of the TORC2 component Avo1p. TORC2 signaling is required for actin polarization and cell wall integrity in S. cerevisiae. Congruently, our microscopic investigations showed that polarized actin organization and chitin deposition are disturbed in the ramosa-1 mutant. The integration of the transcriptomic, genetic, and phenotypic data obtained in this study allowed us to reconstruct a model for cellular events involved in apical branching. PMID:19749177

  6. Getting a decent (but sparse) signal to the brain for users of cochlear implants.

    PubMed

    Wilson, Blake S

    2015-04-01

    The challenge in getting a decent signal to the brain for users of cochlear implants (CIs) is described. A breakthrough occurred in 1989 that later enabled most users to understand conversational speech with their restored hearing alone. Subsequent developments included stimulation in addition to that provided with a unilateral CI, either with electrical stimulation on both sides or with acoustic stimulation in combination with a unilateral CI, the latter for persons with residual hearing at low frequencies in either or both ears. Both types of adjunctive stimulation produced further improvements in performance for substantial fractions of patients. Today, the CI and related hearing prostheses are the standard of care for profoundly deaf persons and ever-increasing indications are now allowing persons with less severe losses to benefit from these marvelous technologies. The steps in achieving the present levels of performance are traced, and some possibilities for further improvements are mentioned.

  7. Application of linear graph embedding as a dimensionality reduction technique and sparse representation classifier as a post classifier for the classification of epilepsy risk levels from EEG signals

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sunil Kumar; Rajaguru, Harikumar

    2015-12-01

    The most common and frequently occurring neurological disorder is epilepsy, and the main method used for its diagnosis is electroencephalogram (EEG) signal analysis. Because of the length of EEG recordings, their analysis is quite time-consuming when processed manually by an expert. This paper proposes the application of the Linear Graph Embedding (LGE) concept as a dimensionality reduction technique for processing epileptic EEG signals, which are then classified using Sparse Representation Classifiers (SRC). SRC is used to classify epilepsy risk levels from EEG signals, and parameters such as Sensitivity, Specificity, Time Delay, Quality Value, Performance Index and Accuracy are analyzed.

  8. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    DOEpatents

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  9. Baseline Signal Reconstruction for Temperature Compensation in Lamb Wave-Based Damage Detection.

    PubMed

    Liu, Guoqiang; Xiao, Yingchun; Zhang, Hua; Ren, Gexue

    2016-01-01

    Temperature variations have significant effects on the propagation of Lamb waves and can therefore severely limit Lamb wave-based damage detection. To mitigate the temperature effect, a temperature compensation method based on baseline signal reconstruction is developed for Lamb wave-based damage detection. The method reconstructs the baseline signal at the temperature of the current signal; in other words, it compensates the baseline signal to the temperature of the current signal. The Hilbert transform is used to compensate the phase of the baseline signal, and Orthogonal Matching Pursuit (OMP) is used to compensate its amplitude. Experiments were conducted on two composite panels to validate the effectiveness of the proposed method. Results show that the proposed method works effectively for temperature intervals of at least 18 °C centered on the baseline signal temperature, and can be applied to actual damage detection. PMID:27529245
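
    One simple way to realize the Hilbert-transform phase step is through the analytic signal: extract the envelope and unwrapped phase, dilate the phase by a temperature-dependent factor, and resynthesize. The scalar dilation alpha below is an assumed stand-in for the paper's temperature-to-phase mapping, and the OMP amplitude step is analogous to the pursuit sketched earlier in this listing.

```python
import numpy as np
from scipy.signal import hilbert

def phase_compensate(baseline, alpha):
    """Dilate the instantaneous phase of a baseline Lamb-wave record by alpha
    (~1), leaving the envelope untouched. alpha is an assumed parameter."""
    z = hilbert(baseline)                      # analytic signal
    env = np.abs(z)
    phase = np.unwrap(np.angle(z))             # instantaneous phase
    return env * np.cos(alpha * phase)
```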

  10. Baseline Signal Reconstruction for Temperature Compensation in Lamb Wave-Based Damage Detection

    PubMed Central

    Liu, Guoqiang; Xiao, Yingchun; Zhang, Hua; Ren, Gexue

    2016-01-01

    Temperature variations have significant effects on propagation of Lamb wave and therefore can severely limit the damage detection for Lamb wave. In order to mitigate the temperature effect, a temperature compensation method based on baseline signal reconstruction is developed for Lamb wave-based damage detection. The method is a reconstruction of a baseline signal at the temperature of current signal. In other words, it compensates the baseline signal to the temperature of current signal. The Hilbert transform is used to compensate the phase of baseline signal. The Orthogonal matching pursuit (OMP) is used to compensate the amplitude of baseline signal. Experiments were conducted on two composite panels to validate the effectiveness of the proposed method. Results show that the proposed method could effectively work for temperature intervals of at least 18 °C with the baseline signal temperature as the center, and can be applied to the actual damage detection. PMID:27529245

  11. Signal and noise of Fourier reconstructed fMRI data.

    PubMed

    Rowe, Daniel B; Nencka, Andrew S; Hoffmann, Raymond G

    2007-01-30

    In magnetic resonance imaging, complex-valued measurements are acquired in time corresponding to spatial frequency measurements in space generally placed on a Cartesian rectangular grid. These complex-valued measurements are transformed into a measured complex-valued image by an image reconstruction method. The most common image reconstruction method is the inverse Fourier transform. It is known that image voxels are spatially correlated. A property of the inverse Fourier transformation is that uncorrelated spatial frequency measurements yield spatially uncorrelated voxel measurements and vice versa. Spatially correlated voxel measurements result from correlated spatial frequency measurements. This paper describes the resulting correlation structure between voxel measurements when inverse Fourier reconstructing correlated spatial frequency measurements. A real-valued representation for the complex-valued measurements is introduced along with an associated multivariate normal distribution. One potential application of this methodology is that there may be a correlation structure introduced by the measurement process or adjustments made to the spatial frequencies. This would produce spatially correlated voxel measurements after inverse Fourier transform reconstruction that have artificially inflated spatial correlation. One implication of these results is that one source of spatial correlation between voxels termed connectivity may be attributed to correlated spatial frequencies. The true voxel connectivity may be less than previously thought. This methodology could be utilized to characterize noise correlation in its original form and adjust for it. The exact statistical relationship between spatial frequency measurements and voxel measurements has now been established.
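
    The stated Fourier property is easy to check numerically. In the sketch below (sizes and the example covariance are arbitrary), voxel covariance follows from k-space covariance as F Σ F^H with F the unitary inverse-DFT matrix: an identity k-space covariance maps to the identity, while correlating two frequency samples produces off-diagonal voxel covariance:

      # Numerical check of the property above: with a unitary inverse DFT,
      # white spatial-frequency noise yields uncorrelated voxels, while a
      # non-diagonal k-space covariance induces voxel correlations.
      import numpy as np

      n = 16
      F = np.fft.ifft(np.eye(n), axis=0, norm="ortho")   # unitary inverse-DFT matrix

      # Case 1: white k-space noise (identity covariance).
      cov_k = np.eye(n)
      cov_img = F @ cov_k @ F.conj().T
      print(np.allclose(cov_img, np.eye(n)))             # True: voxels uncorrelated

      # Case 2: two correlated frequency samples.
      cov_k[2, 5] = cov_k[5, 2] = 0.9
      cov_img = F @ cov_k @ F.conj().T
      off = cov_img - np.diag(np.diag(cov_img))
      print(round(np.abs(off).max(), 3))                 # > 0: voxels now correlated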

  12. Reconstruction of stress corrosion cracks using signals of pulsed eddy current testing

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xie, Shejuan; Chen, Zhenmao; Li, Yong; Wang, Xiaowei; Takagi, Toshiyuki

    2013-06-01

    A scheme to apply signals of pulsed eddy current testing (PECT) to reconstruct a deep stress corrosion crack (SCC) is proposed on the basis of a multi-layer and multi-frequency reconstruction strategy. First, a numerical method is introduced to extract conventional eddy current testing (ECT) signals of different frequencies from the PECT responses at different scanning points, which are necessary for multi-frequency ECT inversion. Second, the conventional fast forward solver for ECT signal simulation is upgraded to calculate the single-frequency pickup signal of a magnetic field by introducing a strategy that employs a tiny search coil. Using the multiple-frequency ECT signals and the upgraded fast signal simulator, we reconstructed the shape profiles and conductivity of an SCC at different depths layer-by-layer with a hybrid inversion scheme of the conjugate gradient and particle swarm optimisation. Several modelled SCCs of rectangular or stepwise shape in an SUS304 plate are reconstructed from simulated PECT signals with artificial noise. The reconstruction results show better precision in crack depth than the conventional ECT inversion method, which demonstrates the validity and efficiency of the proposed PECT inversion scheme.

  13. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-05-07

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging.

  14. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging.

    PubMed

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method's applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method's advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  15. Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging

    PubMed Central

    Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao

    2016-01-01

    Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114

  16. Signal Reconstruction from Nonuniformly Spaced Samples Using Evolutionary Slepian Transform-Based POCS

    NASA Astrophysics Data System (ADS)

    Oh, Jinsung; Senay, Seda; Chaparro, Luis F.

    2010-12-01

    We consider the reconstruction of signals from nonuniformly spaced samples using projection onto convex sets (POCS) implemented with the evolutionary time-frequency transform. Signals of practical interest have finite time support and are nearly band-limited, and as such can be better represented by Slepian functions than by sinc functions. The evolutionary spectral theory provides a time-frequency representation of nonstationary signals, and for deterministic signals the kernel of the evolutionary representation can be derived from a Slepian projection of the signal. Low pass and band pass signals are thus efficiently represented by means of the Slepian functions. Assuming the given nonuniformly spaced samples come from a signal satisfying the finite time support and essential band-limitedness conditions with a known center frequency, imposing time and frequency limitations in the evolutionary transformation permits us to reconstruct the signal iteratively. Restricting the signal to a known finite time and frequency support, a closed convex set, the projection generated by the time-frequency transformation converges to a close approximation of the original signal. Simulation results illustrate the evolutionary Slepian-based transform in the representation and reconstruction of signals from irregularly spaced and contiguous lost samples.
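
    The iteration itself alternates two convex projections. The sketch below is a generic Papoulis-Gerchberg style loop that uses a plain Fourier band-limiting set instead of the paper's Slepian-based evolutionary transform; signal and sampling parameters are invented:

      # Generic POCS loop for band-limited reconstruction from nonuniform
      # samples: alternate data-consistency and band-limiting projections.
      import numpy as np

      rng = np.random.default_rng(2)
      n, band = 256, 12                        # length, number of retained harmonics

      t = np.arange(n)
      x_true = sum(rng.normal() * np.cos(2 * np.pi * f * t / n + rng.uniform(0, 2 * np.pi))
                   for f in range(band))

      freqs = np.abs(np.fft.fftfreq(n, d=1.0 / n))    # integer frequency indices
      mask = freqs < band                             # band-limiting set

      def project_band(x):
          X = np.fft.fft(x)
          X[~mask] = 0
          return np.real(np.fft.ifft(X))

      known = np.sort(rng.choice(n, size=int(0.4 * n), replace=False))
      x = np.zeros(n)
      for _ in range(300):
          x[known] = x_true[known]             # projection onto data-consistent set
          x = project_band(x)                  # projection onto band-limited set

      rel = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
      print(f"relative reconstruction error: {rel:.4f}")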

  17. Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty

    SciTech Connect

    Ensslin, Torsten A.; Frommert, Mona

    2011-05-15

    The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. To solve this, we develop a generic parameter-uncertainty renormalized estimation (PURE) technique. As a concrete application, we address the problem of reconstructing Gaussian signals with unknown power spectrum using five different approaches: (i) separate maximum-a-posteriori power-spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori reconstruction with marginalized power-spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener-filter map, and (v) renormalization flow analysis of the field-theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener-filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loève and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others, especially if the post-Wiener-filter corrections are included or if an additional scale-independent spectral smoothness prior can be adopted.
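
    Since all five approaches reduce to Wiener filtering with a spectrum derived from the data, the core operation is a per-mode scaling by S/(S+N). A minimal sketch with an assumed power-law signal spectrum and white noise (both illustrative choices):

      # Diagonal Wiener filter in Fourier space: each data mode is scaled by
      # S/(S+N). The power-law spectrum and noise level are invented.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 512
      k = np.fft.fftfreq(n) * n
      S = 1.0 / (1.0 + np.abs(k)) ** 2        # assumed signal power spectrum
      N = 0.05                                # assumed white-noise power per mode

      # Draw a Gaussian signal with spectrum S, add white noise.
      s_k = np.sqrt(S / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
      d_k = s_k + np.sqrt(N / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))

      s_hat_k = (S / (S + N)) * d_k           # Wiener filter (posterior mean)

      err_raw = np.mean(np.abs(d_k - s_k) ** 2)
      err_wf = np.mean(np.abs(s_hat_k - s_k) ** 2)
      print(f"mean-square error: data {err_raw:.4f} -> Wiener {err_wf:.4f}")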

  18. New signal processing technique for density profile reconstruction using reflectometry

    SciTech Connect

    Clairet, F.; Bottereau, C.; Ricaud, B.; Briolle, F.; Heuraux, S.

    2011-08-15

    Reflectometry profile measurement requires an accurate determination of the plasma reflected signal. Along with a good resolution and a high signal-to-noise ratio of the phase measurement, adequate data analysis is required. A new data processing method based on a time-frequency tomographic representation is used. It provides a clearer separation between multiple components and improves isolation of the relevant signals. In this paper, this data processing technique is applied to two sets of signals coming from two different reflectometer devices used on the Tore Supra tokamak. For the standard density profile reflectometry, it improves the initialization process and its reliability, providing a more accurate profile determination in the far scrape-off layer with density measurements as low as 10^16 m^-3. For a second reflectometer, which provides measurements in front of a lower hybrid launcher, this method improves the separation of the relevant plasma signal from multi-reflection processes due to the proximity of the plasma.

  19. Bayesian Learning in Sparse Graphical Factor Models via Variational Mean-Field Annealing

    PubMed Central

    Yoshida, Ryo; West, Mike

    2010-01-01

    We describe a class of sparse latent factor models, called graphical factor models (GFMs), and relevant sparse learning algorithms for posterior mode estimation. Linear, Gaussian GFMs have sparse, orthogonal factor loadings matrices, that, in addition to sparsity of the implied covariance matrices, also induce conditional independence structures via zeros in the implied precision matrices. We describe the models and their use for robust estimation of sparse latent factor structure and data/signal reconstruction. We develop computational algorithms for model exploration and posterior mode search, addressing the hard combinatorial optimization involved in the search over a huge space of potential sparse configurations. A mean-field variational technique coupled with annealing is developed to successively generate “artificial” posterior distributions that, at the limiting temperature in the annealing schedule, define required posterior modes in the GFM parameter space. Several detailed empirical studies and comparisons to related approaches are discussed, including analyses of handwritten digit image and cancer gene expression data. PMID:20890391

  20. Accelerated signal encoding and reconstruction using pixon method

    DOEpatents

    Puetter, Richard; Yahil, Amos; Pina, Robert

    2005-05-17

    The method identifies a Pixon element, which is a fundamental and indivisible unit of information, and a Pixon basis, which is the set of possible functions from which the Pixon elements are selected. The actual Pixon elements selected from this basis during the reconstruction process represent the smallest number of such units required to fit the data, and thus the minimum number of parameters necessary to specify the image. The Pixon kernels can have arbitrary properties (e.g., shape, size, and/or position) as needed to best fit the data.

  1. Reconstruction of physiological signals using iterative retraining and accumulated averaging of neural network models.

    PubMed

    McBride, Joseph; Sullivan, Adam; Xia, Henian; Petrie, Adam; Zhao, Xiaopeng

    2011-06-01

    Real-time monitoring of vital physiological signals is of significant clinical relevance. Disruptions in the signals are frequently encountered and make it difficult for precise diagnosis. Thus, the ability to accurately predict/recover the lost signals could greatly impact medical research and application. We have developed new techniques of signal reconstructions based on iterative retraining and accumulated averaging of neural networks. The effectiveness and robustness of these techniques are demonstrated using data records from the Computing in Cardiology/PhysioNet Challenge 2010. The average correlation coefficient between prediction and target for 100 records of various target signals is about 0.9. We have also explored influences of a few important parameters on the accuracy of reconstructions. The developed techniques may be used to detect changes in patient state and to recognize intervals of signal corruption.

  2. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals.

    PubMed

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B; Cook, Mark J; Wolters, Carsten H; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  3. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    PubMed Central

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  4. Tomographic Reconstruction of Breast Characteristics Using Transmitted Ultrasound Signals

    NASA Astrophysics Data System (ADS)

    Sandhu, Gursharan; Li, Cuiping; Duric, Neb; Huang, Zhi-Feng

    2012-10-01

    X-ray Mammography has been the standard technique for the detection of breast cancer. However, it uses ionizing radiation, and can cause severe discomfort. It also has low spatial resolution, and can be prone to misdiagnosis. Techniques such as X-ray CT and MRI alleviate some of these issues but are costly. Researchers at Karmanos Cancer Institute developed a tomographic ultrasound device which is able to reconstruct the reflectivity, attenuation, and sound speed characteristics of the breast. A patient places her breast into a ring array of transducers immersed in a water bath, and a scan of the breast yields a 3D reconstruction. Our work focuses on improving algorithms for attenuation and sound speed imaging. Current time-of-flight tomography provides relatively low resolution images. Improvements are made by considering diffraction effects, using the low resolution image as a seed for the Born approximation. Ultimately, full waveform inversion will be used to obtain images with resolution comparable to MRI.

  5. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT

    PubMed Central

    Choudhary, Kumari Sonal; Rohatgi, Neha; Briem, Eirikur; Gudjonsson, Thorarinn; Gudmundsson, Steinn; Rolfsson, Ottar

    2016-01-01

    Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), are however a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes was derived from a literature review of AKT regulated metabolic genes. Signaling flux differences between EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492 that we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to D492 phenotype. Follow-up analysis of EGFR signaling metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend. PMID:27253373

  6. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate nonradix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples. PMID:18495977
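
    A rough sketch of the universal-thresholding step follows; for simplicity it uses the standard discrete wavelet transform from the PyWavelets package rather than the MODWT of the record, and the pulse shape and noise level are invented:

      # Level-independent universal-threshold wavelet denoising sketch (plain
      # DWT standing in for MODWT; all signal parameters invented).
      import numpy as np
      import pywt

      rng = np.random.default_rng(4)
      n = 1024
      t = np.linspace(0, 1, n)
      clean = np.exp(-((t - 0.4) / 0.01) ** 2)          # photoacoustic-like pulse
      noisy = clean + 0.1 * rng.normal(size=n)

      coeffs = pywt.wavedec(noisy, "sym8", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise std from finest scale
      thresh = sigma * np.sqrt(2 * np.log(n))           # universal threshold
      coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
      denoised = pywt.waverec(coeffs, "sym8")

      rmse = lambda a: np.sqrt(np.mean((a - clean) ** 2))
      print(f"RMSE noisy: {rmse(noisy):.4f}  denoised: {rmse(denoised):.4f}")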

  7. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate nonradix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.

  8. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    NASA Astrophysics Data System (ADS)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high pressure water-jet on targets of different materials produces different reflective mixed sounds. In order to accurately reconstruct the distribution of reflective sound signals along the linear detecting line and to separate the environmental noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FastICA algorithm are described in detail. An emulation experiment was designed: the environmental noise was simulated using band-limited white noise, the reflective sound signal was simulated using a pulse signal, and the attenuation produced by transmission over different distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from the above simulated signals and were whitened and separated by ICA. The final results verified that environmental noise separation and reconstruction of the detecting-line sound distribution can be realized effectively.
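
    The separation step can be reproduced in a few lines with scikit-learn's FastICA; the sketch below mirrors the simulation described above with invented pulse, noise, and mixing parameters:

      # FastICA separation of a pulse-like "reflective sound" from band-limited
      # noise; mixing weights stand in for distance-dependent attenuation.
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(5)
      n = 4000
      t = np.arange(n) / 10_000.0

      pulse = np.sin(2 * np.pi * 500 * t) * np.exp(-0.5 * ((t - 0.2) / 0.01) ** 2)
      noise = np.convolve(rng.normal(size=n), np.ones(8) / 8, mode="same")

      S = np.c_[pulse, noise]                    # true sources (columns)
      A = np.array([[1.0, 0.6],                  # mixing matrix: attenuation
                    [0.4, 1.0]])                 # weights for two distances
      X = S @ A.T                                # two simulated microphone mixtures

      S_hat = FastICA(n_components=2, random_state=0).fit_transform(X)

      # ICA recovers sources only up to order and scale; match by correlation.
      for j in range(2):
          c = max(abs(np.corrcoef(S_hat[:, j], S[:, k])[0, 1]) for k in range(2))
          print(f"component {j}: best |corr| with a true source = {c:.3f}")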

  9. Improved terahertz imaging with a sparse synthetic aperture array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhuopeng; Buma, Takashi

    2010-02-01

    Sparse arrays are highly attractive for implementing two-dimensional arrays, but come at the cost of degraded image quality. We demonstrate significantly improved performance by exploiting the coherent ultrawideband nature of single-cycle THz pulses. We compute two weighting factors for each time-delayed signal before the final summation that forms the reconstructed image. The first factor employs cross-correlation analysis to measure the degree of walk-off between time-delayed signals of neighboring elements. The second factor measures the spatial coherence of the time-delayed signals. Synthetic aperture imaging experiments are performed with a THz time-domain system employing a mechanically scanned single transceiver element. Cross-sectional imaging of wire targets is performed with a one-dimensional sparse array with an inter-element spacing of 1.36 mm (over four λ at 1 THz). The proposed image reconstruction technique improves image contrast by 15 dB, which is impressive considering the relatively few elements in the array. En-face imaging of a razor blade is also demonstrated with a 56 x 56 element two-dimensional array, showing reduced image artifacts with adaptive reconstruction. These encouraging results suggest that the proposed image reconstruction technique can be highly beneficial to the development of large area two-dimensional THz arrays.
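
    The flavor of weighted summation the record describes can be sketched with a generic coherence factor standing in for its two cross-correlation-based weights; the array geometry and pulse shape below are invented:

      # Coherence-weighted delay-and-sum sketch for a sparse aperture: each
      # pixel sums time-delayed element signals scaled by a coherence factor.
      import numpy as np

      c, fs, n = 3e8, 4e12, 2048                  # speed, sample rate, record length
      t = np.arange(n) / fs
      elems = np.linspace(-8e-3, 8e-3, 12)        # sparse 1-D array positions (m)
      target = np.array([1e-3, 20e-3])            # (x, z) point scatterer

      def pulse(tau):                             # single-cycle THz-like pulse
          u = (t - tau) / 1e-12
          return u * np.exp(-u ** 2)

      sig = np.stack([pulse(2 * np.hypot(ex - target[0], target[1]) / c)
                      for ex in elems])           # monostatic round-trip signals

      def image(xs, zs, weight=True):
          img = np.zeros((len(zs), len(xs)))
          for i, z in enumerate(zs):
              for j, x in enumerate(xs):
                  idx = np.round(2 * np.hypot(elems - x, z) / c * fs).astype(int)
                  s = sig[np.arange(len(elems)), idx]
                  cf = s.sum() ** 2 / (len(s) * (s ** 2).sum() + 1e-30)
                  img[i, j] = abs(s.sum()) * (cf if weight else 1.0)
          return img

      xs, zs = np.linspace(-4e-3, 4e-3, 41), np.linspace(18e-3, 22e-3, 21)
      for name, im in [("plain DAS", image(xs, zs, False)),
                       ("coherence-weighted", image(xs, zs, True))]:
          print(f"{name}: peak-to-mean ratio = {im.max() / im.mean():.1f}")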

  10. Investigation on magnetoacoustic signal generation with magnetic induction and its application to electrical conductivity reconstruction.

    PubMed

    Ma, Qingyu; He, Bin

    2007-08-21

    A theoretical study on the magnetoacoustic signal generation with magnetic induction and its applications to electrical conductivity reconstruction is conducted. An object with a concentric cylindrical geometry is located in a static magnetic field and a pulsed magnetic field. Driven by Lorentz force generated by the static magnetic field, the magnetically induced eddy current produces acoustic vibration and the propagated sound wave is received by a transducer around the object to reconstruct the corresponding electrical conductivity distribution of the object. A theory on the magnetoacoustic waveform generation for a circular symmetric model is provided as a forward problem. The explicit formulae and quantitative algorithm for the electrical conductivity reconstruction are then presented as an inverse problem. Computer simulations were conducted to test the proposed theory and assess the performance of the inverse algorithms for a multi-layer cylindrical model. The present simulation results confirm the validity of the proposed theory and suggest the feasibility of reconstructing electrical conductivity distribution based on the proposed theory on the magnetoacoustic signal generation with magnetic induction.

  11. Crack growth sparse pursuit for wind turbine blade

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Yang, Zhibo; Zhang, Han; Du, Zhaohui; Chen, Xuefeng

    2015-01-01

    One critical challenge to achieving reliable wind turbine blade structural health monitoring (SHM) arises from composite laminates, which are anisotropic and hard to access. The typical pitch-catch PZT approach generally detects structural damage with both measured and baseline signals. However, the accuracy of imaging or tomography by delay-and-sum approaches based on these signals requires improvement in practice. Via a model of Lamb wave propagation and the establishment of a dictionary that corresponds to scatterers, a robust sparse reconstruction approach to structural health monitoring is attractive for its promising performance. This paper proposes a neighbor dictionary that identifies the first crack location through sparse reconstruction and then presents a growth sparse pursuit algorithm that can precisely track the extension of the crack. An experiment with the goal of diagnosing a composite wind turbine blade with an artificial crack is performed, and it validates the proposed approach. The results give competitively accurate crack detection with the correct locations and extension length.

  12. The application of nonlinear structures to the reconstruction of binary signals

    NASA Astrophysics Data System (ADS)

    Gibson, Gavin J.; Siu, Sammy; Cowan, Colin F. N.

    1991-08-01

    The problem of reconstructing digital signals which have been passed through a dispersive channel and corrupted with additive noise is discussed. The problems encountered by linear equalizers under adverse conditions on the signal-to-noise ratio and channel phase are described. Considering the equalization problem as a geometric classification problem, it is shown that these difficulties can be overcome by utilizing nonlinear classifiers as channel equalizers. The manner in which neural networks can be utilized as adaptive channel equalizers is described, and simulation results are presented which suggest that the neural network equalizers offer a performance exceeding that of the linear structures, particularly in high-noise environments.

  13. P-Finder: Reconstruction of Signaling Networks from Protein-Protein Interactions and GO Annotations.

    PubMed

    Young-Rae Cho; Yanan Xin; Speegle, Greg

    2015-01-01

    Because most complex genetic diseases are caused by defects of cell signaling, illuminating a signaling cascade is essential for understanding their mechanisms. We present three novel computational algorithms to reconstruct signaling networks between a starting protein and an ending protein using genome-wide protein-protein interaction (PPI) networks and gene ontology (GO) annotation data. A signaling network is represented as a directed acyclic graph in a merged form of multiple linear pathways. An advanced semantic similarity metric is applied for weighting PPIs as the preprocessing of all three methods. The first algorithm repeatedly extends the list of nodes based on path frequency towards an ending protein. The second algorithm repeatedly appends edges based on the occurrence of network motifs which indicate the link patterns more frequently appearing in a PPI network than in a random graph. The last algorithm uses the information propagation technique which iteratively updates edge orientations based on the path strength and merges the selected directed edges. Our experimental results demonstrate that the proposed algorithms achieve higher accuracy than previous methods when they are tested on well-studied pathways of S. cerevisiae. Furthermore, we introduce an interactive web application tool, called P-Finder, to visualize reconstructed signaling networks.

  14. New direction of arrival estimation of coherent signals based on reconstructing matrix under unknown mutual coupling

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Li, Weixing; Zhang, Yue; Chen, Zengping

    2016-01-01

    A direction of arrival (DOA) estimation algorithm for coherent signals in the presence of unknown mutual coupling is proposed. A group of auxiliary sensors in a uniform linear array are applied to eliminate the effects of mutual coupling on the orthogonality of subspaces. Then, a Toeplitz matrix, whose rank is independent of the coherency between impinging signals, is reconstructed to eliminate the rank loss of the spatial covariance matrix. Therefore, the signal and noise subspaces can be estimated properly. This method can accurately estimate the DOAs of coherent signals under unknown mutual coupling without iteration or calibration sources. It has a low computational burden and high accuracy. Simulation results demonstrate the effectiveness of the algorithm.

  15. Sparse Representation of Electrodermal Activity With Knowledge-Driven Dictionaries

    PubMed Central

    Tsiartas, Andreas; Stein, Leah I.; Cermak, Sharon A.; Narayanan, Shrikanth S.

    2015-01-01

    Biometric sensors and portable devices are being increasingly embedded into our everyday life, creating the need for robust physiological models that efficiently represent, analyze, and interpret the acquired signals. We propose a knowledge-driven method to represent electrodermal activity (EDA), a psychophysiological signal linked to stress, affect, and cognitive processing. We build EDA-specific dictionaries that accurately model both the slow varying tonic part and the signal fluctuations, called skin conductance responses (SCR), and use greedy sparse representation techniques to decompose the signal into a small number of atoms from the dictionary. Quantitative evaluation of our method considers signal reconstruction, compression rate, and information retrieval measures, that capture the ability of the model to incorporate the main signal characteristics, such as SCR occurrences. Compared to previous studies fitting a predetermined structure to the signal, results indicate that our approach provides benefits across all aforementioned criteria. This paper demonstrates the ability of appropriate dictionaries along with sparse decomposition methods to reliably represent EDA signals and provides a foundation for automatic measurement of SCR characteristics and the extraction of meaningful EDA features. PMID:25494494
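
    A small sketch of greedy sparse decomposition over a knowledge-driven dictionary follows; the bi-exponential SCR atoms, drift atoms, and sparsity level are illustrative stand-ins for the authors' EDA-specific dictionaries:

      # OMP decomposition of a synthetic EDA-like signal: drift atoms model the
      # tonic level, bi-exponential atoms model SCRs. All shapes are invented.
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      fs, dur = 4, 60                            # 4 Hz sampling, 60 s window
      t = np.arange(0, dur, 1 / fs)
      n = t.size

      def scr_atom(onset, t1=0.75, t2=2.0):      # unit-norm bi-exponential SCR
          a = np.clip(t - onset, 0.0, None)
          v = np.where(t >= onset, np.exp(-a / t2) - np.exp(-a / t1), 0.0)
          return v / (np.linalg.norm(v) + 1e-12)

      onsets = np.arange(0, dur - 5, 1.0)
      D = np.column_stack([scr_atom(o) for o in onsets] +
                          [np.ones(n) / np.sqrt(n), t / np.linalg.norm(t)])

      rng = np.random.default_rng(6)
      y = 2.0 + 0.01 * t + 1.5 * scr_atom(12) + 0.8 * scr_atom(31)
      y += 0.02 * rng.normal(size=n)

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4).fit(D, y)
      picked = np.flatnonzero(omp.coef_)
      print("SCR onsets found:", [onsets[i] for i in picked if i < len(onsets)])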

  16. Fast multi-dimensional NMR acquisition and processing using the sparse FFT.

    PubMed

    Hassanieh, Haitham; Mayzel, Maxim; Shi, Lixin; Katabi, Dina; Orekhov, Vladislav Yu

    2015-09-01

    Increasing the dimensionality of NMR experiments strongly enhances the spectral resolution and provides invaluable direct information about atomic interactions. However, the price tag is high: long measurement times and heavy requirements on the computation power and data storage. We introduce sparse fast Fourier transform as a new method of NMR signal collection and processing, which is capable of reconstructing high quality spectra of large size and dimensionality with short measurement times, faster computations than the fast Fourier transform, and minimal storage for processing and handling of sparse spectra. The new algorithm is described and demonstrated for a 4D BEST-HNCOCA spectrum. PMID:26123316

  17. Robust signal reconstruction for condition monitoring of industrial components via a modified Auto Associative Kernel Regression method

    NASA Astrophysics Data System (ADS)

    Baraldi, Piero; Di Maio, Francesco; Turati, Pietro; Zio, Enrico

    2015-08-01

    In this work, we propose a modification of the traditional Auto Associative Kernel Regression (AAKR) method which enhances the signal reconstruction robustness, i.e., the capability of reconstructing abnormal signals to the values expected in normal conditions. The modification defines a new procedure for computing the similarity between the present measurements and the historical patterns used to perform the signal reconstructions. The underlying conjecture is that malfunctions causing variations of a small number of signals are more frequent than those causing variations of a large number of signals. The proposed method has been applied to real normal-condition data collected in an industrial plant for energy production. Its performance has been verified considering synthetic and real malfunctions. The obtained results show an improvement in the early detection of abnormal conditions and the correct identification of the signals responsible for triggering the detection.
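
    For reference, the traditional AAKR reconstruction that the record modifies is compact enough to sketch; the Gaussian kernel, bandwidth, and injected fault below are invented, and the modified similarity measure is not reproduced:

      # Plain AAKR: the expected normal-condition observation is a Gaussian-
      # kernel weighted average of historical normal patterns.
      import numpy as np

      rng = np.random.default_rng(7)
      base = rng.normal(size=(500, 1))                     # common operating level
      X_hist = np.hstack([base + 0.05 * rng.normal(size=(500, 1))
                          for _ in range(4)])              # 4 correlated signals

      def aakr(x_obs, X_hist, h=0.5):
          d2 = np.sum((X_hist - x_obs) ** 2, axis=1)       # squared distances
          w = np.exp(-d2 / (2 * h ** 2))                   # Gaussian kernel weights
          return (w[:, None] * X_hist).sum(axis=0) / (w.sum() + 1e-300)

      x_obs = X_hist[10].copy()
      x_obs[2] += 1.0                                      # drift on signal 2 only
      residuals = x_obs - aakr(x_obs, X_hist)
      print("residuals:", np.round(residuals, 2))          # large only on signal 2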

  18. A simple method to reconstruct firing rates from dendritic calcium signals.

    PubMed

    Moreaux, Laurent; Laurent, Gilles

    2008-12-01

    Calcium imaging using fluorescent reporters is the most widely used optical approach to investigate activity in intact neuronal circuits with single-cell resolution. Calcium signals, however, are often difficult to interpret, especially if the desired output quantity is membrane voltage or instantaneous firing rates. Combining dendritic intracellular electrophysiology and multi-photon calcium imaging in vivo, we recently investigated the relationship between optical signals recorded with the fluorescent calcium indicator Oregon Green BAPTA-1 (OGB-1) and spike output in principal neurons in the locust antennal lobe. We derived from these experiments a simple, empirical and easily adaptable method requiring minimal calibration to reconstruct firing rates from calcium signals with good accuracy and 50-ms temporal resolution.

  19. Typical reconstruction limits for distributed compressed sensing based on ℓ2,1-norm minimization and Bayesian optimal reconstruction

    NASA Astrophysics Data System (ADS)

    Shiraki, Yoshifumi; Kabashima, Yoshiyuki

    2015-05-01

    The distributed compressed sensing framework provides an efficient compression scheme for multichannel signals that are sparse in some domain and highly correlated with one another. In particular, a signal model called the joint sparse model 2 (JSM-2), or multiple measurement vector problem, in which all sparse signals share their support, is important for dealing with practical problems such as magnetic resonance imaging and magnetoencephalography. In this paper, we investigate the typical reconstruction performance of JSM-2 problems for two schemes: ℓ2,1-norm minimization reconstruction and Bayesian optimal reconstruction. Employing the replica method, we show that both schemes, by exploiting knowledge of the shared signal support, outperform their counterparts for the single-channel compressed sensing problem. We also develop a computationally feasible approximate algorithm for performing the Bayes optimal scheme to validate our theoretical estimation. Our replica-based analysis numerically indicates that the spinodal point of the Bayesian reconstruction disappears, which implies that a fundamental reconstruction limit can be achieved by the BP-based approximate algorithm in a practical amount of time when the number of channels is sufficiently large. The results of the numerical experiments of both reconstruction schemes agree excellently with the theoretical evaluation.
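
    The ℓ2,1-norm scheme corresponds to the multi-task Lasso available in scikit-learn, which selects support rows jointly across channels; the sketch below uses hand-picked toy dimensions and regularization:

      # JSM-2-style joint recovery via the multi-task Lasso (l2,1 penalty):
      # all channels share one sparse support. Sizes and alpha are invented.
      import numpy as np
      from sklearn.linear_model import MultiTaskLasso

      rng = np.random.default_rng(14)
      n, m, k, L = 200, 80, 10, 8        # length, measurements, sparsity, channels

      support = np.sort(rng.choice(n, k, replace=False))
      X = np.zeros((n, L))
      X[support] = rng.normal(size=(k, L))               # shared support
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      Y = A @ X + 0.01 * rng.normal(size=(m, L))

      W = MultiTaskLasso(alpha=0.05).fit(A, Y).coef_     # shape (L, n)
      found = np.flatnonzero(np.linalg.norm(W, axis=0) > 1e-8)
      print("true support:     ", list(support))
      print("recovered support:", list(found))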

  20. Sparse cortical source localization using spatio-temporal atoms.

    PubMed

    Korats, Gundars; Ranta, Radu; Le Cam, Steven; Louis-Dorr, Valérie

    2015-01-01

    This paper addresses the problem of sparse localization of cortical sources from scalp EEG recordings. Localization algorithms use propagation model under spatial and/or temporal constraints, but their performance highly depends on the data signal-to-noise ratio (SNR). In this work we propose a dictionary based sparse localization method which uses a data driven spatio-temporal dictionary to reconstruct the measurements using Single Best Replacement (SBR) and Continuation Single Best Replacement (CSBR) algorithms. We tested and compared our methods with the well-known MUSIC and RAP-MUSIC algorithms on simulated realistic data. Tests were carried out for different noise levels. The results show that our method has a strong advantage over MUSIC-type methods in case of synchronized sources. PMID:26737185

  1. Signal transformation in erosional landscapes: insights for reconstructing tectonic history from sediment flux records

    NASA Astrophysics Data System (ADS)

    Li, Q.; Gasparini, N. M.; Straub, K. M.

    2015-12-01

    Changes in tectonics can affect erosion rates across a mountain belt, leading to non-steady sediment flux delivery to fluvial transport systems. The sediment flux signal produced from time-varying tectonics may eventually be recorded in a depositional basin. However, before the sediment flux from an erosional watershed is fed to the downstream transport system and preserved in sedimentary deposits, tectonic signals can be distorted or even destroyed as they are transformed into a sediment-flux signal that is exported out of a watershed. In this study, we use the Channel-Hillslope Integrated Landscape Development (CHILD) model to explore how the sediment flux delivered from a mountain watershed responds to non-steady rock uplift. We observe that (1) a non-linear relationship between the erosion response and tectonic perturbation can lead to a sediment-flux signal that is out of phase with the change in uplift rate; (2) in some cases in which the uplift perturbation is short, the sediment flux signal may contain no record of the change; (3) uplift rates interpreted from sediment flux at the outlet of a transient erosional landscape are likely to be underestimated. All these observations highlight the difficulty in accurately reconstructing tectonic history from sediment flux records. Results from this study will help to constrain what tectonic signals may be evident in the sediment flux delivered from an erosional system and therefore have the potential to be recorded in stratigraphy, ultimately improving our ability to interpret stratigraphy.

  2. TreSpEx—Detection of Misleading Signal in Phylogenetic Reconstructions Based on Tree Information

    PubMed Central

    Struck, Torsten H

    2014-01-01

    Phylogenies of species or genes are commonplace nowadays in many areas of comparative biological studies. However, phylogenetic reconstructions must contend with artificial signals such as paralogy, long-branch attraction, saturation, and conflict between different datasets. These signals might eventually mislead the reconstruction even in phylogenomic studies employing hundreds of genes. Unfortunately, there has been no program allowing the detection of such effects in combination with an implementation into automatic process pipelines. TreSpEx (Tree Space Explorer) now combines different approaches (including statistical tests), which utilize tree-based information like nodal support or patristic distances (PDs) to identify misleading signals. The program enables the parallel analysis of hundreds of trees and/or predefined gene partitions, and being command-line driven, it can be integrated into automatic process pipelines. TreSpEx is implemented in Perl and supported on Linux, Mac OS X, and MS Windows. Source code, binaries, and additional material are freely available at http://www.annelida.de/research/bioinformatics/software.html. PMID:24701118

  3. Bayesian reconstruction of gravitational wave burst signals from simulations of rotating stellar core collapse and bounce

    SciTech Connect

    Roever, Christian; Bizouard, Marie-Anne; Christensen, Nelson; Dimmelmeier, Harald; Heng, Ik Siong; Meyer, Renate

    2009-11-15

    Presented in this paper is a technique that we propose for extracting the physical parameters of a rotating stellar core collapse from the observation of the associated gravitational wave signal from the collapse and core bounce. Data from interferometric gravitational wave detectors can be used to provide information on the mass of the progenitor model, precollapse rotation, and the nuclear equation of state. We use waveform libraries provided by the latest numerical simulations of rotating stellar core collapse models in general relativity, and from them create an orthogonal set of eigenvectors using principal component analysis. Bayesian inference techniques are then used to reconstruct the associated gravitational wave signal that is assumed to be detected by an interferometric detector. Posterior probability distribution functions are derived for the amplitudes of the principal component analysis eigenvectors, and the pulse arrival time. We show how the reconstructed signal and the principal component analysis eigenvector amplitude estimates may provide information on the physical parameters associated with the core collapse event.

  4. Sparse Methods for Biomedical Data

    PubMed Central

    Ye, Jieping; Liu, Jun

    2013-01-01

    Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken a center stage in modern data analysis. Although complex, the underlying representations of many biomedical data are often sparse. For example, for a certain disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great amount of research efforts in the past decade due to its sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data. PMID:24076585
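
    The canonical ℓ1 sparse method is a one-line fit in scikit-learn. The sketch below echoes the biomarker-selection example with synthetic data in which only 5 of 1000 features carry signal (all sizes illustrative):

      # Lasso support recovery on synthetic "expression" data: only a few of
      # many features are relevant, as in the leukemia example above.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(8)
      n_samples, n_features, k = 100, 1000, 5

      X = rng.normal(size=(n_samples, n_features))
      support = np.sort(rng.choice(n_features, size=k, replace=False))
      beta = np.zeros(n_features)
      beta[support] = rng.normal(2.0, 0.5, size=k)       # a few strong "biomarkers"
      y = X @ beta + 0.5 * rng.normal(size=n_samples)

      coef = Lasso(alpha=0.2).fit(X, y).coef_
      print("true support:  ", list(support))
      print("lasso selected:", list(np.flatnonzero(coef)))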

  5. Statistical-Physics-Based Reconstruction in Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Krzakala, F.; Mézard, M.; Sausset, F.; Sun, Y. F.; Zdeborová, L.

    2012-04-01

    Compressed sensing has triggered a major evolution in signal acquisition. It consists of sampling a sparse signal at low rate and later using computational power for the exact reconstruction of the signal, so that only the necessary information is measured. Current reconstruction techniques are limited, however, to acquisition rates larger than the true density of the signal. We design a new procedure that is able to reconstruct the signal exactly with a number of measurements that approaches the theoretical limit, i.e., the number of nonzero components of the signal, in the limit of large systems. The design is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired by the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical-physics methods. The obtained improvement is confirmed by numerical studies of several cases.
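
    A compact sketch of an approximate message passing iteration with soft thresholding, the belief-propagation-derived ingredient mentioned above, is given below; it omits the paper's specially designed (seeded) measurement matrix, and the threshold rule is a common practical choice rather than the authors':

      # AMP iteration for noiseless compressed sensing with an i.i.d. Gaussian
      # matrix; the Onsager term distinguishes AMP from plain iterative
      # thresholding. Problem sizes are invented.
      import numpy as np

      rng = np.random.default_rng(9)
      n, m, k = 1000, 400, 40
      delta = m / n                                     # measurement rate

      x0 = np.zeros(n)
      x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      A = rng.normal(size=(m, n)) / np.sqrt(m)          # i.i.d. Gaussian matrix
      y = A @ x0                                        # noiseless measurements

      soft = lambda u, tau: np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

      x, z = np.zeros(n), y.copy()
      for _ in range(50):
          tau = 2.0 * np.linalg.norm(z) / np.sqrt(m)    # threshold from residual level
          x_new = soft(x + A.T @ z, tau)
          onsager = (z / delta) * np.mean(x_new != 0)   # Onsager correction term
          z = y - A @ x_new + onsager
          x = x_new

      print(f"relative error: {np.linalg.norm(x - x0) / np.linalg.norm(x0):.2e}")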

  6. Sparse representation utilizing tight frame for phase retrieval

    NASA Astrophysics Data System (ADS)

    Shi, Baoshun; Lian, Qiusheng; Chen, Shuzhen

    2015-12-01

    We treat the phase retrieval (PR) problem of reconstructing the signal of interest from its Fourier magnitude. Since the Fourier phase information is lost, the problem is ill-posed. Several techniques have been used to address this problem by utilizing various priors such as non-negativity, support, and Fourier magnitude constraints. Recent methods exploiting sparsity have been developed to improve the reconstruction quality. However, previous algorithms utilizing a sparsity prior suffer from either low reconstruction quality at low oversampling factors or sensitivity to noise. To address these issues, we propose a framework that exploits sparsity of the signal in the translation invariant Haar pyramid (TIHP) tight frame. Based on this sparsity prior, we formulate the sparse representation regularization term and incorporate it into the PR optimization problem. We propose an alternating iterative algorithm for solving the corresponding non-convex problem by dividing it into several subproblems. We give the optimal solution to each subproblem, and experimental simulations under noise-free and noisy scenarios indicate that our proposed algorithm obtains a better reconstruction quality than the conventional alternating projection methods, and even outperforms recent sparsity-based algorithms in terms of reconstruction quality.
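
    For orientation, the classical error-reduction baseline that sparsity-regularized phase retrieval improves upon can be sketched in a few lines; sizes and constraints below are invented, and plain error reduction can stagnate in local minima:

      # Error-reduction (alternating projection) phase retrieval from the
      # Fourier magnitude with known support and nonnegativity. Recovery is
      # only up to the usual trivial ambiguities (shift, flip, global phase).
      import numpy as np

      rng = np.random.default_rng(10)
      n, s = 128, 24                            # length, known support size
      x0 = np.zeros(n)
      x0[:s] = np.abs(rng.normal(size=s))       # nonnegative signal on support
      mag = np.abs(np.fft.fft(x0))              # measured Fourier magnitudes

      x = rng.random(n)
      for _ in range(2000):
          X = np.fft.fft(x)
          X = mag * np.exp(1j * np.angle(X))    # impose measured magnitude
          x = np.real(np.fft.ifft(X))
          x[s:] = 0.0                           # impose support
          x = np.maximum(x, 0.0)                # impose nonnegativity

      err = np.linalg.norm(np.abs(np.fft.fft(x)) - mag) / np.linalg.norm(mag)
      print(f"Fourier-magnitude residual: {err:.4f}")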

  7. Virtual head rotation reveals a process of route reconstruction from human vestibular signals.

    PubMed

    Day, Brian L; Fitzpatrick, Richard C

    2005-09-01

    The vestibular organs can feed perceptual processes that build a picture of our route as we move about in the world. However, raw vestibular signals do not define the path taken because, during travel, the head can undergo accelerations unrelated to the route and also be orientated in any direction to vary the signal. This study investigated the computational process by which the brain transforms raw vestibular signals for the purpose of route reconstruction. We electrically stimulated the vestibular nerves of human subjects to evoke a virtual head rotation fixed in skull co-ordinates and measure its perceptual effect. The virtual head rotation caused subjects to perceive an illusory whole-body rotation that was a cyclic function of head-pitch angle. They perceived whole-body yaw rotation in one direction with the head pitched forwards, the opposite direction with the head pitched backwards, and no rotation with the head in an intermediate position. A model based on vector operations and the anatomy and firing properties of semicircular canals precisely predicted these perceptions. In effect, a neural process computes the vector dot product between the craniocentric vestibular vector of head rotation and the gravitational unit vector. This computation yields the signal of body rotation in the horizontal plane that feeds our perception of the route travelled.
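
    The proposed computation is short enough to state directly in code: re-express the skull-fixed rotation vector in earth coordinates and take its dot product with the gravitational unit vector. The rotation axis and pitch angles below are illustrative, not the experimental values:

      # Dot-product model of perceived yaw: the horizontal-plane rotation
      # signal varies sinusoidally with head pitch and vanishes at an
      # intermediate pitch, matching the description above.
      import numpy as np

      omega_skull = np.array([1.0, 0.0, 0.0])        # virtual rotation, skull frame
      g_hat = np.array([0.0, 0.0, 1.0])              # earth-vertical unit vector

      for pitch_deg in (-60, 0, 60):
          p = np.radians(pitch_deg)
          R = np.array([[np.cos(p), 0.0, np.sin(p)],     # pitch about the y-axis,
                        [0.0, 1.0, 0.0],                 # skull -> earth coordinates
                        [-np.sin(p), 0.0, np.cos(p)]])
          yaw = np.dot(R @ omega_skull, g_hat)           # horizontal-plane component
          print(f"head pitch {pitch_deg:+d} deg -> perceived yaw {yaw:+.2f}")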

  8. Fault feature extraction of rolling element bearings using sparse representation

    NASA Astrophysics Data System (ADS)

    He, Guolin; Ding, Kang; Lin, Huibin

    2016-03-01

    Influenced by factors such as speed fluctuation, rolling element sliding and the periodic variation of the load distribution and impact force along the measuring direction of the sensor, the impulse response signals caused by a defective rolling bearing are non-stationary, and the amplitudes of the impulses may even drop to zero when the fault is out of the load zone. The non-stationary characteristic and the impulse missing phenomenon reduce the effectiveness of the commonly used demodulation methods for rolling element bearing fault diagnosis. Based on sparse representation theory, a new approach for fault diagnosis of rolling element bearings is proposed. The over-complete dictionary is constructed from the unit impulse response function of a damped second-order system, whose natural frequencies and relative damping ratios are identified directly from the fault signal by a correlation filtering method. This leads to a high similarity between atoms and defect-induced impulses, and also a sharp reduction in the redundancy of the dictionary. To improve the matching accuracy and the speed of sparse coefficient solving, the fault signal is divided into segments and the matching pursuit algorithm is carried out segment by segment. After splicing together all the reconstructed signals, the fault feature is extracted successfully. The simulation and experimental results show that the proposed method is effective for the fault diagnosis of rolling element bearings under large rolling element sliding and low signal-to-noise ratio conditions.
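
    The dictionary construction is easy to sketch: atoms are unit-norm impulse responses of a damped second-order system, and matching pursuit locates fault-induced impulses. The natural frequency, damping ratio, and fault spacing below are invented rather than identified from data by correlation filtering as in the paper:

      # Matching pursuit over shift-indexed damped-sinusoid atoms; detected
      # onsets should fall on the simulated 100-sample fault period.
      import numpy as np

      fs, n = 20_000, 4000
      t = np.arange(n) / fs
      fn, zeta = 3000.0, 0.05                   # natural frequency (Hz), damping

      def atom(shift):
          h = np.zeros(n)
          tt = t[: n - shift]
          h[shift:] = np.exp(-2 * np.pi * fn * zeta * tt) * np.sin(2 * np.pi * fn * tt)
          return h / np.linalg.norm(h)

      rng = np.random.default_rng(11)
      y = sum(0.8 * atom(s) for s in range(100, n - 200, 100))   # periodic impulses
      y += 0.2 * rng.normal(size=n)                              # measurement noise

      shifts = list(range(0, n - 200, 10))      # candidate onsets (10-sample grid)
      residual, detected = y.copy(), []
      for _ in range(10):                       # 10 matching pursuit iterations
          scores = np.array([residual @ atom(s) for s in shifts])
          s = shifts[int(np.argmax(np.abs(scores)))]
          a = atom(s)
          residual -= (residual @ a) * a
          detected.append(s)
      print("detected impulse onsets:", sorted(detected))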

  9. Audio signal reconstruction based on adaptively selected seed points from laser speckle images

    NASA Astrophysics Data System (ADS)

    Chen, Ziyi; Wang, Cheng; Huang, Chaohong; Fu, Hongyan; Luo, Huan; Wang, Hanyun

    2014-11-01

    Speckle patterns, present in the laser reflection from an object, reflect the micro-structure of the object where the laser illuminates it. When the object surface vibrates, the speckle patterns move accordingly, and this movement can be recovered with a high-speed camera system. Due to the low signal-to-noise ratio (SNR), recovering the micro-vibration information and reconstructing the audio signal from the captured speckle image sequences quickly and effectively is a challenging task. In this paper, we propose a novel method based on pixel gray-value variations in laser speckle patterns to address this challenge. The major contribution of the proposed method lies in using the intensity variations of adaptively selected seed points to recover the vibration information and the audio signal with a novel model that effectively fuses the information from multiple seed points. Experiments show that our method not only recovers the vibration information with high quality but is also robust and runs fast. The SNR of the experimental results reaches about 20 dB and 10 dB at detection distances of 10 m and 50 m, respectively.

  10. Accelerating Dynamic Cardiac MR Imaging Using Structured Sparse Representation

    PubMed Central

    Cai, Nian; Wang, Shengru; Zhu, Shasha

    2013-01-01

    Compressed sensing (CS) has produced promising results on dynamic cardiac MR imaging by exploiting the sparsity in image series. In this paper, we propose a new method to improve CS reconstruction for dynamic cardiac MRI based on the theory of structured sparse representation. The proposed method uses PCA subdictionaries for adaptive sparse representation and suppresses the sparse coding noise to obtain good reconstructions. An accelerated iterative shrinkage algorithm is used to solve the optimization problem and achieve a fast convergence rate. Experimental results demonstrate that the proposed method improves the reconstruction quality of dynamic cardiac cine MRI over the state-of-the-art CS method. PMID:24454528
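
    The accelerated iterative shrinkage step is generic enough to sketch. The FISTA-style solver below handles a plain l1-regularized problem with a toy random operator; it is only a stand-in for the paper's PCA-subdictionary formulation:

        import numpy as np

        def soft(v, t):
            """Soft-thresholding, the proximal operator of the l1 norm."""
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def fista(A, y, lam, n_iter=200):
            """Accelerated iterative shrinkage for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
            x = z = np.zeros(A.shape[1])
            tk = 1.0
            for _ in range(n_iter):
                x_new = soft(z - (A.T @ (A @ z - y)) / L, lam / L)
                tk_new = (1 + np.sqrt(1 + 4 * tk**2)) / 2
                z = x_new + ((tk - 1) / tk_new) * (x_new - x)   # momentum step
                x, tk = x_new, tk_new
            return x

        # Toy compressed-sensing problem: recover a sparse vector from few measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256)) / 8.0
        x_true = np.zeros(256); x_true[rng.choice(256, 8, replace=False)] = 1.0
        x_hat = fista(A, A @ x_true, lam=0.01)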

  11. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic x103 speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9] also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization

  12. Signal Analysis and Waveform Reconstruction of Shock Waves Generated by Underwater Electrical Wire Explosions with Piezoelectric Pressure Probes

    PubMed Central

    Zhou, Haibin; Zhang, Yongmin; Han, Ruoyu; Jing, Yan; Wu, Jiawei; Liu, Qiaojue; Ding, Weidong; Qiu, Aici

    2016-01-01

    Underwater shock waves (SWs) generated by underwater electrical wire explosions (UEWEs) have been widely studied and applied. Precise measurement of this kind of SW is important, but very difficult to accomplish due to their high peak pressure, steep rising edge and very short pulse width (on the order of tens of μs). This paper aims to analyze the signals obtained by two kinds of commercial piezoelectric pressure probes, and to reconstruct the correct pressure waveform from the distorted one measured by the pressure probes. It is found that both PCB138 and Müller-plate probes can be used to measure the relative SW pressure value because of their good uniformity and linearity, but neither of them can obtain precise SW waveforms. In order to better approximate the real SW signal, we propose a new multi-exponential pressure waveform model, which accounts for the faster pressure decay at the early stage and the slower pressure decay at longer times. Based on this model and the energy conservation law, the pressure waveform obtained by the PCB138 probe has been reconstructed, and the reconstruction accuracy has been verified against the signals obtained by the Müller-plate probe. Reconstruction results show that the measured SW peak pressures are smaller than the real signal. The waveform reconstruction method is both reasonable and reliable. PMID:27110789
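
    The abstract does not give the model's exact parameterization; a generic multi-exponential decay of the form below captures the two-time-scale behavior it describes (fast early decay, slower tail), with the number of terms N, amplitudes A_i and time constants \tau_i fitted per experiment:

        p(t) = \sum_{i=1}^{N} A_i \, e^{-t/\tau_i}, \qquad t \ge 0, \quad \tau_1 < \tau_2 < \dots < \tau_N

    With N = 2, the small time constant dominates the steep front and the large one the long tail; the energy conservation constraint mentioned above would then presumably fix the overall scale.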

  14. Sparsity: a ubiquitous but unexplored property of geophysical signals for multi-scale modeling and reconstruction

    NASA Astrophysics Data System (ADS)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.

    2012-04-01

    Many geophysical processes exhibit variability over a wide range of scales. Yet, in numerical modeling or remote sensing observations, not all of this variability is explicitly resolved due to limitations in computational resources or sensor configurations. As a result, sub-grid scale parameterizations and downscaling/upscaling representations are essential. Such representations take advantage of scale invariance, which has been theoretically or empirically documented in a wide range of geophysical processes, including precipitation, soil moisture, and topography. Here we present a new direction in the field of multi-scale analysis and reconstruction. It capitalizes on the fact that most geophysical signals are naturally redundant, due to spatial dependence and coherence over a range of scales, and thus when projected onto an appropriate space (e.g., Fourier or wavelet) only a few representation coefficients are non-zero; this property is called sparsity. Sparsity can serve as a priori knowledge to properly regularize the otherwise ill-posed inverse problem of creating information at scales smaller than resolved, which is at the heart of sub-grid scale and downscaling parameterizations. The same property of sparsity is also shown to play a revolutionary role in revisiting the problem of optimal estimation of non-Gaussian processes. Theoretical concepts are borrowed from the new field of compressive sampling and super-resolution, and the merits of the methodology are demonstrated using examples from precipitation downscaling, multi-scale data fusion and data assimilation.

  15. Learning discriminative dictionary for group sparse representation.

    PubMed

    Sun, Yubao; Liu, Qingshan; Tang, Jinhui; Tao, Dacheng

    2014-09-01

    In recent years, sparse representation has been widely used in object recognition applications. How to learn the dictionary is a key issue in sparse representation. A popular method is to use the l1 norm as the sparsity measurement of representation coefficients for dictionary learning. However, the l1 norm treats each atom in the dictionary independently, so the learned dictionary cannot well capture the multi-subspace structural information of the data. In addition, the learned subdictionary for each class usually shares some common atoms, which weakens the discriminative ability of the reconstruction error of each subdictionary. This paper presents a new dictionary learning model to improve sparse representation for image classification, which aims to learn a class-specific subdictionary for each class and a common subdictionary shared by all classes. The model is composed of a discriminative fidelity term, a weighted group sparse constraint, and a subdictionary incoherence term. The discriminative fidelity term encourages each class-specific subdictionary to sparsely represent the samples in the corresponding class. The weighted group sparse constraint term aims at capturing the structural information of the data. The subdictionary incoherence term makes all subdictionaries as independent as possible. Because the common subdictionary represents features shared by all classes, we only use the reconstruction error of each class-specific subdictionary for classification. Extensive experiments are conducted on several public image databases, and the experimental results demonstrate the power of the proposed method compared with state-of-the-art methods.

  16. Sparse-view image reconstruction in inverse-geometry CT (IGCT) for fast, low-dose, volumetric dental X-ray imaging

    NASA Astrophysics Data System (ADS)

    Hong, D. K.; Cho, H. S.; Oh, J. E.; Je, U. K.; Lee, M. S.; Kim, H. J.; Lee, S. H.; Park, Y. O.; Choi, S. I.; Koo, Y. S.; Cho, H. M.

    2012-12-01

    As a new direction for computed tomography (CT) imaging, inverse-geometry CT (IGCT) has recently been introduced and is intended to overcome limitations of conventional cone-beam CT (CBCT) such as cone-beam artifacts, imaging dose, temporal resolution, scatter, and cost. While the CBCT geometry consists of X-rays emanating from a small focal spot and collimated toward a larger detector, the IGCT geometry employs a large-area scanned source array with the X-ray beams collimated toward a smaller-area detector. In this research, we explored an effective IGCT reconstruction algorithm based on the total-variation (TV) minimization method and studied the feasibility of the IGCT geometry for potential applications to fast, low-dose volumetric dental X-ray imaging. We implemented the algorithm, performed systematic simulation studies, and evaluated the imaging characteristics quantitatively. Although much engineering and validation work is required to achieve clinical implementation, our preliminary results demonstrate a potential for improved volumetric imaging with reduced dose.
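
    A schematic of TV-regularized reconstruction: gradient descent on a data term plus a smoothed total-variation penalty. Here A is a generic system matrix standing in for the IGCT forward projector, and the step size and weights are illustrative:

        import numpy as np

        def tv_grad(img, eps=1e-8):
            """Gradient of a smoothed isotropic total variation of a 2-D image."""
            dx = np.diff(img, axis=1, append=img[:, -1:])
            dy = np.diff(img, axis=0, append=img[-1:, :])
            mag = np.sqrt(dx**2 + dy**2 + eps)
            px, py = dx / mag, dy / mag
            gx = px - np.roll(px, 1, axis=1)     # backward differences
            gy = py - np.roll(py, 1, axis=0)
            return -(gx + gy)                    # adjoint of the forward-difference operator

        def recon_tv(A, y, shape, lam=0.1, step=1e-3, n_iter=300):
            """min_f 0.5*||A f - y||^2 + lam*TV(f) by plain gradient descent (schematic)."""
            f = np.zeros(np.prod(shape))
            for _ in range(n_iter):
                grad = A.T @ (A @ f - y) + lam * tv_grad(f.reshape(shape)).ravel()
                f -= step * grad
            return f.reshape(shape)

        # Toy usage with a random matrix in place of the scanner geometry:
        rng = np.random.default_rng(0)
        A = rng.standard_normal((400, 16 * 16)) / 20.0
        f_true = np.zeros((16, 16)); f_true[5:10, 6:12] = 1.0
        f_hat = recon_tv(A, A @ f_true.ravel(), (16, 16))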

  17. An estimation method of MR signal parameters for improved image reconstruction in unilateral scanner

    NASA Astrophysics Data System (ADS)

    Bergman, Elad; Yeredor, Arie; Nevo, Uri

    2013-12-01

    Unilateral NMR devices are used in various applications, including non-destructive testing and well logging, but are not used routinely for imaging. This is mainly due to the inhomogeneous magnetic field (B0) in these scanners. This inhomogeneity results in low sensitivity and further forces the use of the slow single point imaging scan scheme. Improving the measurement sensitivity is therefore an important factor, as it can improve image quality and reduce imaging times. Short imaging times can facilitate the use of this affordable and portable technology for various imaging applications. This work presents a statistical signal-processing method designed to fit the unique characteristics of imaging with a unilateral device. The method improves the imaging capabilities by improving the extraction of image information from the noisy data. This is done by using the redundancy in the acquired MR signal and the noise characteristics. Both types of data were incorporated into a Weighted Least Squares estimation approach. The method's performance was evaluated with a series of imaging acquisitions applied to phantoms. Images were extracted from each measurement with the proposed method and compared to the conventional image reconstruction. All measurements showed a significant improvement in image quality based on the MSE criterion with respect to gold-standard reference images. Integrating this method with further improvements may lead to a marked reduction in imaging times, aiding the use of such scanners in imaging applications.
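
    The Weighted Least Squares step admits a standard closed form. In the expression below, A is a model matrix encoding the redundancy in the acquired signal and W is a weighting matrix built from the measured noise characteristics; both are generic stand-ins for the paper's actual operators:

        \hat{x} = \left(A^{\top} W A\right)^{-1} A^{\top} W y, \qquad W = \Sigma_n^{-1}

    where \Sigma_n is the noise covariance; weighting by \Sigma_n^{-1} is what lets the estimator exploit the noise characteristics mentioned above.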

  18. Climate signal age effects in boreal tree-rings: Lessons to be learned for paleoclimatic reconstructions

    NASA Astrophysics Data System (ADS)

    Konter, Oliver; Büntgen, Ulf; Carrer, Marco; Timonen, Mauri; Esper, Jan

    2016-06-01

    Age-related alternation in the sensitivity of tree-ring width (TRW) to climate variability has been reported for different forest species and environments. The resulting growth-climate response patterns are, however, often inconsistent and similar assessments using maximum latewood density (MXD) are still missing. Here, we analyze climate signal age effects (CSAE, age-related changes in the climate sensitivity of tree growth) in a newly aggregated network of 692 Pinus sylvestris L. TRW and MXD series from northern Fennoscandia. Although summer temperature sensitivity of TRW (rAll = 0.48) ranges below that of MXD (rAll = 0.76), it declines for both parameters as cambial age increases. Assessment of CSAE for individual series further reveals decreasing correlation values as a function of time. This declining signal strength remains temporally robust and negative for MXD, while age-related trends in TRW exhibit resilient meanderings of positive and negative trends. Although CSAE are significant and temporally variable in both tree-ring parameters, MXD is more suitable for the development of climate reconstructions. Our results indicate that sampling of young and old trees, and testing for CSAE, should become routine for TRW and MXD data prior to any paleoclimatic endeavor.

  19. Balanced Sparse Model for Tight Frames in Compressed Sensing Magnetic Resonance Imaging

    PubMed Central

    Liu, Yunsong; Cai, Jian-Feng; Zhan, Zhifang; Guo, Di; Ye, Jing; Chen, Zhong; Qu, Xiaobo

    2015-01-01

    Compressed sensing has been shown to be promising for accelerating magnetic resonance imaging. In this new technology, magnetic resonance images are usually reconstructed by enforcing their sparsity in sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after the application of an analysis operator. The balanced model is a new sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of the frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model achieves the solutions of all three models. It is found that the balanced model has performance comparable with the analysis model. Moreover, both achieve better results than the synthesis model regardless of the value of the balancing parameter. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed algorithms: the accelerated proximal gradient algorithm (APG) and the alternating direction method of multipliers for the balanced model (ADMM-B). PMID:25849209
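
    Schematically, for a Parseval tight frame with analysis operator W (so W*W = I) and a sampling operator A, the balanced model can be written as below; this is a generic statement of the model class, not necessarily the authors' exact notation:

        \min_{\alpha}\; \tfrac{1}{2}\left\| A W^{*} \alpha - y \right\|_2^2 + \lambda \|\alpha\|_1 + \tfrac{\beta}{2}\left\| (I - W W^{*})\,\alpha \right\|_2^2

    The last term penalizes the distance of the coefficients \alpha from the range of W: \beta = 0 recovers the synthesis model, \beta \to \infty the analysis model, and intermediate values interpolate between them, which is the balancing-parameter behavior described above.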

  20. Sparse Bayesian Learning for DOA Estimation with Mutual Coupling

    PubMed Central

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-01-01

    Sparse Bayesian learning (SBL) has brought renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for the joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student's t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to measurement noise. PMID:26501284
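
    The SVD reduction mentioned in the last sentence is commonly implemented as in the sketch below (numpy; array sizes are illustrative): the multi-snapshot data matrix is projected onto its dominant right singular vectors so that the subsequent sparse reconstruction operates on a handful of columns rather than on every snapshot:

        import numpy as np

        def svd_reduce(Y, K):
            """Reduce a multi-snapshot array output Y (M sensors x T snapshots)
            to its K-dimensional signal subspace before sparse reconstruction."""
            U, s, Vh = np.linalg.svd(Y, full_matrices=False)
            return Y @ Vh[:K].conj().T          # M x K reduced data, keeps signal energy

        rng = np.random.default_rng(1)
        Y = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))
        Y_sv = svd_reduce(Y, K=2)               # would feed the SBL iterations in place of Y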

  2. A Non-Uniformly Under-Sampled Blade Tip-Timing Signal Reconstruction Method for Blade Vibration Monitoring

    PubMed Central

    Hu, Zheng; Lin, Jun; Chen, Zhong-Sheng; Yang, Yong-Min; Li, Xue-Jun

    2015-01-01

    High-speed blades are often prone to fatigue due to severe blade vibrations. In particular, synchronous vibrations can cause irreversible damage to the blade. Blade tip-timing (BTT) methods have become a promising way to monitor blade vibrations. However, synchronous vibrations cannot be monitored properly with uniform BTT sampling, so non-equally mounted probes have been used, which makes the sampled signal non-uniform. Since under-sampling is an intrinsic drawback of BTT methods, analyzing non-uniformly under-sampled BTT signals is a big challenge. In this paper, a novel reconstruction method for non-uniformly under-sampled BTT data is presented. The method is based on the periodically non-uniform sampling theorem. First, a mathematical model of the non-uniform BTT sampling process is built; it can be treated as the sum of several uniform sample streams. For each stream, an interpolating function is required to prevent aliasing in the reconstructed signal. Second, simultaneous equations for all interpolating functions in each sub-band are built, and the corresponding solutions are derived to remove unwanted replicas of the original signal caused by the sampling, which may overlay the original signal. Finally, numerical simulations and experiments are carried out to validate the feasibility of the proposed method. The results demonstrate that the accuracy of the reconstructed signal depends on the sampling frequency, the blade vibration frequency, the blade vibration bandwidth, the probe static offset and the number of samples. In practice, both types of blade vibration signals can be reconstructed from non-uniform BTT data acquired with only two probes. PMID:25621612
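
    A minimal simulation of the sampling model in the first step (numpy; the rotor speed, probe angles and vibration frequency are illustrative): each probe contributes one uniform once-per-revolution stream, offset by its mounting angle, and the union of the streams is the periodically non-uniform BTT sequence the method reconstructs from:

        import numpy as np

        f_rot = 100.0                            # rotor frequency, Hz (illustrative)
        probe_angles = np.deg2rad([0.0, 57.0])   # two non-equally mounted probes
        n_revs = 64

        # Each probe yields one uniform stream sampled once per revolution,
        # offset in time by its angular position; the union is periodically non-uniform.
        t = np.sort(np.concatenate([
            (np.arange(n_revs) + theta / (2 * np.pi)) / f_rot
            for theta in probe_angles
        ]))

        f_blade = 230.0                          # blade vibration above the per-probe rate, hence aliased
        x = np.sin(2 * np.pi * f_blade * t)      # under-sampled BTT sequence to be reconstructed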

  3. Scattered radiation in flat-detector based cone-beam CT: propagation of signal, contrast, and noise into reconstructed volumes

    NASA Astrophysics Data System (ADS)

    Wiegert, Jens; Hohmann, Steffen; Bertram, Matthias

    2007-03-01

    This paper presents a novel framework for the systematic assessment of the impact of scattered radiation in flat-detector based cone-beam CT. While it is well known that scattered radiation causes three different types of artifacts in reconstructed images (inhomogeneity artifacts such as cupping and streaks, degradation of contrast, and enhancement of noise), investigations in the literature quantify the impact of scatter mostly only in terms of inhomogeneity artifacts, giving little insight, e.g., into the visibility of low contrast lesions. Therefore, for this study a novel framework has been developed that in addition to normal reconstruction of the CT (HU) number allows for reconstruction of voxelized expectation values of three additional important characteristics of image quality: signal degradation, contrast reduction, and noise variances. The new framework has been applied to projection data obtained with voxelized Monte-Carlo simulations of clinical CT data sets of high spatial resolution. Using these data, the impact of scattered radiation was thoroughly studied for realistic and clinically relevant patient geometries of the head, thorax, and pelvis region. By means of spatially resolved reconstructions of contrast and noise propagation, the image quality of a scenario with using standard antiscatter grids could be evaluated with great detail. Results show the spatially resolved contrast degradation and the spatially resolved expected standard deviation of the noise at any position in the reconstructed object. The new framework represents a general tool for analyzing image quality in reconstructed images.

  4. Reconstructing the nature of the first cosmic sources from the anisotropic 21-cm signal.

    PubMed

    Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad

    2015-03-13

    The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to x-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z∼15), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from x-ray sources. This makes possible a model-independent reconstruction of the x-ray spectrum of the earliest sources of cosmic heating.

  5. Real-Time Sensor Validation, Signal Reconstruction, and Feature Detection for an RLV Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.; Fulton, Christopher E.; Binder, Michael P.; Maul, William A., III; Meyer, Claudia M.

    1998-01-01

    A real-time system for validating sensor health has been developed in support of the reusable launch vehicle program. This system was designed for use in a propulsion testbed as part of an overall effort to improve the safety, diagnostic capability, and cost of operation of the testbed. The sensor validation system was designed and developed at the NASA Lewis Research Center and integrated into a propulsion checkout and control system as part of an industry-NASA partnership, led by Rockwell International for the Marshall Space Flight Center. The system includes modules for sensor validation, signal reconstruction, and feature detection and was designed to maximize portability to other applications. Review of test data from initial integration testing verified real-time operation and showed the system to perform correctly on both hard and soft sensor failure test cases. This paper discusses the design of the sensor validation and supporting modules developed at LeRC and reviews results obtained from initial test cases.

  6. Protein crystal structure from non-oriented, single-axis sparse X-ray data.

    PubMed

    Wierman, Jennifer L; Lan, Ti-Yen; Tate, Mark W; Philipp, Hugh T; Elser, Veit; Gruner, Sol M

    2016-01-01

    X-ray free-electron lasers (XFELs) have inspired the development of serial femtosecond crystallography (SFX) as a method to solve the structure of proteins. SFX datasets are collected from a sequence of protein microcrystals injected across ultrashort X-ray pulses. The idea behind SFX is that diffraction from the intense, ultrashort X-ray pulses leaves the crystal before the crystal is obliterated by the effects of the X-ray pulse. The success of SFX at XFELs has catalyzed interest in analogous experiments at synchrotron-radiation (SR) sources, where data are collected from many small crystals and the ultrashort pulses are replaced by exposure times that are kept short enough to avoid significant crystal damage. The diffraction signal from each short exposure is so 'sparse' in recorded photons that the process of recording the crystal intensity is itself a reconstruction problem. Using the EMC algorithm, a successful reconstruction is demonstrated here in a sparsity regime where there are no Bragg peaks that conventionally would serve to determine the orientation of the crystal in each exposure. In this proof-of-principle experiment, a hen egg-white lysozyme (HEWL) crystal rotating about a single axis was illuminated by an X-ray beam from an X-ray generator to simulate the diffraction patterns of microcrystals from synchrotron radiation. Millions of these sparse frames, typically containing only ∼200 photons per frame, were recorded using a fast-framing detector. It is shown that reconstruction of three-dimensional diffraction intensity is possible using the EMC algorithm, even with these extremely sparse frames and without knowledge of the rotation angle. Further, the reconstructed intensity can be phased and refined to solve the protein structure using traditional crystallographic software. This suggests that synchrotron-based serial crystallography of micrometre-sized crystals can be practical with the aid of the EMC algorithm even in cases where the data are

  7. Visual tracking via robust multitask sparse prototypes

    NASA Astrophysics Data System (ADS)

    Zhang, Huanlong; Hu, Shiqiang; Yu, Junyang

    2015-03-01

    Sparse representation has been applied to an online subspace learning-based tracking problem. To handle partial occlusion effectively, some researchers introduce l1 regularization into principal component analysis (PCA) reconstruction. However, in these traditional tracking methods, the representation of each object observation is often viewed as an individual task, so the inter-relationship between PCA basis vectors is ignored. We propose a new online visual tracking algorithm with multitask sparse prototypes, which combines multitask sparse learning with PCA-based subspace representation. We first extend a visual tracking algorithm with sparse prototypes to the multitask learning framework to mine inter-relations between subtasks. Then, to avoid the problem that enforcing all subtasks to share the same structure may degrade tracking results, we impose group sparse constraints on the coefficients of the PCA basis vectors and element-wise sparse constraints on the error coefficients, respectively. Finally, we show that the proposed optimization problem can be solved effectively using the accelerated proximal gradient method with fast convergence. Experimental results compared with state-of-the-art tracking methods demonstrate that the proposed algorithm achieves favorable performance when the object undergoes partial occlusion, motion blur, and illumination changes.

  8. Sparse Representation for Infrared Dim Target Detection via a Discriminative Over-Complete Dictionary Learned Online

    PubMed Central

    Li, Zheng-Zhou; Chen, Jing; Hou, Qian; Fu, Hong-Xia; Dai, Zhen; Jin, Gang; Li, Ru-Zhang; Liu, Chang-Ju

    2014-01-01

    It is difficult for structural over-complete dictionaries such as the Gabor functions, and for discriminative over-complete dictionaries that are learned offline and classified manually, to represent natural images with ideal sparseness and to enhance the difference between background clutter and target signals. This paper proposes an infrared dim target detection approach based on sparse representation over a discriminative over-complete dictionary. An adaptive morphological over-complete dictionary is trained and constructed online according to the content of the infrared image by the K-singular value decomposition (K-SVD) algorithm. The adaptive morphological over-complete dictionary is then divided automatically into a target over-complete dictionary describing target signals and a background over-complete dictionary embedding background, by the criterion that atoms in the target over-complete dictionary can be decomposed more sparsely over a Gaussian over-complete dictionary than those in the background over-complete dictionary. This discriminative over-complete dictionary not only captures significant features of background clutter and dim targets better than a structural over-complete dictionary, but also strengthens the sparse feature difference between background and target more efficiently than a discriminative over-complete dictionary learned offline and classified manually. The target and background clutter can be sparsely decomposed over their corresponding over-complete dictionaries, yet cannot be sparsely decomposed over the opposite over-complete dictionary, so their residuals after reconstruction by the prescribed number of target and background atoms differ markedly. Experiments show that the proposed approach not only improves sparsity more efficiently but also enhances the performance of small target detection more effectively. PMID:24871988

  10. Analog system for computing sparse codes

    DOEpatents

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition that solve a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
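
    A minimal sketch of LCA dynamics as described above (numpy; the threshold, time constant and sizes are illustrative): each node leaks, is driven by the feed-forward match to its atom, and inhibits its neighbors in proportion to atom overlap, so that only a few thresholded outputs stay active:

        import numpy as np

        def lca(y, Phi, lam=0.1, tau=0.01, n_steps=500):
            """Locally competitive algorithm: internal states u evolve under
            leaky integration, a feed-forward drive, and lateral inhibition."""
            b = Phi.T @ y                               # feed-forward drive
            G = Phi.T @ Phi - np.eye(Phi.shape[1])      # inhibition weights between nodes
            u = np.zeros(Phi.shape[1])
            for _ in range(n_steps):
                a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # thresholded outputs
                u += tau * (b - u - G @ a)              # competition dynamics
            return a

        rng = np.random.default_rng(0)
        Phi = rng.standard_normal((32, 128)); Phi /= np.linalg.norm(Phi, axis=0)
        a0 = np.zeros(128); a0[[5, 40]] = 1.0
        code = lca(Phi @ a0, Phi)                       # sparse code approximating a0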

  11. Sparse pseudospectral approximation method

    NASA Astrophysics Data System (ADS)

    Constantine, Paul G.; Eldred, Michael S.; Phipps, Eric T.

    2012-07-01

    Multivariate global polynomial approximations - such as polynomial chaos or stochastic collocation methods - are now in widespread use for sensitivity analysis and uncertainty quantification. The pseudospectral variety of these methods uses a numerical integration rule to approximate the Fourier-type coefficients of a truncated expansion in orthogonal polynomials. For problems in more than two or three dimensions, a sparse grid numerical integration rule offers accuracy with a smaller node set compared to tensor product approximation. However, when using a sparse rule to approximately integrate these coefficients, one often finds unacceptable errors in the coefficients associated with higher degree polynomials. By reexamining Smolyak's algorithm and exploiting the connections between interpolation and projection in tensor product spaces, we construct a sparse pseudospectral approximation method that accurately reproduces the coefficients of basis functions that naturally correspond to the sparse grid integration rule. The compelling numerical results show that this is the proper way to use sparse grid integration rules for pseudospectral approximation.

  12. Paleoenvironmental reconstruction of Lake Azul (Azores archipelago, Portugal) and its implications for the NAO signal.

    NASA Astrophysics Data System (ADS)

    Jesús Rubio, Maria; Sanchez, Guiomar; Saez, Alberto; Vázquez-Loureiro, David; Bao, Roberto; José Pueyo, Juan; Gómez-Paccard, Miriam; Gonçalves, Vitor; Raposeiro, Pedro M.; Francus, Pierre; Hernández, Armand; Margalef, Olga; Buchaca, Teresa; Pla, Sergi; Barreiro-Lostres, Fernando; Valero-Garcés, Blas L.; Giralt, Santiago

    2013-04-01

    A radiocarbon date at the base of this fine mixture shows that the record spans the last ca 650 cal. years B.P., which corresponds to the last recorded eruption. The dark brown layers are dominated by organic matter (low XRF signal and almost no diatoms), whereas the light brown facies are mainly made up of terrigenous particles (high XRF signal and a high content of benthic diatoms) and vascular plant macroremains. Bulk organic matter analyses have revealed that algae constitute the main component of the organic fraction. However, the organic matter in the dark layers is composed of C3 plants, consistent with the clastic nature of this facies deposited during flood events. Increased precipitation, governed by the negative phase of the NAO, together with the steep borders of the Sete Cidades crater, prompts a substantial increase in catchment erosion and hence enhanced runoff into Lake Azul, producing the flood events. Therefore, identifying, characterizing and counting the dark layers would allow reconstruction of the intensity and periodicity of the negative phase of the NAO climate mode.

  13. Inverse polynomial reconstruction method in DCT domain

    NASA Astrophysics Data System (ADS)

    Dadkhahi, Hamid; Gotchev, Atanas; Egiazarian, Karen

    2012-12-01

    The discrete cosine transform (DCT) offers superior energy compaction properties for a large class of functions and has been employed as a standard tool in many signal and image processing applications. However, it suffers from spurious behavior in the vicinity of edge discontinuities in piecewise smooth signals. To leverage the sparse representation provided by the DCT, in this article, we derive a framework for the inverse polynomial reconstruction in the DCT expansion. It yields the expansion of a piecewise smooth signal in terms of polynomial coefficients, obtained from the DCT representation of the same signal. Taking advantage of this framework, we show that it is feasible to recover piecewise smooth signals from a relatively small number of DCT coefficients with high accuracy. Furthermore, automatic methods based on minimum description length principle and cross-validation are devised to select the polynomial orders, as a requirement of the inverse polynomial reconstruction method in practical applications. The developed framework can considerably enhance the performance of the DCT in sparse representation of piecewise smooth signals. Numerical results show that denoising and image approximation algorithms based on the proposed framework indicate significant improvements over wavelet counterparts for this class of signals.
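
    One way to realize the DCT-to-polynomial mapping is a least-squares fit in the transform domain, as sketched below for a single smooth segment (scipy/numpy). The polynomial order and the number of retained coefficients are fixed here, whereas the article selects the order automatically by MDL or cross-validation:

        import numpy as np
        from scipy.fft import dct

        N, order, K = 256, 8, 40                  # signal length, polynomial order, retained DCT terms
        x = np.linspace(-1, 1, N)
        signal = np.exp(x) * np.sin(3 * x)        # one smooth segment

        # DCT of each monomial: columns of the transform-domain design matrix.
        V = np.column_stack([dct(x**m, norm='ortho') for m in range(order + 1)])
        c = dct(signal, norm='ortho')

        p, *_ = np.linalg.lstsq(V[:K], c[:K], rcond=None)   # polynomial coefficients
        recon = sum(p[m] * x**m for m in range(order + 1))  # reconstructed segment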

  14. Noise and signal properties in PSF-based fully 3D PET image reconstruction: an experimental evaluation

    NASA Astrophysics Data System (ADS)

    Tong, S.; Alessio, A. M.; Kinahan, P. E.

    2010-03-01

    The addition of accurate system modeling in PET image reconstruction results in images with distinct noise texture and characteristics. In particular, the incorporation of point spread functions (PSF) into the system model has been shown to visually reduce image noise, but the noise properties have not been thoroughly studied. This work offers a systematic evaluation of noise and signal properties in different combinations of reconstruction methods and parameters. We evaluate two fully 3D PET reconstruction algorithms: (1) OSEM with exact scanner line of response modeled (OSEM+LOR), (2) OSEM with line of response and a measured point spread function incorporated (OSEM+LOR+PSF), in combination with the effects of four post-reconstruction filtering parameters and 1-10 iterations, representing a range of clinically acceptable settings. We used a modified NEMA image quality (IQ) phantom, which was filled with 68Ge and consisted of six hot spheres of different sizes with a target/background ratio of 4:1. The phantom was scanned 50 times in 3D mode on a clinical system to provide independent noise realizations. Data were reconstructed with OSEM+LOR and OSEM+LOR+PSF using different reconstruction parameters, and our implementations of the algorithms match the vendor's product algorithms. With access to multiple realizations, background noise characteristics were quantified with four metrics. Image roughness and the standard deviation image measured the pixel-to-pixel variation; background variability and ensemble noise quantified the region-to-region variation. Image roughness is the image noise perceived when viewing an individual image. At matched iterations, the addition of PSF leads to images with less noise defined as image roughness (reduced by 35% for unfiltered data) and as the standard deviation image, while it has no effect on background variability or ensemble noise. In terms of signal to noise performance, PSF-based reconstruction has a 7% improvement in
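
    The four background-noise metrics lend themselves to a compact computation over the stack of independent realizations. The sketch below is one plausible reading of their definitions (numpy; the ROI masks and values are placeholders, not the phantom geometry used in the study):

        import numpy as np

        def noise_metrics(imgs, roi, bg_rois):
            """imgs: (R, H, W) stack of independent reconstructions of the same phantom.
            roi: boolean mask of one background region; bg_rois: list of such masks."""
            std_img = imgs.std(axis=0)                                      # pixel-wise std across realizations
            roughness = np.mean([im[roi].std() / im[roi].mean() for im in imgs])   # pixel-to-pixel, within image
            bg_var = np.mean([np.std([im[r].mean() for r in bg_rois]) for im in imgs])  # region-to-region
            ensemble = np.std([im[roi].mean() for im in imgs])              # realization-to-realization
            return std_img, roughness, bg_var, ensemble

        imgs = np.random.default_rng(0).normal(100.0, 5.0, (50, 64, 64))    # stand-in for 50 scans
        roi = np.zeros((64, 64), bool); roi[20:30, 20:30] = True
        bg_rois = [roi, np.roll(roi, 15, axis=1)]
        std_img, rough, bg_var, ens = noise_metrics(imgs, roi, bg_rois)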

  15. Temporal Super Resolution Enhancement of Echocardiographic Images Based on Sparse Representation.

    PubMed

    Gifani, Parisa; Behnam, Hamid; Haddadi, Farzan; Sani, Zahra Alizadeh; Shojaeifard, Maryam

    2016-01-01

    A challenging issue for echocardiographic image interpretation is the accurate analysis of small transient motions of myocardium and valves during real-time visualization. A higher frame rate video may reduce this difficulty, and temporal super resolution (TSR) is useful for illustrating the fast-moving structures. In this paper, we introduce a novel framework that optimizes TSR enhancement of echocardiographic images by utilizing temporal information and sparse representation. The goal of this method is to increase the frame rate of echocardiographic videos, and therefore enable more accurate analyses of moving structures. For the proposed method, we first derived temporal information by extracting intensity variation time curves (IVTCs) assessed for each pixel. We then designed both low-resolution and high-resolution overcomplete dictionaries based on prior knowledge of the temporal signals and a set of prespecified known functions. The IVTCs can then be described as linear combinations of a few prototype atoms in the low-resolution dictionary. We used the Bayesian compressive sensing (BCS) sparse recovery algorithm to find the sparse coefficients of the signals. We extracted the sparse coefficients and the corresponding active atoms in the low-resolution dictionary to construct new sparse coefficients corresponding to the high-resolution dictionary. Using the estimated atoms and the high-resolution dictionary, a new IVTC with more samples was constructed. Finally, by placing the new IVTC signals in the original IVTC positions, we were able to reconstruct the original echocardiography video with more frames. The proposed method does not require training of low-resolution and high-resolution dictionaries, nor does it require motion estimation; it does not blur fast-moving objects, and does not have blocking artifacts.

  16. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data

    SciTech Connect

    Chartrand, Rick

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
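
    The nonconvex step can be illustrated with a generic iteratively reweighted least-squares solver for an l_p objective with p < 1 (numpy); this dense-matrix version is only a stand-in for the paper's Fourier-based algorithms:

        import numpy as np

        def irls_lp(A, y, p=0.5, n_iter=30, eps=1.0):
            """Iteratively reweighted least squares for min ||x||_p^p s.t. Ax = y, p < 1.
            Each iteration solves a weighted minimum-norm problem; eps is annealed."""
            x = np.linalg.pinv(A) @ y
            for _ in range(n_iter):
                w = (x**2 + eps) ** (p / 2 - 1)           # large entries get cheap weights
                Q = np.diag(1.0 / w)
                x = Q @ A.T @ np.linalg.solve(A @ Q @ A.T, y)
                eps = max(eps / 10, 1e-8)
            return x

        rng = np.random.default_rng(2)
        A = rng.standard_normal((40, 128))
        x0 = np.zeros(128); x0[rng.choice(128, 10, replace=False)] = rng.standard_normal(10)
        x_hat = irls_lp(A, A @ x0)                        # recovers x0 from 40 measurements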

  17. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  18. Multi-frame image super resolution based on sparse coding.

    PubMed

    Kato, Toshiyuki; Hino, Hideitsu; Murata, Noboru

    2015-06-01

    An image super-resolution method from multiple observations of low-resolution images is proposed. The method is based on sub-pixel-accuracy block matching for estimating the relative displacements of the observed images, and on sparse signal representation for estimating the corresponding high-resolution image, where the correspondence between high- and low-resolution images is modeled by a certain degradation process. Relative displacements of small patches of observed low-resolution images are accurately estimated by a computationally efficient block matching method. The matching scores of the block matching are used to select a subset of low-resolution patches for reconstructing a high-resolution patch; that is, an adaptive selection of informative low-resolution images is realized. The proposed method is shown to perform comparably or superiorly to conventional super-resolution methods through experiments using various images.

  19. An Improved Sparse Representation over Learned Dictionary Method for Seizure Detection.

    PubMed

    Li, Junhui; Zhou, Weidong; Yuan, Shasha; Zhang, Yanli; Li, Chengcheng; Wu, Qi

    2016-02-01

    Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. This seizure detection method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can sparsely represent the testing samples more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the dictionaries of seizure and nonseizure are respectively learned from the original ictal and interictal training samples with an online dictionary optimization algorithm to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Eventually, the test samples are classified into two distinct categories, seizure or nonseizure, by comparing the reconstruction residuals. An average segment-based sensitivity of 95.45%, a specificity of 99.08%, and an event-based sensitivity of 94.44%, with a false detection rate of 0.23/h and an average latency of -5.14 s, have been achieved with our proposed method.

  20. Robust face recognition via sparse representation.

    PubMed

    Wright, John; Yang, Allen Y; Ganesh, Arvind; Sastry, S Shankar; Ma, Yi

    2009-02-01

    We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by l1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as Eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
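
    The decision rule reduces to a few lines once an l1 solver is available. The sketch below uses scikit-learn's Lasso as a stand-in for the paper's l1-minimization, and assumes D stacks the unit-norm, vectorized training images column-wise with one label per column:

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(y, D, labels, alpha=0.01):
            """Sparse-representation classification: code y over all training
            samples D (d x n), then pick the class whose coefficients
            reconstruct y with the smallest residual."""
            x = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, y).coef_
            residuals = {}
            for c in np.unique(labels):
                xc = np.where(labels == c, x, 0.0)      # keep only class-c coefficients
                residuals[c] = np.linalg.norm(y - D @ xc)
            return min(residuals, key=residuals.get)

        # Toy usage with random "training images" in 3 classes:
        rng = np.random.default_rng(0)
        D = rng.standard_normal((300, 60)); D /= np.linalg.norm(D, axis=0)
        labels = np.repeat([0, 1, 2], 20)
        print(src_classify(D[:, 7] + 0.01 * rng.standard_normal(300), D, labels))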

  1. X-ray computed tomography using curvelet sparse regularization

    SciTech Connect

    Wieczorek, Matthias Vogel, Jakob; Lasser, Tobias; Frikel, Jürgen; Demaret, Laurent; Eggl, Elena; Pfeiffer, Franz; Kopp, Felix; Noël, Peter B.

    2015-04-15

    Purpose: Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. Methods: In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Results: Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method’s strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. Conclusions: The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  2. Characterizing heterogeneity among virus particles by stochastic 3D signal reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Nan; Gong, Yunye; Wang, Qiu; Zheng, Yili; Doerschuk, Peter C.

    2015-09-01

    In single-particle cryo electron microscopy, many electron microscope images, each of a single instance of a biological particle such as a virus or a ribosome, are measured, and the 3-D electron scattering intensity of the particle is reconstructed by computation. Because each instance of the particle is imaged separately, it should be possible to characterize the heterogeneity of the different instances of the particle as well as produce a nominal reconstruction of the particle. In this paper, such an algorithm is described and demonstrated on the bacteriophage Hong Kong 97. The algorithm is a statistical maximum likelihood estimator computed by an expectation-maximization algorithm implemented in Matlab software.

  3. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered on studies of the memory itself and on the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
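
    The associative read/write mechanics usually attributed to SDM can be sketched compactly (numpy; the word length, number of hard locations and activation radius are illustrative): writes add the data, bit-signed, into the counters of all hard locations near the write address, and reads take a majority vote over the counters of locations near the cue, which is what makes retrieval from partially matching cues possible:

        import numpy as np

        rng = np.random.default_rng(0)
        N, M, radius = 256, 1000, 107            # word length, hard locations, activation radius

        addresses = rng.integers(0, 2, (M, N))   # fixed random hard-location addresses
        counters = np.zeros((M, N), dtype=int)   # one counter vector per location

        def active(addr):
            """Locations whose address is within Hamming distance `radius` of addr."""
            return np.count_nonzero(addresses != addr, axis=1) <= radius

        def write(addr, data):
            counters[active(addr)] += np.where(data == 1, 1, -1)

        def read(addr):
            """Majority vote over the counters of all active locations."""
            return (counters[active(addr)].sum(axis=0) > 0).astype(int)

        pattern = rng.integers(0, 2, N)
        write(pattern, pattern)                   # autoassociative store
        noisy = pattern.copy()
        noisy[rng.choice(N, 20, replace=False)] ^= 1
        recovered = read(noisy)                   # retrieval from a partial, noisy cue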

  4. Pollen reconstructions, tree-rings and early climate data from Minnesota, USA: a cautionary tale of bias and signal attenuation

    NASA Astrophysics Data System (ADS)

    St-Jacques, J. M.; Cumming, B. F.; Smol, J. P.; Sauchyn, D.

    2015-12-01

    High-resolution proxy reconstructions are essential to assess the rate and magnitude of anthropogenic global warming. High-resolution pollen records are being critically examined for the production of accurate climate reconstructions of the last millennium, often as extensions of tree-ring records. Past climate inference from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have directly altered vegetation, and hence modern pollen deposition is a product of landscape disturbance and climate, unlike in the past with its dominance of climate-derived processes. This could cause serious bias in pollen reconstructions. In the US Midwest, direct human impacts have greatly altered the vegetation and pollen rain since Euro-American settlement in the mid-19th century. Using instrumental climate data from the early 1800s from Fort Snelling (Minnesota), we assessed the bias from the conventional method of inferring climate from pollen assemblages in comparison to a calibration set from pre-settlement pollen assemblages and the earliest instrumental climate data. The pre-settlement calibration set provides more accurate reconstructions of 19th century temperature than the modern set does. When both calibration sets are used to reconstruct temperatures since AD 1116 from a varve-dated pollen record from Lake Mina, Minnesota, the conventional method produces significant low-frequency (centennial-scale) signal attenuation and a positive bias of 0.8-1.7 °C, resulting in an overestimation of Little Ice Age temperature and an underestimation of anthropogenic warming. We also compared the pollen-inferred moisture reconstruction to a four-century tree-ring-inferred moisture record from Minnesota and the Dakotas, which shows that the tree-ring reconstruction is biased towards dry conditions and records wet periods relatively poorly, giving a false impression of regional aridity. The tree-ring chronology also suggests varve

  5. Protein crystal structure from non-oriented, single-axis sparse X-ray data

    PubMed Central

    Wierman, Jennifer L.; Lan, Ti-Yen; Tate, Mark W.; Philipp, Hugh T.; Elser, Veit; Gruner, Sol M.

    2016-01-01

    X-ray free-electron lasers (XFELs) have inspired the development of serial femtosecond crystallography (SFX) as a method to solve the structure of proteins. SFX datasets are collected from a sequence of protein microcrystals injected across ultrashort X-ray pulses. The idea behind SFX is that diffraction from the intense, ultrashort X-ray pulses leaves the crystal before the crystal is obliterated by the effects of the X-ray pulse. The success of SFX at XFELs has catalyzed interest in analogous experiments at synchrotron-radiation (SR) sources, where data are collected from many small crystals and the ultrashort pulses are replaced by exposure times that are kept short enough to avoid significant crystal damage. The diffraction signal from each short exposure is so ‘sparse’ in recorded photons that the process of recording the crystal intensity is itself a reconstruction problem. Using the EMC algorithm, a successful reconstruction is demonstrated here in a sparsity regime where there are no Bragg peaks that conventionally would serve to determine the orientation of the crystal in each exposure. In this proof-of-principle experiment, a hen egg-white lysozyme (HEWL) crystal rotating about a single axis was illuminated by an X-ray beam from an X-ray generator to simulate the diffraction patterns of microcrystals from synchrotron radiation. Millions of these sparse frames, typically containing only ∼200 photons per frame, were recorded using a fast-framing detector. It is shown that reconstruction of three-dimensional diffraction intensity is possible using the EMC algorithm, even with these extremely sparse frames and without knowledge of the rotation angle. Further, the reconstructed intensity can be phased and refined to solve the protein structure using traditional crystallographic software. This suggests that synchrotron-based serial crystallography of micrometre-sized crystals can be practical with the aid of the EMC algorithm even in cases where the data

  6. Millennial precipitation reconstruction for the Jemez Mountains, New Mexico, reveals changing drought signal

    USGS Publications Warehouse

    Touchan, R.; Woodhouse, C.A.; Meko, D.M.; Allen, C.

    2011-01-01

    Drought is a recurring phenomenon in the American Southwest. Since the frequency and severity of hydrologic droughts and other hydroclimatic events are of critical importance to the ecology and rapidly growing human population of this region, knowledge of long-term natural hydroclimatic variability is valuable for resource managers and policy-makers. An October-June precipitation reconstruction for the period AD 824-2007 was developed from multi-century tree-ring records of Pseudotsuga menziesii (Douglas-fir), Pinus strobiformis (Southwestern white pine) and Pinus ponderosa (Ponderosa pine) for the Jemez Mountains in Northern New Mexico. Calibration and verification statistics for the period 1896-2007 show a high level of skill, and account for a significant portion of the observed variance (>50%) irrespective of which period is used to develop or verify the regression model. Split-sample validation supports our use of a reconstruction model based on the full period of reliable observational data (1896-2007). A recent segment of the reconstruction (2000-2006) emerges as the driest 7-year period sensed by the trees in the entire record. That this period was only moderately dry in precipitation anomaly likely indicates accentuated stress from other factors, such as warmer temperatures. Correlation field maps of actual and reconstructed October-June total precipitation, sea surface temperatures and 500-mb geopotential heights show characteristics that are similar to those indicative of El Niño-Southern Oscillation patterns, particularly with regard to ocean and atmospheric conditions in the equatorial and north Pacific. Our 1184-year reconstruction of hydroclimatic variability provides long-term perspective on current and 20th century wet and dry events in Northern New Mexico, is useful to guide expectations of future variability, aids sustainable water management, provides scenarios for drought planning and as inputs for hydrologic models under a broader range of

  7. Reconstruction of compressive multispectral sensing data using a multilayered conditional random field approach

    NASA Astrophysics Data System (ADS)

    Kazemzadeh, Farnoud; Shafiee, Mohammad J.; Wong, Alexander; Clausi, David A.

    2014-09-01

    The prevalence of compressive sensing is continually growing in all facets of imaging science. Compressive sensing allows for the capture and reconstruction of an entire signal from a sparse (under-sampled), yet sufficient, set of measurements that is representative of the target being observed. This compressive sensing strategy reduces the duration of the data capture, the size of the acquired data, and the cost and complexity of the imaging hardware, while preserving the necessary underlying information. Compressive sensing systems require the accompaniment of advanced reconstruction algorithms to reconstruct complete signals from the sparse measurements made. Here, a new reconstruction algorithm is introduced specifically for the reconstruction of compressive multispectral (MS) sensing data that allows for high-quality reconstruction from acquisitions at sub-Nyquist rates. We propose a multilayered conditional random field (MCRF) model, which extends the CRF model by incorporating two joint layers of certainty and estimated states. The proposed algorithm treats the reconstruction of each spectral channel as a MCRF given the sparse MS measurements. Since the observations are incomplete, the MCRF incorporates an extra layer determining the certainty of the measurements. The proposed MCRF approach was evaluated using simulated compressive MS data acquisitions, and is shown to enable fast acquisition of MS sensing data with reduced imaging hardware cost and complexity.

  8. Structured Multifrontal Sparse Solver

    2014-05-01

    StruMF is an algebraic structured preconditioner for the iterative solution of large sparse linear systems. The preconditioner corresponds to a multifrontal variant of sparse LU factorization in which some dense blocks of the factors are approximated with low-rank matrices. It is algebraic in that it only requires the linear system itself, and the approximation threshold that determines the accuracy of individual low-rank approximations. Favourable rank properties are obtained using a block partitioning which is a refinement of the partitioning induced by nested dissection ordering.
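
    The low-rank idea above can be made concrete in a few lines: a dense block of the factor is replaced by truncated-SVD factors whose rank is set by the approximation threshold. The block below is a random stand-in, not a frontal matrix from an actual multifrontal factorization.

      # Truncated-SVD compression of a dense block (illustrative only).
      import numpy as np

      def compress_block(B, tol):
          """Return (U, V) with B ~ U @ V, keeping singular values above tol."""
          U, s, Vt = np.linalg.svd(B, full_matrices=False)
          k = int(np.sum(s > tol))          # numerical rank at this threshold
          return U[:, :k] * s[:k], Vt[:k]   # stores k*(m+n) numbers instead of m*n

      rng = np.random.default_rng(1)
      B = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 200))
      U, V = compress_block(B, tol=1e-10)
      print(U.shape, np.allclose(B, U @ V))  # (200, 5) True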

  9. Simultaneous Greedy Analysis Pursuit for compressive sensing of multi-channel ECG signals.

    PubMed

    Avonds, Yurrit; Liu, Yipeng; Van Huffel, Sabine

    2014-01-01

    This paper addresses compressive sensing for multi-channel ECG. Compared to the traditional sparse signal recovery approach which decomposes the signal into the product of a dictionary and a sparse vector, the recently developed cosparse approach exploits sparsity of the product of an analysis matrix and the original signal. We apply the cosparse Greedy Analysis Pursuit (GAP) algorithm for compressive sensing of ECG signals. Moreover, to reduce processing time, classical single-channel GAP is generalized to the multi-channel GAP algorithm, which simultaneously reconstructs multiple signals with similar support. Numerical experiments show that the proposed method outperforms the classical sparse multi-channel greedy algorithms in terms of accuracy and the single-channel cosparse approach in terms of processing speed.

  10. Increasing signal-to-noise ratio of reconstructed digital holograms by using light spatial noise portrait of camera's photosensor

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.

    2015-01-01

    Digital holography is a technique which includes recording of an interference pattern with a digital photosensor, processing of the obtained holographic data, and reconstruction of the object wavefront. Increasing the signal-to-noise ratio (SNR) of reconstructed digital holograms is especially important in such fields as image encryption, pattern recognition, and static and dynamic display of 3D scenes. In this paper, compensation of the photosensor light spatial noise portrait (LSNP) to increase the SNR of reconstructed digital holograms is proposed. To verify the proposed method, numerical experiments with computer generated Fresnel holograms with resolution equal to 512×512 elements were performed. Registration of shots with a digital camera Canon EOS 400D was simulated. It is shown that use of the averaging over frames method alone increases SNR only up to 4 times, with further increase limited by spatial noise. Application of the LSNP compensation method in conjunction with the averaging over frames method allows for a 10 times SNR increase. This value was obtained for an LSNP measured with 20% error. If a more accurate LSNP is used, SNR can be increased up to 20 times.
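
    A toy version of the two steps described above - averaging over frames, then subtracting a previously measured spatial noise pattern - might look as follows; the image, noise levels and frame count are invented for illustration.

      # Frame averaging plus fixed-pattern (LSNP-style) noise subtraction.
      import numpy as np

      rng = np.random.default_rng(2)
      H = rng.random((512, 512))                    # "true" hologram intensity
      lsnp = 0.05 * rng.standard_normal(H.shape)    # fixed spatial noise pattern

      frames = [H + lsnp + 0.1 * rng.standard_normal(H.shape) for _ in range(32)]

      avg = np.mean(frames, axis=0)   # temporal noise falls as 1/sqrt(n_frames)
      corrected = avg - lsnp          # spatial noise removed by compensation

      for img, label in [(frames[0], "single"), (avg, "averaged"),
                         (corrected, "compensated")]:
          print(label, "RMS error:", np.sqrt(np.mean((img - H) ** 2)).round(4))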

  11. Non-local total-variation (NLTV) minimization combined with reweighted L1-norm for compressed sensing CT reconstruction

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Chen, Josephine; Wang, Adam; Chuang, Cynthia; Held, Mareike; Pouliot, Jean

    2016-09-01

    The compressed sensing (CS) technique has been employed to reconstruct CT/CBCT images from fewer projections, as it is designed to recover a sparse signal from highly under-sampled measurements. Since the CT image itself cannot be sparse, a variety of transforms have been developed to make the image sufficiently sparse. The total-variation (TV) transform with local image gradient in L1-norm was adopted in most cases. This approach, however, which utilizes very local information and penalizes the weight at a constant rate regardless of different degrees of spatial gradient, may not produce high-quality reconstructed images from noise-contaminated CT projection data. This work presents a new non-local operator of total-variation (NLTV) to overcome the deficits stated above by utilizing a more global search and non-uniform weight penalization in reconstruction. To further improve the reconstructed results, a reweighted L1-norm that approximates the ideal sparse signal recovery of the L0-norm is incorporated into the NLTV reconstruction with additional iterations. This study tested the proposed reconstruction method (reweighted NLTV) on under-sampled projections of 4 objects and 5 experiments (1 digital phantom with low and high noise scenarios, 1 pelvic CT, and 2 CBCT images). We assessed its performance against the conventional TV, NLTV and reweighted TV transforms in tissue contrast, reconstruction accuracy, and imaging resolution by comparing the contrast-noise-ratio (CNR), normalized root-mean-square error (nRMSE), and profiles of the reconstructed images. Relative to the conventional NLTV, combining the reweighted L1-norm with NLTV further enhanced the CNRs by 2-4 times and improved reconstruction accuracy. Overall, except for the digital phantom with the low-noise simulation, our proposed algorithm produced the reconstructed image with the lowest nRMSEs and the highest CNRs for each experiment.
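
    The reweighted L1 idea can be sketched in isolation from the NLTV transform: an inner weighted-L1 solver (plain ISTA here) is wrapped in outer iterations that re-derive the weights from the current estimate, so small entries are penalized harder. The generic random sensing matrix, sizes and constants are assumptions, not the authors' CT system model.

      # Iteratively reweighted L1 (IRL1) around an ISTA inner solver.
      import numpy as np

      def ista_weighted_l1(A, y, w, lam=0.05, n_iter=300):
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = x - A.T @ (A @ x - y) / L    # gradient step on the data term
              x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
          return x

      rng = np.random.default_rng(3)
      A = rng.standard_normal((60, 200)) / np.sqrt(60)
      x_true = np.zeros(200)
      x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
      y = A @ x_true

      w = np.ones(200)
      for _ in range(4):                       # outer reweighting iterations
          x = ista_weighted_l1(A, y, w)
          w = 1.0 / (np.abs(x) + 1e-3)         # approximates the L0 penalty
      print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))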

  12. On ECG reconstruction using weighted-compressive sensing.

    PubMed

    Zonoobi, Dornoosh; Kassim, Ashraf A

    2014-06-01

    The potential of the new weighted-compressive sensing approach for efficient reconstruction of electrocardiograph (ECG) signals is investigated. This is motivated by the observation that ECG signals are hugely sparse in the frequency domain and the sparsity changes slowly over time. The underlying idea of this approach is to extract an estimated probability model for the signal of interest, and then use this model to guide the reconstruction process. The authors show that the weighted-compressive sensing approach is able to achieve reconstruction performance comparable with the current state-of-the-art discrete wavelet transform-based method, but with substantially less computational cost to enable it to be considered for use in the next generation of miniaturised wearable ECG monitoring devices.

  13. On ECG reconstruction using weighted-compressive sensing

    PubMed Central

    Kassim, Ashraf A.

    2014-01-01

    The potential of the new weighted-compressive sensing approach for efficient reconstruction of electrocardiograph (ECG) signals is investigated. This is motivated by the observation that ECG signals are hugely sparse in the frequency domain and the sparsity changes slowly over time. The underlying idea of this approach is to extract an estimated probability model for the signal of interest, and then use this model to guide the reconstruction process. The authors show that the weighted-compressive sensing approach is able to achieve reconstruction performance comparable with the current state-of-the-art discrete wavelet transform-based method, but with substantially less computational cost to enable it to be considered for use in the next generation of miniaturised wearable ECG monitoring devices. PMID:26609381

  14. A bootstrap algorithm for temporal signal reconstruction in the presence of noise from its fractional Fourier transformed intensity spectra

    SciTech Connect

    Tan, Cheng-Yang; /Fermilab

    2011-02-01

    A bootstrap algorithm for reconstructing the temporal signal from four of its fractional Fourier intensity spectra in the presence of noise is described. An optical arrangement is proposed which realises the bootstrap method for the measurement of ultrashort laser pulses. The measurement of short laser pulses which are less than 1 ps is an ongoing challenge in optical physics. One reason is that no oscilloscope exists today which can directly measure the time structure of these pulses, and so it becomes necessary to invent other techniques which indirectly provide the necessary information for temporal pulse reconstruction. One method called FROG (frequency resolved optical gating) has been in use since 1991 and is one of the most popular methods for recovering these types of short pulses. The idea behind FROG is the use of multiple time-correlated pulse measurements in the frequency domain for the reconstruction. Multiple data sets are required because only intensity information is recorded and not phase, and thus by collecting multiple data sets there are enough redundant measurements to yield the original time structure, though not necessarily uniquely (or even up to an arbitrary constant phase offset). The objective of this paper is to describe another method which is simpler than FROG. Instead of collecting many auto-correlated data sets, only two spectral intensity measurements of the temporal signal are needed in the absence of noise. The first can be from the intensity components of its usual Fourier transform and the second from its FrFT (fractional Fourier transform). In the presence of noise, a minimum of four measurements are required, with the same FrFT order but with two different apertures. Armed with these two or four measurements, a unique solution up to a constant phase offset can be constructed.

  15. An infrared image super-resolution reconstruction method based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei

    2016-05-01

    Limited by the properties of the infrared detector and camera lens, infrared images often lack detail and appear visually indistinct. The spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this paper presents a single image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference operation-based sparse transformation method and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction issue in CS theory. In our work, the sparse transformation matrix is obtained through a difference operation on the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with a redundant dictionary obtained by sample training, such as K-SVD. The experimental results show that our method can achieve favorable performance and good stability with low algorithm complexity.
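
    Since OMP is the recovery algorithm named above, a compact sketch of it is included here; the dictionary is a random stand-in rather than the paper's difference-based sparse transform or infrared measurement matrix.

      # Orthogonal matching pursuit on a toy sparse recovery problem.
      import numpy as np

      def omp(A, y, k):
          """Greedily select k atoms of A and least-squares fit y on them."""
          residual, support = y.copy(), []
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
              support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x = np.zeros(A.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(4)
      A = rng.standard_normal((50, 128))
      A /= np.linalg.norm(A, axis=0)                # unit-norm atoms
      x_true = np.zeros(128)
      x_true[[5, 40, 77]] = [1.0, -0.5, 2.0]
      print(np.allclose(omp(A, A @ x_true, 3), x_true, atol=1e-8))  # True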

  16. Neuromagnetic source reconstruction

    SciTech Connect

    Lewis, P.S.; Mosher, J.C.; Leahy, R.M.

    1994-12-31

    In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum norm based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
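
    The minimum-norm estimate discussed above has a closed form; the sketch below shows it for a random stand-in lead field, and the implicit Gaussian prior it corresponds to is exactly what the Bayesian discussion highlights.

      # Regularized minimum-norm inverse for a toy MEG-like problem.
      import numpy as np

      rng = np.random.default_rng(5)
      n_sensors, n_sources = 64, 500
      L = rng.standard_normal((n_sensors, n_sources))  # stand-in lead field
      j_true = np.zeros(n_sources)
      j_true[123] = 1.0                                # one focal source
      b = L @ j_true + 0.01 * rng.standard_normal(n_sensors)

      lam = 1e-2                                       # regularization strength
      j_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)
      print(int(np.argmax(np.abs(j_mne))))             # should peak near index 123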

  17. Multilevel sparse functional principal component analysis.

    PubMed

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominant modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  18. Multilevel sparse functional principal component analysis

    PubMed Central

    Di, Chongzhi; Crainiceanu, Ciprian M.; Jank, Wolfgang S.

    2014-01-01

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominant modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions. PMID:24872597

  19. SAR Image despeckling via sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Zhongmei; Yang, Xiaomei; Zheng, Liang

    2014-11-01

    SAR image despeckling is an active research area in image processing due to its importance in improving the quality of images for object detection and classification. In this paper, a new approach is proposed for removing multiplicative noise in SAR images, based on nonlocal sparse representation through dictionary learning and collaborative filtering. First, the image is divided into many patches, and clusters are formed by grouping log-similar image patches using fuzzy C-means (FCM). For each cluster, an over-complete dictionary is computed using the K-SVD method, which iteratively updates the dictionary and the sparse coefficients. The patches belonging to the same cluster are then reconstructed by a sparse combination of the corresponding dictionary atoms. The reconstructed patches are finally collaboratively aggregated to build the denoised image. The experimental results show that the proposed method achieves much better results than many state-of-the-art algorithms in terms of both objective evaluation indices (PSNR and ENL) and subjective visual perception.

  1. Sparse inpainting and isotropy

    SciTech Connect

    Feeney, Stephen M.; McEwen, Jason D.; Peiris, Hiranya V.; Marinucci, Domenico; Cammarota, Valentina; Wandelt, Benjamin D.

    2014-01-01

    Sparse inpainting techniques are gaining in popularity as a tool for cosmological data analysis, in particular for handling data which present masked regions and missing observations. We investigate here the relationship between sparse inpainting techniques using the spherical harmonic basis as a dictionary and the isotropy properties of cosmological maps, as for instance those arising from cosmic microwave background (CMB) experiments. In particular, we investigate the possibility that inpainted maps may exhibit anisotropies in the behaviour of higher-order angular polyspectra. We provide analytic computations and simulations of inpainted maps for a Gaussian isotropic model of CMB data, suggesting that the resulting angular trispectrum may exhibit small but non-negligible deviations from isotropy.

  2. Multi-element array signal reconstruction with adaptive least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1992-01-01

    Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.
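
    The two algorithms themselves are not reproduced in the abstract, but the flavor of adaptive least-squares combining can be conveyed by a basic complex LMS update toward a known reference signal; the feed count, gains and noise level below are invented and this is not the authors' algorithm.

      # Adaptive combining of multiple feed outputs via complex LMS.
      import numpy as np

      rng = np.random.default_rng(6)
      n_feeds, n_samples = 7, 2000
      s = np.sign(rng.standard_normal(n_samples))          # reference signal
      g = rng.standard_normal(n_feeds) + 1j * rng.standard_normal(n_feeds)
      noise = 0.1 * (rng.standard_normal((n_feeds, n_samples))
                     + 1j * rng.standard_normal((n_feeds, n_samples)))
      X = np.outer(g, s) + noise                           # feed outputs

      w, mu = np.zeros(n_feeds, dtype=complex), 0.01       # weights, step size
      for t in range(n_samples):
          e = s[t] - np.vdot(w, X[:, t])                   # error vs reference
          w += mu * np.conj(e) * X[:, t]                   # stochastic gradient step

      noise_var = 2 * 0.1 ** 2                             # complex noise variance
      snr = np.abs(np.vdot(w, g)) ** 2 / (noise_var * np.linalg.norm(w) ** 2)
      print(round(10 * np.log10(snr), 1), "dB after combining")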

  3. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system.

  4. Sparse distributed memory

    SciTech Connect

    Kanerva, P.

    1988-01-01

    Theoretical models of the human brain and proposed neural-network computers are developed analytically. Chapters are devoted to the mathematical foundations, background material from computer science, the theory of idealized neurons, neurons as address decoders, and the search of memory for the best match. Consideration is given to sparse memory, distributed storage, the storage and retrieval of sequences, the construction of distributed memory, and the organization of an autonomous learning system. 63 refs.

  5. Sparse matrix test collections

    SciTech Connect

    Duff, I.

    1996-12-31

    This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will discuss plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will be only three talks of about 15 to 20 minutes, followed by a discussion from the floor.

  6. A fatal shot with a signal flare--a crime reconstruction.

    PubMed

    Brozek-Mucha, Zuzanna

    2009-05-01

    A reconstruction of an incident of a fatal wounding of a football fan with a parachute flare was performed. Physical and chemical examinations of the victim's trousers and parts of a flare removed from the wound in his leg were performed by means of an optical microscope and a scanning electron microscope coupled with an energy dispersive X-ray spectrometer. Signs of burning were seen on the front upper part of the trousers, including a 35-40 mm circular hole with melted and charred edges. Postblast residue present on the surface of the trousers contained strontium, magnesium, potassium, and chlorine. Also the case files--the medical reports and the witnesses' testimonies--were thoroughly studied. It has been found that the evidence collected in the case supported the version of the victim being shot by another person from a distance. PMID:19432745

  7. Reconstruction of the High-Osmolarity Glycerol (HOG) Signaling Pathway from the Halophilic Fungus Wallemia ichthyophaga in Saccharomyces cerevisiae

    PubMed Central

    Konte, Tilen; Terpitz, Ulrich; Plemenitaš, Ana

    2016-01-01

    The basidiomycetous fungus Wallemia ichthyophaga grows between 1.7 and 5.1 M NaCl and is the most halophilic eukaryote described to date. Like other fungi, W. ichthyophaga detects changes in environmental salinity mainly by the evolutionarily conserved high-osmolarity glycerol (HOG) signaling pathway. In Saccharomyces cerevisiae, the HOG pathway has been extensively studied in connection to osmotic regulation, with a valuable knock-out strain collection established. In the present study, we reconstructed the architecture of the HOG pathway of W. ichthyophaga in suitable S. cerevisiae knock-out strains, through heterologous expression of the W. ichthyophaga HOG pathway proteins. Compared to S. cerevisiae, where the Pbs2 (ScPbs2) kinase of the HOG pathway is activated via the SHO1 and SLN1 branches, the interactions between the W. ichthyophaga Pbs2 (WiPbs2) kinase and the W. ichthyophaga SHO1 branch orthologs are not conserved: as well as evidence of poor interactions between the WiSho1 Src-homology 3 (SH3) domain and the WiPbs2 proline-rich motif, the absence of a considerable part of the osmosensing apparatus in the genome of W. ichthyophaga suggests that the SHO1 branch components are not involved in HOG signaling in this halophilic fungus. In contrast, the conserved activation of WiPbs2 by the S. cerevisiae ScSsk2/ScSsk22 kinase and the sensitivity of W. ichthyophaga cells to fludioxonil, emphasize the significance of two-component (SLN1-like) signaling via Group III histidine kinase. Combined with protein modeling data, our study reveals conserved and non-conserved protein interactions in the HOG signaling pathway of W. ichthyophaga and therefore significantly improves the knowledge of hyperosmotic signal processing in this halophilic fungus. PMID:27379041

  8. Vector sparse representation of color image using quaternion matrix analysis.

    PubMed

    Xu, Yi; Yu, Licheng; Xu, Hongteng; Zhang, Hao; Nguyen, Truong

    2015-04-01

    Traditional sparse image models treat color image pixel as a scalar, which represents color channels separately or concatenate color channels as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, where a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) (generalized K-means clustering for QSVD) method. It conducts the sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. Significantly, in this new color space the inherent color structures can be completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient compared with current sparse models for image restoration tasks due to lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model avoids the hue bias issue successfully and shows its potential as a general and powerful tool in color image analysis and processing domain. PMID:25643407

  9. Classification of normal and epileptic seizure EEG signals using wavelet transform, phase-space reconstruction, and Euclidean distance.

    PubMed

    Lee, Sang-Hong; Lim, Joon S; Kim, Jae-Kwon; Yang, Junggi; Lee, Youngho

    2014-08-01

    This paper proposes new combined methods to classify normal and epileptic seizure EEG signals using wavelet transform (WT), phase-space reconstruction (PSR), and Euclidean distance (ED) based on a neural network with weighted fuzzy membership functions (NEWFM). WT, PSR, ED, and statistical methods that include frequency distributions and variation were implemented to extract 24 initial features to use as inputs. Of the 24 initial features, the 4 minimum features with the highest accuracy were selected using a non-overlap area distribution measurement method supported by the NEWFM. These 4 minimum features were used as inputs for the NEWFM, resulting in a sensitivity, specificity, and accuracy of 96.33%, 100%, and 98.17%, respectively. In addition, the area under the receiver operating characteristic (ROC) curve was used to measure the performance of the NEWFM both without and with feature selection.

  10. k-t Group sparse: a method for accelerating dynamic MRI.

    PubMed

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G

    2011-10-01

    Compressed sensing (CS) is a data-reduction technique that has been applied to speed up the acquisition in MRI. However, the use of this technique in dynamic MR applications has been limited in terms of the maximum achievable reduction factor. In general, noise-like artefacts and bad temporal fidelity are visible in standard CS MRI reconstructions when high reduction factors are used. To increase the maximum achievable reduction factor, additional or prior information can be incorporated in the CS reconstruction. Here, a novel CS reconstruction method is proposed that exploits the structure within the sparse representation of a signal by enforcing the support components to be in the form of groups. These groups act like a constraint in the reconstruction. The information about the support region can be easily obtained from training data in dynamic MRI acquisitions. The proposed approach was tested in two-dimensional cardiac cine MRI with both downsampled and undersampled data. Results show that higher acceleration factors (up to 9-fold), with improved spatial and temporal quality, can be obtained with the proposed approach in comparison to the standard CS reconstructions. PMID:21394781

  11. Reconstruction of cellular signal transduction networks using perturbation assays and linear programming.

    PubMed

    Knapp, Bettina; Kaderali, Lars

    2013-01-01

    Perturbation experiments, for example using RNA interference (RNAi), offer an attractive way to elucidate gene function in a high-throughput fashion. The placement of hit genes in their functional context and the inference of underlying networks from such data, however, are challenging tasks. One of the problems in network inference is the exponential number of possible network topologies for a given number of genes. Here, we introduce a novel mathematical approach to address this question. We formulate network inference as a linear optimization problem, which can be solved efficiently even for large-scale systems. We use simulated data to evaluate our approach, and show improved performance, in particular on larger networks, over state-of-the-art methods. We achieve increased sensitivity and specificity, as well as a significant reduction in computing time. Furthermore, we show superior performance on noisy data. We then apply our approach to study the intracellular signaling of human primary naïve CD4(+) T-cells, as well as ErbB signaling in trastuzumab-resistant breast cancer cells. In both cases, our approach recovers known interactions and points to additional relevant processes. In ErbB signaling, our results predict an important role of negative and positive feedback in controlling cell cycle progression.
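
    A toy instance in the spirit of this formulation: recover a sparse interaction matrix from perturbation-response pairs by minimizing the L1 norm of the weights subject to linear consistency, via the standard split w = u - v with u, v >= 0. The steady-state linear model, sizes and data are assumptions, not the authors' exact objective.

      # L1-minimizing network inference as a linear program (scipy's linprog).
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(7)
      n = 8                                          # genes
      W_true = np.zeros((n, n))
      W_true[1, 0], W_true[2, 1], W_true[5, 2] = 1.5, -2.0, 1.0

      X = rng.standard_normal((n, 6))                # applied perturbations
      Y = W_true @ X                                 # observed responses

      W_hat = np.zeros_like(W_true)
      for i in range(n):                             # one LP per target gene
          c = np.ones(2 * n)                         # minimize sum(u) + sum(v)
          A_eq = np.hstack([X.T, -X.T])              # X^T (u - v) = y_i
          res = linprog(c, A_eq=A_eq, b_eq=Y[i], bounds=(0, None))
          W_hat[i] = res.x[:n] - res.x[n:]

      print(np.round(W_hat[np.nonzero(W_true)], 3))  # ~ [1.5, -2.0, 1.0]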

  12. Imaging method for downward-looking sparse linear array three-dimensional synthetic aperture radar based on reweighted atomic norm

    NASA Astrophysics Data System (ADS)

    Bao, Qian; Han, Kuoye; Lin, Yun; Zhang, Bingchen; Liu, Jianguo; Hong, Wen

    2016-01-01

    We propose an imaging algorithm for downward-looking sparse linear array three-dimensional synthetic aperture radar (DLSLA 3-D SAR) in the circumstance of a cross-track sparse and nonuniform array configuration. Considering the off-grid effect and the resolution improvement, the algorithm combines a pseudo-polar formatting algorithm, reweighted atomic norm minimization (RANM), and a parametric relaxation-based cyclic approach (RELAX) to improve the imaging performance with a reduced number of array antennas. RANM is employed in the cross-track imaging after pseudo-polar formatting of the DLSLA 3-D SAR echo signal; the reconstructed results are then refined by RELAX. By taking advantage of the reweighted scheme, RANM improves the resolution of atomic norm minimization and outperforms discretized compressive sensing schemes that suffer from the off-grid effect. Simulated and real data experiments on DLSLA 3-D SAR verify the performance of the proposed algorithm.

  13. Compressive sensing of sparse tensors.

    PubMed

    Friedland, Shmuel; Li, Qun; Schonfeld, Dan

    2014-10-01

    Compressive sensing (CS) has triggered an enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher order tensors. Application of CS to higher order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS)-a unified framework for CS of higher order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than that offered by KCS.

  14. Sparse representation for color image restoration.

    PubMed

    Mairal, Julien; Elad, Michael; Sapiro, Guillermo

    2008-01-01

    Sparse representations of signals have drawn considerable interest in recent years. The assumption that natural signals, such as images, admit a sparse decomposition over a redundant dictionary leads to efficient algorithms for handling such sources of data. In particular, the design of well adapted dictionaries for images has been a major challenge. The K-SVD has been recently proposed for this task and shown to perform very well for various grayscale image processing tasks. In this paper, we address the problem of learning dictionaries for color images and extend the previously proposed K-SVD-based grayscale image denoising algorithm. This work puts forward ways of handling nonhomogeneous noise and missing information, paving the way to state-of-the-art results in applications such as color image denoising, demosaicing, and inpainting, as demonstrated in this paper. PMID:18229804

  15. Input reconstruction for networked control systems subject to deception attacks and data losses on control signals

    NASA Astrophysics Data System (ADS)

    Keller, J. Y.; Chabir, K.; Sauter, D.

    2016-03-01

    State estimation of stochastic discrete-time linear systems subject to unknown inputs or constant biases has been widely studied but no work has been dedicated to the case where a disturbance switches between unknown input and constant bias. We show that such disturbance can affect a networked control system subject to deception attacks and data losses on the control signals transmitted by the controller to the plant. This paper proposes to estimate the switching disturbance from an augmented state version of the intermittent unknown input Kalman filter recently developed by the authors. Sufficient stochastic stability conditions are established when the arrival binary sequence of data losses follows a Bernoulli random process.

  16. TASMANIAN Sparse Grids Module

    SciTech Connect

    Stoyanov, Miroslav; Munster, Drayton

    2013-09-20

    Sparse Grids are the family of methods of choice for multidimensional integration and interpolation in low to moderate numbers of dimensions. The method extends a one-dimensional set of abscissas, weights and basis functions to multiple dimensions by taking a subset of all possible tensor products. The module provides the ability to create global and local approximations based on polynomials and wavelets. The software has three components: a library, a wrapper for the library that provides a command line interface via text files, and a MATLAB interface via the command line tool.

  17. Sparse Image Format

    SciTech Connect

    Eads, Damian Ryan

    2007-04-12

    The Sparse Image Format (SIF) is a file format for storing sparse raster images. It works by breaking an image down into tiles. Space is saved by only storing non-uniform tiles, i.e. tiles with at least two different pixel values. If a tile is completely uniform, its common pixel value is stored instead of the complete tile raster. The software is a library in the C language used for manipulating files in SIF format. It supports large files (> 2GB) and is designed to build in Windows and Linux environments.
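
    The storage rule is simple enough to sketch; the dict-of-tiles version below is a Python stand-in for the C library, with an arbitrary tile size.

      # Tile-based sparse raster storage in the spirit of the SIF description.
      import numpy as np

      TILE = 8

      def sif_encode(image):
          """Keep full rasters only for non-uniform tiles."""
          tiles = {}
          for r in range(0, image.shape[0], TILE):
              for c in range(0, image.shape[1], TILE):
                  t = image[r:r + TILE, c:c + TILE]
                  tiles[(r, c)] = t.copy() if t.min() != t.max() else t.flat[0]
          return image.shape, tiles

      def sif_decode(shape, tiles):
          out = np.empty(shape, dtype=int)
          for (r, c), t in tiles.items():
              out[r:r + TILE, c:c + TILE] = t     # scalar or raster both broadcast
          return out

      img = np.zeros((64, 64), dtype=int)
      img[10:14, 20:30] = 7                       # one small non-uniform region
      shape, tiles = sif_encode(img)
      full = sum(isinstance(t, np.ndarray) for t in tiles.values())
      print(full, "of", len(tiles), "tiles stored as rasters")   # 2 of 64
      print(np.array_equal(sif_decode(shape, tiles), img))       # True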

  18. Typical reconstruction performance for distributed compressed sensing based on ℓ2,1-norm regularized least square and Bayesian optimal reconstruction: influences of noise

    NASA Astrophysics Data System (ADS)

    Shiraki, Yoshifumi; Kabashima, Yoshiyuki

    2016-06-01

    A signal model called joint sparse model 2 (JSM-2) or the multiple measurement vector problem, in which all sparse signals share their support, is important for dealing with practical signal processing problems. In this paper, we investigate the typical reconstruction performance of noisy measurement JSM-2 problems for ℓ2,1-norm regularized least square reconstruction and the Bayesian optimal reconstruction scheme in terms of mean square error. Employing the replica method, we show that these schemes, which exploit the knowledge of the sharing of the signal support, can recover the signals more precisely as the number of channels increases. In addition, we compare the reconstruction performance of two different ensembles of observation matrices: one is composed of independent and identically distributed random Gaussian entries and the other is designed so that row vectors are orthogonal to one another. As reported for the single-channel case in earlier studies, our analysis indicates that the latter ensemble offers better performance than the former ones for the noisy JSM-2 problem. The results of numerical experiments with a computationally feasible approximation algorithm we developed for this study agree with the theoretical estimation.
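
    The row-coupled penalty at the heart of this scheme has a simple proximal step, group soft thresholding across channels, sketched below with made-up sizes; rows whose joint energy falls below the threshold are zeroed in all channels at once, which is how the shared support is exploited.

      # Proximal operator of the l2,1 norm: row-wise group soft thresholding.
      import numpy as np

      def prox_l21(X, tau):
          """Shrink each row of X toward zero by tau in Euclidean norm."""
          norms = np.linalg.norm(X, axis=1, keepdims=True)
          scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
          return X * scale

      rng = np.random.default_rng(8)
      X = rng.standard_normal((10, 4))          # 10 coefficients, 4 channels
      X[5:] *= 0.1                              # weak rows below the threshold
      Y = prox_l21(X, tau=0.5)
      print(np.linalg.norm(Y, axis=1).round(2)) # weak rows are zeroed jointly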

  1. Time-frequency scale decomposition of tectonic tremor signals for space-time reconstruction of tectonic tremor sources

    NASA Astrophysics Data System (ADS)

    Poiata, N.; Satriano, C.; Vilotte, J. P.; Bernard, P.; Obara, K.

    2015-12-01

    Seismic radiation associated with transient deformations along faults and subduction interfaces encompasses a variety of events, i.e., tectonic tremors, low-frequency earthquakes (LFEs), very low-frequency earthquakes (VLFs), and slow-slip events (SSEs), with a wide range of seismic moments and characteristic durations. Characterizing in space and time the complex sources of these slow earthquakes, and their relationship with background seismicity and large earthquake generation, is of great importance for understanding the physics and mechanics of the processes of active deformation along plate interfaces. We present here first developments towards a methodology for: (1) extracting the different frequency and scale components of the observed tectonic tremor signal, using advanced time-frequency and time-scale signal representations such as a Gabor transform scheme based on, e.g., Wilson bases or Modified Discrete Cosine Transform (MDCT) bases; (2) reconstructing their corresponding potential sources in space and time, using the array method of Poiata et al. (2015). The methodology is assessed using a dataset of tectonic tremor episodes from Shikoku, Japan, recorded by the Hi-net seismic network operated by NIED. We illustrate its performance and potential in providing activity maps - associated with different scale-components of tectonic tremors - that can be analyzed statistically to improve our understanding of tremor sources and scaling, as well as their relation with the background seismicity.

  2. Generation of dense statistical connectomes from sparse morphological data

    PubMed Central

    Egger, Robert; Dercksen, Vincent J.; Udvary, Daniel; Hege, Hans-Christian; Oberlaender, Marcel

    2014-01-01

    Sensory-evoked signal flow, at cellular and network levels, is primarily determined by the synaptic wiring of the underlying neuronal circuitry. Measurements of synaptic innervation, connection probabilities and subcellular organization of synaptic inputs are thus among the most active fields of research in contemporary neuroscience. Methods to measure these quantities range from electrophysiological recordings over reconstructions of dendrite-axon overlap at light-microscopic levels to dense circuit reconstructions of small volumes at electron-microscopic resolution. However, quantitative and complete measurements at subcellular resolution and mesoscopic scales to obtain all local and long-range synaptic in/outputs for any neuron within an entire brain region are beyond present methodological limits. Here, we present a novel concept, implemented within an interactive software environment called NeuroNet, which allows (i) integration of sparsely sampled (sub)cellular morphological data into an accurate anatomical reference frame of the brain region(s) of interest, (ii) up-scaling to generate an average dense model of the neuronal circuitry within the respective brain region(s) and (iii) statistical measurements of synaptic innervation between all neurons within the model. We illustrate our approach by generating a dense average model of the entire rat vibrissal cortex, providing the required anatomical data, and illustrate how to measure synaptic innervation statistically. Comparing our results with data from paired recordings in vitro and in vivo, as well as with reconstructions of synaptic contact sites at light- and electron-microscopic levels, we find that our in silico measurements are in line with previous results. PMID:25426033

  3. Group-based sparse representation for image restoration.

    PubMed

    Zhang, Jian; Zhao, Debin; Gao, Wen

    2014-08-01

    Traditional patch-based sparse representation modeling of natural images usually suffer from two problems. First, it has to solve a large-scale optimization problem with high computational complexity in dictionary learning. Second, each patch is considered independently in dictionary learning and sparse coding, which ignores the relationship among patches, resulting in inaccurate sparse coding coefficients. In this paper, instead of using patch as the basic unit of sparse representation, we exploit the concept of group as the basic unit of sparse representation, which is composed of nonlocal patches with similar structures, and establish a novel sparse representation modeling of natural images, called group-based sparse representation (GSR). The proposed GSR is able to sparsely represent natural images in the domain of group, which enforces the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework. In addition, an effective self-adaptive dictionary learning method for each group with low complexity is designed, rather than dictionary learning from natural images. To make GSR tractable and robust, a split Bregman-based technique is developed to solve the proposed GSR-driven ℓ0 minimization problem for image restoration efficiently. Extensive experiments on image inpainting, image deblurring and image compressive sensing recovery manifest that the proposed GSR modeling outperforms many current state-of-the-art schemes in both peak signal-to-noise ratio and visual perception.

  4. Modified sparse regularization for electrical impedance tomography.

    PubMed

    Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi

    2016-03-01

    Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction of EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time consumption for solving the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on separable approximation algorithm is proposed by using adaptive step-size and preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts. PMID:27036798
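
    One way to make the separable-approximation idea concrete is iterative shrinkage with a backtracking (adaptive) step size, sketched below for a generic linearized inverse problem; the sensitivity matrix is a random stand-in, not an EIT Jacobian, and the constants are arbitrary.

      # L1-regularized linearized inversion by ISTA with backtracking.
      import numpy as np

      def soft(z, t):
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def sparse_solve(J, v, lam=1e-3, n_iter=100):
          x, step = np.zeros(J.shape[1]), 1.0
          for _ in range(n_iter):
              grad = J.T @ (J @ x - v)
              while True:                      # shrink step until majorization holds
                  x_new = soft(x - step * grad, step * lam)
                  d = x_new - x
                  if (0.5 * np.linalg.norm(J @ x_new - v) ** 2
                          <= 0.5 * np.linalg.norm(J @ x - v) ** 2
                          + grad @ d + d @ d / (2 * step) + 1e-12):
                      break
                  step *= 0.5
              x = x_new
          return x

      rng = np.random.default_rng(9)
      J = rng.standard_normal((40, 120)) / np.sqrt(40)  # stand-in sensitivity matrix
      dx_true = np.zeros(120)
      dx_true[[3, 30, 90]] = [0.8, -0.6, 0.5]
      v = J @ dx_true + 0.005 * rng.standard_normal(40)
      top3 = np.argsort(np.abs(sparse_solve(J, v)))[-3:]
      print(sorted(int(i) for i in top3))               # ~ [3, 30, 90]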

  5. A non-iterative method for electrical impedance tomography based on joint sparse recovery

    NASA Astrophysics Data System (ADS)

    Lee, Ok Kyun; Kang, Hyeonbae; Ye, Jong Chul; Lim, Mikyoung

    2015-07-01

    The purpose of this paper is to propose a non-iterative method for the inverse conductivity problem of recovering multiple small anomalies from the boundary measurements. When small anomalies are buried in a conducting object, the electric potential values inside the object can be expressed by integrals of densities with a common sparse support on the location of anomalies. Based on this integral expression, we formulate the reconstruction problem of small anomalies as a joint sparse recovery and present an efficient non-iterative recovery algorithm of small anomalies. Furthermore, we also provide a slightly modified algorithm to reconstruct an extended anomaly. We validate the effectiveness of the proposed algorithm over the linearized method and the multiple signal classification algorithm by numerical simulations.

  6. Digital DC-Reconstruction of AC-Coupled Electrophysiological Signals with a Single Inverting Filter

    PubMed Central

    Schmid, Ramun; Leber, Remo; Schmid, Hans-Jakob; Generali, Gianluca

    2016-01-01

    Since the introduction of digital electrocardiographs, high-pass filters have been necessary for successful analog-to-digital conversion with a reasonable amplitude resolution. On the other hand, such high-pass filters may distort the diagnostically significant ST-segment of the ECG, which can result in a misleading diagnosis. We present an inverting filter that successfully undoes the effects of a 0.05 Hz single pole high-pass filter. The inverting filter has been tested on more than 1600 clinical ECGs with one-minute durations and produces a negligible mean RMS-error of 3.1*10−8 LSB. Alternative, less strong inverting filters have also been tested, as have different applications of the filters with respect to rounding of the signals after filtering. A design scheme for the alternative inverting filters has been suggested, based on the maximum strength of the filter. With the use of the suggested filters, it is possible to recover the original DC-coupled ECGs from AC-coupled ECGs, at least when a 0.05 Hz first order digital single pole high-pass filter is used for the AC-coupling. PMID:26938769

  7. Digital DC-Reconstruction of AC-Coupled Electrophysiological Signals with a Single Inverting Filter.

    PubMed

    Abächerli, Roger; Isaksen, Jonas; Schmid, Ramun; Leber, Remo; Schmid, Hans-Jakob; Generali, Gianluca

    2016-01-01

    Since the introduction of digital electrocardiographs, high-pass filters have been necessary for successful analog-to-digital conversion with a reasonable amplitude resolution. On the other hand, such high-pass filters may distort the diagnostically significant ST-segment of the ECG, which can result in a misleading diagnosis. We present an inverting filter that successfully undoes the effects of a 0.05 Hz single pole high-pass filter. The inverting filter has been tested on more than 1600 clinical ECGs with one-minute durations and produces a negligible mean RMS-error of 3.1*10(-8) LSB. Alternative, less strong inverting filters have also been tested, as have different applications of the filters with respect to rounding of the signals after filtering. A design scheme for the alternative inverting filters has been suggested, based on the maximum strength of the filter. With the use of the suggested filters, it is possible to recover the original DC-coupled ECGs from AC-coupled ECGs, at least when a 0.05 Hz first order digital single pole high-pass filter is used for the AC-coupling. PMID:26938769
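
    The inverting filter has a simple rational form: if the AC coupling is modelled as a first-order digital single-pole 0.05 Hz high-pass, cascading its exact inverse restores the DC-coupled signal. The pole formula and the 500 Hz sampling rate below are assumptions for illustration; only the 0.05 Hz first-order filter itself comes from the text above.

      # Undoing a single-pole high-pass by cascading its exact inverse.
      import numpy as np
      from scipy.signal import lfilter

      fs = 500.0                                  # assumed ECG sampling rate (Hz)
      a = np.exp(-2 * np.pi * 0.05 / fs)          # pole of the 0.05 Hz high-pass

      # High-pass (AC coupling):  H(z)     = a(1 - z^-1) / (1 - a z^-1)
      # Inverting filter:         H_inv(z) = (1 - a z^-1) / (a(1 - z^-1))
      t = np.arange(int(10 * fs)) / fs
      ecg_dc = np.sin(2 * np.pi * 1.2 * t) + 0.3  # toy signal with a DC offset

      ac = lfilter([a, -a], [1.0, -a], ecg_dc)    # what the device would store
      restored = lfilter([1.0, -a], [a, -a], ac)  # apply the inverse filter

      print(np.max(np.abs(restored - ecg_dc)))    # ~ machine precision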

  9. Blind source separation by sparse decomposition

    NASA Astrophysics Data System (ADS)

    Zibulevsky, Michael; Pearlmutter, Barak A.

    2000-04-01

    The blind source separation problem is to extract the underlying source signals from a set of their linear mixtures, where the mixing matrix is unknown. This situation is common, e.g., in acoustics, radio, and medical signal processing. We exploit the property of the sources to have a sparse representation in a corresponding signal dictionary. Such a dictionary may consist of wavelets, wavelet packets, etc., or be obtained by learning from a given family of signals. Starting from the maximum a posteriori framework, which is applicable to the case of more sources than mixtures, we derive a few other categories of objective functions, which provide faster and more robust computations when there are an equal number of sources and mixtures. Our experiments with artificial signals and with musical sounds demonstrate significantly better separation than other known techniques.

  10. Fast Sparse Level Sets on Graphics Hardware.

    PubMed

    Jalba, Andrei C; van der Laan, Wladimir J; Roerdink, Jos B T M

    2013-01-01

    The level-set method is one of the most popular techniques for capturing and tracking deformable interfaces. Although level sets have demonstrated great potential in visualization and computer graphics applications, such as surface editing and physically based modeling, their use for interactive simulations has been limited due to the high computational demands involved. In this paper, we address this computational challenge by leveraging the increased computing power of graphics processors, to achieve fast simulations based on level sets. Our efficient, sparse GPU level-set method is substantially faster than other state-of-the-art, parallel approaches on both CPU and GPU hardware. We further investigate its performance through a method for surface reconstruction, based on GPU level sets. Our novel multiresolution method for surface reconstruction from unorganized point clouds compares favorably with recent, existing techniques and other parallel implementations. Finally, we point out that both level-set computations and rendering of level-set surfaces can be performed at interactive rates, even on large volumetric grids. Therefore, many applications based on level sets can benefit from our sparse level-set method.

  11. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking

    PubMed Central

    Yang, Honghong; Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, the trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, the sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coded method which takes the spatial neighborhood information of the image patch and the computation burden into consideration is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well with superior tracking accuracy and robustness. PMID:27630710

  12. Robust visual multitask tracking via composite sparse model

    NASA Astrophysics Data System (ADS)

    Jin, Bo; Jing, Zhongliang; Wang, Meng; Pan, Han

    2014-11-01

    Recently, multitask learning was applied to visual tracking by learning sparse particle representations in a joint task, which led to the so-called multitask tracking algorithm (MTT). Although MTT shows impressive tracking performances by mining the interdependencies between particles, the individual feature of each particle is underestimated. The utilized L1,q norm regularization assumes all features are shared between all particles and results in nearly identical representation coefficients in nonsparse rows. We propose a composite sparse multitask tracking algorithm (CSMTT). We develop a composite sparse model to formulate the object appearance as a combination of the shared feature component, the individual feature component, and the outlier component. The composite sparsity is achieved via the L1,∞ and L1,1 norm minimization, and is optimized by the alternating direction method of multipliers, which provides a favorable reconstruction performance and an impressive computational efficiency. Moreover, a dynamical dictionary updating scheme is proposed to capture appearance changes. CSMTT is tested on real-world video sequences under various challenges, and experimental results show that the composite sparse model achieves noticeably lower reconstruction errors and higher computational speeds than traditional sparse models, and CSMTT has consistently better tracking performances against seven state-of-the-art trackers.

  13. Online Hierarchical Sparse Representation of Multifeature for Robust Object Tracking.

    PubMed

    Yang, Honghong; Qu, Shiru

    2016-01-01

    Object tracking based on sparse representation has given promising tracking results in recent years. However, the trackers under the framework of sparse representation always overemphasize the sparse representation and ignore the correlation of visual information. In addition, the sparse coding methods only encode the local region independently and ignore the spatial neighborhood information of the image. In this paper, we propose a robust tracking algorithm. Firstly, multiple complementary features are used to describe the object appearance; the appearance model of the tracked target is modeled by instantaneous and stable appearance features simultaneously. A two-stage sparse-coded method which takes the spatial neighborhood information of the image patch and the computation burden into consideration is used to compute the reconstructed object appearance. Then, the reliability of each tracker is measured by the tracking likelihood function of transient and reconstructed appearance models. Finally, the most reliable tracker is obtained by a well-established particle filter framework; the training set and the template library are incrementally updated based on the current tracking results. Experimental results on different challenging video sequences show that the proposed algorithm performs well with superior tracking accuracy and robustness. PMID:27630710

  14. Sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Sparse distributed memory was proposed by Pentti Kanerva as a realizable architecture that could store large patterns and retrieve them based on partial matches with patterns representing current sensory inputs. This memory exhibits behaviors, both in theory and in experiment, that resemble those previously unapproached by machines - e.g., rapid recognition of faces or odors, discovery of new connections between seemingly unrelated ideas, continuation of a sequence of events when given a cue from the middle, knowing that one doesn't know, or getting stuck with an answer on the tip of one's tongue. These behaviors are now within reach of machines that can be incorporated into the computing systems of robots capable of seeing, talking, and manipulating. Kanerva's theory is a break with the Western rationalistic tradition, allowing a new interpretation of learning and cognition that respects biology and the mysteries of individual human beings.
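
    For concreteness, a toy NumPy sketch of a Kanerva-style sparse distributed memory follows; the address width, number of hard locations, Hamming radius, and counter scheme are illustrative choices, not Kanerva's exact parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    N, M, R = 256, 2000, 111                         # address bits, hard locations, radius (toy)
    hard_addr = rng.integers(0, 2, size=(M, N))      # fixed random hard-location addresses
    counters = np.zeros((M, N), dtype=int)           # content counters

    def _active(addr):
        # Hard locations whose address lies within Hamming radius R of `addr`
        return np.count_nonzero(hard_addr != addr, axis=1) <= R

    def write(addr, data):
        sel = _active(addr)
        counters[sel] += np.where(data == 1, 1, -1)  # increment for 1s, decrement for 0s

    def read(addr):
        sel = _active(addr)
        return (counters[sel].sum(axis=0) > 0).astype(int)  # majority vote over active rows

    pattern = rng.integers(0, 2, size=N)
    write(pattern, pattern)                          # autoassociative store

    noisy = pattern.copy()
    flip = rng.choice(N, size=20, replace=False)     # corrupt 20 of 256 bits (a partial match)
    noisy[flip] ^= 1
    print("bits recovered correctly:", int((read(noisy) == pattern).sum()), "/", N)
    ```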

  15. Sparse Hashing Tracking.

    PubMed

    Zhang, Lihe; Lu, Huchuan; Du, Dandan; Liu, Luning

    2016-02-01

    In this paper, we propose a novel tracking framework based on a sparse and discriminative hashing method. Different from the previous work, we treat object tracking as an approximate nearest neighbor searching process in a binary space. Using the hash functions, the target templates and the candidates can be projected into the Hamming space, facilitating the distance calculation and tracking efficiency. First, we integrate both the inter-class and intra-class information to train multiple hash functions for better classification, while most classifiers in previous tracking methods usually neglect the inter-class correlation, which may cause inaccuracy. Then, we introduce sparsity into the hash coefficient vectors for dynamic feature selection, which is crucial to select the discriminative and stable features to adapt to visual variations during the tracking process. Extensive experiments on various challenging sequences show that the proposed algorithm performs favorably against the state-of-the-art methods.

  16. Reconstructing the Fastest Chemical and Electrical Signalling Responses to Microgravity Stress in Plants

    NASA Astrophysics Data System (ADS)

    Mugnai, Sergio; Pandolfi, Camilla; Masi, Elisa; Azzarello, Elisa; Voigt, Boris; Baluska, Frantisek; Volkmann, Dieter; Mancuso, Stefano

    Plants are particularly suited to study the response of a living organism to gravity as they are extremely sensitive to its changes. Gravity perception is a well-studied phenomenon, but the chain of events related to signal transduction and transmission still suffers from a lack of information. Preliminary results obtained in previous parabolic flight campaigns (PFCs) by our Lab show that microgravity (<0.05 g), but not hypergravity (1.8 g), repeatedly induced immediate (less than 1.5 s) oxygen bursts when maize roots experienced loss of gravity forces. Interestingly, these changes were located exclusively in the apex, but not in the mature zone of the root. Ground experiments have also revealed the onset of strong and rapid electrical responses in maize root apices subjected to stress, which led to the hypothesis of an intrinsic capacity of the root apex to generate functional networks. Experiments during the 49th and 51st ESA PFCs were aimed 1) to find out if the different consumption of oxygen at root level recorded in the previous PFCs can lead to subsequent local emissions of ROS in living root apices; 2) to study the spatio-temporal pattern of the neuronal network generated by roots under gravity-changing conditions; 3) to evaluate the onset of synchronization events during gravity-changing conditions. Concerning oxygen bursts, results indicate that they probably implicate a strong generation of ROS (such as nitric oxide) matching exactly the microgravity events, suggesting that the sensing mechanism is not only related to a general mechanical stress (i.e. tensegrity model, present also during hypergravity), but can be specific for the microgravity event. To further investigate this hypothesis we studied the distributed/synchronized electrical activity of cells by the use of a Multi-Electrode Array (MEA). The main results obtained are: root transition zone (TZ) showed a higher spike rate activity compared to the mature zone (MZ). Also, microgravity appeared to

  17. Genetic reconstruction of dopamine D1 receptor signaling in the nucleus accumbens facilitates natural and drug reward responses.

    PubMed

    Gore, Bryan B; Zweifel, Larry S

    2013-05-15

    The dopamine D1 receptor (D1R) facilitates reward acquisition and its alteration leads to profound learning deficits. However, its minimal functional circuit requirement is unknown. Using conditional reconstruction of functional D1R signaling in D1R knock-out mice, we define distinct requirements of D1R in subregions of the nucleus accumbens (NAc) for specific dimensions of reward. We demonstrate that D1R expression in the core region of the NAc (NAc(Core)), but not the shell (NAc(Shell)), enhances selectively a unique form of pavlovian conditioned approach and mediates D1R-dependent cocaine sensitization. However, D1R expression in either the NAc(Core) or the NAc(Shell) improves instrumental responding for reward. In contrast, neither NAc(Core) nor NAc(Shell) D1R is sufficient to promote motivation to work for reward in a progressive ratio task or for motor learning. These results highlight dissociated circuit requirements of D1R for dopamine-dependent behaviors. PMID:23678109

  18. Reconstructing direct and indirect interactions in networked public goods game

    NASA Astrophysics Data System (ADS)

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-07-01

    Network reconstruction is a fundamental problem for understanding many complex systems with unknown interaction structures. In many complex systems, there are indirect interactions between two individuals without immediate connection but with common neighbors. Despite recent advances in network reconstruction, we continue to lack an approach for reconstructing complex networks with indirect interactions. Here we introduce a two-step strategy to resolve the reconstruction problem, where in the first step, we recover both direct and indirect interactions by employing the Lasso to solve a sparse signal reconstruction problem, and in the second step, we use matrix transformation and optimization to distinguish between direct and indirect interactions. The network structure corresponding to direct interactions can be fully uncovered. We exploit the public goods game occurring on complex networks as a paradigm for characterizing indirect interactions and test our reconstruction approach. We find that high reconstruction accuracy can be achieved for both homogeneous and heterogeneous networks, and a number of empirical networks in spite of insufficient data measurement contaminated by noise. Although a general framework for reconstructing complex networks with arbitrary types of indirect interactions is yet lacking, our approach opens new routes to separate direct and indirect interactions in a representative complex system.
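
    Step one of the two-step strategy reduces to one sparse regression per node. The sketch below (a toy linear data model standing in for the game-derived equations; all sizes illustrative) recovers a sparse interaction matrix row by row with scikit-learn's Lasso.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)

    n_nodes, n_obs = 30, 60
    A_true = (rng.random((n_nodes, n_nodes)) < 0.1).astype(float)   # sparse links
    np.fill_diagonal(A_true, 0.0)

    # Toy linear data model: each node's response mixes the others' states
    S = rng.standard_normal((n_obs, n_nodes))
    Y = S @ A_true.T + 0.01 * rng.standard_normal((n_obs, n_nodes))

    A_hat = np.zeros_like(A_true)
    for i in range(n_nodes):                  # one sparse regression per node
        others = [j for j in range(n_nodes) if j != i]
        fit = Lasso(alpha=0.02, max_iter=10000).fit(S[:, others], Y[:, i])
        A_hat[i, others] = fit.coef_

    mismatch = (np.abs(A_hat) > 0.05) != (A_true > 0)
    print("misidentified links:", int(mismatch.sum()))
    ```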

  1. Reconstructing direct and indirect interactions in networked public goods game.

    PubMed

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-07-22

    Network reconstruction is a fundamental problem for understanding many complex systems with unknown interaction structures. In many complex systems, there are indirect interactions between two individuals without immediate connection but with common neighbors. Despite recent advances in network reconstruction, we continue to lack an approach for reconstructing complex networks with indirect interactions. Here we introduce a two-step strategy to resolve the reconstruction problem, where in the first step, we recover both direct and indirect interactions by employing the Lasso to solve a sparse signal reconstruction problem, and in the second step, we use matrix transformation and optimization to distinguish between direct and indirect interactions. The network structure corresponding to direct interactions can be fully uncovered. We exploit the public goods game occurring on complex networks as a paradigm for characterizing indirect interactions and test our reconstruction approach. We find that high reconstruction accuracy can be achieved for both homogeneous and heterogeneous networks, and a number of empirical networks in spite of insufficient data measurement contaminated by noise. Although a general framework for reconstructing complex networks with arbitrary types of indirect interactions is yet lacking, our approach opens new routes to separate direct and indirect interactions in a representative complex system.

  2. Reconstructing direct and indirect interactions in networked public goods game

    PubMed Central

    Han, Xiao; Shen, Zhesi; Wang, Wen-Xu; Lai, Ying-Cheng; Grebogi, Celso

    2016-01-01

    Network reconstruction is a fundamental problem for understanding many complex systems with unknown interaction structures. In many complex systems, there are indirect interactions between two individuals without immediate connection but with common neighbors. Despite recent advances in network reconstruction, we continue to lack an approach for reconstructing complex networks with indirect interactions. Here we introduce a two-step strategy to resolve the reconstruction problem, where in the first step, we recover both direct and indirect interactions by employing the Lasso to solve a sparse signal reconstruction problem, and in the second step, we use matrix transformation and optimization to distinguish between direct and indirect interactions. The network structure corresponding to direct interactions can be fully uncovered. We exploit the public goods game occurring on complex networks as a paradigm for characterizing indirect interactions and test our reconstruction approach. We find that high reconstruction accuracy can be achieved for both homogeneous and heterogeneous networks, and a number of empirical networks in spite of insufficient data measurement contaminated by noise. Although a general framework for reconstructing complex networks with arbitrary types of indirect interactions is yet lacking, our approach opens new routes to separate direct and indirect interactions in a representative complex system. PMID:27444774

  3. 3D Super-Resolution Approach for Sparse Laser Scanner Data

    NASA Astrophysics Data System (ADS)

    Hosseinyalamdary, S.; Yilmaz, A.

    2015-08-01

    Laser scanner point clouds have been emerging in photogrammetry and computer vision to achieve high-level tasks such as object tracking, object recognition, and scene understanding. However, low-cost laser scanners are noisy, sparse, and prone to systematic errors. This paper proposes a novel 3D super-resolution approach to reconstruct the surface of objects in the scene. The method works on sparse, unorganized point clouds and outperforms other surface recovery approaches. Since the proposed approach uses an anisotropic diffusion equation, it does not degrade object boundaries and it preserves the topology of the object.
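
    As a hedged illustration of the edge-preserving diffusion idea, the following Perona-Malik sketch runs on a 2D grid rather than the authors' 3D point clouds; the conductance function and parameters are illustrative.

    ```python
    import numpy as np

    def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
        """Edge-preserving smoothing: diffusion shuts off across strong gradients."""
        g = lambda d: np.exp(-(d / kappa) ** 2)   # Perona-Malik conductance
        u = img.astype(float).copy()
        for _ in range(n_iter):
            dn = np.roll(u, -1, axis=0) - u       # differences to the 4 neighbours
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u

    rng = np.random.default_rng(3)
    step = np.zeros((64, 64)); step[:, 32:] = 1.0          # sharp boundary
    noisy = step + 0.1 * rng.standard_normal(step.shape)
    smooth = perona_malik(noisy)
    print("edge contrast kept:", smooth[:, 40:].mean() - smooth[:, :24].mean())
    ```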

  4. Sub-Nyquist signal-reconstruction-free operational modal analysis and damage detection in the presence of noise

    NASA Astrophysics Data System (ADS)

    Gkoktsi, Kyriaki; Giaralis, Agathoklis; TauSiesakul, Bamrung

    2016-04-01

    Motivated by a need to reduce energy consumption in wireless sensors for vibration-based structural health monitoring (SHM) associated with data acquisition and transmission, this paper puts forth a novel approach for undertaking operational modal analysis (OMA) and damage localization relying on compressed vibration measurements sampled at rates well below the Nyquist rate. Specifically, non-uniform deterministic sub-Nyquist multi-coset sampling of response acceleration signals in white-noise excited linear structures is considered in conjunction with a power spectrum blind sampling/estimation technique which retrieves/samples the power spectral density matrix from arrays of sensors directly from the sub-Nyquist measurements (i.e., in the compressed domain) without signal reconstruction in the time domain and without posing any signal sparsity conditions. The frequency domain decomposition algorithm is then applied to the power spectral density matrix to extract natural frequencies and mode shapes as a standard OMA step. Further, the modal strain energy index (MSEI) is considered for damage localization based on the mode shapes extracted directly from the compressed measurements. The effectiveness and accuracy of the proposed approach is numerically assessed by considering simulated vibration data pertaining to a white-noise excited simply supported beam in healthy and in 3 damaged states, contaminated with Gaussian white noise. Good accuracy is achieved in estimating mode shapes (quantified in terms of the modal assurance criterion) and natural frequencies from an array of 15 multi-coset devices sampling at a rate 70% below the Nyquist rate, for SNRs as low as 10 dB. Damage localization of equal level/quality is also achieved by the MSEI applied to mode shapes derived from noisy sub-Nyquist (70% compression) and Nyquist measurements for all damaged states considered. Overall, the furnished numerical results demonstrate that the herein considered sub
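
    The multi-coset acquisition itself is easy to sketch: keep a fixed subset of slots in every block of the Nyquist grid. The pattern below (illustrative, not the authors' design) retains 3 of every 10 slots, i.e., 70% fewer samples than Nyquist.

    ```python
    import numpy as np

    L, cosets = 10, (0, 3, 7)     # keep 3 of every 10 Nyquist slots
    n = 1000                      # Nyquist-grid length

    keep = np.zeros(n, dtype=bool)
    for c in cosets:
        keep[c::L] = True         # deterministic, periodic sampling pattern

    fs = 100.0                    # assumed Nyquist-rate sampling frequency
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.sin(2 * np.pi * 7.5 * t)

    t_sub, x_sub = t[keep], x[keep]   # what a multi-coset device would record
    print(f"compression: {1 - keep.mean():.0%} fewer samples than Nyquist")
    ```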

  5. Sparse deformable models with application to cardiac motion analysis.

    PubMed

    Yu, Yang; Zhang, Shaoting; Huang, Junzhou; Metaxas, Dimitris; Axel, Leon

    2013-01-01

    Deformable models have been widely used with success in medical image analysis. They combine bottom-up information derived from image appearance cues, with top-down shape-based constraints within a physics-based formulation. However, in many real-world problems the observations extracted from the image data often contain gross errors, which adversely affect the deformation accuracy. To alleviate this issue, we introduce a new family of deformable models that are inspired by compressed sensing, a technique for efficiently reconstructing a signal based on its sparseness in some domain. In this problem, we employ sparsity to represent the outliers or gross errors, and combine it seamlessly with deformable models. The proposed new formulation is applied to the analysis of cardiac motion, using tagged magnetic resonance imaging (tMRI), where the automated tagging line tracking results are very noisy due to the poor image quality. Our new deformable models track the heart motion robustly, and the resulting strains are consistent with those calculated from manual labels. PMID:24683970

  6. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
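
    The paper's estimator comes from the statistics literature, but the payoff of exploiting precision-matrix sparsity can be illustrated with a related, off-the-shelf tool (a different estimator than the paper's): scikit-learn's GraphicalLasso, which penalizes the l1 norm of the precision matrix. All numbers below are illustrative.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(4)

    # Sparse ground-truth precision matrix (tridiagonal here)
    p = 20
    Theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    Sigma = np.linalg.inv(Theta)

    # Draw N_sim samples and estimate the precision with an l1 penalty
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=400)
    model = GraphicalLasso(alpha=0.05).fit(X)

    err_sample = np.linalg.norm(np.linalg.inv(np.cov(X.T)) - Theta)
    err_glasso = np.linalg.norm(model.precision_ - Theta)
    print(f"sample-precision error {err_sample:.2f} vs graphical-lasso error {err_glasso:.2f}")
    ```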

  7. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    SciTech Connect

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Keall, Paul J.; Kuncic, Zdenka

    2014-04-15

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. In addition, SNR

  8. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    PubMed

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks to build deep networks. Sparse feature learning models are popular models that can learn useful representations. However, most of those models need a user-defined constant to control the sparsity of representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, reconstruction error and the sparsity of hidden units, simultaneously, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.

  9. Super-sparsely view-sampled cone-beam CT by incorporating prior data.

    PubMed

    Abbas, Sajid; Min, Jonghwan; Cho, Seungryong

    2013-01-01

    Computed tomography (CT) is widely used in medicine for diagnostics or for image-guided therapies, and is also popular in industrial applications for nondestructive testing. CT conventionally requires a large number of projections to produce volumetric images of a scanned object, because the conventional image reconstruction algorithm is based on filtered-backprojection. This requirement may result in relatively high radiation dose to the patients in medical CT unless the radiation dose at each view angle is reduced, and can cause expensive scanning time and efforts in industrial CT applications. Sparse-view CT may provide a viable option to address both issues, including high radiation dose and expensive scanning efforts. However, image reconstruction from sparsely sampled data in CT is in general very challenging, and much effort has been made to develop algorithms for such an image reconstruction problem. An image total-variation minimization algorithm inspired by compressive sensing theory has recently been developed, which exploits the sparseness of the image derivative magnitude and can reconstruct images from sparse-view data to a quality similar to that of images conventionally reconstructed from many views. In successive CT scans, a prior CT image of an object and its projection data may be readily available, and the current CT image may differ little from the prior image. Considering the sparseness of such a difference image between the successive scans, image reconstruction of the difference image may be achieved from very sparsely sampled data. In this work, we showed that one can further reduce the number of projections, resulting in a super-sparse scan, for a good quality image reconstruction with the aid of prior data. Both numerical and experimental results are provided.
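
    A toy sketch of the prior-image idea, with a random matrix standing in for the CT projection operator: only the sparse difference from the prior needs to be reconstructed, here by plain iterative soft thresholding rather than the total-variation minimization used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n, m = 400, 60                       # unknowns vs very few "projection" samples
    A = rng.standard_normal((m, n)) / np.sqrt(m)

    x_prior = rng.standard_normal(n)     # prior scan (assumed known)
    d_true = np.zeros(n)
    d_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # sparse change
    y = A @ (x_prior + d_true)           # super-sparse current-scan data

    # ISTA on the difference image: min_d 0.5*||A d - (y - A x_prior)||^2 + lam*||d||_1
    r = y - A @ x_prior
    d, step, lam = np.zeros(n), 1.0 / np.linalg.norm(A, 2) ** 2, 1e-3
    for _ in range(500):
        z = d + step * A.T @ (r - A @ d)
        d = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold

    print("difference-image error:", np.linalg.norm(d - d_true))
    ```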

  10. Spectrotemporal CT data acquisition and reconstruction at low dose

    PubMed Central

    Clark, Darin P.; Lee, Chang-Lung; Kirsch, David G.; Badea, Cristian T.

    2015-01-01

    Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction
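
    The low-rank plus sparse splitting at the heart of this framework can be illustrated with a generic robust-PCA-style decomposition (inexact augmented Lagrangian with singular-value and soft thresholding); this is far simpler than the authors' 5D algorithm, and all parameters are stock heuristics.

    ```python
    import numpy as np

    def lowrank_plus_sparse(M, lam=None, mu=None, n_iter=200):
        """Split M into L (low rank) + S (sparse) by an inexact ALM iteration."""
        m, n = M.shape
        lam = lam or 1.0 / np.sqrt(max(m, n))
        mu = mu or 0.25 * m * n / np.abs(M).sum()
        L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
        for _ in range(n_iter):
            # singular-value thresholding -> low-rank update
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
            # soft thresholding -> sparse update
            R = M - L + Y / mu
            S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
            Y += mu * (M - L - S)   # dual ascent on the constraint M = L + S
        return L, S

    rng = np.random.default_rng(6)
    L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 50))   # rank-2 "spectral" part
    S0 = np.zeros((60, 50)); S0[rng.random(S0.shape) < 0.05] = 5.0     # sparse "temporal" part
    L_hat, S_hat = lowrank_plus_sparse(L0 + S0)
    print("low-rank recovery error:", np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
    ```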

  11. Spectrotemporal CT data acquisition and reconstruction at low dose

    SciTech Connect

    Clark, Darin P.; Badea, Cristian T.; Lee, Chang-Lung; Kirsch, David G.

    2015-11-15

    Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction

  12. Sparse Matrix for ECG Identification with Two-Lead Features

    PubMed Central

    Tseng, Kuo-Kun; Luo, Jiao; Wang, Wenmin; Haiting, Dong

    2015-01-01

    Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process it. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods. PMID:25961074

  13. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Hernandez, Svea

    2012-10-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  14. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Wolfe, Michael

    2011-10-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  15. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Wolfe, Michael

    2010-09-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  16. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Hernandez, Svea

    2013-10-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  17. CCD Sparse Field CTE Internal

    NASA Astrophysics Data System (ADS)

    Wolfe, Michael

    2009-07-01

    CTE measurements are made using the "internal sparse field test", along the parallel axis. The "POS=" optional parameter, introduced during cycle 11, is used to provide off-center MSM positioning of some slits. All exposures are internals.

  18. Threaded Operations on Sparse Matrices

    SciTech Connect

    Sneed, Brett

    2015-09-01

    We investigate the use of sparse matrices and OpenMP multi-threading on linear algebra operations involving them. Several sparse matrix data structures are presented. Implementation of the multi-threading primarily occurs in the level one and two BLAS functions used within the four algorithms investigated: the Power Method, Conjugate Gradient, Biconjugate Gradient, and Jacobi's Method. The benefits of launching threads once per high-level algorithm are explored.
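
    The report's kernels are C/OpenMP; as a language-consistent stand-in, the sketch below shows the same structure in SciPy: a CSR matrix driving a power method whose inner steps are exactly the level-1 (dot, scale) and level-2 (sparse mat-vec) operations where threading would be applied.

    ```python
    import numpy as np
    import scipy.sparse as sp

    rng = np.random.default_rng(7)

    # Random sparse matrix in CSR format, symmetrized for well-behaved iteration
    A = sp.random(500, 500, density=0.01, format="csr", random_state=7)
    A = A + A.T

    def power_method(A, n_iter=200, tol=1e-10):
        x = rng.standard_normal(A.shape[0])
        x /= np.linalg.norm(x)                 # level-1 kernel: scale
        lam = 0.0
        for _ in range(n_iter):
            y = A @ x                          # level-2 kernel: sparse mat-vec
            lam_new = x @ y                    # level-1 kernel: dot product
            x = y / np.linalg.norm(y)
            if abs(lam_new - lam) < tol:
                break
            lam = lam_new
        return lam, x

    lam, _ = power_method(A)
    print("dominant eigenvalue estimate:", lam)
    ```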

  19. Feasibility study of sparse-angular sampling and sinogram interpolation in material decomposition with a photon-counting detector

    NASA Astrophysics Data System (ADS)

    Kim, Dohyeon; Jo, Byungdu; Park, Su-Jin; Kim, Hyemi; Kim, Hee-Joung

    2016-03-01

    Spectral computed tomography (SCT) is a promising technique for obtaining enhanced images with contrast agents and distinguishing different materials. We focused on developing an analytic reconstruction algorithm for the material decomposition technique with lower radiation exposure and shorter acquisition time. Sparse-angular sampling can reduce patient dose and scanning time for obtaining the reconstruction images. In this study, the sinogram interpolation method was used to improve the quality of material-decomposed images in sparse angular sampling. A prototype spectral CT system with a 64-pixel CZT-based photon counting detector was used. The source-to-detector distance and the source-to-center-of-rotation distance were 1200 and 1015 mm, respectively. The x-ray spectrum at 90 kVp with a tube current of 110 μA was used. Two energy bins (23-33 keV and 34-44 keV) were set to obtain the two images for decomposed iodine and calcification. We used a PMMA phantom; its height and radius were 50 mm and 17.5 mm, respectively. The phantom contained 4 materials: iodine, gadolinium, calcification, and liquid-state lipid. We evaluated the signal-to-noise ratio (SNR) of the materials to examine the significance of the sinogram interpolation method. The decomposed iodine and calcification images were obtained by a projection-based subtraction method using two energy bins with 36 projection data. The SNR in the decomposed images was improved by using the sinogram interpolation method, indicating that the signal of the decomposed material was increased and the noise was reduced. In conclusion, the sinogram interpolation method can be used for material decomposition with sparse-angular sampling.
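
    Sinogram interpolation of the kind described reduces to resampling along the angle axis. A hedged sketch follows (linear interpolation with periodic wrap; illustrative geometry, not necessarily the authors' exact scheme).

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    n_views, n_dets = 36, 64                 # sparse-angular acquisition (10 deg steps)
    sino = rng.random((n_views, n_dets))     # stand-in for a measured sinogram

    # Interpolate to 180 views along the angular axis (periodic in 360 deg)
    theta_in = np.arange(n_views) * (360.0 / n_views)
    theta_out = np.arange(180) * 2.0
    sino_dense = np.empty((180, n_dets))
    for d in range(n_dets):
        sino_dense[:, d] = np.interp(theta_out, theta_in, sino[:, d], period=360.0)

    print("interpolated sinogram shape:", sino_dense.shape)
    ```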

  1. Reconstructing WIMP properties through an interplay of signal measurements in direct detection, Fermi-LAT, and CTA searches for dark matter

    NASA Astrophysics Data System (ADS)

    Roszkowski, Leszek; Sessolo, Enrico Maria; Trojanowski, Sebastian; Williams, Andrew J.

    2016-08-01

    We examine the projected ability to reconstruct the mass, scattering, and annihilation cross section of dark matter in the new generation of large underground detectors, XENON-1T, SuperCDMS, and DarkSide-G2, in combination with diffuse gamma radiation from an expected 15 years of data from Fermi-LAT observation of 46 local spiral dwarf galaxies and projected CTA sensitivity to a signal from the Galactic Center. To this end we consider several benchmark points spanning a wide range of WIMP mass, different annihilation final states, and large enough event rates to warrant detection in one or more experiments. As previously shown, below some 100 GeV only direct detection experiments will in principle be able to reconstruct WIMP mass well. This may, in case a signal at Fermi-LAT is also detected, additionally help restrict σv and the allowed decay branching rates. In the intermediate range between some 100 GeV and a few hundred GeV, direct and indirect detection experiments can be used in complementarity to ameliorate the respective determinations, which in individual experiments can at best be rather poor, thus making the WIMP reconstruction in this mass range very challenging. At large WIMP mass, ~1 TeV, CTA will have the ability to reconstruct mass, annihilation cross section, and the allowed decay branching rates to very good precision for the τ+τ- or purely leptonic final state, good for the W+W- case, and rather poor for bb̄. A substantial improvement can potentially be achieved by reducing the systematic uncertainties, increasing exposure, or by an additional measurement at Fermi-LAT that would help reconstruct the annihilation cross section and the allowed branching fractions to different final states.

  2. Algebraic reconstruction combined with the signal space separation method for the inverse magnetoencephalography problem with a dipole-quadrupole source

    NASA Astrophysics Data System (ADS)

    Nara, T.; Koiwa, K.; Takagi, S.; Oyama, D.; Uehara, G.

    2014-05-01

    This paper presents an algebraic reconstruction method for dipole-quadrupole sources using magnetoencephalography data. Compared to conventional methods based on the equivalent-current-dipole source model, our method can more accurately reconstruct two close, oppositely directed sources. Numerical simulations show that two sources on both sides of the longitudinal fissure of the cerebrum are stably estimated. The method is verified using a quadrupolar source phantom, which is composed of two isosceles-triangle coils with parallel bases.

  3. Method and apparatus for distinguishing actual sparse events from sparse event false alarms

    DOEpatents

    Spalding, Richard E.; Grotbeck, Carter L.

    2000-01-01

    Remote sensing method and apparatus wherein sparse optical events are distinguished from false events. "Ghost" images of actual optical phenomena are generated using an optical beam splitter and optics configured to direct split beams to a single sensor or segmented sensor. True optical signals are distinguished from false signals or noise based on whether the ghost image is present or absent. The invention obviates the need for dual-sensor systems to effect a false-target detection capability, thus significantly reducing system complexity and cost.

  4. Bacterial community reconstruction using compressed sensing.

    PubMed

    Amir, Amnon; Zuk, Or

    2011-11-01

    Bacteria are the unseen majority on our planet, with millions of species and comprising most of the living protoplasm. We propose a novel approach for reconstruction of the composition of an unknown mixture of bacteria using a single Sanger-sequencing reaction of the mixture. Our method is based on compressive sensing theory, which deals with reconstruction of a sparse signal using a small number of measurements. Utilizing the fact that in many cases each bacterial community comprises a small subset of all known bacterial species, we show the feasibility of this approach for determining the composition of a bacterial mixture. Using simulations, we show that sequencing a few hundred base-pairs of the 16S rRNA gene sequence may provide enough information for reconstruction of mixtures containing tens of species, out of tens of thousands, even in the presence of realistic measurement noise. Finally, we show initial promising results when applying our method for the reconstruction of a toy experimental mixture with five species. Our approach may offer a simple and efficient way of identifying bacterial species compositions in biological samples. All supplementary data and the MATLAB code are available at www.broadinstitute.org/~orzuk/publications/BCS/.
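
    A toy version of the reconstruction step: the mixture signal is modeled as a sparse combination of per-species signatures and recovered greedily with orthogonal matching pursuit (random signatures stand in for 16S chromatogram responses; the authors' solver may differ).

    ```python
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(9)

    n_positions, n_species = 300, 5000            # sequenced positions vs database size
    D = rng.standard_normal((n_positions, n_species))   # stand-in species signatures

    x_true = np.zeros(n_species)                  # community: 8 species out of 5000
    present = rng.choice(n_species, 8, replace=False)
    x_true[present] = rng.random(8) + 0.2         # relative abundances

    y = D @ x_true + 0.01 * rng.standard_normal(n_positions)   # mixed "trace"

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8).fit(D, y)
    found = np.flatnonzero(omp.coef_)
    print("species recovered:", len(set(found) & set(present)), "of 8")
    ```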

  5. Indian hedgehog signaling and the role of graft tension in tendon-to-bone healing: Evaluation in a rat ACL reconstruction model.

    PubMed

    Carbone, Andrew; Carballo, Camila; Ma, Richard; Wang, Hongsheng; Deng, Xianghua; Dahia, Chitra; Rodeo, Scott

    2016-04-01

    The structure and composition of the native enthesis is not recapitulated following tendon-to-bone repair. Indian Hedgehog (IHH) signaling has recently been shown to be important in enthesis development in a mouse model, but no studies have evaluated IHH signaling in a healing model. Fourteen adult male rats underwent ACL reconstruction using a flexor tendon graft. Rats were assigned to two groups based on whether they received 0 N or 10 N of graft pre-tension. Specimens were evaluated at 3 and 6 weeks post-operatively using immunohistochemistry for three different protein markers of IHH signaling. Quantitative analysis of staining area and intensity using custom software demonstrated that IHH signaling was active in interface tissue formed at the healing tendon-bone interface. We also found increased staining area and intensity of IHH signaling proteins at 3 weeks in animals that received a pre-tensioned tendon graft. No significant differences were seen between the 3-week and 6-week time points. Our data suggest that the IHH signaling pathway is active during the tendon-bone healing process and appears to be mechanosensitive, as pre-tensioning of the graft at the time of surgery resulted in increased IHH signaling at three weeks. PMID:26447744

  6. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
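
    The log-Euclidean Gaussian kernel between SPD covariance descriptors is straightforward to compute via an eigendecomposition-based matrix logarithm; the sketch below uses illustrative channel counts, epoch lengths, and kernel width.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def cov_descriptor(epoch):
        """SPD covariance descriptor of a (channels x samples) EEG epoch."""
        c = np.cov(epoch)
        return c + 1e-6 * np.eye(c.shape[0])    # small ridge keeps it SPD

    def spd_log(C):
        """Matrix logarithm of an SPD matrix via eigendecomposition."""
        w, V = np.linalg.eigh(C)
        return (V * np.log(w)) @ V.T

    def log_euclidean_gaussian(C1, C2, gamma=0.5):
        """k(C1, C2) = exp(-gamma * ||log C1 - log C2||_F^2)."""
        d = np.linalg.norm(spd_log(C1) - spd_log(C2), "fro")
        return np.exp(-gamma * d ** 2)

    epoch_a = rng.standard_normal((8, 256))     # 8 channels, 256 samples (toy)
    epoch_b = rng.standard_normal((8, 256))
    print("kernel value:", log_euclidean_gaussian(cov_descriptor(epoch_a),
                                                  cov_descriptor(epoch_b)))
    ```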

  7. Estimation of sparse null space functions for compressed sensing in SPECT

    NASA Astrophysics Data System (ADS)

    Mukherjee, Joyeeta Mitra; Sidky, Emil; King, Michael A.

    2014-03-01

    Compressed sensing (CS) [1] is a novel sensing (acquisition) paradigm that applies to discrete-to-discrete system models and asserts exact recovery of a sparse signal from far fewer measurements than the number of unknowns [1, 2]. Successful applications of CS may be found in MRI [3, 4] and optical imaging [5]. Sparse reconstruction methods exploiting CS principles have been investigated for CT [6-8] to reduce radiation dose, and to gain imaging speed and image quality in optical imaging [9]. In this work the objective is to investigate the applicability of compressed sensing principles for a faster brain imaging protocol on a hybrid collimator SPECT system. As a proof of principle, we study the null space of the fan-beam collimator component of our system with regard to a particular imaging object. We illustrate the impact of object sparsity on the null space using pixel and Haar wavelet basis functions to represent a piecewise smooth phantom chosen as our object of interest.

  8. High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures

    PubMed Central

    Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando

    2011-01-01

    Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017

  9. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. Harwell-Boeing collections. Nonetheless, a drawback is that this approach requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically has a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is effective for matrices with a smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

  11. Precipitation reconstruction for the northwestern Chinese Altay since 1760 indicates the drought signals of the northern part of inner Asia

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Yuan, Yujiang; Zhang, Tongwen; Shang, Huaming

    2016-03-01

    Based on the significant positive correlations between the regional tree-ring width chronology and local climate data, the total precipitation of the previous July to the current June was reconstructed since AD 1760 for the northwestern Chinese Altay. The reconstruction model accounts for 40.7 % of the actual precipitation variance during the calibration period from 1959 to 2013. Wet conditions prevailed during the periods 1764-1777, 1784-1791, 1795-1805, 1829-1835, 1838-1846, 1850-1862, 1867-1872, 1907-1916, 1926-1931, 1935-1943, 1956-1961, 1968-1973, 1984-1997, and 2002-2006. Dry episodes occurred during 1760-1763, 1778-1783, 1792-1794, 1806-1828, 1836-1837, 1847-1849, 1863-1866, 1873-1906, 1917-1925, 1932-1934, 1944-1955, 1962-1967, 1974-1983, 1998-2001, and 2007-2012. Spectral analysis of the precipitation reconstruction shows the existence of several cycles (15.3, 4.5, 3.1, 2.7, and 2.1 years). The significant correlations with the gridded precipitation dataset reveal that the reconstruction represents the precipitation variation over a large area of the northern part of inner Asia. A comparison with the precipitation reconstruction from the southern Chinese Altay supports the high level of confidence in the precipitation reconstruction for the northwestern Chinese Altay. Precipitation variation of the northwestern Chinese Altay is positively correlated with sea surface temperatures in tropical oceans, suggesting a possible linkage of the precipitation variation of the northwestern Chinese Altay to the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). The synoptic climatology analysis reveals that there is a relationship between anomalous atmospheric circulation and extreme climate events in the northwestern Chinese Altay.

  12. Precipitation reconstruction for the northwestern Chinese Altay since 1760 indicates the drought signals of the northern part of inner Asia.

    PubMed

    Chen, Feng; Yuan, Yujiang; Zhang, Tongwen; Shang, Huaming

    2016-03-01

    Based on the significant positive correlations between the regional tree-ring width chronology and local climate data, the total precipitation of the previous July to the current June was reconstructed since AD 1760 for the northwestern Chinese Altay. The reconstruction model accounts for 40.7 % of the actual precipitation variance during the calibration period from 1959 to 2013. Wet conditions prevailed during the periods 1764-1777, 1784-1791, 1795-1805, 1829-1835, 1838-1846, 1850-1862, 1867-1872, 1907-1916, 1926-1931, 1935-1943, 1956-1961, 1968-1973, 1984-1997, and 2002-2006. Dry episodes occurred during 1760-1763, 1778-1783, 1792-1794, 1806-1828, 1836-1837, 1847-1849, 1863-1866, 1873-1906, 1917-1925, 1932-1934, 1944-1955, 1962-1967, 1974-1983, 1998-2001, and 2007-2012. Spectral analysis of the precipitation reconstruction shows the existence of several cycles (15.3, 4.5, 3.1, 2.7, and 2.1 years). The significant correlations with the gridded precipitation dataset reveal that the reconstruction represents the precipitation variation over a large area of the northern part of inner Asia. A comparison with the precipitation reconstruction from the southern Chinese Altay supports the high level of confidence in the precipitation reconstruction for the northwestern Chinese Altay. Precipitation variation of the northwestern Chinese Altay is positively correlated with sea surface temperatures in tropical oceans, suggesting a possible linkage of the precipitation variation of the northwestern Chinese Altay to the El Niño-Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO). The synoptic climatology analysis reveals that there is a relationship between anomalous atmospheric circulation and extreme climate events in the northwestern Chinese Altay.

  13. A Novel Time-Varying Spectral Filtering Algorithm for Reconstruction of Motion Artifact Corrupted Heart Rate Signals During Intense Physical Activities Using a Wearable Photoplethysmogram Sensor.

    PubMed

    Salehizadeh, Seyed M A; Dao, Duy; Bolkhovsky, Jeffrey; Cho, Chae; Mendelson, Yitzhak; Chon, Ki H

    2015-12-23

    Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem, because strenuous and high-intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called the Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing the time-varying spectra of PPG and accelerometer data, frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Processing Cup Database, recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Processing Cup Database, recorded from 11 subjects while performing forearm and upper arm exercise; and (3) a Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min, respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on the treadmill datasets alone (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be
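
    A minimal sketch of the spectral-filtering idea on synthetic data follows: compute spectrograms of a "PPG" and an "accelerometer" trace, zero the PPG bins dominated by motion frequencies seen in the accelerometer, and read the heart rate off the surviving spectral peak. The signals, sampling rate, and thresholds are invented for illustration; this is not the authors' SpaMA implementation.

      import numpy as np
      from scipy.signal import spectrogram

      fs = 50.0                            # Hz, assumed wearable sampling rate
      t = np.arange(0, 120, 1 / fs)
      hr_hz, motion_hz = 1.8, 2.6          # ~108 bpm heart rate, 2.6 Hz arm swing

      ppg = np.sin(2 * np.pi * hr_hz * t) + 1.5 * np.sin(2 * np.pi * motion_hz * t) \
            + 0.3 * np.random.randn(t.size)
      acc = np.sin(2 * np.pi * motion_hz * t) + 0.3 * np.random.randn(t.size)

      f, _, S_ppg = spectrogram(ppg, fs=fs, nperseg=400, noverlap=300)
      _, _, S_acc = spectrogram(acc, fs=fs, nperseg=400, noverlap=300)

      band = (f > 0.5) & (f < 3.5)         # plausible HR band, 30-210 bpm
      hr_est = []
      for i in range(S_ppg.shape[1]):
          p = S_ppg[:, i].copy()
          # Suppress PPG bins dominated by motion, as seen in the accelerometer
          p[S_acc[:, i] > 0.5 * S_acc[:, i].max()] = 0.0
          p[~band] = 0.0
          hr_est.append(60.0 * f[np.argmax(p)])

      print("median HR estimate: %.1f bpm (true %.1f)" % (np.median(hr_est), 60 * hr_hz))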

  14. A Novel Time-Varying Spectral Filtering Algorithm for Reconstruction of Motion Artifact Corrupted Heart Rate Signals During Intense Physical Activities Using a Wearable Photoplethysmogram Sensor

    PubMed Central

    Salehizadeh, Seyed M. A.; Dao, Duy; Bolkhovsky, Jeffrey; Cho, Chae; Mendelson, Yitzhak; Chon, Ki H.

    2015-01-01

    Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem, because strenuous and high-intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called the Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing the time-varying spectra of PPG and accelerometer data, frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Processing Cup Database, recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Processing Cup Database, recorded from 11 subjects while performing forearm and upper arm exercise; and (3) a Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min, respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on the treadmill datasets alone (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be

  15. A Novel Time-Varying Spectral Filtering Algorithm for Reconstruction of Motion Artifact Corrupted Heart Rate Signals During Intense Physical Activities Using a Wearable Photoplethysmogram Sensor.

    PubMed

    Salehizadeh, Seyed M A; Dao, Duy; Bolkhovsky, Jeffrey; Cho, Chae; Mendelson, Yitzhak; Chon, Ki H

    2015-01-01

    Accurate estimation of heart rates from photoplethysmogram (PPG) signals during intense physical activity is a very challenging problem, because strenuous and high-intensity exercise can result in severe motion artifacts in PPG signals, making accurate heart rate (HR) estimation difficult. In this study we investigated a novel technique to accurately reconstruct motion-corrupted PPG signals and HR based on time-varying spectral analysis. The algorithm is called the Spectral filter algorithm for Motion Artifacts and heart rate reconstruction (SpaMA). The idea is to calculate the power spectral density of both PPG and accelerometer signals for each time shift of a windowed data segment. By comparing the time-varying spectra of PPG and accelerometer data, frequency peaks resulting from motion artifacts can be distinguished from the PPG spectrum. The SpaMA approach was applied to three different datasets and four types of activities: (1) training datasets from the 2015 IEEE Signal Processing Cup Database, recorded from 12 subjects while performing treadmill exercise from 1 km/h to 15 km/h; (2) test datasets from the 2015 IEEE Signal Processing Cup Database, recorded from 11 subjects while performing forearm and upper arm exercise; and (3) a Chon Lab dataset including 10 min recordings from 10 subjects during treadmill exercise. The ECG signals from all three datasets provided the reference HRs used to determine the accuracy of our SpaMA algorithm. The performance of the SpaMA approach was calculated by computing the mean absolute error between the estimated HR from the PPG and the reference HR from the ECG. The average estimation errors using our method on the first, second and third datasets are 0.89, 1.93 and 1.38 beats/min, respectively, while the overall error on all 33 subjects is 1.86 beats/min and the performance on the treadmill datasets alone (22 subjects) is 1.11 beats/min. Moreover, it was found that dynamics of heart rate variability can be

  16. Photoplethysmograph signal reconstruction based on a novel motion artifact detection-reduction approach. Part II: Motion and noise artifact removal.

    PubMed

    Salehizadeh, S M A; Dao, Duy K; Chong, Jo Woon; McManus, David; Darling, Chad; Mendelson, Yitzhak; Chon, Ki H

    2014-11-01

    We introduce a new method to reconstruct motion and noise artifact (MNA) contaminated photoplethysmogram (PPG) data. A method to detect MNA-corrupted data is provided in a companion paper. Our reconstruction algorithm is based on an iterative motion artifact removal (IMAR) approach, which utilizes the singular spectral analysis algorithm to remove MNA artifacts so that the most accurate estimates of uncorrupted heart rates (HRs) and arterial oxygen saturation (SpO2) values recorded by a pulse oximeter can be derived. Using both computer simulations and three different experimental data sets, we show that the proposed IMAR approach can reliably reconstruct MNA-corrupted data segments, as the estimated HR and SpO2 values do not significantly deviate from the uncorrupted reference measurements. The accuracy of reconstruction of the MNA-corrupted data segments is compared between our IMAR approach and time-domain independent component analysis (TD-ICA) for all data sets, as the latter method has been shown to provide good performance. For simulated data, there were no significant differences in the reconstructed HR and SpO2 values from 10 dB down to -15 dB for both white and colored noise contaminated PPG data using IMAR; for TD-ICA, significant differences were observed starting at 10 dB. Two experimental PPG data sets with contrived MNA were created by having subjects perform random forehead movements and rapid side-to-side finger movements; the performance of the IMAR approach on these data sets was quite accurate, as non-significant differences in the reconstructed HR and SpO2 were found compared to non-contaminated reference values in most subjects. In comparison, the accuracy of TD-ICA was poor, as there were significant differences in reconstructed HR and SpO2 values in most subjects. For non-contrived MNA-corrupted PPG data, which were collected with subjects performing walking and stair climbing tasks, the IMAR significantly
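
    The singular spectral analysis step at the heart of IMAR can be sketched as follows: embed the series in a Hankel trajectory matrix, truncate its SVD, and average anti-diagonals back into a series. This is a generic SSA denoiser on synthetic data, not the authors' iterative artifact-removal pipeline; the window and rank are illustrative choices.

      import numpy as np

      def ssa_reconstruct(x, window, rank):
          # Rank-truncated singular spectrum analysis of a 1-D series
          n = len(x)
          k = n - window + 1
          X = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel matrix
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          X_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]
          # Diagonal averaging back to a series
          rec = np.zeros(n)
          cnt = np.zeros(n)
          for j in range(k):
              rec[j:j + window] += X_r[:, j]
              cnt[j:j + window] += 1
          return rec / cnt

      t = np.linspace(0, 10, 500)
      clean = np.sin(2 * np.pi * 1.2 * t)              # pulse-like oscillation
      noisy = clean + 0.8 * np.random.randn(t.size)    # artifact/noise stand-in
      denoised = ssa_reconstruct(noisy, window=60, rank=2)
      print("RMSE before/after: %.3f / %.3f" %
            (np.sqrt(np.mean((noisy - clean) ** 2)),
             np.sqrt(np.mean((denoised - clean) ** 2))))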

  17. Robust Sparse Sensing Using Weather Radar

    NASA Astrophysics Data System (ADS)

    Mishra, K. V.; Kruger, A.; Krajewski, W. F.; Xu, W.

    2014-12-01

    The ability of a weather radar to detect weak echoes is limited by the presence of noise or unwanted echoes. Some of these unwanted signals originate externally to the radar system, such as cosmic noise, radome reflections, interference from co-located radars, and power transmission lines. The internal source of noise in a microwave radar receiver is mainly thermal. The thermal noise from various microwave devices in the radar receiver tends to lower the signal-to-noise ratio, thereby masking the weaker signals. Recently, the compressed sensing (CS) technique has emerged as a novel signal sampling paradigm that allows perfect reconstruction of signals sampled at frequencies lower than the Nyquist rate. Many radar and remote sensing applications require efficient and rapid data acquisition. The application of CS to weather radars may allow for faster target update rates without compromising the accuracy of target information. In our previous work, we demonstrated recovery of an entire precipitation scene from its compressed-sensed version by using the matrix completion approach. In this study, we characterize the performance of such a CS-based weather radar in the presence of additive noise. We use a signal model where the precipitation signals form a low-rank matrix that is corrupted with (bounded) noise. Using recent advances in algorithms for matrix completion from few noisy observations, we reconstruct the precipitation scene with reasonable accuracy. We test and demonstrate our approach using data collected by the Iowa X-band Polarimetric (XPOL) weather radars.
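
    A minimal sketch of low-rank matrix completion from few noisy observations follows, using the simple soft-impute iteration (alternating data fill-in with singular value soft thresholding). It stands in for the matrix-completion step described above; the scene size, sampling fraction, and threshold are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      n, r = 60, 3
      M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-3 "scene"
      mask = rng.random((n, n)) < 0.35                        # 35% entries observed
      obs = np.where(mask, M + 0.05 * rng.normal(size=M.shape), 0.0)

      # Soft-impute: alternate data fill-in with singular value soft thresholding
      X, tau = np.zeros_like(M), 1.0
      for _ in range(200):
          Y = np.where(mask, obs, X)
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          X = (U * np.maximum(s - tau, 0.0)) @ Vt

      err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
      print("relative error on unobserved entries: %.3f" % err)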

  18. Approximation and compression with sparse orthonormal transforms.

    PubMed

    Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel

    2015-08-01

    We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033

  19. STIS Sparse Field CTE test

    NASA Astrophysics Data System (ADS)

    Goudfrooij, Paul

    1997-07-01

    CTE measurements are made using the "sparse field test", along both the serial and parallel axes. This program needs special commanding to provide {a} off-center MSM positionings of some slits, and {b} the ability to read out with any amplifier {A, B, C, or D}. All exposures are internals.

  20. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering.

    PubMed

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary comprises an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children.
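
    As a rough two-stage analog of SDLC (which solves dictionary learning, sparse coding, and clustering as one optimization problem), the sketch below learns a temporal dictionary, sparse-codes each voxel's time course, and then k-means-clusters the codes. All sizes and the synthetic "fMRI" data are illustrative; this is not the authors' multi-block proximal solver.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(2)
      T, V = 120, 300                      # time points, voxels
      # Three "functional regions", each following its own latent time course
      latent = rng.normal(size=(3, T))
      labels_true = rng.integers(0, 3, size=V)
      X = latent[labels_true] + 0.5 * rng.normal(size=(V, T))   # voxel x time

      # Learn an over-complete temporal dictionary, sparse-code each voxel,
      # then cluster voxels by their sparse codes
      dl = MiniBatchDictionaryLearning(n_components=20, alpha=1.0, random_state=0)
      codes = dl.fit_transform(X)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(codes)
      print("cluster sizes:", np.bincount(labels))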

  1. Cerebellar Functional Parcellation Using Sparse Dictionary Learning Clustering

    PubMed Central

    Wang, Changqing; Kipping, Judy; Bao, Chenglong; Ji, Hui; Qiu, Anqi

    2016-01-01

    The human cerebellum has recently been discovered to contribute to cognition and emotion beyond the planning and execution of movement, suggesting its functional heterogeneity. We aimed to identify the functional parcellation of the cerebellum using information from resting-state functional magnetic resonance imaging (rs-fMRI). For this, we introduced a new data-driven decomposition-based functional parcellation algorithm, called Sparse Dictionary Learning Clustering (SDLC). SDLC integrates dictionary learning, sparse representation of rs-fMRI, and k-means clustering into one optimization problem. The dictionary comprises an over-complete set of time course signals, with which a sparse representation of rs-fMRI signals can be constructed. Cerebellar functional regions were then identified using k-means clustering based on the sparse representation of rs-fMRI signals. We solved SDLC using a multi-block hybrid proximal alternating method that guarantees strong convergence. We evaluated the reliability of SDLC and benchmarked its classification accuracy against other clustering techniques using simulated data. We then demonstrated that SDLC can identify biologically reasonable functional regions of the cerebellum as estimated by their cerebello-cortical functional connectivity. We further provided new insights into the cerebello-cortical functional organization in children. PMID:27199650

  2. A new sparse Bayesian learning method for inverse synthetic aperture radar imaging via exploiting cluster patterns

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Zhang, Lizao; Duan, Huiping; Huang, Lei; Li, Hongbin

    2016-05-01

    The application of sparse representation to SAR/ISAR imaging has attracted much attention over the past few years. This new class of sparse representation based imaging methods presents a number of unique advantages over conventional range-Doppler methods; the basic idea behind these works is to formulate SAR/ISAR imaging as a sparse signal recovery problem. In this paper, we propose a new two-dimensional pattern-coupled sparse Bayesian learning (SBL) method to capture the underlying cluster patterns of ISAR target images. Based on this model, an expectation-maximization (EM) algorithm is developed to infer the maximum a posteriori (MAP) estimate of the hyperparameters, along with the posterior distribution of the sparse signal. Experimental results demonstrate that the proposed method achieves a substantial performance improvement over existing algorithms, including the conventional SBL method.

  3. Reconstruction of images from compressive sensing based on the stagewise fast LASSO

    NASA Astrophysics Data System (ADS)

    Wu, Jiao; Liu, Fang; Jiao, Licheng

    2009-10-01

    Compressive sensing (CS) is a theory asserting that one may achieve a nearly exact signal reconstruction from far fewer samples than conventionally required, provided the signal is sparse or compressible under some basis. The reconstruction can be obtained by solving a convex program, which is equivalent to a LASSO problem in its l1-formulation. In this paper, we propose a stagewise fast LASSO (StF-LASSO) algorithm for image reconstruction from CS measurements. It applies an insensitive Huber loss function to the LASSO objective, and iteratively builds the decision function and updates the parameters through a stagewise fast learning strategy. Simulation studies on the CS reconstruction of natural images and SAR images widely used in practice demonstrate that StF-LASSO achieves good reconstruction performance, in both evaluation indexes and visual quality, with the fastest recovery speed among the algorithms implemented in our simulations in most cases. Theoretical analysis and experiments show that StF-LASSO is a CS reconstruction algorithm of low complexity and good stability.
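
    The underlying LASSO recovery step can be sketched with an off-the-shelf solver as below; StF-LASSO differs by swapping in an insensitive Huber loss and a stagewise learning strategy, which are not reproduced here. Problem sizes and the regularization weight are arbitrary.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)
      n, m, k = 400, 120, 10
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      y = A @ x_true

      # Plain l1 with squared loss; StF-LASSO would use an insensitive Huber loss
      lasso = Lasso(alpha=0.005, max_iter=20000, fit_intercept=False)
      lasso.fit(A, y)
      x_hat = lasso.coef_
      print("relative error:",
            np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))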

  4. Inferring sparse networks for noisy transient processes.

    PubMed

    Tran, Hoang M; Bukkapatnam, Satish T S

    2016-01-01

    Inferring the causal structure of real-world complex networks from measured time series signals remains an open issue. Current approaches are inadequate to discern direct from indirect influences (i.e., the presence or absence of a directed arc connecting two nodes) in the presence of noise, sparse interactions, and the nonlinear and transient dynamics of real-world processes. We report a sparse regression approach (referred to as l1-min) with theoretical bounds on the allowable perturbation that guarantee recovery of the network structure with sparsity and robustness to noise. We also introduce averaging and perturbation procedures to further enhance prediction scores (i.e., reduce inference errors) and the numerical stability of the l1-min approach. Extensive investigations have been conducted with multiple benchmark simulated genetic regulatory networks and Michaelis-Menten dynamics, as well as real-world data sets from the DREAM5 challenge. These investigations suggest that our approach, by optimizing for sparsity, transients, noise and high dimensionality, can significantly improve on previously reported methods for inferring the structure of dynamic networks, such as Bayesian network, network deconvolution, silencing and modular response analysis methods, oftentimes by 5 orders of magnitude. PMID:26916813
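
    A stripped-down version of the sparse-regression idea: simulate noisy transient linear dynamics, then regress finite-difference derivatives on the states with an l1 penalty, one node at a time, and read the arcs off the nonzero coefficients. This is a generic l1 network-inference sketch with invented parameters, not the authors' l1-min formulation with its theoretical perturbation bounds.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(4)
      n = 10
      W = np.where(rng.random((n, n)) < 0.15, rng.normal(size=(n, n)), 0.0)
      np.fill_diagonal(W, -1.0)                       # self-decay for stability

      # Simulate noisy transient dynamics dx/dt = W x + noise (Euler steps)
      dt, T = 0.01, 400
      X = np.zeros((T, n)); X[0] = rng.normal(size=n)
      for t in range(T - 1):
          X[t + 1] = X[t] + dt * (W @ X[t]) + 0.01 * rng.normal(size=n)

      # l1-regularized regression of finite-difference derivatives on states,
      # one node at a time; nonzero coefficients give the inferred arcs
      dX = (X[1:] - X[:-1]) / dt
      W_hat = np.zeros_like(W)
      for i in range(n):
          W_hat[i] = Lasso(alpha=0.05, fit_intercept=False,
                           max_iter=10000).fit(X[:-1], dX[:, i]).coef_
      print("true edges found:",
            np.sum((np.abs(W_hat) > 0.1) & (np.abs(W) > 0.1)),
            "of", np.sum(np.abs(W) > 0.1))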

  5. Inferring sparse networks for noisy transient processes

    NASA Astrophysics Data System (ADS)

    Tran, Hoang M.; Bukkapatnam, Satish T. S.

    2016-02-01

    Inferring the causal structure of real-world complex networks from measured time series signals remains an open issue. Current approaches are inadequate to discern direct from indirect influences (i.e., the presence or absence of a directed arc connecting two nodes) in the presence of noise, sparse interactions, and the nonlinear and transient dynamics of real-world processes. We report a sparse regression approach (referred to as l1-min) with theoretical bounds on the allowable perturbation that guarantee recovery of the network structure with sparsity and robustness to noise. We also introduce averaging and perturbation procedures to further enhance prediction scores (i.e., reduce inference errors) and the numerical stability of the l1-min approach. Extensive investigations have been conducted with multiple benchmark simulated genetic regulatory networks and Michaelis-Menten dynamics, as well as real-world data sets from the DREAM5 challenge. These investigations suggest that our approach, by optimizing for sparsity, transients, noise and high dimensionality, can significantly improve on previously reported methods for inferring the structure of dynamic networks, such as Bayesian network, network deconvolution, silencing and modular response analysis methods, oftentimes by 5 orders of magnitude.

  6. Genetic algorithms for minimal source reconstructions

    SciTech Connect

    Lewis, P.S.; Mosher, J.C.

    1993-12-01

    Under-determined linear inverse problems arise in applications in which signals must be estimated from insufficient data. In these problems the number of potentially active sources is greater than the number of observations. In many situations, it is desirable to find a minimal source solution. This can be accomplished by minimizing a cost function that accounts both for the compatibility of the solution with the observations and for its "sparseness". Minimizing functions of this form can be a difficult optimization problem. Genetic algorithms are a relatively new and robust approach to the solution of difficult optimization problems, providing a global framework that is not dependent on local continuity or on explicit starting values. In this paper, the authors describe the use of genetic algorithms to find minimal source solutions, using as an example a simulation inspired by the reconstruction of neural currents in the human brain from magnetoencephalographic (MEG) measurements.
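
    A minimal genetic-algorithm sketch for minimal-source reconstruction: chromosomes are binary source supports, fitness combines the least-squares misfit on the support with a sparseness penalty, and selection, crossover, and mutation evolve the population. All sizes, rates, and the penalty weight are invented for illustration; this is not the authors' MEG simulation.

      import numpy as np

      rng = np.random.default_rng(5)
      n_src, n_obs, k = 30, 12, 3
      A = rng.normal(size=(n_obs, n_src))             # lead-field / gain matrix
      s_true = np.zeros(n_src)
      s_true[rng.choice(n_src, k, replace=False)] = rng.normal(size=k)
      y = A @ s_true

      def cost(mask):
          # Data misfit on the candidate support plus a sparseness penalty
          idx = np.flatnonzero(mask)
          if idx.size == 0:
              return np.linalg.norm(y) ** 2
          s, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
          return np.linalg.norm(y - A[:, idx] @ s) ** 2 + 0.5 * idx.size

      pop = (rng.random((40, n_src)) < 0.1).astype(int)   # population of supports
      for _ in range(200):
          f = np.array([cost(m) for m in pop])
          pop = pop[np.argsort(f)][:20]                   # keep the fittest half
          cross = pop[rng.integers(0, 20, 20)]            # partners for crossover
          pts = rng.random((20, n_src)) < 0.5             # uniform crossover mask
          children = np.where(pts, pop, cross)
          children ^= (rng.random((20, n_src)) < 0.02).astype(int)  # mutation
          pop = np.vstack([pop, children])

      best = pop[np.argmin([cost(m) for m in pop])]
      print("true support:", np.flatnonzero(s_true), " found:", np.flatnonzero(best))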

  7. Examining the signal-to-noise ratio of oceanic temperature and salinity proxies with a focus on their potential to reconstruct density

    NASA Astrophysics Data System (ADS)

    Heslop, D.; Paul, A.

    2009-12-01

    Proxy reconstructions of temperature and salinity based on Mg/Ca and stable oxygen isotope ratios form the cornerstone of many palaeoceanographic studies. To assess the fidelity of a given proxy record and the extent to which its variability can be interpreted, it is essential to have an appreciation of the data's signal-to-noise ratio (SNR). This information, however, cannot usually be obtained from proxy records because the signal and noise contributions are inseparable. To investigate this problem we employ an Earth System Climate Model (ESCM) to assess the SNR of temperature and salinity proxy records simulated under forcing conditions similar to those of the “8.2 ka event”. A series of freshwater forcing scenarios, based on large meltwater pulses entering the North Atlantic via Hudson Bay, were simulated using the UVic ESCM. The magnitudes of the resulting temperature and salinity anomalies produced at the surface and at intermediate water depths were then compared to accepted proxy uncertainties to yield SNR estimates throughout the Atlantic. Given this information it is then possible to determine, for any given location, how many replicate samples would need to be measured at each depth/time interval in order to achieve a predefined SNR for a specific magnitude of freshwater forcing. Clear spatial patterns are revealed: for large-volume meltwater events, a number of locations in the North Atlantic would require only a small number of replicate measurements per interval in order to achieve a reasonable SNR. In contrast, other locations, such as the South Atlantic, would require many hundreds of replicates to be measured in order to improve the SNR to levels similar to those seen in the North. For smaller meltwater events nearly all locations in the Atlantic have a low SNR and would require large numbers of replicates to be measured in order to produce an interpretable signal. These ideas are extended to the equation of state and the number of replicate
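
    The replicate logic above rests on a standard relation: averaging N replicates of a record with uncorrelated noise improves the SNR by a factor of sqrt(N), so the number of replicates needed to reach a target SNR from a single-sample SNR is N >= (SNR_target / SNR_1)^2. A two-line sketch of this rule (illustrative numbers only, not the paper's ESCM computation):

      import math

      def replicates_needed(snr_single, snr_target):
          # Averaging N replicates boosts SNR by sqrt(N) for uncorrelated noise,
          # so N >= (snr_target / snr_single)**2
          return math.ceil((snr_target / snr_single) ** 2)

      print(replicates_needed(0.5, 2.0))   # weak signal: 16 replicates per interval
      print(replicates_needed(2.0, 2.0))   # strong signal: 1 replicate suffices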

  8. Amesos2 Templated Direct Sparse Solver Package

    2011-05-24

    Amesos2 is a templated direct sparse solver package. Amesos2 provides interfaces to direct sparse solvers, rather than providing native solver capabilities. Amesos2 is a derivative work of the Trilinos package Amesos.

  9. Reconstruction of productivity signal and deep-water conditions in Moroccan Atlantic margin (~35°N) from the last glacial to the Holocene.

    PubMed

    El Frihmat, Yassine; Hebbeln, Dierk; Jaaidi, El Bachir; Mhammdi, Nadia

    2015-01-01

    In order to assess the changes in sea-surface hydrology and the productivity signal from the last glacial to the Holocene, a set of isotopic, geochemical and microgranulometric proxies was used in this study. Former studies revealed that the reconstruction of paleoproductivity from ocean sediments gives different results depending on the measurement used. The comparison between our productivity proxies (total organic carbon, carbonate and planktonic δ(13)C), as well as previous results from a nearby location, indicates that the planktonic δ(13)C responds better to marine productivity changes and therefore represents a suitable proxy for paleoproductivity reconstruction in our study area. The productivity signal reveals two main enrichments, during the Younger Dryas (YD) and Heinrich Event 1 (HE 1), and correlates well with upwelling activity indicated by an increasing trend in aeolian proxies. In addition, our results show that biogenic components in the sediment have a marine origin and that the proportion of organic matter preserved depends on the total sediment accumulation rate. PMID:25853024

  10. PAPERS DEVOTED TO THE 250TH ANNIVERSARY OF THE MOSCOW STATE UNIVERSITY: Monte Carlo simulation of an optical coherence Doppler tomograph signal: the effect of the concentration of particles in a flow on the reconstructed velocity profile

    NASA Astrophysics Data System (ADS)

    Bykov, A. V.; Kirillin, M. Yu; Priezzhev, A. V.

    2005-02-01

    Model signals of an optical coherence Doppler tomograph (OCDT) are obtained by the Monte Carlo method from a flow of a light-scattering suspension of lipid vesicles (intralipid) at concentrations from 0.7% to 1.5% with an a priori specified parabolic velocity profile. The velocity profile parameters reconstructed from the OCDT signal and scattering orders of the photons contributing to the signal are studied as functions of the suspension concentration. It is shown that the maximum of the reconstructed velocity profile at high concentrations shifts with respect to the symmetry axis of the flow and its value decreases due to a greater contribution from multiply scattered photons.

  11. Sparse spike coding : applications of neuroscience to the processing of natural images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.

    2008-04-01

    While modern computers are sometimes superior to human cognition in specialized tasks such as playing chess or browsing a large database, they cannot beat the efficiency of biological vision for such simple tasks as recognizing a relative or following an object in a complex background. We present in this paper our attempt at outlining the dynamical, parallel and event-based representation for vision in the architecture of the central nervous system. We illustrate this by showing that, in a signal matching framework, a L/LN (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex: we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, that is, the reconstruction quality as a function of the coding length. This corresponds to a modification of the Matching Pursuit algorithm where the ArgMax function is optimized for competition, or Competition Optimized Matching Pursuit (COMP). We particularly focus on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.

  12. The bias and signal attenuation present in conventional pollen-based climate reconstructions as assessed by early climate data from Minnesota, USA.

    PubMed

    St Jacques, Jeannine-Marie; Cumming, Brian F; Sauchyn, David J; Smol, John P

    2015-01-01

    The inference of past temperatures from a sedimentary pollen record depends upon the stationarity of the pollen-climate relationship. However, humans have altered vegetation independently of changes to climate, and consequently modern pollen deposition is a product of landscape disturbance and climate, which differs from the dominance of climate-derived processes in the past. This problem could cause serious signal distortion in pollen-based reconstructions. In the north-central United States, direct human impacts have strongly altered the modern vegetation, and hence the pollen rain, since Euro-American settlement in the mid-19th century. Using instrumental temperature data from the early 1800s from Fort Snelling (Minnesota), we assessed the signal distortion and bias introduced by the conventional method of inferring temperature from pollen assemblages, in comparison to a calibration set built from pre-settlement pollen assemblages and the earliest instrumental climate data. The early post-settlement calibration set provides more accurate reconstructions of the 19th century instrumental record, with less bias, than the modern set does. When both modern and pre-industrial calibration sets are used to reconstruct past temperatures since AD 1116 from pollen counts from a varve-dated record from Lake Mina, Minnesota, the conventional inference method produces significant low-frequency (centennial-scale) signal attenuation and a positive bias of 0.8-1.7 °C, resulting in an overestimation of Little Ice Age temperature and likely an underestimation of the extent and rate of anthropogenic warming in this region. However, high-frequency (annual-scale) signal attenuation exists with both methods. Hence, we conclude that any pollen spectra from before Euro-American settlement in this region should be interpreted using a pre-Euro-American settlement pollen set, paired to the earliest instrumental climate records. It remains to be explored how widespread this problem is.

  13. Supervised nonparametric sparse discriminant analysis for hyperspectral imagery classification

    NASA Astrophysics Data System (ADS)

    Wu, Longfei; Sun, Hao; Ji, Kefeng

    2016-03-01

    Owing to the high spectral sampling, the spectral information in hyperspectral imagery (HSI) is often highly correlated and contains redundancy. Motivated by the recent success of sparsity-preserving dimensionality reduction (DR) techniques in both the computer vision and remote sensing image analysis communities, a novel supervised nonparametric sparse discriminant analysis (NSDA) algorithm is presented for HSI classification. The objective function of NSDA aims at preserving the within-class sparse reconstructive relationship, for within-class compactness characterization, while simultaneously maximizing the nonparametric between-class scatter to enhance the discriminative ability of the features in the projected space. Essentially, it seeks the optimal projection matrix to identify the underlying discriminative manifold structure of a multiclass dataset. Experimental results on one visualization dataset and three recorded HSI datasets demonstrate that NSDA outperforms several state-of-the-art feature extraction methods for HSI classification.

  14. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of the detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with a sparsity index that is not too high, our results parallel those in the Gaussian case. In this context, we derive detection boundaries for both the dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio test is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show that it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645

  15. Shape prior modeling using sparse representation and online dictionary learning.

    PubMed

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on-the-fly by approximating a shape instance (usually derived from appearance cues) with a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances contained in the repository, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferred to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time consuming, and sometimes infeasible, to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes arrive, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient. PMID:23286160

  16. Finding communities in sparse networks

    NASA Astrophysics Data System (ADS)

    Singh, Abhinav; Humphries, Mark D.

    2015-03-01

    Spectral algorithms based on matrix representations of networks are often used to detect communities, but classic spectral methods based on the adjacency matrix and its variants fail in sparse networks. New spectral methods based on non-backtracking random walks have recently been introduced that successfully detect communities in many sparse networks. However, the spectrum of non-backtracking random walks ignores hanging trees in networks that can contain information about their community structure. We introduce the reluctant backtracking operators that explicitly account for hanging trees as they admit a small probability of returning to the immediately previous node, unlike the non-backtracking operators that forbid an immediate return. We show that the reluctant backtracking operators can detect communities in certain sparse networks where the non-backtracking operators cannot, while performing comparably on benchmark stochastic block model networks and real world networks. We also show that the spectrum of the reluctant backtracking operator approximately optimises the standard modularity function. Interestingly, for this family of non- and reluctant-backtracking operators the main determinant of performance on real-world networks is whether or not they are normalised to conserve probability at each node.
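
    A sketch of spectral community detection with a non-backtracking operator on a small planted-partition graph follows: build the edge-indexed matrix that forbids immediate returns, take its second leading eigenvector, and aggregate edge entries per node. This illustrates the non-backtracking baseline the authors modify (their reluctant operator instead admits a small return probability); the graph sizes and probabilities are invented.

      import numpy as np

      rng = np.random.default_rng(6)
      n, p_in, p_out = 80, 0.20, 0.02
      g = np.repeat([0, 1], n // 2)                 # planted two-block partition
      P = np.where(g[:, None] == g[None, :], p_in, p_out)
      A = (rng.random((n, n)) < P).astype(int)
      A = np.triu(A, 1); A = A + A.T                # sparse undirected SBM graph

      # Directed edge list and the non-backtracking operator B
      edges = [(i, j) for i in range(n) for j in range(n) if A[i, j]]
      eidx = {e: k for k, e in enumerate(edges)}
      m = len(edges)
      B = np.zeros((m, m))
      for (u, v), a in eidx.items():
          for w in np.flatnonzero(A[v]):
              if w != u:                            # forbid an immediate return
                  B[a, eidx[(v, w)]] = 1.0

      vals, vecs = np.linalg.eig(B)
      order = np.argsort(-vals.real)
      v2 = vecs[:, order[1]].real                   # second leading eigenvector

      score = np.zeros(n)
      for (u, v), a in eidx.items():
          score[v] += v2[a]                         # aggregate edge entries per node
      labels = (score > 0).astype(int)
      acc = max(np.mean(labels == g), np.mean(labels != g))
      print("agreement with planted partition: %.2f" % acc)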

  17. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine-grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed-memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special-purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and used to analyze the algorithms.

  18. Sparsity-constrained PET image reconstruction with learned dictionaries

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in forming the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.

  19. Improved total variation based CT reconstruction algorithm with noise estimation

    NASA Astrophysics Data System (ADS)

    Jin, Xin; Li, Liang; Shen, Le; Chen, Zhiqiang

    2012-10-01

    Nowadays, a popular way to solve Computed Tomography (CT) inverse problems is to consider a constrained minimization problem following Compressed Sensing (CS) theory. The CS theory proves the possibility of sparse signal recovery from under-sampled measurements, which gives a powerful tool for CT problems that have incomplete measurements or contain heavy noise. Among current CS reconstruction methods, one widely accepted framework is to perform a total variation (TV) minimization process and a data fidelity constraint process alternately, in two separate iteration loops. However, because the two processes are carried out independently, a certain misbalance may occur, which leads to either over-smoothed or noisy reconstructions. Moreover, such misbalance is usually difficult to adjust, as it varies with the scanned objects and protocols. In our work we try to strike a good balance between the minimization and constraint processes by estimating the variance of the image noise. First, considering that the noise of the projection data follows a Poisson distribution, the Anscombe transform (AT) and its inverse are utilized to calculate the unbiased variance of the projections. Second, an estimate of the image noise is obtained through a noise transfer model from the projections to the image. Finally, a modified CS reconstruction method is proposed which guarantees the desired variance on the reconstructed image and thus prevents the blocky or over-noisy results caused by misbalanced constrained minimization. Results show the advantage of the method in both image quality and convergence speed.
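
    The variance-stabilization step can be sketched directly: the Anscombe transform A(x) = 2*sqrt(x + 3/8) maps Poisson counts to data with approximately unit variance regardless of the unknown mean, which is what makes an unbiased variance estimate of the projections possible. A quick numerical check (illustrative means only, not the paper's CT pipeline):

      import numpy as np

      rng = np.random.default_rng(7)
      lam = np.array([5.0, 20.0, 100.0, 400.0])    # mean photon counts per bin
      counts = rng.poisson(lam, size=(200000, 4))

      # Anscombe transform A(x) = 2*sqrt(x + 3/8) approximately stabilizes
      # Poisson variance to 1, independent of the (unknown) mean
      stabilized = 2.0 * np.sqrt(counts + 3.0 / 8.0)
      print("raw variances:        ", counts.var(axis=0).round(1))      # ~= lam
      print("stabilized variances: ", stabilized.var(axis=0).round(2))  # ~= 1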

  20. Sparsity-constrained PET image reconstruction with learned dictionaries.

    PubMed

    Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie

    2016-09-01

    PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in forming the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise levels comparable to those of the other MAP algorithms. The dictionary learned from the hollow sphere leads to results similar to those from the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging. PMID:27494441

  1. Sparse Matrices in MATLAB: Design and Implementation

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Moler, Cleve; Schreiber, Robert

    1992-01-01

    The matrix computation language and environment MATLAB is extended to include sparse matrix storage and operations. The only change to the outward appearance of the MATLAB language is a pair of commands to create full or sparse matrices. Nearly all the operations of MATLAB now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
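
    The Python/SciPy ecosystem mirrors this design, which makes the idea easy to demonstrate outside MATLAB: storage is proportional to the number of nonzeros, and mixed sparse/dense operations mostly just work. A small sketch (sizes arbitrary):

      import numpy as np
      from scipy import sparse

      # Storage proportional to nonzeros, as in the MATLAB design described above
      n = 10000
      A = sparse.random(n, n, density=1e-4, format="csr", random_state=0)
      print("stored entries:", A.nnz, "of", n * n)

      # Most operations accept sparse and dense operands interchangeably
      # and return sparse results where appropriate
      x = np.ones(n)
      y = A @ x                 # sparse matrix times dense vector -> dense vector
      B = A + A.T               # sparse plus sparse -> sparse
      print(type(B), B.nnz)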

  2. The least error method for sparse solution reconstruction

    NASA Astrophysics Data System (ADS)

    Bredies, K.; Kaltenbacher, B.; Resmerita, E.

    2016-09-01

    This work deals with a regularization method enforcing solution sparsity of linear ill-posed problems by appropriate discretization in the image space. Namely, we formulate the so-called least error method in an ℓ1 setting and perform the convergence analysis by choosing the discretization level according to an a priori rule, as well as two a posteriori rules, via the discrepancy principle and the monotone error rule, respectively. Depending on the setting, linear or sublinear convergence rates in the ℓ1-norm are obtained under a source condition yielding sparsity of the solution. A part of the study is devoted to analyzing the structure of the approximate solutions and of the involved source elements.

  3. Sparse source configurations for asteroid tomography

    NASA Astrophysics Data System (ADS)

    Pursiainen, S.; Kaasalainen, M.

    2014-04-01

    The objective of our recent research has been to develop non-invasive imaging techniques for future planetary research and mining activities involving a challenging in situ environment and tight payload limits [1]. This presentation will deal in particular with an approach in which the internal relative permittivity ∈r or the refractive index n = √∈r of an asteroid is to be recovered from radio signals transmitted by a sparse set [2] of fixed or movable landers. To address important aspects of mission planning, we have analyzed different signal source configurations to find the minimal number of source positions needed for robust localization of anomalies, such as internal voids. Characteristic of this inverse problem are the large relative changes in signal speed caused by the high permittivity of typical asteroid minerals (e.g. basalt), leading to strong refractions and reflections of the signal. Finding an appropriate problem-specific signaling arrangement is an important pre-mission goal for successful in situ measurements. This presentation will include inversion results obtained with laboratory-recorded travel time data y of the form y_i = ∫_{C_i} n^δ ds + (y_bg)_i + g_i, in which n^δ denotes a perturbation of the refractive index n = n^δ + n_bg; g_i estimates the total noise due to different error sources; (y_bg)_i = ∫_{C_i} n_bg ds is an entry of the noiseless background data y_bg; and C_i is a signal path. Simulated time-evolution data will also be covered, with respect to a potential u satisfying the wave equation ∈r ∂²u/∂t² + σ ∂u/∂t − Δu = f, where σ is a (latent) conductivity distribution and f is a source term. Special interest will be paid to inversion robustness with respect to changes in the prior model and source positioning. Among other things, our analysis suggests that strongly refractive anomalies can be detected with three or four sources independently of their positioning.

  4. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    NASA Astrophysics Data System (ADS)

    Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.

    2006-02-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
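
    A minimal reproduction of the experimental setup with one of the compared algorithms: mix a few synthetic "cardiac" sources into a multichannel recording, unmix with FastICA, and score the match to the ground truth by correlation (a crude stand-in for the SIR measures used in the study). Sources, channel count, and noise level are invented.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(8)
      t = np.linspace(0, 8, 4000)
      S = np.c_[np.sin(2 * np.pi * 1.3 * t),           # "maternal" cardiac stand-in
                np.sign(np.sin(2 * np.pi * 2.2 * t)),  # "fetal" stand-in
                rng.normal(size=t.size)]               # sensor noise source
      A = rng.normal(size=(19, 3))                     # 19-channel mixing
      X = S @ A.T

      ica = FastICA(n_components=3, random_state=0)
      S_hat = ica.fit_transform(X)

      # Match each estimated component to its best-correlated true source
      C = np.abs(np.corrcoef(S.T, S_hat.T))[:3, 3:]
      print("best |correlation| per source:", C.max(axis=1).round(3))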

  5. Sparse-view ultrasound diffraction tomography using compressed sensing with nonuniform FFT.

    PubMed

    Hua, Shaoyan; Ding, Mingyue; Yuchi, Ming

    2014-01-01

    Accurate reconstruction of the object from sparse-view sampling data is an appealing issue for ultrasound diffraction tomography (UDT). In this paper, we present a reconstruction method based on the compressed sensing framework for sparse-view UDT. Due to the piecewise-uniform characteristics of anatomical structures, total variation is introduced into the cost function to find a more faithful sparse representation of the object. The inverse problem of UDT is iteratively solved by conjugate gradient with the nonuniform fast Fourier transform. Simulation results show the effectiveness of the proposed method: the main characteristics of the object can be properly recovered with only 16 views. Compared to interpolation and multiband methods, the proposed method provides higher resolution and lower artifacts with the same number of views. Robustness to noise and the computational complexity are also discussed. PMID:24868241

  6. Modified OMP Algorithm for Exponentially Decaying Signals

    PubMed Central

    Kazimierczuk, Krzysztof; Kasprzak, Paweł

    2015-01-01

    A group of signal reconstruction methods, referred to as compressed sensing (CS), has recently found a variety of applications in numerous branches of science and technology. However, the condition for the applicability of standard CS algorithms (e.g., orthogonal matching pursuit, OMP), i.e., the existence of a strictly sparse representation of a signal, is rarely met. Thus, dedicated algorithms for solving particular problems have to be developed. In this paper, we introduce a modification of OMP motivated by the nuclear magnetic resonance (NMR) application of CS. The algorithm is based on the fact that an NMR spectrum consists of Lorentzian peaks, and it matches a single Lorentzian peak in each of its iterations. Thus, we propose the name Lorentzian peak matching pursuit (LPMP). We also consider a modification of the algorithm that constrains the allowed positions of the Lorentzian peaks' centers. Our results show that the LPMP algorithm outperforms other CS algorithms when applied to exponentially decaying signals. PMID:25609044
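
    The peak-matching idea behind LPMP can be sketched as a greedy pursuit over a dictionary of Lorentzian atoms: at each iteration, pick the atom best correlated with the residual, subtract its contribution, and report its center and width. This toy version works on a two-peak synthetic spectrum with an invented grid of centers and widths; it is not the authors' LPMP code.

      import numpy as np

      def lorentzian(f, f0, gamma):
          return gamma**2 / ((f - f0)**2 + gamma**2)

      f = np.linspace(0, 100, 2000)
      signal = 1.0 * lorentzian(f, 30.0, 1.5) + 0.6 * lorentzian(f, 62.0, 2.5)
      signal = signal + 0.02 * np.random.default_rng(9).normal(size=f.size)

      # Dictionary of unit-norm Lorentzian atoms on a grid of centers and widths
      centers = np.arange(1.0, 100.0, 0.5)
      widths = [1.0, 1.5, 2.0, 2.5, 3.0]
      atoms = np.array([lorentzian(f, c, g) for c in centers for g in widths])
      atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

      # Greedy pursuit: match one Lorentzian per iteration, subtract, repeat
      residual = signal.copy()
      for it in range(2):
          corr = atoms @ residual
          best = np.argmax(np.abs(corr))
          residual = residual - corr[best] * atoms[best]
          c = centers[best // len(widths)]
          g = widths[best % len(widths)]
          print("peak %d: center %.1f, width %.1f" % (it + 1, c, g))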

  7. Multi-frame blind deconvolution using sparse priors

    NASA Astrophysics Data System (ADS)

    Dong, Wende; Feng, Huajun; Xu, Zhihai; Li, Qi

    2012-05-01

    In this paper, we propose a method for multi-frame blind deconvolution. Two sparse priors, i.e., the natural image gradient prior and an l1-norm based prior, are used to regularize the latent image and the point spread functions (PSFs), respectively. An alternating minimization approach is adopted to solve the resulting optimization problem. We use both grayscale blurred frames from a data set and colored frames captured by a digital camera to verify the robustness of our approach. Experimental results show that the proposed method can accurately reconstruct PSFs with complex structures and that the restored images are of high quality.

  8. Blind deconvolution of images using optimal sparse representations.

    PubMed

    Bronstein, Michael M; Bronstein, Alexander M; Zibulevsky, Michael; Zeevi, Yehoshua Y

    2005-06-01

    The relative Newton algorithm, previously proposed for quasi-maximum likelihood blind source separation and blind deconvolution of one-dimensional signals, is generalized for blind deconvolution of images. A smooth approximation of the absolute value is used as the nonlinear term for sparse sources. In addition, we propose a method of sparsification, which allows blind deconvolution of arbitrary sources, and show how to find optimal sparsifying transformations by supervised learning.

  9. Robust Sparse Blind Source Separation

    NASA Astrophysics Data System (ADS)

    Chenot, Cecile; Bobin, Jerome; Rapin, Jeremy

    2015-11-01

    Blind Source Separation is a widely used technique to analyze multichannel data. In many real-world applications, its results can be significantly hampered by the presence of unknown outliers. In this paper, a novel algorithm coined rGMCA (robust Generalized Morphological Component Analysis) is introduced to retrieve sparse sources in the presence of outliers. It explicitly estimates the sources, the mixing matrix, and the outliers. It also takes advantage of the estimation of the outliers to further implement a weighting scheme, which provides a highly robust separation procedure. Numerical experiments demonstrate the efficiency of rGMCA to estimate the mixing matrix in comparison with standard BSS techniques.

  10. Note: simultaneous measurements of magnetization and electrical transport signal by a reconstructed superconducting quantum interference device magnetometer.

    PubMed

    Wang, H L; Yu, X Z; Wang, S L; Chen, L; Zhao, J H

    2013-08-01

    We have developed a sample rod which makes a conventional superconducting quantum interference device magnetometer capable of performing magnetization and electrical transport measurements simultaneously. The sample holder attached to the end of the 140 cm long sample rod is a nonmagnetic drinking straw or a 1.5 mm wide silicon strip with a small magnetic background signal. Ferromagnetic semiconductor (Ga,Mn)As films were used to test the new sample rod, and the results are in good agreement with previous reports.

  11. Dim moving target tracking algorithm based on particle discriminative sparse representation

    NASA Astrophysics Data System (ADS)

    Li, Zhengzhou; Li, Jianing; Ge, Fengzeng; Shao, Wanxing; Liu, Bing; Jin, Gang

    2016-03-01

    A small, dim moving target is usually submerged in strong noise, and its motion observability is degraded by numerous false alarms at low signal-to-noise ratio (SNR). A target tracking algorithm based on a particle filter and discriminative sparse representation is proposed in this paper to cope with the uncertainty of dim moving target tracking. The weight of each particle is the crucial factor in ensuring the accuracy of dim-target tracking for a particle filter (PF), which can achieve excellent performance even under non-linear and non-Gaussian motion. In the discriminative over-complete dictionary constructed from the image sequence, the target dictionary describes the target signal and the background dictionary embeds the background clutter. The difference between target particles and background particles is thereby enhanced to a great extent, and the weight of each particle is measured by the residual after reconstruction using a prescribed number of target atoms and their corresponding coefficients. The motion state of the dim moving target is then estimated and tracked by these weighted particles. Meanwhile, the subspace of the over-complete dictionary is updated online by a stochastic estimation algorithm. Experiments were conducted, and the results show that the proposed algorithm improves the performance of moving target tracking by enhancing the consistency between the posterior probability distribution and the moving target state.

  12. Sparse and Adaptive Diffusion Dictionary (SADD) for recovering intra-voxel white matter structure.

    PubMed

    Aranda, Ramon; Ramirez-Manzanares, Alonso; Rivera, Mariano

    2015-12-01

    In the analysis of Diffusion-Weighted Magnetic Resonance Images, multi-compartment models overcome the limitations of the well-known Diffusion Tensor model for fitting in vivo brain axonal orientations at voxels with fiber crossings, branchings, kissings or bifurcations. Some successful multi-compartment methods are based on diffusion dictionaries. The diffusion dictionary-based methods assume that the observed Magnetic Resonance signal at each voxel is a linear combination of fixed dictionary elements (dictionary atoms). The atoms are fixed along different orientations and diffusivity profiles. In this work, we present a sparse and adaptive diffusion dictionary method based on the Diffusion Basis Functions model to estimate in vivo brain axonal fiber populations. Our proposal overcomes the following limitations of the diffusion dictionary-based methods: the limited angular resolution and the fixed shapes of the atom set. We propose to iteratively re-estimate the orientations and the diffusivity profile of the atoms independently at each voxel by using a simplified and easier-to-solve mathematical approach. As a result, we improve the fitting of the Diffusion-Weighted Magnetic Resonance signal. The advantages with respect to the former Diffusion Basis Functions method are demonstrated on the synthetic data set used in the 2012 HARDI Reconstruction Challenge and on in vivo human data. We demonstrate that the improved intra-voxel fiber structure estimates benefit brain research by allowing better tractography estimates, and hence result in a more accurate computation of brain connectivity patterns.

  13. Selecting informative subsets of sparse supermatrices increases the chance to find correct trees

    PubMed Central

    2013-01-01

    Background Character matrices with extensive missing data are frequently used in phylogenomics, with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. The drawback of these selections is their exclusive reliance on data coverage without consideration of the actual signal in the data, and they might thus not deliver optimal data matrices in terms of potential phylogenetic signal. In order to circumvent this problem, we have developed a heuristic, implemented in a software tool called mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage and (2) reduces supermatrices with a simple hill-climbing procedure to submatrices with high total information content. We conducted simulation studies using matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10-30%. Results With these matrices, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. Selection of a data subset with the proposed approach increased the chance of recovering correct partial trees more than 10-fold. The selection of data subsets with the proposed simple hill-climbing procedure performed well whether it considered the information content or just simple presence/absence information of genes. We also applied our approach to an empirical data set, addressing questions of vertebrate systematics. With this empirical dataset, selecting a data subset with high information content that supported a tree with high average bootstrap support was most successful when the information content of genes was considered. Conclusions Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis outperforming the

  14. Recent Development of Dual-Dictionary Learning Approach in Medical Image Analysis and Reconstruction.

    PubMed

    Wang, Bigong; Li, Liang

    2015-01-01

    As an implementation of compressive sensing (CS), the dual-dictionary learning (DDL) method provides an effective way to restore signals from two related dictionaries and their sparse representations. It has been proven that this method performs well in medical image reconstruction with highly undersampled data, especially for multimodality imaging such as CT-MRI hybrid reconstruction. Because of its outstanding strengths, short signal acquisition time, and low radiation dose, DDL has attracted broad interest in both academic and industrial fields. In this review article, we summarize DDL's development history, review the latest advances, and discuss its role in future directions and potential applications in medical imaging. Meanwhile, this paper points out that DDL is still at an early stage, and further studies are needed to improve the method, especially in dictionary training.

  15. Recent Development of Dual-Dictionary Learning Approach in Medical Image Analysis and Reconstruction

    PubMed Central

    Wang, Bigong; Li, Liang

    2015-01-01

    As an implementation of compressive sensing (CS), the dual-dictionary learning (DDL) method provides an effective way to restore signals from two related dictionaries and their sparse representations. It has been proven that this method performs well in medical image reconstruction with highly undersampled data, especially for multimodality imaging such as CT-MRI hybrid reconstruction. Because of its outstanding strengths, short signal acquisition time, and low radiation dose, DDL has attracted broad interest in both academic and industrial fields. In this review article, we summarize DDL's development history, review the latest advances, and discuss its role in future directions and potential applications in medical imaging. Meanwhile, this paper points out that DDL is still at an early stage, and further studies are needed to improve the method, especially in dictionary training. PMID:26089956

  16. Sparse-view proton computed tomography using modulated proton beams

    SciTech Connect

    Lee, Jiseoc; Kim, Changhwan; Cho, Seungryong; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong

    2015-02-15

    Purpose: Proton imaging that uses a modulated proton beam and an intensity detector allows relatively fast image acquisition compared to the imaging approach based on a trajectory-tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. However, the straight-ray geometric model assumed in conventional computed tomography (CT) image reconstruction is challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be addressed for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Methods: Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally spaced over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. To improve the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). The performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Results: Objects of higher electron density have been reconstructed more accurately than lower-density objects. The bone, for example, has been reconstructed

  17. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study.

    PubMed

    Jones, Ryan M; O'Reilly, Meaghan A; Hynynen, Kullervo

    2013-07-21

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337-43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source's emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system's resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring techniques currently exist.
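
    The conventional passive beamforming step referenced above (back-propagate the received phases to each candidate grid point and sum coherently) is easy to demonstrate in free water. The sketch below omits the CT-based skull aberration corrections entirely and uses an illustrative hemispherical receiver layout, frequency, and imaging grid.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    c, f = 1500.0, 0.5e6                      # sound speed (m/s), frequency (Hz)
    k = 2 * np.pi * f / c

    # random receivers on a 15 cm-radius hemisphere
    phi = rng.uniform(0, 2 * np.pi, 128)
    cos_t = rng.uniform(0, 1, 128)
    sin_t = np.sqrt(1 - cos_t**2)
    rx = 0.15 * np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

    src = np.array([0.01, -0.005, 0.06])      # true source location (m)
    d_src = np.linalg.norm(rx - src, axis=1)
    signals = np.exp(1j * k * d_src) / d_src  # monochromatic emissions at receivers

    # passive map over a plane through the source depth
    xs = np.linspace(-0.03, 0.03, 121)
    img = np.zeros((xs.size, xs.size))
    for i, x in enumerate(xs):
        for j, y in enumerate(xs):
            d = np.linalg.norm(rx - np.array([x, y, 0.06]), axis=1)
            # amplitude-compensate and back-propagate phases, then sum coherently
            img[i, j] = np.abs(np.sum(signals * d * np.exp(-1j * k * d)))**2

    peak = np.unravel_index(np.argmax(img), img.shape)
    print("peak at:", xs[peak[0]], xs[peak[1]])   # expect near (0.01, -0.005)
    ```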

  18. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    PubMed Central

    Jones, Ryan M.; O’Reilly, Meaghan A.; Hynynen, Kullervo

    2013-01-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337–43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring technique currently exists. PMID:23807573

  19. Transcranial passive acoustic mapping with hemispherical sparse arrays using CT-based skull-specific aberration corrections: a simulation study

    NASA Astrophysics Data System (ADS)

    Jones, Ryan M.; O'Reilly, Meaghan A.; Hynynen, Kullervo

    2013-07-01

    The feasibility of transcranial passive acoustic mapping with hemispherical sparse arrays (30 cm diameter, 16 to 1372 elements, 2.48 mm receiver diameter) using CT-based aberration corrections was investigated via numerical simulations. A multi-layered ray acoustic transcranial ultrasound propagation model based on CT-derived skull morphology was developed. By incorporating skull-specific aberration corrections into a conventional passive beamforming algorithm (Norton and Won 2000 IEEE Trans. Geosci. Remote Sens. 38 1337-43), simulated acoustic source fields representing the emissions from acoustically-stimulated microbubbles were spatially mapped through three digitized human skulls, with the transskull reconstructions closely matching the water-path control images. Image quality was quantified based on main lobe beamwidths, peak sidelobe ratio, and image signal-to-noise ratio. The effects on the resulting image quality of the source’s emission frequency and location within the skull cavity, the array sparsity and element configuration, the receiver element sensitivity, and the specific skull morphology were all investigated. The system’s resolution capabilities were also estimated for various degrees of array sparsity. Passive imaging of acoustic sources through an intact skull was shown possible with sparse hemispherical imaging arrays. This technique may be useful for the monitoring and control of transcranial focused ultrasound (FUS) treatments, particularly non-thermal, cavitation-mediated applications such as FUS-induced blood-brain barrier disruption or sonothrombolysis, for which no real-time monitoring techniques currently exist.

  20. Season-specific climate signal and reconstruction from a new tree-ring network in the southwestern U.S

    NASA Astrophysics Data System (ADS)

    Griffin, D.; Woodhouse, C. A.; Meko, D. M.; Stahle, D. W.; Faulstich, H.; Leavitt, S. W.; Touchan, R.; Castro, C. L.; Carrillo, C.

    2011-12-01

    Our research group has updated existing tree-ring collections from over 50 sampling sites in the southwestern U.S. The new and archived specimens, carefully dated with dendrochronology, have been analyzed for width variations of "earlywood" and "latewood." These are the two components of annual rings in conifers that form in spring and summer, respectively. The network of primary tree-ring data has been used to develop a suite of well-replicated chronologies that extend through the 2008 growing season and are sensitive to the season-specific climate variability of the Southwest. Correlation function analysis indicates that the earlywood chronologies are closely related to cool season (October-April) precipitation variability and the chronologies derived from latewood are generally sensitive to precipitation and temperature conditions during the warm season (June-August). These proxy data originate from biological organisms and are not without bias; however, they do constitute a new means for evaluating the recent paleoclimatic history of the North American summer monsoon. The monsoon is a major component of the region's climate, impacting social and environmental systems and delivering up to 60% of the annual precipitation in the southwestern U.S. We have developed latewood-based retrodictions of monsoon precipitation that explain over half of the variance in the instrumental record, pass standard verification tests, and point to periods of persistent drought and wetness during the last 300-500 years. These reconstructions are being used to evaluate the monsoon's long-term spatiotemporal variability and its relationship to cool season climate and the major modes of ocean-atmosphere variability.

  1. Image fusion using sparse overcomplete feature dictionaries

    DOEpatents

    Brumby, Steven P.; Bettencourt, Luis; Kenyon, Garrett T.; Chartrand, Rick; Wohlberg, Brendt

    2015-10-06

    Approaches for deciding what individuals in a population of visual system "neurons" are looking for using sparse overcomplete feature dictionaries are provided. A sparse overcomplete feature dictionary may be learned for an image dataset and a local sparse representation of the image dataset may be built using the learned feature dictionary. A local maximum pooling operation may be applied on the local sparse representation to produce a translation-tolerant representation of the image dataset. An object may then be classified and/or clustered within the translation-tolerant representation of the image dataset using a supervised classification algorithm and/or an unsupervised clustering algorithm.

  2. Mathematical strategies for filtering complex systems: Regularly spaced sparse observations

    SciTech Connect

    Harlim, J. Majda, A.J.

    2008-05-01

    Real time filtering of noisy turbulent signals through sparse observations on a regularly spaced mesh is a notoriously difficult and important prototype filtering problem. Simpler off-line test criteria are proposed here as guidelines for filter performance for these stiff multi-scale filtering problems in the context of linear stochastic partial differential equations with turbulent solutions. Filtering turbulent solutions of the stochastically forced dissipative advection equation through sparse observations is developed as a stringent test bed for filter performance with sparse regular observations. The standard ensemble transform Kalman filter (ETKF) has poor skill on the test bed and even suffers from filter divergence, surprisingly, at observable times with resonant mean forcing and a decaying energy spectrum in the partially observed signal. Systematic alternative filtering strategies are developed here including the Fourier Domain Kalman Filter (FDKF) and various reduced filters called Strongly Damped Approximate Filter (SDAF), Variance Strongly Damped Approximate Filter (VSDAF), and Reduced Fourier Domain Kalman Filter (RFDKF) which operate only on the primary Fourier modes associated with the sparse observation mesh while nevertheless, incorporating into the approximate filter various features of the interaction with the remaining modes. It is shown below that these much cheaper alternative filters have significant skill on the test bed of turbulent solutions which exceeds ETKF and in various regimes often exceeds FDKF, provided that the approximate filters are guided by the off-line test criteria. The skill of the various approximate filters depends on the energy spectrum of the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale in a precise fashion elucidated here.
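
    The Fourier Domain Kalman Filter idea, running an independent scalar Kalman filter on each observed Fourier mode, can be shown in miniature. The sketch below ignores aliasing between resolved and unresolved modes (which the SDAF/VSDAF/RFDKF variants are designed to handle) and uses arbitrary per-mode damping and noise levels.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_steps, n_modes = 200, 16
    # per-mode complex OU-type dynamics: u <- a*u + noise, observed with noise
    a = np.exp(-0.1 - 0.5j * np.arange(n_modes))   # damped, dispersive modes
    q, r = 0.05, 0.1                               # process / observation variances

    u = np.zeros(n_modes, complex)                 # truth
    m = np.zeros(n_modes, complex)                 # filter mean
    P = np.ones(n_modes)                           # filter variance (per mode)
    err = []
    for _ in range(n_steps):
        w = (rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)) * np.sqrt(q / 2)
        u = a * u + w                              # advance the true signal
        v = (rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)) * np.sqrt(r / 2)
        y = u + v                                  # noisy observation
        m, P = a * m, np.abs(a)**2 * P + q         # predict
        K = P / (P + r)                            # Kalman gain (per mode)
        m, P = m + K * (y - m), (1 - K) * P        # update
        err.append(np.mean(np.abs(u - m)**2))

    print("mean filter MSE:", np.mean(err[50:]), "vs observation variance:", r)
    ```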

  3. Effects of sparse sampling schemes on image quality in low-dose CT

    SciTech Connect

    Abbas, Sajid; Lee, Taewon; Cho, Seungryong; Shin, Sukyoung; Lee, Rena

    2013-11-15

    Purpose: Various scanning methods and image reconstruction algorithms are actively investigated for low-dose computed tomography (CT), which can potentially reduce the health risk related to radiation dose. In particular, compressive-sensing (CS) based algorithms have been successfully developed for reconstructing images from sparsely sampled data. Although these algorithms have shown promise in low-dose CT, it has not been studied how sparse sampling schemes affect image quality in CS-based image reconstruction. In this work, the authors present several sparse-sampling schemes for low-dose CT, quantitatively analyze their data properties, and compare the effects of the sampling schemes on image quality. Methods: The data properties of several sampling schemes are analyzed with respect to CS-based image reconstruction using two measures: sampling density and data incoherence. The authors present five different sparse sampling schemes and simulated those schemes to achieve a targeted dose reduction. Dose reduction factors of about 75% and 87.5%, compared to a conventional scan, were tested. A fully sampled circular cone-beam CT data set was used as a reference, and sparse sampling was realized numerically based on the CBCT data. Results: It is found that both sampling density and data incoherence affect the image quality in CS-based reconstruction. Among the sampling schemes the authors investigated, the sparse-view, many-view undersampling (MVUS)-fine, and MVUS-moving cases have shown promising results. These sampling schemes produced images with similar image quality compared to the reference image, and their structure similarity index values were higher than 0.92 in the mouse head scan with 75% dose reduction. Conclusions: The authors found that in CS-based image reconstructions both sampling density and data incoherence affect the image quality, and suggest that a sampling scheme should be devised and optimized by use of these indicators. With this strategic

  4. Inverse sparse tracker with a locally weighted distance metric.

    PubMed

    Wang, Dong; Lu, Huchuan; Xiao, Ziyang; Yang, Ming-Hsuan

    2015-09-01

    Sparse representation has recently been extensively studied for visual tracking and generally facilitates more accurate tracking results than classic methods. In this paper, we propose a sparsity-based tracking algorithm that features two components: 1) an inverse sparse representation formulation and 2) a locally weighted distance metric. In the inverse sparse representation formulation, the target template is reconstructed with particles, which enables the tracker to compute the weights of all particles by solving only one l1 optimization problem and thereby provides a very efficient model. This is in direct contrast to most previous sparse trackers, which entail solving one optimization problem for each particle. However, we notice that this formulation with the standard Euclidean distance metric is sensitive to partial noise such as occlusion and illumination changes. To this end, we design a locally weighted distance metric to replace the Euclidean one. Similar ideas of using local features appear in other works, but they are supported only by popular assumptions, such as that local models handle partial noise better than holistic models, without any solid theoretical analysis. In this paper, we attempt to explain this explicitly from a mathematical viewpoint. On that basis, we further propose a method to assign local weights by exploiting temporal and spatial continuity. In the proposed method, appearance changes caused by partial occlusion and shape deformation are carefully considered, thereby facilitating accurate similarity measurement and model update. The experimental validation is conducted from two aspects: 1) self-validation of key components and 2) comparison with other state-of-the-art algorithms. Results over 15 challenging sequences show that the proposed tracking algorithm performs favorably against existing sparsity-based trackers and other state-of-the-art methods. PMID:25935033

  5. Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.

    PubMed

    Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen

    2015-04-01

    In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multi-objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms. PMID:26353296

  6. Signals on Graphs: Uncertainty Principle and Sampling

    NASA Astrophysics Data System (ADS)

    Tsitsvero, Mikhail; Barbarossa, Sergio; Di Lorenzo, Paolo

    2016-09-01

    In many applications, the observations can be represented as a signal defined over the vertices of a graph. The analysis of such signals requires the extension of standard signal processing tools. In this work, first, we provide a class of graph signals that are maximally concentrated on the graph domain and on its dual. Then, building on this framework, we derive an uncertainty principle for graph signals and illustrate the conditions for the recovery of band-limited signals from a subset of samples. We show an interesting link between uncertainty principle and sampling and propose alternative signal recovery algorithms, including a generalization to frame-based reconstruction methods. After showing that the performance of signal recovery algorithms is significantly affected by the location of samples, we suggest and compare a few alternative sampling strategies. Finally, we provide the conditions for perfect recovery of a useful signal corrupted by sparse noise, showing that this problem is also intrinsically related to vertex-frequency localization properties.
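
    The band-limited sampling-and-recovery setting discussed above has a direct least-squares illustration: sample a signal spanned by the first k Laplacian eigenvectors on a vertex subset and invert the sampled basis. The graph, bandwidth, and sample placement below are arbitrary choices; the paper's contribution is precisely the conditions under which such recovery succeeds.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, k = 40, 5
    # ring graph plus random chords
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    for _ in range(20):
        i, j = rng.choice(n, 2, replace=False)
        A[i, j] = A[j, i] = 1
    L = np.diag(A.sum(1)) - A
    _, U = np.linalg.eigh(L)                   # graph Fourier basis (columns)

    x = U[:, :k] @ rng.normal(size=k)          # k-band-limited graph signal
    S = rng.choice(n, size=12, replace=False)  # sampled vertices (|S| >= k)
    coef, *_ = np.linalg.lstsq(U[S, :k], x[S], rcond=None)
    x_hat = U[:, :k] @ coef                    # recover from the samples

    print("recovery error:", np.linalg.norm(x_hat - x))
    ```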

  7. Robust image analysis with sparse representation on quantized visual features.

    PubMed

    Bao, Bing-Kun; Zhu, Guangyu; Shen, Jialie; Yan, Shuicheng

    2013-03-01

    Recent techniques based on sparse representation (SR) have demonstrated promising performance in high-level visual recognition, exemplified by the highly accurate face recognition under occlusion and other sparse corruptions. Most research in this area has focused on classification algorithms using raw image pixels, and very few have been proposed to utilize quantized visual features, such as the popular bag-of-words feature abstraction. In such cases, besides the inherent quantization errors, ambiguity associated with visual word assignment and mis-detection of feature points, due to factors such as visual occlusions and noise, constitutes the major cause of dense corruptions of the quantized representation. The dense corruptions can jeopardize the decision process by distorting the patterns of the sparse reconstruction coefficients. In this paper, we aim to eliminate the corruptions and achieve robust image analysis with SR. Toward this goal, we introduce two transfer processes (ambiguity transfer and mis-detection transfer) to account for the two major sources of corruption as discussed. By reasonably assuming the rarity of the two kinds of distortion processes, we augment the original SR-based reconstruction objective with l0-norm regularization on the transfer terms to encourage sparsity and, hence, discourage dense distortion/transfer. Computationally, we relax the nonconvex l0-norm optimization into a convex l1-norm optimization problem, and employ the accelerated proximal gradient method, which yields an updating procedure with provable convergence. Extensive experiments on four benchmark datasets, Caltech-101, Caltech-256, Corel-5k, and CMU pose, illumination, and expression, demonstrate the necessity of removing the quantization corruptions and the various advantages of the proposed framework.
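
    The relax-and-solve pattern named above (l0 relaxed to l1, solved by accelerated proximal gradient) corresponds to the generic FISTA iteration. The sketch below applies it to a plain l1-regularized least-squares problem, min (1/2)||y - Dc||^2 + lam*||c||_1, with arbitrary sizes; the paper's actual objective additionally carries the two transfer terms, which are omitted here.

    ```python
    import numpy as np

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    rng = np.random.default_rng(7)
    D = rng.normal(size=(100, 300)); D /= np.linalg.norm(D, axis=0)
    c_true = np.zeros(300)
    c_true[rng.choice(300, 8, replace=False)] = rng.normal(size=8)
    y = D @ c_true + 0.01 * rng.normal(size=100)

    lam = 0.05
    step = 1.0 / np.linalg.norm(D, 2)**2      # 1 / Lipschitz constant of the gradient
    c = np.zeros(300); z = c.copy(); t = 1.0
    for _ in range(200):
        c_new = soft(z - step * (D.T @ (D @ z - y)), step * lam)  # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = c_new + ((t - 1) / t_new) * (c_new - c)               # momentum extrapolation
        c, t = c_new, t_new

    print("relative error:", np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
    ```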

  8. Optimization of the signal selection of exclusively reconstructed decays of B0 and B/s mesons at CDF-II

    SciTech Connect

    Doerr, Christian

    2006-06-23

    The work presented in this thesis is mainly focused on its application to a Δms measurement. Chapter 1 starts with a general theoretical introduction to the unitarity triangle, with a focus on the impact of a Δms measurement. Chapter 2 then describes the experimental setup, consisting of the Tevatron collider and the CDF II detector, that was used to collect the data. In chapter 3 the concept of parameter estimation using binned and unbinned maximum likelihood fits is laid out. In addition, an introduction to the NeuroBayes® neural network package is given. Chapter 4 outlines the analysis steps, walking the path from the trigger-level selection to fully reconstructed B meson candidates. In chapter 5 the concepts and formulas that form the ingredients of an unbinned maximum likelihood fit of Δms (Δmd) from a sample of reconstructed B mesons are discussed. Chapter 6 then introduces the novel method of using neural networks to achieve an improved signal selection. First the method is developed, tested and validated using the decay B0 → Dπ, D → Kππ, and then applied to the kinematically very similar decay Bs → Dsπ, Ds → Φπ, Φ → KK. Chapter 7 uses events selected by the neural network as input to an unbinned maximum likelihood fit and extracts the B0 lifetime and Δmd. In addition, an amplitude scan and an unbinned maximum likelihood fit of Δms are performed, applying the neural network selection developed for the decay channel Bs → Dsπ, Ds → Φπ, Φ → KK. Finally, chapter 8 summarizes and gives an outlook.

  9. Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms

    PubMed Central

    Petrantonakis, Panagiotis C.; Poirazi, Panayiota

    2015-01-01

    Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in the DG, which are estimated to engage 2-4% of the total population. This sparsity is assumed to enhance the ability of the DG to perform pattern separation, one of the most valuable contributions of the DG during memory formation. In this work, we investigate how features of the DG, such as its excitatory and inhibitory connectivity diagram, can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of a few components from a dictionary that approximate a certain signal. The ability of the DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect via mossy cells, is shown to enhance the performance of IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
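
    Iterative Soft Thresholding, the baseline algorithm that the paper augments, is a one-line iteration. The sketch below shows the plain IST update; the DG-inspired lateral-inhibition features would modify the thresholding/competition step and are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    D = rng.normal(size=(64, 256)); D /= np.linalg.norm(D, axis=0)
    c_true = np.zeros(256)
    c_true[rng.choice(256, 5, replace=False)] = 1.0
    y = D @ c_true                              # signal to approximate sparsely

    c, lam = np.zeros(256), 0.01
    eta = 1.0 / np.linalg.norm(D, 2)**2         # safe step size
    for _ in range(500):
        g = c + eta * (D.T @ (y - D @ c))       # gradient step on the residual
        c = np.sign(g) * np.maximum(np.abs(g) - eta * lam, 0.0)  # soft threshold

    print("active set:", np.flatnonzero(np.abs(c) > 0.1),
          "true:", np.flatnonzero(c_true))
    ```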

  10. Effect of asymmetrical eddy currents on magnetic diagnosis signals for equilibrium reconstruction in the Sino-UNIted Spherical Tokamak.

    PubMed

    Jiang, Y Z; Tan, Y; Gao, Z; Wang, L

    2014-11-01

    The vacuum vessel of the Sino-UNIted Spherical Tokamak was split into two insulated hemispheres, both of which were insulated from the central cylinder. The eddy currents flowing in the vacuum vessel therefore become asymmetrical due to this discontinuity. A 3D finite element model was applied in order to study the eddy currents. The modeling results indicated that when the Poloidal Field (PF) was applied, the induced eddy currents would flow in the toroidal direction in the center of the hemispheres and would be forced to turn to the poloidal and radial directions at the insulated slit. Since the eddy currents converged on the top and bottom of the vessel, the current densities there tended to be much higher than those in the equatorial plane. Moreover, the eddy currents on the top and bottom of the vacuum vessel had the same direction when current flowed in the PF coils. These features resulted in the leading phases of the signals on the top and bottom flux loops when compared with the PF waveforms. PMID:25430380

  11. Energy-based scheme for reconstruction of piecewise constant signals observed in the movement of molecular machines

    NASA Astrophysics Data System (ADS)

    Rosskopf, Joachim; Paul-Yuan, Korbinian; Plenio, Martin B.; Michaelis, Jens

    2016-08-01

    Analyzing the physical and chemical properties of single DNA-based molecular machines such as polymerases and helicases requires tracking stepping motion on the length scale of base pairs. Although high-resolution instruments have been developed that are capable of reaching that limit, individual steps are often hidden by experimental noise, which complicates data processing. Here we present an effective two-step algorithm which detects steps in a high-bandwidth signal by minimizing an energy-based model (energy-based step finder, EBS). First, an efficient convex denoising scheme is applied which allows compression to tuples of amplitudes and plateau lengths. Second, a combinatorial clustering algorithm formulated on a graph is used to assign steps to the tuple data while accounting for prior information. The performance of the algorithm was tested on Poissonian stepping data simulated on the basis of published kinetics data for RNA polymerase II (pol II). Comparison to existing step-finding methods shows that EBS is superior in speed while providing competitive step-detection results, especially in challenging situations. Moreover, the capability to detect backtracked intervals in experimental data of pol II, as well as to detect the stepping behavior of the Phi29 DNA packaging motor, is demonstrated.
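
    The two-step structure (convex denoising, then compression to amplitude/plateau tuples) can be sketched as follows. Total-variation denoising is solved here by projected gradient on its dual, and the tuple compression is a simple jump threshold; the paper's graph-based combinatorial clustering and prior handling are not reproduced, and lam and the tolerances are illustrative.

    ```python
    import numpy as np

    def tv_denoise(y, lam, n_iter=2000, tau=0.25):
        """1D TV ("fused lasso") denoising via projected gradient on the dual."""
        p = np.zeros(y.size - 1)                    # dual variable on the edges
        for _ in range(n_iter):
            x = y.copy(); x[1:] -= p; x[:-1] += p   # x = y - D^T p
            p = np.clip(p + tau * np.diff(x), -lam, lam)
        x = y.copy(); x[1:] -= p; x[:-1] += p
        return x

    rng = np.random.default_rng(9)
    truth = np.repeat([0.0, 1.0, 3.0, 2.0], [200, 150, 250, 100])  # stepped trace
    y = truth + 0.3 * rng.normal(size=truth.size)

    x = tv_denoise(y, lam=15.0)
    jumps = np.flatnonzero(np.abs(np.diff(x)) > 0.2)   # detected step locations
    edges = np.concatenate([[0], jumps + 1, [x.size]])
    # compress into (amplitude, plateau-length) tuples
    tuples = [(x[a:b].mean(), b - a) for a, b in zip(edges[:-1], edges[1:])]
    print(tuples)
    ```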

  12. Energy-based scheme for reconstruction of piecewise constant signals observed in the movement of molecular machines.

    PubMed

    Rosskopf, Joachim; Paul-Yuan, Korbinian; Plenio, Martin B; Michaelis, Jens

    2016-08-01

    Analyzing the physical and chemical properties of single DNA-based molecular machines such as polymerases and helicases requires tracking stepping motion on the length scale of base pairs. Although high-resolution instruments have been developed that are capable of reaching that limit, individual steps are often hidden by experimental noise, which complicates data processing. Here we present an effective two-step algorithm which detects steps in a high-bandwidth signal by minimizing an energy-based model (energy-based step finder, EBS). First, an efficient convex denoising scheme is applied which allows compression to tuples of amplitudes and plateau lengths. Second, a combinatorial clustering algorithm formulated on a graph is used to assign steps to the tuple data while accounting for prior information. The performance of the algorithm was tested on Poissonian stepping data simulated on the basis of published kinetics data for RNA polymerase II (pol II). Comparison to existing step-finding methods shows that EBS is superior in speed while providing competitive step-detection results, especially in challenging situations. Moreover, the capability to detect backtracked intervals in experimental data of pol II, as well as to detect the stepping behavior of the Phi29 DNA packaging motor, is demonstrated. PMID:27627346

  13. Effect of asymmetrical eddy currents on magnetic diagnosis signals for equilibrium reconstruction in the Sino-UNIted Spherical Tokamak

    NASA Astrophysics Data System (ADS)

    Jiang, Y. Z.; Tan, Y.; Gao, Z.; Wang, L.

    2014-11-01

    The vacuum vessel of the Sino-UNIted Spherical Tokamak was split into two insulated hemispheres, both of which were insulated from the central cylinder. The eddy currents flowing in the vacuum vessel therefore become asymmetrical due to this discontinuity. A 3D finite element model was applied in order to study the eddy currents. The modeling results indicated that when the Poloidal Field (PF) was applied, the induced eddy currents would flow in the toroidal direction in the center of the hemispheres and would be forced to turn to the poloidal and radial directions at the insulated slit. Since the eddy currents converged on the top and bottom of the vessel, the current densities there tended to be much higher than those in the equatorial plane. Moreover, the eddy currents on the top and bottom of the vacuum vessel had the same direction when current flowed in the PF coils. These features resulted in the leading phases of the signals on the top and bottom flux loops when compared with the PF waveforms.

  14. A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning.

    PubMed

    Yang, Jiachen; Ding, Zhiyong; Guo, Fei; Wang, Huogen; Hughes, Nick

    2015-11-01

    In this paper, we investigate the problem of optimizing multivariate performance measures, and propose a novel algorithm for it. Different from traditional machine learning methods, which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function corresponding to a multivariate performance measure can be minimized. We propose to represent the tuple of data points by a tuple of sparse codes via a dictionary, and then apply a linear function to compare a sparse code against a given candidate class label. To learn the dictionary, the sparse codes, and the parameter of the linear function, we formulate a joint optimization problem in which both the reconstruction error and the sparsity of the sparse codes, as well as an upper bound on the complex loss function, are minimized. Moreover, the upper bound on the loss function is approximated by the sparse codes and the linear function parameter. To solve this problem, we develop an iterative algorithm based on gradient descent to learn the sparse codes and the hyper-predictor parameter alternately. Experimental results on some benchmark data sets show the advantage of the proposed methods over other state-of-the-art algorithms.

  15. Finding One Community in a Sparse Graph

    NASA Astrophysics Data System (ADS)

    Montanari, Andrea

    2015-10-01

    We consider a random sparse graph with bounded average degree, in which a subset of vertices has higher connectivity than the background. In particular, the average degree inside this subset of vertices is larger than outside (but still bounded). Given a realization of such a graph, we aim at identifying the hidden subset of vertices. This can be regarded as a model for the problem of finding a tightly knit community in a social network, or a cluster in a relational dataset. In this paper we present two sets of contributions: (i) We use the cavity method from spin glass theory to derive an exact phase diagram for the reconstruction problem. In particular, as the difference in edge probability increases, the problem undergoes two phase transitions, a static phase transition and a dynamic one. (ii) We establish rigorous bounds on the dynamic phase transition and prove that, above a certain threshold, a local algorithm (belief propagation) correctly identifies most of the hidden set. Below the same threshold, no local algorithm can achieve this goal. However, in this regime the subset can be identified by exhaustive search. For small hidden sets and large average degree, the phase transition for local algorithms takes an intriguingly simple form. Local algorithms succeed with high probability for deg_in - deg_out > √(deg_out/e) and fail for deg_in - deg_out < √(deg_out/e) (with deg_in, deg_out the average degrees inside and outside the community). We argue that spectral algorithms are also ineffective in the latter regime. It is an open problem whether any polynomial-time algorithm might succeed for deg_in - deg_out < √(deg_out/e).

  16. Sparse Modeling for Astronomical Data Analysis

    NASA Astrophysics Data System (ADS)

    Ikeda, Shiro; Odaka, Hirokazu; Uemura, Makoto

    2016-03-01

    For astronomical data analysis, multiple methods based on sparse modeling have been proposed. We have proposed a method for Compton camera imaging. The proposed approach is a sparse modeling method, but the derived algorithm is different from LASSO. We explain the problem and how we derived the method.

  17. Wide field of view multifocal scanning microscopy with sparse sampling

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Wu, Jigang

    2016-02-01

    We propose to use sparsely sampled line scans with a sparsity-based reconstruction method to obtain images in a wide-field-of-view (WFOV) multifocal scanning microscope. In the WFOV microscope, we used a holographically generated irregular focus grid to scan the sample in one dimension and then reconstructed the sample image from line scans obtained by measuring the transmission of the foci through the sample during scanning. The line scans were randomly spaced, with average spacing larger than the Nyquist sampling requirement, and the image was recovered with sparsity-based reconstruction techniques. With this scheme, the acquisition data can be significantly reduced and the restriction to equally spaced foci positions can be removed, indicating simpler experimental requirements. We built a prototype system and demonstrated the effectiveness of the reconstruction by recovering microscopic images of a U.S. Air Force target and an onion skin cell microscope slide with 40, 60, and 80% missing data with respect to the Nyquist sampling requirement.

  18. Universal regularizers for robust sparse coding and modeling.

    PubMed

    Ramírez, Ignacio; Sapiro, Guillermo

    2012-09-01

    Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared with the more standard l0 or l1 ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.

  1. Local sparse component analysis for blind source separation: an application to resting state FMRI.

    PubMed

    Vieira, Gilson; Amaro, Edson; Baccala, Luiz A

    2014-01-01

    We propose a new Blind Source Separation technique for whole-brain activity estimation that best profits from FMRI's intrinsic spatial sparsity. The Local Sparse Component Analysis (LSCA) combines wavelet analysis, group-separable regularizers, contiguity-constrained clustering and principal component analysis (PCA) into a unique spatially sparse representation of FMRI images, achieving efficient dimensionality reduction without sacrificing physiological characteristics, since artificial stochastic model constraints are avoided. LSCA outperforms classical PCA source reconstruction on artificial data sets over many noise levels. A real FMRI data illustration reveals resting-state activities in regions that are hard to observe, such as the thalamus and basal ganglia, because of their small spatial scale. PMID:25571267

  2. Gridding and fast Fourier transformation on non-uniformly sparse sampled multidimensional NMR data.

    PubMed

    Jiang, Bin; Jiang, Xianwang; Xiao, Nan; Zhang, Xu; Jiang, Ling; Mao, Xi-an; Liu, Maili

    2010-05-01

    For multidimensional NMR methods, non-uniform sparse sampling of the indirect dimensions can dramatically shorten the acquisition time of the experiments. However, non-uniformly sampled NMR data cannot be processed directly using the fast Fourier transform (FFT). We show that non-uniformly sampled NMR data can be reconstructed onto a Cartesian grid with the gridding method, which has been widely applied in MRI, and subsequently processed using the FFT. The proposed gridding-FFT (GFFT) method increases the processing speed sharply compared with the previously proposed non-uniform Fourier transform, and may speed up the application of non-uniform sparse sampling approaches. PMID:20236843
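
    A one-dimensional sketch of the gridding step makes the GFFT idea concrete: spread each non-uniform sample onto a Cartesian grid with a small kernel, normalize by the accumulated kernel weights, then apply the FFT. The Gaussian kernel, its width, and the omission of deapodization are simplifying assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n = 256
    t_nu = np.sort(rng.uniform(0, n, 600))                  # non-uniform time points
    signal = np.exp(2j * np.pi * 40 * t_nu / n) * np.exp(-t_nu / 80)  # decaying peak

    grid = np.zeros(n, complex)
    weights = np.zeros(n)
    sigma, half = 1.0, 3
    for t, s in zip(t_nu, signal):
        j0 = int(round(t))
        for j in range(j0 - half, j0 + half + 1):
            w = np.exp(-0.5 * ((t - j) / sigma)**2)         # Gaussian spreading kernel
            grid[j % n] += w * s
            weights[j % n] += w

    grid[weights > 0] /= weights[weights > 0]               # normalize by kernel weight
    spectrum = np.fft.fft(grid)                             # FFT of the gridded data
    print("peak bin:", np.argmax(np.abs(spectrum)))         # expect a peak near bin 40
    ```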

  3. Gridding and fast Fourier transformation on non-uniformly sparse sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Jiang, Xianwang; Xiao, Nan; Zhang, Xu; Jiang, Ling; Mao, Xi-an; Liu, Maili

    2010-05-01

    For multidimensional NMR methods, non-uniform sparse sampling of the indirect dimensions can dramatically shorten the acquisition time of the experiments. However, non-uniformly sampled NMR data cannot be processed directly using the fast Fourier transform (FFT). We show that non-uniformly sampled NMR data can be reconstructed onto a Cartesian grid with the gridding method, which has been widely applied in MRI, and subsequently processed using the FFT. The proposed gridding-FFT (GFFT) method increases the processing speed sharply compared with the previously proposed non-uniform Fourier transform, and may speed up the application of non-uniform sparse sampling approaches.

  4. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
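
    The column-wise iteration described above can be sketched with a few minimal-residual steps per column plus numerical dropping to keep the columns sparse. The iteration count, drop tolerance, and diagonally dominant test matrix below are arbitrary illustrative choices.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import norm as spnorm

    rng = np.random.default_rng(11)
    n = 100
    A = (sp.random(n, n, density=0.05, random_state=11) + 4 * sp.eye(n)).tocsr()

    def approx_inv_column(j, n_iter=5, droptol=1e-2):
        """Approximate the j-th column of A^{-1} by minimizing ||A m - e_j||_2."""
        e = np.zeros(n); e[j] = 1.0
        m = np.zeros(n); m[j] = 1.0 / A[j, j]     # simple diagonal initial guess
        for _ in range(n_iter):
            r = e - A @ m                         # residual
            Ar = A @ r
            alpha = (r @ Ar) / (Ar @ Ar)          # minimal-residual step length
            m = m + alpha * r
            m[np.abs(m) < droptol] = 0.0          # numerical dropping (keep sparse)
        return m

    M = sp.csc_matrix(np.column_stack([approx_inv_column(j) for j in range(n)]))
    print("||I - A M||_F =", spnorm(sp.eye(n) - A @ M))
    ```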

  5. Maximum constrained sparse coding for image representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Zhao, Danpei; Jiang, Zhiguo

    2015-12-01

    Sparse coding exhibits good performance in many computer vision applications by finding bases which capture high-level semantics of the data and learning sparse coefficients in terms of those bases. However, because the bases are non-orthogonal, sparse coding can hardly preserve the samples' similarity, which is important for discrimination. In this paper, a new image representation method called maximum constrained sparse coding (MCSC) is proposed. A sparse representation with more active coefficients carries more similarity information, so the infinity norm is added to the solution for this purpose. We solve the optimization by constraining the maximum of the codes and releasing the residual to other dictionary atoms. Experimental results on image clustering show that our method can preserve the similarity of adjacent samples and maintain the sparsity of the code simultaneously.

  6. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography are emphasized. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
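
    The same computation is available today through SciPy's ARPACK-backed svds, a Lanczos-type routine for a few extreme singular triplets of a sparse matrix, which stands in here for the CRAY/Alliant implementations discussed in the report.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import svds

    rng = np.random.default_rng(12)
    # a sparse term-document-like matrix
    A = sp.random(5000, 2000, density=0.001, random_state=12, format="csr")

    # six largest singular values and the corresponding singular vectors
    U, s, Vt = svds(A, k=6)
    order = np.argsort(s)[::-1]          # svds returns values in ascending order
    print("largest singular values:", s[order])
    ```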

  7. Sparse distributed memory: Principles and operation

    NASA Technical Reports Server (NTRS)

    Flynn, M. J.; Kanerva, P.; Bhadkamkar, N.

    1989-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long (1000 bit) binary words. Such words can be written into and read from the memory, and they can also be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech recognition and scene analysis, in signal detection and verification, and in adaptive control of automated equipment; in general, in dealing with real-world information in real time. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. Major design issues faced in building such memories were resolved. The design is described of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words. A key aspect of the design is extensive use of dynamic RAM and other standard components.
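
    A toy sparse distributed memory fits in a few lines: random hard-location addresses, activation within a Hamming radius, bipolar counter writes, and thresholded reads. Word length, location count, and radius below are scaled-down illustrative choices rather than the prototype's 256-bit parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    n_bits, n_locations, radius = 64, 2000, 24

    loc_addr = rng.integers(0, 2, (n_locations, n_bits))   # hard-location addresses
    counters = np.zeros((n_locations, n_bits), dtype=int)

    def activated(addr):
        dist = np.sum(loc_addr != addr, axis=1)            # Hamming distances
        return dist <= radius

    def write(addr, data):
        counters[activated(addr)] += 2 * data - 1          # bipolar increments

    def read(addr):
        return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

    word = rng.integers(0, 2, n_bits)
    write(word, word)                                      # autoassociative store

    noisy = word.copy()
    flip = rng.choice(n_bits, 5, replace=False)            # probe address 5 bits off
    noisy[flip] ^= 1
    print("bits recovered:", int(np.sum(read(noisy) == word)), "/", n_bits)
    ```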

  8. Sparse coding for layered neural networks

    NASA Astrophysics Data System (ADS)

    Katayama, Katsuki; Sakata, Yasuo; Horiguchi, Tsuyoshi

    2002-07-01

    We investigate storage capacity of two types of fully connected layered neural networks with sparse coding when binary patterns are embedded into the networks by a Hebbian learning rule. One of them is a layered network, in which a transfer function of even layers is different from that of odd layers. The other is a layered network with intra-layer connections, in which the transfer function of inter-layer is different from that of intra-layer, and inter-layered neurons and intra-layered neurons are updated alternately. We derive recursion relations for order parameters by means of the signal-to-noise ratio method, and then apply the self-control threshold method proposed by Dominguez and Bollé to both layered networks with monotonic transfer functions. We find that a critical value αC of storage capacity is about 0.11|a ln a| -1 ( a≪1) for both layered networks, where a is a neuronal activity. It turns out that the basin of attraction is larger for both layered networks when the self-control threshold method is applied.

  9. Sparse distributed memory prototype: Principles of operation

    NASA Technical Reports Server (NTRS)

    Flynn, Michael J.; Kanerva, Pentti; Ahanin, Bahram; Bhadkamkar, Neal; Flaherty, Paul; Hickey, Philip

    1988-01-01

    Sparse distributed memory is a generalized random access memory (RAM) for long binary words. Such words can be written into and read from the memory, and they can be used to address the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it as measured by the Hamming distance between addresses. Large memories of this kind are expected to have wide use in speech and scene analysis, in signal detection and verification, and in adaptive control of automated equipment. The memory can be realized as a simple, massively parallel computer. Digital technology has reached a point where building large memories is becoming practical. The research is aimed at resolving major design issues that have to be faced in building the memories. The design of a prototype memory with 256-bit addresses and from 8K to 128K locations for 256-bit words is described. A key aspect of the design is extensive use of dynamic RAM and other standard components.

  10. Improvement of signal-to-noise ratio using iterative reconstruction in a 99m Tc-ECD split-dose injection protocol.

    PubMed

    Yokoi, Takashi; Shinohara, Hiroyuki; Takaki, Akihiro

    2003-08-01

    Split-dose injection using technetium-99m ethyl cysteinate dimer ((99m)Tc-ECD) and consecutive SPET measurements performed before and after acetazolamide (ACZ) loading was used to estimate the cerebral perfusion reserve. The disadvantage of the split-dose method is that the signal-to-noise ratio (S/N) of ACZ-loaded images is decreased by subtraction of the 1st SPET data (rest) from the 2nd SPET data (ACZ loaded). To improve the S/N of reconstructed images, we implemented an iterative reconstruction algorithm including a term for the radioactivity remaining in the brain from the 1st injection. It was expected that this method (the "addition method") would improve the S/N of rest and ACZ images compared with the conventional subtraction method owing to exclusion of the subtraction process. To evaluate the effect of statistical noise, we estimated the percentage coefficient of variation (%COV) as a function of total photon counts (from 1.35 to 86.5 Mcounts/slice) by Monte Carlo simulation with equal-volume split-dose injection. The %COV of the 2nd SPET study was higher than that of the 1st (e.g. 50.3% for the 1st and 80.5% for the 2nd at a total count of 2.70 Mcounts/slice) when using the conventional subtraction method. By contrast, the %COV of the 1st and 2nd SPET studies was almost equivalent (e.g. 43.1% for the 1st and 41.4% for the 2nd at a total count of 2.70 Mcounts/slice) when using the addition method. We also determined the optimal injection dose ratio of the 2nd to the 1st SPET study, which provides the equivalent %COV value between the 1st and 2nd images. With the subtraction method, the optimal injection dose ratio of the 2nd to the 1st SPET study was approximately 2.0, while with the addition method it was approximately 1.0. The absolute value of %COV at the optimal injection dose was about 54% and 43% with the subtraction method and the addition method, respectively. The addition method gave a lower %COV value than the subtraction method even at the optimal injection dose.

  11. Generalization of spectral fidelity with flexible measures for the sparse representation classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Wu, Bo; Zhu, Yong; Huang, Xin; Li, Jiayi

    2016-10-01

    Sparse representation classification (SRC) is becoming a promising tool for hyperspectral image (HSI) classification, where the Euclidean spectral distance (ESD) is widely used to reflect the fidelity between the original and reconstructed signals. In this paper, a generalized model is proposed to extend SRC by characterizing the spectral fidelity with flexible similarity measures. To validate the flexibility, several typical similarity measures, namely the spectral angle similarity (SAS), the spectral information divergence (SID), the structural similarity index measure (SSIM), and the ESD, are included in the generalized model. Furthermore, a general solution based on a gradient descent technique is used to solve the nonlinear optimization problem formulated by the flexible similarity measures. To test the generalized model, two actual HSIs were used, and the experimental results confirm the ability of the proposed model to accommodate the various spectral similarity measures. Performance comparisons with the ESD, SAS, SID, and SSIM criteria were also conducted, and the results consistently show the advantages of the generalized model for HSI classification in terms of overall accuracy and kappa coefficient.

  12. Resolving complex fibre architecture by means of sparse spherical deconvolution in the presence of isotropic diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Michailovich, O.; Rathi, Y.

    2014-03-01

    High angular resolution diffusion imaging (HARDI) improves upon more traditional diffusion tensor imaging (DTI) in its ability to resolve the orientations of crossing and branching neural fibre tracts. The HARDI signals are measured over a spherical shell in q-space, and are usually used as an input to q-ball imaging (QBI) which allows estimation of the diffusion orientation distribution functions (ODFs) associated with a given region of interest. Unfortunately, the partial nature of single-shell sampling imposes limits on the estimation accuracy. As a result, the recovered ODFs may not possess sufficient resolution to reveal the orientations of fibre tracts which cross each other at acute angles. A possible solution to the problem of limited resolution of QBI is provided by means of spherical deconvolution, a particular instance of which is sparse deconvolution. However, while capable of yielding high-resolution reconstructions over spatial locations corresponding to white matter, such methods tend to become unstable when applied to anatomical regions with a substantial content of isotropic diffusion. To resolve this problem, a new deconvolution approach is proposed in this paper. Apart from being uniformly stable across the whole brain, the proposed method allows one to quantify the isotropic component of cerebral diffusion, which is known to be a useful diagnostic measure by itself.

  13. Near-field acoustic holography using sparse regularization and compressive sampling principles.

    PubMed

    Chardon, Gilles; Daudet, Laurent; Peillot, Antoine; Ollivier, François; Bertin, Nancy; Gribonval, Rémi

    2012-09-01

    Regularization of the inverse problem is a complex issue when using near-field acoustic holography (NAH) techniques to identify the vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, alternative regularization schemes can be developed based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility to approximate it as a weighted sum of few elementary basis functions. In particular, these techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of compressive sampling; under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly less measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization. PMID:22978881

  14. Efficient nearest neighbors via robust sparse hashing.

    PubMed

    Cherian, Anoop; Sra, Suvrit; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2014-08-01

    This paper presents a new nearest neighbor (NN) retrieval framework: robust sparse hashing (RSH). Our approach is inspired by the success of dictionary learning for sparse coding. Our key idea is to sparse code the data using a learned dictionary, and then to generate hash codes out of these sparse codes for accurate and fast NN retrieval. But, direct application of sparse coding to NN retrieval poses a technical difficulty: when data are noisy or uncertain (which is the case with most real-world data sets), for a query point, an exact match of the hash code generated from the sparse code seldom happens, thereby breaking the NN retrieval. Borrowing ideas from robust optimization theory, we circumvent this difficulty via our novel robust dictionary learning and sparse coding framework called RSH, by learning dictionaries on the robustified counterparts of the perturbed data points. The algorithm is applied to NN retrieval on both simulated and real-world data. Our results demonstrate that RSH holds significant promise for efficient NN retrieval against the state of the art.

  15. Sparse-representation algorithms for blind estimation of acoustic-multipath channels.

    PubMed

    Zeng, Wen-Jun; Jiang, Xue; So, Hing Cheung

    2013-04-01

    Acoustic channel estimation is an important problem in various applications. Unlike many existing channel estimation techniques that need known probe or training signals, this paper develops a blind multipath channel identification algorithm. The proposed approach is based on the single-input multiple-output model and exploits the sparse multichannel structure. Three sparse representation algorithms, namely, matching pursuit, orthogonal matching pursuit, and basis pursuit, are applied to the blind sparse identification problem. Compared with the classical least squares approach to blind multichannel estimation, the proposed scheme does not require that the channel order be exactly determined and it is robust to channel order selection. Moreover, the ill-conditioning induced by the large delay spread is overcome by the sparse constraint. Simulation results for deconvolution of both underwater and room acoustic channels confirm the effectiveness of the proposed approach.
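
    The paper's setting is blind, but the sparse-recovery engine it applies is easy to demonstrate in a non-blind toy: below, orthogonal matching pursuit recovers a sparse multipath impulse response from a known probe. The scikit-learn estimator is real; the channel taps, probe, and sparsity level are assumptions for illustration only.

        # OMP recovering a sparse multipath channel from a known probe signal.
        import numpy as np
        from scipy.linalg import toeplitz
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(1)
        L, T = 64, 256                     # channel length, observation length
        h = np.zeros(L)
        h[[3, 17, 40]] = [1.0, -0.6, 0.3]  # sparse multipath impulse response

        x = rng.standard_normal(T)         # known probe signal
        X = toeplitz(x, np.r_[x[0], np.zeros(L - 1)])  # T x L convolution matrix
        y = X @ h + 0.01 * rng.standard_normal(T)      # received signal

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False)
        omp.fit(X, y)
        print(np.flatnonzero(omp.coef_))   # recovered tap positions: [ 3 17 40]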

  16. Sparse recovery of the multimodal and dispersive characteristics of Lamb waves.

    PubMed

    Harley, Joel B; Moura, José M F

    2013-05-01

    Guided waves in plates, known as Lamb waves, are characterized by complex, multimodal, and frequency dispersive wave propagation, which distort signals and make their analysis difficult. Estimating these multimodal and dispersive characteristics from experimental data becomes a difficult, underdetermined inverse problem. To accurately and robustly recover these multimodal and dispersive properties, this paper presents a methodology referred to as sparse wavenumber analysis based on sparse recovery methods. By utilizing a general model for Lamb waves, waves propagating in a plate structure, and robust l1 optimization strategies, sparse wavenumber analysis accurately recovers the Lamb wave's frequency-wavenumber representation with a limited number of surface mounted transducers. This is demonstrated with both simulated and experimental data in the presence of multipath reflections. With accurate frequency-wavenumber representations, sparse wavenumber synthesis is then used to accurately remove multipath interference in each measurement and predict the responses between arbitrary points on a plate.

  17. Incorporation of prior knowledge for region of change imaging from sparse scan data in image-guided surgery

    NASA Astrophysics Data System (ADS)

    Lee, J.; Stayman, J. W.; Otake, Y.; Schafer, S.; Zbijewski, W.; Khanna, A. J.; Prince, J. L.; Siewerdsen, J. H.

    2012-02-01

    This paper proposes to utilize a patient-specific prior to augment intraoperative sparse-scan data to accurately reconstruct the aspects of the region that have been changed by a surgical procedure in image-guided surgeries. When anatomical changes are introduced by a surgical procedure, only a sparse set of x-ray images are acquired, and the prior volume is registered to these data. Since all the information of the patient anatomy except for the surgical change is already known from the prior volume, we highlight only the change by creating difference images between the new scan and digitally reconstructed radiographs (DRR) computed from the registered prior volume. The region of change (RoC) is reconstructed from these sparse difference images by a penalized likelihood (PL) reconstruction method regularized by a compressed sensing penalty. When the surgical changes are local and relatively small, the RoC reconstruction involves only a small volume size and a small number of projections, allowing much faster computation and lower radiation dose than is needed to reconstruct the entire surgical volume. The reconstructed RoC merges with the prior volume to visualize an updated surgical field. We apply this novel approach to sacroplasty phantom data obtained from a cone-beam CT (CBCT) test bench and vertebroplasty data with a fresh cadaver acquired from a C-arm CBCT system with a flat-panel detector (FPD).

  18. Sparse High Dimensional Models in Economics.

    PubMed

    Fan, Jianqing; Lv, Jinchi; Qi, Lei

    2011-09-01

    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance. Recent developments of theory, methods, and implementations in penalized least squares and penalized likelihood methods are highlighted. These variable selection methods are proved to be effective in high dimensional sparse modeling. The limits of dimensionality that regularization methods can handle, the role of penalty functions, and their statistical properties are detailed. Some recent advances in ultra-high dimensional sparse modeling are also briefly discussed. PMID:22022635
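
    A minimal penalized-least-squares example in the spirit of the methods reviewed above: the lasso recovers a sparse coefficient vector when the number of variables far exceeds the number of observations. The simulated data and the penalty level are illustrative assumptions.

        # Lasso (l1-penalized least squares) in the p >> n regime.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p = 100, 1000                        # far more variables than samples
        beta = np.zeros(p)
        beta[:5] = [3, -2, 1.5, 1, -1]          # sparse ground truth

        X = rng.standard_normal((n, p))
        y = X @ beta + 0.5 * rng.standard_normal(n)

        fit = Lasso(alpha=0.1).fit(X, y)        # penalized least squares
        print(np.flatnonzero(fit.coef_)[:10])   # selected variables, mostly 0..4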

  20. Enhancing Scalability of Sparse Direct Methods

    SciTech Connect

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia,Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-07-23

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome the scalability bottlenecks of direct methods, in both time and memory. These include parallelizing the symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers.
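
    The solvers described above are distributed-memory codes such as SuperLU_DIST; as a small-scale illustration of what a sparse direct method does (factor once, then reuse cheap triangular solves), SciPy's serial SuperLU interface can be used as follows. The test matrix is an arbitrary 1D Poisson operator.

        # Serial sparse direct solve: LU factorization, then triangular solves.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import splu

        n = 10000
        A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
        lu = splu(A)                    # symbolic + numeric factorization
        x = lu.solve(np.ones(n))        # fast solve reusing the factors
        print(np.abs(A @ x - 1).max())  # residual check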

  1. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application in optical imaging, palmprint recognition is interfered with by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residual between the testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method is more robust against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate. PMID:27386281

  2. Thin-film sparse boundary array design for passive acoustic mapping during ultrasound therapy.

    PubMed

    Coviello, Christian M; Kozick, Richard J; Hurrell, Andrew; Smith, Penny Probert; Coussios, Constantin-C

    2012-10-01

    A new 2-D hydrophone array for ultrasound therapy monitoring is presented, along with a novel algorithm for passive acoustic mapping using a sparse weighted aperture. The array is constructed using existing polyvinylidene fluoride (PVDF) ultrasound sensor technology, and is utilized for its broadband characteristics and its high receive sensitivity. For most 2-D arrays, high-resolution imagery is desired, which requires a large aperture at the cost of a large number of elements. The proposed array's geometry is sparse, with elements only on the boundary of the rectangular aperture. The missing information from the interior is filled in using linear imaging techniques. After receiving acoustic emissions during ultrasound therapy, this algorithm applies an apodization to the sparse aperture to limit side lobes and then reconstructs acoustic activity with high spatiotemporal resolution. Experiments show verification of the theoretical point spread function, and cavitation maps in agar phantoms correspond closely to predicted areas, showing the validity of the array and methodology. PMID:23143581

  4. Compressive Sensing of Foot Gait Signals and Its Application for the Estimation of Clinically Relevant Time Series.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2016-07-01

    A new signal reconstruction algorithm for compressive sensing based on the minimization of a pseudonorm which promotes block-sparse structure on the first-order difference of the signal is proposed. Involved optimization is carried out by using a sequential version of Fletcher-Reeves' conjugate-gradient algorithm, and the line search is based on Banach's fixed-point theorem. The algorithm is suitable for the reconstruction of foot gait signals which admit block-sparse structure on the first-order difference. An additional algorithm for the estimation of stride-interval, swing-interval, and stance-interval time series from the reconstructed foot gait signals is also proposed. This algorithm is based on finding zero crossing indices of the foot gait signal and using the resulting indices for the computation of time series. Extensive simulation results demonstrate that the proposed signal reconstruction algorithm yields improved signal-to-noise ratio and requires significantly reduced computational effort relative to several competing algorithms over a wide range of compression ratio. For a compression ratio in the range from 88% to 94%, the proposed algorithm is found to offer improved accuracy for the estimation of clinically relevant time-series parameters, namely, the mean value, variance, and spectral index of stride-interval, stance-interval, and swing-interval time series, relative to its nearest competitor algorithm. The improvement in performance for compression ratio as high as 94% indicates that the proposed algorithms would be useful for designing compressive sensing-based systems for long-term telemonitoring of human gait signals. PMID:25675451
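
    The second algorithm's core idea, turning zero crossings of the foot gait signal into clinically relevant interval time series, can be sketched directly; the synthetic signal, sampling rate, and variable names below are assumptions, not the paper's data or code.

        # Zero-crossing detection and stride-interval statistics on a toy signal.
        import numpy as np

        fs = 100.0                                   # assumed sampling rate, Hz
        t = np.arange(0, 60, 1 / fs)
        rng = np.random.default_rng(2)
        gait = np.sign(np.sin(2 * np.pi * 0.9 * t))  # toy gait-like waveform
        gait += 0.01 * rng.standard_normal(t.size)   # mild measurement noise

        neg = np.signbit(gait)
        crossings = np.flatnonzero(neg[:-1] != neg[1:])  # sign-change indices
        rising = crossings[neg[crossings]]   # negative-to-positive crossings
        stride = np.diff(rising) / fs        # stride-interval time series, seconds

        print(stride.mean(), stride.var())   # clinically relevant statistics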

  6. Sparse coded image super-resolution using K-SVD trained dictionary based on regularized orthogonal matching pursuit.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2015-01-01

    Image super-resolution (SR) plays a vital role in medical imaging that allows a more efficient and effective diagnosis process. Usually, diagnosing is difficult and inaccurate from low-resolution (LR) and noisy images. Resolution enhancement through conventional interpolation methods strongly affects the precision of consequent processing steps, such as segmentation and registration. Therefore, we propose an efficient sparse coded image SR reconstruction technique using a trained dictionary. We apply a simple and efficient regularized version of orthogonal matching pursuit (ROMP) to seek the coefficients of sparse representation. ROMP has the transparency and greediness of OMP and the robustness of L1-minimization, which enhance the dictionary learning process to capture feature descriptors such as oriented edges and contours from complex images like brain MRIs. The sparse coding part of the K-SVD dictionary training procedure is modified by substituting OMP with ROMP. The dictionary update stage allows simultaneously updating an arbitrary number of atoms and vectors of sparse coefficients. In SR reconstruction, ROMP is used to determine the vector of sparse coefficients for the underlying patch. The recovered representations are then applied to the trained dictionary, and finally, an optimization leads to high-resolution output of high quality. Experimental results demonstrate that the super-resolution reconstruction quality of the proposed scheme is comparatively better than other state-of-the-art schemes. PMID:26405902

  7. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and a conclusion is drawn that the processing of ISAR imaging can be denoted mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar and utilize random sub-sampling in both range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D by the Kronecker product, which increases the size of the dictionary and the computational cost sharply. In this paper, we introduce the 2D-SL0 algorithm into the reconstruction of the image. It is proved that 2D-SL0 achieves results equivalent to those of 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present the results of simulation experiments, which prove the effectiveness and feasibility of our method.
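
    The cost of the 1D detour mentioned above follows from a standard vectorization identity: with row-major flattening, the 2D model A X B^T is equivalent to applying kron(A, B) to the flattened scene, a matrix that quickly becomes enormous, which is exactly what 2D algorithms such as 2D-SL0 avoid forming. The check below uses small illustrative sizes.

        # vec(A X B^T) = kron(A, B) vec(X) for row-major vectorization.
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.standard_normal((12, 32))   # range-dimension measurement matrix
        B = rng.standard_normal((10, 24))   # azimuth-dimension measurement matrix
        X = np.zeros((32, 24))
        X[3, 5], X[20, 7] = 1.0, -2.0       # sparse 2D scene

        y2d = (A @ X @ B.T).ravel()         # operate in 2D: cheap
        y1d = np.kron(A, B) @ X.ravel()     # equivalent 1D model: 120 x 768 matrix
        print(np.allclose(y2d, y1d))        # True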

  8. On finding supernodes for sparse matrix computations

    SciTech Connect

    Liu, J.W.H.; Ng, E.; Peyton, B.W.

    1990-06-01

    A simple characterization of fundamental supernodes is given in terms of the row subtrees of sparse Cholesky factors in the elimination tree. Using this characterization, we present an efficient algorithm that determines the set of such supernodes in time proportional to the number of nonzeros and equations in the original matrix. Experimental results are included to demonstrate the use of this algorithm in the context of sparse supernodal symbolic factorization. 18 refs., 3 figs., 3 tabs.

  9. Sparse and compositionally robust inference of microbial ecological networks.

    PubMed

    Kurtz, Zachary D; Müller, Christian L; Miraldi, Emily R; Littman, Dan R; Blaser, Martin J; Bonneau, Richard A

    2015-05-01

    16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) are required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies. SPIEC
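
    A rough sketch of the two ingredients named above, on simulated counts: a centered log-ratio (clr) transform to address compositionality, followed by sparse inverse-covariance selection with the graphical lasso. The real SPIEC-EASI pipeline adds neighborhood selection and model selection; the simulated counts and regularization level here are assumptions.

        # clr transform + graphical lasso, a simplified SPIEC-EASI-like pipeline.
        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)
        counts = rng.poisson(rng.uniform(1, 50, size=(200, 30))) + 1  # samples x OTUs

        comp = counts / counts.sum(axis=1, keepdims=True)      # relative abundances
        log_comp = np.log(comp)
        clr = log_comp - log_comp.mean(axis=1, keepdims=True)  # centered log-ratio

        model = GraphicalLasso(alpha=0.2, max_iter=200).fit(clr)
        precision = model.precision_                      # sparse inverse covariance
        print(np.count_nonzero(np.triu(precision, k=1)))  # inferred association edges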

  11. Single-Trial Sparse Representation-Based Approach for VEP Extraction

    PubMed Central

    Yu, Nannan; Hu, Funian; Zou, Dexuan; Ding, Qisheng

    2016-01-01

    Sparse representation is a powerful tool in signal denoising, and visual evoked potentials (VEPs) have been proven to have strong sparsity over an appropriate dictionary. Inspired by this idea, we present in this paper a novel sparse representation-based approach to solving the VEP extraction problem. The extraction process is performed in three stages. First, instead of using the mixed signals containing the electroencephalogram (EEG) and VEPs, we utilise an EEG from a previous trial, which did not contain VEPs, to identify the parameters of the EEG autoregressive (AR) model. Second, instead of the moving average (MA) model, sparse representation is used to model the VEPs in the autoregressive-moving average (ARMA) model. Finally, we calculate the sparse coefficients and derive VEPs by using the AR model. Next, we tested the performance of the proposed algorithm with synthetic and real data, after which we compared the results with those of an AR model with exogenous input modelling and a mixed overcomplete dictionary-based sparse component decomposition method. Utilising the synthetic data, the algorithms are then employed to estimate the latencies of P100 of the VEPs corrupted by added simulated EEG at different signal-to-noise ratio (SNR) values. The validations demonstrate that our method preserves the details of the VEPs well for latency estimation, even in low SNR environments. PMID:27807541

  12. CARS Spectral Fitting with Multiple Resonant Species using Sparse Libraries

    NASA Technical Reports Server (NTRS)

    Cutler, Andrew D.; Magnotti, Gaetano

    2010-01-01

    The dual pump CARS technique is often used in the study of turbulent flames. Fast and accurate algorithms are needed for fitting dual-pump CARS spectra for temperature and multiple chemical species. This paper describes the development of such an algorithm. The algorithm employs sparse libraries, whose size grows much more slowly with the number of species than a conventional library's. The method was demonstrated by fitting synthetic "experimental" spectra containing four resonant species (N2, O2, H2 and CO2), both with and without noise, and by fitting experimental spectra from a H2-air flame produced by a Hencken burner. In both studies, weighted least-squares fitting of the signal, as opposed to unweighted least-squares fitting of the signal or of its square root, was shown to produce the least random error and to minimize bias error in the fitted parameters.
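
    The weighting comparison above is generic enough to demonstrate outside CARS: below, a decaying curve is fitted with and without weights when the noise level grows with the signal. The model, noise law, and parameters are illustrative assumptions; the SciPy call itself is standard.

        # Weighted vs. unweighted least squares under signal-dependent noise.
        import numpy as np
        from scipy.optimize import curve_fit

        def model(x, a, b):
            return a * np.exp(-b * x)

        rng = np.random.default_rng(3)
        x = np.linspace(0, 4, 200)
        y_true = model(x, 5.0, 1.2)
        sigma = 0.05 + 0.1 * y_true        # noise level grows with the signal
        y = y_true + sigma * rng.standard_normal(x.size)

        p_plain, _ = curve_fit(model, x, y, p0=(1, 1))
        p_wls, _ = curve_fit(model, x, y, p0=(1, 1), sigma=sigma, absolute_sigma=True)
        print(p_plain, p_wls)              # the weighted fit is typically less biased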

  13. LASER APPLICATIONS IN MEDICINE: Analysis of distortions in the velocity profiles of suspension flows inside a light-scattering medium upon their reconstruction from the optical coherence Doppler tomograph signal

    NASA Astrophysics Data System (ADS)

    Bykov, A. V.; Kirillin, M. Yu; Priezzhev, A. V.

    2005-11-01

    Model signals from one and two plane flows of a particle suspension are obtained for an optical coherence Doppler tomograph (OCDT) by the Monte Carlo method. The optical properties of the particles mimic those of non-aggregating erythrocytes. The flows are considered in a stationary scattering medium with optical properties close to those of the skin. It is shown that, as the flow position depth increases, the flow velocity determined from the OCDT signal becomes smaller than the specified velocity and the reconstructed profile extends in the direction of the distant boundary, which is accompanied by a shift of its maximum. In the case of two flows, an increase in the velocity of the near-surface flow leads to overestimated velocities in the reconstructed profile of the second flow. Numerical simulations were performed using a multiprocessor parallel-architecture computer.

  14. Finding Nonoverlapping Substructures of a Sparse Matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2005-08-11

    Many applications of scientific computing rely on computations on sparse matrices. The design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the high compute-to-memory ratio and irregular memory access patterns, the performance of sparse matrix kernels is often far away from the peak performance on a modern processor. Alternative data structures have been proposed, which split the original matrix A into A_d and A_s, so that A_d contains all dense blocks of a specified size in the matrix, and A_s contains the remaining entries. This enables the use of dense matrix kernels on the entries of A_d, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping dense blocks in a sparse matrix, which has not previously been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm that runs in linear time in the number of nonzeros in the matrix. This extended abstract focuses on our results for 2x2 dense blocks. However, we show that our results can be generalized to arbitrary-sized dense blocks, and to many other oriented substructures, which can be exploited to improve the memory performance of sparse matrix operations.

  15. Sparse extreme learning machine for classification.

    PubMed

    Bai, Zuo; Huang, Guang-Bin; Wang, Danwei; Wang, Han; Westover, M Brandon

    2014-10-01

    Extreme learning machine (ELM) was initially proposed for single-hidden-layer feedforward neural networks (SLFNs). In the hidden layer (feature mapping), nodes are randomly generated independently of training data. Furthermore, a unified ELM was proposed, providing a single framework to simplify and unify different learning methods, such as SLFNs, least square support vector machines, proximal support vector machines, and so on. However, the solution of unified ELM is dense, and thus, usually plenty of storage space and testing time are required for large-scale applications. In this paper, a sparse ELM is proposed as an alternative solution for classification, reducing storage space and testing time. In addition, unified ELM obtains the solution by matrix inversion, whose computational complexity is between quadratic and cubic with respect to the training size. It still requires plenty of training time for large-scale problems, even though it is much faster than many other traditional methods. In this paper, an efficient training algorithm is specifically developed for sparse ELM. The quadratic programming problem involved in sparse ELM is divided into a series of smallest possible sub-problems, each of which is solved analytically. Compared with SVM, sparse ELM obtains better generalization performance with much faster training speed. Compared with unified ELM, sparse ELM achieves similar generalization performance for binary classification applications, and when dealing with large-scale binary classification problems, sparse ELM realizes even faster training speed than unified ELM. PMID:25222727

  16. Sparse MEG source imaging in Landau-Kleffner syndrome.

    PubMed

    Zhu, Min; Zhang, Wenbo; Dickens, Deanna; Ding, Lei

    2011-01-01

    Epilepsy patients with Landau-Kleffner syndrome (LKS) usually have a normal brain structure, which makes it a challenge to identify the epileptogenic zone based only on magnetic resonance imaging (MRI) data. A sparse source imaging technique called variation-based sparse cortical current density (VB-SCCD) imaging was adopted here to reconstruct cortical sources of magnetoencephalography (MEG) interictal spikes from an LKS patient. Realistic boundary element (BE) head and cortex models were built by segmenting structural MRI. 148-channel MEG was recorded for 10 minutes between seizures. In total, 29 epileptiform spikes were selected for analysis. The primary cortical sources were observed to be located at the left intra- and perisylvian cortex. Multiple extrasylvian sources were identified as secondary sources. The spatio-temporal patterns of cortical sources provide more insight into the neuronal synchrony and propagation of epileptic discharges. Our observations were consistent with the presurgical diagnosis for this patient and with the observation of aphasia in LKS. The present results suggest the promise of the VB-SCCD technique in assisting presurgical planning, in studying the neural network underlying LKS, and in determining the lateralization of epileptic origins. It can further be applied in the future to non-invasively localize and/or lateralize eloquent cortex for language in epilepsy patients in general.

  17. Channeled spectropolarimetry using iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Dennis J.; LaCasse, Charles F.; Craven, Julia M.

    2016-05-01

    Channeled spectropolarimeters (CSP) measure the polarization state of light as a function of wavelength. Conventional Fourier reconstruction suffers from noise, assumes the channels are band-limited, and requires uniformly spaced samples. To address these problems, we propose an iterative reconstruction algorithm. We develop a mathematical model of CSP measurements and minimize a cost function based on this model. We simulate a measured spectrum using example Stokes parameters, from which we compare conventional Fourier reconstruction and iterative reconstruction. Importantly, our iterative approach can reconstruct signals that contain more bandwidth, an advancement over Fourier reconstruction. Our results also show that iterative reconstruction mitigates noise effects, processes non-uniformly spaced samples without interpolation, and more faithfully recovers the ground truth Stokes parameters. This work offers a significant improvement to Fourier reconstruction for channeled spectropolarimetry.

  18. W-band sparse synthetic aperture for computational imaging.

    PubMed

    Venkatesh, S; Viswanathan, N; Schurig, D

    2016-04-18

    We present a sparse synthetic-aperture, active imaging system at W-band (75 - 110 GHz), which uses sub-harmonic mixer modules. The system employs mechanical scanning of the receiver module position, and a fixed transmitter module. A vector network analyzer provides the back end detection. A full-wave forward model allows accurate construction of the image transfer matrix. We solve the inverse problem to reconstruct scenes using the least squares technique. We demonstrate far-field, diffraction limited imaging of 2D and 3D objects and achieve a cross-range resolution of 3 mm and a depth-range resolution of 4 mm, respectively. Furthermore, we develop an information-based metric to evaluate the performance of a given image transfer matrix for noise-limited, computational imaging systems. We use this metric to find the optimal gain of the radiating element for a given range, both theoretically and experimentally in our system. PMID:27137270

  19. Image reconstruction for single detector rosette scanning systems based on compressive sensing theory

    NASA Astrophysics Data System (ADS)

    Uzeler, Hande; Cakir, Serdar; Aytaç, Tayfun

    2016-02-01

    Compressive sensing (CS) is a signal processing technique that enables a signal that has a sparse representation in a known basis to be reconstructed using measurements obtained below the Nyquist rate. Single detector image reconstruction applications using CS have been shown to give promising results. In this study, we investigate the application of CS theory to single-detector infrared (IR) rosette scanning systems, which suffer from low performance compared to costly focal plane array (FPA) detectors. The single-detector pseudoimaging rosette scanning system scans the scene with a specific pattern and performs processing to estimate the target location without forming an image. In this context, this generation of scanning systems may be improved by utilizing the samples obtained by the rosette scanning pattern in conjunction with the CS framework. For this purpose, we consider surface-to-air engagement scenarios using IR images containing aerial targets and flares. The IR images have been reconstructed from samples obtained with the rosette scanning pattern and other baseline sampling strategies. It has been shown that the proposed scheme exhibits good reconstruction performance, and that imaging performance comparable to that of a large FPA can be achieved using a single IR detector with a rosette scanning pattern.

  20. The application of a sparse, distributed memory to the detection, identification and manipulation of physical objects

    NASA Technical Reports Server (NTRS)

    Kanerva, P.

    1986-01-01

    To determine the relation of the sparse, distributed memory to other architectures, a broad review of the literature was made. Such memories are called pattern memories because they work with large patterns of features (high-dimensional vectors). A pattern is stored in a pattern memory by distributing it over a large number of storage elements and by superimposing it over other stored patterns. A pattern is retrieved by mathematical or statistical reconstruction from the distributed elements. Three pattern memories are discussed.

  1. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  2. Sparse/DCT (S/DCT) two-layered representation of prediction residuals for video coding.

    PubMed

    Kang, Je-Won; Gabbouj, Moncef; Kuo, C-C Jay

    2013-07-01

    In this paper, we propose a cascaded sparse/DCT (S/DCT) two-layer representation of prediction residuals, and implement this idea on top of the state-of-the-art high efficiency video coding (HEVC) standard. First, a dictionary is adaptively trained to contain featured patterns of residual signals so that a high portion of the energy in a structured residual can be efficiently coded via sparse coding. It is observed that the sparse representation alone is less effective in terms of R-D performance due to the side-information overhead at higher bit rates. To overcome this problem, the DCT representation is cascaded at the second stage. It is applied to the remaining signal to improve coding efficiency. The two representations successfully complement each other. It is demonstrated by experimental results that the proposed algorithm outperforms the HEVC reference codec HM5.0 under the Common Test Conditions.

  3. Velocity analysis using high-resolution semblance based on sparse hyperbolic Radon transform

    NASA Astrophysics Data System (ADS)

    Gong, Xiangbo; Wang, Shengchao; Zhang, Tianze

    2016-11-01

    Semblance measures the lateral coherency of the seismic events in a common mid-point gather, and it has been widely used for the normal-moveout-based velocity estimation. In this paper, we propose a new velocity analysis method by using high-resolution semblance based on sparse hyperbolic Radon transform (SHRT). Conventional semblance can be defined as the ratio of signal energy to total energy in the time gate. We replace the signal energy with the square of the sparse Radon panel and replace the total energy with the sparse Radon panel of the square data. Because of the sparsity-constrained inversion of SHRT, the new approach can produce higher resolution semblance spectra than the conventional semblance. We test this new semblance on synthetic and field data to demonstrate the improvements in velocity analysis.
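
    For reference, the conventional semblance that the SHRT-based measure is compared against is simple to compute: the ratio of stacked-trace energy to total trace energy over a time gate, for a gather already NMO-corrected at the trial velocity. The synthetic gather and gate size below are illustrative.

        # Conventional semblance over one (t0, v) trial gate.
        import numpy as np

        def semblance(window):
            """window: (n_time, n_traces) NMO-corrected gate."""
            n_traces = window.shape[1]
            stacked = window.sum(axis=1)
            num = np.sum(stacked ** 2)             # energy of the stacked trace
            den = n_traces * np.sum(window ** 2)   # total energy across traces
            return num / den if den > 0 else 0.0

        rng = np.random.default_rng(0)
        event = np.sin(np.linspace(0, np.pi, 11))  # flat event after correct NMO
        gather = np.tile(event[:, None], (1, 24))
        gather += 0.3 * rng.standard_normal(gather.shape)
        print(semblance(gather))                   # high (near 1) for coherent events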

  4. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of the space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be enforced small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  5. Task-based optimization of image reconstruction in breast CT

    NASA Astrophysics Data System (ADS)

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
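
    The figure of merit behind such a study is compact: for a known mean signal difference ds and image covariance K, the Hotelling observer satisfies SNR^2 = ds' K^{-1} ds. The toy computation below uses a simulated low-dimensional covariance purely for illustration; none of it is the paper's data.

        # Hotelling observer SNR on simulated images.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_img = 64, 5000
        ds = np.zeros(n_pix)
        ds[30:34] = 0.5                    # mean difference between hypotheses

        noise = rng.standard_normal((n_img, n_pix)) * (0.5 + rng.uniform(0, 1, n_pix))
        K = np.cov(noise, rowvar=False)    # estimated image covariance

        snr2 = ds @ np.linalg.solve(K, ds) # Hotelling template is K^{-1} ds
        print(np.sqrt(snr2))               # HO SNR for the detection task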

  6. Detect signals of interdecadal climate variations from an enhanced suite of reconstructed precipitation products since 1850 using the historical station data from Global Historical Climatology Network and the dynamical patterns derived from Global Precipitation Climatology Project

    NASA Astrophysics Data System (ADS)

    Shen, S. S.

    2015-12-01

    This presentation describes the detection of interdecadal climate signals in a newly reconstructed precipitation dataset covering 1850-present. Examples include the precipitation signatures of the East Asian Monsoon (EAM), the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). The new reconstruction dataset is an enhanced edition of a suite of global precipitation products reconstructed by Spectral Optimal Gridding of Precipitation Version 1.0 (SOGP 1.0). The maximum temporal coverage is 1850-present and the spatial coverage is quasi-global (75S, 75N). This enhanced version has three different temporal resolutions (5-day, monthly, and annual) and two different spatial resolutions (2.5 deg and 5.0 deg). It also has a friendly Graphical User Interface (GUI). SOGP uses a multivariate regression method based on an empirical orthogonal function (EOF) expansion. The Global Precipitation Climatology Project (GPCP) precipitation data from 1981-2010 are used to calculate the EOFs. The Global Historical Climatology Network (GHCN) gridded data are used to calculate the regression coefficients for the reconstructions. The sampling errors of the reconstruction are analyzed according to the number of EOF modes used in the reconstruction. Our reconstructed 1900-2011 time series of the global average annual precipitation shows a 0.024 (mm/day)/100a trend, which is very close to the trend derived from the mean of 25 models of the CMIP5 (Coupled Model Intercomparison Project Phase 5). Our reconstruction has been validated against GPCP data after 1979. Our reconstruction successfully displays the 1877 El Nino (see the attached figure), which is considered a validation before 1900. Our precipitation products are publicly available online, including digital data, precipitation animations, computer codes, readme files, and the user manual. This work is a joint effort of San Diego State University (Sam Shen, Gregori Clarke, Christian Junjinger, Nancy Tafolla, Barbara Sperberg, and
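
    Stripped of the real-data machinery, the reconstruction idea above (EOFs from a well-sampled modern period, sparse historical stations regressed onto the leading modes) can be sketched as follows; the synthetic field, station count, and number of modes are assumptions.

        # EOF-based reconstruction of a field from sparse "station" samples.
        import numpy as np

        rng = np.random.default_rng(0)
        n_grid, n_time = 500, 360
        field = rng.standard_normal((n_time, 3)) @ rng.standard_normal((3, n_grid))
        field += 0.1 * rng.standard_normal((n_time, n_grid))  # 3-mode synthetic field

        # EOFs from the densely observed period (stand-in for GPCP)
        _, _, vt = np.linalg.svd(field - field.mean(axis=0), full_matrices=False)
        eofs = vt[:3]                                  # leading spatial modes

        # one "historical" time step observed only at sparse stations (GHCN-like)
        truth = field[0]
        stations = rng.choice(n_grid, 40, replace=False)
        coef, *_ = np.linalg.lstsq(eofs[:, stations].T, truth[stations], rcond=None)
        recon = coef @ eofs                            # full-field reconstruction
        print(np.corrcoef(recon, truth)[0, 1])         # high correlation with truth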

  7. Dictionary construction in sparse methods for image restoration

    SciTech Connect

    Wohlberg, Brendt

    2010-01-01

    Sparsity-based methods have achieved very good performance in a wide variety of image restoration problems, including denoising, inpainting, super-resolution, and source separation. These methods are based on the assumption that the image to be reconstructed may be represented as a superposition of a few known components, and the appropriate linear combination of components is estimated by solving an optimization such as Basis Pursuit De-Noising (BPDN). Considering that the K-SVD constructs a dictionary which has been optimized for mean performance over a training set, it is not too surprising that better performance can be achieved by selecting a custom dictionary for each individual block to be reconstructed. The nearest neighbor dictionary construction can be understood geometrically as a method for estimating the local projection onto the manifold of image blocks, whereas the K-SVD dictionary makes more sense within a source-coding framework (it is presented as a generalization of the k-means algorithm for constructing a VQ codebook) and is therefore, it could be argued, less appropriate in principle for reconstruction problems. One can, of course, motivate the use of the K-SVD in reconstruction applications on practical grounds, avoiding the computational expense of constructing a different dictionary for each block to be denoised. Since the performance of the nearest neighbor dictionary decreases when the dictionary becomes sufficiently large, this method is also superior to the approach of utilizing the entire training set as a dictionary (and this can also be understood within the image-block manifold model). In practical terms, the tradeoff is between the computational cost of a nearest neighbor search (which can be performed very efficiently) and the increased cost of the sparse optimization.

  8. Finding nonoverlapping substructures of a sparse matrix

    SciTech Connect

    Pinar, Ali; Vassilevska, Virginia

    2004-08-09

    Many applications of scientific computing rely on computations on sparse matrices, so the design of efficient implementations of sparse matrix kernels is crucial for the overall efficiency of these applications. Due to the low ratio of computation to memory access and the irregular memory access patterns, the performance of sparse matrix kernels is often far from the peak performance of a modern processor. Alternative data structures have been proposed, which split the original matrix A into A_d and A_s, so that A_d contains all dense blocks of a specified size in the matrix, and A_s contains the remaining entries. This enables the use of dense matrix kernels on the entries of A_d, producing better memory performance. In this work, we study the problem of finding a maximum number of nonoverlapping rectangular dense blocks in a sparse matrix, which has not been studied in the sparse matrix community. We show that the maximum nonoverlapping dense blocks problem is NP-complete by using a reduction from the maximum independent set problem on cubic planar graphs. We also propose a 2/3-approximation algorithm for 2x2 blocks that runs in linear time in the number of nonzeros in the matrix. We discuss alternatives to rectangular blocks, such as diagonal blocks and cross blocks, and present complexity analysis and approximation algorithms.
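
    A simplified sketch of the A = A_d + A_s splitting is given below; note that this is a plain greedy scan over 2x2 blocks for illustration, not the paper's linear-time 2/3-approximation algorithm.

```python
# Simplified greedy split of a sparse matrix A into A_d (fully dense 2x2
# blocks) and A_s (remainder). Illustrative only; not the 2/3-approximation.
import numpy as np
from scipy.sparse import random as sprandom, lil_matrix

A = sprandom(100, 100, density=0.1, format="lil", random_state=0)
A_d = lil_matrix(A.shape)
used = np.zeros(A.shape, dtype=bool)

rows, cols = A.nonzero()
for i, j in zip(rows, cols):
    if i + 1 < A.shape[0] and j + 1 < A.shape[1] and not used[i:i+2, j:j+2].any():
        blk = A[i:i+2, j:j+2].toarray()
        if np.count_nonzero(blk) == 4:         # fully dense 2x2 block
            A_d[i:i+2, j:j+2] = blk            # move the block into A_d
            A[i:i+2, j:j+2] = 0
            used[i:i+2, j:j+2] = True

A_s = A                                        # remaining scattered entries
print(A_d.nnz, "entries moved into dense blocks;", A_s.nnz, "left in A_s")
```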

  9. Removing sparse noise from hyperspectral images with sparse and low-rank penalties

    NASA Astrophysics Data System (ADS)

    Tariyal, Snigdha; Aggarwal, Hemant Kumar; Majumdar, Angshul

    2016-03-01

    In diffraction grating systems, defective pixels sometimes occur on the focal plane array; this results in horizontal lines of corrupted pixels in some channels. Since only a few such pixels exist, the corruption/noise is sparse. Studies on sparse noise removal from hyperspectral images are scarce. To remove such sparse noise, a prior work exploited the interband spectral correlation along with intraband spatial redundancy to yield a sparse representation in transform domains. We improve upon the prior technique. The intraband spatial redundancy is modeled as a sparse set of transform coefficients, and the interband spectral correlation is modeled as a rank-deficient matrix. The resulting optimization problem is solved using the split Bregman technique. Comparative experimental results show that our proposed approach is better than the previous one.
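
    The following toy sketch separates a matrixized hyperspectral cube into a low-rank component (interband correlation) and a sparse component (defective-pixel noise) by alternating singular-value and soft thresholding; this simple proximal alternation is a stand-in for the paper's split Bregman solver, and all parameter values are illustrative.

```python
# Minimal low-rank + sparse separation sketch for a (pixels x bands) cube.
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0)[:, None] * Vt)

def soft(X, tau):
    """Elementwise soft threshold: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

rng = np.random.default_rng(2)
clean = rng.standard_normal((400, 5)) @ rng.standard_normal((5, 31))  # rank-5 cube
noise = soft(rng.standard_normal(clean.shape), 2.0)                   # sparse spikes
Y = clean + noise

L = np.zeros_like(Y); S = np.zeros_like(Y)
for _ in range(50):                        # alternating proximal steps
    L = svt(Y - S, tau=1.0)                # low-rank (interband) component
    S = soft(Y - L, tau=0.5)               # sparse noise component
print(np.linalg.norm(L - clean) / np.linalg.norm(clean))
```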

  10. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which lead to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in feature extraction of fault information, the theory also confronts inevitable performance degradation due to the fact that relatively weak fault information does not have a sufficiently prominent and sparse representation. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved through the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further ensure sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through a numerical experiment, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental aircraft engine bearing rig.
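
    A minimal sketch of the nonlocal grouping idea follows: similar signal fragments are clustered with kNN, a PCA dictionary is learned per group, and weak components are shrunk. The fragment length, group size, and hard threshold are illustrative assumptions, not the NLSM algorithm itself.

```python
# Sketch of nonlocal self-similarity: kNN grouping + per-group PCA dictionary.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
signal = rng.standard_normal(4096)                 # stand-in vibration signal
frag_len, step = 64, 16
frags = np.stack([signal[i:i + frag_len]
                  for i in range(0, len(signal) - frag_len, step)])

nn = NearestNeighbors(n_neighbors=20).fit(frags)
_, idx = nn.kneighbors(frags[:1])                  # group for one reference fragment
group = frags[idx[0]]                              # (20, 64) stack of similar fragments

# PCA dictionary learned from the group; keep dominant components only.
mean = group.mean(axis=0)
U, s, Vt = np.linalg.svd(group - mean, full_matrices=False)
coefs = (group - mean) @ Vt.T                      # coefficients in the PCA basis
coefs[:, 5:] = 0.0                                 # hard-threshold weak components
denoised_group = coefs @ Vt + mean
print(denoised_group.shape)
```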

  11. Guaranteed Blind Sparse Spikes Deconvolution via Lifting and Convex Optimization

    NASA Astrophysics Data System (ADS)

    Chi, Yuejie

    2016-06-01

    Neural recordings, returns from radars and sonars, and images in astronomy and single-molecule microscopy can be modeled as a linear superposition of a small number of scaled and delayed copies of a band-limited or diffraction-limited point spread function, which is either determined by nature or designed by the user; in other words, we observe the convolution between a point spread function and a sparse spike signal with unknown amplitudes and delays. While it is of great interest to accurately resolve the spike signal from as few samples as possible, the problem is severely ill-posed when the point spread function is not known a priori. This paper proposes a convex optimization framework to simultaneously estimate the point spread function as well as the spike signal, by mildly constraining the point spread function to lie in a known low-dimensional subspace. By applying the lifting trick, we obtain an underdetermined linear system of an ensemble of signals with joint spectral sparsity, to which atomic norm minimization is applied. Under mild randomness assumptions on the low-dimensional subspace as well as a separation condition on the spike signal, we prove that the proposed algorithm, dubbed AtomicLift, is guaranteed to recover the spike signal up to a scaling factor as soon as the number of samples is large enough. The extension of AtomicLift to handle noisy measurements is also discussed. Numerical examples are provided to validate the effectiveness of the proposed approaches.
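
    The lifting trick can be checked numerically: with the PSF constrained to a subspace, g = Bh, circular convolution with the spikes x is linear in the lifted matrix Z = h x^T, as the following sketch verifies (the recovery step via atomic norm minimization is omitted).

```python
# Numerical illustration of the lifting trick behind AtomicLift.
import numpy as np

rng = np.random.default_rng(4)
n, d = 64, 4
B = np.linalg.qr(rng.standard_normal((n, d)))[0]   # known low-dim PSF subspace
h = rng.standard_normal(d)                         # unknown subspace coefficients
g = B @ h                                          # point spread function

x = np.zeros(n)
x[[7, 25, 50]] = [1.0, -0.6, 0.8]                  # sparse spikes (unknown)

# Direct observation: circular convolution y = g (*) x.
y = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(x)))

# Lifted observation: y[m] = sum_k B[(m-k) % n, :] @ Z[:, k] with Z = h x^T,
# i.e., the measurements are *linear* in the lifted unknown Z.
Z = np.outer(h, x)
y_lift = np.array([sum(B[(m - k) % n] @ Z[:, k] for k in range(n))
                   for m in range(n)])
print("lifting verified:", np.allclose(y, y_lift))
```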

  12. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for such matrices.
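
    The following toy sketch, assuming a 1-D Laplacian and an orthonormal Haar basis, illustrates the compression idea: the piecewise smooth inverse becomes sparse in the wavelet domain, so thresholding yields a sparse approximate inverse (the explicit inverse is formed only for illustration).

```python
# Toy wavelet-compressed sparse approximate inverse of a 1-D Laplacian.
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    H = haar_matrix(n // 2)
    top = np.kron(H, [1.0, 1.0]) / np.sqrt(2)                 # scaling rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2)   # detail rows
    return np.vstack([top, bot])

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
W = haar_matrix(n)

C = W @ np.linalg.inv(A) @ W.T                 # inverse in the wavelet basis
C[np.abs(C) < 1e-2 * np.abs(C).max()] = 0.0    # threshold small coefficients
M = W.T @ C @ W                                # sparse approximate inverse

print("kept", np.count_nonzero(C), "of", n * n, "wavelet coefficients")
print("||I - M A|| =", np.linalg.norm(np.eye(n) - M @ A, 2))
```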

  13. Dictionary-based image reconstruction for superresolution in integrated circuit imaging.

    PubMed

    Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim

    2015-06-01

    Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.

  14. Dynamic Stochastic Superresolution of sparsely observed turbulent systems

    SciTech Connect

    Branicki, M.; Majda, A.J.

    2013-05-15

    of the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale in a fashion elucidated here. The DSS technique exploiting a simple Gaussian closure of the nonlinear stochastic forecast model emerges as the most suitable trade-off between the superresolution skill and computational complexity associated with estimating the cross-correlations between the aliasing modes of the sparsely observed turbulent signal. Such techniques offer a promising and efficient approach to constraining unresolved turbulent fluxes through stochastic superparameterization and a subsequent improvement in coarse-grained filtering and prediction of the next generation atmosphere–ocean system (AOS) models.

  15. Dynamic Stochastic Superresolution of sparsely observed turbulent systems

    NASA Astrophysics Data System (ADS)

    Branicki, M.; Majda, A. J.

    2013-05-01

    the turbulent signal and the observation time relative to the decorrelation time of the turbulence at a given spatial scale in a fashion elucidated here. The DSS technique exploiting a simple Gaussian closure of the nonlinear stochastic forecast model emerges as the most suitable trade-off between the superresolution skill and computational complexity associated with estimating the cross-correlations between the aliasing modes of the sparsely observed turbulent signal. Such techniques offer a promising and efficient approach to constraining unresolved turbulent fluxes through stochastic superparameterization and a subsequent improvement in coarse-grained filtering and prediction of the next generation atmosphere-ocean system (AOS) models.

  16. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.

  17. Multisnapshot Sparse Bayesian Learning for DOA

    NASA Astrophysics Data System (ADS)

    Gerstoft, Peter; Mecklenbrauker, Christoph F.; Xenaki, Angeliki; Nannuru, Santosh

    2016-10-01

    The directions of arrival (DOA) of plane waves are estimated from multi-snapshot sensor array data using Sparse Bayesian Learning (SBL). The prior on the source amplitudes is assumed to be independent zero-mean complex Gaussian, with the unknown variances (i.e., the source powers) as hyperparameters. For a complex Gaussian likelihood with the unknown noise variance as hyperparameter, the corresponding Gaussian posterior distribution is derived. For a given number of DOAs, the hyperparameters are automatically selected by maximizing the evidence, which promotes sparse DOA estimates. The SBL scheme for DOA estimation is discussed and evaluated competitively against LASSO ($\ell_1$-regularization), conventional beamforming, and MUSIC.
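
    A hedged sketch of one standard SBL fixed-point update for the variance hyperparameters is shown below, for a hypothetical half-wavelength uniform linear array; the paper's exact update rules and noise-variance estimation may differ.

```python
# Sketch of a common SBL evidence fixed-point update for multi-snapshot DOA.
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_grid, n_snap = 10, 180, 50
angles = np.deg2rad(np.linspace(-90, 90, n_grid))
pos = np.arange(n_sensors)
A = np.exp(1j * np.pi * np.outer(pos, np.sin(angles)))   # half-wavelength ULA

true_idx = [60, 120]                                     # two plane waves
X = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = 0.1 * (rng.standard_normal((n_sensors, n_snap))
           + 1j * rng.standard_normal((n_sensors, n_snap)))
Y = A[:, true_idx] @ X + N
S_y = Y @ Y.conj().T / n_snap                            # sample covariance

gamma = np.ones(n_grid)                                  # source-power hyperparameters
sigma2 = 0.02                                            # assumed known noise power
for _ in range(100):
    Sigma = sigma2 * np.eye(n_sensors) + (A * gamma) @ A.conj().T
    Si_A = np.linalg.solve(Sigma, A)                     # Sigma^{-1} A, columnwise
    numer = np.real(np.sum(Si_A.conj() * (S_y @ Si_A), axis=0))
    denom = np.real(np.sum(A.conj() * Si_A, axis=0))
    gamma *= numer / denom                               # evidence fixed point

peaks = np.argsort(gamma)[-2:]
print("estimated DOA grid indices:", sorted(peaks), "true:", true_idx)
```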

  18. Efficient quantum circuits for arbitrary sparse unitaries

    SciTech Connect

    Jordan, Stephen P.; Wocjan, Pawel

    2009-12-15

    Arbitrary exponentially large unitaries cannot be implemented efficiently by quantum circuits. However, we show that quantum circuits can efficiently implement any unitary provided it has at most polynomially many nonzero entries in any row or column, and these entries are efficiently computable. One can formulate a model of computation based on the composition of sparse unitaries which includes the quantum Turing machine model, the quantum circuit model, anyonic models, permutational quantum computation, and discrete time quantum walks as special cases. Thus, we obtain a simple unified proof that these models are all contained in BQP. Furthermore, our general method for implementing sparse unitaries simplifies several existing quantum algorithms.

  19. Sparse Density Estimation on the Multinomial Manifold.

    PubMed

    Hong, Xia; Gao, Junbin; Chen, Sheng; Zia, Tanveer

    2015-11-01

    A new sparse kernel density estimator is introduced based on the minimum integrated square error criterion for the finite mixture model. Since the constraint on the mixing coefficients of the finite mixture model is on the multinomial manifold, we use the well-known Riemannian trust-region (RTR) algorithm for solving this problem. The first- and second-order Riemannian geometry of the multinomial manifold are derived and utilized in the RTR algorithm. Numerical examples are employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with an accuracy competitive with those of existing kernel density estimators. PMID:25647665

  20. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  1. Sparse Spatio-temporal Inference of Electromagnetic Brain Sources

    NASA Astrophysics Data System (ADS)

    Stahlhut, Carsten; Attias, Hagai T.; Wipf, David; Hansen, Lars K.; Nagarajan, Srikantan S.

    The electromagnetic brain activity measured via MEG (or EEG) can be interpreted as arising from a collection of current dipoles or sources located throughout the cortex. Because the number of candidate locations for these sources is much larger than the number of sensors, source reconstruction involves solving an inverse problem that is severely underdetermined. Bayesian graphical models provide a powerful means of incorporating prior assumptions that narrow the solution space and lead to tractable posterior distributions over the unknown sources given the observed data. In particular, this paper develops a hierarchical, spatio-temporal Bayesian model that accommodates the principled computation of sparse spatial and smooth temporal M/EEG source reconstructions consistent with neurophysiological assumptions in a variety of event-related imaging paradigms. The underlying methodology relies on the notion of automatic relevance determination (ARD) to express the unknown sources via a small collection of spatio-temporal basis functions. Experiments with several data sets provide evidence that the proposed model leads to improved source estimates. The underlying methodology is also well-suited for estimation problems that arise from other brain imaging modalities such as functional or diffusion weighted MRI.

  2. Contrast adaptive total p-norm variation minimization approach to CT reconstruction for artifact reduction in reduced-view brain perfusion CT

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Won; Kim, Jong-Hyo

    2011-03-01

    Perfusion CT (PCT) examinations are being used more frequently for the diagnosis of acute brain diseases such as hemorrhage and infarction, because the functional map images they produce, such as regional cerebral blood flow (rCBF), regional cerebral blood volume (rCBV), and mean transit time (MTT), may provide critical information in the emergency work-up of patient care. However, a typical PCT exam scans the same slices several tens of times after injection of contrast agent, which leads to a much increased radiation dose and inevitably growing concern over radiation-induced cancer risk. Reducing the number of projection views in combination with a TV minimization reconstruction technique is regarded as an option for radiation reduction. However, reconstruction artifacts due to an insufficient number of X-ray projections become problematic, especially when high contrast enhancement signals are present or patient motion has occurred. In this study, we present a novel reconstruction technique using contrast-adaptive TpV minimization that can reduce reconstruction artifacts effectively by using different p-norms for high contrast and low contrast objects. In the proposed method, high contrast components are first reconstructed using thresholded projection data and low p-norm total variation to reflect sparseness in both the projection and reconstruction spaces. Next, the projection data are modified to contain only low contrast objects by creating projection data of the reconstructed high contrast components and subtracting them from the original projection data. Then, the low contrast projection data are reconstructed by using a relatively high p-norm TV minimization technique and are combined with the reconstructed high contrast component images to produce the final reconstructed images. The proposed algorithm was applied to a numerical phantom and a clinical data set of a brain PCT exam, and the resultant images were compared with those using filtered back projection (FBP) and conventional TV

  3. Infrared image recognition based on structure sparse and atomic sparse parallel

    NASA Astrophysics Data System (ADS)

    Wu, Yalu; Li, Ruilong; Xu, Yi; Wang, Liping

    2015-12-01

    The redundancy of an overcomplete dictionary can capture the structural features of an image effectively, achieving an effective representation of the image. However, the commonly used atomic sparse representation disregards the structure of the dictionary and produces unrelated non-zero terms during computation, while structured sparse representation considers the structure of the dictionary but may leave the majority of coefficients within a block non-zero, which can affect recognition efficiency. To address the disadvantages of these two sparse representations, a weighted parallel combination of atomic sparse and structured sparse representation is proposed, and recognition efficiency is improved by adaptive computation of the optimal weights. The atomic sparse and structured sparse representations are computed in parallel, and the optimal weights are calculated adaptively. The method is as follows: training on a small portion of the identification samples, the recognition rate is computed while the weights are varied by a fixed step size subject to a constraint between them. With the recognition rate as the Z axis and the two weight values as the X and Y axes, the resulting points can be plotted in a three-dimensional coordinate system, and the optimal weights are obtained by locating the highest recognition rate. Simulation experiments show that the optimal weights obtained by the adaptive method yield a better recognition rate; weights computed adaptively from a few samples are suitable for parallel recognition and can effectively improve the recognition rate of infrared images.
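
    The adaptive weight search described above can be sketched as a simple grid search; the per-class score matrices below are random stand-ins for the atomic-sparse and structure-sparse similarity scores.

```python
# Sketch of adaptive weight selection for parallel sparse representations.
import numpy as np

rng = np.random.default_rng(6)
n_train, n_classes = 40, 5
labels = rng.integers(0, n_classes, n_train)
score_atomic = rng.random((n_train, n_classes))    # stand-ins for the two
score_struct = rng.random((n_train, n_classes))    # sparse-representation scores

best = (0.0, None)
for w1 in np.arange(0.0, 1.01, 0.05):              # fixed step size, w1 + w2 = 1
    w2 = 1.0 - w1
    fused = w1 * score_atomic + w2 * score_struct
    acc = np.mean(fused.argmax(axis=1) == labels)  # recognition rate
    if acc > best[0]:
        best = (acc, (w1, w2))
print("best recognition rate %.2f at weights %s" % best)
```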

  4. Super-resolution method using sparse regularization for point-spread function recovery

    NASA Astrophysics Data System (ADS)

    Ngolè Mboula, F. M.; Starck, J.-L.; Ronayette, S.; Okumura, K.; Amiaux, J.

    2015-03-01

    In large-scale spatial surveys, such as the forthcoming ESA Euclid mission, images may be undersampled due to the size of the optical sensors. Therefore, one may consider using a super-resolution (SR) method to recover aliased frequencies prior to further analysis. This is particularly relevant for point-source images, which provide direct measurements of the instrument point-spread function (PSF). We introduce SParse Recovery of InsTrumental rEsponse (SPRITE), which is an SR algorithm using a sparse analysis prior. We show that such a prior provides significant improvements over existing methods, especially on low signal-to-noise ratio PSFs.

  5. Social biases determine spatiotemporal sparseness of ciliate mating heuristics.

    PubMed

    Clark, Kevin B

    2012-01-01

    Ciliates become highly social, even displaying animal-like qualities, in the joint presence of aroused conspecifics and nonself mating pheromones. Pheromone detection putatively helps trigger instinctual and learned courtship and dominance displays from which social judgments are made about the availability, compatibility, and fitness representativeness or likelihood of prospective mates and rivals. In earlier studies, I demonstrated the heterotrich Spirostomum ambiguum improves mating competence by effecting preconjugal strategies and inferences in mock social trials via behavioral heuristics built from Hebbian-like associative learning. Heuristics embody serial patterns of socially relevant action that evolve into ordered, topologically invariant computational networks supporting intra- and intermate selection. S. ambiguum employs heuristics to acquire, store, plan, compare, modify, select, and execute sets of mating propaganda. One major adaptive constraint over formation and use of heuristics involves a ciliate's initial subjective bias, responsiveness, or preparedness, as defined by Stevens' Law of subjective stimulus intensity, for perceiving the meaningfulness of mechanical pressures accompanying cell-cell contacts and additional perimating events. This bias controls durations and valences of nonassociative learning, search rates for appropriate mating strategies, potential net reproductive payoffs, levels of social honesty and deception, successful error diagnosis and correction of mating signals, use of insight or analysis to solve mating dilemmas, bioenergetics expenditures, and governance of mating decisions by classical or quantum statistical mechanics. I now report this same social bias also differentially affects the spatiotemporal sparseness, as measured with metric entropy, of ciliate heuristics. Sparseness plays an important role in neural systems through optimizing the specificity, efficiency, and capacity of memory representations. The present

  6. Improving synthesis and analysis prior blind compressed sensing with low-rank constraints for dynamic MRI reconstruction.

    PubMed

    Majumdar, Angshul

    2015-01-01

    In blind compressed sensing (BCS), both the sparsifying dictionary and the sparse coefficients are estimated simultaneously during signal recovery. A recent study adopted the BCS framework for recovering dynamic MRI sequences from undersampled k-space measurements; the results were promising. Previous works in dynamic MRI reconstruction showed that recovery accuracy can be improved by incorporating low-rank penalties into the standard compressed sensing (CS) optimization framework. Our work is motivated by these studies, and we improve upon the basic BCS framework by incorporating low-rank penalties into the optimization problem. The resulting optimization problem has not been solved before; hence we derive a Split Bregman type technique to solve it. Experiments were carried out on real dynamic contrast enhanced MRI sequences. Results show that, with our proposed improvement, the reconstruction accuracy is better than that of BCS and other state-of-the-art dynamic MRI recovery algorithms.

  7. Second SIAM conference on sparse matrices: Abstracts. Final technical report

    SciTech Connect

    1996-12-31

    This report contains abstracts on the following topics: invited and long presentations (IP1 & LP1); sparse matrix reordering & graph theory I; sparse matrix tools & environments I; eigenvalue computations I; iterative methods & acceleration techniques I; applications I; parallel algorithms I; sparse matrix reordering & graph theory II; sparse matrix tools & environments II; least squares & optimization I; iterative methods & acceleration techniques II; applications II; eigenvalue computations II; least squares & optimization II; parallel algorithms II; sparse direct methods; iterative methods & acceleration techniques III; eigenvalue computations III; and sparse matrix reordering & graph theory III.

  8. Facial expression recognition with facial parts based sparse representation classifier

    NASA Astrophysics Data System (ADS)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. Understanding facial expressions is a basic requirement in the development of next generation human computer interaction systems. Research shows that intrinsic facial features always lie in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with a linear combination equality constraint. Experimental results show that sparse representation is efficient for facial expression recognition and that the sparse representation classifier obtains much higher recognition accuracies than the other compared methods.
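
    A minimal sketch of a sparse representation classifier is shown below, using scikit-learn's Lasso as the l1 solver and random stand-in data; classification is by the smallest class-wise reconstruction residual.

```python
# Minimal sparse representation classifier (SRC) sketch.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n_classes, per_class, dim = 4, 15, 100
D = rng.standard_normal((dim, n_classes * per_class))      # training samples as columns
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(n_classes), per_class)

test = D[:, 3] + 0.05 * rng.standard_normal(dim)           # noisy sample of class 0

solver = Lasso(alpha=0.001, max_iter=10000)
x = solver.fit(D, test).coef_                              # sparse code over dictionary

# Class-wise reconstruction residuals; assign the class with the smallest one.
residuals = [np.linalg.norm(test - D[:, labels == c] @ x[labels == c])
             for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))       # expect 0
```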

  9. Improved statistical power with a sparse shape model in detecting an aging effect in the hippocampus and amygdala

    NASA Astrophysics Data System (ADS)

    Chung, Moo K.; Kim, Seung-Goo; Schaefer, Stacey M.; van Reekum, Carien M.; Peschke-Schmitz, Lara; Sutterer, Matthew J.; Davidson, Richard J.

    2014-03-01

    The sparse regression framework has been widely used in medical image processing and analysis. However, it has rarely been used in anatomical studies. We present a sparse shape modeling framework using the Laplace-Beltrami (LB) eigenfunctions of the underlying shape and show its improvement of statistical power. Traditionally, the LB eigenfunctions are used as a basis for intrinsically representing surface shapes as a form of Fourier descriptors. To reduce high frequency noise, only the first few terms are used in the expansion, and higher frequency terms are simply thrown away. However, some lower frequency terms may not necessarily contribute significantly to reconstructing the surfaces. Motivated by this idea, we present an LB-based method that filters out only the significant eigenfunctions by imposing a sparse penalty. For dense anatomical data such as deformation fields on a surface mesh, the sparse regression behaves like a smoothing process, which reduces the error of incorrectly detected false negatives. Hence the statistical power improves. The sparse shape model is then applied in investigating the influence of age on amygdala and hippocampus shapes in the normal population. The advantage of the LB sparse framework is demonstrated by showing the increased statistical power.
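
    The selection idea can be sketched in one dimension, with a cosine basis standing in for the LB eigenfunctions of a surface: an l1 penalty keeps only the significant terms instead of truncating at a fixed frequency. All parameter values are illustrative.

```python
# 1-D stand-in for sparse selection of significant basis terms.
import numpy as np
from sklearn.linear_model import Lasso

t = np.linspace(0, 1, 256)
basis = np.stack([np.cos(np.pi * k * t) for k in range(40)], axis=1)
basis /= np.linalg.norm(basis, axis=0)             # unit-norm "eigenfunctions"

rng = np.random.default_rng(8)
signal = 2 * basis[:, 3] + 0.8 * basis[:, 17] + 0.05 * rng.standard_normal(256)

fit = Lasso(alpha=0.001, max_iter=10000).fit(basis, signal)
kept = np.flatnonzero(fit.coef_)
print("selected basis indices:", kept)             # sparse set, ideally {3, 17}
smooth = basis @ fit.coef_                         # smoothed reconstruction
```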

  10. Reconstruction of Neural Activity from EEG Data Using Dynamic Spatiotemporal Constraints.

    PubMed

    Giraldo-Suarez, E; Martinez-Vargas, J D; Castellanos-Dominguez, G

    2016-11-01

    We present a novel iterative regularized algorithm (IRA) for neural activity reconstruction that explicitly includes spatiotemporal constraints, performing a trade-off between space and time resolutions. For improving the spatial accuracy provided by electroencephalography (EEG) signals, we explore a basis set that describes the smooth, localized areas of potentially active brain regions. In turn, we enhance the time resolution by adding the Markovian assumption for brain activity estimation at each time period. Moreover, to deal with applications that have either distributed or localized neural activity, the spatiotemporal constraints are expressed through ℓ2 and ℓ1 norms, respectively. For the purpose of validation, we estimate the neural reconstruction performance in time and space separately. Experimental testing is carried out on artificial data, simulating stationary and non-stationary EEG signals. Also, validation is accomplished on two real-world databases, one holding Evoked Potentials and another with EEG data of focal epilepsy. Moreover, responses of functional magnetic resonance imaging for the former EEG data had been measured in advance, allowing us to contrast our findings. The obtained results show that the ℓ1-based IRA produces a spatial resolution that is comparable to the one achieved by some widely used sparse-based estimators of brain activity. At the same time, the ℓ2-based IRA outperforms other similar smooth solutions, providing a spatial resolution that is lower than that of the sparse ℓ1-based solution. As a result, the proposed IRA is a promising method for improving the accuracy of brain activity reconstruction. PMID:27354190

  11. Learning Stable Multilevel Dictionaries for Sparse Representations.

    PubMed

    Thiagarajan, Jayaraman J; Ramamurthy, Karthikeyan Natesan; Spanias, Andreas

    2015-09-01

    Sparse representations using learned dictionaries are being increasingly used with success in several data processing and machine learning applications. The increasing need for learning sparse models in large-scale applications motivates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalizability are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries, which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large-scale data, and prove that the proposed learning algorithm is stable and generalizable asymptotically. The algorithm employs a 1-D subspace clustering procedure, the K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed in each level of learning and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed using a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations. We also evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.
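
    A minimal sketch of the K-hyperline clustering step, on which each level of the multilevel dictionary is built, is given below: samples are alternately assigned to the closest line through the origin, and each line's direction is refit as the dominant singular vector of its cluster.

```python
# Sketch of K-hyperline clustering (1-D subspace clustering).
import numpy as np

rng = np.random.default_rng(9)
X = rng.standard_normal((500, 16))                  # training vectors (rows)
K = 8
D = rng.standard_normal((K, 16))
D /= np.linalg.norm(D, axis=1, keepdims=True)       # initial unit directions

for _ in range(20):
    corr = np.abs(X @ D.T)                          # |<x, d_k>| for every pair
    assign = corr.argmax(axis=1)                    # nearest line through origin
    for k in range(K):
        members = X[assign == k]
        if len(members):
            # dominant right singular vector = best-fit line through the origin
            _, _, Vt = np.linalg.svd(members, full_matrices=False)
            D[k] = Vt[0]

# 1-sparse code: one coefficient per sample, along its assigned direction.
coeff = (X @ D.T)[np.arange(len(X)), assign]
print("cluster sizes:", np.bincount(assign, minlength=K))
```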

  12. Algorithms for sparse nonnegative Tucker decompositions.

    PubMed

    Mørup, Morten; Hansen, Lars Kai; Arnfred, Sidse M

    2008-08-01

    There is increasing interest in the analysis of large-scale multiway data. The concept of multiway data refers to arrays of data with more than two dimensions, that is, data taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions for tensors are the Tucker model and the more restricted PARAFAC model. Both models can be viewed as generalizations of the regular factor analysis to data of more than two modalities. Nonnegative matrix factorization (NMF), in conjunction with sparse coding, has recently been given much attention due to its part-based and easily interpretable representation. While NMF has been extended to the PARAFAC model, no such attempt has been made to extend NMF to the Tucker model. However, if the tensor data analyzed are nonnegative, it may well be relevant to consider purely additive (i.e., nonnegative) Tucker decompositions. To reduce ambiguities of this type of decomposition, we develop updates that can impose sparseness in any combination of modalities; hence, we propose algorithms for sparse nonnegative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms for Tucker decompositions when the data and interactions can be considered nonnegative. We further illustrate how sparse coding can help identify what model (PARAFAC or Tucker) is more appropriate
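
    A minimal multiplicative-update sketch in the spirit of a sparse nonnegative Tucker decomposition follows; for brevity only one factor and the core are updated (the other modes are analogous), and the l1 weight lam added to the core's denominator is one illustrative way to impose sparseness.

```python
# Minimal sparse nonnegative Tucker sketch via multiplicative updates.
import numpy as np

rng = np.random.default_rng(10)
I, J, K, P, Q, R = 20, 18, 16, 4, 3, 2
X = rng.random((I, J, K))                         # nonnegative data tensor
A, B, C = rng.random((I, P)), rng.random((J, Q)), rng.random((K, R))
G = rng.random((P, Q, R))                         # nonnegative core
lam, eps = 0.1, 1e-9

def reconstruct(G, A, B, C):
    return np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)

for _ in range(200):
    Xh = reconstruct(G, A, B, C)
    # factor update (mode 1); modes 2 and 3 are analogous
    T = np.einsum('pqr,jq,kr->pjk', G, B, C)
    A *= np.einsum('ijk,pjk->ip', X, T) / (np.einsum('ijk,pjk->ip', Xh, T) + eps)
    Xh = reconstruct(G, A, B, C)
    # sparse core update: lam in the denominator acts as an l1 penalty gradient
    num = np.einsum('ijk,ip,jq,kr->pqr', X, A, B, C)
    den = np.einsum('ijk,ip,jq,kr->pqr', Xh, A, B, C) + lam + eps
    G *= num / den

print("relative error:",
      np.linalg.norm(X - reconstruct(G, A, B, C)) / np.linalg.norm(X))
print("core sparsity:", np.mean(G < 1e-3))
```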