Sample records for transform domain sparsity

  1. Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging

    PubMed Central

    Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
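
    As a concrete reference point for the L+S idea that LASSI extends, the sketch below decomposes a fully sampled (pixels x frames) dynamic sequence into low-rank and sparse parts by alternating proximal steps: singular value thresholding for the low-rank component and soft thresholding for the sparse component. This is a hedged illustration of the baseline L+S model only, not the LASSI algorithm, which additionally learns a spatiotemporal dictionary and handles undersampled k-t measurements; all names and parameter values are illustrative.

      import numpy as np

      def svt(X, tau):
          # Singular value thresholding: proximal operator of the nuclear norm.
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          return (U * np.maximum(s - tau, 0.0)) @ Vt

      def soft(X, tau):
          # Soft thresholding: proximal operator of the l1 norm.
          return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

      def l_plus_s(Y, lam_l=1.0, lam_s=0.01, n_iter=100):
          # Alternating proximal decomposition Y ~ L + S for a fully
          # sampled Casorati matrix Y (rows: pixels, columns: frames).
          L = np.zeros_like(Y)
          S = np.zeros_like(Y)
          for _ in range(n_iter):
              L = svt(Y - S, lam_l)
              S = soft(Y - L, lam_s)
          return L, S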

  2. Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.

    PubMed

    Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-05-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.

  3. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE PAGES

    Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...

    2018-04-06

    Thresholding filters operating in a sparse domain are highly effective in removing random noise under a Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data contaminated by high-amplitude, non-Gaussian noise, i.e., erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter in the presented method, where random noise is described by the mean square, while erratic noise is downweighted through a damped weight. Different from conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Finally, tests with several data sets demonstrate that the proposed denoising filter can successfully attenuate erratic noise without damaging useful signal when compared with conventional denoising approaches based on the LS criterion.
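
    The iterative procedure described above is, in spirit, an iteratively reweighted least-squares (IRLS) scheme: each pass solves a linear LS problem whose row weights damp large (erratic) residuals according to the Huber criterion. The sketch below shows that reweighting step in generic numpy with illustrative parameter names; the paper's actual filter operates on a sparsifying transform of seismic data rather than a plain design matrix.

      import numpy as np

      def irls_huber(A, y, eps=1.0, n_iter=20):
          # Minimize sum_i huber(y_i - (A x)_i) by iteratively reweighted LS.
          x = np.linalg.lstsq(A, y, rcond=None)[0]
          for _ in range(n_iter):
              r = y - A @ x
              w = np.ones_like(r)
              big = np.abs(r) > eps
              w[big] = eps / np.abs(r[big])   # damped weight for erratic residuals
              sw = np.sqrt(w)
              # Weighted LS: rows with large residuals contribute less.
              x = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)[0]
          return x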

  4. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Qiang; Du, Qizhen; Gong, Xufei

    Thresholding filters operating in a sparse domain are highly effective in removing random noise under a Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to data contaminated by high-amplitude, non-Gaussian noise, i.e., erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter in the presented method, where random noise is described by the mean square, while erratic noise is downweighted through a damped weight. Different from conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Finally, tests with several data sets demonstrate that the proposed denoising filter can successfully attenuate erratic noise without damaging useful signal when compared with conventional denoising approaches based on the LS criterion.

  5. Sparsity prediction and application to a new steganographic technique

    NASA Astrophysics Data System (ADS)

    Phillips, David; Noonan, Joseph

    2004-10-01

    Steganography is a technique of embedding information in innocuous data such that only the innocuous data is visible. The wavelet transform lends itself to image steganography because it generates a large number of coefficients representing the information in the image. Altering a small set of these coefficients allows embedding of information (the payload) into an image (the cover) without noticeably altering the original image. We propose a novel, dual-wavelet steganographic technique, using transforms selected such that the transform of the cover image has low sparsity, while the payload transform has high sparsity. Maximizing the sparsity of the payload transform reduces the amount of information embedded in the cover, and minimizing the sparsity of the cover increases the number of locations that can be altered without significantly altering the image. Making this system effective on any given image pair requires a metric indicating the best (maximum sparsity) and worst (minimum sparsity) wavelet transforms to use. This paper develops the first stage of this metric, which can predict, averaged across many wavelet families, which of two images will have the higher sparsity. A prototype implementation of the dual-wavelet system is also developed as a proof of concept.
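
    The abstract does not specify the prediction metric, but a simple stand-in conveys the idea: score each image's wavelet-coefficient sparsity (here via the l1/l2 ratio, smaller meaning sparser) and vote across several wavelet families. A hedged sketch using PyWavelets, with illustrative function names and families:

      import numpy as np
      import pywt

      def l1l2_sparsity(img, wavelet, level=3):
          # Smaller l1/l2 ratio of the coefficients => sparser representation.
          arr, _ = pywt.coeffs_to_array(pywt.wavedec2(img, wavelet, level=level))
          c = arr.ravel()
          return np.abs(c).sum() / np.linalg.norm(c)

      def predict_sparser(img_a, img_b, families=('haar', 'db4', 'sym4', 'coif1')):
          # Majority vote across wavelet families on which image is sparser.
          votes = sum(l1l2_sparsity(img_a, w) < l1l2_sparsity(img_b, w)
                      for w in families)
          return 'A' if votes > len(families) / 2 else 'B'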

  6. Interferometric redatuming by sparse inversion

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Herrmann, Felix J.

    2013-02-01

    Assuming that transmission responses are known between the surface and a particular depth level in the subsurface, seismic sources can be effectively mapped to this level by a process called interferometric redatuming. After redatuming, the obtained wavefields can be used for imaging below this particular depth level. Interferometric redatuming consists of two steps, namely (i) the decomposition of the observed wavefields into downgoing and upgoing constituents and (ii) a multidimensional deconvolution of the upgoing constituents with the downgoing constituents. While this method works in theory, sensitivity to noise and artefacts due to incomplete acquisition require a different formulation. In this letter, we demonstrate the benefits of formulating the two steps that undergird interferometric redatuming in terms of a transform-domain sparsity-promoting program. By exploiting compressibility of seismic wavefields in the curvelet domain, the method not only becomes robust with respect to noise but we are also able to remove certain artefacts while preserving the frequency content. Although we observe improvements when we promote sparsity in the redatumed data space, we expect better results when interferometric redatuming would be combined or integrated with least-squares migration with sparsity promotion in the image space.

  7. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    PubMed

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
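
    Two of the alternating updates have simple closed forms when the dictionary is constrained to be orthogonal, which is what makes the approach fast: sparse coding reduces to thresholding the analysis coefficients, and the dictionary update is an orthogonal Procrustes problem solved by one SVD. A minimal sketch of those two steps (the missing k-space update and the paper's exact thresholding schedule are omitted):

      import numpy as np

      def sparse_code_orthogonal(D, X, tau):
          # With D orthogonal, l1-penalized coding decouples into
          # soft-thresholding of the analysis coefficients D.T @ X.
          C = D.T @ X
          return np.sign(C) * np.maximum(np.abs(C) - tau, 0.0)

      def update_orthogonal_dict(X, A):
          # min_D ||X - D A||_F s.t. D.T D = I is an orthogonal Procrustes
          # problem; its closed-form solution is U V.T from the SVD of X A.T.
          U, _, Vt = np.linalg.svd(X @ A.T)
          return U @ Vt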

  8. Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato

    2018-05-01

    We propose a method to decompose normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in the frequency domain. Thus, uncertainty in frequency appears and it is difficult to estimate the initial phase. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate physical properties of the normal modes.
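
    For context, the core of (exact) dynamic mode decomposition fits a linear operator between successive snapshots and extracts discrete damped-oscillator modes; the sparsity-promoting variant then selects a small subset of these modes through a separate convex step not shown here. A hedged numpy sketch:

      import numpy as np

      def dmd_modes(X, r=None):
          # X: (channels x snapshots). Fit X2 ~ A X1 on snapshot pairs.
          X1, X2 = X[:, :-1], X[:, 1:]
          U, s, Vt = np.linalg.svd(X1, full_matrices=False)
          if r is not None:                            # optional rank truncation
              U, s, Vt = U[:, :r], s[:r], Vt[:r]
          Atil = (U.conj().T @ X2 @ Vt.conj().T) / s   # projected operator
          lam, W = np.linalg.eig(Atil)                 # eigenvalues: frequency/damping
          Phi = (X2 @ Vt.conj().T / s) @ W             # exact DMD modes
          return lam, Phi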

  9. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors of the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from a wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms solely using a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm, applying the learning strategy, performs better than dual-domain algorithms without a learned regularization model.

  10. Deblending using an improved apex-shifted hyperbolic radon transform based on the Stolt migration operator

    NASA Astrophysics Data System (ADS)

    Gong, Xiangbo; Feng, Fei; Jiao, Xuming; Wang, Shengchao

    2017-10-01

    Simultaneous seismic source separation, also known as deblending, is an essential process for blended acquisition. With the assumption that the blending noise is coherent in the common shot domain but incoherent in other domains, traditional deblending methods are commonly performed in the common receiver, common midpoint or common offset domain. In this paper, we propose an improved apex-shifted hyperbolic Radon transform (ASHRT) to deblend directly in the common shot domain. A time-axis stretch strategy named Stolt-stretch is introduced to overcome the limitation of the constant velocity assumption of Stolt-based operators. To improve the sparsity in the transform domain, a total variation (TV) norm inversion is implemented to enhance the energy convergence in the Radon panel. Because of the highly efficient Stolt migration and demigration operators in the frequency-wavenumber domain, as well as the flexible source-receiver geometry conditions, this approach is well suited for quality control (QC) during streamer acquisition. Synthetic and field examples demonstrate that the proposed method is robust and efficient.

  11. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    PubMed

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

    Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness implies that solutions to this problem are non-unique, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials be sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed with this method are compared to those from existing methods, and validated against actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing appears to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.

  12. Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.

    PubMed

    Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin

    2014-09-01

    l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses a two-dimensional (2D) wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency.
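
    A Walsh-Hadamard transform is separable, so a 3D version can be applied axis by axis with the orthonormal Hadamard matrix. A minimal sketch, assuming power-of-two dimensions and natural (Hadamard) rather than sequency (Walsh) coefficient ordering:

      import numpy as np
      from scipy.linalg import hadamard

      def walsh3d(stack):
          # Separable 3D transform of an (nx, ny, ncoils) stack: apply the
          # orthonormal Hadamard matrix along each axis in turn.
          out = stack.astype(float)
          for axis in range(out.ndim):
              H = hadamard(out.shape[axis]) / np.sqrt(out.shape[axis])
              out = np.moveaxis(
                  np.tensordot(H, np.moveaxis(out, axis, 0), axes=1), 0, axis)
          return out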

  13. Temporal flicker reduction and denoising in video using sparse directional transforms

    NASA Astrophysics Data System (ADS)

    Kanumuri, Sandeep; Guleryuz, Onur G.; Civanlar, M. Reha; Fujibayashi, Akira; Boon, Choong S.

    2008-08-01

    The bulk of the video content available today over the Internet and over mobile networks suffers from many imperfections caused during acquisition and transmission. In the case of user-generated content, which is typically produced with inexpensive equipment, these imperfections manifest in various ways through noise, temporal flicker and blurring, just to name a few. Imperfections caused by compression noise and temporal flicker are present in both studio-produced and user-generated video content transmitted at low bit-rates. In this paper, we introduce an algorithm designed to reduce temporal flicker and noise in video sequences. The algorithm takes advantage of the sparse nature of video signals in an appropriate transform domain that is chosen adaptively based on local signal statistics. When the signal corresponds to a sparse representation in this transform domain, flicker and noise, which are spread over the entire domain, can be reduced easily by enforcing sparsity. Our results show that the proposed algorithm reduces flicker and noise significantly and enables better presentation of compressed videos.
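
    The principle is that a signal with a sparse transform-domain representation concentrates in a few large coefficients, while noise and flicker spread thinly across all of them, so thresholding removes the latter. The paper selects directional transforms adaptively from local statistics; the sketch below substitutes a fixed 2D DCT purely for illustration:

      import numpy as np
      from scipy.fft import dctn, idctn

      def transform_denoise(frame, thresh):
          # Enforce sparsity: keep only transform coefficients above a
          # threshold, then invert the transform.
          C = dctn(frame, norm='ortho')
          C[np.abs(C) < thresh] = 0.0
          return idctn(C, norm='ortho')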

  14. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy to provide a reliable wavefield when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion seeks sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously, but its efficiency must be improved to suit large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is utilized for solving the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computation speed and is a viable alternative for large-scale computation.

  15. Separation and imaging diffractions by a sparsity-promoting model and subspace trust-region algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Wang, Chengxiang; Geng, Weifeng

    2017-03-01

    Small-scale geologic inhomogeneities or discontinuities, such as tiny faults, cavities or fractures, generally have spatial scales comparable to or even smaller than the seismic wavelength. The seismic responses of these objects are therefore encoded in diffractions, and high-resolution imaging becomes possible if the diffractions can be imaged appropriately. As the amplitudes of reflections can be several orders of magnitude larger than those of diffractions, one of the key problems of diffraction imaging is to suppress reflections while preserving diffractions. A sparsity-promoting method for separating diffractions in the common-offset domain is proposed that uses the Kirchhoff integral formula to enforce the sparsity of diffractions and the linear Radon transform to formulate reflections. A subspace trust-region algorithm that can provide globally convergent solutions is employed for solving this large-scale computation problem. The method not only allows for separation of diffractions in the case of interfering events but also ensures a high fidelity of the separated diffractions. A numerical experiment and a field application demonstrate the good performance of the proposed method in imaging the small-scale geological features related to the migration channels and storage spaces of carbonate reservoirs.

  16. Novel Spectral Representations and Sparsity-Driven Algorithms for Shape Modeling and Analysis

    NASA Astrophysics Data System (ADS)

    Zhong, Ming

    In this dissertation, we focus on extending classical spectral shape analysis by incorporating spectral graph wavelets and sparsity-seeking algorithms. Defined with the graph Laplacian eigenbasis, spectral graph wavelets (SGWs) are localized both in the vertex domain and the graph spectral domain, and thus are very effective in describing local geometry. With a rich dictionary of elementary vectors and suitable sparsity constraints, a real-life signal can often be well approximated by a very sparse coefficient representation. The many successful applications of sparse signal representation in computer vision and image processing inspire us to explore the idea of employing sparse modeling techniques with dictionaries of spectral bases to solve various shape modeling problems. Conventional spectral mesh compression uses the eigenfunctions of the mesh Laplacian as shape bases, which are highly inefficient in representing local geometry. To ameliorate this, we advocate an innovative approach to 3D mesh compression using spectral graph wavelets as a dictionary to encode mesh geometry. The spectral graph wavelets are locally defined at individual vertices and can better capture local shape information than the Laplacian eigenbasis. The multi-scale SGWs form a redundant dictionary as a shape basis, so we formulate the compression of 3D shape as a sparse approximation problem that can be readily handled by greedy pursuit algorithms. Surface inpainting refers to the completion or recovery of missing shape geometry based on the shape information that is currently available. We devise a new surface inpainting algorithm founded upon the theory and techniques of sparse signal recovery. Instead of estimating the missing geometry directly, our method finds a low-dimensional representation that describes the entire original shape. More specifically, we find that, for many shapes, the vertex coordinate function can be well approximated by a very sparse coefficient representation with respect to the dictionary comprising its Laplacian eigenbasis, and it is then possible to recover this sparse representation from partial measurements of the original shape. Taking advantage of the sparsity cue, we advocate a novel variational approach for surface inpainting, integrating data fidelity constraints on the shape domain with coefficient sparsity constraints on the transformed domain. Because of the powerful properties of the Laplacian eigenbasis, the inpainting results of our method tend to be globally coherent with the remaining shape. Informative and discriminative feature descriptors are vital in qualitative and quantitative shape analysis for a large variety of graphics applications. We advocate novel strategies to define generalized, user-specified features on shapes. Our new region descriptors are primarily built upon the coefficients of spectral graph wavelets that are both multi-scale and multi-level in nature, consisting of both local and global information. Based on our novel spectral feature descriptor, we develop a user-specified feature detection framework and a tensor-based shape matching algorithm. Through various experiments, we demonstrate the competitive performance of our proposed methods and the great potential of spectral bases and sparsity-driven methods for shape modeling.
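
    The mesh-compression formulation above amounts to sparse approximation over a redundant dictionary, which greedy pursuit handles directly. A hedged sketch using scikit-learn's orthogonal matching pursuit, where D is assumed to be a precomputed (n_vertices x n_atoms) spectral-graph-wavelet dictionary and coords the (n_vertices x 3) vertex coordinate function:

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def compress_coordinates(D, coords, n_atoms):
          # Greedy sparse approximation: only the selected atoms and their
          # coefficients need to be stored to reconstruct the geometry.
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_atoms,
                                          fit_intercept=False)
          omp.fit(D, coords)
          return omp.coef_            # shape (3, n_atoms), mostly zeros

      # Reconstruction sketch: coords_hat = D @ coef.T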

  17. Real-Valued Covariance Vector Sparsity-Inducing DOA Estimation for Monostatic MIMO Radar

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Jing

    2015-01-01

    In this paper, a real-valued covariance vector sparsity-inducing method for direction of arrival (DOA) estimation is proposed in monostatic multiple-input multiple-output (MIMO) radar. Exploiting the special configuration of monostatic MIMO radar, low-dimensional real-valued received data can be obtained by using the reduced-dimensional transformation and unitary transformation technique. Then, based on the Khatri–Rao product, a real-valued sparse representation framework of the covariance vector is formulated to estimate DOA. Compared to the existing sparsity-inducing DOA estimation methods, the proposed method provides better angle estimation performance and lower computational complexity. Simulation results verify the effectiveness and advantage of the proposed method. PMID:26569241

  18. Real-Valued Covariance Vector Sparsity-Inducing DOA Estimation for Monostatic MIMO Radar.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Jing

    2015-11-10

    In this paper, a real-valued covariance vector sparsity-inducing method for direction of arrival (DOA) estimation is proposed in monostatic multiple-input multiple-output (MIMO) radar. Exploiting the special configuration of monostatic MIMO radar, low-dimensional real-valued received data can be obtained by using the reduced-dimensional transformation and unitary transformation technique. Then, based on the Khatri-Rao product, a real-valued sparse representation framework of the covariance vector is formulated to estimate DOA. Compared to the existing sparsity-inducing DOA estimation methods, the proposed method provides better angle estimation performance and lower computational complexity. Simulation results verify the effectiveness and advantage of the proposed method.

  19. Inter-class sparsity based discriminative least square regression.

    PubMed

    Wen, Jie; Xu, Yong; Li, Zuoyong; Ma, Zhongli; Xu, Yuanrong

    2018-06-01

    Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first is that it only focuses on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second is that the label matrix used, i.e., the zero-one label matrix, is inappropriate for classification. To solve these problems and improve performance, this paper presents a novel method, inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method pursues a common sparsity structure for the transformed samples of each class. To this end, an inter-class sparsity constraint is introduced into the least square regression model such that the margins of samples from the same class can be greatly reduced while those of samples from different classes can be enlarged. In addition, an error term with a row-sparsity constraint is introduced to relax the strict zero-one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression, and thus it has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification.

  20. Truncated feature representation for automatic target detection using transformed data-based decomposition

    NASA Astrophysics Data System (ADS)

    Riasati, Vahid R.

    2016-05-01

    In this work, the data covariance matrix is diagonalized to provide an orthogonal basis set using the eigenvectors of the data. The eigenvector decomposition of the data is transformed and filtered in the transform domain to truncate the data to robust features related to a specified set of targets. These truncated eigenfeatures are then combined and reconstructed for use in a composite filter, which is consequently utilized for automatic target detection of the same class of targets. The results of testing the current technique are evaluated using the peak-correlation and peak-correlation-energy metrics and are presented in this work. The inverse-transformed eigenbases of the current technique may be thought of as injected sparsity that minimizes the data needed to represent the skeletal structure information associated with the set of targets under consideration.
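
    The first step described, diagonalizing the data covariance and truncating to the dominant eigenvectors, is essentially a PCA-style reduction. A minimal sketch of that step, with the transform-domain filtering of the eigenvectors omitted:

      import numpy as np

      def truncated_eigen_features(X, k):
          # X: (samples x variables). Keep the k dominant eigenvectors of
          # the data covariance as an orthogonal basis, then project.
          C = np.cov(X, rowvar=False)
          w, V = np.linalg.eigh(C)          # eigenvalues in ascending order
          basis = V[:, ::-1][:, :k]         # k leading eigenvectors
          return (X - X.mean(axis=0)) @ basis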

  21. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.

  22. Double temporal sparsity based accelerated reconstruction of compressively sensed resting-state fMRI.

    PubMed

    Aggarwal, Priya; Gupta, Anubha

    2017-12-01

    A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with a method based on l1-l1 norm constraints, wherein we impose the first l1-norm sparsity on the voxel time series (temporal data) in the transformed domain and the second l1-norm sparsity on the successive differences of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of reproducibility of brain Resting State Networks (RSNs), demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to existing methods, the DTSR method shows promising potential, with an improvement of 10-12 dB in PSNR at acceleration factors up to 3.5 on resting-state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs.
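
    In the spirit of the description above, the l1-l1 objective can be written with one sparsity term on the transformed voxel time series and one on its successive temporal differences. A hedged sketch in which the undersampled Fourier operator Fu and sparsifying transform Psi are passed in as callables (both illustrative assumptions, since the abstract does not fix them):

      import numpy as np

      def dtsr_objective(x, y, Fu, Psi, lam1, lam2):
          # x: voxel time series (voxels x time); y: undersampled k-space data.
          fidelity = np.linalg.norm(Fu(x) - y) ** 2
          s1 = np.abs(Psi(x)).sum()                      # first l1 term
          s2 = np.abs(np.diff(x, axis=-1)).sum()         # second l1 term on
          return fidelity + lam1 * s1 + lam2 * s2        # temporal differences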

  23. Dictionary learning and time sparsity in dynamic MRI.

    PubMed

    Caballero, Jose; Rueckert, Daniel; Hajnal, Joseph V

    2012-01-01

    Sparse representation methods have been shown to tackle adequately the inherent speed limits of magnetic resonance imaging (MRI) acquisition. Recently, learning-based techniques have been used to further accelerate the acquisition of 2D MRI. The extension of such algorithms to dynamic MRI (dMRI) requires careful examination of the signal sparsity distribution among the different dimensions of the data. Notably, the potential of temporal gradient (TG) sparsity in dMRI has not yet been explored. In this paper, a novel method for the acceleration of cardiac dMRI is presented which investigates the potential benefits of enforcing sparsity constraints on patch-based learned dictionaries and TG at the same time. We show that an algorithm exploiting sparsity on these two domains can outperform previous sparse reconstruction techniques.

  24. Compressed-Sensing Reconstruction Based on Block Sparse Bayesian Learning in Bearing-Condition Monitoring

    PubMed Central

    Sun, Jiedi; Yu, Yang; Wen, Jiangtao

    2017-01-01

    Remote monitoring of bearing conditions using wireless sensor networks (WSNs) is a developing trend in the industrial field. In complicated industrial environments, WSNs face three main constraints: low energy, limited memory, and low computational capability. Conventional data-compression methods, which concentrate on data compression only, cannot overcome these limitations. Aiming at these problems, this paper proposes a compressed data acquisition and reconstruction scheme based on Compressed Sensing (CS), a novel signal-processing technique, and applies it to bearing-condition monitoring via WSN. The compressed data acquisition is realized by projection transformation and can greatly reduce the data volume that the nodes need to process and transmit. The reconstruction of the original signals is achieved in the host computer by complicated algorithms. The bearing vibration signals not only exhibit the sparsity property, but also have specific structures. This paper introduces the block sparse Bayesian learning (BSBL) algorithm, which utilizes the block property and inherent structures of signals to reconstruct the CS sparsity coefficients of transform domains and further recover the original signals. By using BSBL, CS reconstruction can be improved remarkably. Experiments and analyses show that the BSBL method has good performance and is suitable for practical bearing-condition monitoring. PMID:28635623
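
    Node-side compressed acquisition by projection transformation is the lightweight half of the scheme: an n-sample frame is reduced to m << n random measurements, and only these (plus the generator seed) need be transmitted, leaving the heavy BSBL reconstruction to the host. A minimal sketch under those assumptions:

      import numpy as np

      def cs_acquire(x, m, seed=0):
          # Project an n-sample vibration frame onto m random measurement
          # vectors; reconstruction (e.g., BSBL) runs on the host computer.
          rng = np.random.default_rng(seed)
          Phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
          return Phi @ x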

  25. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    PubMed

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g., spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which a full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to direct MRI restoration via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems, and the sparsity implicit in MRIs can be exploited for MRI reconstruction after transformation from significantly undersampled k-space. The challenge, however, is that incoherent artifacts resulting from the random undersampling add noise-like interference to the image with a sparse representation. Recovery algorithms in the literature are not capable of fully removing these artifacts, so it is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It is illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.

  26. A hybrid approach to fault diagnosis of roller bearings under variable speed conditions

    NASA Astrophysics Data System (ADS)

    Wang, Yanxue; Yang, Lin; Xiang, Jiawei; Yang, Jianwei; He, Shuilong

    2017-12-01

    Rolling element bearings are one of the main elements in rotating machines, whose failure may lead to a fatal breakdown and significant economic losses. Conventional vibration-based diagnostic methods are based on a stationarity assumption and thus are not applicable to the diagnosis of bearings working under varying speeds, a constraint that significantly limits their industrial application. A hybrid approach to fault diagnosis of roller bearings under variable speed conditions is proposed in this work, based on computed order tracking (COT) and variational mode decomposition (VMD)-based time-frequency representation (VTFR). COT is utilized to resample the non-stationary vibration signal in the angular domain, while VMD is used to decompose the resampled signal into a number of band-limited intrinsic mode functions (BLIMFs). A VTFR is then constructed based on the estimated instantaneous frequency and instantaneous amplitude of each BLIMF. Moreover, the Gini index and time-frequency kurtosis are proposed to quantitatively measure the sparsity and concentration of the time-frequency representation, respectively. The effectiveness of the VTFR for extracting nonlinear components has been verified on a bat signal. Results of this numerical simulation also show that the sparsity and concentration of the VTFR are better than those of the short-time Fourier transform, continuous wavelet transform, Hilbert-Huang transform and Wigner-Ville distribution techniques. Several experimental results further demonstrate that the proposed method can reliably detect bearing faults under variable speed conditions.

  27. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    PubMed Central

    Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng

    2016-01-01

    Traditional approaches for condition monitoring of roller bearings almost always operate under Shannon sampling theorem conditions, leading to a big-data problem. Compressed sensing (CS) theory provides a new solution to the big-data problem. However, vibration signals are insufficiently sparse, and it is difficult to achieve sparsity using conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote sparsity when applying CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoting method based on the tunable Q-factor wavelet transform is utilized in this work, decomposing the analyzed signals into transient impact components and high-oscillation components. The former become sparser than the raw signals, with noise eliminated, whereas the latter contain the noise. Thus, the decomposed transient impact components replace the original signals for analysis. CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed once the components at the frequencies of interest are detected, and the fault diagnosis can be achieved during the reconstruction procedure. Application cases prove that CS theory, assisted by the tunable Q-factor wavelet transform, can successfully extract the fault features from the compressed samples. PMID:27657063

  28. Image denoising by sparse 3-D transform-domain collaborative filtering.

    PubMed

    Dabov, Kostadin; Foi, Alessandro; Katkovnik, Vladimir; Egiazarian, Karen

    2007-08-01

    We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it using the three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
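
    The three successive steps of collaborative filtering described above, 3-D transform, shrinkage of the spectrum, and inverse 3-D transform, are easy to state in code for a single group of matched blocks. A hedged sketch using a separable 3-D DCT in place of the paper's transforms, with block matching and overlap aggregation omitted:

      import numpy as np
      from scipy.fft import dctn, idctn

      def filter_group(group, sigma, lam=2.7):
          # group: (n_blocks, bsize, bsize) stack of similar image blocks.
          G = dctn(group, norm='ortho')       # 3-D transform of the group
          G[np.abs(G) < lam * sigma] = 0.0    # hard shrinkage of the spectrum
          return idctn(G, norm='ortho')       # jointly filtered block estimates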

  29. 2D deblending using the multi-scale shaping scheme

    NASA Astrophysics Data System (ADS)

    Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan

    2018-01-01

    Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, signal is coherent whereas interference is incoherent in some domains (e.g., the common receiver domain and common offset domain). Due to this difference in sparsity, the coefficients of signal and interference fall in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity in order to separate the blended record. In the domains where signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while in the domains where interference focuses, it suppresses the coefficients representing interference. Because the interference is suppressed markedly at each iteration, the constraints of the multi-scale shaping operator in all scale domains are kept weak to guarantee the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme against the traditional global shaping scheme using two synthetic examples and one field data example.

  30. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression

    PubMed Central

    Jiang, Feng; Han, Ji-zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods. PMID:29623088
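
    Since LWLR does the final prediction work, a compact reminder of how it operates may help: each query point gets its own weighted least-squares fit, with Gaussian-kernel weights centered at the query. A generic sketch (the feature construction from the auxiliary domains is not shown):

      import numpy as np

      def lwlr_predict(Xtr, ytr, xq, tau=1.0):
          # Per-query weighted least squares with Gaussian kernel weights.
          w = np.exp(-np.sum((Xtr - xq) ** 2, axis=1) / (2.0 * tau ** 2))
          Xb = np.hstack([Xtr, np.ones((len(Xtr), 1))])   # add intercept column
          sw = np.sqrt(w)
          theta, *_ = np.linalg.lstsq(Xb * sw[:, None], sw * ytr, rcond=None)
          return np.append(xq, 1.0) @ theta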

  31. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression.

    PubMed

    Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain. However, previous works cannot effectively evaluate the significance of different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in different domains and use these features to represent different auxiliary domains. Thus the weight computation across different domains can be converted into a weight computation across different features. Then we combine the features in the target domain and in the auxiliary domains together and convert the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments to show that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, as compared to many state-of-the-art single-domain or cross-domain CF methods.

  32. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial for preventing unexpected accidents and reducing economic loss. In past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called the empirical wavelet transform has attracted much attention from readers and engineers, and its applications to bearing fault diagnosis have been reported. The main problem of the empirical wavelet transform is that the Fourier segments it requires are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which means that the Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, a sparsity guided empirical wavelet transform is proposed to automatically establish the Fourier segments required in the empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover the required Fourier segments and reveal single and multiple railway axle bearing defects. Besides, comparisons with three popular signal processing methods, including ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation, are conducted to highlight the superiority of the proposed method.
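
    For contrast with the sparsity-guided rule, the maxima-driven segmentation the paper improves on can be sketched as: pick dominant peaks of the magnitude spectrum and place segment boundaries at the spectral minima between them. A naive, hedged illustration only (the paper's own boundary selection differs):

      import numpy as np
      from scipy.signal import find_peaks

      def fourier_boundaries(x, fs, n_peaks):
          # Boundaries at the lowest spectrum point between dominant peaks.
          spec = np.abs(np.fft.rfft(x))
          freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
          peaks, _ = find_peaks(spec)
          top = np.sort(peaks[np.argsort(spec[peaks])[-n_peaks:]])
          return [freqs[p + np.argmin(spec[p:q])]
                  for p, q in zip(top[:-1], top[1:])]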

  33. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    PubMed Central

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to get an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR)-based approach, and the dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. PMID:29250134

  34. Image Fusion of CT and MR with Sparse Representation in NSST Domain.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to get an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR)-based approach, and the dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation.

  35. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction suffers. This paper first describes block-based compressed sensing (BCS) with the conventional selection of compressive measurements. An estimation method for the sparsity of multi-view videos is then proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of video frames effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
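
    The sparsity estimator described above has a direct implementation: take the 2D DWT, normalize and sort the coefficient energies in descending order, and report the proportion of coefficients needed to reach the energy threshold. A sketch with PyWavelets, using an illustrative wavelet and decomposition level:

      import numpy as np
      import pywt

      def estimate_sparsity(frame, energy_thresh=0.99, wavelet='db4', level=3):
          # Proportion of dominant DWT coefficients holding energy_thresh
          # of the total energy; this drives the number of measurements.
          arr, _ = pywt.coeffs_to_array(pywt.wavedec2(frame, wavelet, level=level))
          e = np.sort(arr.ravel() ** 2)[::-1]
          k = np.searchsorted(np.cumsum(e) / e.sum(), energy_thresh) + 1
          return k / e.size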

  36. Unified commutation-pruning technique for efficient computation of composite DFTs

    NASA Astrophysics Data System (ADS)

    Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.

    2015-12-01

    An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ some pruning methods without any commutations, which prevents them from attaining the potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computations of pruned DFTs adapted for variable composite lengths of non-sparse input-output data. The first modality is an implementation of the direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs a new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in time or space (DIT) in the data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. Based on the treatment of a combinational-type hypothesis-testing optimization problem over the preferable allocations between all feasible commuting-pruning modalities, we have found the globally optimal solution to the pruning problem, one that always requires fewer or, at most, the same number of arithmetic operations than any other feasible modality; in this sense, DFTCOMM outperforms the competing pruning techniques reported in the literature. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We show that, in sensing scenarios with a sparse or non-sparse data Fourier spectrum, the DFTCOMM technique manifests robustness against such model uncertainties in the sense of insensitivity to sparsity/non-sparsity restrictions and the variability of the operating parameters.

  17. Sparsity-based multi-height phase recovery in holographic microscopy

    NASA Astrophysics Data System (ADS)

    Rivenson, Yair; Wu, Yichen; Wang, Hongda; Zhang, Yibo; Feizi, Alborz; Ozcan, Aydogan

    2016-11-01

    High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of the specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in the wavelet domain to achieve at least a 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples, with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.

  18. Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform

    PubMed Central

    Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart

    2014-01-01

    Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within the spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieves reconstruction accuracy comparable to the low-rank matrix recovery methods and outperforms the conventional sparse recovery methods. PMID:24901331
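
    A minimal HOSVD sketch in the spirit of the abstract: unfold the dynamic image tensor along each mode, take the left singular vectors as factor matrices, and form the core tensor whose energy is compacted (and hence compressible). Truncation ranks, which a practical CS scheme would choose carefully, are omitted here.

        # Illustrative HOSVD via mode unfoldings (no truncation).
        import numpy as np

        def unfold(t, mode):
            return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

        def hosvd(t):
            factors = [np.linalg.svd(unfold(t, m), full_matrices=False)[0]
                       for m in range(t.ndim)]
            core = t
            for m, u in enumerate(factors):     # project onto each mode's basis
                core = np.moveaxis(np.tensordot(u.T, core, axes=(1, m)), 0, m)
            return core, factors

        x = np.random.rand(32, 32, 10)          # toy x-y-time image stack
        core, factors = hosvd(x)
        print(core.shape)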

  19. Bayesian X-ray computed tomography using a three-level hierarchical prior model

    NASA Astrophysics Data System (ADS)

    Wang, Li; Mohammad-Djafari, Ali; Gac, Nicolas

    2017-06-01

    In recent decades X-ray Computed Tomography (CT) image reconstruction has been extensively developed in both the medical and industrial domains. In this paper, we propose using the Bayesian inference approach with a new hierarchical prior model. In the proposed model, a generalised Student-t distribution is used to enforce sparsity of the Haar transform of the image. Comparisons with some state-of-the-art methods are presented. It is shown that the proposed model enforces the sparsity of the image representation, so that image edges are preserved. Simulation results are also provided to demonstrate the effectiveness of the new hierarchical model for reconstruction with fewer projections.

  20. Design of Warped Stretch Transform

    PubMed Central

    Mahjoubfar, Ata; Chen, Claire Lifan; Jalali, Bahram

    2015-01-01

    Time stretch dispersive Fourier transform enables real-time spectroscopy at repetition rates of a million scans per second. It has empowered high-speed real-time instruments with record performance, ranging from analog-to-digital converters to cameras and single-shot rare-phenomena capture equipment. Its warped stretch variant, realized with nonlinear group delay dispersion, offers variable-rate spectral domain sampling, as well as the ability to engineer the time-bandwidth product of the signal's envelope to match that of the data acquisition systems. To be able to reconstruct the signal with low loss, the spectrotemporal distribution of the signal spectrum needs to be sparse. Here, for the first time, we show how to design the kernel of the transform and, specifically, the nonlinear group delay profile dictated by the signal sparsity. Such a kernel leads to smart stretching with nonuniform spectral resolution, having direct utility in improving the data acquisition rate, real-time data compression, and the accuracy of ultrafast data capture. We also discuss the application of the warped stretch transform in spectrotemporal analysis of continuous-time signals. PMID:26602458

  1. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    NASA Astrophysics Data System (ADS)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation frame for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form some sparse and redundant representations which promise to facilitate image reconstructions. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction for an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method showing promising capabilities over conventional regularization.
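
    As a loose illustration of the block-matching step underlying BMSR-style regularizers, the sketch below gathers, for a reference patch, the most similar patches in a local search window; a sparsifying transform would then be applied to the stacked group. Patch size, window size, and the similarity metric are assumptions for illustration only.

        # Hypothetical block matching: group patches similar to a reference patch.
        import numpy as np

        def match_blocks(img, ref_xy, patch=8, window=20, n_matches=16):
            x0, y0 = ref_xy
            ref = img[x0:x0 + patch, y0:y0 + patch]
            cands = []
            for i in range(max(0, x0 - window), min(img.shape[0] - patch, x0 + window)):
                for j in range(max(0, y0 - window), min(img.shape[1] - patch, y0 + window)):
                    blk = img[i:i + patch, j:j + patch]
                    cands.append((np.sum((blk - ref) ** 2), i, j))
            cands.sort(key=lambda c: c[0])      # most similar blocks first
            return np.stack([img[i:i + patch, j:j + patch]
                             for _, i, j in cands[:n_matches]])

        img = np.random.rand(64, 64)
        print(match_blocks(img, (24, 24)).shape)   # (16, 8, 8) stacked group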

  2. Experiments on sparsity assisted phase retrieval of phase objects

    NASA Astrophysics Data System (ADS)

    Gaur, Charu; Lochab, Priyanka; Khare, Kedar

    2017-05-01

    Iterative phase retrieval algorithms such as the Gerchberg-Saxton method and the Fienup hybrid input-output method are known to suffer from the twin image stagnation problem, particularly when the solution to be recovered is complex valued and has centrosymmetric support. Recently we showed that the twin image stagnation problem can be addressed using image sparsity ideas (Gaur et al 2015 J. Opt. Soc. Am. A 32 1922). In this work we test this sparsity assisted phase retrieval method with experimental single shot Fourier transform intensity data frames corresponding to phase objects displayed on a spatial light modulator. The standard iterative phase retrieval algorithms are combined with an image sparsity based penalty in an adaptive manner. Illustrations for both binary and continuous phase objects are provided. It is observed that image sparsity constraint has an important role to play in obtaining meaningful phase recovery without encountering the well-known stagnation problems. The results are valuable for enabling single shot coherent diffraction imaging of phase objects for applications involving illumination wavelengths over a wide range of electromagnetic spectrum.

  3. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. Work on the primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
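
    For reference, the soft-thresholding operation mentioned above is the proximal operator of the l1 penalty. A minimal sketch applied to wavelet coefficients, with an illustrative wavelet and threshold:

        # Soft-thresholding of wavelet coefficients (illustrative parameters).
        import numpy as np
        import pywt

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        img = np.random.rand(128, 128)
        coeffs = pywt.wavedec2(img, 'haar', level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = soft(arr, 0.05)                    # shrink small coefficients toward zero
        rec = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format='wavedec2'), 'haar')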

  4. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-04-14

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.

  5. Analysis of the IJCNN 2011 UTL Challenge

    DTIC Science & Technology

    2012-01-13

    Large datasets from various application domains were made available (http://clopinet.com/ul): handwriting recognition, image recognition, video processing, text processing, and ecology. The evaluation sets consist of 4096 examples each. Excerpt of the dataset table:

        Dataset    Domain       Features  Sparsity  Devel.  Transf.
        AVICENNA   Handwriting  120       0%        150205  50000
        HARRY      Video        5000      98.1%     ...     ...

  6. Exploiting the wavelet structure in compressed sensing MRI.

    PubMed

    Chen, Chen; Huang, Junzhou

    2014-12-01

    Sparsity has been widely utilized in magnetic resonance imaging (MRI) to reduce k-space sampling. According to structured sparsity theories, fewer measurements are required for tree-sparse data than for data with only standard sparsity. Intuitively, more accurate image reconstruction can be achieved with the same number of measurements by exploiting the wavelet tree structure in MRI. A novel algorithm is proposed in this article to reconstruct MR images from undersampled k-space data. In contrast to conventional compressed sensing MRI (CS-MRI), which relies only on the sparsity of MR images in the wavelet or gradient domain, we exploit the wavelet tree structure to improve CS-MRI. This tree-based CS-MRI problem is decomposed into three simpler subproblems, and then each subproblem can be efficiently solved by an iterative scheme. Simulations and in vivo experiments demonstrate the significant improvement of the proposed method over conventional CS-MRI algorithms, and its feasibility on MR data compared to existing tree-based imaging algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing works with non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of image reconstruction suffers. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements was described. Then an estimation method for the sparsity of an image was proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients were energy-normalized and sorted in descending order, and the sparsity of the image was obtained as the proportion of dominant coefficients. Finally, the simulation result shows that the method can estimate the sparsity of an image effectively and provides a practical basis for selecting the number of compressive observations. The result also shows that, since the number of observations is chosen according to the sparsity estimated under the given energy threshold, the proposed method can ensure the quality of image reconstruction.

  8. Improvement of coda phase detectability and reconstruction of global seismic data using frequency-wavenumber methods

    NASA Astrophysics Data System (ADS)

    Schneider, Simon; Thomas, Christine; Dokht, Ramin M. H.; Gu, Yu Jeffrey; Chen, Yunfeng

    2018-02-01

    Due to uneven earthquake source and receiver distributions, our abilities to isolate weak signals from interfering phases and reconstruct missing data are fundamental to improving the resolution of seismic imaging techniques. In this study, we introduce a modified frequency-wavenumber (fk) domain based approach using a `Projection Onto Convex Sets' (POCS) algorithm. POCS takes advantage of the sparsity of the dominating energies of phase arrivals in the fk domain, which enables an effective detection and reconstruction of the weak seismic signals. Moreover, our algorithm utilizes the 2-D Fourier transform to perform noise removal, interpolation and weak-phase extraction. To improve the directional resolution of the reconstructed data, we introduce a band-stop 2-D Fourier filter to remove the energy of unwanted, interfering phases in the fk domain, which significantly increases the robustness of the signal of interest. The effectiveness and benefits of this method are clearly demonstrated using both simulated and actual broadband recordings of PP precursors from an array located in Tanzania. When used properly, this method could significantly enhance the resolution of weak crust and mantle seismic phases.
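
    A compact sketch in the spirit of the POCS scheme described above: alternate between keeping only the dominant fk-domain energy (the sparsity projection) and re-inserting the observed traces (the data-consistency projection). The threshold schedule and iteration count are assumptions, and the band-stop directional filter is omitted.

        # Illustrative POCS interpolation of a t-x gather with missing traces.
        import numpy as np

        def pocs_reconstruct(data, mask, n_iter=50):
            rec = data.copy()
            for it in range(n_iter):
                fk = np.fft.fft2(rec)
                thresh = np.percentile(np.abs(fk), 100 - 2 * (it + 1))  # relax slowly
                fk[np.abs(fk) < thresh] = 0.0         # keep only dominant fk energy
                rec = np.real(np.fft.ifft2(fk))
                rec = data * mask + rec * (1 - mask)  # restore observed samples
            return rec

        t, x = 256, 64
        clean = np.sin(np.linspace(0, 40, t))[:, None] * np.ones((1, x))
        mask = (np.random.rand(1, x) > 0.4) * np.ones((t, 1))
        print(pocs_reconstruct(clean * mask, mask).shape)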

  9. Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.

    PubMed

    Singh, Anurag; Dandapat, Samarendra

    2017-04-01

    In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet-based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploitation of both types of correlations is very important in CS-based ECG telemonitoring systems for better performance. However, most of the existing CS-based works exploit either of the correlations, which results in a suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of the existing CS-based WBAN systems.

  10. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    PubMed

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
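
    A simplified caricature of the sum-of-outer-products idea, not the paper's exact algorithm: approximate the data matrix as sum_j d_j c_j^T, cycling over rank-one pairs with a hard-thresholded sparse update for each coefficient row and a normalized update for each atom.

        # Toy SOUP-style dictionary learning by rank-one block coordinate descent.
        import numpy as np

        def soup_toy(Y, n_atoms=10, thresh=0.1, n_pass=5, rng=np.random.default_rng(0)):
            D = rng.standard_normal((Y.shape[0], n_atoms))
            D /= np.linalg.norm(D, axis=0)
            C = np.zeros((n_atoms, Y.shape[1]))
            for _ in range(n_pass):
                for j in range(n_atoms):
                    E = Y - D @ C + np.outer(D[:, j], C[j])  # residual without atom j
                    c = E.T @ D[:, j]
                    c[np.abs(c) < thresh] = 0.0              # sparse coefficient row
                    d = E @ c
                    if np.linalg.norm(d) > 0:
                        D[:, j] = d / np.linalg.norm(d)
                    C[j] = c
            return D, C

        Y = np.random.rand(64, 200)                          # e.g. vectorized patches
        D, C = soup_toy(Y)
        print(np.linalg.norm(Y - D @ C) / np.linalg.norm(Y))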

  11. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems

    PubMed Central

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction. PMID:29376111

  12. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-01-01

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345

  13. Rating knowledge sharing in cross-domain collaborative filtering.

    PubMed

    Li, Bin; Zhu, Xingquan; Li, Ruijiang; Zhang, Chengqi

    2015-05-01

    Cross-domain collaborative filtering (CF) aims to share common rating knowledge across multiple related CF domains to boost the CF performance. In this paper, we view CF domains as a 2-D site-time coordinate system, on which multiple related domains, such as similar recommender sites or successive time-slices, can share group-level rating patterns. We propose a unified framework for cross-domain CF over the site-time coordinate system by sharing group-level rating patterns and imposing user/item dependence across domains. A generative model, say ratings over site-time (ROST), which can generate and predict ratings for multiple related CF domains, is developed as the basic model for the framework. We further introduce cross-domain user/item dependence into ROST and extend it to two real-world cross-domain CF scenarios: 1) ROST (sites) for alleviating rating sparsity in the target domain, where multiple similar sites are viewed as related CF domains and some items in the target domain depend on their correspondences in the related ones; and 2) ROST (time) for modeling user-interest drift over time, where a series of time-slices are viewed as related CF domains and a user at current time-slice depends on herself in the previous time-slice. All these ROST models are instances of the proposed unified framework. The experimental results show that ROST (sites) can effectively alleviate the sparsity problem to improve rating prediction performance and ROST (time) can clearly track and visualize user-interest drift over time.

  14. Joint sparse reconstruction of multi-contrast MRI images with graph based redundant wavelet transform.

    PubMed

    Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo

    2018-05-03

    Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition time limits the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, images are compromised at high acceleration factors if they are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1 norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Besides, the proposed method outperforms single-image reconstruction with a joint sparsity constraint of multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under the graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve high acceleration factors by exploring the complementary information provided by multi-contrast MRI.
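
    The ℓ2,1 norm above is what couples the contrasts: transform coefficients sharing a location are shrunk by their joint l2 magnitude, so the contrasts keep or drop support together. A minimal sketch of this proximal step, under an assumed coefficients-by-contrasts stacking:

        # Row-wise joint shrinkage, the proximal operator of the l21 norm.
        import numpy as np

        def prox_l21(W, t):
            norms = np.linalg.norm(W, axis=1, keepdims=True)
            scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
            return W * scale

        W = np.random.randn(1000, 3)             # 3 contrasts with shared support
        print(np.count_nonzero(prox_l21(W, 1.5).any(axis=1)))  # rows kept jointly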

  15. Wavelet-based 3-D inversion for frequency-domain airborne EM data

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.

    2018-04-01

    In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning that the wavelet-domain inversion is inherently multiresolution. To impose the sparsity constraint, we minimize an L1-norm measure in the wavelet domain, which generally yields a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, deliver a more stable inversion process and better local resolution, while the lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test the new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared with the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
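
    The iteratively reweighted least-squares step can be illustrated generically: each pass solves a weighted ridge problem whose weights approximate the L1 term. The dense matrix below is a toy stand-in for the forward operator, and all parameters are assumptions.

        # Generic IRLS loop for min ||A m - d||^2 + lam ||m||_1.
        import numpy as np

        def irls_l1(A, d, lam=0.1, n_iter=30, eps=1e-6):
            m = np.zeros(A.shape[1])
            for _ in range(n_iter):
                w = 1.0 / (np.abs(m) + eps)      # reweighting that mimics the L1 term
                m = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ d)
            return m

        rng = np.random.default_rng(1)
        A = rng.standard_normal((40, 100))       # underdetermined toy system
        m_true = np.zeros(100)
        m_true[[5, 30, 77]] = [1.0, -2.0, 0.5]
        print(np.round(irls_l1(A, A @ m_true)[[5, 30, 77]], 2))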

  16. Computational photoacoustic imaging with sparsity-based optimization of the initial pressure distribution

    NASA Astrophysics Data System (ADS)

    Shang, Ruibo; Archibald, Richard; Gelb, Anne; Luke, Geoffrey P.

    2018-02-01

    In photoacoustic (PA) imaging, the optical absorption can be acquired from the initial pressure distribution (IPD), so an accurate reconstruction of the IPD is very helpful for reconstructing the optical absorption. However, the image quality of PA imaging in scattering media is degraded by acoustic diffraction, imaging artifacts, and weak PA signals. In this paper, we propose a sparsity-based optimization approach that improves the reconstruction of the IPD in PA imaging. A linear imaging forward model was set up based on a time-and-delay method under the assumption that the point spread function (PSF) is spatially invariant. Then, to solve this inverse problem, an optimization formulation was proposed with a regularization term encoding the sparsity of the IPD in a certain domain. As a proof of principle, the approach was applied to reconstructing point objects and blood vessel phantoms. The resolution and signal-to-noise ratio (SNR) were compared between conventional back-projection and our proposed approach. Overall, these results show that computational imaging can leverage the sparsity of PA images to improve the estimation of the IPD.

  17. Time-jittered marine seismic data acquisition via compressed sensing and sparsity-promoting wavefield reconstruction

    NASA Astrophysics Data System (ADS)

    Wason, H.; Herrmann, F. J.; Kumar, R.

    2016-12-01

    Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high-resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--that is sparse or compressible in some transform domain--from relatively fewer measurements than required by the Nyquist sampling criterion. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step beyond multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered, time-compressed marine acquisition scheme where single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and faster acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. We show that conventional seismic data can be reconstructed from jittered data with high quality, and demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps in controlling the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms. We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable surveys.
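
    The gap-control property is easy to see in a small sketch contrasting jittered with purely random subsampling: jitter perturbs a regular grid within each cell, which bounds the largest gap between kept positions. All parameters are illustrative.

        # Jittered vs. random subsampling of shot positions.
        import numpy as np

        def jittered_sample(n_total, n_keep, rng=np.random.default_rng(0)):
            cell = n_total / n_keep              # one sample per regular cell
            return np.array([int(i * cell + rng.uniform(0, cell)) for i in range(n_keep)])

        def random_sample(n_total, n_keep, rng=np.random.default_rng(0)):
            return np.sort(rng.choice(n_total, n_keep, replace=False))

        for name, idx in [("jittered", jittered_sample(1000, 250)),
                          ("random", random_sample(1000, 250))]:
            print(name, "max gap:", np.diff(idx).max())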

  18. Analysis Of The IJCNN 2011 UTL Challenge

    DTIC Science & Technology

    2012-01-13

    Large datasets from various application domains were made available: handwriting recognition, image recognition, video processing, text processing, and ecology. The validation and final evaluation sets consist of 4096 examples each. Excerpt of the dataset table:

        Dataset    Domain       Features  Sparsity  Devel.  Transf.
        AVICENNA   Handwriting  120       0%        150205  ...

    ... documents [3]. Transfer learning methods could accelerate the application of handwriting recognizers to historical manuscripts by reducing the need for ...

  19. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to their state-of-the-art soft-threshold and hard-threshold filtering counterparts. PMID:25530928
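
    For orientation, the half-threshold operator referenced above (Xu et al.'s analytic solution of the l1/2 regularization) is commonly stated as below; this sketch follows that common statement from memory and should be checked against the original paper before serious use.

        # Half-thresholding operator for l1/2 regularization (as commonly stated).
        import numpy as np

        def half_threshold(x, lam):
            out = np.zeros_like(x)
            t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)  # threshold level
            big = np.abs(x) > t
            phi = np.arccos((lam / 8.0) * (np.abs(x[big]) / 3.0) ** -1.5)
            out[big] = (2.0 / 3.0) * x[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
            return out

        x = np.linspace(-2, 2, 9)
        print(np.round(half_threshold(x, 0.5), 3))  # small entries map to zero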

  20. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to their state-of-the-art soft-threshold and hard-threshold filtering counterparts.

  1. Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation.

    PubMed

    Xu, Yong; Fang, Xiaozhao; Wu, Jian; Li, Xuelong; Zhang, David

    2016-02-01

    In this paper, we address the problem of unsupervised domain transfer learning in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy of the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid a potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.

  2. Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms.

    PubMed

    Maggioni, Matteo; Boracchi, Giacomo; Foi, Alessandro; Egiazarian, Karen

    2012-09-01

    We propose a powerful video filtering algorithm that exploits temporal and spatial redundancy characterizing natural video sequences. The algorithm implements the paradigm of nonlocal grouping and collaborative filtering, where a higher dimensional transform-domain representation of the observations is leveraged to enforce sparsity, and thus regularize the data: 3-D spatiotemporal volumes are constructed by tracking blocks along trajectories defined by the motion vectors. Mutually similar volumes are then grouped together by stacking them along an additional fourth dimension, thus producing a 4-D structure, termed group, where different types of data correlation exist along the different dimensions: local correlation along the two dimensions of the blocks, temporal correlation along the motion trajectories, and nonlocal spatial correlation (i.e., self-similarity) along the fourth dimension of the group. Collaborative filtering is then realized by transforming each group through a decorrelating 4-D separable transform and then by shrinkage and inverse transformation. In this way, the collaborative filtering provides estimates for each volume stacked in the group, which are then returned and adaptively aggregated to their original positions in the video. The proposed filtering procedure addresses several video processing applications, such as denoising, deblocking, and enhancement of both grayscale and color data. Experimental results prove the effectiveness of our method in terms of both subjective and objective visual quality, and show that it outperforms the state of the art in video denoising.

  3. OBS Data Denoising Based on Compressed Sensing Using Fast Discrete Curvelet Transform

    NASA Astrophysics Data System (ADS)

    Nan, F.; Xu, Y.

    2017-12-01

    OBS (Ocean Bottom Seismometer) data denoising is an important step in OBS data processing and inversion; it is necessary to obtain clearer seismic phases for further velocity structure analysis. Traditional methods for OBS data denoising include band-pass filtering, Wiener filtering, and deconvolution (Liu, 2015), most of which are based on the Fourier Transform (FT). Recently, multi-scale transform methods such as the wavelet transform (WT) and the Curvelet transform (CvT) have been widely used for data denoising in various applications. The FT, WT and CvT can represent a signal sparsely and separate noise in the transform domain, and they suit different cases. Compared with the Curvelet transform, the FT suffers from the Gibbs phenomenon and cannot handle point discontinuities well. The WT is well localized and multiscale, but it has poor orientation selectivity and cannot handle curve discontinuities well. The CvT is a multiscale directional transform that can represent curves with only a small number of coefficients; it provides an optimal sparse representation of objects with singularities along smooth curves, which is suitable for seismic data processing. Different seismic phases in OBS data appear as discontinuous curves in the time domain. Hence, we propose to analyze OBS data via the CvT and separate the noise in the CvT domain. In this paper, our sparsity-promoting inversion approach is constrained by an L1 condition, and we solve this L1 problem using modified iterative thresholding. Results show that the proposed method suppresses the noise well and gives sparse results in the Curvelet domain. Figure 1 compares the Curvelet and wavelet denoising methods, with the same iterations and threshold, on a synthetic example (panels: original data; noisy data; data denoised with the CvT; data denoised with the WT). The CvT eliminates the noise well and gives a better result than the WT. We further applied the CvT denoising method to OBS data processing. Figure 2a shows a common receiver gather collected in the Bohai Sea, China. The whole profile is 120 km long with 987 shots; the horizontal axis is the shot number, and the vertical axis is travel time reduced by 6 km/s. We used our method to process the data and obtained the denoised profile in Figure 2b. After denoising, most of the high-frequency noise was suppressed and the seismic phases were clearer.

  4. High-Frequency Subband Compressed Sensing MRI Using Quadruplet Sampling

    PubMed Central

    Sung, Kyunghyun; Hargreaves, Brian A

    2013-01-01

    Purpose: To present and validate a new method that formalizes a direct link between k-space and wavelet domains, applying separate undersampling and reconstruction to high- and low-spatial-frequency k-space data. Theory and Methods: High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing (CS) problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, CS can be used for high-spatial-frequency regions while parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for CS and regular undersampling for parallel imaging. Results: Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution 3D breast imaging with a net acceleration of 11 to 12. Conclusion: The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. This new formulation also reduces the reconstruction time due to the smaller problem size. PMID:23280540

  5. The identification of multi-cave combinations in carbonate reservoirs based on sparsity constraint inverse spectral decomposition

    NASA Astrophysics Data System (ADS)

    Li, Qian; Di, Bangrang; Wei, Jianxin; Yuan, Sanyi; Si, Wenpeng

    2016-12-01

    Sparsity constraint inverse spectral decomposition (SCISD) is a time-frequency analysis method based on the convolution model, in which minimizing the l1 norm of the time-frequency spectrum of the seismic signal is adopted as a sparsity constraint term. The SCISD method has higher time-frequency resolution and a more concentrated time-frequency distribution than conventional spectral decomposition methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the S-transform. Owing to these features, the SCISD method has gradually been used in low-frequency anomaly detection, horizon identification and random noise reduction for sandstone and shale reservoirs. However, it has not yet been used in carbonate reservoir prediction. The carbonate fractured-vuggy reservoir is the major hydrocarbon reservoir in the Halahatang area of the Tarim Basin, north-west China. Without reasonable predictions of the types of multi-cave combinations, the seismic responses of the multi-cave combinations may be interpreted incorrectly, resulting in large errors in reserves estimation of the carbonate reservoir. In this paper, the energy and phase spectra of the SCISD are applied to identify multi-cave combinations in carbonate reservoirs. Examples on physical model data and real seismic data illustrate that the SCISD method can detect the combination type and the number of caves in multi-cave combinations, and can provide a favourable basis for subsequent reservoir prediction and quantitative estimation of cave-type carbonate reservoir volume.

  6. Dendrites of dentate gyrus granule cells contribute to pattern separation by controlling sparsity

    PubMed Central

    Chavlis, Spyridon; Petrantonakis, Panagiotis C.

    2016-01-01

    The hippocampus plays a key role in pattern separation, the process of transforming similar incoming information to highly dissimilar, nonoverlapping representations. Sparsely firing granule cells (GCs) in the dentate gyrus (DG) have been proposed to undertake this computation, but little is known about which of their properties influence pattern separation. Dendritic atrophy has been reported in diseases associated with pattern separation deficits, suggesting a possible role for dendrites in this phenomenon. To investigate whether and how the dendrites of GCs contribute to pattern separation, we build a simplified, biologically relevant, computational model of the DG. Our model suggests that the presence of GC dendrites is associated with high pattern separation efficiency, while their atrophy leads to increased excitability and performance impairments. These impairments can be rescued by restoring GC sparsity to control levels through various manipulations. We predict that dendrites contribute to pattern separation as a mechanism for controlling sparsity. © 2016 The Authors. Hippocampus Published by Wiley Periodicals, Inc. PMID:27784124

  7. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    PubMed

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-01-01

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which leads to a limited-angle reconstruction problem in which image quality usually suffers from slope artifacts. The objective of this study is to first investigate the distorted regions of reconstructed images that encounter slope artifacts, and then to present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method includes the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified non-local means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical experiments showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize distorted edges in the reconstructed images. Quantitative assessments also showed that the new method achieved the highest image quality among the compared algorithms. This study demonstrated that the presented l0W-PNLM yields higher image quality due to a number of unique characteristics: (1) it utilizes the structural similarity between the reconstructed image and the prior image to correct edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain first and higher derivatives in several directions and at several levels; and (3) it takes advantage of l0 regularization to promote the sparsity of wavelet coefficients, which is effective for inhibiting the slope artifacts. Therefore, the new method can effectively address the limited-angle CT reconstruction problem and has practical significance.

  8. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing, mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) scheme and with other existing CS-based fusion approaches, the proposed method is observed to outperform them even with fewer samples.

  9. Adaptive OFDM Waveform Design for Spatio-Temporal-Sparsity Exploited STAP Radar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

    In this chapter, we describe a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. The motivation for employing an OFDM signal is that it improves target detectability in the presence of interfering signals by increasing the frequency diversity of the system. However, due to the addition of one extra dimension in terms of frequency, the adaptive degrees of freedom in an OFDM-STAP also increase. Therefore, to avoid constructing a fully adaptive OFDM-STAP, we develop a sparsity-based STAP algorithm. We observe that the interference spectrum is inherently sparse in the spatio-temporal domain, as the clutter responses occupy only a diagonal ridge on the spatio-temporal plane and the jammer signals interfere only from a few spatial directions. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller amount of secondary data than other existing STAP techniques, and produces nearly optimum STAP performance. In addition to designing the STAP filter, we optimally design the transmit OFDM signals by maximizing the output signal-to-interference-plus-noise ratio (SINR) in order to improve the STAP performance. The computation of the output SINR depends on the estimated value of the interference covariance matrix, which we obtain by applying the sparse recovery algorithm. Therefore, we analytically assess the effects of the synthesized OFDM coefficients on the sparse recovery of the interference covariance matrix by computing the coherence measure of the sparse measurement matrix. Our numerical examples demonstrate the STAP performance achieved by the sparsity-based technique and adaptive waveform design.

  10. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Time domain localization technique with sparsity constraint for imaging acoustic sources

    NASA Astrophysics Data System (ADS)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations, and the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi-real-time generation of noise source maps. Finally, the technique is tested with real data.
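
    Of the two sparse solvers mentioned, orthogonal matching pursuit is the simpler to sketch: greedily select the dictionary column most correlated with the residual, then re-fit all selected amplitudes by least squares. The random dictionary below is a stand-in for an actual acoustic propagation model.

        # Minimal orthogonal matching pursuit (OMP).
        import numpy as np

        def omp(A, y, n_nonzero):
            support, r = [], y.copy()
            for _ in range(n_nonzero):
                support.append(int(np.argmax(np.abs(A.T @ r))))  # best-matching column
                x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                r = y - A[:, support] @ x_s                      # update the residual
            x = np.zeros(A.shape[1])
            x[support] = x_s
            return x

        rng = np.random.default_rng(2)
        A = rng.standard_normal((64, 256))
        x_true = np.zeros(256)
        x_true[[10, 100]] = [1.0, 0.7]          # two hypothetical sources
        print(np.nonzero(omp(A, A @ x_true, 2))[0])  # recovered source indices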

  12. Efficient Implementations of the Quadrature-Free Discontinuous Galerkin Method

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Atkins, Harold L.

    1999-01-01

    The efficiency of the quadrature-free form of the discontinuous Galerkin method in two dimensions, and briefly in three dimensions, is examined. Most of the work for constant-coefficient, linear problems involves the volume and edge integrations, and the transformation of information from the volume to the edges. These operations can be viewed as matrix-vector multiplications. Many of the matrices are sparse as a result of symmetry, and blocking and specialized multiplication routines are used to account for the sparsity. By optimizing these operations, a 35% reduction in total CPU time is achieved. For nonlinear problems, the calculation of the flux becomes dominant because of the cost associated with polynomial products and inversion. This component of the work can be reduced by up to 75% when the products are approximated by truncating terms. Because the cost is high for nonlinear problems on general elements, it is suggested that simplified physics and the most efficient element types be used over most of the domain.

  13. Detection of faults in rotating machinery using periodic time-frequency sparsity

    NASA Astrophysics Data System (ADS)

    Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.

    2016-11-01

    This paper addresses the problem of extracting periodic oscillatory features in vibration signals for detecting faults in rotating machinery. To extract the feature, we propose an approach in the short-time Fourier transform (STFT) domain, where the periodic oscillatory feature manifests itself as a relatively sparse grid. To estimate the sparse grid, we formulate an optimization problem using customized binary weights in the regularizer, where the weights are formulated to promote periodicity. To solve the proposed optimization problem, we develop an algorithm called the augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM) and is guaranteed to converge for both convex and non-convex formulations. As examples, the proposed approach is applied to simulated data, used as a tool for diagnosing faults in bearings and gearboxes on real data, and compared to some state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.

  14. Information verification and encryption based on phase retrieval with sparsity constraints and optical inference

    NASA Astrophysics Data System (ADS)

    Zhong, Shenlu; Li, Mengjiao; Tang, Xiajie; He, Weiqing; Wang, Xiaogang

    2017-01-01

    A novel optical information verification and encryption method is proposed based on an inference principle and phase retrieval with sparsity constraints. In this method, a target image is encrypted into two phase-only masks (POMs), which comprise sparse phase data used for verification. Both POMs need to be authenticated before being used for decryption. The target image can be optically reconstructed when the two authenticated POMs are Fourier transformed and convolved with the correct decryption key, which is also generated in the encryption process. No holographic scheme is involved in the proposed optical verification and encryption system, and there is no problem of information disclosure from the two authenticable POMs. Numerical simulation results demonstrate the validity and good performance of this new method.

  15. Sparsity-Cognizant Algorithms with Applications to Communications, Signal Processing, and the Smart Grid

    NASA Astrophysics Data System (ADS)

    Zhu, Hao

    Sparsity plays an instrumental role in a plethora of scientific fields, including statistical inference for variable selection, parsimonious signal representations, and solving under-determined systems of linear equations, which has led to the ground-breaking result of compressive sampling (CS). This Thesis leverages ideas from sparse signal reconstruction to develop sparsity-cognizant algorithms and analyze their performance. The vision is to devise tools exploiting the 'right' form of sparsity for the 'right' application domain of multiuser communication systems, array signal processing systems, and the emerging challenges in the smart power grid. Two important power system monitoring tasks are addressed first by capitalizing on the hidden sparsity. To robustify power system state estimation, a sparse outlier model is leveraged to capture the possible corruption in every datum, while the problem nonconvexity due to nonlinear measurements is handled using the semidefinite relaxation technique. Different from existing iterative methods, the proposed algorithm approximates well the global optimum regardless of the initialization. In addition, for enhanced situational awareness, a novel sparse overcomplete representation is introduced to capture (possibly multiple) line outages, and real-time algorithms are developed for solving the combinatorially complex identification problem. The proposed algorithms exhibit near-optimal performance while incurring only linear complexity in the number of lines, which makes it possible to quickly bring contingencies to attention. This Thesis also accounts for two basic issues in CS, namely fully-perturbed models and the finite alphabet property. The sparse total least-squares (S-TLS) approach is proposed to furnish CS algorithms for fully-perturbed linear models, leading to statistically optimal and computationally efficient solvers. The S-TLS framework is well motivated for grid-based sensing applications and exhibits higher accuracy than existing sparse algorithms. On the other hand, exploiting the finite alphabet of unknown signals emerges naturally in communication systems, along with sparsity coming from the low activity of each user. Compared to approaches accounting for only one of the two, joint exploitation of both leads to statistically optimal detectors with improved error performance.

  16. Spatio-temporal Event Classification using Time-series Kernel based Structured Sparsity

    PubMed Central

    Jeni, László A.; Lőrincz, András; Szabó, Zoltán; Cohn, Jeffrey F.; Kanade, Takeo

    2016-01-01

    In many behavioral domains, such as facial expression and gesture, sparse structure is prevalent. This sparsity would be well suited to event detection but for one problem: features typically are confounded by alignment error in space and time. As a consequence, high-dimensional representations such as SIFT and Gabor features have been favored despite their much greater computational cost and potential loss of information. We propose a Kernel Structured Sparsity (KSS) method that can handle both the temporal alignment problem and the structured sparse reconstruction within a common framework, and it can rely on simple features. We characterize spatio-temporal events as time-series of motion patterns, and by utilizing time-series kernels we apply standard structured-sparse coding techniques to tackle this important problem. We evaluated the KSS method using both gesture and facial expression datasets that include spontaneous behavior and differ in degree of difficulty and type of ground truth coding. KSS outperformed both sparse and non-sparse methods that utilize complex image features and their temporal extensions. In the case of early facial event classification, KSS had 10% higher accuracy, as measured by F1 score, than kernel SVM methods. PMID:27830214

  17. Parallel heterogeneous architectures for efficient OMP compressive sensing reconstruction

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amey; Stanislaus, Jerome L.; Mohsenin, Tinoosh

    2014-05-01

    Compressive Sensing (CS) is a novel scheme, in which a signal that is sparse in a known transform domain can be reconstructed using fewer samples. The signal reconstruction techniques are computationally intensive and have sluggish performance, which makes them impractical for real-time processing applications. This paper presents novel architectures for the Orthogonal Matching Pursuit (OMP) algorithm, one of the popular CS reconstruction algorithms. We show the implementation results of the proposed architectures on FPGA, ASIC, and a custom many-core platform. For the FPGA and ASIC implementations, a novel thresholding method is used to reduce the processing time for the optimization problem by at least 25%. For the custom many-core platform, efficient parallelization techniques are applied to reconstruct signals with varying signal lengths N and sparsity levels m. The algorithm is divided into three kernels. Each kernel is parallelized to reduce execution time, whereas efficient reuse of the matrix operators allows us to reduce area. Matrix operations are efficiently parallelized by taking advantage of blocked algorithms. For demonstration purposes, all architectures reconstruct a 256-length signal with maximum sparsity of 8 using 64 measurements. The implementation on a Xilinx Virtex-5 FPGA requires 27.14 μs to reconstruct the signal using basic OMP, and 18 μs with the thresholding method. The ASIC implementation reconstructs the signal in 13 μs, while our custom many-core, operating at 1.18 GHz, takes 18.28 μs. Compared to previously published work on the same algorithm and matrix size, the proposed FPGA and ASIC implementations are 1.3x and 1.8x faster, respectively. The proposed many-core implementation also performs 3000x faster than the CPU and 2000x faster than the GPU.
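
    For reference, a minimal NumPy version of the basic OMP loop follows; the three stages mirror the three-kernel split described above, but this is a generic textbook implementation, not the hardware design, and the test sizes simply echo the demonstration case (N = 256, m = 8, 64 measurements).

    ```python
    # Minimal reference OMP in NumPy (generic, not the paper's architecture).
    import numpy as np

    def omp(Phi, y, m):
        """Recover an m-sparse x from y = Phi @ x via Orthogonal Matching Pursuit."""
        residual, support = y.copy(), []
        for _ in range(m):
            # kernel 1: correlate the residual with all atoms, pick the best match
            k = int(np.argmax(np.abs(Phi.T @ residual)))
            support.append(k)
            # kernel 2: least-squares fit on the current support
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            # kernel 3: update the residual
            residual = y - Phi[:, support] @ x_s
        x = np.zeros(Phi.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((64, 256)) / np.sqrt(64)
    x_true = np.zeros(256)
    x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
    x_rec = omp(Phi, Phi @ x_true, 8)
    ```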

  18. Structured sparse linear graph embedding.

    PubMed

    Wang, Haixian

    2012-03-01

    Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the projection bases learning into a regression-type optimization problem, and then the structured sparsity regularization is applied to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structural information of the data. The SSLGE technique provides a unified framework for discovering structured sparse subspace. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station intervals lead to low-resolution images and can prevent imaging of the regions of interest. Sparsity-promotion inversion, a useful method for recovering missing data in industrial field acquisition, can be adapted to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSOs). Traditional sparsity-promotion inversion struggles when arrival times differ greatly between adjacent sites, which is the case of most concern here, so we use a shift method to improve it. The interpolation proceeds as follows: we first apply a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. We then use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling a phase into each missing trace. Finally, we shift the waveforms back to their original arrival times. We call this method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, different virtually observed seismic phases can be inserted at non-sampled sites to obtain dense seismic observation data. To test the method, we randomly hide the real data at a site and use the remaining data to interpolate the observation at that site, using either direct interpolation or FSIS. Compared with directly interpolated data, FSIS-interpolated data preserve amplitude better. Results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that the method of forming VSOs is applicable. The resulting data can feed advanced seismic techniques such as RTM to illuminate shallow structures.

  20. Sparsity Aware Adaptive Radar Sensor Imaging in Complex Scattering Environments

    DTIC Science & Technology

    2015-06-15

    [Abstract garbled in the source record; only fragments survive. They reference a waveform design requirement on the peak-to-average power ratio, a study of the impact of waveform encoding on nonlinear electromagnetic tomographic imaging, and papers by Yuanwei Jin, Chengdon Dong, and Enyue Lu on time-domain electromagnetic tomography using the propagation and backpropagation method (IEEE International Conference on Image Processing) and on waveform encoding for nonlinear electromagnetic tomographic imaging (IEEE Global...).]

  1. Sparsity based target detection for compressive spectral imagery

    NASA Astrophysics Data System (ADS)

    Boada, David Alberto; Arguello Fuentes, Henry

    2016-09-01

    Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, in contrast to traditional architectures that need a complete set of measurements of the data cube for image acquisition, thereby addressing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image must first be reconstructed by an inverse algorithm, which is itself an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in a compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.

  2. A novel framework to alleviate the sparsity problem in context-aware recommender systems

    NASA Astrophysics Data System (ADS)

    Yu, Penghua; Lin, Lanfen; Wang, Jing

    2017-04-01

    Recommender systems have become indispensable for services in the era of big data. To improve accuracy and satisfaction, context-aware recommender systems (CARSs) attempt to incorporate contextual information into recommendations. Typically, valid and influential contexts are determined in advance by domain experts or feature selection approaches. Most studies have focused on utilizing the unitary context due to the differences between various contexts. Meanwhile, multi-dimensional contexts will aggravate the sparsity problem, which means that the user preference matrix would become extremely sparse. Consequently, there are not enough or even no preferences in most multi-dimensional conditions. In this paper, we propose a novel framework to alleviate the sparsity issue for CARSs, especially when multi-dimensional contextual variables are adopted. Motivated by the intuition that the overall preferences tend to show similarities among specific groups of users and conditions, we first construct one contextual profile for each contextual condition. To further identify user and context subgroups automatically and simultaneously, we apply a co-clustering algorithm. Furthermore, we expand user preferences in a given contextual condition with the identified user and context clusters. Finally, we perform recommendations based on expanded preferences. Extensive experiments demonstrate the effectiveness of the proposed framework.

  3. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning, where it is an open problem to obtain strong sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strong sparsity-promoting solutions, we employ the ℓ1∕2 norm as a regularizer. Recent work on ℓ1∕2-norm regularization theory in compressive sensing shows that its solutions can be sparser than those obtained with the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems, so that closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary avoids trivial solutions while capturing its intrinsic properties. Experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn a more accurate dictionary, in terms of dictionary recovery and image processing, than state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.
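
    The one-dimensional subproblems admit a closed-form solution via the half-thresholding operator known from the ℓ1∕2 regularization literature (Xu et al.); a sketch of that operator follows. It is an illustrative implementation of the thresholding step only, not the authors' full manifold algorithm.

    ```python
    # Sketch of the closed-form l1/2 ("half") thresholding operator that makes
    # the one-dimensional subproblems cheap; the manifold dictionary update is
    # a separate step not shown here.
    import numpy as np

    def half_threshold(x, lam):
        """Elementwise minimizer of 0.5*(z - x)**2 + lam*|z|**0.5 (Xu et al. form)."""
        x = np.asarray(x, dtype=float)
        thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
        out = np.zeros_like(x)
        big = np.abs(x) > thresh                    # below the threshold -> zero
        phi = np.arccos((lam / 8.0) * (np.abs(x[big]) / 3.0) ** (-1.5))
        out[big] = (2.0 / 3.0) * x[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
        return out
    ```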

  4. A sparsity-based simplification method for segmentation of spectral-domain optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Meiniel, William; Gan, Yu; Olivo-Marin, Jean-Christophe; Angelini, Elsa

    2017-08-01

    Optical coherence tomography (OCT) has emerged as a promising image modality to characterize biological tissues. With axial and lateral resolutions at the micron level, OCT images provide detailed morphological information and enable applications such as optical biopsy and virtual histology for clinical needs. Image enhancement is typically required for morphological segmentation, to improve boundary localization rather than enrich detailed tissue information. We propose to formulate image enhancement as an image simplification task such that tissue layers are smoothed while contours are enhanced. For this purpose, we exploit a Total Variation sparsity-based image reconstruction, inspired by the Compressed Sensing (CS) theory, but specialized for images with structures arranged in layers. We demonstrate the potential of our approach on OCT human heart and retinal images for layer segmentation. We also compare our image enhancement capabilities to state-of-the-art denoising techniques.

  5. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  6. Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.

    PubMed

    Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf

    2018-05-12

    We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry which consisted of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of quality of reconstructed epicardial potentials, estimated activation and recovery time, and estimated locations of pacing, and compared with performance of Tikhonov zeroth-order regularization. Reconstructions in the wavelet domain achieved higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions. Graphical Abstract The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed and solving it requires additional constraints for regularization. We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions.

  7. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.

    PubMed

    Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C

    2014-08-01

    Retinal image alignment is fundamental to many applications in diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation by enforcing sparsity in the correspondence matrix and offer its solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors that better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms compared to several state-of-the-art techniques. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. DOLPHIn—Dictionary Learning for Phase Retrieval

    NASA Astrophysics Data System (ADS)

    Tillmann, Andreas M.; Eldar, Yonina C.; Mairal, Julien

    2016-12-01

    We propose a new algorithm to learn a dictionary for reconstructing and sparsely encoding signals from measurements without phase. Specifically, we consider the task of estimating a two-dimensional image from squared-magnitude measurements of a complex-valued linear transformation of the original image. Several recent phase retrieval algorithms exploit underlying sparsity of the unknown signal in order to improve recovery performance. In this work, we consider such a sparse signal prior in the context of phase retrieval, when the sparsifying dictionary is not known in advance. Our algorithm jointly reconstructs the unknown signal - possibly corrupted by noise - and learns a dictionary such that each patch of the estimated image can be sparsely represented. Numerical experiments demonstrate that our approach can obtain significantly better reconstructions for phase retrieval problems with noise than methods that cannot exploit such "hidden" sparsity. Moreover, on the theoretical side, we provide a convergence result for our method.

  9. SNR enhancement for downhole microseismic data based on scale classification shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili

    2018-06-01

    Shearlet transform (ST) can be effective in 2D signal processing, due to its parabolic scaling, high directional sensitivity, and optimal sparsity. ST combined with thresholding has been successfully applied to suppress random noise. However, because of the low magnitude and high frequency of a downhole microseismic signal, the coefficient values of valid signals and noise are similar in the shearlet domain. As a result, it is difficult to use for denoising. In this paper, we present a scale classification ST to solve this problem. The ST is used to decompose noisy microseismic data into several scales. By analyzing the spectrum and energy distribution of the shearlet coefficients of microseismic data, we divide the scales into two types: low-frequency scales which contain less useful signal and high-frequency scales which contain more useful signal. After classification, we use two different methods to deal with the coefficients on different scales. For the low-frequency scales, the noise is attenuated using a thresholding method. For the high-frequency scales, we propose a non-local means filter based on a generalized Gaussian distribution model, which takes advantage of the temporal and spatial similarity of microseismic data. The experimental results on both synthetic records and field data illustrate that our proposed method preserves the useful components and attenuates the noise well.

  10. An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation

    PubMed Central

    Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin

    2014-01-01

    In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving target detection is proposed, based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. Moreover, we then present a knowledge-aided block-size detection algorithm that can discriminate between the moving targets and the clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results. PMID:25222035

  11. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

    Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Efficient multichannel acoustic echo cancellation using constrained tap selection schemes in the subband domain

    NASA Astrophysics Data System (ADS)

    Desiraju, Naveen Kumar; Doclo, Simon; Wolff, Tobias

    2017-12-01

    Acoustic echo cancellation (AEC) is a key speech enhancement technology in speech communication and voice-enabled devices. AEC systems employ adaptive filters to estimate the acoustic echo paths between the loudspeakers and the microphone(s). In applications involving surround sound, the computational complexity of an AEC system may become demanding due to the multiple loudspeaker channels and the necessity of using long filters in reverberant environments. In order to reduce the computational complexity, the approach of partially updating the AEC filters is considered in this paper. In particular, we investigate tap selection schemes which exploit the sparsity present in the loudspeaker channels for partially updating subband AEC filters. The potential for exploiting signal sparsity across three dimensions, namely time, frequency, and channels, is analyzed. A thorough analysis of different state-of-the-art tap selection schemes is performed and insights about their limitations are gained. A novel tap selection scheme is proposed which overcomes these limitations by exploiting signal sparsity while not ignoring any filters for update in the different subbands and channels. Extensive simulation results using both artificial as well as real-world multichannel signals show that the proposed tap selection scheme outperforms state-of-the-art tap selection schemes in terms of echo cancellation performance. In addition, it yields almost identical echo cancellation performance as compared to updating all filter taps at a significantly reduced computational cost.

  13. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    PubMed

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced, using the RBF neural network to represent the transformed system output. Initially, a fixed and moderately sized RBF model base is derived based on a rank-revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced, using the Gauss-Newton algorithm to derive the required Box-Cox transformation based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of the matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.
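
    As a small aside, the Box-Cox transform and its maximum-likelihood exponent are easy to reproduce; the sketch below uses SciPy's generic fit rather than the letter's Gauss-Newton procedure, and the data are synthetic.

    ```python
    # Hedged sketch: Box-Cox transform with a maximum-likelihood exponent,
    # via scipy (not the letter's Gauss-Newton procedure).
    import numpy as np
    from scipy.stats import boxcox

    y = np.random.gamma(shape=2.0, scale=3.0, size=500)  # skewed positive output
    y_bc, lam_hat = boxcox(y)    # y_bc = (y**lam - 1)/lam, lam by max likelihood

    # An RBF network would then be trained to represent y_bc instead of y,
    # and predictions mapped back with the inverse transform (lam_hat != 0):
    y_back = (lam_hat * y_bc + 1.0) ** (1.0 / lam_hat)
    assert np.allclose(y_back, y)
    ```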

  14. Stationary wavelet transform for under-sampled MRI reconstruction.

    PubMed

    Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M

    2014-12-01

    In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional ℓp-penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data with particular emphasis on multiple-channel acquisitions. Copyright © 2014 Elsevier Inc. All rights reserved.
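
    A minimal 1-D analogue of the DWT-vs-SWT contrast can be written with PyWavelets; the snippet below (illustrative only, with arbitrary wavelet and threshold choices) thresholds both transforms of the same noisy signal.

    ```python
    # Minimal contrast of decimated DWT vs. translation-invariant SWT
    # thresholding with PyWavelets (a 1-D analogue of the penalty above).
    import numpy as np
    import pywt

    x = np.zeros(256); x[100] = 1.0               # a spike-like feature
    noisy = x + 0.05 * np.random.randn(256)

    def soft(c, t):
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    # Decimated DWT: thresholding is not shift invariant -> pseudo-Gibbs risk
    cA, *cD = pywt.wavedec(noisy, 'db4', level=3)
    dwt_rec = pywt.waverec([cA] + [soft(d, 0.1) for d in cD], 'db4')

    # Stationary (undecimated) wavelet transform: redundant, shift invariant
    coeffs = pywt.swt(noisy, 'db4', level=3)
    swt_rec = pywt.iswt([(a, soft(d, 0.1)) for a, d in coeffs], 'db4')
    ```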

  15. Curvelet-based compressive sensing for InSAR raw data

    NASA Astrophysics Data System (ADS)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

    The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by airborne BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. A real-time capability is desirable in this framework, so that the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the original signal volume can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and hence the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were analyzed in terms of sparsity to verify that the method provides efficient compression and recovery quality appropriate for InSAR applications, demonstrating the feasibility of applying compressive sensing.

  16. Separated Component-Based Restoration of Speckled SAR Images

    DTIC Science & Technology

    2013-01-01

    [Abstract garbled in the source record; only reference-list fragments survive, citing work on unsupervised change detection from SAR amplitude imagery (IEEE Trans. Geosci. Remote Sens., vol. 44, no. 10, 2006), speckle removal from SAR images in the undecimated wavelet domain (F. Argenti and L. Alparone, IEEE Trans. Geosci. Remote Sens., vol. 40, no. 10, 2002), and an iterative thresholding algorithm for linear inverse problems with a sparsity constraint (Commun. Pure Appl. Math., vol. 57, no. 11).]

  17. Compressed sensing reconstruction of cardiac cine MRI using golden angle spiral trajectories

    NASA Astrophysics Data System (ADS)

    Tolouee, Azar; Alirezaie, Javad; Babyn, Paul

    2015-11-01

    In dynamic cardiac cine Magnetic Resonance Imaging (MRI), the spatiotemporal resolution is limited by the low imaging speed. Compressed sensing (CS) theory has been applied to improve the imaging speed and thus the spatiotemporal resolution. The purpose of this paper is to improve CS reconstruction of undersampled data by exploiting spatiotemporal sparsity and efficient spiral trajectories. We extend the k-t sparse algorithm to spiral trajectories to achieve high spatiotemporal resolution in cardiac cine imaging. We exploit the spatiotemporal sparsity of cardiac cine MRI by applying a 2D + time wavelet-Fourier transform. For efficient coverage of k-space, we use a modified version of multi-shot (interleaved) spiral trajectories. To reduce incoherent aliasing artifacts, we use a different random undersampling pattern for each temporal frame. Finally, we use the nonuniform fast Fourier transform (NUFFT) algorithm to reconstruct the image from the non-uniformly acquired samples. The proposed approach was tested on simulated data and cardiac cine MRI data. Results show that higher acceleration factors with improved image quality can be obtained with the proposed approach in comparison to the existing state-of-the-art method. The flexibility of the introduced method should allow it to be used not only for the challenging case of cardiac imaging, but also for other applications in which the patient moves or breathes during acquisition.

  18. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM).

    PubMed

    Gao, Hao; Yu, Hengyong; Osher, Stanley; Wang, Ge

    2011-11-01

    We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations.
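
    Stripping away the CT forward operator and the split Bregman machinery, the core low-rank-plus-sparse split can be sketched with plain alternating thresholding, as below; this is an illustration of the decomposition idea only, not the PRISM algorithm itself, and the thresholds are arbitrary.

    ```python
    # Illustrative low-rank + sparse split in the spirit of the PRISM/L+S
    # decomposition (plain alternating thresholding, not the paper's split
    # Bregman solver, and without the CT forward operator).
    import numpy as np

    def lps_decompose(M, tau_L=1.0, tau_S=0.1, iters=50):
        """Split M (space x energy) into low-rank L plus sparse S."""
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(iters):
            # singular-value thresholding -> low-rank background over energy
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U * np.maximum(s - tau_L, 0.0)) @ Vt
            # soft thresholding -> sparse, distinct spectral features
            R = M - L
            S = np.sign(R) * np.maximum(np.abs(R) - tau_S, 0.0)
        return L, S
    ```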

  19. Accelerated Simulation of Kinetic Transport Using Variational Principles and Sparsity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caflisch, Russel

    This project is centered on the development and application of techniques of sparsity and compressed sensing for variational principles, PDEs and physics problems, in particular for kinetic transport. This included derivation of sparse modes for elliptic and parabolic problems coming from variational principles. The research results of this project are on methods for sparsity in differential equations and their applications and on application of sparsity ideas to kinetic transport of plasmas.

  20. A Novel Sky-Subtraction Method Based on Non-negative Matrix Factorisation with Sparsity for Multi-object Fibre Spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zhang, Long; Ye, Zhongfu

    2016-12-01

    A novel sky-subtraction method based on non-negative matrix factorisation with sparsity is proposed in this paper. The method is redesigned for sky-subtraction taking into account the characteristics of the skylight. It has two constraint terms, one for sparsity and the other for homogeneity. Unlike standard sky-subtraction techniques, such as B-spline curve fitting methods and Principal Components Analysis approaches, sky-subtraction based on non-negative matrix factorisation with sparsity offers higher accuracy and flexibility, and is of value for sky-subtraction in multi-object fibre spectroscopic telescope surveys. To demonstrate the effectiveness and superiority of the proposed algorithm, experiments are performed on Large Sky Area Multi-Object Fiber Spectroscopic Telescope data; the mechanisms of other multi-object fibre spectroscopic telescopes are similar.

  1. Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2017-05-01

    Building on recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for extracting repetitive transients that reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, the continuous wavelet transform, the discrete wavelet transform, wavelet packets, multiwavelets, the empirical wavelet transform, empirical mode decomposition, or local mean decomposition. Second, artificial observations can be constructed based on one of many metrics able to quantify repetitive transients, such as kurtosis, a sparsity measurement, entropy, approximate entropy, a smoothness index, or a synthesized criterion. Finally, given the artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations using dynamic Bayesian inference. More importantly, the proposed methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.

  2. Motion-compensated compressed sensing for dynamic contrast-enhanced MRI using regional spatiotemporal sparsity and region tracking: Block LOw-rank Sparsity with Motion-guidance (BLOSM)

    PubMed Central

    Chen, Xiao; Salerno, Michael; Yang, Yang; Epstein, Frederick H.

    2014-01-01

    Purpose: Dynamic contrast-enhanced MRI of the heart is well-suited for acceleration with compressed sensing (CS) due to its spatiotemporal sparsity; however, respiratory motion can degrade sparsity and lead to image artifacts. We sought to develop a motion-compensated CS method for this application. Methods: A new method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was developed to accelerate first-pass cardiac MRI, even in the presence of respiratory motion. This method divides the images into regions, tracks the regions through time, and applies matrix low-rank sparsity to the tracked regions. BLOSM was evaluated using computer simulations and first-pass cardiac datasets from human subjects. Using rate-4 acceleration, BLOSM was compared to other CS methods such as k-t SLR that employs matrix low-rank sparsity applied to the whole image dataset, with and without motion tracking, and to k-t FOCUSS with motion estimation and compensation that employs spatial and temporal-frequency sparsity. Results: BLOSM was qualitatively shown to reduce respiratory artifact compared to other methods. Quantitatively, using root mean squared error and the structural similarity index, BLOSM was superior to other methods. Conclusion: BLOSM, which exploits regional low rank structure and uses region tracking for motion compensation, provides improved image quality for CS-accelerated first-pass cardiac MRI. PMID:24243528

  3. A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.

    PubMed

    He, Xiyan; Condat, Laurent; Bioucas-Diaz, Jose; Chanussot, Jocelyn; Xia, Junshi

    2014-06-27

    The development of multisensor systems in recent years has led to a great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high-quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR), while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low rank and preserve edge locations. We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.

  4. Identification of spatially-localized initial conditions via sparse PCA

    NASA Astrophysics Data System (ADS)

    Dwivedi, Anubhav; Jovanovic, Mihailo

    2017-11-01

    Principal Component Analysis involves maximization of a quadratic form subject to a quadratic constraint on the initial flow perturbations and it is routinely used to identify the most energetic flow structures. For general flow configurations, principal components can be efficiently computed via power iteration of the forward and adjoint governing equations. However, the resulting flow structures typically have a large spatial support leading to a question of physical realizability. To obtain spatially-localized structures, we modify the quadratic constraint on the initial condition to include a convex combination with an additional regularization term which promotes sparsity in the physical domain. We formulate this constrained optimization problem as a nonlinear eigenvalue problem and employ an inverse power-iteration-based method to solve it. The resulting solution is guaranteed to converge to a nonlinear eigenvector which becomes increasingly localized as our emphasis on sparsity increases. We use several fluids examples to demonstrate that our method indeed identifies the most energetic initial perturbations that are spatially compact. This work was supported by Office of Naval Research through Grant Number N00014-15-1-2522.
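
    A soft-thresholded power iteration gives a rough feel for how sparsity-promoting constraints localize the leading mode; the sketch below is a generic truncated-power-method variant, not the nonlinear inverse power iteration used in this work, and the threshold parameter is arbitrary.

    ```python
    # Generic sparsity-promoting power iteration (illustrative; the paper
    # instead solves a nonlinear eigenvalue problem with an inverse power
    # method, which this only approximates).
    import numpy as np

    def sparse_principal_component(C, gamma=0.1, iters=100):
        """Leading mode of a symmetric PSD matrix C with thresholded updates."""
        v = np.random.randn(C.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = C @ v                                   # power step
            w = np.sign(w) * np.maximum(np.abs(w) - gamma * np.abs(w).max(), 0.0)
            nrm = np.linalg.norm(w)
            if nrm == 0:
                break                                   # gamma too aggressive
            v = w / nrm
        return v                                        # spatially compact mode
    ```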

  5. Vibration-based monitoring and diagnostics using compressive sensing

    NASA Astrophysics Data System (ADS)

    Ganesan, Vaahini; Das, Tuhin; Rahnavard, Nazanin; Kauffman, Jeffrey L.

    2017-04-01

    Vibration data from mechanical systems carry important information that is useful for characterization and diagnosis. Standard approaches rely on continually streaming data at a fixed sampling frequency. For applications involving continuous monitoring, such as Structural Health Monitoring (SHM), such approaches result in high volume data and rely on sensors being powered for prolonged durations. Furthermore, for spatial resolution, structures are instrumented with a large array of sensors. This paper shows that both volume of data and number of sensors can be reduced significantly by applying Compressive Sensing (CS) in vibration monitoring applications. The reduction is achieved by using random sampling and capitalizing on the sparsity of vibration signals in the frequency domain. Preliminary experimental results validating CS-based frequency recovery are also provided. By exploiting the sparsity of mode shapes, CS can also enable efficient spatial reconstruction using fewer spatially distributed sensors. CS can thereby reduce the cost and power requirement of sensing as well as streamline data storage and processing in monitoring applications. In well-instrumented structures, CS can enable continued monitoring in case of sensor or computational failures.
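
    A toy version of the recovery step might look as follows: random time samples of a two-mode signal, with the sparse DCT coefficients recovered by iterative soft thresholding; all sizes, frequencies, and parameters here are hypothetical.

    ```python
    # Toy CS recovery: random time samples of a vibration-like signal,
    # sparse DCT coefficients recovered by iterative soft thresholding (ISTA).
    import numpy as np
    from scipy.fft import idct

    n, m = 512, 96                                 # hypothetical signal/sample sizes
    t = np.arange(n)
    x = np.cos(2 * np.pi * 13 * t / n) + 0.5 * np.cos(2 * np.pi * 57 * t / n)
    B = idct(np.eye(n), axis=0, norm='ortho')      # sparsifying (DCT) basis
    keep = np.sort(np.random.default_rng(1).choice(n, m, replace=False))
    A, y = B[keep, :], x[keep]                     # random time-domain samples

    alpha = 1.0 / np.linalg.norm(A, 2) ** 2        # ISTA step size
    c, lam = np.zeros(n), 0.01
    for _ in range(500):                           # iterative soft thresholding
        g = c + alpha * (A.T @ (y - A @ c))
        c = np.sign(g) * np.maximum(np.abs(g) - alpha * lam, 0.0)
    x_rec = B @ c                                  # full-length reconstruction
    ```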

  6. Adapting Word Embeddings from Multiple Domains to Symptom Recognition from Psychiatric Notes

    PubMed Central

    Zhang, Yaoyun; Li, Hee-Jin; Wang, Jingqi; Cohen, Trevor; Roberts, Kirk; Xu, Hua

    2018-01-01

    Mental health is increasingly recognized as an important topic in healthcare. Information concerning psychiatric symptoms is critical for the timely diagnosis of mental disorders, as well as for the personalization of interventions. However, the diversity and sparsity of psychiatric symptoms make it challenging for conventional natural language processing techniques to automatically extract such information from clinical text. To address this problem, this study takes the initiative to use and adapt word embeddings from four source domains – intensive care, biomedical literature, Wikipedia and Psychiatric Forum – to recognize symptoms in the target domain of psychiatry. We investigated four different approaches including 1) only using word embeddings of the source domain, 2) directly combining data of the source and target to generate word embeddings, 3) assigning different weights to word embeddings, and 4) retraining the word embedding model of the source domain using a corpus of the target domain. To the best of our knowledge, this is the first work to adapt multiple word embeddings from external domains to improve psychiatric symptom recognition in clinical text. Experimental results showed that the last two approaches outperformed the baseline methods, indicating the effectiveness of our new strategies to leverage embeddings from other domains. PMID:29888086

  7. Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.

    PubMed

    Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang

    2017-07-01

    It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary built from the emission pulse signal is proposed. Coefficients in the proposed dictionary are highly sparse. Images reconstructed with this dictionary were compared with those obtained with three other common transforms, namely, the discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.
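
    A sketch of the construction: dictionary atoms as delayed, attenuated copies of the emission pulse. The pulse shape, shift grid, and attenuation law below are illustrative stand-ins, not the paper's actual parameters.

    ```python
    # Sketch of a dictionary whose atoms are delayed, attenuated copies of
    # the emission pulse (shapes and the attenuation law are illustrative).
    import numpy as np

    n = 1024                                   # samples per A-line
    tp = np.arange(64)
    pulse = np.sin(2 * np.pi * 0.2 * tp) * np.hanning(64)   # hypothetical pulse

    atoms = []
    for delay in range(0, n - 64, 4):          # shift grid: echo arrival times
        atom = np.zeros(n)
        atom[delay:delay + 64] = pulse * np.exp(-1e-3 * delay)  # attenuation
        atoms.append(atom / np.linalg.norm(atom))
    D = np.stack(atoms, axis=1)                # columns = dictionary atoms
    # An echo trace y then admits a highly sparse code a with y ~= D @ a.
    ```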

  8. A new approach to global seismic tomography based on regularization by sparsity in a novel 3D spherical wavelet basis

    NASA Astrophysics Data System (ADS)

    Loris, Ignace; Simons, Frederik J.; Daubechies, Ingrid; Nolet, Guust; Fornasier, Massimo; Vetter, Philip; Judd, Stephen; Voronin, Sergey; Vonesch, Cédric; Charléty, Jean

    2010-05-01

    Global seismic wavespeed models are routinely parameterized in terms of spherical harmonics, networks of tetrahedral nodes, rectangular voxels, or spherical splines. Up to now, Earth model parametrizations by wavelets on the three-dimensional ball remain uncommon. Here we propose such a procedure with the following three goals in mind: (1) The multiresolution character of a wavelet basis allows for the models to be represented with an effective spatial resolution that varies as a function of position within the Earth. (2) This property can be used to great advantage in the regularization of seismic inversion schemes by seeking the most sparse solution vector, in wavelet space, through iterative minimization of a combination of the ℓ2 (to fit the data) and ℓ1 norms (to promote sparsity in wavelet space). (3) With the continuing increase in high-quality seismic data, our focus is also on numerical efficiency and the ability to use parallel computing in reconstructing the model. In this presentation we propose a new wavelet basis to take advantage of these three properties. To form the numerical grid we begin with a surface tessellation known as the 'cubed sphere', a construction popular in fluid dynamics and computational seismology, coupled with a semi-regular radial subdivision that honors the major seismic discontinuities between the core-mantle boundary and the surface. This mapping first divides the volume of the mantle into six portions. In each 'chunk' two angular and one radial variable are used for parametrization. In the new variables standard 'cartesian' algorithms can more easily be used to perform the wavelet transform (or other common transforms). Edges between chunks are handled by special boundary filters. We highlight the benefits of this construction and use it to analyze the information present in several published seismic compressional-wavespeed models of the mantle, paying special attention to the statistics of wavelet and scaling coefficients across scales. We also focus on the likely gains of future inversions of finite-frequency seismic data using a sparsity promoting penalty in combination with our new wavelet approach.

  9. Comparative analysis of autofocus functions in digital in-line phase-shifting holography.

    PubMed

    Fonseca, Elsa S R; Fiadeiro, Paulo T; Pereira, Manuela; Pinheiro, António

    2016-09-20

    Numerical reconstruction of digital holograms relies on a precise knowledge of the original object position. However, there are a number of relevant applications where this parameter is not known in advance and an efficient autofocusing method is required. This paper addresses the problem of finding optimal focusing methods for use in reconstruction of digital holograms of macroscopic amplitude and phase objects, using digital in-line phase-shifting holography in transmission mode. Fifteen autofocus measures, including spatial-, spectral-, and sparsity-based methods, were evaluated for both synthetic and experimental holograms. The Fresnel transform and the angular spectrum reconstruction methods were compared. Evaluation criteria included unimodality, accuracy, resolution, and computational cost. Autofocusing under angular spectrum propagation tends to perform better with respect to accuracy and unimodality criteria. Phase objects are, generally, more difficult to focus than amplitude objects. The normalized variance, the standard correlation, and the Tenenbaum gradient are the most reliable spatial-based metrics, combining computational efficiency with good accuracy and resolution. A good trade-off between focus performance and computational cost was found for the Fresnelet sparsity method.
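
    The two spatial metrics singled out as most reliable are simple to state; the definitions below follow their common forms in the autofocus literature and are not taken from the paper's code.

    ```python
    # Two of the spatial focus metrics favored in the comparison, in their
    # commonly used forms (illustrative, not the paper's implementation).
    import numpy as np
    from scipy.ndimage import sobel

    def normalized_variance(img):
        mu = img.mean()
        return ((img - mu) ** 2).mean() / mu        # variance scaled by mean

    def tenenbaum_gradient(img):
        gx, gy = sobel(img, axis=0), sobel(img, axis=1)
        return (gx ** 2 + gy ** 2).sum()            # Tenengrad focus energy

    # Autofocus: reconstruct at candidate depths z, keep the z that maximizes
    # the metric on the amplitude (or phase) of the reconstruction.
    ```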

  10. Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.

    PubMed

    Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin

    2013-09-01

    Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.

  11. Wavelets, ridgelets, and curvelets for Poisson noise removal.

    PubMed

    Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc

    2008-07-01

    In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotically constant variance. This new transform, which can be deemed an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
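
    The classical Anscombe transform that the MS-VST extends is a one-line mapping, A(x) = 2√(x + 3/8). The sketch below verifies its variance-stabilizing behavior empirically; the MS-VST itself applies the stabilization to filtered data, which is not reproduced here.

      import numpy as np

      def anscombe(x):
          """Anscombe VST: maps Poisson counts to approximately
          unit-variance Gaussian data, A(x) = 2*sqrt(x + 3/8)."""
          return 2.0 * np.sqrt(x + 3.0 / 8.0)

      # Empirical check: raw variance grows with intensity, while the
      # stabilized variance stays near 1.
      rng = np.random.default_rng(0)
      for lam in [5, 20, 100]:
          counts = rng.poisson(lam, size=100000)
          print(lam, counts.var(), anscombe(counts).var())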

  12. Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity

    DOE PAGES

    Tripathi, Ashish; McNulty, Ian; Munson, Todd; ...

    2016-10-14

    We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.

  13. SparseBeads data: benchmarking sparsity-regularized computed tomography

    NASA Astrophysics Data System (ADS)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
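
    As a rough illustration of the gradient sparsity referred to above, the sketch below counts pixels with a nonzero discrete gradient magnitude; the exact definition and thresholding used in the SparseBeads analysis may differ.

      import numpy as np

      def gradient_sparsity(img, tol=1e-6):
          """Count pixels whose discrete gradient magnitude exceeds tol,
          i.e. the sparsity level relevant to TV regularization."""
          gx = np.diff(img, axis=0, append=img[-1:, :])
          gy = np.diff(img, axis=1, append=img[:, -1:])
          mag = np.hypot(gx, gy)
          return int((mag > tol).sum())

      # A piecewise-constant phantom has few nonzero-gradient pixels:
      img = np.zeros((64, 64))
      img[16:48, 16:48] = 1.0
      print(gradient_sparsity(img))  # only the edge pixels count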

  14. The Issues of Sparsity in Providing Educational Opportunity in the State of Wyoming.

    ERIC Educational Resources Information Center

    Hobbs, Max E.

    Wyoming's funding programs for public education that relate to the issues of sparsity and the state's attempt to provide equal educational opportunity are reviewed. School district problems that relate to the issue of sparsity are also discussed. School district size in Wyoming ranges from the smallest district, by area, of 186 square miles to the…

  15. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can achieve competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575

  16. A Removal of Eye Movement and Blink Artifacts from EEG Data Using Morphological Component Analysis

    PubMed Central

    Wagatsuma, Hiroaki

    2017-01-01

    EEG signals contain a large amount of ocular artifacts with different time-frequency properties mixing together in EEGs of interest. The artifact removal has been substantially dealt with by existing decomposition methods known as PCA and ICA based on the orthogonality of signal vectors or statistical independence of signal components. We focused on the signal morphology and proposed a systematic decomposition method to identify the type of signal components on the basis of sparsity in the time-frequency domain based on Morphological Component Analysis (MCA), which provides a way of reconstruction that guarantees accuracy in reconstruction by using multiple bases in accordance with the concept of “dictionary.” MCA was applied to decompose the real EEG signal and clarified the best combination of dictionaries for this purpose. In our proposed semirealistic biological signal analysis with iEEGs recorded from the brain intracranially, those signals were successfully decomposed into original types by a linear expansion of waveforms, such as redundant transforms: UDWT, DCT, LDCT, DST, and DIRAC. Our result demonstrated that the most suitable combination for EEG data analysis was UDWT, DST, and DIRAC to represent the baseline envelope, multifrequency waveforms, and spiking activities individually as representative types of EEG morphologies. PMID:28194221
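
    The core MCA iteration is a block-coordinate thresholding across dictionaries. The following schematic sketch separates a toy signal into a DCT-sparse oscillatory component and a DIRAC-sparse spiking component; the study itself uses UDWT, DST, and DIRAC dictionaries and a more careful thresholding schedule.

      import numpy as np
      from scipy.fft import dct, idct

      def mca_two_dicts(signal, lam=0.2, n_iter=50):
          """Schematic MCA: split a signal into a DCT-sparse component
          (smooth oscillations) and a DIRAC-sparse component (spikes)
          by alternating soft thresholding in each dictionary."""
          soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
          smooth = np.zeros_like(signal)
          spikes = np.zeros_like(signal)
          for _ in range(n_iter):
              resid = signal - spikes
              smooth = idct(soft(dct(resid, norm='ortho'), lam), norm='ortho')
              spikes = soft(signal - smooth, lam)  # identity (DIRAC) dictionary
          return smooth, spikes

      # Toy use: a sinusoid plus sparse spikes separates cleanly.
      t = np.linspace(0, 1, 512)
      x = np.sin(2 * np.pi * 7 * t)
      x[[100, 300]] += 3.0
      smooth, spikes = mca_two_dicts(x)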

  17. Super-resolution photoacoustic microscopy using joint sparsity

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Haltmeier, M.; Berer, T.; Leiss-Holzinger, E.; Murray, T. W.

    2017-07-01

    We present an imaging method that uses the random optical speckle patterns that naturally emerge as light propagates through strongly scattering media as a structured illumination source for photoacoustic imaging. Our approach, termed blind structured illumination photoacoustic microscopy (BSIPAM), was inspired by recent work in fluorescence microscopy where super-resolution imaging was demonstrated using multiple unknown speckle illumination patterns. We extend this concept to the multiple scattering domain using photoacoustics (PA), with the speckle pattern serving to generate ultrasound. The optical speckle pattern that emerges as light propagates through diffuse media provides structured illumination to an object placed behind a scattering wall. The photoacoustic signal produced by such illumination is detected using a focused ultrasound transducer. We demonstrate, through both simulation and experiment, that by acquiring multiple photoacoustic images, each produced by a different random and unknown speckle pattern, an image of an absorbing object can be reconstructed with a spatial resolution far exceeding that of the ultrasound transducer. We experimentally and numerically demonstrate a gain in resolution of more than a factor of two by using multiple speckle illuminations. The variations in the photoacoustic signals generated with random speckle patterns are utilized in BSIPAM using a novel reconstruction algorithm. Exploiting joint sparsity, this algorithm is capable of reconstructing the absorbing structure from measured PA signals with a resolution close to the speckle size. Another way to generate random excitation for photoacoustic imaging is to use small absorbing particles, including contrast agents, which flow through small vessels. For such a set-up, the joint sparsity arises from the fact that all the particles move in the same vessels, and structured illumination is not necessary in that case.

  18. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    Purpose: This work develops a general framework, the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, which is then reconstructed by a certain AR into a residual image that is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR converges quickly. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with FDK as the AR and total-variation sparsity regularization, and improves image quality over both AR and IR. For example, FIR improves visual assessment and quantitative measurement in terms of both contrast and resolution, and reduces axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The authors were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
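
    The two-step scheme described in the Methods maps onto the generic proximal forward-backward splitting iteration. The skeleton below shows that structure, with the AR-projection gradient and the TV denoising prox left as hypothetical plug-ins rather than the authors' actual operators.

      import numpy as np

      def pfbs(grad_data_fidelity, prox_regularizer, x0, step, n_iter=50):
          """Generic proximal forward-backward splitting:
          x_{k+1} = prox_{step*R}( x_k - step * grad F(x_k) ),
          where F is the (filtered) data-fidelity term and R the sparsity
          regularizer, whose prox acts as a denoising solver."""
          x = x0.copy()
          for _ in range(n_iter):
              x = prox_regularizer(x - step * grad_data_fidelity(x), step)
          return x

      # Sketch of FIR-style plug-ins (hypothetical placeholders):
      #   grad = lambda x: AR(project(x) - data)       # AR-projection on the residual
      #   prox = lambda z, t: tv_denoise(z, weight=t)  # TV denoising step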

  19. Sparsity-optimized separation of body waves and ground-roll by constructing dictionaries using tunable Q-factor wavelet transforms with different Q-factors

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Chen, Wenchao; Wang, Xiaokai; Wang, Wei

    2017-10-01

    Low-frequency oscillatory ground-roll is regarded as one of the main regular interference waves, which obscures primary reflections in land seismic data. Suppressing the ground-roll can reasonably improve the signal-to-noise ratio of seismic data. Conventional suppression methods, such as high-pass and various f-k filtering, usually cause waveform distortions and loss of body wave information because of their simple cut-off operation. In this study, a sparsity-optimized separation of body waves and ground-roll, which is based on morphological component analysis theory, is realized by constructing dictionaries using tunable Q-factor wavelet transforms with different Q-factors. Our separation model is grounded on the fact that the input seismic data are composed of low-oscillatory body waves and high-oscillatory ground-roll. Two different waveform dictionaries using a low Q-factor and a high Q-factor, respectively, are confirmed as able to sparsely represent each component based on their diverse morphologies. Thus, seismic data including body waves and ground-roll can be nonlinearly decomposed into low-oscillatory and high-oscillatory components. This is a new noise attenuation approach according to the oscillatory behaviour of the signal rather than the scale or frequency. We illustrate the method using both synthetic and field shot data. Compared with results from conventional high-pass and f-k filtering, the results of the proposed method prove this method to be effective and advantageous in preserving the waveform and bandwidth of reflections.

  20. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  1. Image Reconstruction from Highly Undersampled (k, t)-Space Data with Joint Partial Separability and Sparsity Constraints

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Christodoulou, Anthony G.; Liang, Zhi-Pei

    2012-01-01

    Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k, t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method. PMID:22695345

  2. Measuring transferring similarity via local information

    NASA Astrophysics Data System (ADS)

    Yin, Likang; Deng, Yong

    2018-05-01

    Recommender systems have developed along with web science, and how to measure the similarity between users is crucial for collaborative filtering recommendation. Many efficient models have been proposed (e.g., the Pearson coefficient) to measure the direct correlation. However, the direct correlation measures are greatly affected by the sparsity of the dataset. In other words, the direct correlation measures would present an inauthentic similarity if two users have very few commonly selected objects. Transferring similarity overcomes this drawback by considering their common neighbors (i.e., the intermediates). Yet, the transferring similarity also has its drawback since it can only provide an interval of similarity. To break these limitations, we propose the Belief Transferring Similarity (BTS) model. The contributions of the BTS model are: (1) the BTS model addresses the issue of dataset sparsity by considering the high-order similarity. (2) the BTS model transforms an uncertain interval to a certain state based on fuzzy systems theory. (3) the BTS model is able to combine the transferring similarity of different intermediates using an information fusion method. Finally, we compare the BTS model with nine different link prediction methods in nine different networks, and we also illustrate the convergence property and efficiency of the BTS model.

  3. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spike) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy.

  4. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.

    PubMed

    Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben

    2017-08-02

    It is well known that the sparsity/low-rank of a vector/matrix can be rationally measured by the number of nonzero entries ($l_0$ norm)/the number of nonzero singular values (rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high order tensor is expected to provide more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods beyond the state of the art.

  5. Low rank magnetic resonance fingerprinting.

    PubMed

    Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C

    2016-08-01

    Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to conventional implementation of compressed sensing for MRF at 15% sampling ratio.
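
    The iterative scheme described above, a gradient step followed by a low-rank projection, can be sketched as below. The projection truncates the singular value decomposition of the space-by-time matrix; the data-consistency gradient is a hypothetical plug-in, and the dictionary-domain sparsity handling is omitted.

      import numpy as np

      def low_rank_project(X, rank):
          """Project a (space x time) matrix onto rank <= r matrices
          by truncating its singular value decomposition."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          s[rank:] = 0.0
          return (U * s) @ Vt

      def low_rank_recon(grad_step, X0, rank, step=1.0, n_iter=30):
          """Iterate: gradient step on the k-space data-consistency term
          (grad_step is a placeholder callable), then low-rank projection."""
          X = X0.copy()
          for _ in range(n_iter):
              X = low_rank_project(X - step * grad_step(X), rank)
          return X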

  6. PLATSIM: A Simulation and Analysis Package for Large-Order Flexible Systems. Version 2.0

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Kenny, Sean P.; Giesy, Daniel P.

    1997-01-01

    The software package PLATSIM provides efficient time and frequency domain analysis of large-order generic space platforms. PLATSIM can perform open-loop analysis or closed-loop analysis with linear or nonlinear control system models. PLATSIM exploits the particular form of sparsity of the plant matrices for very efficient linear and nonlinear time domain analysis, as well as frequency domain analysis. A new, original algorithm for the efficient computation of open-loop and closed-loop frequency response functions for large-order systems has been developed and is implemented within the package. Furthermore, a novel and efficient jitter analysis routine which determines jitter and stability values from time simulations in a very efficient manner has been developed and is incorporated in the PLATSIM package. In the time domain analysis, PLATSIM simulates the response of the space platform to disturbances and calculates the jitter and stability values from the response time histories. In the frequency domain analysis, PLATSIM calculates frequency response function matrices and provides the corresponding Bode plots. The PLATSIM software package is written in MATLAB script language. A graphical user interface is developed in the package to provide convenient access to its various features.

  7. Robust Kriged Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events, or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations, while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network, and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
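
    A stripped-down version of the outlier-sparsity idea, without the kriging or Kalman dynamics, alternates a least-squares fit with soft thresholding of the residual; the sketch below is illustrative only.

      import numpy as np

      def robust_ls_outliers(A, y, lam=1.0, n_iter=20):
          """Schematic l1-regularized outlier model: y = A*theta + o + noise,
          with o sparse. Alternate least squares for theta and soft
          thresholding of the residual for o."""
          o = np.zeros_like(y)
          for _ in range(n_iter):
              theta, *_ = np.linalg.lstsq(A, y - o, rcond=None)
              r = y - A @ theta
              o = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)  # sparse outliers
          return theta, o

      # Toy usage: two grossly corrupted measurements are absorbed into o.
      rng = np.random.default_rng(0)
      A = rng.standard_normal((100, 5))
      y = A @ np.ones(5)
      y[[3, 42]] += 25.0
      theta, o = robust_ls_outliers(A, y, lam=2.0)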

  8. A two-dimensional time domain near zone to far zone transformation

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Ryan, Deirdre; Beggs, John H.; Kunz, Karl S.

    1991-01-01

    A time domain transformation useful for extrapolating three dimensional near zone finite difference time domain (FDTD) results to the far zone was presented. Here, the corresponding two dimensional transform is outlined. While the three dimensional transformation produced a physically observable far zone time domain field, this is not convenient to do directly in two dimensions, since a convolution would be required. However, a representative two dimensional far zone time domain result can be obtained directly. This result can then be transformed to the frequency domain using a Fast Fourier Transform, corrected with a simple multiplicative factor, and used, for example, to calculate the complex wideband scattering width of a target. If an actual time domain far zone result is required, it can be obtained by inverse Fourier transform of the final frequency domain result.

  9. A two-dimensional time domain near zone to far zone transformation

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Ryan, Deirdre; Beggs, John H.; Kunz, Karl S.

    1991-01-01

    In a previous paper, a time domain transformation useful for extrapolating 3-D near zone finite difference time domain (FDTD) results to the far zone was presented. In this paper, the corresponding 2-D transform is outlined. While the 3-D transformation produced a physically observable far zone time domain field, this is not convenient to do directly in 2-D, since a convolution would be required. However, a representative 2-D far zone time domain result can be obtained directly. This result can then be transformed to the frequency domain using a Fast Fourier Transform, corrected with a simple multiplicative factor, and used, for example, to calculate the complex wideband scattering width of a target. If an actual time domain far zone result is required it can be obtained by inverse Fourier transform of the final frequency domain result.

  10. Enhancing Sparsity by Reweighted l(1) Minimization

    DTIC Science & Technology

    2008-07-01

    [Figure captions only; recoverable content: probability of successful signal recovery (declared when ‖x0 − x‖ℓ∞ ≤ 10−3) is plotted as a function of the sparsity level k, with solid curves for the traditional unweighted ℓ1 algorithm and dashed curves for a reweighted ℓ1 algorithm that outperforms it; panels show (a) performance after 4 reweighting iterations as a function of ε and (b) performance with fixed ε = 0.1 as a function of the number of reweighting iterations.]
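
    The reweighted ℓ1 scheme evaluated in these figures solves a sequence of weighted ℓ1 problems with weights w_i = 1/(|x_i| + ε). The sketch below implements that outer loop, with a simple weighted ISTA standing in for the inner convex solver; the report itself does not prescribe this particular inner solver.

      import numpy as np

      def weighted_ista(A, b, w, lam=0.01, n_iter=300):
          """Inner solver sketch: min 0.5*||Ax-b||^2 + lam*sum_i w_i*|x_i|."""
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2
          for _ in range(n_iter):
              z = x - step * (A.T @ (A @ x - b))
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
          return x

      def reweighted_l1(A, b, eps=0.1, n_outer=4):
          """Reweighted l1: iterate weighted l1 problems with weights
          w_i = 1/(|x_i| + eps), which mimics l0 more closely than a
          single unweighted l1 solve."""
          w = np.ones(A.shape[1])
          for _ in range(n_outer):
              x = weighted_ista(A, b, w)
              w = 1.0 / (np.abs(x) + eps)
          return x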

  11. MO-FG-204-06: A New Algorithm for Gold Nano-Particle Concentration Identification in Dual Energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L; Shen, C; Ng, M

    Purpose: Gold nano-particle (GNP) has recently attracted a lot of attention due to its potential as an imaging contrast agent and radiotherapy sensitiser. Imaging the GNP at its low concentration is a challenging problem. We propose a new algorithm to improve the identification of GNP based on dual energy CT (DECT). Methods: We consider three base materials: water, bone, and gold. Determining three density images from two images in DECT is an under-determined problem. We propose to solve this problem by exploring image domain sparsity via an optimization approach. The objective function contains four terms. A data-fidelity term ensures the fidelity between the identified material densities and the DECT images, while the other three terms enforce the sparsity in the gradient domain of the three images corresponding to the density of the base materials by using total variation (TV) regularization. A primal-dual algorithm is applied to solve the proposed optimization problem. We have performed simulation studies to test this model. Results: Our digital phantom in the tests contains water, bone regions and gold inserts of different sizes and densities. The gold inserts contain mixed material consisting of water at 1 g/cm3 and gold at a certain density. At a low gold density of 0.0008 g/cm3, the insert is hardly visible in DECT images, especially for those with small sizes. Our algorithm is able to decompose the DECT images into three density images. Those gold inserts at a low density can be clearly visualized in the density image. Conclusion: We have developed a new algorithm to decompose DECT images into three different material density images, in particular, to retrieve the density of gold. Numerical studies showed promising results.

  12. Homogeneity Pursuit

    PubMed Central

    Ke, Tracy; Fan, Jianqing; Wu, Yichao

    2014-01-01

    This paper explores the homogeneity of coefficients in high-dimensional regression, which extends the sparsity concept and is more general and suitable for many applications. Homogeneity arises when regression coefficients corresponding to neighboring geographical regions or a similar cluster of covariates are expected to be approximately the same. Sparsity corresponds to a special case of homogeneity with a large cluster of known atom zero. In this article, we propose a new method called clustering algorithm in regression via data-driven segmentation (CARDS) to explore homogeneity. New mathematics are provided on the gain that can be achieved by exploring homogeneity. Statistical properties of two versions of CARDS are analyzed. In particular, the asymptotic normality of our proposed CARDS estimator is established, which reveals better estimation accuracy for homogeneous parameters than that without homogeneity exploration. When our methods are combined with sparsity exploration, further efficiency can be achieved beyond the exploration of sparsity alone. This provides additional insights into the power of exploring low-dimensional structures in high-dimensional regression: homogeneity and sparsity. Our results also shed light on the properties of the fused Lasso. The newly developed method is further illustrated by simulation studies and applications to real data. Supplementary materials for this article are available online. PMID:26085701

  13. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    PubMed

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
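
    The second (Tikhonov) method admits the closed form sketched below: with an encoding operator E, a dictionary D, and undersampled data y, the coefficients minimizing ||EDc − y||² + λ||c||² follow from a single linear solve, which is why the reconstruction is fast. All names here are placeholders for the paper's actual q-space encoding and pdf dictionary.

      import numpy as np

      def tikhonov_dictionary_recon(E, D, y, lam=0.05):
          """Closed-form sketch: coefficients c minimizing
          ||E D c - y||^2 + lam*||c||^2, then reconstruct pdf = D c.
          E: undersampling/encoding matrix, D: dictionary whose columns
          are training pdfs, y: undersampled measurements."""
          M = E @ D
          c = np.linalg.solve(M.T @ M + lam * np.eye(D.shape[1]), M.T @ y)
          return D @ c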

  14. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to maximize the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying an alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The proposed method is qualitatively and quantitatively evaluated on simulated and real data to validate its accuracy, efficiency, and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
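
    One common form of the generalized p-shrinkage mapping referred to above is sign(x)·max(|x| − λ^(2−p)·|x|^(p−1), 0), which reduces to ordinary soft thresholding at p = 1. The sketch below uses this form; the paper's exact mapping may differ in parameterization.

      import numpy as np

      def p_shrink(x, lam, p):
          """Generalized p-shrinkage (one common, Chartrand-style form):
          sign(x) * max(|x| - lam**(2-p) * |x|**(p-1), 0).
          For p = 1 this is ordinary soft thresholding."""
          mag = np.abs(x)
          with np.errstate(divide='ignore', invalid='ignore'):
              shrunk = np.where(
                  mag > 0,
                  np.maximum(mag - lam ** (2 - p) * mag ** (p - 1), 0.0),
                  0.0,
              )
          return np.sign(x) * shrunk

      x = np.linspace(-2, 2, 9)
      print(p_shrink(x, lam=0.5, p=1.0))  # soft thresholding
      print(p_shrink(x, lam=0.5, p=0.5))  # sharper shrinkage for small p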

  15. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466

  16. Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Ashish; McNulty, Ian; Munson, Todd

    We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.

  17. Broadband Structural Dynamics: Understanding the Impulse-Response of Structures Across Multiple Length and Time Scales

    DTIC Science & Technology

    2010-08-18

    [Slide fragments; recoverable content: the spectral-domain response is calculated and the time-domain response is obtained through an inverse transform (transform the time function, apply a spectral-domain transfer function, inverse transform the result). Approach 4, WASABI, denotes Wavelet Analysis of Structural Anomalies.]

  18. Spatially Common Sparsity Based Adaptive Channel Estimation and Feedback for FDD Massive MIMO

    NASA Astrophysics Data System (ADS)

    Gao, Zhen; Dai, Linglong; Wang, Zhaocheng; Chen, Sheng

    2015-12-01

    This paper proposes a spatially common sparsity based adaptive channel estimation and feedback scheme for frequency division duplex based massive multi-input multi-output (MIMO) systems, which adapts the training overhead and pilot design to reliably estimate and feed back the downlink channel state information (CSI) with significantly reduced overhead. Specifically, a non-orthogonal downlink pilot design is first proposed, which is very different from standard orthogonal pilots. By exploiting the spatially common sparsity of massive MIMO channels, a compressive sensing (CS) based adaptive CSI acquisition scheme is proposed, where the consumed time slot overhead adaptively depends only on the sparsity level of the channels. Additionally, a distributed sparsity adaptive matching pursuit algorithm is proposed to jointly estimate the channels of multiple subcarriers. Furthermore, by exploiting the temporal channel correlation, a closed-loop channel tracking scheme is provided, which adaptively designs the non-orthogonal pilot according to the previous channel estimation to achieve enhanced CSI acquisition. Finally, we generalize the results of the multiple-measurement-vectors case in CS and derive the Cramer-Rao lower bound of the proposed scheme, which guides the design of the non-orthogonal pilot signals for improved performance. Simulation results demonstrate that the proposed scheme outperforms its counterparts, and it is capable of approaching the performance bound.

  19. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.

  20. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koulouri, Alexandra, E-mail: koulouri@uni-muenster.de; Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT; Brookes, Mike

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field. - Highlights: • Vector tomography is used to reconstruct electric fields generated by dipole sources. • Inverse solutions are based on longitudinal and transverse line integral measurements. • Transverse line integral measurements are used as a sparsity constraint. • Numerical procedure to approximate the line integrals is described in detail. • Patterns of the studied electric fields are correctly estimated.

  1. Joint Inversion of Body-Wave Arrival Times and Surface-Wave Dispersion Data in the Wavelet Domain Constrained by Sparsity Regularization

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.

    2014-12-01

    Recently, Zhang et al. (2014, Pure and Applied Geophysics) have developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. For the velocity model, it can be represented by different wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution so that the model has higher spatial resolution in the good data coverage zone. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface wave dispersion data for 3-D variations of shear wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface wave traveltimes and ray paths between sources and receivers. We will test the new joint inversion code at the SAFOD site to compare its performance over the previous code. We will also select another fault zone such as the San Jacinto Fault Zone to better image its structure.

  2. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of conductivity distribution between neighboring elements and is sensitive to noise. As an effect, the reconstructed images are spiky and depict a lack of smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.

  3. Flow in horizontally anisotropic multilayered aquifer systems with leaky wells and aquitards

    EPA Science Inventory

    Flow problems in an anisotropic domain can be transformed into ones in an equivalent isotropic domain by coordinate transformations. Once analytical solutions are obtained for the equivalent isotropic domain, they can be back transformed to the original anisotropic domain. The ex...

  4. High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
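
    The essential point, evaluating the finite Fourier transform on an arbitrary frequency grid, can be made explicit with a direct sum. The chirp z-transform computes the same quantity efficiently, and the cubic-interpolation refinement of the paper is omitted here.

      import numpy as np

      def finite_fourier(x, dt, freqs):
          """Evaluate X(f) = dt * sum_n x[n] * exp(-2*pi*i*f*n*dt) on an
          arbitrary frequency grid. This direct sum is O(N*M) but makes
          the definition explicit; the chirp z-transform achieves the
          same arbitrary resolution efficiently."""
          n = np.arange(len(x))
          return dt * np.exp(-2j * np.pi * np.outer(freqs, n * dt)) @ x

      # Usage: fine frequency resolution around a narrow band of interest.
      dt = 0.01
      t = np.arange(0, 10, dt)
      x = np.sin(2 * np.pi * 1.23 * t)
      freqs = np.linspace(1.0, 1.5, 501)  # 1 mHz steps, far finer than the 0.1 Hz FFT bin
      X = finite_fourier(x, dt, freqs)
      print(freqs[np.argmax(np.abs(X))])  # ~1.23 Hz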

  5. Sparsity-based super-resolved coherent diffraction imaging of one-dimensional objects.

    PubMed

    Sidorenko, Pavel; Kfir, Ofer; Shechtman, Yoav; Fleischer, Avner; Eldar, Yonina C; Segev, Mordechai; Cohen, Oren

    2015-09-08

    Phase-retrieval problems of one-dimensional (1D) signals are known to suffer from ambiguity that hampers their recovery from measurements of their Fourier magnitude, even when their support (a region that confines the signal) is known. Here we demonstrate sparsity-based coherent diffraction imaging of 1D objects using extreme-ultraviolet radiation produced from high harmonic generation. Using sparsity as prior information removes the ambiguity in many cases and enhances the resolution beyond the physical limit of the microscope. Our approach may be used in a variety of problems, such as diagnostics of defects in microelectronic chips. Importantly, this is the first demonstration of sparsity-based 1D phase retrieval from actual experiments, hence it paves the way for greatly improving the performance of Fourier-based measurement systems where 1D signals are inherent, such as diagnostics of ultrashort laser pulses, deciphering the complex time-dependent response functions (for example, time-dependent permittivity and permeability) from spectral measurements and vice versa.

  6. Observation on the transformation domains of super-elastic NiTi shape memory alloy and their evolutions during cyclic loading

    NASA Astrophysics Data System (ADS)

    Xie, Xi; Kan, Qianhua; Kang, Guozheng; Li, Jian; Qiu, Bo; Yu, Chao

    2016-04-01

    The strain field of a super-elastic NiTi shape memory alloy (SMA) and its variation during uniaxial cyclic tension-unloading were observed by a non-contact digital image correlation method, and then the transformation domains and their evolutions were indirectly investigated and discussed. It is seen that the super-elastic NiTi SMA exhibits remarkable localized deformation and the transformation domains evolve periodically with the repeated cyclic tension-unloading within the first several cycles. However, the evolution of transformation domains at the stage of stable cyclic transformation depends on the applied peak stress: when the peak stress is low, no obvious transformation band is observed and the strain field is nearly uniform; when the peak stress is large enough, obvious transformation bands occur due to the residual martensite caused by accumulated dislocations hindering the reverse transformation from induced martensite to austenite. Temperature variations measured by an infrared thermal imaging method further verify the formation and evolution of transformation domains.

  7. A robust holographic autofocusing criterion based on edge sparsity: comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    NASA Astrophysics Data System (ADS)

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2018-02-01

    The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated, on which a sparsity metric is applied. Here, we compare two different choices of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistive to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG and GoG (TC and GI applied to the gradient, respectively) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
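
    Both sparsity metrics have short standard definitions, sketched below for a nonnegative array such as the gradient modulus of a refocused hologram; normalization details in the published SoG criterion may differ.

      import numpy as np

      def tamura(c):
          """Tamura coefficient: sqrt(std / mean) of nonnegative data."""
          return np.sqrt(c.std() / c.mean())

      def gini(c):
          """Gini index (Hurley & Rickard form): ~0 for uniform data,
          approaching 1 for maximally sparse data."""
          c = np.sort(np.abs(c).ravel())
          n = c.size
          k = np.arange(1, n + 1)
          return 1.0 - 2.0 * np.sum((c / c.sum()) * (n - k + 0.5) / n)

      # Both metrics rise as an array becomes sparser:
      dense = np.ones(1000)
      sparse = np.zeros(1000)
      sparse[:10] = 100.0
      print(gini(dense), gini(sparse))      # ~0 vs ~1
      print(tamura(dense), tamura(sparse))  # 0 vs ~3.2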

  8. Two-level structural sparsity regularization for identifying lattices and defects in noisy images

    DOE PAGES

    Li, Xin; Belianinov, Alex; Dyck, Ondrej E.; ...

    2018-03-09

    Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for a within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atom structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.

  9. Two-level structural sparsity regularization for identifying lattices and defects in noisy images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xin; Belianinov, Alex; Dyck, Ondrej E.

    Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for a within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atom structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.

  10. Fast live cell imaging at nanometer scale using annihilating filter-based low-rank Hankel matrix approach

    NASA Astrophysics Data System (ADS)

    Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2015-09-01

    Localization microscopy such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging densely activated molecules can improve temporal resolution, which has been considered the major limitation of localization microscopy. However, such high-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address these technical challenges, we previously developed a localization algorithm called FALCON, which uses a quasi-continuous localization model with a sparsity prior in image space, and demonstrated it in both 2D and 3D live cell imaging. However, it still has several aspects that can be improved. Here, we propose a new localization algorithm using the annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, our new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live cell imaging experiment. The results confirmed that it achieves higher localization performance in both experiments in terms of accuracy and detection rate.
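
    The Fourier-domain duality that ALOHA exploits is easy to verify numerically: a signal that is a few spikes in the image domain yields Fourier samples that form a rank-deficient Hankel matrix, with rank equal to the number of spikes. A minimal sketch (window length and sizes are arbitrary choices, not the paper's settings):

    ```python
    import numpy as np
    from scipy.linalg import hankel, svdvals

    # A k-sparse "image" (point sources) and its Fourier samples.
    n, k = 64, 3
    rng = np.random.default_rng(0)
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = 1.0
    f = np.fft.fft(x)

    # Hankel-structured matrix built from the Fourier samples.
    w = 16  # window length (a free design choice)
    H = hankel(f[:w], f[w - 1:])
    print(np.round(svdvals(H)[:k + 2], 3))  # only k significant singular values
    ```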

  11. Combination of oriented partial differential equation and shearlet transform for denoising in electronic speckle pattern interferometry fringe patterns.

    PubMed

    Xu, Wenjun; Tang, Chen; Gu, Fan; Cheng, Jiajia

    2017-04-01

    Removing the massive speckle noise in electronic speckle pattern interferometry (ESPI) fringe patterns is a key step. Among spatial-domain filtering methods, oriented partial differential equations have been demonstrated to be a powerful tool; among transform-domain filtering methods, the shearlet transform is a state-of-the-art method. In this paper, we propose a filtering method for ESPI fringe pattern denoising that combines the second-order oriented partial differential equation (SOOPDE) and the shearlet transform, named SOOPDE-Shearlet. Here, the shearlet transform is introduced into ESPI fringe pattern denoising for the first time. This combination takes advantage of the fact that the spatial-domain filtering method SOOPDE and the transform-domain shearlet transform complement each other. We test the proposed SOOPDE-Shearlet on five experimentally obtained ESPI fringe patterns with poor quality and compare our method with SOOPDE, the shearlet transform, windowed Fourier filtering (WFF), and coherence-enhancing diffusion (CEDPDE). Among them, WFF and CEDPDE are the state-of-the-art methods for ESPI fringe pattern denoising in the transform domain and spatial domain, respectively. The experimental results demonstrate the good performance of the proposed SOOPDE-Shearlet.

  12. A comparative study of surface EMG classification by fuzzy relevance vector machine and fuzzy support vector machine.

    PubMed

    Xie, Hong-Bo; Huang, Hu; Wu, Jianhua; Liu, Lei

    2015-02-01

    We present a multiclass fuzzy relevance vector machine (FRVM) learning mechanism and evaluate its performance in classifying multiple hand motions using surface electromyographic (sEMG) signals. The relevance vector machine (RVM) is a sparse Bayesian kernel method that avoids some limitations of the support vector machine (SVM). However, RVM still suffers from possible unclassifiable regions in multiclass problems. We propose two fuzzy membership function-based FRVM algorithms to solve such problems and evaluate them on data from seven healthy subjects and two amputees performing six hand motions. Two feature sets, namely AR model coefficients with root mean square value (AR-RMS), and wavelet transform (WT) features, are extracted from the recorded sEMG signals. Fuzzy support vector machine (FSVM) analysis was also conducted for a broad comparison in terms of accuracy, sparsity, training and testing time, as well as the effect of training sample size. FRVM yielded comparable classification accuracy with dramatically fewer support vectors than FSVM. Furthermore, the processing delay of FRVM was much less than that of FSVM, whilst the training of FSVM was much faster than that of FRVM. The results indicate that an FRVM classifier trained with sufficient samples can achieve generalization capability comparable to FSVM with significant sparsity in multi-channel sEMG classification, which makes it more suitable for sEMG-based real-time control applications.

  13. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of an augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily implemented and quickly computed through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Results on simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
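
    The generalized p-shrinkage mapping mentioned above has a simple closed form in the style of Chartrand's p-shrinkage; the following hedged sketch (the exact operator used by the authors may differ in parametrization) reduces to ordinary soft thresholding at p = 1:

    ```python
    import numpy as np

    def p_shrink(x, lam, p):
        """Generalized p-shrinkage: soft thresholding when p = 1, and an
        increasingly aggressive shrinkage of small entries as p -> 0."""
        mag = np.abs(x)
        safe = np.where(mag > 0, mag, 1.0)  # avoid 0 ** (p - 1)
        return np.sign(x) * np.maximum(mag - lam ** (2 - p) * safe ** (p - 1), 0)

    x = np.linspace(-2, 2, 9)
    print(p_shrink(x, lam=0.5, p=1.0))  # ordinary soft thresholding
    print(p_shrink(x, lam=0.5, p=0.5))  # sparser: small values shrink harder
    ```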

  14. Poisson Noise Removal in Spherical Multichannel Images: Application to Fermi data

    NASA Astrophysics Data System (ADS)

    Schmitt, Jérémy; Starck, Jean-Luc; Fadili, Jalal; Digel, Seth

    2012-03-01

    The Fermi Gamma-ray Space Telescope, which was launched by NASA in June 2008, is a powerful space observatory which studies the high-energy gamma-ray sky [5]. Fermi's main instrument, the Large Area Telescope (LAT), detects photons in an energy range between 20 MeV and >300 GeV. The LAT is much more sensitive than its predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET) on the Compton Gamma-ray Observatory, and is expected to find several thousand gamma-ray point sources, an order of magnitude more than EGRET [13]. Even with its relatively large acceptance (∼2 m² sr), the number of photons detected by the LAT outside the Galactic plane and away from intense sources is relatively low, and the sky overall has a diffuse glow from cosmic-ray interactions with interstellar gas and low-energy photons that forms a background against which point sources need to be detected. In addition, the per-photon angular resolution of the LAT is relatively poor and strongly energy dependent, ranging from >10° at 20 MeV to ∼0.1° above 100 GeV. Consequently, the spherical photon count images obtained by Fermi are degraded by fluctuations in the number of detected photons. This kind of noise is strongly signal-dependent: on the brightest parts of the image, such as the Galactic plane or the brightest sources, there are many photons per pixel, so the photon noise is low. Outside the Galactic plane, the number of photons per pixel is low, which means that the photon noise is high. Such signal-dependent noise cannot be accurately modeled by a Gaussian distribution. The basic photon-imaging model assumes that the number of detected photons at each pixel location is Poisson distributed; more specifically, the image is considered a realization of an inhomogeneous Poisson process. This statistical noise makes source detection more difficult, so an efficient denoising method for spherical Poisson data is highly desirable. Several techniques have been proposed in the literature to estimate Poisson intensity in two dimensions (2D). A major class of methods adopts a multiscale Bayesian framework specifically tailored for Poisson data [18], independently initiated by Timmerman and Nowak [23] and Kolaczyk [14]. Lefkimmiatis et al. [15] proposed an improved Bayesian framework for analyzing Poisson processes, based on a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities in adjacent scales are modeled as mixtures of conjugate parametric distributions. Another approach preprocesses the count data with a variance-stabilizing transform (VST) such as the Anscombe [4] or Fisz [10] transforms, applied respectively in the spatial [8] or wavelet domain [11]. The transform reshapes the data so that the noise becomes approximately Gaussian with constant variance; standard techniques for independent identically distributed Gaussian noise can then be used for denoising. Zhang et al. [25] proposed a powerful method called the multiscale VST (MS-VST). It combines a VST with a multiscale transform (wavelets, ridgelets, or curvelets), yielding asymptotically normally distributed coefficients with known variances. The interest of using a multiscale method is to exploit the sparsity properties of the data: the data are transformed into a domain in which they are sparse and, since the noise is not sparse in any transform domain, it can easily be separated from the signal. When the noise is Gaussian with known variance, it is easy to remove by thresholding in the wavelet domain. The choice of the multiscale transform depends on the morphology of the data: wavelets represent regular structures and isotropic singularities efficiently, whereas ridgelets are designed to represent global lines in an image, and curvelets efficiently represent curvilinear contours. Significant coefficients are then detected with binary hypothesis testing, and the final estimate is reconstructed with an iterative scheme.
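
    The VST preprocessing step mentioned above is one line of code; a minimal sketch of the Anscombe transform and a simple algebraic inverse (exact unbiased inverses exist but are more involved):

    ```python
    import numpy as np

    def anscombe(x):
        """Anscombe transform: Poisson counts -> approx. unit-variance Gaussian."""
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inv_anscombe(y):
        """Naive algebraic inverse (biased at low counts)."""
        return (y / 2.0) ** 2 - 3.0 / 8.0

    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=5.0, size=100_000)
    print(anscombe(counts).std())  # close to 1, nearly independent of lam
    ```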

  15. Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms

    DTIC Science & Technology

    2004-08-01

    inverse transform process. 2. BACKGROUND The image processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory has been...coefficients from the wavelet domain back into the original signal domain. In other words, the inverse transform produces the original signal x(t) from the...coefficients for an inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared

  16. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. In this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and that the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure-based regularization has the potential to balance structural a priori information with data-driven reconstruction. It is robust to noise, reduces artifacts, and produces images that reflect anatomy and are thus easier for physicians to interpret.

  17. A new principle technique for the transformation from frequency domain to time domain

    NASA Astrophysics Data System (ADS)

    Gao, Ben-Qing

    2017-03-01

    A principle technique for the transformation from the frequency domain to the time domain is presented. First, a special type of frequency-domain transcendental equation is obtained for an expected frequency-domain parameter, which is a rational or irrational fraction expression. Second, the inverse Laplace transformation is performed. When the two time-domain factors corresponding to the two frequency-domain factors on the two sides of the frequency-domain transcendental equation are known quantities, a time-domain transcendental equation is reached. Finally, the expected time-domain parameter corresponding to the expected frequency-domain parameter can be obtained by an inverse convolution process. Starting from a rational or irrational fraction expression, the complete solution process is provided. Meanwhile, the properties of the time-domain sequence are analyzed and the strategy for choosing the parameter values is described. Numerical examples are presented to verify the proposed theory and technique. Beyond rational and irrational fraction expressions, examples involving the complex relative permittivity of water and plasma are used for verification. The principle method proposed in the paper can easily solve problems that are difficult to solve by Laplace transformation.

  18. Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Xiaorun; Zhao, Liaoying

    2016-01-01

    Hyperspectral unmixing aims at extracting pure material spectra, accompanied by their corresponding proportions, from a mixed pixel. Because they model the distribution of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to outperform linear mixing models (LMMs) in complicated scenarios. In the past years, numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs consider only the sum-to-one or positivity constraints, while the widespread sparsity in real material mixing is a factor that cannot be ignored. That is, even for non-LMMs, a pixel is usually composed of a few spectral signatures of different materials drawn from the full set of pure pixels. Thus, in this paper, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit the sparsity feature in the nonlinear model and use it to enhance the unmixing performance. This sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was implemented on synthetic and real hyperspectral data and showed an advantage over competing algorithms in the experiments.
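
    For a flavor of how such a sparsity-penalized factorization is solved, here is a hedged sketch of generic l1-penalized NMF multiplicative updates (a simplification, not the paper's Fan-model solver; all sizes are toy values):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    Y = np.abs(rng.standard_normal((100, 50)))  # bands x pixels (toy data)
    k, lam, eps = 5, 0.1, 1e-9                  # endmembers, sparsity weight
    E = np.abs(rng.standard_normal((100, k)))   # endmember-like factor
    A = np.abs(rng.standard_normal((k, 50)))    # abundance-like factor

    for _ in range(200):
        # Multiplicative updates keep factors non-negative; the lam term in
        # the denominator implements the l1 sparsity penalty on abundances.
        A *= (E.T @ Y) / (E.T @ E @ A + lam + eps)
        E *= (Y @ A.T) / (E @ A @ A.T + eps)

    print(f"near-zero abundance fraction: {np.mean(A < 1e-3):.2f}")
    ```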

  19. Sparsity-based Poisson denoising with dictionary learning.

    PubMed

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist to convert Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in the low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR.

  20. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1).

  1. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1).
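
    The abstracts above are truncated in this record, but the general shape of an iteratively-weighted thresholding step for Lp (0 < p < 1) regularization can be sketched as follows (a generic reweighted ISTA, not the authors' SAITA; all parameters are illustrative):

    ```python
    import numpy as np

    def reweighted_lp_ista(A, y, p=0.5, lam=0.5, n_iter=300, eps=0.1):
        """Gradient step + soft threshold with weights from the previous
        iterate, approximating the L_p penalty gradient p * |x| ** (p - 1).
        The eps smoothing keeps early iterations from over-thresholding."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        for _ in range(n_iter):
            g = x - step * A.T @ (A @ x - y)
            w = p * (np.abs(x) + eps) ** (p - 1)
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam * w, 0)
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[[3, 77, 150]] = [1.0, -2.0, 1.5]
    x_hat = reweighted_lp_ista(A, A @ x_true)
    print(np.flatnonzero(np.abs(x_hat) > 0.1))  # ideally recovers {3, 77, 150}
    ```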

  2. Sparsity-driven tomographic reconstruction of atmospheric water vapor using GNSS and InSAR observations

    NASA Astrophysics Data System (ADS)

    Heublein, Marion; Alshawaf, Fadwa; Zhu, Xiao Xiang; Hinz, Stefan

    2016-04-01

    An accurate knowledge of the 3D distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. On the other hand, as water vapor causes a delay in microwave signal propagation within the atmosphere, a precise determination of water vapor is required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). However, due to its high variability in time and space, the atmospheric water vapor distribution is difficult to model. Since GNSS meteorology was introduced about twenty years ago, it has increasingly been used as a geodetic technique to generate maps of 2D Precipitable Water Vapor (PWV). Moreover, several approaches for 3D tomographic water vapor reconstruction from GNSS-based estimates using simple least-squares adjustment have been presented. In this poster, we present an innovative Compressive Sensing (CS) concept for sparsity-driven tomographic reconstruction of 3D atmospheric wet refractivity fields using data from GNSS and InSAR. The 2D zenith wet delay (ZWD) estimates are obtained by combining point-wise estimates of the wet delay from GNSS observations with partial InSAR wet delay maps. These ZWD estimates are aggregated to derive realistic wet delay input data of 100 points, as if corresponding to 100 GNSS sites within an area of 100 km × 100 km in the test region of the Upper Rhine Graben. The synthetic ZWD values can be mapped into different elevation and azimuth angles. Using the cosine transform, a sparse representation of the wet refractivity field is obtained. In contrast to existing tomographic approaches, we exploit sparsity as a prior for the regularization of the underdetermined inverse system. The new aspects of this work include both the combination of GNSS and InSAR data for water vapor tomography and the CS-based estimation. The accuracy of the estimated 3D water vapor field is determined by comparing slant integrated wet delays, computed from the estimated wet refractivities, with real GNSS wet delay estimates. This comparison is performed along different elevation and azimuth angles.
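
    The core of the CS step, recovering a field that is sparse in the cosine basis from few linear measurements, can be sketched with a few lines of ISTA (sizes, operator, and sparsity level are made up for illustration, not the study's geometry):

    ```python
    import numpy as np
    from scipy.fft import idct

    rng = np.random.default_rng(2)
    n, m = 128, 40                         # unknowns vs. measurements
    c = np.zeros(n)
    c[[0, 2, 5]] = [3.0, 1.5, -1.0]        # sparse DCT coefficients
    field = idct(c, norm='ortho')          # smooth "wet refractivity" profile
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    y = Phi @ field                        # underdetermined observations

    Psi = idct(np.eye(n), axis=0, norm='ortho')  # DCT synthesis matrix
    A = Phi @ Psi                          # measurements act on coefficients
    x = np.zeros(n)
    step, lam = 1.0 / np.linalg.norm(A, 2) ** 2, 0.01
    for _ in range(500):                   # ISTA: gradient + soft threshold
        g = x - step * A.T @ (A @ x - y)
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)
    print(np.flatnonzero(np.abs(x) > 0.1))  # should match the support {0, 2, 5}
    ```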

  3. Large-scale benchmarking reveals false discoveries and count transformation sensitivity in 16S rRNA gene amplicon data analysis methods used in microbiome studies.

    PubMed

    Thorsen, Jonathan; Brejnrod, Asker; Mortensen, Martin; Rasmussen, Morten A; Stokholm, Jakob; Al-Soud, Waleed Abu; Sørensen, Søren; Bisgaard, Hans; Waage, Johannes

    2016-11-25

    There is an immense scientific interest in the human microbiome and its effects on human physiology, health, and disease. A common approach for examining bacterial communities is high-throughput sequencing of 16S rRNA gene hypervariable regions, aggregating sequence-similar amplicons into operational taxonomic units (OTUs). Strategies for detecting differential relative abundance of OTUs between sample conditions include classical statistical approaches as well as a plethora of newer methods, many borrowing from the related field of RNA-seq analysis. This effort is complicated by unique data characteristics, including sparsity, sequencing depth variation, and nonconformity of read counts to theoretical distributions, which is often exacerbated by exploratory and/or unbalanced study designs. Here, we assess the robustness of available methods for (1) inference in differential relative abundance analysis and (2) beta-diversity-based sample separation, using a rigorous benchmarking framework based on large clinical 16S microbiome datasets from different sources. Running more than 380,000 full differential relative abundance tests on real datasets with permuted case/control assignments and in silico-spiked OTUs, we identify large differences in method performance on a range of parameters, including false positive rates, sensitivity to sparsity and case/control balances, and spike-in retrieval rate. In large datasets, methods with the highest false positive rates also tend to have the best detection power. For beta-diversity-based sample separation, we show that library size normalization has very little effect and that the distance metric is the most important factor in terms of separation power. Our results, generalizable to datasets from different sequencing platforms, demonstrate how the choice of method considerably affects analysis outcome. Here, we give recommendations for tools that exhibit low false positive rates, have good retrieval power across effect sizes and case/control proportions, and have low sparsity bias. Result output from some commonly used methods should be interpreted with caution. We provide an easily extensible framework for benchmarking of new methods and future microbiome datasets.

  4. Broadband CARS spectral phase retrieval using a time-domain Kramers–Kronig transform

    PubMed Central

    Liu, Yuexin; Lee, Young Jong; Cicerone, Marcus T.

    2014-01-01

    We describe a closed-form approach for performing a Kramers–Kronig (KK) transform that can be used to rapidly and reliably retrieve the phase, and thus the resonant imaginary component, from a broadband coherent anti-Stokes Raman scattering (CARS) spectrum with a nonflat background. In this approach, we transform the frequency-domain data to the time domain, perform an operation that ensures a causality criterion is met, then transform back to the frequency domain. The fact that this method handles causality in the time domain allows us to conveniently account for the spectrally varying nonresonant background from CARS as a response function with a finite rise time. A phase error accompanies the KK transform of data with a finite frequency range; in the examples shown here, that phase error leads to small (<1%) errors in the retrieved resonant spectra. PMID:19412273
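
    Enforcing causality in the time domain is commonly implemented with a cepstral (minimum-phase) construction; a hedged sketch of that generic recipe (not necessarily the authors' exact windowing) is:

    ```python
    import numpy as np

    def kk_phase(power_spectrum):
        """Retrieve a phase from a measured spectrum by applying a one-sided
        (causal) window to the time-domain transform of the log amplitude."""
        n = len(power_spectrum)
        log_amp = 0.5 * np.log(np.asarray(power_spectrum, dtype=float))
        t = np.fft.ifft(log_amp)           # to the time domain
        u = np.zeros(n)                    # causal window: keep t >= 0 only
        u[0] = 1.0
        u[1:(n + 1) // 2] = 2.0
        if n % 2 == 0:
            u[n // 2] = 1.0
        return np.imag(np.fft.fft(t * u))  # phase, back in the frequency domain
    ```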

  5. Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.

    PubMed

    Majumdar, Angshul

    2013-06-01

    In this paper we address the problem of dynamic MRI reconstruction from partially sampled k-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least-squares minimization regularized by sparsity and low-rank penalties. Ideally the sparsity and low-rank penalties should be represented by the l0-norm and the rank of a matrix; however, both are NP-hard penalties. The previous studies used the convex l1-norm as a surrogate for the l0-norm and the non-convex Schatten-q norm (0 < q < 1) as a surrogate for the rank of the matrix.
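
    The two surrogate penalties correspond to two simple proximal steps, singular value thresholding for the (convexified) rank and soft thresholding for the sparsity, which is the skeleton of most low-rank-plus-sparse reconstructions. A hedged toy sketch (a plain alternation on a fully sampled matrix, ignoring the k-space sampling operator):

    ```python
    import numpy as np

    def svt(X, tau):
        """Singular value thresholding: proximal step of the nuclear norm."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

    def soft(X, tau):
        """Soft thresholding: proximal step of the l1 penalty."""
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

    # Casorati-style matrix (space x time): low-rank background + sparse events.
    rng = np.random.default_rng(3)
    M = np.outer(rng.standard_normal(100), np.ones(30))          # rank-1 part
    M[rng.integers(0, 100, 20), rng.integers(0, 30, 20)] += 5.0  # sparse part
    L = S = np.zeros_like(M)
    for _ in range(50):
        L = svt(M - S, tau=1.0)
        S = soft(M - L, tau=0.5)
    print(np.linalg.matrix_rank(L, tol=1e-6), np.count_nonzero(S))
    ```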

  6. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimal number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity-based convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments involving the identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, comprising Db6 wavelets, Sym4 wavelets, and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square, and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.

  7. Hyperspectral Image Classification via Kernel Sparse Representation

    DTIC Science & Technology

    2013-01-01

    classification algorithms. Moreover, the spatial coherency across neighboring pixels is also incorporated through a kernelized joint sparsity model, where...joint sparsity model, where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training...hyperspectral imagery, joint sparsity model, kernel methods, sparse representation. I. INTRODUCTION HYPERSPECTRAL imaging sensors capture images

  8. Optimization-based image reconstruction in x-ray computed tomography by sparsity exploitation of local continuity and nonlocal spatial self-similarity

    NASA Astrophysics Data System (ADS)

    Han-Ming, Zhang; Lin-Yuan, Wang; Lei, Li; Bin, Yan; Ai-Long, Cai; Guo-En, Hu

    2016-07-01

    The additional sparse prior of images has been the subject of much research in sparse-view computed tomography (CT) reconstruction. A method employing image gradient sparsity is often used to reduce the sampling rate and is shown to remove unwanted artifacts while preserving sharp edges, but it may cause blocky or patchy artifacts. To eliminate this drawback, we propose a novel sparsity exploitation-based model for CT image reconstruction. In the presented model, the sparse representation and sparsity exploitation of both the gradient and the nonlocal gradient are investigated. The new model is shown to offer the potential for better results by introducing similarity prior information about the image structure. An effective alternating direction minimization algorithm is then developed to optimize the objective function, with a robust convergence result. Qualitative and quantitative evaluations have been carried out on both simulated and real data in terms of accuracy and resolution properties. The results indicate that the proposed method achieves better image quality with the theoretically expected preservation of detailed features. Project supported by the National Natural Science Foundation of China (Grant No. 61372172).

  9. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, B; Gao, H

    Purpose: Accelerated dynamic MRI is important for MRI-guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work investigates sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed, with improved image quality over the L1-sparsity method.

  10. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography

    PubMed Central

    Jørgensen, J. S.; Sidky, E. Y.

    2015-01-01

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620

  11. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.

    PubMed

    Jørgensen, J S; Sidky, E Y

    2015-06-13

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.

  12. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
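
    The qualitative difference between the two cost functions is easy to reproduce with off-the-shelf solvers; in this hedged toy setup (a hypothetical transfer matrix with three clusters of nearly identical columns), the lasso tends to activate isolated columns while the elastic-net spreads weight across each correlated cluster:

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet, Lasso

    rng = np.random.default_rng(4)
    base = rng.standard_normal((200, 5))
    # Three near-duplicates of each column: correlated "loudspeakers".
    G = np.hstack([base + 0.05 * rng.standard_normal((200, 5)) for _ in range(3)])
    p_target = G[:, 0] + G[:, 5]  # target field driven by one correlated pair

    lasso = Lasso(alpha=0.01).fit(G, p_target)
    enet = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(G, p_target)
    print("active sources:", np.count_nonzero(lasso.coef_),
          "(lasso) vs", np.count_nonzero(enet.coef_), "(elastic-net)")
    ```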

  13. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which pose a challenging problem for current bearing fault diagnosis techniques. Moreover, although much progress in sparse representation theory has been made in the feature extraction of fault information, the theory also confronts inevitable performance degradation because relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, which go beyond simple sparsity by introducing more intrinsic structures of feature information. This work exploits the underlying prior that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved through the block coordinate descent (BCD) method, is formulated. Additionally, an adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further ensure sufficient sparsity of the feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated on vibration signals from an experimental aircraft engine bearing rig.

  14. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction in electrical resistivity tomography (ERT) is highly non-linear, sparse, and ill-posed. The inverse problem is even more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. When favorable conditions are met, CS has proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transform (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS effectively reconstructed the sub-surface image at lower computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.

  15. Glimpse: Sparsity based weak lensing mass-mapping tool

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.

    2018-02-01

    Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.

  16. Photoinduced Domain Pattern Transformation in Ferroelectric-Dielectric Superlattices

    DOE PAGES

    Ahn, Youngjun; Park, Joonkyu; Pateras, Anastasios; ...

    2017-07-31

    The nanodomain pattern in ferroelectric/dielectric superlattices transforms to a uniform polarization state under above-bandgap optical excitation. X-ray scattering reveals a disappearance of domain diffuse scattering and an expansion of the lattice. Furthermore, the reappearance of the domain pattern occurs over a period of seconds at room temperature, suggesting a transformation mechanism in which charge carriers in long-lived trap states screen the depolarization field. A Landau-Ginzburg-Devonshire model predicts changes in lattice parameter and a critical carrier concentration for the transformation.

  17. Photoinduced Domain Pattern Transformation in Ferroelectric-Dielectric Superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Youngjun; Park, Joonkyu; Pateras, Anastasios

    2017-07-01

    The nanodomain pattern in ferroelectric/dielectric superlattices transforms to a uniform polarization state under above-bandgap optical excitation. X-ray scattering reveals a disappearance of domain diffuse scattering and an expansion of the lattice. The reappearance of the domain pattern occurs over a period of seconds at room temperature, suggesting a transformation mechanism in which charge carriers in long-lived trap states screen the depolarization field. A Landau-Ginzburg-Devonshire model predicts changes in lattice parameter and a critical carrier concentration for the transformation.

  18. SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    2015-06-15

    Purpose: This work is to develop an effective algorithm for beam angle optimization (BAO), with emphasis on enabling further improvement over existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm uses a priori beam angle templates as the initial guess and iteratively generates angular updates for this initial set (the angle generation method), with improved dose conformality as quantitatively measured by the objective function. During each iteration, we select "the test angle" in the initial set and use group-sparsity based fluence map optimization to identify "the candidate angle" for updating "the test angle": all the angles in the initial set except "the test angle", namely "the fixed set", are set free, i.e., with no group-sparsity penalty, while the rest of the angles, including "the test angle", form "the working set" during this iteration. Then "the candidate angle" is selected as the angle in "the working set" with locally maximal group sparsity and the smallest objective function value, and it replaces "the test angle" if "the fixed set" with "the candidate angle" yields a smaller objective function value when solving the standard fluence map optimization (with no group-sparsity regularization). Similarly, the other angles in the initial set are in turn selected as "the test angle" for angular updates, and this chain of updates is iterated until no new angular update is identified for a full loop. Results: Tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm; for example, the optimized angular set from the proposed BAO algorithm was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality over the given template. Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500)

  19. Remote-sensing image encryption in hybrid domains

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong

    2012-04-01

    Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites and often contain confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial domain and the transform domain. First, the low-pass subband coefficients of the image's DWT (discrete wavelet transform) decomposition are permuted by a PWLCM (piecewise linear chaotic map) system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) logistic map and an XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical, and differential attacks. Meanwhile, the proposed algorithm has the encryption efficiency to satisfy practical requirements.
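
    A minimal sketch of the hybrid spatial/transform-domain idea (using PyWavelets, a plain logistic map in place of the paper's PWLCM and 2D logistic map, and made-up key parameters; illustrative only, since a real cipher must keep every step exactly invertible):

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def chaos_stream(n, x0=0.3456, r=3.99):
        """Logistic-map key stream (stand-in for the paper's chaotic maps)."""
        seq = np.empty(n)
        for i in range(n):
            x0 = r * x0 * (1.0 - x0)
            seq[i] = x0
        return seq

    def encrypt(img):
        # Transform domain: permute the low-pass DWT coefficients in the
        # order induced by sorting a chaotic sequence.
        cA, details = pywt.dwt2(img.astype(float), 'haar')
        perm = np.argsort(chaos_stream(cA.size))
        cA = cA.ravel()[perm].reshape(cA.shape)
        mixed = pywt.idwt2((cA, details), 'haar')
        # Spatial domain: XOR-diffuse the reconstructed pixels with a
        # second chaotic key stream.
        key = (chaos_stream(mixed.size) * 255).astype(np.uint8)
        pix = np.clip(mixed, 0, 255).astype(np.uint8).ravel()
        return np.bitwise_xor(pix, key).reshape(img.shape)
    ```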

  20. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    PubMed

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates with high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to sparsity in the distribution. In this study, we define lower rank (first ranked), medium rank (second ranked), and highest rank (third ranked) outliers, respectively. For instance, the first-ranked outliers are located in conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers are near the clusters (a moderately sparse distribution). To perform the conformational search efficiently, resampling from outliers of a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD strongly accelerated the exploration of conformational space by expanding its edges, whereas the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules.

  1. Compressed sensing for ultrasound computed tomography.

    PubMed

    van Sloun, Ruud; Pandharipande, Ashish; Mischi, Massimo; Demi, Libertario

    2015-06-01

    Ultrasound computed tomography (UCT) allows the reconstruction of quantitative tissue characteristics, such as speed of sound, mass density, and attenuation. Lowering its acquisition time would be beneficial; however, this is fundamentally limited by the physical time of flight and the number of transmission events. In this letter, we propose a compressed sensing solution for UCT. The adopted measurement scheme is based on compressed acquisitions, with concurrent randomised transmissions in a circular array configuration. Reconstruction of the image is then obtained by combining the Born iterative method and total variation minimization, thereby exploiting variation sparsity in the image domain. Evaluation using simulated UCT scattering measurements shows that the proposed transmission scheme performs better than uniform undersampling and is able to reduce acquisition time by almost one order of magnitude while maintaining high spatial resolution.

  2. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with the sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio is rate optimal; for the sparse regime, we propose an extended Higher Criticism Test and show it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645

  3. Parallel Finite Element Domain Decomposition for Structural/Acoustic Analysis

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Tungkahotara, Siroj; Watson, Willie R.; Rajan, Subramaniam D.

    2005-01-01

    A domain decomposition (DD) formulation for solving sparse linear systems of equations resulting from finite element analysis is presented. The formulation incorporates mixed direct and iterative equation-solving strategies and other novel algorithmic ideas that are optimized to take advantage of sparsity and exploit modern computer architecture, such as memory and parallel computing. The most time-consuming part of the formulation is identified, and the critical roles of direct sparse and iterative solvers within the framework of the formulation are discussed. Experiments on several computer platforms using several complex test matrices are conducted using software based on the formulation. Small-scale structural examples are used to validate the steps in the formulation, and large-scale (1,000,000+ unknowns) duct acoustic examples are used for evaluation on ORIGIN 2000 processors and a cluster of 6 PCs (running under the Windows environment). Statistics show that the formulation is efficient in both sequential and parallel computing environments and that it is significantly faster and consumes less memory than a formulation based on one of the best available commercial parallel sparse solvers.

  4. Optimal Sparse Upstream Sensor Placement for Hydrokinetic Turbines

    NASA Astrophysics Data System (ADS)

    Cavagnaro, Robert; Strom, Benjamin; Ross, Hannah; Hill, Craig; Polagye, Brian

    2016-11-01

    Accurate measurement of the flow field incident upon a hydrokinetic turbine is critical for performance evaluation during testing and setting boundary conditions in simulation. Additionally, turbine controllers may leverage real-time flow measurements. Particle image velocimetry (PIV) is capable of rendering a flow field over a wide spatial domain in a controlled, laboratory environment. However, PIV's lack of suitability for natural marine environments, high cost, and intensive post-processing diminish its potential for control applications. Conversely, sensors such as acoustic Doppler velocimeters (ADVs), are designed for field deployment and real-time measurement, but over a small spatial domain. Sparsity-promoting regression analysis such as LASSO is utilized to improve the efficacy of point measurements for real-time applications by determining optimal spatial placement for a small number of ADVs using a training set of PIV velocity fields and turbine data. The study is conducted in a flume (0.8 m2 cross-sectional area, 1 m/s flow) with laboratory-scale axial and cross-flow turbines. Predicted turbine performance utilizing the optimal sparse sensor network and associated regression model is compared to actual performance with corresponding PIV measurements.
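
    The sparsity-promoting selection step can be prototyped directly with an l1-regularized regression: fit turbine output against all candidate measurement points and keep the locations with nonzero coefficients. A hedged toy sketch (synthetic snapshots and sizes, not the flume data):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(5)
    n_frames, n_points = 500, 1000                 # PIV frames x grid points
    U = rng.standard_normal((n_frames, n_points))  # velocity snapshots
    true_locs = [12, 345, 800]
    power = U[:, true_locs] @ np.array([0.5, 0.3, 0.2])  # synthetic turbine power

    model = Lasso(alpha=0.05).fit(U, power)
    sensor_locs = np.flatnonzero(model.coef_)  # sparse set of ADV placements
    print(sensor_locs)  # ideally a small set containing [12, 345, 800]
    ```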

  5. Hammerstein system representation of financial volatility processes

    NASA Astrophysics Data System (ADS)

    Capobianco, E.

    2002-05-01

    We show new modeling aspects of stock return volatility processes, by first representing them through Hammerstein systems, and by then approximating the observed and transformed dynamics with wavelet-based atomic dictionaries. We thus propose a hybrid statistical methodology for volatility approximation and non-parametric estimation, and aim to use the information embedded in a bank of volatility sources obtained by decomposing the observed signal with multiresolution techniques. Scale-dependent information refers both to market activity inherent to different temporally aggregated trading horizons, and to a variable degree of sparsity in representing the signal. A decomposition of the expansion coefficients into least dependent coordinates is then implemented through Independent Component Analysis. Based on the described steps, the features of volatility can be more effectively detected through global and greedy algorithms.

  6. Sparse decomposition of seismic data and migration using Gaussian beams with nonzero initial curvature

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Yanfei

    2018-04-01

    We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.

  7. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT) because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results. Then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous backgrounds are carried out to validate the performance of the proposed scheme. The results show that reconstruction quality can be improved with the proposed iterative correction scheme: the influence of background fluorescence in FMT is effectively reduced thanks to the filtering of the intermediate results and the detail preservation and noise suppression of L1 regularization.

  8. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0-norm based optimization problem. Existing methods usually utilize a relaxation of the original l0 norm. However, the relaxation may introduce sensitive weighting parameters and additional approximation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem with two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can deal with the l0 norm directly via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.

  9. WW domain-mediated interaction with Wbp2 is important for the oncogenic property of TAZ

    PubMed Central

    Chan, S W; Lim, C J; Huang, C; Chong, Y F; Gunaratne, H J; Hogue, K A; Blackstock, W P; Harvey, K F; Hong, W

    2011-01-01

    The transcriptional co-activators YAP and TAZ are downstream targets inhibited by the Hippo tumor suppressor pathway. YAP and TAZ both possess WW domains, which are important protein–protein interaction modules that mediate interaction with proline-rich motifs, most commonly PPXY. The WW domains of YAP have complex regulatory roles as exemplified by recent reports showing that they can positively or negatively influence YAP activity in a cell and context-specific manner. In this study, we show that the WW domain of TAZ is important for it to transform both MCF10A and NIH3T3 cells and to activate transcription of ITGB2 but not CTGF, as introducing point mutations into the WW domain of TAZ (WWm) abolished its transforming and transcription-promoting ability. Using a proteomic approach, we discovered potential regulatory proteins that interact with TAZ WW domain and identified Wbp2. The interaction of Wbp2 with TAZ is dependent on the WW domain of TAZ and the PPXY-containing C-terminal region of Wbp2. Knockdown of endogenous Wbp2 suppresses, whereas overexpression of Wbp2 enhances, TAZ-driven transformation. Forced interaction of WWm with Wbp2 by direct C-terminal fusion of full-length Wbp2 or its TAZ-interacting C-terminal domain restored the transforming and transcription-promoting ability of TAZ. These results suggest that the WW domain-mediated interaction with Wbp2 promotes the transforming ability of TAZ. PMID:20972459

  10. System Transformation in Patient-Centered Medical Home (PCMH): Variable Impact on Chronically Ill Patients' Utilization.

    PubMed

    Carlin, Caroline S; Flottemesch, Thomas J; Solberg, Leif I; Werner, Ann M

    2016-01-01

    Research connecting patient-centered medical homes (PCMHs) with improved quality and reduced utilization is inconsistent, possibly because individual domains of change, and the stage of change, are not incorporated in the research design. The objective of this study was to examine the association between stage and domain of change and patterns of health care utilization. This was a cross-sectional observational study that included 87 Minnesota clinics certified as medical homes. Patients included those receiving management for diabetes or cardiovascular disease with insurance coverage by payers participating in the study. PCMH transformation stage was defined by the practice systems in place, with measurements summarized in 5 domains. Health care utilization was measured by total utilization, frequency of outpatient visits and prescriptions, and occurrence of inpatient and emergency department visits. PCMH transformation was associated with few changes in utilization, but there were important differences by the underlying domains of change. We demonstrate meaningful differences in the impact of PCMH transformation by diagnosis cohort and comorbidity status of the patient. Because the association of health care utilization with PCMH transformation varied by transformation domain and patient diagnosis, practice leaders need to be supported by research incorporating detailed measures of PCMH transformation. © Copyright 2016 by the American Board of Family Medicine.

  11. Wavelet transformation to determine impedance spectra of lithium-ion rechargeable battery

    NASA Astrophysics Data System (ADS)

    Hoshi, Yoshinao; Yakabe, Natsuki; Isobe, Koichiro; Saito, Toshiki; Shitanda, Isao; Itagaki, Masayuki

    2016-05-01

    A new analytical method is proposed to determine the electrochemical impedance of lithium-ion rechargeable batteries (LIRB) from time-domain data by wavelet transformation (WT). The WT is a waveform analysis method that can transform data from the time domain to the frequency domain while retaining time information. In this transformation, the frequency-domain data are obtained by the convolution integral of a mother wavelet with the original time-domain data. A complex Morlet mother wavelet (CMMW), expressed as the product of a Gaussian function and a sinusoidal term, is used to obtain complex-valued data in the frequency domain. A procedure for selecting suitable values of the CMMW variables and constants, i.e., the band, scale, and time parameters, is established by determining impedance spectra from the wavelet coefficients of the input voltage to an equivalent circuit and the output current. The impedance spectrum of an LIRB determined by WT agrees well with that measured using a frequency response analyzer.
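
    A minimal sketch of the core computation under these definitions: convolve voltage and current records with a complex Morlet mother wavelet at each analysis frequency, then form the impedance from the ratio of the resulting coefficients. The normalization, bandwidth parameter, and cross/auto-spectral averaging are assumptions for illustration, not the paper's parameter-selection theory.

        import numpy as np

        def morlet(t, f0, sigma):
            # Complex Morlet: Gaussian envelope times a complex sinusoid.
            return (np.exp(-t**2 / (2 * sigma**2)) *
                    np.exp(2j * np.pi * f0 * t) / (np.sqrt(2 * np.pi) * sigma))

        def wavelet_coeff(x, fs, f, cycles=6.0):
            # Wavelet coefficient time series of x at analysis frequency f.
            sigma = cycles / (2 * np.pi * f)        # envelope width in seconds
            half = int(np.ceil(4 * sigma * fs))
            t = np.arange(-half, half + 1) / fs
            w = morlet(t, f, sigma)
            # Convolving with the time-reversed conjugate realizes the
            # correlation integral of the record with the mother wavelet.
            return np.convolve(x, np.conj(w[::-1]), mode='same') / fs

        def impedance_spectrum(v, i, fs, freqs):
            # Z(f) from the wavelet coefficients of voltage and current,
            # averaged over the record as a cross/auto-spectral ratio.
            z = []
            for f in freqs:
                wv = wavelet_coeff(v, fs, f)
                wi = wavelet_coeff(i, fs, f)
                z.append(np.mean(wv * np.conj(wi)) / np.mean(np.abs(wi)**2))
            return np.array(z)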

  12. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    NASA Astrophysics Data System (ADS)

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

    The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low-rank matrix factorization of the unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme in which each subproblem is convex and is handled efficiently by the alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method over conventional methods.
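
    A simplified sketch of the alternating idea, assuming the dynamic data are stacked as a pixels-by-time matrix: a low-rank factorization Y ≈ BC is fit by alternating proximal gradient steps with L1 shrinkage on both factors. This deliberately replaces the paper's ADMM subproblem solvers with the simplest proximal updates.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def semf(Y, rank=4, mu=1e-2, nu=1e-2, iters=500):
            # Factor Y (pixels x time) as B @ C with sparse bases B and
            # sparse coefficients C, by alternating proximal gradient steps.
            rng = np.random.default_rng(0)
            m, n = Y.shape
            B = rng.standard_normal((m, rank)) * 0.1
            C = rng.standard_normal((rank, n)) * 0.1
            for _ in range(iters):
                # Gradient step in B (Lipschitz step size), then L1 shrinkage.
                L = max(np.linalg.norm(C @ C.T, 2), 1e-12)
                B = soft(B - ((B @ C - Y) @ C.T) / L, mu / L)
                # Gradient step in C, then L1 shrinkage.
                L = max(np.linalg.norm(B.T @ B, 2), 1e-12)
                C = soft(C - (B.T @ (B @ C - Y)) / L, nu / L)
            return B, C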

  13. Composite SAR imaging using sequential joint sparsity

    NASA Astrophysics Data System (ADS)

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.

    2017-06-01

    This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation for real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are at least as efficient as traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging that naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher-order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally, we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.

  14. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered in which the unknown parameters are sensitive to data perturbations; an appropriate regularization method then needs to be applied to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds a stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
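
    For reference, a standard Orthogonal Matching Pursuit sketch (assuming sparsity >= 1); the residual-tolerance stopping rule only gestures at the stabilization described above, since the abstract does not fully specify SOMP.

        import numpy as np

        def omp(A, y, sparsity, tol=1e-8):
            # Greedily select columns of A and re-fit by least squares until
            # the sparsity level is reached or the residual stops improving.
            residual = y.copy()
            support = []
            coef = np.zeros(0)
            x = np.zeros(A.shape[1])
            for _ in range(sparsity):
                # Column most correlated with the current residual.
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j in support:
                    break
                support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
                if np.linalg.norm(residual) < tol:
                    break
            x[support] = coef
            return x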

  15. Tectonic interpretation of the Andrew Bain transform fault: Southwest Indian Ocean

    NASA Astrophysics Data System (ADS)

    Sclater, John G.; Grindlay, Nancy R.; Madsen, John A.; Rommevaux-Jestin, Celine

    2005-09-01

    Between 25°E and 35°E, a suite of four transform faults, Du Toit, Andrew Bain, Marion, and Prince Edward, offsets the Southwest Indian Ridge (SWIR) left laterally 1230 km. The Andrew Bain, the largest, has a length of 750 km and a maximum transform domain width of 120 km. We show that, currently, the Nubia/Somalia plate boundary intersects the SWIR east of the Prince Edward, placing the Andrew Bain on the Nubia/Antarctica plate boundary. However, the overall trend of its transform domain lies 10° clockwise of the predicted direction of motion for this boundary. We use four transform-parallel multibeam and magnetic anomaly profiles, together with relocated earthquakes and focal mechanism solutions, to characterize the morphology and tectonics of the Andrew Bain. Starting at the southwestern ridge-transform intersection, the relocated epicenters follow a 450-km-long, 20-km-wide, 6-km-deep western valley. They cross the transform domain within a series of deep overlapping basins bounded by steep inward dipping arcuate scarps. Eight strike-slip and three dip-slip focal mechanism solutions lie within these basins. The earthquakes can be traced to the northeastern ridge-transform intersection via a straight, 100-km-long, 10-km-wide, 4.5-km-deep eastern valley. A striking set of seismically inactive NE-SW trending en echelon ridges and valleys, lying to the south of the overlapping basins, dominates the eastern central section of the transform domain. We interpret the deep overlapping basins as two pull-apart features connected by a strike-slip basin that have created a relay zone similar to those observed on continental transforms. This transform relay zone connects three closely spaced overlapping transform faults in the southwest to a single transform fault in the northeast. The existence of the transform relay zone accounts for the difference between the observed and predicted trend of the Andrew Bain transform domain. We speculate that between 20 and 3.2 Ma, an oblique accretionary zone jumping successively northward created the en echelon ridges and valleys in the eastern central portion of the domain. The style of accretion changed to that of a transform relay zone, during a final northward jump, at 3.2 Ma.

  16. Leveraging EAP-Sparsity for Compressed Sensing of MS-HARDI in (k, q)-Space.

    PubMed

    Sun, Jiaqi; Sakhaee, Elham; Entezari, Alireza; Vemuri, Baba C

    2015-01-01

    Compressed Sensing (CS) for the acceleration of MR scans has been widely investigated in the past decade. Lately, considerable progress has been made in achieving similar speed-ups in acquiring multi-shell high angular resolution diffusion imaging (MS-HARDI) scans. Existing approaches in this context were primarily concerned with sparse reconstruction of the diffusion MR signal S(q) in the q-space. More recently, methods have been developed to apply the compressed sensing framework to the 6-dimensional joint (k, q)-space, thereby exploiting the redundancy in this 6D space. To guarantee accurate reconstruction from partial MS-HARDI data, the key ingredients of compressed sensing that need to be brought together are: (1) the function to be reconstructed needs to have a sparse representation, (2) the data for reconstruction ought to be acquired in the dual domain (i.e., incoherent sensing), and (3) the reconstruction process involves a (convex) optimization. In this paper, we present a novel approach that uses partial Fourier sensing in the 6D space of (k, q) for the reconstruction of P(x, r). The distinct feature of our approach is a sparsity model that leverages surfacelets in conjunction with total variation for the joint sparse representation of P(x, r). Thus, our method stands to benefit from the practical guarantees for accurate reconstruction from partial (k, q)-space data. Further, we demonstrate significant savings in acquisition time over diffusion spectrum imaging (DSI), which is commonly used as the benchmark for comparisons in the reported literature. To demonstrate the benefits of this approach, we present several synthetic and real data examples.

  17. Structural damage identification using piezoelectric impedance measurement with sparse inverse analysis

    NASA Astrophysics Data System (ADS)

    Cao, Pei; Qi, Shuai; Tang, J.

    2018-03-01

    The impedance/admittance measurements of a piezoelectric transducer bonded to or embedded in a host structure can be used as a damage indicator. When a credible model of the healthy structure, such as a finite element model, is available, it is possible to identify both the location and the severity of damage using the impedance/admittance change information as input. The inverse analysis, however, may be under-determined, as the number of unknowns in high-frequency analysis is usually large while the available input information is limited. The fundamental challenge thus is how to find a small set of solutions that cover the true damage scenario. In this research we cast the damage identification problem into a multi-objective optimization framework to tackle this challenge. With damage locations and severities as unknown variables, one objective function is the difference between the impedance-based model prediction in the parametric space and the actual measurements. Considering that damage occurrence generally affects only a small number of elements, we deliberately choose the sparsity of the unknown variables, measured by the l0 norm, as another objective function. Subsequently, a multi-objective Dividing RECTangles (DIRECT) algorithm is developed to facilitate the inverse analysis, where the sparsity is further emphasized by a sigmoid transformation. As a deterministic technique, this approach yields results that are repeatable and conclusive. In addition, only one algorithmic parameter, the number of function evaluations, is needed. Numerical and experimental case studies demonstrate that the proposed framework is capable of obtaining high-quality damage identification solutions with limited measurement information.

  18. Multimodal manifold-regularized transfer learning for MCI conversion prediction.

    PubMed

    Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang

    2015-12-01

    As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has a high chance of converting to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and for evaluating AD risk pre-symptomatically. Unlike most previous methods that used only samples from a target domain to train a classifier, in this paper we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second is a semi-supervised multimodal manifold-regularized least squares classification method, where the target-domain samples, the auxiliary-domain samples, and the unlabeled samples can be jointly used for training our classifier. Furthermore, with the integration of a group sparsity constraint into our objective function, the proposed M2TL can select informative samples to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method, which significantly improves the classification accuracy for MCI conversion prediction to 80.1% and outperforms state-of-the-art methods.
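
    A short sketch of the first ingredient, the kernel maximum mean discrepancy between auxiliary- and target-domain samples; the RBF kernel, its bandwidth, and the biased estimator are assumptions for illustration.

        import numpy as np

        def rbf_kernel(X, Y, gamma):
            # Pairwise squared distances, then the Gaussian kernel.
            d2 = (np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :]
                  - 2 * X @ Y.T)
            return np.exp(-gamma * d2)

        def mmd2(X, Y, gamma=1.0):
            # Squared (biased) kernel maximum mean discrepancy between
            # samples X (n x d) and Y (m x d); zero iff the kernel mean
            # embeddings of the two sample distributions coincide.
            return (rbf_kernel(X, X, gamma).mean()
                    + rbf_kernel(Y, Y, gamma).mean()
                    - 2 * rbf_kernel(X, Y, gamma).mean())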

  19. Robust, Adaptive Functional Regression in Functional Mixed Model Framework.

    PubMed

    Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S

    2011-09-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.

  20. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015

  1. Hawking radiation of five-dimensional charged black holes with scalar fields

    NASA Astrophysics Data System (ADS)

    Miao, Yan-Gang; Xu, Zhen-Ming

    2017-09-01

    We investigate the Hawking radiation cascade from the five-dimensional charged black hole with a scalar field coupled to higher-order Euler densities in a conformally invariant manner. We give the semi-analytic calculation of greybody factors for the Hawking radiation. Our analysis shows that the Hawking radiation cascade from this five-dimensional black hole is extremely sparse. The charge enhances the sparsity of the Hawking radiation, while the conformally coupled scalar field reduces this sparsity.

  2. A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades

    DTIC Science & Technology

    2016-09-01

    Nguyen, Ngoc Hung; Tran, Hai-Tan; Kutluyıl [...]. Report TR-3292. Abstract: Radar imaging of rotating blade-like objects, such as helicopter rotors, using narrowband radar has lately been of significant [...]. Executive Summary: Signal analysis and radar imaging of fast-rotating objects such as [...]

  3. Sparsity enabled cluster reduced-order models for control

    NASA Astrophysics Data System (ADS)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
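
    A toy sketch of the CROM backbone under simplifying assumptions: cluster time-ordered snapshots, then estimate the cluster transition matrix (a data-driven discretization of the Perron-Frobenius operator). Sensor selection and compressive measurements are not shown.

        import numpy as np
        from sklearn.cluster import KMeans

        def crom(snapshots, n_clusters=10, seed=0):
            # snapshots: (time, state_dim) array of time-ordered data.
            # Returns cluster centroids and the transition matrix P with
            # P[i, j] = Prob(cluster j at t+1 | cluster i at t).
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
            labels = km.fit_predict(snapshots)
            P = np.zeros((n_clusters, n_clusters))
            for a, b in zip(labels[:-1], labels[1:]):
                P[a, b] += 1.0
            rows = P.sum(axis=1, keepdims=True)
            P = np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)
            return km.cluster_centers_, P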

  4. A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1998-01-01

    Over the past few years, the development of the interface technology has provided an analysis framework for embedding detailed finite element models within finite element models which are less refined. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive definite system which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence the global system of equations is generally solved using, a decomposition procedure with pivoting. The research reported to-date for the interface element includes the one-dimensional line interface element and two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to-date, the geometry of the interfaced domains exactly match each other even though the spatial discretization within each domain may be different. As such, the spatial modeling of each domain, the interface elements and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.

  5. Transductive multi-view zero-shot learning.

    PubMed

    Fu, Yanwei; Hospedales, Timothy M; Xiang, Tao; Gong, Shaogang

    2015-11-01

    Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset/domain are biased when applied directly to the target dataset/domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding, to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks.

  6. Logo recognition using alpha-rooted phase correlation in the radon transform domain

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2009-08-01

    Alpha-rooted phase correlation (ARPC) is a recently developed variant of classical phase correlation that includes a Fourier-domain image enhancement operation. ARPC combines classical phase correlation with alpha-rooting to provide tunable image enhancement. The alpha-rooting parameters may be adjusted to trade off the height and width of the ARPC main lobe: a high, narrow main lobe peak provides high matching accuracy for aligned images but reduced matching performance for misaligned logos, whereas a lower, wider peak trades matching accuracy on aligned logos for improved matching performance on misaligned imagery. Previously, we developed ARPC and used it in the spatial domain for logo recognition as part of an overall automated document analysis problem. However, spatial-domain ARPC performance can be sensitive to logo misalignments, including rotational misalignment. In this paper we use ARPC as a match metric in the Radon transform domain for logo recognition. In the Radon transform domain, rotational misalignments correspond to translations in the Radon transform angle parameter. These translations are captured by ARPC, thereby producing rotation-invariant logo matching. In the paper, we first present an overview of ARPC and then describe the logo matching algorithm. We present numerical performance results demonstrating matching tolerance to rotational misalignments, and we demonstrate robustness of the Radon-transform-domain rotation estimation to noise. We present logo verification and recognition performance results using the proposed approach on a public-domain logo database, and we compare them to results obtained using spatial-domain ARPC and state-of-the-art SURF features for logos in salt-and-pepper noise.
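
    A hedged sketch of one common parameterization of the idea, in which the cross-power spectrum magnitude is only partially normalized away; here alpha = 1 recovers classical phase correlation and alpha = 0 plain cross-correlation. The paper's exact enhancement operator may differ.

        import numpy as np

        def arpc(img1, img2, alpha=0.8):
            # Alpha-rooted phase correlation surface between two equally
            # sized images: the cross-power spectrum is divided by its
            # magnitude raised to alpha instead of fully normalized.
            F1 = np.fft.fft2(img1)
            F2 = np.fft.fft2(img2)
            cross = F1 * np.conj(F2)
            mag = np.abs(cross)
            surface = np.fft.ifft2(cross / np.maximum(mag, 1e-12)**alpha)
            return np.real(surface)

        # The peak location of arpc(a, b) estimates the translation of b
        # relative to a; the peak height serves as the match score.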

  7. Necessary and sufficient condition for the realization of the complex wavelet

    NASA Astrophysics Data System (ADS)

    Keita, Alpha; Qing, Qianqin; Wang, Nengchao

    1997-04-01

    Wavelet theory is a comparatively new signal analysis theory that has attracted experts in many different fields to study it in depth. The wavelet transformation is a time-frequency analysis method with localization that can be realized in either the time domain or the frequency domain. It has many desirable characteristics that other time-frequency analysis methods, such as the Gabor transformation or the Wigner-Ville distribution, lack: orthogonality, direction selectivity, a variable time-frequency resolution ratio, adjustable local support, and a sparse representation of the data. All of these make the wavelet transformation a very important new tool and method in the field of signal analysis. Because the calculation of complex wavelets is very difficult, real wavelet functions are used in applications. In this paper, we present a necessary and sufficient condition under which the real wavelet function can be obtained from the complex wavelet function; this theorem has significant theoretical value. The paper develops its technique from the Hartley transformation and then derives the condition for the complex wavelet. Hartley was a signal engineering expert, and his transformation had been overlooked for about 40 years because the technical conditions of the time could not demonstrate its superiority. Only at the end of the 1970s and the early 1980s, after the development of the fast Fourier transform algorithm and of suitable hardware implementations, did this kind of real-to-real transform begin to be taken seriously. The W transformation, proposed by Zhongde Wang, pushed forward the study of the Hartley transformation and its fast algorithm. The kernel function of the Hartley transformation [...]

  8. A General Sparse Tensor Framework for Electronic Structure Theory

    DOE PAGES

    Manzer, Samuel; Epifanovsky, Evgeny; Krylov, Anna I.; ...

    2017-01-24

    Linear-scaling algorithms must be developed in order to extend the domain of applicability of electronic structure theory to molecules of any desired size. However, the increasing complexity of modern linear-scaling methods makes code development and maintenance a significant challenge. A major contributor to this difficulty is the lack of robust software abstractions for handling block-sparse tensor operations. We therefore report the development of a highly efficient symbolic block-sparse tensor library in order to provide access to high-level software constructs to treat such problems. Our implementation supports arbitrary multi-dimensional sparsity in all input and output tensors. We avoid cumbersome machine-generated code by implementing all functionality as a high-level symbolic C++ library and demonstrate that our implementation attains very high performance for linear-scaling sparse tensor contractions.

  9. The Molecular Dynamics Study of the Structural Conversions in the Transformer Protein RfaH

    NASA Astrophysics Data System (ADS)

    Gc, Jeevan; Gerstman, Bernard; Chapagain, Prem

    Recently, a class of multi-domain proteins, such as the transcription factor RfaH, have been labelled transformer proteins because they undergo major conformational transformations in order to perform multiple functions. In the absence of inter-domain contacts, the C-terminal domain of RfaH transforms from an alpha-helix conformation to a beta-barrel structure. Each of these states has its own functional role: in its alpha-helix state, RfaH-CTD inhibits transcription by masking the binding site of RNAP, whereas in its beta state it facilitates translation. We used various molecular dynamics simulations to study the transformer-like behavior of full-length RfaH and identified key amino acid residues that are important in modulating such behavior. Our results show that the inter-domain interactions constitute the major barrier in the alpha-helix to beta-barrel conversion; once the interfacial interactions are broken, the structural conversion is easier. The structural conversion from beta-barrel to alpha-helix proceeds with the rearrangement of the hydrophobic residues, followed by the formation of inter-domain contacts via non-native, transient salt bridges, leading to the native inter-domain salt bridge and the hydrophobic contacts that give the final alpha-helix structure.

  10. A Graphical Presentation to Teach the Concept of the Fourier Transform

    ERIC Educational Resources Information Center

    Besalu, E.

    2006-01-01

    A study was conducted to visualize the reason why the Fourier transform technique is useful for detecting the originating frequencies of a complicated superposition of waves. The findings reveal that students respond well when instructors adopt a pictorial presentation to show how the time-domain function is transformed into the frequency domain.
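
    A short numpy illustration of the idea: a superposition of three sinusoids looks complicated in the time domain, but the magnitude of its FFT shows isolated peaks exactly at the constituent frequencies (the tones and the detection threshold below are arbitrary choices).

        import numpy as np

        fs = 1000                                  # sampling rate, Hz
        t = np.arange(0, 1, 1 / fs)
        # A complicated-looking superposition of three tones.
        x = (1.0 * np.sin(2 * np.pi * 50 * t)
             + 0.7 * np.sin(2 * np.pi * 120 * t)
             + 0.4 * np.sin(2 * np.pi * 310 * t))
        # Amplitude spectrum: each tone appears as an isolated peak.
        spectrum = np.abs(np.fft.rfft(x)) * 2 / len(x)
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        print(freqs[spectrum > 0.2])               # -> [ 50. 120. 310.]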

  11. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both the time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the methods of MUSIC, CS, minimum-variance distortionless response (MVDR) beamforming, and conventional beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use the Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular the artifacts that are caused by the sliding window. Based on our tests, we find that CS, which is developed from the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR beamforming, is more stable, especially in the multi-source situation, while CS tends to produce more artifacts when the data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. We therefore propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to the 2013 Okhotsk deep mega-earthquake in order to better capture its rupture characteristics (e.g., rupture area and velocity).
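
    A minimal sketch of the MUSIC pseudospectrum for a uniform linear array; in rupture imaging the steering vector would instead encode travel times from candidate source locations to the stations, which is abstracted away here, and the array geometry is an assumption.

        import numpy as np

        def music_spectrum(X, n_sources, angles_deg, d_over_lambda=0.5):
            # X: (sensors, snapshots) complex data matrix for a uniform
            # linear array. Returns the MUSIC pseudospectrum over angles.
            m = X.shape[0]
            R = X @ X.conj().T / X.shape[1]          # sample covariance
            w, V = np.linalg.eigh(R)                 # ascending eigenvalues
            En = V[:, : m - n_sources]               # noise subspace
            spec = []
            for theta in np.deg2rad(angles_deg):
                a = np.exp(-2j * np.pi * d_over_lambda
                           * np.arange(m) * np.sin(theta))
                # Peaks occur where the steering vector is orthogonal
                # to the noise subspace.
                denom = np.linalg.norm(En.conj().T @ a)**2
                spec.append(1.0 / max(denom, 1e-12))
            return np.array(spec)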

  12. Communication Optimal Parallel Multiplication of Sparse Random Matrices

    DTIC Science & Technology

    2013-02-21

    [...] (see Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix [...] where A and B are n × n ER(d) matrices. Definition 2.1: An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. [...]

  13. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity

    DTIC Science & Technology

    2015-10-23

    AFRL-AFOSR-VA-TR-2015-0337. Guermond, Jean-Luc (Texas A&M University). Final report, covering 01-07-2012 to 30-06-2015; report date 09-05-2015. [...] conservation equations can be stabilized by using the so-called entropy viscosity method, and we proposed to investigate this new technique. We [...]

  14. Martensitelike spontaneous relaxor-normal ferroelectric transformation in Pb(Zn1/3Nb2/3)O3-PbLa(ZrTi)O3 system

    NASA Astrophysics Data System (ADS)

    Deng, Guochu; Ding, Aili; Li, Guorong; Zheng, Xinsen; Cheng, Wenxiu; Qiu, Pingsun; Yin, Qingrui

    2005-11-01

    The spontaneous relaxor-to-normal ferroelectric transformation was found in the tetragonal composition of the Pb(Zn1/3Nb2/3)O3-PbLa(ZrTi)O3 (0.3PZN-0.7PLZT) complex ABO3 system. The corresponding dielectric permittivities and losses of different compositions located near the morphotropic phase boundary (MPB) were analyzed. By reviewing all previously reported results on this type of transformation, the electric, compositional, structural, and thermodynamic characteristics of the spontaneous relaxor-to-normal transformation were proposed. Additionally, the adaptive phase model for martensite transformations proposed by Khachaturyan et al. [Phys. Rev. B 43, 10832 (1991)] was introduced into this ferroelectric transformation to explain the unique transformation pathway and associated features such as the tweedlike domain patterns and the dielectric dispersion below the critical transition temperature. Owing to their critical compositions near the MPB, the ferroelectric materials fulfill the condition under which adaptive phases can form during the transformation. The formation of the adaptive phases, which are composed of stress-accommodating twinned domains, lets the system bypass the energy barrier encountered in conventional martensite transformations. The twinned adaptive phase corresponds to the tweedlike domain pattern observed under a transmission electron microscope. At lower temperature, these precursor phases transform into the conventional ferroelectric state with macrodomains through the movement of domain walls, which causes a weak dispersion in the dielectric permittivity.

  15. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.

  16. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.

  17. Effects of high-order correlations on personalized recommendations for bipartite networks

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Zhou, Tao; Che, Hong-An; Wang, Bing-Hong; Zhang, Yi-Cheng

    2010-02-01

    In this paper, we introduce a modified collaborative filtering (MCF) algorithm, which has remarkably higher accuracy than standard collaborative filtering. In the MCF, instead of the cosine similarity index, the user-user correlations are obtained by a diffusion process. Furthermore, by considering the second-order correlations, we design an effective algorithm that depresses the influence of mainstream preferences. Simulation results show that the algorithmic accuracy, measured by the average ranking score, is further improved by 20.45% and 33.25% in the optimal cases of the MovieLens and Netflix data. More importantly, the optimal value λ depends approximately monotonically on the sparsity of the training set. Given a real system, we could therefore estimate the optimal parameter from the data sparsity, which makes this algorithm easy to apply. In addition, two significant criteria of algorithmic performance, diversity and popularity, are also taken into account. Numerical results show that as the sparsity increases, the algorithm considering the second-order correlation can outperform the MCF simultaneously in all three criteria.

  18. A Method to Compute the Force Signature of a Body Impacting on a Linear Elastic Structure Using Fourier Analysis

    DTIC Science & Technology

    1982-09-17

    [...] F_K * I_PK (2). The convolution of two transforms in the time domain is the inverse transform of their product in the frequency domain; thus R_p(ω) = F_K(ω) I_PK(ω) (3) [...] its inverse transform is given by R_p(t) = (1/(2π)) ∫ R_p(ω) e^{iωt} dω (5). In order to make use of a very accurate numerical method to compute Fourier [...] transform. When the inverse transform is taken by using Eq. (15), the cosine transform is used because it converges faster than the sine transform [...]

  19. Variational Bayesian Learning for Wavelet Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Roussos, E.; Roberts, S.; Daubechies, I.

    2005-11-01

    In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.

  20. Enhanced image fusion using directional contrast rules in fuzzy transform domain.

    PubMed

    Nandal, Amita; Rosales, Hamurabi Gamboa

    2016-01-01

    In this paper, a novel image fusion algorithm based on directional contrast in the fuzzy transform (FTR) domain is proposed. The input images to be fused are first divided into several non-overlapping blocks. The components of these sub-blocks are fused using a directional-contrast-based fuzzy fusion rule in the FTR domain. The fused sub-blocks are then transformed back to the original block size using the inverse FTR, and these inverse-transformed blocks are fused according to a select-maximum fusion rule to reconstruct the final fused image. The proposed fusion algorithm is compared both visually and quantitatively with other standard and recent fusion algorithms. Experimental results demonstrate that the proposed method generates better results than the other methods.

  1. Improvement of kurtosis-guided-grams via Gini index for bearing fault feature identification

    NASA Astrophysics Data System (ADS)

    Miao, Yonghao; Zhao, Ming; Lin, Jing

    2017-12-01

    A group of kurtosis-guided-grams, such as the Kurtogram, Protrugram and SKRgram, is designed to detect the resonance band excited by faults based on a sparsity index. However, a common issue with these methods is that they tend to choose a frequency band containing individual impulses rather than the desired fault impulses. This may be attributed to the chosen sparsity index, kurtosis, which is vulnerable to impulsive noise. In this paper, to solve the problem, a different sparsity index, the Gini index, is introduced as an alternative estimator for selecting the resonance band. The Gini index is still able to guide the selection of the fault band without prior information on the fault period. More importantly, it has unique resistance to random impulses, which renders the improved methods using this index free from the random impulses caused by external knocks on the bearing housing or by electromagnetic interference. By virtue of these advantages, the improved methods using the Gini index not only overcome the shortcomings of kurtosis but are also more effective under harsh working conditions, even in complex structures. Finally, the kurtosis-guided-grams and the improved methods using the Gini index are compared on simulated and experimental data, and the results for both fixed-axis and planetary bearing fault signals verify the effectiveness of the improvement.
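
    A sketch of one standard definition of the Gini index of a vector (the Hurley-Rickard form is assumed here): it is bounded in [0, 1), equals 0 for a perfectly flat vector, and approaches 1 for a single spike, which is what makes it less sensitive than unbounded kurtosis to isolated random impulses.

        import numpy as np

        def gini_index(x):
            # Gini sparsity index: 0 for a flat vector, -> 1 for one spike.
            c = np.sort(np.abs(np.asarray(x, dtype=float).ravel()))
            n = c.size
            s = c.sum()
            if s == 0:
                return 0.0
            k = np.arange(1, n + 1)
            return 1.0 - 2.0 * np.sum((c / s) * (n - k + 0.5) / n)

        # Example: an impulsive (sparse) envelope scores higher than noise.
        rng = np.random.default_rng(1)
        noise = rng.standard_normal(1000)
        impulses = np.zeros(1000)
        impulses[::100] = 5.0
        print(gini_index(noise), gini_index(noise + impulses))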

  2. Learning what matters: A neural explanation for the sparsity bias.

    PubMed

    Hassall, Cameron D; Connor, Patrick C; Trappenberg, Thomas P; McDonald, John J; Krigolson, Olave E

    2018-05-01

    The visual environment is filled with complex, multi-dimensional objects that vary in their value to an observer's current goals. When faced with multi-dimensional stimuli, humans may rely on biases to learn to select those objects that are most valuable to the task at hand. Here, we show that decision making in a complex task is guided by the sparsity bias: the focusing of attention on a subset of available features. Participants completed a gambling task in which they selected complex stimuli that varied randomly along three dimensions: shape, color, and texture. Each dimension comprised three features (e.g., color: red, green, yellow). Only one dimension was relevant in each block (e.g., color), and a randomly-chosen value ranking determined outcome probabilities (e.g., green > yellow > red). Participants were faster to respond to infrequent probe stimuli that appeared unexpectedly within stimuli that possessed a more valuable feature than to probes appearing within stimuli possessing a less valuable feature. Event-related brain potentials recorded during the task provided a neurophysiological explanation for sparsity as a learning-dependent increase in optimal attentional performance (as measured by the N2pc component of the human event-related potential) and a concomitant learning-dependent decrease in prediction errors (as measured by the feedback-elicited reward positivity). Together, our results suggest that the sparsity bias guides human reinforcement learning in complex environments. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. On the domain of the Nelson Hamiltonian

    NASA Astrophysics Data System (ADS)

    Griesemer, M.; Wünsch, A.

    2018-04-01

    The Nelson Hamiltonian is unitarily equivalent to a Hamiltonian defined through a closed, semibounded quadratic form, the unitary transformation being explicitly known and due to Gross. In this paper, we study the mapping properties of the Gross-transform in order to characterize the regularity properties of vectors in the form domain of the Nelson Hamiltonian. Since the operator domain is a subset of the form domain, our results apply to vectors in the domain of the Hamiltonian as well. This work is a continuation of our previous work on the Fröhlich Hamiltonian.

  4. Resilience of biochemical activity in protein domains in the face of structural divergence.

    PubMed

    Zhang, Dapeng; Iyer, Lakshminarayan M; Burroughs, A Maxwell; Aravind, L

    2014-06-01

    Recent studies point to the prevalence of the evolutionary phenomenon of drastic structural transformation of protein domains while continuing to preserve their basic biochemical function. These transformations span a wide spectrum, including simple domains incorporated into larger structural scaffolds, changes in the structural core, major active site shifts, topological rewiring and extensive structural transmogrifications. Proteins from biological conflict systems, such as toxin-antitoxin, restriction-modification, CRISPR/Cas, polymorphic toxin and secondary metabolism systems commonly display such transformations. These include endoDNases, metal-independent RNases, deaminases, ADP ribosyltransferases, immunity proteins, kinases and E1-like enzymes. In eukaryotes such transformations are seen in domains involved in chromatin-related peptide recognition and protein/DNA-modification. Intense selective pressures from 'arms-race'-like situations in conflict and macromolecular modification systems could favor drastic structural divergence while preserving function. Published by Elsevier Ltd.

  5. Collaborative Wideband Compressed Signal Detection in Interplanetary Internet

    NASA Astrophysics Data System (ADS)

    Wang, Yulin; Zhang, Gengxin; Bian, Dongming; Gou, Liang; Zhang, Wei

    2014-07-01

    With the development of autonomous radio in deep space networks, it becomes possible to establish communication between explorers, aircraft, rovers and satellites, e.g. from different countries and adopting different signal modes. The first task of such an autonomous radio is to detect the explorer's signals autonomously without disturbing the original communication. This paper develops a collaborative wideband compressed signal detection approach for the InterPlaNetary (IPN) Internet, where sparse active signals exist in the deep space environment. Compressed sensing (CS) can be utilized by exploiting the sparsity of the IPN Internet communication signal, whose useful frequency support occupies only a small portion of an entirely wide spectrum. An estimate of the signal spectrum can be obtained by using reconstruction algorithms. Against deep space shadowing and channel fading, multiple satellites collaboratively sense and make a final decision according to a fusion rule to gain spatial diversity. Two novel discrete cosine transform (DCT) and Walsh-Hadamard transform (WHT) based compressed spectrum detection methods are proposed, which significantly improve the performance of spectrum recovery and signal detection. Finally, extensive simulation results are presented to show the effectiveness of the proposed collaborative scheme for signal detection in the IPN Internet. Compared with the conventional discrete Fourier transform (DFT) based method, the DCT and WHT based methods reduce computational complexity, decrease processing time, save energy and enhance the probability of detection.
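
    A rough sketch of the recovery step in such schemes is given below (Python/NumPy, not the authors' implementation): a signal that is sparse in the DCT domain is observed through a random measurement matrix and reconstructed with orthogonal matching pursuit; the signal length, measurement count, and sparsity level are illustrative assumptions.

      import numpy as np
      from scipy.fft import idct

      rng = np.random.default_rng(0)
      n, m, k = 256, 80, 5                    # signal length, measurements, sparsity

      # Spectrum occupying only a few DCT components, as in a sparsely used band.
      c = np.zeros(n)
      c[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      x = idct(c, norm='ortho')               # wideband time-domain signal

      Phi = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
      D = idct(np.eye(n), axis=0, norm='ortho')           # inverse-DCT synthesis matrix
      A = Phi @ D                             # effective dictionary: y = A c
      y = Phi @ x                             # compressed measurements

      # Orthogonal matching pursuit recovers the k active DCT components.
      r, support = y.copy(), []
      for _ in range(k):
          support.append(int(np.argmax(np.abs(A.T @ r))))
          coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
          r = y - A[:, support] @ coef

      c_hat = np.zeros(n)
      c_hat[support] = coef
      print('relative recovery error:', np.linalg.norm(c_hat - c) / np.linalg.norm(c))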

  6. Data-driven discovery of partial differential equations.

    PubMed

    Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2017-04-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
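
    The term-selection step in this family of methods can be illustrated with a sequentially thresholded least-squares loop. The sketch below uses a toy ODE rather than a PDE, and the simulated dynamics, candidate library, and threshold are all assumptions made for illustration, not the authors' implementation.

      import numpy as np

      # Toy data: simulate u' = -2*u + 0.5*u^3 with forward Euler (assumed dynamics).
      t = np.linspace(0, 2, 400)
      dt = t[1] - t[0]
      u = np.empty_like(t)
      u[0] = 1.0
      for i in range(len(t) - 1):
          u[i + 1] = u[i] + dt * (-2 * u[i] + 0.5 * u[i] ** 3)

      ut = np.gradient(u, dt)                 # numerical time derivative
      Theta = np.column_stack([np.ones_like(u), u, u ** 2, u ** 3])   # candidate terms
      names = ['1', 'u', 'u^2', 'u^3']

      # Sequentially thresholded least squares: regress, zero small terms, repeat.
      xi, *_ = np.linalg.lstsq(Theta, ut, rcond=None)
      for _ in range(10):
          small = np.abs(xi) < 0.1            # hard threshold promoting sparsity
          xi[small] = 0.0
          if (~small).any():
              xi[~small], *_ = np.linalg.lstsq(Theta[:, ~small], ut, rcond=None)

      print({nm: round(float(cf), 3) for nm, cf in zip(names, xi) if cf != 0.0})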

  7. Species-Specific Elements in the Large T-Antigen J Domain Are Required for Cellular Transformation and DNA Replication by Simian Virus 40

    PubMed Central

    Sullivan, Christopher S.; Tremblay, James D.; Fewell, Sheara W.; Lewis, John A.; Brodsky, Jeffrey L.; Pipas, James M.

    2000-01-01

    The J domain of simian virus 40 (SV40) large T antigen is required for efficient DNA replication and transformation. Despite previous reports demonstrating the promiscuity of J domains in heterologous systems, results presented here show the requirement for specific J-domain sequences in SV40 large-T-antigen-mediated activities. In particular, chimeric-T-antigen constructs in which the SV40 T-antigen J domain was replaced with that from the yeast Ydj1p or Escherichia coli DnaJ proteins failed to replicate in BSC40 cells and did not transform REF52 cells. However, T antigen containing the JC virus J domain was functional in these assays, although it was less efficient than the wild type. The inability of some large-T-antigen chimeras to promote DNA replication and elicit cellular transformation was not due to a failure to interact with hsc70, since a nonfunctional chimera, containing the DnaJ J domain, bound hsc70. However, this nonfunctional chimeric T antigen was reduced in its ability to stimulate hsc70 ATPase activity and unable to liberate E2F from p130, indicating that transcriptional activation of factors required for cell growth and DNA replication may be compromised. Our data suggest that the T-antigen J domain harbors species-specific elements required for viral activities in vivo. PMID:10891510

  8. Applying Frequency-Domain Equalization to Code-Division Multiple Access and Transform-Domain Communications Systems

    DTIC Science & Technology

    2008-03-01


  9. Pipelined digital SAR azimuth correlator using hybrid FFT-transversal filter

    NASA Technical Reports Server (NTRS)

    Wu, C.; Liu, K. Y. (Inventor)

    1984-01-01

    A synthetic aperture radar (SAR) system having a range correlator is provided with a hybrid azimuth correlator which utilizes a block-pipelined fast Fourier transform (FFT). The correlator has a predetermined FFT transform size with delay elements for delaying SAR range-correlated data so as to embed a corner-turning function in the Fourier transform operation as the range-correlated SAR data is converted from the time domain to the frequency domain. The azimuth correlator is comprised of a transversal filter to receive the SAR data in the frequency domain, a generator for range migration compensation and azimuth reference functions, and an azimuth reference multiplier for correlation of the SAR data. Following the transversal filter is a block-pipelined inverse FFT used to restore azimuth-correlated data in the frequency domain to the time domain for imaging.
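
    The core frequency-domain operation here, correlation performed as FFT, conjugate multiply, and inverse FFT, can be sketched as follows; the chirp reference, delay, and noise level are illustrative assumptions rather than parameters of the patented correlator.

      import numpy as np

      rng = np.random.default_rng(2)

      # Reference chirp and a noisy echo of it at an unknown delay.
      n = 1024
      chirp = np.exp(1j * np.pi * 0.0002 * np.arange(n) ** 2)
      delay = 300
      signal = np.roll(np.pad(chirp, (0, n)), delay)
      signal = signal + 0.5 * (rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n))

      # Correlate in the frequency domain: multiply by the conjugate reference.
      S = np.fft.fft(signal)
      R = np.fft.fft(chirp, 2 * n)
      corr = np.fft.ifft(S * np.conj(R))

      print('estimated delay:', int(np.argmax(np.abs(corr))))   # expect ~300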

  10. Mapping small molecule binding data to structural domains

    PubMed Central

    2012-01-01

    Background: Large-scale bioactivity/SAR Open Data has recently become available, and this has allowed new analyses and approaches to be developed to help address the productivity and translational gaps of current drug discovery. One of the current limitations of these data is the relative sparsity of reported interactions per protein target, and complexities in establishing clear relationships between bioactivity and targets using bioinformatics tools. We detail in this paper the indexing of targets by the structural domains that bind (or are likely to bind) the ligand within a full-length protein. Specifically, we present a simple heuristic to map small molecule binding to Pfam domains. This profiling can be applied to all proteins within a genome to give some indication of the potential pharmacological modulation and regulation of each protein.
    Results: In this implementation of our heuristic, ligand binding to protein targets from the ChEMBL database was mapped to structural domains as defined by profiles contained within the Pfam-A database. Our mapping suggests that the majority of assay targets within the current version of the ChEMBL database bind ligands through a small number of highly prevalent domains, and conversely the majority of Pfam domains sampled by our data play no currently established role in ligand binding. Validation studies, carried out firstly against UniProt entries with expert binding-site annotation and secondly against entries in the wwPDB repository of crystallographic protein structures, demonstrate that our simple heuristic maps ligand binding to the correct domain in about 90 percent of all assessed cases. Using the mappings obtained with our heuristic, we have assembled ligand sets associated with each Pfam domain.
    Conclusions: Small molecule binding has been mapped to Pfam-A domains of protein targets in the ChEMBL bioactivity database. The result of this mapping is an enriched annotation of small molecule bioactivity data and a grouping of activity classes following the Pfam-A specifications of protein domains. This is valuable for data-focused approaches in drug discovery, for example when extrapolating potential targets of a small molecule with known activity against one or few targets, or in the assessment of a potential target for drug discovery or screening studies. PMID:23282026

  11. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    NASA Astrophysics Data System (ADS)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-square data-fidelity term and on total variation and a Besov seminorm for the regularization term. To handle the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. This allows the model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken, and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.

  12. Midwifery participatory curriculum development: Transformation through active partnership.

    PubMed

    Sidebotham, Mary; Walters, Caroline; Chipperfield, Janine; Gamble, Jenny

    2017-07-01

    Evolving knowledge and professional practice combined with advances in pedagogy and learning technology create challenges for accredited professional programs. Internationally a sparsity of literature exists around curriculum development for professional programs responsive to regulatory and societal drivers. This paper evaluates a participatory curriculum development framework, adapted from the community development sector, to determine its applicability to promote engagement and ownership during the development of a Bachelor of Midwifery curriculum at an Australian University. The structures, processes and resulting curriculum development framework are described. A representative sample of key curriculum development team members were interviewed in relation to their participation. Qualitative analysis of transcribed interviews occurred through inductive, essentialist thematic analysis. Two main themes emerged: (1) 'it is a transformative journey' and (2) focused 'partnership in action'. Results confirmed the participatory curriculum development process provides symbiotic benefits to participants leading to individual and organisational growth and the perception of a shared curriculum. A final operational model using a participatory curriculum development process to guide the development of accredited health programs emerged. The model provides an appropriate structure to create meaningful collaboration with multiple stakeholders to produce a curriculum that is contemporary, underpinned by evidence and reflective of 'real world' practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan

    2017-09-01

    It is a challenging problem to design excellent dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem which is solved by alternately implementing a hard thresholding operation and a singular value decomposition. Secondly, noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitivity indexes, latent fault feature subspaces can be adaptively recognized and multiple faults diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified through multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework incorporates physical priors with the data-driven strategy, so that multiple fault features with similar oscillation morphology can be adaptively decoupled. Secondly, the tight frame dictionary directly learned from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a satisfactory complete signal space description property is guaranteed, and thus the weak feature leakage problem of typical learning methods is avoided.
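
    A minimal sketch of the alternating update described above, under the simplifying assumption of a square orthonormal analysis frame (a tight frame with frame bound one) and synthetic training patches: sparse coding by hard thresholding, then a frame update solved as an orthogonal Procrustes problem via the SVD.

      import numpy as np

      rng = np.random.default_rng(3)

      Y = rng.standard_normal((32, 500))      # columns are toy training patches
      W = np.linalg.qr(rng.standard_normal((32, 32)))[0]   # initial orthonormal frame
      tau = 1.0                               # hard-threshold level (assumed)

      for _ in range(20):
          # Sparse coding: hard-threshold the analysis coefficients.
          S = W @ Y
          S[np.abs(S) < tau] = 0.0
          # Frame update under W W^T = I: minimize ||W Y - S||_F over orthonormal W,
          # an orthogonal Procrustes problem solved by the SVD of S Y^T.
          U, _, Vt = np.linalg.svd(S @ Y.T)
          W = U @ Vt

      print('tightness ||W W^T - I||:', np.linalg.norm(W @ W.T - np.eye(32)))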

  14. Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.

    PubMed

    Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin

    2017-02-01

    Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to the normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
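
    A minimal sketch of the random-walk graph Laplacian construction that underlies such a prior, on a toy piecewise-smooth patch; the 4-neighbor graph and Gaussian intensity weights are common choices assumed here for illustration, not the full LERaG pipeline.

      import numpy as np

      # Toy 8x8 patch: two flat regions separated by a step edge.
      patch = np.zeros((8, 8))
      patch[:, 4:] = 1.0
      x = patch.ravel()
      n = x.size

      # Weighted 4-neighbor adjacency with Gaussian weights on intensity difference.
      W = np.zeros((n, n))
      sigma2 = 0.1
      for i in range(8):
          for j in range(8):
              p = i * 8 + j
              for di, dj in ((0, 1), (1, 0)):
                  if i + di < 8 and j + dj < 8:
                      q = (i + di) * 8 + (j + dj)
                      W[p, q] = W[q, p] = np.exp(-(x[p] - x[q]) ** 2 / sigma2)

      d = W.sum(axis=1)
      L_rw = np.eye(n) - W / d[:, None]       # random-walk Laplacian I - D^{-1} W

      # Left eigenvectors of L_rw are right eigenvectors of its transpose; the
      # smallest eigenvalues are the low graph frequencies favored by the prior.
      vals, vecs = np.linalg.eig(L_rw.T)
      print('lowest graph frequencies:', np.round(np.sort(vals.real)[:4], 4))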

  15. The Carboxyl Terminus of v-Abl Protein Can Augment SH2 Domain Function

    PubMed Central

    Warren, David; Heilpern, Andrew J.; Berg, Kent; Rosenberg, Naomi

    2000-01-01

    Abelson murine leukemia virus (Ab-MLV) transforms NIH 3T3 and pre-B cells via expression of the v-Abl tyrosine kinase. Although the enzymatic activity of this molecule is absolutely required for transformation, other regions of the protein are also important for this response. Among these are the SH2 domain, involved in phosphotyrosine-dependent protein-protein interactions, and the long carboxyl terminus, which plays an important role in transformation of hematopoietic cells. Important signals are sent from each of these regions, and transformation is most likely orchestrated by the concerted action of these different parts of the protein. To explore this idea, we compared the ability of the v-Src SH2 domain to substitute for that of v-Abl in the full-length P120 v-Abl protein and in P70 v-Abl, a protein that lacks the carboxyl terminus characteristic of Abl family members. Ab-MLV strains expressing P70/S2 failed to transform NIH 3T3 cells and demonstrated a greatly reduced capacity to mediate signaling events associated with the Ras-dependent mitogen-activated protein (MAP) kinase pathway. In contrast, Ab-MLV strains expressing P120/S2 were indistinguishable from P120 with respect to these features. Analyses of additional mutants demonstrated that the last 162 amino acids of the carboxyl terminus were sufficient to restore transformation. These data demonstrate that an SH2 domain with v-Abl substrate specificity is required for NIH 3T3 transformation in the absence of the carboxyl terminus and suggest that cooperativity between the extreme carboxyl terminus and the SH2 domain facilitates the transmission of transforming signals via the MAP kinase pathway. PMID:10775585

  16. Properties of an improved Gabor wavelet transform and its applications to seismic signal processing and interpretation

    NASA Astrophysics Data System (ADS)

    Ji, Zhan-Huai; Yan, Sheng-Gang

    2017-12-01

    This paper presents an analytical study of the complete transform of improved Gabor wavelets (IGWs), and discusses its application to the processing and interpretation of seismic signals. The complete Gabor wavelet transform has the following properties. First, unlike the conventional transform, the improved Gabor wavelet transform (IGWT) maps time domain signals to the time-frequency domain instead of the time-scale domain. Second, the IGW's dominant frequency is fixed, so the transform can perform signal frequency division, where the dominant frequency components of the extracted sub-band signal carry essentially the same information as the corresponding components of the original signal, and the subband signal bandwidth can be regulated effectively by the transform's resolution factor. Third, a time-frequency filter consisting of an IGWT and its inverse transform can accurately locate target areas in the time-frequency field and perform filtering in a given time-frequency range. The complete IGW transform's properties are investigated using simulation experiments and test cases, showing positive results for seismic signal processing and interpretation, such as enhancing seismic signal resolution, permitting signal frequency division, and allowing small faults to be identified.

  17. Compressed Sensing in On-Grid MIMO Radar.

    PubMed

    Minner, Michael F

    2015-01-01

    The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances in Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared nonnegative regularization method.

  18. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    PubMed

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means to improve the image quality in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. The algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection, exploiting the characteristics of the wavelet transform and the Sobel operator to process the sub-images of the wavelet decomposition separately: the Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while ensuring that the high frequencies remain undistorted. The target interpolation image is then obtained through wavelet reconstruction. In this article, we perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, the proposed method is verified to be effective and superior.
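
    The decompose / process-the-low-band / reconstruct pattern can be sketched with PyWavelets and SciPy as follows. The Haar wavelet, the random stand-in slice, and the placeholder where edge-guided matching interpolation would act are all assumptions for illustration, not the authors' implementation.

      import numpy as np
      import pywt
      from scipy import ndimage

      rng = np.random.default_rng(4)
      img = rng.random((128, 128))            # stand-in for one CT slice

      # One-level 2D wavelet decomposition: low-frequency LL plus detail bands.
      LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')

      # Sobel edge map of the LL band would guide the 3D matching step; the
      # detail bands are left untouched so high frequencies stay undistorted.
      edges = np.hypot(ndimage.sobel(LL, axis=0), ndimage.sobel(LL, axis=1))
      LL_processed = LL                       # placeholder for edge-guided interpolation

      rec = pywt.idwt2((LL_processed, (LH, HL, HH)), 'haar')
      print('max LL edge response:', round(float(edges.max()), 3))
      print('reconstruction error:', float(np.abs(rec - img).max()))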

  19. Global boundary flattening transforms for acoustic propagation under rough sea surfaces.

    PubMed

    Oba, Roger M

    2010-07-01

    This paper introduces a conformal transform of an acoustic domain under a one-dimensional, rough sea surface onto a domain with a flat top. This non-perturbative transform can include many hundreds of wavelengths of the surface variation. The resulting two-dimensional, flat-topped domain allows direct application of any existing, acoustic propagation model of the Helmholtz or wave equation using transformed sound speeds. Such a transform-model combination applies where the surface particle velocity is much slower than sound speed, such that the boundary motion can be neglected. Once the acoustic field is computed, the bijective (one-to-one and onto) mapping permits the field interpolation in terms of the original coordinates. The Bergstrom method for inverse Riemann maps determines the transform by iterated solution of an integral equation for a surface matching term. Rough sea surface forward scatter test cases provide verification of the method using a particular parabolic equation model of the Helmholtz equation.

  20. Cucheb: A GPU implementation of the filtered Lanczos procedure

    NASA Astrophysics Data System (ADS)

    Aurentz, Jared L.; Kalantzis, Vassilis; Saad, Yousef

    2017-11-01

    This paper describes the software package Cucheb, a GPU implementation of the filtered Lanczos procedure for the solution of large sparse symmetric eigenvalue problems. The filtered Lanczos procedure uses a carefully chosen polynomial spectral transformation to accelerate convergence of the Lanczos method when computing eigenvalues within a desired interval. This method has proven particularly effective for eigenvalue problems that arise in electronic structure calculations and density functional theory. We compare our implementation against an equivalent CPU implementation and show that using the GPU can reduce the computation time by more than a factor of 10.

    Program summary:
    Program title: Cucheb
    Program files doi: http://dx.doi.org/10.17632/rjr9tzchmh.1
    Licensing provisions: MIT
    Programming language: CUDA C/C++
    Nature of problem: Electronic structure calculations require the computation of all eigenvalue-eigenvector pairs of a symmetric matrix that lie inside a user-defined real interval.
    Solution method: To compute all the eigenvalues within a given interval, a polynomial spectral transformation is constructed that maps the desired eigenvalues of the original matrix to the exterior of the spectrum of the transformed matrix. The Lanczos method is then used to compute the desired eigenvectors of the transformed matrix, which are then used to recover the desired eigenvalues of the original matrix. The bulk of the operations are executed in parallel using a graphics processing unit (GPU).
    Runtime: Variable, depending on the number of eigenvalues sought and the size and sparsity of the matrix.
    Additional comments: Cucheb is compatible with CUDA Toolkit v7.0 or greater.
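
    The heart of such a filter is applying a polynomial in the matrix to a vector with the three-term Chebyshev recurrence, without ever forming the polynomial of the matrix explicitly. A minimal sketch, assuming the spectrum has already been mapped into [-1, 1] and using a diagonal toy operator so the filtering effect is directly visible:

      import numpy as np
      import scipy.sparse as sp

      rng = np.random.default_rng(5)
      n = 400
      eigs = np.linspace(-1, 1, n)
      A = sp.diags(eigs)                      # toy operator with known spectrum
      v = rng.standard_normal(n)

      # Chebyshev coefficients of the indicator of [lo, hi] inside [-1, 1].
      lo, hi, deg = 0.2, 0.6, 80
      ta, tb = np.arccos(lo), np.arccos(hi)
      c = np.empty(deg + 1)
      c[0] = (ta - tb) / np.pi
      k = np.arange(1, deg + 1)
      c[1:] = 2 * (np.sin(k * ta) - np.sin(k * tb)) / (k * np.pi)

      # y = p(A) v via the recurrence T_{k+1}(A)v = 2 A T_k(A)v - T_{k-1}(A)v.
      t_prev, t_curr = v, A @ v
      y = c[0] * t_prev + c[1] * t_curr
      for ck in c[2:]:
          t_prev, t_curr = t_curr, 2 * (A @ t_curr) - t_prev
          y = y + ck * t_curr

      inside = (eigs >= lo) & (eigs <= hi)
      print('mean gain inside band :', float(np.abs(y[inside] / v[inside]).mean()))
      print('mean gain outside band:', float(np.abs(y[~inside] / v[~inside]).mean()))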

  1. Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l1 regularization term. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback in using TV as an l1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.
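
    For the simpler denoising problem min_u ||Phi u||_1 + (mu/2)||u - f||^2 with an orthonormal sparsifying transform Phi, each Split Bregman step has a closed form. The sketch below uses an orthonormal DCT standing in for the PA transform and does not reproduce the Fourier data fidelity of the paper; mu and lambda are assumed values.

      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(6)

      # Piecewise-smooth 1D signal plus noise.
      n = 256
      u_true = np.concatenate([np.zeros(128), np.ones(128)]) + 0.002 * np.arange(n)
      f = u_true + 0.1 * rng.standard_normal(n)

      Phi = lambda z: dct(z, norm='ortho')    # orthonormal analysis transform
      PhiT = lambda z: idct(z, norm='ortho')  # its adjoint/inverse
      shrink = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      mu, lam = 3.0, 1.0
      u, d, b = f.copy(), Phi(f), np.zeros(n)
      for _ in range(100):
          # u-update is closed-form because Phi^T Phi = I.
          u = PhiT((mu * Phi(f) + lam * (d - b)) / (mu + lam))
          # d-update by soft shrinkage, then the Bregman variable update.
          d = shrink(Phi(u) + b, 1.0 / lam)
          b = b + Phi(u) - d

      print('noisy error   :', float(np.linalg.norm(f - u_true)))
      print('denoised error:', float(np.linalg.norm(u - u_true)))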

  2. Computational wavelength resolution for in-line lensless holography: phase-coded diffraction patterns and wavefront group-sparsity

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Shevkunov, Igor; Petrov, Nikolay V.; Egiazarian, Karen

    2017-06-01

    In-line lensless holography is considered with a random phase modulation at the object plane. The forward wavefront propagation is modelled using the Fourier transform with the angular spectrum transfer function. The multiple intensities (holograms) recorded by the sensor are random due to the random phase modulation, and noisy with a Poissonian noise distribution. It is shown by computational experiments that high-accuracy reconstructions can be achieved with resolution going up to two thirds of the wavelength. With respect to the sensor pixel size, this is super-resolution by a factor of 32. The algorithm designed for optimal super-resolution phase/amplitude reconstruction from Poissonian data is based on the general methodology developed for phase retrieval with pixel-wise resolution in V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude", http://www.cs.tut.fi/ lasip/DDT/index3.html.

  3. An adaptive image sparse reconstruction method combined with nonlocal similarity and cosparsity for mixed Gaussian-Poisson noise removal

    NASA Astrophysics Data System (ADS)

    Chen, Yong-fei; Gao, Hong-xia; Wu, Zi-ling; Kang, Hui

    2018-01-01

    Compressed sensing (CS) has achieved great success in single noise removal. However, it cannot restore images contaminated with mixed noise efficiently. This paper introduces nonlocal similarity and cosparsity, inspired by compressed sensing, to overcome the difficulties in mixed noise removal, in which nonlocal similarity explores the signal sparsity from similar patches, and cosparsity assumes that the signal is sparse after a possibly redundant transform. Meanwhile, an adaptive scheme is designed to keep the balance between mixed noise removal and detail preservation based on local variance. Finally, IRLSM and RACoSaMP are adopted to solve the objective function. Experimental results demonstrate that the proposed method is superior to conventional CS methods, like K-SVD, and to the state-of-the-art nonlocally centralized sparse representation (NCSR) method, in terms of both visual results and quantitative measures.

  4. ICON: 3D reconstruction with 'missing-information' restoration in biological electron tomography.

    PubMed

    Deng, Yuchen; Chen, Yu; Zhang, Yan; Wang, Shengliu; Zhang, Fa; Sun, Fei

    2016-07-01

    Electron tomography (ET) plays an important role in revealing biological structures, ranging from the macromolecular to the subcellular scale. Due to limited tilt angles, ET reconstruction always suffers from 'missing wedge' artifacts, severely weakening further biological interpretation. In this work, we developed an algorithm called Iterative Compressed-sensing Optimized Non-uniform fast Fourier transform reconstruction (ICON) based on the theory of compressed sensing and the assumption of sparsity of biological specimens. ICON can significantly restore the missing information in comparison with other reconstruction algorithms. More importantly, we used the leave-one-out method to verify the validity of the restored information for both simulated and experimental data. The significant improvement in sub-tomogram averaging by ICON indicates its great potential in the future application of high-resolution structural determination of macromolecules in situ. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.

  6. Dual Low-Rank Pursuit: Learning Salient Features for Saliency Detection.

    PubMed

    Lang, Congyan; Feng, Jiashi; Feng, Songhe; Wang, Jingdong; Yan, Shuicheng

    2016-06-01

    Saliency detection is an important procedure for machines to understand the visual world as humans do. In this paper, we consider a specific saliency detection problem of predicting human eye fixations when they freely view natural images, and propose a novel dual low-rank pursuit (DLRP) method. DLRP learns saliency-aware feature transformations by utilizing available supervision information and constructs discriminative bases for effectively detecting human fixation points under the popular low-rank and sparsity-pursuit framework. Benefiting from the embedded high-level information in the supervised learning process, DLRP is able to predict fixations accurately without performing the expensive object segmentation used in previous works. Comprehensive experiments clearly show the superiority of the proposed DLRP method over the established state-of-the-art methods. We also empirically demonstrate that DLRP provides stronger generalization performance across different data sets and inherits the advantages of both the bottom-up- and top-down-based saliency detection methods.

  7. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning, but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state-of-the-art column generation method for 4π beam angle selection.
    Methods: The beam angle selection problem is formulated as a large-scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm.
    Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method.
    Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state-of-the-art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
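
    Within FISTA the group sparsity term enters only through its proximal operator, group soft-thresholding, which zeroes out entire candidate beams whose fluence norm is small; the group sizes and threshold below are assumed values for illustration.

      import numpy as np

      def prox_group_l2(x, groups, t):
          # Proximal operator of t * sum_g ||x_g||_2: groups whose norm falls
          # below t are zeroed, deactivating the corresponding candidate beam.
          out = x.copy()
          for g in groups:
              nrm = np.linalg.norm(x[g])
              out[g] = 0.0 if nrm <= t else (1.0 - t / nrm) * x[g]
          return out

      rng = np.random.default_rng(7)
      n_beams, beamlets = 6, 10
      x = rng.standard_normal(n_beams * beamlets)       # stacked beamlet fluences
      groups = [slice(b * beamlets, (b + 1) * beamlets) for b in range(n_beams)]

      y = prox_group_l2(x, groups, t=3.0)
      print('beams kept active:', [b for b, g in enumerate(groups) if np.linalg.norm(y[g]) > 0])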

  8. Evidence for an Evolutionarily Conserved Memory Coding Scheme in the Mammalian Hippocampus

    PubMed Central

    Thome, Alexander; Lisanby, Sarah H.; McNaughton, Bruce L.

    2017-01-01

    Decades of research identify the hippocampal formation as central to memory storage and recall. Events are stored via distributed population codes, the parameters of which (e.g., sparsity and overlap) determine both storage capacity and fidelity. However, it remains unclear whether the parameters governing information storage are similar between species. Because episodic memories are rooted in the space in which they are experienced, the hippocampal response to navigation is often used as a proxy to study memory. Critically, recent studies in rodents that mimic the conditions typical of navigation studies in humans and nonhuman primates (i.e., virtual reality) show that reduced sensory input alters hippocampal representations of space. The goal of this study was to quantify this effect and determine whether there are commonalities in information storage across species. Using functional molecular imaging, we observe that navigation in virtual environments elicits activity in fewer CA1 neurons relative to real-world conditions. Conversely, comparable neuronal activity is observed in hippocampus region CA3 and the dentate gyrus under both conditions. Surprisingly, we also find evidence that the absolute number of neurons used to represent an experience is relatively stable between nonhuman primates and rodents. We propose that this convergence reflects an optimal ensemble size for episodic memories. SIGNIFICANCE STATEMENT One primary factor constraining memory capacity is the sparsity of the engram, the proportion of neurons that encode a single experience. Investigating sparsity in humans is hampered by the lack of single-cell resolution and differences in behavioral protocols. Sparsity can be quantified in freely moving rodents, but extrapolating these data to humans assumes that information storage is comparable across species and is robust to restraint-induced reduction in sensory input. Here, we test these assumptions and show that species differences in brain size build memory capacity without altering the structure of the data being stored. Furthermore, sparsity in most of the hippocampus is resilient to reduced sensory information. This information is vital to integrating animal data with human imaging navigation studies. PMID:28174334

  9. Fourier transform of delayed fluorescence as an indicator of herbicide concentration.

    PubMed

    Guo, Ya; Tan, Jinglu

    2014-12-21

    It is well known that delayed fluorescence (DF) from Photosystem II (PSII) of plant leaves can potentially be used to sense herbicide pollution and evaluate the effect of herbicides on plant leaves. Research using DF as a measure of herbicides has mainly been conducted in the time domain, and often only qualitative correlation was obtained. The Fourier transform is widely used to analyze signals, and viewing the DF signal in the frequency domain through the Fourier transform may allow separation of signal components and provide a quantitative method for sensing herbicides. However, the Fourier transform of DF has not previously been used as an indicator of herbicide. In this work, the relationship between the Fourier transform of DF and herbicide concentration was theoretically modelled and analyzed, which immediately yielded a quantitative method to measure herbicide concentration in the frequency domain. Experiments were performed to validate the developed method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. A study on multiresolution lossless video coding using inter/intra frame adaptive prediction

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro

    2003-06-01

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra-frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance, achieved with low computational cost and without any increase in side information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.

  11. In-situ visualization of stress-dependent bulk magnetic domain formation by neutron grating interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Betz, B. (École Polytechnique Fédérale de Lausanne, NXMM Laboratory, IMX, CH-1015 Lausanne); Rauscher, P.

    The performance and degree of efficiency of industrial transformers are directly influenced by the magnetic properties of high-permeability steel laminations (HPSLs). Industrial transformer cores are built of stacks of single HPSLs. While the insulating coating on each HPSL reduces eddy-current losses in the transformer core, the coating also induces favorable inter-granular tensile stresses that significantly influence the underlying magnetic domain structure. Here, we show that the neutron dark-field image can be used to analyze the influence of the coating on the volume and supplementary surface magnetic domain structures. To visualize the stress effect of the coating on the bulk domain formation, we used an uncoated HPSL and stepwise increased the applied external tensile stress up to 20 MPa. We imaged the domain configuration of the intermediate stress states and were able to reproduce the original domain structure of the coated state. Furthermore, we were able to visualize how the applied stresses lead to a refinement of the volume domain structure and the suppression and reoccurrence of supplementary domains.

  12. Beyond the Sparsity-Based Target Detector: A Hybrid Sparsity and Statistics Based Detector for Hyperspectral Images.

    PubMed

    Du, Bo; Zhang, Yuxiang; Zhang, Liangpei; Tao, Dacheng

    2016-08-18

    Hyperspectral images provide great potential for target detection; however, they also introduce new challenges, so that hyperspectral target detection should be treated as a new problem and modeled differently. Many classical detectors have been proposed based on the linear mixing model and the sparsity model. However, the former type of model cannot deal well with spectral variability in limited endmembers, and the latter usually treats target detection as a simple classification problem and pays less attention to the low target probability. In this case, can we find an efficient way to utilize both the high-dimensional features behind hyperspectral images and the limited target information to extract small targets? This paper proposes a novel sparsity-based detector named the hybrid sparsity and statistics detector (HSSD) for target detection in hyperspectral imagery, which can effectively deal with the above two problems. The proposed algorithm designs a hypothesis-specific dictionary based on the prior hypotheses for the test pixel, which can avoid an imbalanced number of training samples for a class-specific dictionary. Then, a purification process is employed for the background training samples in order to construct an effective competition between the two hypotheses. Next, a sparse representation based binary hypothesis model merged with additive Gaussian noise is proposed to represent the image. Finally, a generalized likelihood ratio test is performed to obtain a more robust detection decision than reconstruction residual based detection methods. Extensive experimental results with three hyperspectral datasets confirm that the proposed HSSD algorithm clearly outperforms the state-of-the-art target detectors.

  13. Sparsity-driven coupled imaging and autofocusing for interferometric SAR

    NASA Astrophysics Data System (ADS)

    Zengin, Oğuzcan; Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    We propose a sparsity-driven method for coupled image formation and autofocusing based on multi-channel data collected in interferometric synthetic aperture radar (IfSAR). Relative phase between SAR images contains valuable information. For example, it can be used to estimate the height of the scene in SAR interferometry. However, this relative phase could be degraded when independent enhancement methods are used over SAR image pairs. Previously, Ramakrishnan et al. proposed a coupled multi-channel image enhancement technique, based on a dual descent method, which exhibits better performance in phase preservation compared to independent enhancement methods. Their work involves a coupled optimization formulation that uses a sparsity enforcing penalty term as well as a constraint tying the multichannel images together to preserve the cross-channel information. In addition to independent enhancement, the relative phase between the acquisitions can be degraded due to other factors as well, such as platform location uncertainties, leading to phase errors in the data and defocusing in the formed imagery. The performance of airborne SAR systems can be affected severely by such errors. We propose an optimization formulation that combines Ramakrishnan et al.'s coupled IfSAR enhancement method with the sparsity-driven autofocus (SDA) approach of Önhon and Çetin to alleviate the effects of phase errors due to motion errors in the context of IfSAR imaging. Our method solves the joint optimization problem with a Lagrangian optimization method iteratively. In our preliminary experimental analysis, we have obtained results of our method on synthetic SAR images and compared its performance to existing methods.

  14. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    PubMed Central

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of the convergence speed and the steady-state error via the combination of a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588

  15. Fast and robust reconstruction for fluorescence molecular tomography via a sparsity adaptive subspace pursuit method.

    PubMed

    Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie

    2014-02-01

    Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Several innovative strategies, including subspace projection, a bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness of FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias less than 1 mm; the method is much faster than mainstream reconstruction methods; and the approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of practical FMT application with the SASP method.
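
    SASP builds on the classic subspace pursuit iteration; a minimal fixed-sparsity version of that core (without the paper's sparsity-adaptive and backtracking refinements, and with assumed problem sizes) can be sketched as follows.

      import numpy as np

      def subspace_pursuit(A, y, k, iters=20):
          # Basic subspace pursuit for y = A x with a k-sparse x.
          n = A.shape[1]
          support = np.argsort(np.abs(A.T @ y))[-k:]
          for _ in range(iters):
              # Expand: merge support with the k strongest residual correlations.
              x = np.zeros(n)
              x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              r = y - A @ x
              cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-k:])
              # Shrink: solve on the merged set, keep the k largest coefficients.
              coef, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
              new_support = cand[np.argsort(np.abs(coef))[-k:]]
              if set(new_support) == set(support):
                  break
              support = new_support
          x = np.zeros(n)
          x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
          return x

      rng = np.random.default_rng(8)
      m, n, k = 60, 200, 4
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      x_hat = subspace_pursuit(A, A @ x_true, k)
      print('support recovered:', np.array_equal(np.flatnonzero(x_hat), np.flatnonzero(x_true)))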

  16. Sampling limits for electron tomography with sparsity-exploiting reconstructions.

    PubMed

    Jiang, Yi; Padgett, Elliot; Hovden, Robert; Muller, David A

    2018-03-01

    Electron tomography (ET) has become a standard technique for 3D characterization of materials at the nano-scale. Traditional reconstruction algorithms such as weighted back projection suffer from disruptive artifacts with insufficient projections. Popularized by compressed sensing, sparsity-exploiting algorithms have been applied to experimental ET data and show promise for improving reconstruction quality or reducing the total beam dose applied to a specimen. Nevertheless, theoretical bounds for these methods have been less explored in the context of ET applications. Here, we perform numerical simulations to investigate the performance of ℓ1-norm and total-variation (TV) minimization under various imaging conditions. From 36,100 different simulated structures, our results show that specimens with more complex structures generally require more projections for exact reconstruction. However, once sufficient data is acquired, dividing the beam dose over more projections provides no improvement, analogous to the traditional dose-fraction theorem. Moreover, a limited tilt range of ±75° or less can result in distorting artifacts in sparsity-exploiting reconstructions. The influence of optimization parameters on reconstructions is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Distributed Unmixing of Hyperspectral Data with Sparsity Constraint

    NASA Astrophysics Data System (ADS)

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.

    2017-09-01

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are widely used in the SU problem. One of the constraints added to NMF is the sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel in the hyperspectral image considered as a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, relative to distributed unmixing without the sparsity constraint, at SNR = 25 dB.

  18. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  19. Assessment of User Home Location Geoinference Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Joshua J.; Bell, Eric B.; Corley, Courtney D.

    2015-05-29

    This study presents an assessment of multiple approaches to determine the home and/or other important locations of a Twitter user. In this study, we present a unique approach to the problem of geotagged data sparsity in social media when performing geoinferencing tasks. Given the sparsity of explicitly geotagged Twitter data, the ability to perform accurate and reliable user geolocation from a limited number of geotagged posts has proven to be quite useful. In our survey, we achieved accuracy rates of over 86% in matching Twitter user profile locations with their inferred home locations derived from geotagged posts.

  20. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    NASA Astrophysics Data System (ADS)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially if a seizure occurs, in which case it is important to identify it. The studies conducted in order to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to know the pattern of brain activity. The inverse problem, that is, determining the underlying activity given the field sampled at the different electrodes, is more difficult because the problem may not have a unique solution, or the search for the solution is made difficult by a low spatial resolution which may not allow distinguishing between activities involving sources close to each other. Thus, sources of interest may be obscured or not detected, and known methods for the source localization problem such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve a better resolution by exploiting sparsity: if the number of sources is small, the neural power vs. location is sparse as a result. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information on the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov regularization, which calculates a solution that is the best compromise between two cost functions to minimize, one related to the fitting of the data and another concerning the maintenance of the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the model considered for the head and brain sources, the result obtained shows a significant improvement compared to the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the constraints of spatial sparsity on the field signal allow power to be concentrated in the directions of active sources, and consequently it is possible to calculate the position of the sources within the considered volume conductor. The method is then tested on real EEG data as well. The result is in accordance with the clinical report, even if improvements are necessary to obtain more accurate estimates of the positions of the sources.
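
    For reference, the standard MUSIC scan that the sparsity prior is meant to sharpen looks as follows in its classic narrowband form; a uniform linear array is used here as a stand-in geometry, since reproducing an EEG leadfield model is beyond a short sketch.

      import numpy as np

      rng = np.random.default_rng(9)
      m, snapshots = 8, 200
      true_deg = np.array([-20.0, 35.0])      # two assumed source directions

      def steering(deg):
          # Half-wavelength element spacing.
          return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(np.deg2rad(deg)))

      S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
      X = steering(true_deg) @ S
      X = X + 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

      R = X @ X.conj().T / snapshots          # sample covariance
      vals, vecs = np.linalg.eigh(R)
      En = vecs[:, :-2]                       # noise subspace (all but 2 largest)

      grid = np.linspace(-90, 90, 721)
      # MUSIC pseudospectrum: large where steering vectors are orthogonal to En.
      p = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

      peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
      best = peaks[np.argsort(p[peaks])[-2:]]
      print('estimated DOAs (deg):', np.sort(grid[best]))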

  1. Equivalence of linear canonical transform domains to fractional Fourier domains and the bicanonical width product: a generalization of the space-bandwidth product.

    PubMed

    Oktem, Figen S; Ozaktas, Haldun M

    2010-08-01

    Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.
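
    For reference, one common unit-determinant convention for the LCT with parameter matrix (a, b; c, d), ad - bc = 1 and b != 0, is sketched below; this is a standard textbook definition given for orientation, not a formula quoted from the paper.

    ```latex
    (\mathcal{C}_{M} f)(u) \;=\; \sqrt{\frac{1}{i b}}\;
      e^{\,i\pi (d/b)\, u^{2}} \int_{-\infty}^{\infty}
      e^{-\,i 2\pi (1/b)\, u u'}\; e^{\,i\pi (a/b)\, u'^{2}}\, f(u')\,\mathrm{d}u',
    \qquad ad - bc = 1 .
    ```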

  2. Simultaneous storage of medical images in the spatial and frequency domain: a comparative study.

    PubMed

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; Uc, Niranjan

    2004-06-05

    Digital watermarking is a technique for hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The patient information is encrypted before interleaving with the images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. The results show that the process does not affect picture quality, which is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than DCT and DWT domain interleaving. For spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient.
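
    A toy sketch of the spatial-domain LSB interleaving stage follows (illustrative only; the compression, encryption, and frequency-domain variants are omitted). It makes concrete why flipping a pixel's LSB changes brightness by only 1 part in 256.

    ```python
    import numpy as np

    def embed_lsb(cover, bits):
        """Hide a bit sequence in pixel LSBs of an 8-bit image."""
        flat = cover.astype(np.uint8).ravel().copy()
        if len(bits) > flat.size:
            raise ValueError("payload too large for cover image")
        for i, b in enumerate(bits):
            flat[i] = (flat[i] & 0xFE) | (b & 1)   # overwrite least significant bit
        return flat.reshape(cover.shape)

    def extract_lsb(stego, n_bits):
        """Read the payload back from the first n_bits pixel LSBs."""
        return [int(p) & 1 for p in stego.ravel()[:n_bits]]
    ```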

  3. Improved FastICA algorithm in fMRI data analysis using the sparsity property of the sources.

    PubMed

    Ge, Ruiyang; Wang, Yubao; Zhang, Jipeng; Yao, Li; Zhang, Hang; Long, Zhiying

    2016-04-01

    As a blind source separation technique, independent component analysis (ICA) has many applications in functional magnetic resonance imaging (fMRI). Although either temporal or spatial prior information has been introduced into constrained ICA and semi-blind ICA methods to improve the performance of ICA in fMRI data analysis, certain types of additional prior information, such as sparsity, have seldom been added to ICA algorithms as constraints. In this study, we propose SparseFastICA, a method that adds source sparsity as a constraint to the FastICA algorithm to improve the performance of the widely used FastICA. The source sparsity is estimated through a smoothed ℓ0 norm method. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of SparseFastICA and compared its performance with FastICA and Infomax ICA. Results on both simulated and real fMRI data demonstrated the feasibility and robustness of SparseFastICA for source separation in fMRI data, and showed that SparseFastICA has better robustness to noise and better spatial detection power than FastICA. Although the spatial detection power of SparseFastICA and Infomax did not differ significantly, SparseFastICA had a faster computation speed than Infomax. More importantly, SparseFastICA outperformed FastICA in robustness and spatial detection power and can be used to identify more accurate brain networks than the FastICA algorithm. Copyright © 2016 Elsevier B.V. All rights reserved.
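
    The smoothed ℓ0 idea can be sketched with the common Gaussian surrogate below, which tends to the count of nonzero entries as sigma shrinks; the exact surrogate and annealing schedule used in SparseFastICA may differ, so treat this as illustrative.

    ```python
    import numpy as np

    def smoothed_l0(s, sigma):
        """Gaussian smoothed-l0 measure: -> ||s||_0 as sigma -> 0."""
        return np.sum(1.0 - np.exp(-s ** 2 / (2.0 * sigma ** 2)))
    ```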

  4. Two-population model for medial temporal lobe neurons: The vast majority are almost silent

    NASA Astrophysics Data System (ADS)

    Magyar, Andrew; Collins, John

    2015-07-01

    Recordings in the human medial temporal lobe have found many neurons that respond to pictures (and related stimuli) of just one particular person of those presented. It has been proposed that these are concept cells, responding to just a single concept. However, a direct experimental test of the concept cell idea appears impossible, because it would need the measurement of the response of each cell to enormous numbers of other stimuli. Here we propose a new statistical method for analysis of the data that gives a more powerful way to analyze how close data are to the concept-cell idea. Central to the model is the neuronal sparsity, defined as the total fraction of stimuli that elicit an above-threshold response in the neuron. The model exploits the large number of sampled neurons to give sensitivity to situations where the average response sparsity is much less than one response for the number of presented stimuli. We show that a conventional model where a single sparsity is postulated for all neurons gives an extremely poor fit to the data. In contrast, a model with two dramatically different populations gives an excellent fit to data from the hippocampus and entorhinal cortex. In the hippocampus, one population has 7% of the cells with a 2.6% sparsity. But a much larger fraction (93%) respond to only 0.1% of the stimuli. This can result in an extreme bias in the responsiveness of reported neurons compared with a typical neuron. Finally, we show how to allow for the fact that some identified units correspond to multiple neurons and find that our conclusions at the neural level are quantitatively changed but strengthened, with an even stronger difference between the two populations.

  5. Modeling Multiple Risks: Hidden Domain of Attraction

    DTIC Science & Technology

    2012-01-01

    improve joint tail probability approximation but the deficiency can be remedied by a more general approach which we call hidden domain of attraction (HDA)... HRV is a special case of HDA. If the distribution of X does not have MRV but (1.2) still holds, we may retrieve the MRV setup by transforming the... potential advantage in some circumstances of the notion of HDA is that it does not require that we transform components. Performing such transformations on

  6. Inverted electro-mechanical behaviour induced by the irreversible domain configuration transformation in (K,Na)NbO3-based ceramics

    PubMed Central

    Huan, Yu; Wang, Xiaohui; Koruza, Jurij; Wang, Ke; Webber, Kyle G.; Hao, Yanan; Li, Longtu

    2016-01-01

    Miniaturization of domains to the nanometer scale has been previously reported in many piezoelectrics with two-phase coexistence. Despite the observation of nanoscale domain configuration near the polymorphic phase transition (PPT) region in virgin (K0.5Na0.5)NbO3 (KNN) based ceramics, it remains unclear how this domain state responds to external loads and influences the macroscopic electro-mechanical properties. To this end, the electric-field-induced and stress-induced strain curves of KNN-based ceramics over a wide compositional range across the PPT were characterized. It was found that the coercive field of the virgin samples was highest in the PPT region, which was related to the inhibited domain wall motion due to the presence of nanodomains. However, the coercive field was found to be the lowest in the PPT region after electrical poling. This was related to the irreversible transformation of the nanodomains into micron-sized domains during the poling process. With a similar micron-sized domain configuration for all poled ceramics, the domains in the PPT region move more easily due to the additional polarization vectors. The results demonstrate that the poling process can give rise to an irreversible domain configuration transformation and thereby account for the inverted macroscopic piezoelectricity in the PPT region of KNN-based ceramics. PMID:26915972

  7. The analysis of decimation and interpolation in the linear canonical transform domain.

    PubMed

    Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li

    2016-01-01

    Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying the definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are derived. Finally, as an application, the perfect reconstruction expressions for differential filters in the LCT domain are presented. The theorems proposed in this study are the basis for generalizations of multirate signal processing in the LCT domain and can advance filter bank theory in the LCT domain.
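
    As plain time-domain background, the two building blocks look like the sketch below; the paper's contribution, the equivalent (lowpass) filtering stages re-derived in the LCT domain, is omitted here.

    ```python
    import numpy as np

    def decimate_by(x, M):
        """Keep every M-th sample (the anti-alias equivalent filter is omitted)."""
        return x[::M]

    def expand_by(x, L):
        """Insert L-1 zeros between samples; an equivalent interpolation
        filter in the chosen transform domain would follow."""
        y = np.zeros(len(x) * L, dtype=x.dtype)
        y[::L] = x
        return y
    ```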

  8. Turbulence excited frequency domain damping measurement and truncation effects

    NASA Technical Reports Server (NTRS)

    Soovere, J.

    1976-01-01

    Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.

  9. Noise suppression in surface microseismic data

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Batzle, Mike; Behura, Jyoti; Willis, Mark; Haines, Seth S.; Davidson, Michael

    2012-01-01

    We introduce a passive noise suppression technique based on the τ − p transform. In the τ − p domain, one can separate microseismic events from surface noise based on distinct characteristics that are not visible in the time-offset domain. By applying the inverse τ − p transform to the separated microseismic event, we suppress the surface noise in the data. Our technique significantly improves the signal-to-noise ratios of the microseismic events and is superior to existing techniques for passive noise suppression in the sense that it preserves the waveform.
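
    A minimal slant-stack sketch of the forward τ − p transform is shown below, with hypothetical trace matrix d, time axis t, offsets x, and slowness values p_values; practical implementations add anti-aliasing and a matching inverse operator.

    ```python
    import numpy as np

    def tau_p_transform(d, t, x, p_values):
        """Slant stack: m(tau, p) = sum over offsets of d(tau + p*x, x)."""
        dt = t[1] - t[0]
        n_t = d.shape[0]
        m = np.zeros((n_t, len(p_values)))
        for ip, p in enumerate(p_values):
            for ix, offset in enumerate(x):
                idx = np.arange(n_t) + (p * offset) / dt   # shifted sample index
                m[:, ip] += np.interp(idx, np.arange(n_t), d[:, ix],
                                      left=0.0, right=0.0)
        return m
    ```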

  10. Data-driven discovery of partial differential equations

    PubMed Central

    Rudy, Samuel H.; Brunton, Steven L.; Proctor, Joshua L.; Kutz, J. Nathan

    2017-01-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg–de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable. PMID:28508044
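
    A compact sketch of the sparse-regression core follows, using sequentially thresholded ridge regression over a library Theta whose columns are candidate terms (u, u_x, u*u_x, u_xx, ...). The published method (PDE-FIND) uses a related STRidge procedure with Pareto-based model selection, so this is a simplified stand-in with hypothetical names.

    ```python
    import numpy as np

    def stls(Theta, u_t, lam=1e-2, threshold=0.05, n_iter=10):
        """Sequentially thresholded ridge regression: u_t = Theta @ xi, xi sparse."""
        xi = np.linalg.lstsq(Theta, u_t, rcond=None)[0]
        for _ in range(n_iter):
            xi[np.abs(xi) < threshold] = 0.0          # prune negligible terms
            big = np.abs(xi) > 0
            if big.any():                             # refit the surviving terms
                A = Theta[:, big]
                xi[big] = np.linalg.solve(A.T @ A + lam * np.eye(big.sum()),
                                          A.T @ u_t)
        return xi
    ```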

  11. Application of Genetic Algorithm and Particle Swarm Optimization techniques for improved image steganography systems

    NASA Astrophysics Data System (ADS)

    Jude Hemanth, Duraisamy; Umamaheswari, Subramaniyan; Popescu, Daniela Elena; Naaji, Antoanela

    2016-01-01

    Image steganography is one of the ever-growing computational approaches that has found application in many fields. Frequency domain techniques are highly preferred for image steganography applications; however, there are significant drawbacks associated with these techniques. In transform-based approaches, the secret data is embedded in a random manner in the transform coefficients of the cover image. These transform coefficients may not be optimal in terms of stego image quality and embedding capacity. In this work, the application of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) has been explored in the context of determining the optimal coefficients in these transforms. Frequency domain transforms such as the Bandelet Transform (BT) and the Finite Ridgelet Transform (FRIT) are used in combination with GA and PSO to improve the efficiency of the image steganography system.

  12. Image encryption with chaotic map and Arnold transform in the gyrator transform domains

    NASA Astrophysics Data System (ADS)

    Sang, Jun; Luo, Hongling; Zhao, Jun; Alam, Mohammad S.; Cai, Bin

    2017-05-01

    An image encryption method combining a chaotic map and the Arnold transform in the gyrator transform domains is proposed. Firstly, the original secret image is XOR-ed with a random binary sequence generated by a logistic map. Then, the gyrator transform is performed. Finally, the amplitude and phase of the gyrator transform are permuted by the Arnold transform. The decryption procedure is the inverse of encryption. The secret keys used in the proposed method include the control parameter and initial value of the logistic map, the rotation angle of the gyrator transform, and the transform number of the Arnold transform. Therefore, the key space is large, while the key data volume is small. Numerical simulation was conducted to demonstrate the effectiveness of the proposed method, and the security analysis was performed in terms of the histogram of the encrypted image, the sensitivity to the secret keys, decryption upon ciphertext loss, and resistance to the chosen-plaintext attack.
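
    The XOR stage can be sketched as below, with the logistic map acting as keystream generator (the gyrator and Arnold transform steps are omitted); r and x0 play the role of the secret keys.

    ```python
    import numpy as np

    def logistic_keystream(r, x0, n):
        """Byte stream from the logistic map x <- r*x*(1 - x)."""
        x, out = x0, np.empty(n, dtype=np.uint8)
        for i in range(n):
            x = r * x * (1.0 - x)
            out[i] = int(x * 256) & 0xFF
        return out

    def xor_stage(img, r=3.99, x0=0.4):
        """XOR an 8-bit image with the keystream; applying it twice decrypts."""
        flat = img.astype(np.uint8).ravel()
        return (flat ^ logistic_keystream(r, x0, flat.size)).reshape(img.shape)
    ```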

  13. Nonnegative constraint quadratic program technique to enhance the resolution of γ spectra

    NASA Astrophysics Data System (ADS)

    Li, Jinglun; Xiao, Wuyun; Ai, Xianyun; Chen, Ye

    2018-04-01

    The concepts of the nonnegative least squares problem (NNLS) and the linear complementarity problem (LCP) are introduced for the resolution enhancement of γ spectra. The respective algorithms, the active set method and the primal-dual interior point method, are applied to solve the above two problems. Mathematically, the nonnegativity constraint results in the sparsity of the optimal solution of the deconvolution, and it is this sparsity that enhances the resolution. Finally, a comparison in peak position accuracy and computation time is made between these two methods and the boosted L-R and Gold methods.
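
    In NNLS form, the deconvolution can be sketched directly with SciPy, assuming a known detector response matrix H and measured spectrum y (hypothetical names); the paper's active-set and interior-point solvers address the same problem.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def enhance_spectrum(H, y):
        """min_x ||H x - y||_2  subject to  x >= 0.

        The nonnegativity constraint is what makes the recovered
        spectrum sparse and peaky, i.e. resolution-enhanced."""
        x, _residual_norm = nnls(H, y)
        return x
    ```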

  14. Generalized uncertainty principle impact onto the black holes information flux and the sparsity of Hawking radiation

    NASA Astrophysics Data System (ADS)

    Alonso-Serrano, Ana; Dąbrowski, Mariusz P.; Gohar, Hussain

    2018-02-01

    We investigate the generalized uncertainty principle (GUP) corrections to the entropy content and the information flux of black holes, as well as the corrections to the sparsity of the Hawking radiation at the late stages of evaporation. We find that due to these quantum gravity motivated corrections, the entropy flow per particle reduces its value on the approach to the Planck scale due to a better accuracy in counting the number of microstates. We also show that the radiation flow is no longer sparse when the mass of a black hole approaches Planck mass which is not the case for non-GUP calculations.

  15. Convex relaxations of spectral sparsity for robust super-resolution and line spectrum estimation

    NASA Astrophysics Data System (ADS)

    Chi, Yuejie

    2017-08-01

    We consider recovering the amplitudes and locations of spikes in a point source signal from its low-pass spectrum that may suffer from missing data and arbitrary outliers. We first review and provide a unified view of several recently proposed convex relaxations that characterize and capitalize on the spectral sparsity of the point source signal without discretization, under the framework of atomic norms. Next we propose a new algorithm for the case where the spikes are known a priori to be positive, motivated by applications such as neural spike sorting and fluorescence microscopy imaging. Numerical experiments are provided to demonstrate the effectiveness of the proposed approach.
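
    For context, the atomic norm that underlies these relaxations is conventionally defined as below, where A is the atom set (for line spectrum estimation, unit-amplitude complex sinusoids); this is the standard definition, not a formula quoted from the paper.

    ```latex
    \|x\|_{\mathcal{A}} \;=\; \inf \Bigl\{ \sum_{k} c_{k} \;:\;
      x = \sum_{k} c_{k} a_{k},\;\; c_{k} \ge 0,\;\; a_{k} \in \mathcal{A} \Bigr\}
    ```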

  16. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
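
    A representative member of this class is sketched below; the paper develops a family of spectral penalties, of which the nuclear norm shown here is the best-known instance. Here ℓ is a loss function, z_i are vector covariates with coefficients γ, X_i is the matrix covariate, and σ_k(B) are the singular values of the matrix coefficient B.

    ```latex
    \min_{\gamma,\, B}\; \sum_{i=1}^{n}
      \ell\bigl(y_i,\; \gamma^{\top} z_i + \langle B,\, X_i \rangle\bigr)
      \;+\; \lambda \,\|B\|_{*},
    \qquad \|B\|_{*} = \sum_{k} \sigma_{k}(B).
    ```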

  17. Sparse Bayesian Inference of White Matter Fiber Orientations from Compressed Multi-resolution Diffusion MRI

    PubMed Central

    Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe

    2017-01-01

    The RubiX [1] algorithm combines high SNR characteristics of low resolution data with high spatial specificity of high resolution data, to extract microstructural tissue parameters from diffusion MRI. In this paper we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution is modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, therefore facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations. PMID:28845484

  18. Sparse Bayesian Inference of White Matter Fiber Orientations from Compressed Multi-resolution Diffusion MRI.

    PubMed

    Pisharady, Pramod Kumar; Duarte-Carvajalino, Julio M; Sotiropoulos, Stamatios N; Sapiro, Guillermo; Lenglet, Christophe

    2015-10-01

    The RubiX [1] algorithm combines high SNR characteristics of low resolution data with high spatial specificity of high resolution data, to extract microstructural tissue parameters from diffusion MRI. In this paper we focus on estimating crossing fiber orientations and introduce sparsity to the RubiX algorithm, making it suitable for reconstruction from compressed (under-sampled) data. We propose a sparse Bayesian algorithm for estimation of fiber orientations and volume fractions from compressed diffusion MRI. The data at high resolution is modeled using a parametric spherical deconvolution approach and represented using a dictionary created with the exponential decay components along different possible directions. Volume fractions of fibers along these orientations define the dictionary weights. The data at low resolution is modeled using a spatial partial volume representation. The proposed dictionary representation and sparsity priors consider the dependence between fiber orientations and the spatial redundancy in data representation. Our method exploits the sparsity of fiber orientations, therefore facilitating inference from under-sampled data. Experimental results show improved accuracy and decreased uncertainty in fiber orientation estimates. For under-sampled data, the proposed method is also shown to produce more robust estimates of fiber orientations.

  19. Leveraging tagging and rating for recommendation: RMF meets weighted diffusion on tripartite graphs

    NASA Astrophysics Data System (ADS)

    Li, Jianguo; Tang, Yong; Chen, Jiemin

    2017-10-01

    Recommender systems (RSs) have been widely exploited to address the information overload problem. However, their performance is still limited by the extreme sparsity of the rating data. With the popularity of Web 2.0, social tagging systems provide additional external information that can improve recommendation accuracy. Although some existing approaches combine matrix factorization models with tag co-occurrence and tag context, they neglect the issue of tag sparsity, which also leads to inaccurate recommendations. Consequently, in this paper, we propose a novel hybrid collaborative filtering model named WUDiff_RMF, which improves the regularized matrix factorization (RMF) model by integrating a Weighted User-Diffusion-based CF algorithm (WUDiff) that obtains information about similar users from the weighted tripartite user-item-tag graph. This model aims to capture the degree correlation of the user-item-tag tripartite network to enhance recommendation performance. Experiments conducted on four real-world datasets demonstrate that our approach performs significantly better, in recommendation accuracy, than widely used methods. Moreover, the results show that WUDiff_RMF can alleviate data sparsity, especially in circumstances where users have provided few ratings and few tags.
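
    For orientation, the RMF baseline that WUDiff_RMF extends is conventionally the regularized matrix factorization objective below, with P, Q the latent user and item factor matrices and Ω the set of observed ratings; this is the standard form, sketched here, and the paper adds the weighted user-diffusion similarity information from the tripartite graph on top of it.

    ```latex
    \min_{P,\,Q}\; \sum_{(u,i) \in \Omega} \bigl( r_{ui} - p_u^{\top} q_i \bigr)^{2}
      \;+\; \lambda \bigl( \|P\|_F^{2} + \|Q\|_F^{2} \bigr)
    ```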

  20. Hyperspectral imagery super-resolution by compressive sensing inspired dictionary learning and spatial-spectral regularization.

    PubMed

    Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui

    2015-01-19

    Due to the instrumental and imaging optics limitations, it is difficult to acquire high spatial resolution hyperspectral imagery (HSI). Super-resolution (SR) imagery aims at inferring high quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high resolution dictionary, we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and sensing matrix. Thus, a sparsity and incoherence restricted dictionary learning method is proposed to achieve higher efficiency sparse representation. Second, a variational regularization model combing a spatial sparsity regularization term and a new local spectral similarity preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high spatial resolution HSI reconstructed by the proposed method outperforms reconstructed results by other well-known methods in terms of both objective measurements and visual evaluation.

  1. Reconstruction-of-difference (RoD) imaging for cone-beam CT neuro-angiography

    NASA Astrophysics Data System (ADS)

    Wu, P.; Stayman, J. W.; Mow, M.; Zbijewski, W.; Sisniega, A.; Aygun, N.; Stevens, R.; Foos, D.; Wang, X.; Siewerdsen, J. H.

    2018-06-01

    Timely evaluation of neurovasculature via CT angiography (CTA) is critical to the detection of pathology such as ischemic stroke. Cone-beam CTA (CBCT-A) systems provide potential advantages in the timely use at the point-of-care, although challenges of a relatively slow gantry rotation speed introduce tradeoffs among image quality, data consistency and data sparsity. This work describes and evaluates a new reconstruction-of-difference (RoD) approach that is robust to such challenges. A fast digital simulation framework was developed to test the performance of the RoD over standard reference reconstruction methods such as filtered back-projection (FBP) and penalized likelihood (PL) over a broad range of imaging conditions, grouped into three scenarios to test the trade-off between data consistency, data sparsity and peak contrast. Two experiments were also conducted using a CBCT prototype and an anthropomorphic neurovascular phantom to test the simulation findings in real data. Performance was evaluated primarily in terms of normalized root mean square error (NRMSE) in comparison to truth, with reconstruction parameters chosen to optimize performance in each case to ensure fair comparison. The RoD approach reduced NRMSE in reconstructed images by up to 50%-53% compared to FBP and up to 29%-31% compared to PL for each scenario. Scan protocols well suited to the RoD approach were identified that balance tradeoffs among data consistency, sparsity and peak contrast: for example, a CBCT-A scan with 128 projections acquired in 8.5 s over a 180° + fan angle half-scan for a time attenuation curve with ~8.5 s time-to-peak and 600 HU peak contrast. With imaging conditions such as the simulation scenarios of fixed data sparsity (i.e. varying levels of data consistency and peak contrast), the experiments confirmed the reduction of NRMSE by 34% and 17% compared to FBP and PL, respectively. The RoD approach demonstrated superior performance in 3D angiography compared to FBP and PL in all simulation and physical experiments, suggesting the possibility of CBCT-A on low-cost, mobile imaging platforms suitable to the point-of-care. The algorithm demonstrated accurate reconstruction with a high degree of robustness against data sparsity and inconsistency.

  2. Identification of Scattering Mechanisms from Measured Impulse Response Signatures of Several Conducting Objects.

    DTIC Science & Technology

    1984-02-01

    [List-of-figures residue: measured impulse response of a conducting sphere compared to the inverse transform of the exact solution; measured impulse response of a conducting 2:1 right circular cylinder with...] frequency domain. This is equivalent to multiplication in the time domain by the inverse transform of w(n), which is shown in Figure 3-1 for N=15. The... equivalent pulse width from 0.066 T for the rectangular window to 0.10 T for the Hanning window. The inverse transform of the Hanning window is shown

  3. Effects of mutations within the SV40 large T antigen ATPase/p53 binding domain on viral replication and transformation.

    PubMed

    Peden, K W; Srinivasan, A; Vartikar, J V; Pipas, J M

    1998-01-01

    The simian virus 40 (SV40) large T antigen is a 708 amino-acid protein possessing multiple biochemical activities that play distinct roles in productive infection or virus-induced cell transformation. The carboxy-terminal portion of T antigen includes a domain that carries the nucleotide binding and ATPase activities of the protein, as well as sequences required for T antigen to associate with the cellular tumor suppressor p53. Consequently this domain functions both in viral DNA replication and cellular transformation. We have generated a collection of SV40 mutants with amino-acid deletions, insertions or substitutions in specific domains of the protein. Here we report the properties of nine mutants with single or multiple substitutions between amino acids 402 and 430, a region thought to be important for both the p53 binding and ATPase functions. The mutants were examined for the ability to produce infectious progeny virions, replicate viral DNA in vivo, perform in trans complementation tests, and transform established cell lines. Two of the mutants exhibited a wild-type phenotype in all these tests. The remaining seven mutants were defective for plaque formation and viral DNA replication, but in each case these defects could be complemented by a wild-type T antigen supplied in trans. One of these replication-defective mutants efficiently transformed the REF52 and C3H10T1/2 cell lines as assessed by the dense-focus assay. The remaining six mutants were defective for transforming REF52 cells and transformed the C3H10T1/2 line with a reduced efficiency. The ability of mutant T antigen to transform REF52 cells correlated with their ability to induce increased levels of p53.

  4. Transformation by oncogenic mutants and ligand-dependent activation of FLT3 wild-type requires the tyrosine residues 589 and 591.

    PubMed

    Vempati, Sridhar; Reindl, Carola; Wolf, Ulla; Kern, Ruth; Petropoulos, Konstantin; Naidu, Vegi M; Buske, Christian; Hiddemann, Wolfgang; Kohl, Tobias M; Spiekermann, Karsten

    2008-07-15

    Mutations in the receptor tyrosine kinase FLT3 are found in up to 30% of acute myelogenous leukemia patients and are associated with an inferior prognosis. In this study, we characterized critical tyrosine residues responsible for the transforming potential of active FLT3-receptor mutants and ligand-dependent activation of FLT3-WT. We performed a detailed structure-function analysis of putative autophosphorylation tyrosine residues in the FLT3-D835Y tyrosine kinase domain (TKD) mutant. All tyrosine residues in the juxtamembrane domain (Y566, Y572, Y589, Y591, Y597, and Y599), interkinase domain (Y726 and Y768), and COOH-terminal domain (Y955 and Y969) of the FLT3-D835Y construct were successively mutated to phenylalanine and the transforming activity of these mutants was analyzed in interleukin-3-dependent Ba/F3 cells. Tyrosine residues critical for the transforming potential of FLT3-D835Y were also analyzed in FLT3 internal tandem duplication mutants (FLT3-ITD) and the FLT3 wild-type (FLT3-WT) receptor. The substitution of the tyrosine residues by phenylalanine in the juxtamembrane, interkinase, and COOH-terminal domains resulted in a complete loss of the transforming potential of FLT3-D835Y-expressing cells, which can be attributed to a significant reduction of signal transducer and activator of transcription 5 (STAT5) phosphorylation at the molecular level. Reintroduction of single tyrosine residues revealed the critical role of Y589 and Y591 in reconstituting interleukin-3-independent growth of FLT3-TKD-expressing cells. Combined mutation of Y589 and Y591 to phenylalanine also abrogated ligand-dependent proliferation of FLT3-WT and the transforming potential of FLT3-ITD, with a subsequent abrogation of STAT5 phosphorylation. We identified two tyrosine residues, Y589 and Y591, in the juxtamembrane domain that are critical for the ligand-dependent activation of FLT3-WT and the transforming potential of oncogenic FLT3 mutants.

  5. Estimation of spectral kurtosis

    NASA Astrophysics Data System (ADS)

    Sutawanir

    2017-03-01

    Rolling bearings are the most important elements in rotating machinery. Bearings frequently fall out of service for various reasons: heavy loads, unsuitable lubrication, or ineffective sealing. Bearing faults may cause a decrease in performance. Analysis of bearing vibration signals has attracted attention in the field of monitoring and fault diagnosis, as these signals carry rich information for the early detection of bearing failures. Spectral kurtosis, SK, is a frequency-domain parameter indicating how the impulsiveness of a signal varies with frequency. Faults in rolling bearings give rise to a series of short impulse responses as the rolling elements strike faults, which makes SK potentially useful for determining frequency bands dominated by bearing fault signals. SK can provide a measure of the distance of the analyzed bearing from a healthy one, and it supplies information beyond that given by the power spectral density (psd). This paper explores the estimation of spectral kurtosis using the short-time Fourier transform, i.e., the spectrogram. The estimation of SK is similar to the estimation of the psd; the estimator considered here is a model-free, plug-in estimator. Some numerical simulation studies are discussed to support the methodology, and the spectral kurtosis of some stationary signals is obtained analytically and used in the simulation study. Kurtosis in the time domain has been a popular tool for detecting non-normality; spectral kurtosis extends kurtosis to the frequency domain. The relationship between time-domain and frequency-domain analysis is established through the power spectrum-autocovariance Fourier-transform pair, and the Fourier transform is the main tool for estimation in the frequency domain, with the power spectral density estimated through the periodogram. In this paper, the short-time Fourier transform estimate of the spectral kurtosis is reviewed, and bearing faults (inner ring and outer ring) are simulated. The bearing response, power spectrum, and spectral kurtosis are plotted to visualize the pattern of each fault. Keywords: frequency domain, Fourier transform, spectral kurtosis, bearing fault
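
    A plug-in estimate of this kind can be sketched in a few lines from the spectrogram; the normalization below (SK near 0 for stationary Gaussian noise) is one common convention, and the paper's exact estimator may differ in detail.

    ```python
    import numpy as np
    from scipy.signal import stft

    def spectral_kurtosis(x, fs, nperseg=256):
        """SK(f) = E|X(f,t)|^4 / (E|X(f,t)|^2)^2 - 2 from the STFT.

        Rises in frequency bands dominated by impulsive (fault) energy."""
        f, _t, X = stft(x, fs=fs, nperseg=nperseg)
        s2 = np.mean(np.abs(X) ** 2, axis=1)
        s4 = np.mean(np.abs(X) ** 4, axis=1)
        return f, s4 / s2 ** 2 - 2.0
    ```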

  6. Simultaneous storage of medical images in the spatial and frequency domain: A comparative study

    PubMed Central

    Nayak, Jagadish; Bhat, P Subbanna; Acharya U, Rajendra; UC, Niranjan

    2004-01-01

    Background Digital watermarking is a technique for hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. Methods The patient information is encrypted before interleaving with the images to ensure greater security. The bio-signals are compressed and subsequently interleaved with the image. This interleaving is carried out in the spatial domain and the frequency domain. The performance of interleaving in the spatial, Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) coefficients is studied. Differential pulse code modulation (DPCM) is employed for data compression as well as encryption, and results are tabulated for a specific example. Results The results show that the process does not affect picture quality, which is attributed to the fact that a change in the LSB of a pixel changes its brightness by 1 part in 256. Spatial and DFT domain interleaving gave much lower %NRMSE than DCT and DWT domain interleaving. Conclusion For spatial domain interleaving, the %NRMSE was less than 0.25% for 8-bit encoded pixel intensity. Among the frequency domain interleaving methods, DFT was found to be very efficient. PMID:15180899

  7. Method and apparatus for wavefront sensing

    DOEpatents

    Bahk, Seung-Whan

    2016-08-23

    A method of measuring characteristics of a wavefront of an incident beam includes obtaining an interferogram associated with the incident beam passing through a transmission mask and Fourier transforming the interferogram to provide a frequency domain interferogram. The method also includes selecting a subset of harmonics from the frequency domain interferogram, individually inverse Fourier transforming each of the subset of harmonics to provide a set of spatial domain harmonics, and extracting a phase profile from each of the set of spatial domain harmonics. The method further includes removing phase discontinuities in the phase profile, rotating the phase profile, and reconstructing a phase front of the wavefront of the incident beam.
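
    The harmonic-selection step can be sketched as below, assuming the (row, col) offset of the chosen harmonic from the spectrum's center is known; the window size and the subsequent phase unwrapping are application details.

    ```python
    import numpy as np

    def harmonic_phase(interferogram, carrier_shift, half_width=8):
        """Isolate one harmonic in the frequency domain and return its
        wrapped spatial phase (unwrapping is a separate step)."""
        F = np.fft.fftshift(np.fft.fft2(interferogram))
        n_r, n_c = F.shape
        r0 = n_r // 2 + carrier_shift[0]
        c0 = n_c // 2 + carrier_shift[1]
        mask = np.zeros_like(F)
        mask[r0 - half_width:r0 + half_width,
             c0 - half_width:c0 + half_width] = 1.0    # window one harmonic
        harmonic = np.fft.ifft2(np.fft.ifftshift(F * mask))
        return np.angle(harmonic)
    ```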

  8. Phanerozoic geological evolution of the Equatorial Atlantic domain

    NASA Astrophysics Data System (ADS)

    Basile, Christophe; Mascle, Jean; Guiraud, René

    2005-10-01

    The Phanerozoic geological evolution of the Equatorial Atlantic domain has been controlled, since the end of the Early Cretaceous, by the Romanche and Saint Paul transform faults. These faults did not follow the Pan-African shear zones, but were superimposed on Palaeozoic basins. From the Neocomian to the Barremian, the Central Atlantic rift propagated southward into the Cassiporé and Marajó basins, and the South Atlantic rift propagated northward into the Potiguar and Benue basins. During Aptian times, the Equatorial Atlantic transform domain appeared as a transfer zone between the northward-propagating tip of the South Atlantic and the Central Atlantic. Between the transform faults, oceanic accretion started during the Late Aptian in small divergent segments, from south to north: Benin-Mundaú, deep Ivorian basin-Barreirinhas, Liberia-Cassiporé. From the Late Aptian to the Late Albian, the Togo-Ghana-Ceará basins appeared along the Romanche transform fault, and the Côte d'Ivoire-Parà-Maranhão basins along the Saint Paul transform fault; both sets of basins subsided rapidly in intra-continental settings. During the Late Cretaceous these basins became active transform continental margins, and they have been passive margins since Santonian times. At the same time, the continental edge was uplifted, leading either to significant erosion on the shelf or to marginal ridges parallel to the transform faults in deeper settings.

  9. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.

  10. Mutation of the Salt Bridge-forming Residues in the ETV6-SAM Domain Interface Blocks ETV6-NTRK3-induced Cellular Transformation*

    PubMed Central

    Cetinbas, Naniye; Huang-Hobbs, Helen; Tognon, Cristina; Leprivier, Gabriel; An, Jianghong; McKinney, Steven; Bowden, Mary; Chow, Connie; Gleave, Martin; McIntosh, Lawrence P.; Sorensen, Poul H.

    2013-01-01

    The ETV6-NTRK3 (EN) chimeric oncogene is expressed in diverse tumor types. EN is generated by a t(12;15) translocation, which fuses the N-terminal SAM (sterile α-motif) domain of the ETV6 (or TEL) transcription factor to the C-terminal PTK (protein-tyrosine kinase) domain of the neurotrophin-3 receptor NTRK3. SAM domain-mediated polymerization of EN leads to constitutive activation of the PTK domain and constitutive signaling of the Ras-MAPK and PI3K-Akt pathways, which are essential for EN oncogenesis. Here we show through complementary biophysical and cellular biological techniques that mutation of Lys-99, which participates in a salt bridge at the SAM polymer interface, reduces self-association of the isolated SAM domain as well as high molecular mass complex formation of EN and abrogates the transformation activity of EN. We also show that mutation of Asp-101, the intermolecular salt bridge partner of Lys-99, similarly blocks transformation of NIH3T3 cells by EN, reduces EN tyrosine phosphorylation, inhibits Akt and Mek1/2 signaling downstream of EN, and abolishes tumor formation in nude mice. In contrast, mutations of Glu-100 and Arg-103, residues in the vicinity of the interdomain Lys-99–Asp-101 salt bridge, have little or no effect on these oncogenic characteristics of EN. Our results underscore the importance of specific electrostatic interactions for SAM polymerization and EN transformation. PMID:23798677

  11. Mutation of the salt bridge-forming residues in the ETV6-SAM domain interface blocks ETV6-NTRK3-induced cellular transformation.

    PubMed

    Cetinbas, Naniye; Huang-Hobbs, Helen; Tognon, Cristina; Leprivier, Gabriel; An, Jianghong; McKinney, Steven; Bowden, Mary; Chow, Connie; Gleave, Martin; McIntosh, Lawrence P; Sorensen, Poul H

    2013-09-27

    The ETV6-NTRK3 (EN) chimeric oncogene is expressed in diverse tumor types. EN is generated by a t(12;15) translocation, which fuses the N-terminal SAM (sterile α-motif) domain of the ETV6 (or TEL) transcription factor to the C-terminal PTK (protein-tyrosine kinase) domain of the neurotrophin-3 receptor NTRK3. SAM domain-mediated polymerization of EN leads to constitutive activation of the PTK domain and constitutive signaling of the Ras-MAPK and PI3K-Akt pathways, which are essential for EN oncogenesis. Here we show through complementary biophysical and cellular biological techniques that mutation of Lys-99, which participates in a salt bridge at the SAM polymer interface, reduces self-association of the isolated SAM domain as well as high molecular mass complex formation of EN and abrogates the transformation activity of EN. We also show that mutation of Asp-101, the intermolecular salt bridge partner of Lys-99, similarly blocks transformation of NIH3T3 cells by EN, reduces EN tyrosine phosphorylation, inhibits Akt and Mek1/2 signaling downstream of EN, and abolishes tumor formation in nude mice. In contrast, mutations of Glu-100 and Arg-103, residues in the vicinity of the interdomain Lys-99-Asp-101 salt bridge, have little or no effect on these oncogenic characteristics of EN. Our results underscore the importance of specific electrostatic interactions for SAM polymerization and EN transformation.

  12. Immittance Data Validation by Kramers‐Kronig Relations – Derivation and Implications

    PubMed Central

    2017-01-01

    Abstract Explicitly based on causality, linearity (superposition) and stability (time invariance), and implicitly on continuity (consistency), finiteness (convergence) and uniqueness (single-valuedness) in the time domain, Kramers-Kronig (KK) integral transform (KKT) relations for immittances are derived as pure mathematical constructs in the complex frequency domain using the two-sided (bilateral) Laplace integral transform (LT), reduced to the Fourier domain for sufficiently rapidly exponentially decaying, bounded immittances. Novel anti-KK relations are also derived to distinguish LTI (linear, time-invariant) systems from non-linear, unstable and acausal systems. All relations can be used to test KK transformability on the LTI principles of linearity, stability and causality of measured and model data by Fourier transform (FT) in immittance spectroscopy (IS). Integral transform relations are also provided to estimate (conjugate) immittances at zero and infinite frequency, which are particularly useful for normalising and comparing data. Important implications for IS are presented, and suggestions for consistent data analysis are made that apply likewise to complex-valued quantities in many fields of engineering and the natural sciences. PMID:29577007
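
    For reference, the classical KK pair for an impedance Z(ω) = Z′(ω) + iZ″(ω), in the form commonly used for immittance data validation, is given below (standard relations, not quoted from the paper; P denotes the Cauchy principal value).

    ```latex
    Z'(\omega) - Z'(\infty) \;=\; \frac{2}{\pi}\, \mathrm{P}\!\!\int_{0}^{\infty}
      \frac{x\, Z''(x) - \omega\, Z''(\omega)}{x^{2} - \omega^{2}}\, \mathrm{d}x,
    \qquad
    Z''(\omega) \;=\; -\,\frac{2\omega}{\pi}\, \mathrm{P}\!\!\int_{0}^{\infty}
      \frac{Z'(x) - Z'(\omega)}{x^{2} - \omega^{2}}\, \mathrm{d}x .
    ```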

  13. Recent progress in synchrotron-based frequency-domain Fourier-transform THz-EPR.

    PubMed

    Nehrkorn, Joscha; Holldack, Karsten; Bittl, Robert; Schnegg, Alexander

    2017-07-01

    We describe frequency-domain Fourier-transform THz-EPR as a method to assign spin-coupling parameters of high-spin (S>1/2) systems with very large zero-field splittings. The instrumental foundations of synchrotron-based FD-FT THz-EPR are presented, alongside a discussion of frequency-domain EPR simulation routines. The capabilities of this approach are demonstrated for selected mono- and multinuclear HS systems. Finally, we discuss remaining challenges and give an outlook on the future prospects of the technique. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. A Single Amino Acid Substitution in the v-Eyk Intracellular Domain Results in Activation of Stat3 and Enhances Cellular Transformation

    PubMed Central

    Besser, Daniel; Bromberg, Jacqueline F.; Darnell, James E.; Hanafusa, Hidesaburo

    1999-01-01

    The receptor tyrosine kinase Eyk, a member of the Axl/Tyro3 subfamily, activates the STAT pathway and transforms cells when constitutively activated. Here, we compared the potentials of the intracellular domains of Eyk molecules derived from c-Eyk and v-Eyk to transform rat 3Y1 fibroblasts. The v-Eyk molecule induced higher numbers of transformants in soft agar and stronger activation of Stat3; levels of Stat1 activation by the two Eyk molecules were similar. A mutation in the sequence Y933VPL, present in c-Eyk, to the v-Eyk sequence Y933VPQ led to increased activation of Stat3 and increased transformation efficiency. However, altering another sequence, Y862VNT, present in both Eyk molecules to F862VNT markedly decreased transformation without impairing Stat3 activation. These results indicate that activation of Stat3 enhances transformation efficiency and cooperates with another pathway to induce transformation. PMID:9891073

  15. The whole number axis integer linear transformation reversible information hiding algorithm on wavelet domain

    NASA Astrophysics Data System (ADS)

    Jiang, Zhuo; Xie, Chengjun

    2013-12-01

    This paper improves a reversible integer linear transform algorithm on the finite interval [0, 255], extending it to realize a reversible integer linear transform on the whole number axis while shielding the data LSB (least significant bit). Firstly, the method uses an integer wavelet transform based on the lifting scheme to transform the original image and selects the transformed high-frequency areas as the information hiding area; the high-frequency coefficient blocks are then transformed in an integer linear way and the secret information is embedded in the LSB of each coefficient. To extract the data bits and recover the host image, a similar reverse procedure can be conducted, and the original host image is losslessly recovered. Simulation results show that this method has good secrecy and concealment when the CDF(m, n) and DD(m, n) series of wavelet transforms are used. The method can be applied to information security domains such as medicine, law and the military.
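
    The reversibility that the scheme relies on can be illustrated with the simplest lifting-based integer wavelet, the Haar transform (the paper uses the CDF(m,n) and DD(m,n) families, for which the lifting steps differ but the integer-reversibility argument is the same).

    ```python
    import numpy as np

    def int_haar_forward(x):
        """One level of the integer-to-integer Haar transform via lifting.
        Assumes an even-length signal; exactly invertible, so LSB embedding
        in the detail band d can be undone losslessly."""
        x = np.asarray(x, dtype=np.int64)
        s, d = x[0::2].copy(), x[1::2].copy()
        d = d - s                  # predict step
        s = s + (d >> 1)           # update step (arithmetic shift = floor(d/2))
        return s, d

    def int_haar_inverse(s, d):
        s = s - (d >> 1)
        d = d + s
        x = np.empty(s.size + d.size, dtype=np.int64)
        x[0::2], x[1::2] = s, d
        return x
    ```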

  16. Microcomputer Simulation of a Fourier Approach to Optical Wave Propagation

    DTIC Science & Technology

    1992-06-01

    [List-of-figures residue from the report front matter; the recoverable captions are: SHFT OUTPUT1 (inverse transform of the product of the Bessel filter and the transformed input), SHFT OUTPUT2 (inverse transform of the product of the derivative filter and the transformed input), SHFT OUTPUT (sum of SHFT OUTPUT1 and ...), and SHFT OUTPUT1 at time slice 1.]

  17. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Renliang, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu; Dogandžić, Aleksandar, E-mail: Venliang@iastate.edu, E-mail: ald@iastate.edu

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  18. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science.

    PubMed

    Mocanu, Decebal Constantin; Mocanu, Elena; Stone, Peter; Nguyen, Phuong H; Gibescu, Madeleine; Liotta, Antonio

    2018-06-19

    Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erdős-Rényi random graph) of two consecutive layers of neurons into a scale-free topology during learning. Our method replaces artificial neural networks' fully-connected layers with sparse ones before training, quadratically reducing the number of parameters with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.
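
    The two core mechanisms, Erdős-Rényi sparse initialization and prune-regrow evolution, can be sketched as below (hypothetical function names; the published algorithm applies this per layer inside ordinary SGD training).

    ```python
    import numpy as np

    def erdos_renyi_mask(n_in, n_out, epsilon=20, rng=np.random.default_rng(0)):
        """Sparse mask with density ~ epsilon*(n_in + n_out)/(n_in*n_out)."""
        p = epsilon * (n_in + n_out) / (n_in * n_out)
        return (rng.random((n_in, n_out)) < p).astype(np.float64)

    def evolve_mask(w, mask, zeta=0.3, rng=np.random.default_rng(0)):
        """Prune the zeta fraction of smallest-magnitude active weights,
        then regrow the same number of connections at random positions."""
        active = np.flatnonzero(mask)
        k = int(zeta * active.size)
        weakest = active[np.argsort(np.abs(w.ravel()[active]))[:k]]
        mask.ravel()[weakest] = 0.0
        inactive = np.flatnonzero(mask.ravel() == 0.0)
        mask.ravel()[rng.choice(inactive, size=k, replace=False)] = 1.0
        return mask
    ```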

  19. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel-domain information authentication based on classical DRPE with a holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of the optical information encryption and authentication system.
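
    For orientation, the textbook 4f double random phase encoding step looks like the sketch below; the paper uses a simplified, non-interferometric variant plus phase retrieval with the QR code, which is not reproduced here.

    ```python
    import numpy as np

    def drpe_encrypt(img, rng=np.random.default_rng(0)):
        """Classical DRPE: random phase mask at the input plane, a second
        one at the Fourier plane; img is a real-valued amplitude image."""
        phi1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane mask
        phi2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane mask
        return np.fft.ifft2(np.fft.fft2(img * phi1) * phi2)
    ```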

  20. Mapping High Dimensional Sparse Customer Requirements into Product Configurations

    NASA Astrophysics Data System (ADS)

    Jiao, Yao; Yang, Yu; Zhang, Hongshan

    2017-10-01

    Mapping customer requirements into product configurations is a crucial step in product design, but customers express their needs ambiguously and locally owing to a lack of domain knowledge. The data mining process for customer requirements may therefore yield fragmentary information with high-dimensional sparsity, making the mapping procedure uncertain and complex. Expert judgment is widely applied against that background, since it imposes no formal requirements of systematic or structured data; however, there are concerns about its repeatability and bias. In this study, an integrated method combining an adjusted Locally Linear Embedding (LLE) and a Naïve Bayes (NB) classifier is proposed to map high-dimensional sparse customer requirements to product configurations. The integrated method adjusts classical LLE to preprocess the high-dimensional sparse dataset so that it satisfies the prerequisites of NB for classifying customer requirements into the corresponding product configurations. Compared with expert judgment, the adjusted LLE with NB performs much better in a real-world tablet PC design case, in both accuracy and robustness.
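
    With standard library pieces, the pipeline can be sketched as below; the paper adjusts classical LLE for high-dimensional sparse inputs, so plain LocallyLinearEmbedding is used here only as a stand-in, and X/y are hypothetical requirement and configuration data.

    ```python
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    # Reduce the sparse high-dimensional requirements to a few coordinates,
    # then classify them into product configurations.
    pipeline = make_pipeline(
        LocallyLinearEmbedding(n_neighbors=10, n_components=5),
        GaussianNB(),
    )
    # pipeline.fit(X_train, y_train)
    # configs = pipeline.predict(X_new)
    ```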

  1. Fast Fourier single-pixel imaging via binary illumination.

    PubMed

    Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang

    2017-09-20

    Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
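
    The error-diffusion stage can be sketched with the classical Floyd-Steinberg kernel below (the paper combines upsampling with dithering; the kernel choice and scan order are implementation details).

    ```python
    import numpy as np

    def floyd_steinberg(gray):
        """Binarize a grayscale pattern in [0, 1], diffusing the quantization
        error onto unvisited neighbors so the binary pattern matches the
        grayscale one on local average."""
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                img[y, x] = new
                err = old - new
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h and x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:
                    img[y + 1, x] += err * 5 / 16
                if y + 1 < h and x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
        return img.astype(np.uint8)
    ```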

  2. An efficient semi-supervised community detection framework in social networks.

    PubMed

    Li, Zhen; Gong, Yong; Pan, Zhisong; Hu, Guyu

    2017-01-01

    Community detection is an important task across a number of research fields including social science, biology, and physics. In the real world, topology information alone is often inadequate to accurately find community structure because of its sparsity and noise. Potentially useful prior information, such as pairwise constraints containing must-link and cannot-link constraints, can be obtained from domain knowledge in many applications. Thus, combining network topology with prior information to improve community detection accuracy is promising. Previous methods mainly utilize the must-link constraints but cannot make full use of cannot-link constraints. In this paper, we propose a semi-supervised community detection framework which can effectively incorporate both types of pairwise constraints into the detection process. In particular, must-link and cannot-link constraints are represented as positive and negative links, and we encode them by adding different graph regularization terms that penalize closeness of the nodes. Experiments on multiple real-world datasets show that the proposed framework significantly improves the accuracy of community detection.

  3. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.
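
    For reference, the errata decoding described above can be summarized by the key equation in one common textbook form (a sketch; notation, degree bounds, and stopping conditions vary across presentations). With the Forney syndrome polynomial $T(x)$ and the erasure locator polynomial $\Gamma(x)$ supplying the initial conditions, the Euclidean algorithm produces polynomials $\Lambda(x)$ and $\Omega(x)$ satisfying

        \Lambda(x)\, T(x) \equiv \Omega(x) \pmod{x^{d-1}},

    after which the errata locator polynomial is $\Psi(x) = \Lambda(x)\,\Gamma(x)$, $\Omega(x)$ serves as the errata evaluator polynomial, and the error and erasure magnitudes follow from Forney's formula.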

  4. Signal processing using sparse derivatives with applications to chromatograms and ECG

    NASA Astrophysics Data System (ADS)

    Ning, Xiaoran

    In this thesis, we investigate the sparsity that exists in the derivative domain. In particular, we focus on signals that possess up to Mth-order (M > 0) sparse derivatives. Effort is devoted to formulating proper penalty functions and optimization problems that capture properties related to sparse derivatives, and to searching for fast, computationally efficient solvers. The effectiveness of these algorithms is demonstrated in two real-world applications. In the first application, we provide an algorithm which jointly addresses the problems of chromatogram baseline correction and noise reduction. The series of chromatogram peaks are modeled as sparse with sparse derivatives, and the baseline is modeled as a low-pass signal. A convex optimization problem is formulated so as to encapsulate these non-parametric models. To account for the positivity of chromatogram peaks, an asymmetric penalty function is utilized together with symmetric penalty functions. A robust, computationally efficient, iterative algorithm is developed that is guaranteed to converge to the unique optimal solution. The approach, termed Baseline Estimation And Denoising with Sparsity (BEADS), is evaluated and compared with two state-of-the-art methods using both simulated and real chromatogram data, with promising results. In the second application, a novel electrocardiography (ECG) enhancement algorithm, also based on sparse derivatives, is designed. In real medical environments, ECG signals are often contaminated by various kinds of noise or artifacts, for example morphological changes due to motion artifact and non-stationary noise due to muscular contraction (EMG). Some of these contaminations severely affect the usefulness of ECG signals, especially when computer-aided algorithms are utilized. By solving the proposed convex l1 optimization problem, artifacts are reduced by modeling the clean ECG signal as a sum of two signals whose second- and third-order derivatives (differences), respectively, are sparse. Finally, the algorithm is applied to a QRS detection system and validated on the MIT-BIH Arrhythmia database (109,452 annotations), resulting in a sensitivity of Se = 99.87% and a positive predictivity of +P = 99.88%.
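
    The class of convex objectives investigated here can be sketched in a generic form (an illustration assuming standard notation; the actual BEADS and ECG formulations additionally include asymmetric penalties and a low-pass baseline model):

        \hat{x} = \arg\min_{x}\ \tfrac{1}{2}\|y - x\|_2^2 + \sum_{m=1}^{M} \lambda_m \|D^m x\|_1,

    where $D^m$ is the $m$-th order difference matrix. For the ECG application, the model becomes $y \approx x_1 + x_2 + w$ with penalties $\lambda_1 \|D^2 x_1\|_1 + \lambda_2 \|D^3 x_2\|_1$, reflecting the sparsity of the second- and third-order differences of the two signal components.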

  5. Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion

    PubMed Central

    Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza

    2013-01-01

    Purpose To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution, and improved coverage. In this study, we propose a novel compressed sensing based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction, and sparsity regularization using a temporal principal-component (pc) basis, as well as zerofilled data in multi-slice 2D and 3D CMR perfusion. Qualitative image scores are used (1=poor, 4=excellent) to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. On 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results The proposed technique results in images that are superior in terms of spatial and temporal blurring compared to the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement compared to other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zerofilled). Signal intensity curves indicate similar dynamics of uptake between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging. Conclusion The proposed reconstruction utilizes sparsity regularization based on localized information in both spatial and temporal domains for highly-accelerated CMR perfusion with potential utility in free-breathing 3D acquisitions. PMID:24123058

  6. Group sparse multiview patch alignment framework with view consistency for image classification.

    PubMed

    Gui, Jie; Tao, Dacheng; Sun, Zhenan; Luo, Yong; You, Xinge; Tang, Yuan Yan

    2014-07-01

    No single feature can satisfactorily characterize the semantic concepts of an image. Multiview learning aims to unify different kinds of features to produce a consensual and efficient representation. This paper redefines part optimization in the patch alignment framework (PAF) and develops a group sparse multiview patch alignment framework (GSM-PAF). The new part optimization considers not only the complementary properties of different views, but also view consistency. In particular, view consistency models the correlations between all possible combinations of any two kinds of view. In contrast to conventional dimensionality reduction algorithms that perform feature extraction and feature selection independently, GSM-PAF enjoys joint feature extraction and feature selection by exploiting the l(2,1)-norm on the projection matrix to achieve row sparsity, which leads to the simultaneous selection of relevant features and learning of the transformation, and thus makes the algorithm more discriminative. Experiments on two real-world image data sets demonstrate the effectiveness of GSM-PAF for image classification.

  7. Graded porous inorganic materials derived from self-assembled block copolymer templates.

    PubMed

    Gu, Yibei; Werner, Jörg G; Dorin, Rachel M; Robbins, Spencer W; Wiesner, Ulrich

    2015-03-19

    Graded porous inorganic materials directed by macromolecular self-assembly are expected to offer unique structural platforms relative to conventional porous inorganic materials. Their preparation to date remains a challenge, however, based on the sparsity of viable synthetic self-assembly pathways to control structural asymmetry. Here we demonstrate the fabrication of graded porous carbon, metal, and metal oxide film structures from self-assembled block copolymer templates by using various backfilling techniques in combination with thermal treatments for template removal and chemical transformations. The asymmetric inorganic structures display mesopores in the film top layers and a gradual pore size increase along the film normal in the macroporous sponge-like support structure. Substructure walls between macropores are themselves mesoporous, constituting a structural hierarchy in addition to the pore gradation. Final graded structures can be tailored by tuning casting conditions of self-assembled templates as well as the backfilling processes. We expect that these graded porous inorganic materials may find use in applications including separation, catalysis, biomedical implants, and energy conversion and storage.

  8. An ankyrin-like protein with transmembrane domains is specifically lost after oncogenic transformation of human fibroblasts.

    PubMed

    Jaquemar, D; Schenker, T; Trueb, B

    1999-03-12

    We have identified a novel transformation-sensitive mRNA, which is present in cultured fibroblasts but is lacking in SV40 transformed cells as well as in many mesenchymal tumor cell lines. The corresponding gene is located on human chromosome 8 in band 8q13. The open reading frame of the mRNA encodes a protein of 1119 amino acids forming two distinct domains. The N-terminal domain consists of 18 repeats that are related to the cytoskeletal protein ankyrin. The C-terminal domain contains six putative transmembrane segments that resemble many ion channels. This overall structure is reminiscent of TRP-like proteins that function as store-operated calcium channels. The novel protein with an Mr of 130 kDa is expressed at a very low level in human fibroblasts and at a moderate level in liposarcoma cells. Overexpression in eukaryotic cells appears to interfere with normal growth, suggesting that it might play a direct or indirect role in signal transduction and growth control.

  9. Solar signals detected within neutral atmospheric and ionospheric parameters

    NASA Astrophysics Data System (ADS)

    Koucka Knizova, Petra; Georgieva, Katya; Mosna, Zbysek; Kozubek, Michal; Kouba, Daniel; Kirov, Boian; Potuzníkova, Katerina; Boska, Josef

    2018-06-01

    We have analyzed time series of solar data together with atmospheric and ionospheric measurements for solar cycles 19 through 23, according to the availability of the particular data sets. For the analyses we have used long-term data with 1-day sampling. By means of the Continuous Wavelet Transform (CWT) we have found common spectral domains within the solar, atmospheric, and ionospheric time series. Further, we have identified intervals during which particular pairs of signals show high coherence by applying Wavelet Transform Coherence (WTC). Despite the wide oscillation ranges detected in the CWT spectra of the individual time series, we found only limited domains of high coherence by means of WTC. Wavelet Transform Coherence reveals significant high-power domains with stable phase difference, for periods of 1 month, 2 months, 6 months, 1 year, 2 years and 3-4 years, between pairs of solar data and atmospheric and ionospheric data. The occurrence of the detected domains varies significantly within a particular solar cycle (SC) and from one cycle to the next, indicating that solar forcing and/or atmospheric sensitivity change with time.

  10. Vortex Domain Structure in Ferroelectric Nanoplatelets and Control of its Transformation by Mechanical Load

    PubMed Central

    Chen, W. J.; Zheng, Yue; Wang, Biao

    2012-01-01

    Vortex domain patterns in low-dimensional ferroelectrics and multiferroics have been extensively studied with the aim of developing nanoscale functional devices. However, control of the vortex domain structure has not been investigated systematically. Taking into account the effects of inhomogeneous electromechanical fields, ambient temperature, surface and size, we demonstrate a significant influence of mechanical load on the vortex domain structure in ferroelectric nanoplatelets. Our analysis shows that the size and number of dipole vortices can be controlled by mechanical load, and yields rich temperature-stress (T-S) phase diagrams. Simulations also reveal that transformations between “vortex states” induced by the mechanical load are possible, which is entirely different from the conventional control of vortex domains by an electric field. These results are relevant to the application of vortex domain structures in ferroelectric nanodevices, and suggest a novel route to applications including memories, mechanical sensors and transducers. PMID:23150769

  11. [Spatial domain display for interference image dataset].

    PubMed

    Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia

    2011-11-01

    Visualization of imaging interferometer data is an urgent requirement for users engaged in image interpretation and information extraction. However, conventional research on visualization has focused only on spectral image datasets in the spectral domain, so the quick display of interference spectral image datasets remains a bottleneck in interference image processing. The conventional visualization of an interference dataset applies a classical spectral image display method after Fourier transformation. In the present paper, the problem of quickly viewing interferometric imagery in the image domain is addressed, and an algorithm is proposed that simplifies the matter. The Fourier transformation is an obstacle because its computation time is large, and the situation deteriorates further as the dataset size increases. The proposed algorithm, named interference weighted envelopes, frees the dataset from the transformation. The authors choose three interference weighted envelopes based respectively on the Fourier transformation, the features of the interference data, and the human visual system. Comparing the proposed method with the conventional ones, the results show a huge difference in display time.

  12. Improved design of subcritical and supercritical cascades using complex characteristics and boundary layer correction

    NASA Technical Reports Server (NTRS)

    Sanz, J. M.

    1983-01-01

    The method of complex characteristics and hodograph transformation for the design of shockless airfoils was extended to design supercritical cascades with high solidities and large inlet angles. This capability was achieved by introducing a conformal mapping of the hodograph domain onto an ellipse and expanding the solution in terms of Tchebycheff polynomials. A computer code was developed based on this idea. A number of airfoils designed with the code are presented, including various supercritical and subcritical compressor, turbine and propeller sections. The lag-entrainment method for the calculation of a turbulent boundary layer was incorporated into the inviscid design code, and the results of this calculation are shown for the airfoils described. The elliptic conformal transformation developed to map the hodograph domain onto an ellipse can be used to generate a conformal grid in the physical domain of a cascade of airfoils with open trailing edges with a single transformation. A grid generated with this transformation is shown for the Korn airfoil.

  13. Evidence for sparse synergies in grasping actions.

    PubMed

    Prevete, Roberto; Donnarumma, Francesco; d'Avella, Andrea; Pezzulo, Giovanni

    2018-01-12

    Converging evidence shows that hand-actions are controlled at the level of synergies and not single muscles. One intriguing aspect of synergy-based action-representation is that it may be intrinsically sparse and the same synergies can be shared across several distinct types of hand-actions. Here, adopting a normative angle, we consider three hypotheses for hand-action optimal-control: sparse-combination hypothesis (SC) - sparsity in the mapping between synergies and actions - i.e., actions implemented using a sparse combination of synergies; sparse-elements hypothesis (SE) - sparsity in synergy representation - i.e., the mapping between degrees-of-freedom (DoF) and synergies is sparse; double-sparsity hypothesis (DS) - a novel view combining both SC and SE - i.e., both the mapping between DoF and synergies and between synergies and actions are sparse, each action implementing a sparse combination of synergies (as in SC), each using a limited set of DoFs (as in SE). We evaluate these hypotheses using hand kinematic data from six human subjects performing nine different types of reach-to-grasp actions. Our results support DS, suggesting that the best action representation is based on a relatively large set of synergies, each involving a reduced number of degrees-of-freedom, and that distinct sets of synergies may be involved in distinct tasks.

  14. Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.

    2015-09-01

    In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption for multicast Cloud-RAN.
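
    The group-sparsity inducing norm referred to above has the standard mixed-norm form (a sketch; the per-RRH grouping and the weights are assumed here following common Cloud-RAN conventions):

        \Omega(\mathbf{w}) = \sum_{l=1}^{L} \omega_l \, \|\mathbf{w}_l\|_2,

    where $\mathbf{w}_l$ stacks the beamforming coefficients of the $l$-th remote radio head and $\omega_l > 0$ is its weight; driving an entire block $\|\mathbf{w}_l\|_2$ to zero corresponds to switching that RRH off, which is what links sparsity to power savings.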

  15. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure in which the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented and solved by alternating between group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of the proposed denoising and fusion approaches.

  16. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    NASA Astrophysics Data System (ADS)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates for addressing high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature, for instance Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules that achieve speed efficiency by eliminating irrelevant features early.
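
    The joint optimization described above is commonly written as follows (a sketch based on the standard Concomitant/Scaled Lasso formulation; the lower bound $\sigma_0$ is the smoothing device, and the exact constant conventions may differ from the paper):

        \min_{\beta \in \mathbb{R}^p,\ \sigma \ge \sigma_0}\ \frac{\|y - X\beta\|_2^2}{2 n \sigma} + \frac{\sigma}{2} + \lambda \|\beta\|_1,

    which is jointly convex in $(\beta, \sigma)$; constraining $\sigma \ge \sigma_0$ keeps the problem well conditioned when the residual norm approaches zero, which is precisely the numerical difficulty of the unsmoothed formulation.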

  17. Distributed Compressive CSIT Estimation and Feedback for FDD Multi-User Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Rao, Xiongbin; Lau, Vincent K. N.

    2014-06-01

    To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme in which the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through closed-form expressions we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.

  18. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

    In this paper, a new fast frequency domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is presented for determining the skew angle of a document image. First, image size reduction is performed using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method is evaluated on a large number of documents having skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method proves more efficient than the existing methods, and it works with typed and pictorial documents having different fonts and resolutions. It thereby overcomes the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with pictorial documents.
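
    The two-stage pipeline lends itself to a compact sketch (a generic illustration of the DWT-then-FFT idea, not the paper's code; in practice the mapping from the spectral peak to the signed skew angle needs calibration, e.g. to resolve the 90° ambiguity):

        import numpy as np
        import pywt

        def estimate_skew(image):
            """Estimate document skew by shrinking the image with a 2-D DWT
            and locating the dominant orientation in the FFT magnitude."""
            # Approximation coefficients halve each dimension: size reduction.
            cA, _ = pywt.dwt2(image.astype(float), 'haar')
            spec = np.abs(np.fft.fftshift(np.fft.fft2(cA)))
            cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
            spec[cy, cx] = 0.0  # suppress the DC component
            py, px = np.unravel_index(np.argmax(spec), spec.shape)
            # Text lines produce an oriented ridge through the spectrum
            # center whose angle tracks the skew of the document.
            return np.degrees(np.arctan2(py - cy, px - cx))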

  19. Transformed PANSS Factors Intended to Reduce Pseudospecificity Among Symptom Domains and Enhance Understanding of Symptom Change in Antipsychotic-Treated Patients With Schizophrenia

    PubMed Central

    Hopkins, Seth C; Ogirala, Ajay; Loebel, Antony; Koblan, Kenneth S

    2018-01-01

    Positive and Negative Syndrome Scale (PANSS) total score is the standard primary efficacy measure in acute treatment studies of schizophrenia. However, PANSS factors that have been derived from factor analytic approaches over the past several decades have uncertain clinical and regulatory status as they are, to varying degrees, intercorrelated. As a consequence of cross-factor correlations, the apparent improvement in key clinical domains (eg, negative symptoms, disorganized thinking/behavior) may largely be attributable to improvement in a related clinical domain, such as positive symptoms, a problem often referred to as pseudospecificity. Here, we analyzed correlations among PANSS items, at baseline and change post-baseline, in a pooled sample of 5 placebo-controlled clinical trials (N = 1710 patients), using clustering and factor analysis to identify an uncorrelated PANSS score matrix (UPSM) that minimized the degree of correlation between each resulting transformed PANSS factor. The transformed PANSS factors corresponded well with discrete symptom domains described by prior factor analyses, but between-factor change-score correlations were markedly lower. We then used the UPSM to transform PANSS data from 4657 unique schizophrenia patients included in 12 additional lurasidone clinical trials. The results confirmed that transformed PANSS factors retained a high degree of specificity, thus validating that low between-factor correlations are a reliable property of the UPSM when transforming PANSS data from a variety of clinical trial data sets. These results provide a more robust understanding of the structure of symptom change in schizophrenia and suggest a means to evaluate the specificity of antipsychotic treatment effects. PMID:28981857

  20. Enhancement of Signal-to-noise Ratio in Natural-source Transient Magnetotelluric Data with Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Paulson, K. V.

    For audio-frequency magnetotelluric surveys where the signals are lightning-stroke transients, the conventional Fourier transform method often fails to produce a high-quality impedance tensor. An alternative approach is to use the wavelet transform method, which is capable of localizing target information simultaneously in both the temporal and frequency domains. Unlike Fourier analysis, which yields an average amplitude and phase, the wavelet transform produces an instantaneous estimate of the amplitude and phase of a signal. In this paper a complex well-localized wavelet, the Morlet wavelet, has been used to transform and analyze audio-frequency magnetotelluric data. With the Morlet wavelet, the magnetotelluric impedance tensor can be computed directly in the wavelet transform domain. The lightning-stroke transients are easily identified on the dilation-translation plane. Choosing those wavelet transform values where the signals are located, a higher signal-to-noise ratio estimation of the impedance tensor can be obtained. In a test using real data, the wavelet transform showed a significant improvement in the signal-to-noise ratio over the conventional Fourier transform.
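
    The Morlet wavelet used above has the standard form (a sketch with the usual convention, omitting the small admissibility correction term; the paper's normalization may differ):

        \psi(t) = \pi^{-1/4}\, e^{i\omega_0 t}\, e^{-t^2/2}, \qquad \omega_0 \approx 6,

    whose complex values yield an instantaneous amplitude and phase at every dilation-translation point, so the impedance tensor can be estimated only where the lightning-stroke transients concentrate the signal energy.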

  1. Novel Angiogenic Domains: Use in Identifying Unique Transforming and Tumor Promoting Pathways in Human Breast Cancer

    DTIC Science & Technology

    2004-10-01

    Principal Investigator: Thomas F. Deuel, M.D. Contracting Organization: The Scripps Research Institute... Grant Number: DAMD17... Abstract (truncated): Breast cancers in humans often grow slowly or even remain undetectable for long periods of time only to

  2. Towards limb position invariant myoelectric pattern recognition using time-dependent spectral features.

    PubMed

    Khushaba, Rami N; Takruri, Maen; Miro, Jaime Valls; Kodagoda, Sarath

    2014-07-01

    Recent studies in Electromyogram (EMG) pattern recognition reveal a gap between research findings and a viable clinical implementation of myoelectric control strategies. One of the important factors contributing to the limited performance of such controllers in practice is the variation in limb position associated with normal use, as it results in different EMG patterns for the same movements when carried out at different positions. However, the end goal of the myoelectric control scheme is to allow amputees to control their prosthetics in an intuitive and accurate manner regardless of the limb position at which the movement is initiated. In an attempt to reduce the impact of limb position on EMG pattern recognition, this paper proposes a new feature extraction method that extracts a set of power spectrum characteristics directly from the time domain. The end goal is to form a set of features invariant to limb position. Specifically, the proposed method estimates the spectral moments, spectral sparsity, spectral flux, irregularity factor, and the signal's power spectrum correlation. This is achieved by using Fourier transform properties to form invariants to amplification, translation and signal scaling, providing an efficient and accurate representation of the underlying EMG activity. Additionally, due to the inherent temporal structure of the EMG signal, the proposed method is applied to the global segments of EMG data as well as to sliced segments using multiple overlapped windows. The performance of the proposed features is tested on EMG data collected from eleven subjects, while implementing eight classes of movements, each at five different limb positions. Practical results indicate that the proposed feature set can achieve a significant reduction in classification error rates, in comparison to other methods, with ≈8% error on average across all subjects and limb positions. A real-time implementation and demonstration is also provided and made available as a video supplement (see Appendix A).
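
    The core trick, estimating spectral moments without leaving the time domain, follows from Parseval's theorem: the 2k-th moment of the power spectrum equals the energy of the k-th derivative. A minimal sketch (the two descriptor formulas are assumed for illustration and are not the paper's exact feature set):

        import numpy as np

        def td_spectral_features(x):
            """Power-spectrum characteristics computed purely in the time
            domain, using finite differences as discrete derivatives."""
            d1 = np.diff(x)          # first difference  ~ 1st derivative
            d2 = np.diff(x, n=2)     # second difference ~ 2nd derivative
            m0 = np.sum(x ** 2)      # zeroth spectral moment (total power)
            m2 = np.sum(d1 ** 2)     # second spectral moment
            m4 = np.sum(d2 ** 2)     # fourth spectral moment
            irregularity = m2 / np.sqrt(m0 * m4)  # irregularity factor
            sparsity = m0 / np.sqrt(m2 * m4)      # a spectral-sparsity proxy
            return np.array([m0, m2, m4, irregularity, sparsity])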

  3. Image Fusion Algorithms Using Human Visual System in Transform Domain

    NASA Astrophysics Data System (ADS)

    Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar

    2017-08-01

    The aim of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from the various source images and thereby attain a fused image. The process involves two main steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using the HVS weights. Qualitative sub-bands are thus selected from the different sources to form a high-quality HVS-based fused image, whose quality is evaluated with general fusion metrics. The results show its superiority among the state-of-the-art Multi-Resolution Transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum-selection fusion rule.

  4. Yeast peroxisomal multifunctional enzyme: (3R)-hydroxyacyl-CoA dehydrogenase domains A and B are required for optimal growth on oleic acid.

    PubMed

    Qin, Y M; Marttila, M S; Haapalainen, A M; Siivari, K M; Glumoff, T; Hiltunen, J K

    1999-10-01

    The yeast peroxisomal (3R)-hydroxyacyl-CoA dehydrogenase/2-enoyl-CoA hydratase 2 (multifunctional enzyme type 2; MFE-2) has two N-terminal domains belonging to the short chain alcohol dehydrogenase/reductase superfamily. To investigate the physiological roles of these domains, here called A and B, Saccharomyces cerevisiae fox-2 cells (devoid of Sc MFE-2) were taken as a model system. Gly(16) and Gly(329) of the S. cerevisiae A and B domains, corresponding to Gly(16), which is mutated in the human MFE-2 deficiency, were mutated to serine and cloned into the yeast expression plasmid pYE352. In oleic acid medium, fox-2 cells transformed with pYE352::ScMFE-2(aDelta) and pYE352::ScMFE-2(bDelta) grew slower than cells transformed with pYE352::ScMFE-2, whereas cells transformed with pYE352::ScMFE-2(aDeltabDelta) failed to grow. Candida tropicalis MFE-2 with a deleted hydratase 2 domain (Ct MFE-2(h2Delta)) and mutational variants of the A and B domains (Ct MFE-2(h2DeltaaDelta), Ct MFE-2(h2DeltabDelta), and Ct MFE-2(h2DeltaaDeltabDelta)) were overexpressed and characterized. All proteins were dimers with similar secondary structure elements. Both wild type domains were enzymatically active, with the B domain showing the highest activity with short chain and the A domain with medium and long chain (3R)-hydroxyacyl-CoA substrates. The data show that the dehydrogenase domains of yeast MFE-2 have different substrate specificities required to allow the yeast to propagate optimally on fatty acids as the carbon source.

  5. High-resolution 2-D Bragg diffraction reveal heterogeneous domain transformation behavior in a bulk relaxor ferroelectric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pramanick, Abhijit, E-mail: apramani@cityu.edu.hk; Stoica, Alexandru D.; An, Ke

    2016-08-29

    In-situ measurement of fine-structure of neutron Bragg diffraction peaks from a relaxor single-crystal using a time-of-flight instrument reveals highly heterogeneous mesoscale domain transformation behavior under applied electric fields. It is observed that only ∼25% of domains undergo reorientation or phase transition contributing to large average strains, while at least 40% remain invariant and exhibit microstrains. Such insights could be central for designing new relaxor materials with better performance and longevity. The current experimental technique can also be applied to resolve complex mesoscale phenomena in other functional materials.

  6. High-resolution 2-D Bragg diffraction reveal heterogeneous domain transformation behavior in a bulk relaxor ferroelectric

    DOE PAGES

    Pramanick, Abhijit; Stoica, Alexandru D.; An, Ke

    2016-09-02

    In-situ measurement of fine-structure of neutron Bragg diffraction peaks from a relaxor single-crystal using a time-of-flight instrument reveals highly heterogeneous mesoscale domain transformation behavior under applied electric fields. We observed that only 25% of domains undergo reorientation or phase transition contributing to large average strains, while at least 40% remain invariant and exhibit microstrains. Such insights could be central for designing new relaxor materials with better performance and longevity. The current experimental technique can also be applied to resolve complex mesoscale phenomena in other functional materials.

  7. TEM characterization of planar defects in massively transformed TiAl alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, X.D.; Wiezorek, J.M.K.; Fraser, H.L.

    1997-12-31

    The microstructure of a massively transformed Ti-49at.%Al alloy has been studied by conventional transmission electron microscopy (CTEM) and high resolution TEM (HREM). A high density of planar defects, namely complex anti-phase domain boundaries (CAPDB) and thermal micro-twins (TMT), has been observed. CTEM images and diffraction patterns showed that two anti-phase related γ-matrix domains were generally separated by a thin layer of a 90°-domain, for which the c-axis is rotated 90° about a common cube axis with respect to those of the γ-matrix domains. HREM confirmed the presence of two crystallographically different types of 90°-domains associated with the CAPDB. Furthermore, interactions between the CAPDB and TMT have been observed. Local faceting of the generally wavy, non-crystallographic CAPDB parallel to the {111}-twinning planes occurred due to interaction with the TMT. The relaxation of the CAPDB onto {111} required diffusion, which is proposed to be enhanced locally in the presence of the dislocations associated with the formation of TMT during the massive transformation.

  8. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th order, state variable model of the F100 engine and to a 43rd order, transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

  9. Matrix form for the instrument line shape of Fourier-transform spectrometers yielding a fast integration algorithm to theoretical spectra.

    PubMed

    Desbiens, Raphaël; Tremblay, Pierre; Genest, Jérôme; Bouchard, Jean-Pierre

    2006-01-20

    The instrument line shape (ILS) of a Fourier-transform spectrometer is expressed in a matrix form. For all line shape effects that scale with wavenumber, the ILS matrix is shown to be transposed in the spectral and interferogram domains. The novel representation of the ILS matrix in the interferogram domain yields an insightful physical interpretation of the underlying process producing self-apodization. Working in the interferogram domain circumvents the problem of taking into account the effects of finite optical path difference and permits a proper discretization of the equations. A fast algorithm in O(N log2 N), based on the fractional Fourier transform, is introduced that permits the application of a constant resolving power line shape to theoretical spectra or forward models. The ILS integration formalism is validated with experimental data.

  10. Novel image encryption algorithm based on multiple-parameter discrete fractional random transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Dong, Taiji; Wu, Jianhua

    2010-08-01

    A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistic analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.

  11. Method and algorithm for image processing

    DOEpatents

    He, George G.; Moon, Brain D.

    2003-12-16

    The present invention is a modified Radon transform. It is similar to the traditional Radon transform for the extraction of line parameters and similar to the traditional slant stack for the intensity summation of pixels away from a given pixel, for example over ray paths that span 360 degrees at a given grid point in the time and offset domain. However, the present invention differs from these methods in that the intensity and direction of a composite intensity for each pixel are maintained separately instead of being combined after the transformation. An advantage of this approach is the elimination of the work required to extract the line parameters in the transformed domain. The advantage of the modified Radon transform method is amplified when many lines are present in the imagery or when the lines are just short segments, both of which occur in actual imagery.

  12. Causal Correlation Functions and Fourier Transforms: Application in Calculating Pressure Induced Shifts

    NASA Technical Reports Server (NTRS)

    Ma, Q.; Tipping, R. H.; Lavrentieva, N. N.

    2012-01-01

    By adopting a concept from signal processing, instead of starting from the correlation functions, which are even, one considers the causal correlation functions, whose Fourier transforms become complex. Their real and imaginary parts multiplied by 2 are the Fourier transforms of the original correlations and the subsequent Hilbert transforms, respectively. Thus, by taking this step one can complete the two previously needed transforms. The greatest advantage, however, is obviating the Cauchy principal-value integrations required in the Hilbert transforms. Meanwhile, because the causal correlations are well bounded within the time domain and band limited in the frequency domain, one can replace their Fourier transforms by discrete Fourier transforms, and the latter can be carried out with the FFT algorithm. This replacement is justified by sampling theory because the Fourier transforms can be derived from the discrete Fourier transforms at the Nyquist rate without any distortions. We apply this method in calculating pressure-induced shifts of H2O lines and obtain more reliable values. By comparing the calculated shifts with those in HITRAN 2008, and by screening both with the pair identity and the smooth variation rules, one can conclude that many of the shift values in HITRAN are not correct.
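
    The construction can be demonstrated numerically in a few lines (a sketch; signs and normalization factors depend on the Fourier-transform convention, here NumPy's e^{-2*pi*i...} convention, and the specific correlation function is assumed for illustration):

        import numpy as np

        n = 4096
        t = np.fft.fftfreq(n, d=1.0 / n)   # symmetric time grid
        C = np.exp(-np.abs(t) / 50.0)      # an even correlation function
        c = np.where(t >= 0, C, 0.0)       # causal correlation C(t) * u(t)
        c[0] *= 0.5                        # half weight at t = 0
        F = np.fft.fft(c)
        spectrum = 2.0 * F.real            # matches the FFT of the even C(t)
        hilbert_part = 2.0 * F.imag        # (minus) its Hilbert transform
        # Check: the real part reproduces the direct transform of C(t).
        assert np.allclose(spectrum, np.fft.fft(C).real, atol=1e-8)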

  13. Interplay between bulk and edge-bound topological defects in a square micromagnet

    DOE PAGES

    Sloetjes, Sam D.; Digernes, Einar; Olsen, Fredrik K.; ...

    2018-01-22

    A field-driven transformation of a domain pattern in a square micromagnet, defined in a thin film of La0.7Sr0.3MnO3, is discussed in terms of creation and annihilation of bulk vortices and edge-bound topological defects with half-integer winding numbers. The evolution of the domain pattern was mapped with soft x-ray photoemission electron microscopy and magnetic force microscopy. Micromagnetic modeling, permitting detailed analysis of the spin texture, accurately reproduces the measured domain state transformation. The simulations also helped estimate the energy barriers associated with the creation and annihilation of the topological charges and thus to assess the stability of the domain states in this magnetic microstructure.

  14. Interplay between bulk and edge-bound topological defects in a square micromagnet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sloetjes, Sam D.; Digernes, Einar; Olsen, Fredrik K.

    A field-driven transformation of a domain pattern in a square micromagnet, defined in a thin film of La0.7Sr0.3MnO3, is discussed in terms of creation and annihilation of bulk vortices and edge-bound topological defects with half-integer winding numbers. The evolution of the domain pattern was mapped with soft x-ray photoemission electron microscopy and magnetic force microscopy. Micromagnetic modeling, permitting detailed analysis of the spin texture, accurately reproduces the measured domain state transformation. The simulations also helped estimate the energy barriers associated with the creation and annihilation of the topological charges and thus to assess the stability of the domain states in this magnetic microstructure.

  15. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography.

    PubMed

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-21

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch sparsity on a dictionary provides the regularization for our effort, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, the dictionary can be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results on Monte Carlo simulations and real data.
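
    One common way to write this kind of dictionary-penalized Poisson likelihood (a sketch; the authors' exact penalty, constraints, and constants may differ):

        \min_{x \ge 0,\ \{\alpha_j\}}\ \sum_i \big( [Ax]_i - y_i \log [Ax]_i \big) + \beta \sum_j \Big( \|R_j x - D\alpha_j\|_2^2 + \lambda \|\alpha_j\|_1 \Big),

    where $A$ is the PET system matrix, $y$ the measured counts, $R_j$ extracts the $j$-th image patch, and $D$ is the dictionary (pre-trained on CT images or learned adaptively); the first term is the negative Poisson log-likelihood and the second enforces patch sparsity on the dictionary.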

  16. Channel Estimation and Pilot Design for Massive MIMO Systems with Block-Structured Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua

    2018-03-01

    By utilizing the technology of compressive sensing (CS), channel estimation methods can reduce the number of pilots and improve spectrum efficiency. In this correspondence, a channel estimation and pilot design scheme is explored with the help of block-structured CS in massive MIMO systems. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed pilot design algorithm with superimposed pilots and the BSAMP algorithm provide better channel estimation than existing methods.

  17. Sparse representation of Gravitational Sound

    NASA Astrophysics Data System (ADS)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local sparsity measure is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated with a more complex signal, generated by Milde Science Communication to disseminate Gravitational Sound in the form of a ring tone.

  18. Analytical electron microscope study of the omega phase transformation in a zirconium-niobium alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaluzec, N. J.

    1979-01-01

    The study of the as-quenched omega phase morphology shows that the domain size of Zr-15% Nb is on the order of 30 Å. No alignment of omega domains along <222>β directions was observed, and samples having undergone thermal cycling in thin foil form did not develop a long-period structure of alternating β and ω phases below the omega transformation temperature. (FS)

  19. The Power of the Frame: Systems Transformation Framework for Health Care Leaders.

    PubMed

    Scott, Kathy A; Pringle, Janice

    Health care leaders are responsible for oversight of multiple and competing change interventions. These interventions regularly fail to achieve the desired outcomes and/or sustainable results. This often occurs because of the mental models and approaches that are used to plan, design, implement, and evaluate the system. These do not account for inherent characteristics that determine the system's likely ability to innovate while maintaining operational effectiveness. Theories exist on how to assess a system's readiness to change, but the definitions, constructs, and assessments are diverse and often look at facets of systems in isolation. The Systems Transformation Framework prescriptively defines and characterizes system domains on the basis of complex adaptive systems theory so that domains can be assessed in tandem. As a result, strengths and challenges to implementation are recognized before implementation begins. The Systems Transformation Framework defines 8 major domains: vision, leadership, organizational culture, organizational behavior, organizational structure, performance measurements, internal learning, and external learning. Each domain has principles that are critical for creating the conditions that lead to successful organizational adaptation and change. The Systems Transformation Framework can serve as a guide for health care leaders at all levels of the organization to (1) create environments that are change ready and (2) plan, design, implement, and evaluate change within complex adaptive systems.

  20. The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex

    PubMed Central

    Leibo, Joel Z.; Liao, Qianli; Anselmi, Fabio; Poggio, Tomaso

    2015-01-01

    Is visual cortex made up of general-purpose information processing machinery, or does it consist of a collection of specialized modules? If prior knowledge, acquired from learning a set of objects, is only transferable to new objects that share properties with the old, then the recognition system’s optimal organization must be one containing specialized modules for different object classes. Our analysis starts from a premise we call the invariance hypothesis: that the computational goal of the ventral stream is to compute an invariant-to-transformations and discriminative signature for recognition. The key condition enabling approximate transfer of invariance without sacrificing discriminability turns out to be that the learned and novel objects transform similarly. This implies that the optimal recognition system must contain subsystems trained only with data from similarly-transforming objects and suggests a novel interpretation of domain-specific regions like the fusiform face area (FFA). Furthermore, we can define an index of transformation-compatibility, computable from videos, that can be combined with information about the statistics of natural vision to yield predictions for which object categories ought to have domain-specific regions in agreement with the available data. The result is a unifying account linking the large literature on view-based recognition with the wealth of experimental evidence concerning domain-specific regions. PMID:26496457

  1. Accurate determination of the diffusion coefficient of proteins by Fourier analysis with whole column imaging detection.

    PubMed

    Zarabadi, Atefeh S; Pawliszyn, Janusz

    2015-02-17

    Analysis in the frequency domain is considered a powerful tool for eliciting precise information from spectroscopic signals. In this study, the Fourier transformation technique is employed to determine the diffusion coefficient (D) of a number of proteins in the frequency domain. Analytical approaches are investigated for the determination of D from both the experimental and the data treatment viewpoints. The diffusion process is modeled to calculate diffusion coefficients based on the Fourier transformation solution of Fick's law, and its results are compared to time domain results. The simulations characterize optimum spatial and temporal conditions and demonstrate the noise tolerance of the method. The proposed model is validated by its application to electropherograms from the diffusion path of a set of proteins. Real-time dynamic scanning is conducted to monitor dispersion by employing whole column imaging detection technology in combination with capillary isoelectric focusing (CIEF) and the imaging plug flow (iPF) experiment. These experimental techniques provide different peak shapes, which are utilized to demonstrate the ability of the Fourier transformation to extract diffusion coefficients from irregularly shaped signals. Experimental results confirmed that the Fourier transformation procedure substantially enhanced the accuracy of the determined values compared to those obtained in the time domain.
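
    The Fourier-domain route to D rests on the spatial-frequency solution of Fick's second law (a sketch of the underlying relation; the paper's treatment of boundary conditions and noise is more elaborate):

        \frac{\partial c}{\partial t} = D \frac{\partial^2 c}{\partial x^2} \;\Longrightarrow\; \hat{c}(k, t) = \hat{c}(k, 0)\, e^{-D k^2 t},

    so that $\ln\big|\hat{c}(k,t)/\hat{c}(k,0)\big| = -D k^2 t$ and $D$ follows from a linear fit against $k^2 t$. Because every spatial frequency contributes, no parametric peak shape needs to be fitted, which is what makes the approach robust to irregularly shaped signals.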

  2. Transformation-specific interaction of the bovine papillomavirus E5 oncoprotein with the platelet-derived growth factor receptor transmembrane domain and the epidermal growth factor receptor cytoplasmic domain.

    PubMed Central

    Cohen, B D; Goldstein, D J; Rutledge, L; Vass, W C; Lowy, D R; Schlegel, R; Schiller, J T

    1993-01-01

    The bovine papillomavirus E5 transforming protein appears to activate both the epidermal growth factor receptor (EGF-R) and the platelet-derived growth factor receptor (PDGF-R) by a ligand-independent mechanism. To further investigate the ability of E5 to activate receptors of different classes and to determine whether this stimulation occurs through the extracellular domain required for ligand activation, we constructed chimeric genes encoding PDGF-R and EGF-R by interchanging the extracellular, membrane, and cytoplasmic coding domains. Chimeras were transfected into NIH 3T3 and CHO(LR73) cells. All chimeras expressed stable protein which, upon addition of the appropriate ligand, could be activated as assayed by tyrosine autophosphorylation and biological transformation. Cotransfection of E5 with the wild-type and chimeric receptors resulted in the ligand-independent activation of receptors, provided that a receptor contained either the transmembrane domain of the PDGF-R or the cytoplasmic domain of the EGF-R. Chimeric receptors that contained both of these domains exhibited the highest level of E5-induced biochemical and biological stimulation. These results imply that E5 activates the PDGF-R and EGF-R by two distinct mechanisms, neither of which specifically involves the extracellular domain of the receptor. Consistent with the biochemical and biological activation data, coimmunoprecipitation studies demonstrated that E5 formed a complex with any chimera that contained a PDGF-R transmembrane domain or an EGF-R cytoplasmic domain, with those chimeras containing both domains demonstrating the greatest efficiency of complex formation. PMID:8394451

  3. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy, large scale gravity anomaly data reconstruction. Based on airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry data under the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
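
    The recovery step named above, Orthogonal Matching Pursuit, admits a compact generic implementation (a sketch of the standard algorithm, not the authors' code):

        import numpy as np

        def omp(A, y, k):
            """Greedily recover a k-sparse x from y ~ A @ x,
            where A is an m-by-n sensing matrix with m << n."""
            n = A.shape[1]
            support, residual = [], y.astype(float).copy()
            coef = np.zeros(0)
            for _ in range(k):
                # Pick the column most correlated with the residual.
                j = int(np.argmax(np.abs(A.T @ residual)))
                if j not in support:
                    support.append(j)
                # Re-fit all active coefficients by least squares.
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x = np.zeros(n)
            x[support] = coef
            return x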

  4. Experimental Investigations on Airborne Gravimetry Based on Compressed Sensing

    PubMed Central

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-01-01

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high-accuracy, large-scale gravity anomaly data reconstruction. Based on airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry data in the discrete Fourier transform (DFT) domain, this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and applies a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results show that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy from fewer measurements. PMID:24647125

  5. Exact Solutions of Coupled Multispecies Linear Reaction–Diffusion Equations on a Uniformly Growing Domain

    PubMed Central

    Simpson, Matthew J.; Sharp, Jesse A.; Morrow, Liam C.; Baker, Ruth E.

    2015-01-01

    Embryonic development involves diffusion and proliferation of cells, as well as diffusion and reaction of molecules, within growing tissues. Mathematical models of these processes often involve reaction–diffusion equations on growing domains that have been primarily studied using approximate numerical solutions. Recently, we have shown how to obtain an exact solution to a single, uncoupled, linear reaction–diffusion equation on a growing domain, 0 < x < L(t), where L(t) is the domain length. The present work is an extension of our previous study, and we illustrate how to solve a system of coupled reaction–diffusion equations on a growing domain. This system of equations can be used to study the spatial and temporal distributions of different generations of cells within a population that diffuses and proliferates within a growing tissue. The exact solution is obtained by applying an uncoupling transformation, and the uncoupled equations are solved separately before applying the inverse uncoupling transformation to give the coupled solution. We present several example calculations to illustrate different types of behaviour. The first example calculation corresponds to a situation where the initially-confined population diffuses sufficiently slowly that it is unable to reach the moving boundary at x = L(t). In contrast, the second example calculation corresponds to a situation where the initially-confined population is able to overcome the domain growth and reach the moving boundary at x = L(t). In its basic format, the uncoupling transformation at first appears to be restricted to deal only with the case where each generation of cells has a distinct proliferation rate. However, we also demonstrate how the uncoupling transformation can be used when each generation has the same proliferation rate by evaluating the exact solutions as an appropriate limit. PMID:26407013
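
    The algebraic heart of the uncoupling transformation can be illustrated on the reaction terms alone: a coupled linear system u'(t) = K u(t) with distinct rates decouples in the eigenbasis of K. The sketch below (NumPy/SciPy, with a made-up coupling matrix) shows the transform-solve-invert pattern; the paper applies the analogous idea to the full reaction-diffusion operator on a growing domain.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Hypothetical coupling of three cell generations: distinct diagonal rates
    # guarantee a complete eigenbasis, mirroring the distinct-proliferation case.
    K = np.array([[-1.0,  0.0,  0.0],
                  [ 1.0, -0.5,  0.0],
                  [ 0.0,  0.5, -0.2]])
    u0 = np.array([1.0, 0.0, 0.0])
    t = 2.0

    w, P = np.linalg.eig(K)            # K = P diag(w) P^{-1}

    # Uncoupling transformation: v = P^{-1} u obeys v' = diag(w) v, solved per component.
    v0 = np.linalg.solve(P, u0)
    v_t = np.exp(w * t) * v0
    u_t = (P @ v_t).real               # inverse transformation recouples the solution

    print(np.allclose(u_t, expm(K * t) @ u0))   # True: matches the direct solution
    ```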

  6. Exact Solutions of Coupled Multispecies Linear Reaction-Diffusion Equations on a Uniformly Growing Domain.

    PubMed

    Simpson, Matthew J; Sharp, Jesse A; Morrow, Liam C; Baker, Ruth E

    2015-01-01

    Embryonic development involves diffusion and proliferation of cells, as well as diffusion and reaction of molecules, within growing tissues. Mathematical models of these processes often involve reaction-diffusion equations on growing domains that have been primarily studied using approximate numerical solutions. Recently, we have shown how to obtain an exact solution to a single, uncoupled, linear reaction-diffusion equation on a growing domain, 0 < x < L(t), where L(t) is the domain length. The present work is an extension of our previous study, and we illustrate how to solve a system of coupled reaction-diffusion equations on a growing domain. This system of equations can be used to study the spatial and temporal distributions of different generations of cells within a population that diffuses and proliferates within a growing tissue. The exact solution is obtained by applying an uncoupling transformation, and the uncoupled equations are solved separately before applying the inverse uncoupling transformation to give the coupled solution. We present several example calculations to illustrate different types of behaviour. The first example calculation corresponds to a situation where the initially-confined population diffuses sufficiently slowly that it is unable to reach the moving boundary at x = L(t). In contrast, the second example calculation corresponds to a situation where the initially-confined population is able to overcome the domain growth and reach the moving boundary at x = L(t). In its basic format, the uncoupling transformation at first appears to be restricted to deal only with the case where each generation of cells has a distinct proliferation rate. However, we also demonstrate how the uncoupling transformation can be used when each generation has the same proliferation rate by evaluating the exact solutions as an appropriate limit.

  7. A Primer of Fourier Transform NMR.

    ERIC Educational Resources Information Center

    Macomber, Roger S.

    1985-01-01

    Fourier transform nuclear magnetic resonance (NMR) is a new spectroscopic technique that is often omitted from undergraduate curricula because of lack of instructional materials. Therefore, information is provided to introduce students to the technique of data collection and transformation into the frequency domain. (JN)
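
    A minimal sketch of the transformation the primer teaches, assuming a synthetic free-induction decay rather than real spectrometer data: collect the time-domain signal, then Fourier transform it into the frequency domain.

    ```python
    import numpy as np

    # Synthetic free-induction decay: two resonances with exponential decay.
    fs = 2000.0                        # sampling rate, Hz (assumed)
    t = np.arange(0, 1.0, 1 / fs)
    fid = (np.exp(2j * np.pi * 140 * t)
           + 0.5 * np.exp(2j * np.pi * 310 * t)) * np.exp(-t / 0.1)

    # Transformation into the frequency domain: two Lorentzian peaks appear.
    spectrum = np.fft.fftshift(np.fft.fft(fid))
    freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1 / fs))
    peak = freqs[np.argmax(np.abs(spectrum))]
    print(f"strongest resonance near {peak:.0f} Hz")   # ~140 Hz
    ```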

  8. A Short-Segment Fourier Transform Methodology

    DTIC Science & Technology

    2009-03-01

    defined sampling of the continuous-valued discrete-time Fourier transform, superresolution in the frequency domain and allowance of Dirac delta functions associated with pure sinusoidal input data components.

  9. Computation of transform domain covariance matrices

    NASA Technical Reports Server (NTRS)

    Fino, B. J.; Algazi, V. R.

    1975-01-01

    It is often of interest in applications to compute the covariance matrix of a random process transformed by a fast unitary transform. Here, the recursive definition of fast unitary transforms is used to derive recursive relations for the covariance matrices of the transformed process. These relations lead to fast methods of computation of covariance matrices and to substantial reductions of the number of arithmetic operations required.
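
    The quantity being computed is C_y = T C_x T^H for a transformed process y = T x. The recursive fast algorithms of the paper are not reproduced here; the NumPy sketch below just states the identity for a unitary DFT and checks it by Monte Carlo, with an assumed AR(1)-style covariance.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8

    # Covariance of a first-order Markov-like process: C[i, j] = rho^|i-j|.
    rho = 0.9
    C_x = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

    # Unitary DFT matrix T (a fast unitary transform).
    T = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)

    # Covariance of the transformed process y = T x.
    C_y = T @ C_x @ T.conj().T

    # Monte Carlo check.
    x = np.linalg.cholesky(C_x) @ rng.normal(size=(n, 100000))
    y = T @ x
    C_y_mc = (y @ y.conj().T) / x.shape[1]
    print(np.max(np.abs(C_y - C_y_mc)))   # small sampling error
    ```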

  10. Adaptive Filtering in the Wavelet Transform Domain via Genetic Algorithms

    DTIC Science & Technology

    2004-08-06

    ... wavelet transforms, whereas the term "evolved" pertains only to the altered wavelet coefficients used during the inverse transform process. ... In other words, the inverse transform produces the original signal x(t) from the wavelet and scaling coefficients, x(t) = Σ_k Σ_n d_{k,n} ψ_{k,n}(t) ... to reconstruct the original signal as accurately as possible. The inverse transform reconstructs an approximation of the original signal (Burrus ...
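
    A runnable sketch of the forward/inverse relationship the fragment describes, using PyWavelets: decompose a signal, alter the detail coefficients d_{k,n} (here by simple soft thresholding rather than the report's genetic algorithm), and inverse transform to reconstruct an approximation of the original signal.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sign(np.sin(2 * np.pi * 13 * t))
    noisy = clean + 0.3 * rng.normal(size=t.size)

    # Forward DWT: scaling coefficients plus detail coefficients d_{k,n}.
    coeffs = pywt.wavedec(noisy, 'db4', level=5)

    # "Alter the coefficients" -- here by soft thresholding instead of a GA.
    thr = 0.3 * np.sqrt(2 * np.log(t.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]

    # Inverse transform reconstructs an approximation of the original signal.
    denoised = pywt.waverec(coeffs, 'db4')[: t.size]
    print("RMSE before/after:",
          np.sqrt(np.mean((noisy - clean) ** 2)),
          np.sqrt(np.mean((denoised - clean) ** 2)))
    ```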

  11. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    PubMed

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning of a feature transformation matrix expressed in the original feature space and propose an online passive-aggressive feature transformation algorithm. Then these original features are mapped to a kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examine the effect of setting different parameter values in the proposed algorithms and evaluate the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods in cross-domain and multiclass object recognition applications.
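
    The paper's exact algorithms are not reproduced here, but the flavor of an online passive-aggressive update for a bilinear similarity s(x1, x2) = x1' M x2 can be sketched as follows; the data, labels and step-size cap C are all illustrative assumptions, not the published method.

    ```python
    import numpy as np

    def pa_update(M, x1, x2, y, C=1.0):
        """One passive-aggressive step: drive hinge loss max(0, 1 - y*s) to zero
        with a minimal (Frobenius-norm) change to the similarity matrix M."""
        loss = max(0.0, 1.0 - y * (x1 @ M @ x2))
        if loss > 0:
            tau = min(C, loss / ((x1 @ x1) * (x2 @ x2)))   # PA-I step size
            M = M + tau * y * np.outer(x1, x2)
        return M

    rng = np.random.default_rng(3)
    d = 16
    M_true = np.eye(d) + 0.5 * np.diag(rng.random(d))      # hidden ground-truth metric
    M = np.eye(d)                                          # start from the identity
    for _ in range(2000):
        x1, x2 = rng.normal(size=d), rng.normal(size=d)
        y = 1.0 if x1 @ M_true @ x2 > 0 else -1.0          # same/different-class proxy
        M = pa_update(M, x1, x2, y)
    ```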

  12. New mechanism for toughening ceramic materials. Final report, 15 March 1989-15 July 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutler, R.A.; Virkar, A.V.; Cross, L.E.

    Ferroelastic toughening was identified as a viable mechanism for toughening ceramics. Domain structure and domain switching were identified by X-ray diffraction, transmission optical microscopy, and transmission electron microscopy in zirconia, lead zirconate titanate and gadolinium molybdate. Switching in compression was observed at stresses greater than 600 MPa and at 400 MPa in tension for polycrystalline t'-zirconia. Domain switching contributes to toughness, as evidenced by data for monoclinic zirconia, t'-zirconia, PZT and GMO. The magnitude of toughening varied between 0.6 MPa·m^1/2 for GMO and 2-6 MPa·m^1/2 for zirconia. Polycrystalline monoclinic and t'-zirconias, which showed no transformation toughening, had toughness values similar to those of Y-TZP, which exhibits transformation toughening. Coarse-grained monoclinic and tetragonal (t') zirconia samples could be cooled to room temperature for mechanical property evaluation, since fine domain size, not grain size, controlled transformation for t'-zirconia and minimized stress for m-ZrO2. LnAlO3, LnNbO4, and LnCrO3 were among the materials identified as high-temperature ferroelastics.

  13. Deblocking of mobile stereo video

    NASA Astrophysics Data System (ADS)

    Azzari, Lucio; Gotchev, Atanas; Egiazarian, Karen

    2012-02-01

    Most candidate methods for compression of mobile stereo video apply block-transform-based compression following the H.264 standard, with quantization of transform coefficients driven by a quantization parameter (QP). The compression ratio and the resulting bit rate are directly determined by the QP level, and high compression is achieved at the price of visually noticeable blocking artifacts. Previous studies on the perceived quality of mobile stereo video have revealed that blocking artifacts are the most annoying and most influential in the acceptance/rejection of mobile stereo video and can even completely cancel the 3D effect and the corresponding quality added value. In this work, we address the problem of deblocking of mobile stereo video. We modify a powerful non-local transform-domain collaborative filtering method originally developed for denoising of images and video. The method groups similar block patches residing in the spatial and temporal vicinity of a reference block and filters them collaboratively in a suitable transform domain. We study the most suitable way of finding similar patches in both channels of stereo video and suggest a hybrid four-dimensional transform to process the collected synchronized (stereo) volumes of grouped blocks. The results benefit from the additional correlation available between the left and right channels of the stereo video. Furthermore, additional sharpening is applied through embedded alpha-rooting in the transform domain, which improves the visual appearance of the deblocked frames.
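
    The embedded alpha-rooting step mentioned at the end admits a compact sketch: raise transform-coefficient magnitudes to a power α < 1 while preserving phase, which boosts high-frequency content relative to the dominant low-frequency terms. This toy version uses a plain 2-D DFT rather than the paper's hybrid 4-D transform.

    ```python
    import numpy as np

    def alpha_rooting(image, alpha=0.9):
        """Sharpen by alpha-rooting the 2-D DFT magnitudes (phase preserved)."""
        F = np.fft.fft2(image)
        mag, phase = np.abs(F), np.angle(F)
        mag_max = mag.max()
        # |F|^alpha scaling, normalised so the DC term (overall brightness) is kept.
        mag_new = mag_max * (mag / mag_max) ** alpha
        return np.real(np.fft.ifft2(mag_new * np.exp(1j * phase)))

    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0            # toy blocky image
    sharp = alpha_rooting(img, alpha=0.9)
    ```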

  14. Devil's vortex Fresnel lens phase masks on an asymmetric cryptosystem based on phase-truncation in gyrator wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Singh, Hukum

    2016-06-01

    An asymmetric scheme has been proposed for optical double-image encryption in the gyrator wavelet transform (GWT) domain. Grayscale and binary images are encrypted separately using double random phase encoding (DRPE) in the GWT domain. Phase masks based on devil's vortex Fresnel lenses (DVFLs) and random phase masks (RPMs) are jointly used in the spatial as well as the Fourier plane. The images to be encrypted are first gyrator transformed and then single-level discrete wavelet transformed (DWT) to decompose the LL, HL, LH and HH matrices of approximation, horizontal, vertical and diagonal coefficients. The resulting coefficients from the DWT are multiplied by other RPMs and the results are applied to the inverse discrete wavelet transform (IDWT) to obtain the encrypted images. The images are recovered from their corresponding encrypted images by using the correct parameters of the GWT and DVFL; the digital implementation has been performed using MATLAB 7.6.0 (R2008a). The mother wavelet family, the DVFL and the gyrator transform orders associated with the GWT are extra keys that cause difficulty for an attacker. Thus, the scheme is more secure than conventional techniques. The efficacy of the proposed scheme is verified by computing the mean squared error (MSE) between the recovered and original images. The sensitivity of the proposed scheme is verified against encryption parameters and noise attacks.
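
    The DRPE building block at the core of the scheme is easy to sketch with ordinary Fourier transforms standing in for the gyrator wavelet stages (and without the DVFL masks); the two random phase masks act as keys, and their conjugates decrypt.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    img = rng.random((64, 64))                 # stand-in grayscale image

    # Two statistically independent random phase masks (spatial and Fourier plane).
    rpm1 = np.exp(2j * np.pi * rng.random(img.shape))
    rpm2 = np.exp(2j * np.pi * rng.random(img.shape))

    # Encryption: mask, Fourier transform, mask again, inverse transform.
    cipher = np.fft.ifft2(np.fft.fft2(img * rpm1) * rpm2)

    # Decryption with the correct keys (conjugate masks undo the encoding).
    decoded = np.fft.ifft2(np.fft.fft2(cipher) * rpm2.conj()) * rpm1.conj()
    print(np.allclose(decoded.real, img))      # True up to numerical precision
    ```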

  15. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    PubMed

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphics processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, zero padding to increase the axial data-array size to 8192, an inverse Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
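
    A single-A-scan NumPy sketch of the zero-filling interpolation chain described above (forward FFT, zero-pad to 8192, inverse FFT, wavelength-to-wavenumber interpolation, FFT to depth, log scale); the spectrometer range and reflector depth are assumptions, and the full-range lateral Hilbert step and GPU parallelization are omitted.

    ```python
    import numpy as np

    n, n_pad = 2048, 8192
    lam = np.linspace(800e-9, 900e-9, n)        # spectrometer wavelengths (assumed)
    k = 2 * np.pi / lam                         # wavenumbers (non-uniformly spaced)

    # Synthetic spectral interference fringe from a single reflector at depth z0.
    z0 = 0.3e-3
    fringe = 1 + np.cos(2 * k * z0)

    # Zero-filling interpolation: FFT -> pad to 8192 -> inverse FFT yields a
    # finely interpolated fringe, reducing errors in the subsequent resampling.
    F = np.fft.fft(fringe)
    F_pad = np.zeros(n_pad, dtype=complex)
    F_pad[:n // 2] = F[:n // 2]
    F_pad[-n // 2:] = F[-n // 2:]
    fine = np.fft.ifft(F_pad).real * (n_pad / n)
    lam_fine = np.linspace(lam[0], lam[-1], n_pad)

    # Linear resampling from wavelength to uniformly spaced wavenumber.
    k_uniform = np.linspace(k.min(), k.max(), n_pad)
    fringe_k = np.interp(k_uniform, (2 * np.pi / lam_fine)[::-1], fine[::-1])

    # Axial profile and log scaling.
    a_scan = 20 * np.log10(np.abs(np.fft.fft(fringe_k)) + 1e-12)
    print(np.argmax(a_scan[: n_pad // 2]))      # peak bin encodes the reflector depth
    ```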

  16. A novel ship CFAR detection algorithm based on adaptive parameter enhancement and wake-aided detection in SAR images

    NASA Astrophysics Data System (ADS)

    Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun

    2018-03-01

    Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the proliferation of SAR sensors, high-resolution images that contain more target structure information, such as spatial details, can be acquired. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detection algorithm to highlight targets. The whole method operates on values in the APT domain. Firstly, the image is mapped into the new transform domain by the algorithm. Secondly, the false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is processed by the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image as a verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform does enhance the target structure and improve the contrast of the image. The algorithm performs well in ship and ship-wake detection.
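
    The paper's APT-domain detector is described only at a high level, so the sketch below implements the generic cell-averaging CFAR idea it builds on: estimate local clutter power from training cells around a cell under test and flag cells exceeding a threshold scaled for a desired false-alarm rate. The 1-D layout and clutter model are illustrative assumptions.

    ```python
    import numpy as np

    def ca_cfar(x, n_train=16, n_guard=2, pfa=1e-4):
        """1-D cell-averaging CFAR: flag cells exceeding a noise-scaled threshold."""
        n = len(x)
        n_half = n_train // 2
        alpha = n_train * (pfa ** (-1 / n_train) - 1)   # CA-CFAR scaling factor
        hits = np.zeros(n, dtype=bool)
        for i in range(n_half + n_guard, n - n_half - n_guard):
            leading = x[i - n_guard - n_half : i - n_guard]
            trailing = x[i + n_guard + 1 : i + n_guard + 1 + n_half]
            noise = np.mean(np.concatenate([leading, trailing]))
            hits[i] = x[i] > alpha * noise
        return hits

    rng = np.random.default_rng(5)
    clutter = rng.exponential(1.0, 1000)               # exponential sea-clutter power
    clutter[[300, 640]] += 40                          # two bright ship-like targets
    print(np.flatnonzero(ca_cfar(clutter)))            # ~ [300, 640]
    ```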

  17. Optical image encryption using QR code and multilevel fingerprints in gyrator transform domains

    NASA Astrophysics Data System (ADS)

    Wei, Yang; Yan, Aimin; Dong, Jiabin; Hu, Zhijuan; Zhang, Jingtao

    2017-11-01

    We present a novel optical image encryption method using a quick response (QR) code and multilevel fingerprint keys in gyrator transform (GT) domains. In this method, an original image is first transformed into a QR code, which is placed in the input plane of cascaded GTs. Subsequently, the QR code is encrypted into the ciphertext by using the multilevel fingerprint keys. The original image can be obtained easily by reading the high-quality retrieved QR code with hand-held devices. The main parameters used as private keys are the GTs' rotation angles and the multilevel fingerprints. Biometrics and cryptography are integrated with each other to improve data security. Numerical simulations are performed to demonstrate the validity and feasibility of the proposed encryption scheme. The method of applying QR codes and fingerprints in GT domains possesses much potential for information security.

  18. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.
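
    PURE-LET itself optimizes thresholding functions against an unbiased MSE estimate, which is beyond a few lines; a common simpler baseline for the same mixed Poisson-Gaussian model is a generalized Anscombe transform that stabilizes the variance to about 1 so that any Gaussian denoiser can follow. A sketch, with assumed noise parameters:

    ```python
    import numpy as np

    def gat(x, sigma=0.1, alpha=1.0):
        """Generalized Anscombe transform: approximately unit variance for
        data of the form alpha*Poisson(intensity) + Gaussian(0, sigma)."""
        return (2.0 / alpha) * np.sqrt(
            np.maximum(alpha * x + 3.0 * alpha**2 / 8.0 + sigma**2, 0))

    rng = np.random.default_rng(6)
    intensity = np.full(10000, 5.0)                # true Poisson intensity
    data = rng.poisson(intensity) + rng.normal(0, 0.1, intensity.size)

    stabilized = gat(data, sigma=0.1)
    print(np.var(stabilized))                      # close to 1 after stabilization
    ```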

  19. Vibration signal correction of unbalanced rotor due to angular speed fluctuation

    NASA Astrophysics Data System (ADS)

    Cao, Hongrui; He, Dong; Xi, Songtao; Chen, Xuefeng

    2018-07-01

    The rotating speed of a rotor is rarely constant in practice due to angular speed fluctuation, which affects the balancing accuracy of the rotor. In this paper, the effect of angular speed fluctuation on the vibration responses of an unbalanced rotor is analyzed quantitatively. Then, a vibration signal correction method based on the zoom synchrosqueezing transform (ZST) and tacholess order tracking is proposed. The instantaneous angular speed (IAS) of the rotor is first extracted by the ZST and then used to calculate the instantaneous phase. The vibration signal is further resampled in the angular domain to reduce the effect of angular speed fluctuation. The signal obtained in the angular domain is transformed into the order domain using the discrete Fourier transform (DFT) to estimate the amplitude and phase of the vibration signal. Simulated and experimental results show that the proposed method can successfully correct the amplitude and phase of the vibration signal affected by angular speed fluctuation.
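
    The resampling step is the easiest part to sketch: integrate the IAS into an instantaneous phase, interpolate the vibration onto a uniform angle grid, and take the DFT so the unbalance appears at order 1 regardless of speed fluctuation. The IAS extraction by ZST is not reproduced; the speed profile below is synthetic.

    ```python
    import numpy as np

    fs = 5000.0
    t = np.arange(0, 4.0, 1 / fs)

    # Fluctuating speed (rev/s) and the 1x-order unbalance vibration it produces.
    ias = 20.0 + 2.0 * np.sin(2 * np.pi * 0.5 * t)
    phase = 2 * np.pi * np.cumsum(ias) / fs               # instantaneous angle, rad
    vib = np.cos(phase) + 0.1 * np.random.default_rng(7).normal(size=t.size)

    # Resample onto a uniform angle grid (here 64 samples per revolution).
    revs = phase / (2 * np.pi)
    rev_uniform = np.arange(0, revs[-1], 1 / 64)
    vib_ang = np.interp(rev_uniform, revs, vib)

    # DFT in the angular domain: the axis is orders, not Hz; unbalance sits at order 1.
    spec = np.abs(np.fft.rfft(vib_ang)) / len(vib_ang) * 2
    orders = np.fft.rfftfreq(len(vib_ang), 1 / 64)
    print(f"dominant order: {orders[np.argmax(spec)]:.2f}")   # ~1.00
    ```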

  20. Time-response shaping using output to input saturation transformation

    NASA Astrophysics Data System (ADS)

    Chambon, E.; Burlion, L.; Apkarian, P.

    2018-03-01

    For linear systems, the control law design is often performed so that the resulting closed loop meets specific frequency-domain requirements. However, in many cases, it may be observed that the obtained controller does not enforce time-domain requirements, among which is the objective of keeping a scalar output variable in a given interval. In this article, a transformation is proposed to convert prescribed bounds on an output variable into time-varying saturations on the synthesised linear scalar control law. This transformation uses well-chosen time-varying coefficients so that the resulting time-varying saturation bounds do not overlap in the presence of disturbances. Using an anti-windup approach, it is shown that the origin of the resulting closed loop is globally asymptotically stable and that the constrained output variable satisfies the time-domain constraints in the presence of an unknown finite-energy bounded disturbance. An application to a linear ball-and-beam model is presented.

  1. Constructing Surrogate Models of Complex Systems with Enhanced Sparsity: Quantifying the Influence of Conformational Uncertainty in Biomolecular Solvation

    DOE PAGES

    Lei, Huan; Yang, Xiu; Zheng, Bin; ...

    2015-11-05

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
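
    A toy version of the compressive-sensing construction of a sparse gPC surrogate, assuming scikit-learn is available: orthonormal Hermite-polynomial features of a Gaussian input, fewer samples than basis functions, and ℓ1-regularized regression to recover the few active coefficients. The target function and problem sizes are invented for illustration.

    ```python
    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermeval
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(8)
    n_basis, n_samp = 12, 9                      # more unknowns than samples
    xi = rng.normal(size=n_samp)                 # toy conformational random variable

    def he(j, x):
        """Orthonormal probabilists' Hermite polynomial He_j / sqrt(j!)."""
        c = np.zeros(j + 1)
        c[j] = 1.0
        return hermeval(x, c) / np.sqrt(factorial(j))

    # Sparse ground truth: only He_0, He_2 and He_5 are active.
    y = 1.0 * he(0, xi) + 0.8 * he(2, xi) + 0.3 * he(5, xi)

    Phi = np.column_stack([he(j, xi) for j in range(n_basis)])
    model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=100000).fit(Phi, y)
    print(np.flatnonzero(np.abs(model.coef_) > 0.05))   # ideally {0, 2, 5}
    ```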

  2. Low-dose CT reconstruction with patch based sparsity and similarity constraints

    NASA Astrophysics Data System (ADS)

    Xu, Qiong; Mou, Xuanqin

    2014-03-01

    With the rapid growth of CT-based medical applications, low-dose CT reconstruction becomes more and more important to human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the low-dose case. However, the reconstructed image quality of SIR highly depends on the prior-based regularization due to the insufficiency of low-dose data. The frequently used regularizations are developed from pixel-based priors, such as the smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise and structures effectively. Recently, patch-based methods, such as dictionary learning and non-local means filtering, have outperformed the conventional pixel-based methods. A patch is a small area of the image that expresses its structural information. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In the SIR framework, both patch-based sparsity and similarity are considered in the regularization term. On one hand, patch-based sparsity is addressed by sparse representation and dictionary learning methods; on the other hand, patch-based similarity is addressed by the non-local means filtering method. We conducted a real-data experiment to evaluate the proposed method. The experimental results validate that this method can lead to better images with less noise and more detail than other methods in low-count and few-view cases.

  3. JiTTree: A Just-in-Time Compiled Sparse GPU Volume Data Structure.

    PubMed

    Labschütz, Matthias; Bruckner, Stefan; Gröller, M Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    Sparse volume data structures enable the efficient representation of large but sparse volumes in GPU memory for computation and visualization. However, the choice of a specific data structure for a given data set depends on several factors, such as the memory budget, the sparsity of the data, and data access patterns. In general, there is no single optimal sparse data structure, but a set of several candidates with individual strengths and drawbacks. One solution to this problem is hybrid data structures, which locally adapt themselves to the sparsity. However, they typically suffer from increased traversal overhead, which limits their utility in many applications. This paper presents JiTTree, a novel sparse hybrid volume data structure that uses just-in-time compilation to overcome these problems. By combining multiple sparse data structures and reducing traversal overhead, we leverage their individual advantages. We demonstrate that hybrid data structures adapt well to a large range of data sets. They are especially superior to other sparse data structures for data sets that locally vary in sparsity. Possible optimization criteria are memory, performance and a combination thereof. Through just-in-time (JIT) compilation, JiTTree reduces the traversal overhead of the resulting optimal data structure. As a result, our hybrid volume data structure enables efficient computations on the GPU, while being superior in terms of memory usage when compared to non-hybrid data structures.

  4. L1-2 minimization for exact and stable seismic attenuation compensation

    NASA Astrophysics Data System (ADS)

    Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang

    2018-06-01

    Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, which inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity, which can be performed on either pre-stack or post-stack data so as to mitigate amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1 norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill-conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex penalty function of the L1-2 norm can be decomposed into two convex subproblems via the difference-of-convex algorithm (DCA), and each subproblem can be solved efficiently by the alternating direction method of multipliers (ADMM). The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
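
    A compact sketch of L1-2 minimization by DCA, on a generic sparse-recovery toy problem rather than the seismic Q-compensation operator, and with an ISTA inner solver standing in for the paper's ADMM: the concave part -λ||x||2 is linearized at each outer iteration, leaving a convex L1 subproblem.

    ```python
    import numpy as np

    def soft(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0)

    def l1_minus_l2(A, b, lam=0.01, outer=10, inner=200):
        """Minimize 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2) by DCA + ISTA."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # ISTA step size (1/L)
        for _ in range(outer):
            # Linearize the concave part: subgradient of ||x||_2 at x.
            nrm = np.linalg.norm(x)
            g = x / nrm if nrm > 0 else np.zeros_like(x)
            for _ in range(inner):                   # convex L1 subproblem
                grad = A.T @ (A @ x - b) - lam * g
                x = soft(x - step * grad, step * lam)
        return x

    rng = np.random.default_rng(9)
    m, n, k = 40, 100, 5
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    b = A @ x_true

    x_hat = l1_minus_l2(A, b)
    print(np.linalg.norm(x_hat - x_true))            # small recovery error
    ```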

  5. Investigation of the influence of sampling schemes on quantitative dynamic fluorescence imaging

    PubMed Central

    Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Wang, Guodong; Wang, Bo; Zhan, Yonghua; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin

    2018-01-01

    Dynamic optical data from a series of sampling intervals can be used for quantitative analysis to obtain meaningful kinetic parameters of a probe in vivo. The sampling scheme may affect the quantification results of dynamic fluorescence imaging. Here, we investigate the influence of different sampling schemes on the quantification of binding potential (BP) with theoretically simulated and experimentally measured data. Three groups of sampling schemes are investigated, including the sampling starting point, sampling sparsity, and sampling uniformity. In the investigation of the influence of the sampling starting point, we further summarize two cases by considering the missing timing sequence between the probe injection and the sampling starting time. Results show that the mean value of BP exhibits an obvious growth trend with an increase in the delay of the sampling starting point, and has a strong correlation with the sampling sparsity. The growth trend is much more obvious if the missing timing sequence is discarded. The standard deviation of BP is inversely related to the sampling sparsity, and independent of the sampling uniformity and the delay of the sampling starting time. Moreover, the mean value of BP obtained by uniform sampling is significantly higher than that obtained by non-uniform sampling. Our results collectively suggest that a suitable sampling scheme can help compartmental modeling of dynamic fluorescence imaging provide more accurate results with simpler operations. PMID:29675325

  6. Collaborative sparse priors for multi-view ATR

    NASA Astrophysics Data System (ADS)

    Li, Xuelu; Monga, Vishal

    2018-04-01

    Recent work has seen a surge of sparse representation based classification (SRC) methods applied to automatic target recognition problems. While traditional SRC approaches used the l0 or l1 norm to quantify sparsity, spike and slab priors have established themselves as the gold standard for providing general tunable sparse structures on vectors. In this work, we employ collaborative spike and slab priors that can be applied to matrices to encourage sparsity for the problem of multi-view ATR. That is, target images captured from multiple views are expanded in terms of a training dictionary multiplied with a coefficient matrix. Ideally, for a test image set comprising multiple views of a target, coefficients corresponding to its identifying class are expected to be active, while others should be zero, i.e. the coefficient matrix is naturally sparse. We develop a new approach to solve the optimization problem that estimates the sparse coefficient matrix jointly with the sparsity-inducing parameters in the collaborative prior. ATR problems are investigated on the mid-wave infrared (MWIR) database made available by the US Army Night Vision and Electronic Sensors Directorate, which has a rich collection of views. Experimental results show that the proposed joint prior and coefficient estimation method (JPCEM) can: (1) enable improved accuracy when multiple views vs. a single one are invoked, and (2) outperform state-of-the-art alternatives particularly when training imagery is limited.

  7. Activation of the Lbc Rho Exchange Factor Proto-Oncogene by Truncation of an Extended C Terminus That Regulates Transformation and Targeting

    PubMed Central

    Sterpetti, Paola; Hack, Andrew A.; Bashar, Mariam P.; Park, Brian; Cheng, Sou-De; Knoll, Joan H. M.; Urano, Takeshi; Feig, Larry A.; Toksoz, Deniz

    1999-01-01

    The human lbc oncogene product is a guanine nucleotide exchange factor that specifically activates the Rho small GTP binding protein, thus resulting in biologically active, GTP-bound Rho, which in turn mediates actin cytoskeletal reorganization, gene transcription, and entry into the mitotic S phase. In order to elucidate the mechanism of onco-Lbc transformation, here we report that while proto- and onco-lbc cDNAs encode identical N-terminal dbl oncogene homology (DH) and pleckstrin homology (PH) domains, proto-Lbc encodes a novel C terminus absent in the oncoprotein that includes a predicted α-helical region homologous to cyto-matrix proteins, followed by a proline-rich region. The lbc proto-oncogene maps to chromosome 15, and onco-lbc represents a fusion of the lbc proto-oncogene N terminus with a short, unrelated C-terminal sequence from chromosome 7. Both onco- and proto-Lbc can promote formation of GTP-bound Rho in vivo. Proto-Lbc transforming activity is much reduced compared to that of onco-Lbc, and a significant increase in transforming activity requires truncation of both the α-helical and proline-rich regions in the proto-Lbc C terminus. Deletion of the chromosome 7-derived C terminus of onco-Lbc does not destroy transforming activity, demonstrating that it is loss of the proto-Lbc C terminus, rather than gain of an unrelated C-terminus by onco-Lbc, that confers transforming activity. Mutations of onco-Lbc DH and PH domains demonstrate that both domains are necessary for full transforming activity. The proto-Lbc product localizes to the particulate (membrane) fraction, while the majority of the onco-Lbc product is cytosolic, and mutations of the PH domain do not affect this localization. The proto-Lbc C-terminus alone localizes predominantly to the particulate fraction, indicating that the C terminus may play a major role in the correct subcellular localization of proto-Lbc, thus providing a mechanism for regulating Lbc oncogenic potential. PMID:9891067

  8. A new approach for measuring power spectra and reconstructing time series in active galactic nuclei

    NASA Astrophysics Data System (ADS)

    Li, Yan-Rong; Wang, Jian-Min

    2018-05-01

    We provide a new approach to measure power spectra and reconstruct time series in active galactic nuclei (AGNs) based on the fact that the Fourier transform of AGN stochastic variations is a series of complex Gaussian random variables. The approach parametrizes a stochastic series in the frequency domain and transforms it back to the time domain to fit the observed data. The parameters and their uncertainties are derived in a Bayesian framework, which also allows us to compare the relative merits of different power spectral density models. The well-developed fast Fourier transform algorithm together with parallel computation enables an acceptable time complexity for the approach.
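
    The fact the approach rests on is easy to demonstrate: drawing independent complex Gaussian Fourier coefficients with variance set by a chosen PSD and inverse transforming yields a time series with that spectrum (the classic Timmer & König construction). The power-law slope below is an assumed example, not a fitted AGN model.

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    n, dt = 4096, 1.0
    freqs = np.fft.rfftfreq(n, dt)[1:]                 # positive frequencies

    # Power-law PSD typical of AGN variability: P(f) ~ f^-2 (assumed slope).
    psd = freqs ** -2.0

    # Fourier coefficients: independent complex Gaussian random variables.
    coef = (rng.normal(size=freqs.size)
            + 1j * rng.normal(size=freqs.size)) * np.sqrt(psd / 2)

    spectrum = np.concatenate([[0.0], coef])           # zero-mean light curve
    lc = np.fft.irfft(spectrum, n)                     # time-domain realization
    ```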

  9. SH2 and SH3 domains: elements that control interactions of cytoplasmic signaling proteins.

    PubMed

    Koch, C A; Anderson, D; Moran, M F; Ellis, C; Pawson, T

    1991-05-03

    Src homology (SH) regions 2 and 3 are noncatalytic domains that are conserved among a series of cytoplasmic signaling proteins regulated by receptor protein-tyrosine kinases, including phospholipase C-gamma, Ras GTPase (guanosine triphosphatase)-activating protein, and Src-like tyrosine kinases. The SH2 domains of these signaling proteins bind tyrosine phosphorylated polypeptides, implicated in normal signaling and cellular transformation. Tyrosine phosphorylation acts as a switch to induce the binding of SH2 domains, thereby mediating the formation of heteromeric protein complexes at or near the plasma membrane. The formation of these complexes is likely to control the activation of signal transduction pathways by tyrosine kinases. The SH3 domain is a distinct motif that, together with SH2, may modulate interactions with the cytoskeleton and membrane. Some signaling and transforming proteins contain SH2 and SH3 domains unattached to any known catalytic element. These noncatalytic proteins may serve as adaptors to link tyrosine kinases to specific target proteins. These observations suggest that SH2 and SH3 domains participate in the control of intracellular responses to growth factor stimulation.

  10. THE PSTD ALGORITHM: A TIME-DOMAIN METHOD REQUIRING ONLY TWO CELLS PER WAVELENGTH. (R825225)

    EPA Science Inventory

    A pseudospectral time-domain (PSTD) method is developed for solutions of Maxwell's equations. It uses the fast Fourier transform (FFT), instead of finite differences on conventional finite-difference-time-domain (FDTD) methods, to represent spatial derivatives. Because the Fourie...
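
    The reason only two cells per wavelength suffice is that FFT-based spatial derivatives are exact for every wavenumber the grid can resolve, unlike finite differences. A short demonstration of the spectral derivative:

    ```python
    import numpy as np

    n, L = 64, 2 * np.pi
    x = np.arange(n) * L / n
    u = np.sin(7 * x)                                  # 7 cycles across the grid

    # Spectral derivative: multiply by ik in Fourier space, transform back.
    k = np.fft.fftfreq(n, L / n) * 2 * np.pi
    du = np.fft.ifft(1j * k * np.fft.fft(u)).real

    print(np.max(np.abs(du - 7 * np.cos(7 * x))))      # ~1e-13: machine precision
    ```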

  11. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    NASA Astrophysics Data System (ADS)

    Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville

    2017-01-01

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.

  12. Automating the Transformational Development of Software. Volume 1.

    DTIC Science & Technology

    1983-03-01

    The DRACO system [Neighbors 80] uses meta-rules to derive information about which new transformations will be applicable after a particular transformation has ... transformation over another. The new model, as incorporated in a system called Glitter, explicitly represents transformation goals, methods, and selection ... done anew for each new problem (compare this with Neighbors' Draco system [Neighbors 80], which attempts to reuse domain analysis). ... Is the user

  13. Sparsity based terahertz reflective off-axis digital holography

    NASA Astrophysics Data System (ADS)

    Wan, Min; Muniraj, Inbarasan; Malallah, Ra'ed; Zhao, Liang; Ryle, James P.; Rong, Lu; Healy, John J.; Wang, Dayong; Sheridan, John T.

    2017-05-01

    Terahertz radiation lies between the microwave and infrared regions in the electromagnetic spectrum. Emitted frequencies range from 0.1 to 10 THz with corresponding wavelengths ranging from 30 μm to 3 mm. In this paper, a continuous-wave terahertz off-axis digital holographic system is described. A Gaussian fitting method and image normalisation techniques were employed on the recorded hologram to improve the image resolution. A synthesised contrast-enhanced hologram is then digitally constructed. Numerical reconstruction is achieved using the angular spectrum method on the filtered off-axis hologram. A sparsity-based compression technique is introduced before numerical data reconstruction in order to reduce the dataset required for hologram reconstruction. Results show that a small sparse subset of the data is sufficient to reconstruct the hologram with good image quality.
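
    The numerical reconstruction step, the angular spectrum method, propagates a complex field by filtering its plane-wave spectrum. A minimal sketch with an assumed 2.52 THz (118.8 μm) source and pixel pitch; the hologram filtering and sparsity-based compression steps are omitted.

    ```python
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a 2-D complex field over distance z (angular spectrum method)."""
        n, m = field.shape
        fx = np.fft.fftfreq(m, dx)
        fy = np.fft.fftfreq(n, dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))
        H = np.exp(1j * kz * z) * (arg > 0)        # evanescent waves suppressed
        return np.fft.ifft2(np.fft.fft2(field) * H)

    wavelength = 118.8e-6      # 2.52 THz line of a common CW far-infrared source
    dx = 50e-6                 # detector pixel pitch (assumed)
    field = np.ones((256, 256), dtype=complex)     # stand-in hologram-plane field
    rec = angular_spectrum(field, wavelength, dx, z=5e-3)
    ```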

  14. Center for Infrastructure Assurance and Security - Attack and Defense Exercises

    DTIC Science & Technology

    2010-06-01

    conclusion of the research funding under this program. 4.1. Steganography Detection Tools. Steganography is the art of hiding information in a cover image ... Some of the more common methods are altering the LSB (least significant bit) of the pixels of the image, altering the palette of an RGB image, or ... altering parts of the image in the transform domain. Algorithms that embed information in the transform domain are usually more robust to common
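
    The LSB method the excerpt names is simple enough to show in full: overwrite the least significant bit of successive pixels with the message bits. A toy grayscale example:

    ```python
    import numpy as np

    def embed_lsb(cover, bits):
        """Hide a bit string in the least significant bits of a grayscale image."""
        flat = cover.flatten()
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # overwrite LSBs
        return flat.reshape(cover.shape)

    def extract_lsb(stego, n):
        return stego.flatten()[:n] & 1

    rng = np.random.default_rng(11)
    cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(b"hidden", dtype=np.uint8))

    stego = embed_lsb(cover, bits)
    print(np.packbits(extract_lsb(stego, bits.size)).tobytes())   # b'hidden'
    ```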

  15. SLEEC: Semantics-Rich Libraries for Effective Exascale Computation. Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milind, Kulkarni

    SLEEC (Semantics-rich Libraries for Effective Exascale Computation) was a project funded by the Department of Energy X-Stack Program, award number DE-SC0008629. The initial project period was September 2012–August 2015. The project was renewed for an additional year, expiring August 2016. Finally, the project received a no-cost extension, leading to a final expiry date of August 2017. Modern applications, especially those intended to run at exascale, are not written from scratch. Instead, they are built by stitching together various carefully-written, hand-tuned libraries. Correctly composing these libraries is difficult, and traditional compilers are unable to effectively analyze and transform across abstraction layers. Domain-specific compilers integrate semantic knowledge into compilers, allowing them to transform applications that use particular domain-specific languages or domain libraries. But they do not help when new domains are developed, or when applications span multiple domains. SLEEC aims to fix these problems. To do so, we are building generic compiler and runtime infrastructures that are semantics-aware but not domain-specific. By performing optimizations related to the semantics of a domain library, the same infrastructure can be made generic and apply across multiple domains.

  16. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be 'modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles, respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.
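
    The first part of the thesis can be illustrated with the best-known member of that family: for scalar denoising, the minimax-concave (MC) penalty keeps 0.5*(y - x)^2 + penalty convex whenever its parameter γ exceeds 1, and its solution is the firm threshold, which, unlike soft thresholding, leaves large values unbiased. (The thesis' exact penalty family may differ; this is a representative sketch.)

    ```python
    import numpy as np

    def firm_threshold(y, lam, gamma=2.0):
        """Proximal operator of the minimax-concave penalty. For gamma > 1 the
        scalar objective 0.5*(y-x)^2 + MC(x; lam, gamma) stays strictly convex."""
        return np.where(np.abs(y) <= lam, 0.0,
               np.where(np.abs(y) <= gamma * lam,
                        np.sign(y) * gamma * (np.abs(y) - lam) / (gamma - 1),
                        y))

    y = np.array([-3.0, -0.5, 0.2, 1.2, 4.0])
    print(firm_threshold(y, lam=1.0))   # [-3. 0. 0. 0.4 4.]: large entries unshrunk
    ```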

  17. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to the R‧G‧B‧ color space by rotating the color cube with a random angle matrix. Then the RPMPFRFT is employed to change the pixel values of the color image: the three components of the scrambled RGB color space are converted by the RPMPFRFT with three different transform pairs, respectively. Compared with transforms that produce complex-valued output, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposition of sections of the image in the reality-preserving multiple-parameter fractional Fourier domains, with the alignment of sections determined by two coupled chaotic logistic maps. The parameters in the Color Blend, Chaos Permutation and the RPMPFRFT transform are regarded as the key in the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming the gray images into the three RGB color components of a specially constructed color image. Numerical simulations are performed to demonstrate that the proposed algorithm is feasible, secure, sensitive to keys and robust to noise attack and data loss.

  18. Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications

    PubMed Central

    Bíró, Oszkár; Koczka, Gergely; Preis, Kurt

    2014-01-01

    An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer. PMID:24829517
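
    The fixed-point decoupling idea carries over to a scalar toy problem x'(t) + g(x) = a cos(ωt): choosing a fixed linear slope ν (playing the role of the fixed-point permeability) makes every Fourier harmonic solvable independently inside each iteration. A NumPy sketch with an invented monotone nonlinearity, not the paper's finite element model:

    ```python
    import numpy as np

    g = lambda x: x + 0.3 * x**3        # monotone nonlinearity (toy B-H-like curve)
    nu = 1.5                            # fixed slope between min and max of g'
    w, a, n = 1.0, 1.0, 256
    t = np.linspace(0, 2 * np.pi / w, n, endpoint=False)
    drive = a * np.cos(w * t)

    x = np.zeros(n)
    omega = np.fft.fftfreq(n, t[1] - t[0]) * 2 * np.pi   # harmonic frequencies
    for _ in range(100):
        # Rewrite x' + g(x) = drive as x' + nu*x = drive + nu*x - g(x);
        # in the frequency domain every harmonic then decouples.
        rhs = drive + nu * x - g(x)
        X = np.fft.fft(rhs) / (1j * omega + nu)
        x_new = np.fft.ifft(X).real
        if np.max(np.abs(x_new - x)) < 1e-12:
            break
        x = x_new                        # converged x is the periodic steady state
    ```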

  19. Finite element solution of nonlinear eddy current problems with periodic excitation and its industrial applications.

    PubMed

    Bíró, Oszkár; Koczka, Gergely; Preis, Kurt

    2014-05-01

    An efficient finite element method to take account of the nonlinearity of the magnetic materials when analyzing three-dimensional eddy current problems is presented in this paper. The problem is formulated in terms of vector and scalar potentials approximated by edge and node based finite element basis functions. The application of Galerkin techniques leads to a large, nonlinear system of ordinary differential equations in the time domain. The excitations are assumed to be time-periodic and the steady-state periodic solution is of interest only. This is represented either in the frequency domain as a finite Fourier series or in the time domain as a set of discrete time values within one period for each finite element degree of freedom. The former approach is the (continuous) harmonic balance method and, in the latter one, discrete Fourier transformation will be shown to lead to a discrete harmonic balance method. Due to the nonlinearity, all harmonics, both continuous and discrete, are coupled to each other. The harmonics would be decoupled if the problem were linear, therefore, a special nonlinear iteration technique, the fixed-point method is used to linearize the equations by selecting a time-independent permeability distribution, the so-called fixed-point permeability in each nonlinear iteration step. This leads to uncoupled harmonics within these steps. As industrial applications, analyses of large power transformers are presented. The first example is the computation of the electromagnetic field of a single-phase transformer in the time domain with the results compared to those obtained by traditional time-stepping techniques. In the second application, an advanced model of the same transformer is analyzed in the frequency domain by the harmonic balance method with the effect of the presence of higher harmonics on the losses investigated. Finally a third example tackles the case of direct current (DC) bias in the coils of a single-phase transformer.

  20. Application of Wavelet Transform for PDZ Domain Classification

    PubMed Central

    Daqrouq, Khaled; Alhmouz, Rami; Balamesh, Ahmed; Memic, Adnan

    2015-01-01

    PDZ domains have been identified as part of an array of signaling proteins that are often unrelated, except for the well-conserved structural PDZ domain they contain. These domains have been linked to many disease processes, including common avian influenza, as well as very rare conditions such as Fraser and Usher syndromes. Historically, based on the interactions and the nature of the bonds they form, PDZ domains have most often been classified into one of three classes (class I, class II and others - class III) that is directly dependent on their binding partner. In this study, we report on three unique feature extraction approaches based on bigram and trigram occurrence and existence rearrangements within the domain's primary amino acid sequences for assisting PDZ domain classification. Feature extraction methods based on the wavelet packet transform (WPT) and Shannon entropy, denoted wavelet entropy (WE), were proposed. Using 115 unique human and mouse PDZ domains, the existence rearrangement approach yielded a high recognition rate (78.34%), which outperformed our occurrence-rearrangement-based method. The recognition rate was 81.41% with a validation technique. The method reported for PDZ domain classification from primary sequences proved to be an encouraging approach for obtaining consistent classification results. We anticipate that by increasing the database size, we can further improve feature extraction and correct classification. PMID:25860375
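
    The WPT/WE feature extraction can be sketched with PyWavelets, with one caveat: the paper builds its input vectors from bigram/trigram rearrangements of the amino-acid sequence, whereas this illustration substitutes a simple hydrophobicity encoding. One Shannon entropy per terminal packet node gives the feature vector.

    ```python
    import numpy as np
    import pywt

    # Kyte-Doolittle hydrophobicity encoding (the numeric encoding is an assumption).
    kd = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
          'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
          'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
          'Y': -1.3, 'V': 4.2}
    seq = "MKVLGRNDAEWQHITSPYFC" * 4                 # toy stand-in for a PDZ domain
    x = np.array([kd[a] for a in seq])

    def shannon_entropy(c):
        p = c**2 / np.sum(c**2)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    # Wavelet packet decomposition; one entropy per terminal node = feature vector.
    wp = pywt.WaveletPacket(data=x, wavelet='db2', maxlevel=3)
    features = np.array([shannon_entropy(node.data)
                         for node in wp.get_level(3, 'natural')])
    print(features.shape)                           # (8,) wavelet-entropy features
    ```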

  1. s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography

    PubMed Central

    Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai

    2016-01-01

    EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than the number of potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the relevant total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortical surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computation compared with ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529

  2. Color image cryptosystem using Fresnel diffraction and phase modulation in an expanded fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Chen, Hang; Liu, Zhengjun; Chen, Qi; Blondel, Walter; Varis, Pierre

    2018-05-01

    In this letter, what we believe to be a new technique for optical color image encryption, using Fresnel diffraction and phase modulation in an extended fractional Fourier transform domain, is proposed. Different from RGB component separation based methods, the color image is converted into one component by improved Chirikov mapping. The encryption system is built on Fresnel diffraction and phase modulation. A pair of lenses is placed in the fractional Fourier transform system to modulate the beam propagation. The structure parameters of the optical system and the parameters of the Chirikov mapping serve as extra keys. Numerical simulations are given to test the validity of the proposed cryptosystem.

  3. Medical Image Authentication Using DPT Watermarking: A Preliminary Attempt

    NASA Astrophysics Data System (ADS)

    Wong, M. L. Dennis; Goh, Antionette W.-T.; Chua, Hong Siang

    Secure authentication of digital medical image content provides great value to the e-Health community and medical insurance industries. Fragile watermarking has been proposed as a mechanism to authenticate digital medical images securely. Transform-domain watermarking is typically slower than spatial-domain watermarking owing to the overhead of computing the transform coefficients. In this paper, we propose a new Discrete Pascal Transform based watermarking technique. Preliminary experimental results show authentication capability. Possible improvements on the proposed scheme are also presented before the conclusions.
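
    For context, the discrete Pascal transform is commonly defined through a signed lower-triangular Pascal (binomial) matrix, and under that definition the transform matrix is its own inverse, which is convenient for watermark embedding and extraction. A minimal sketch, assuming the definition P[m, n] = (-1)^n C(m, n) for n <= m; this is an illustration of the transform only, not the proposed watermarking scheme.

    ```python
    import numpy as np
    from math import comb

    def pascal_transform_matrix(N):
        """Signed lower-triangular Pascal matrix, commonly used to define the DPT.
        With P[m, n] = (-1)^n * C(m, n), P is an involution: P @ P = I."""
        P = np.zeros((N, N))
        for m in range(N):
            for n in range(m + 1):
                P[m, n] = (-1) ** n * comb(m, n)
        return P

    N = 8
    P = pascal_transform_matrix(N)
    assert np.allclose(P @ P, np.eye(N))   # self-inverse: the same matrix decodes

    x = np.arange(N, dtype=float)          # toy "signal" (e.g., a block of pixels)
    X = P @ x                              # forward DPT
    x_rec = P @ X                          # inverse DPT
    assert np.allclose(x_rec, x)
    ```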

  4. Coupled domain wall motion, lattice strain and phase transformation in morphotropic phase boundary composition of PbTiO3-BiScO3 piezoelectric ceramic

    DOE PAGES

    Khatua, Dipak Kumar; Lalitha, K. V.; Fancher, Chris M.; ...

    2016-10-18

    High-energy synchrotron X-ray diffraction, in situ under an electric field, was carried out on the morphotropic phase boundary composition of the piezoelectric alloy PbTiO3-BiScO3. We demonstrate a strong correlation between ferroelectric-ferroelastic domain reorientation, lattice strain and phase transformation. Finally, we show that the three phenomena occur, and their correlation persists, in the weak-field regime.

  5. Efficient Computation of Closed-loop Frequency Response for Large Order Flexible Systems

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Giesy, Daniel P.

    1997-01-01

    An efficient and robust computational scheme is given for the calculation of the frequency response function of a large-order flexible system implemented with a linear, time-invariant control system. Advantage is taken of the highly structured sparsity of the system matrix of the plant, based on a model of the structure using normal mode coordinates. The computational time per frequency point of the new scheme is a linear function of system size, a significant improvement over traditional full-matrix techniques, whose computational times per frequency point range from quadratic to cubic functions of system size. This permits the practical frequency-domain analysis of systems of much larger order than traditional full-matrix techniques allow. Formulations are given for both open- and closed-loop systems. Numerical examples are presented showing the advantages of the present formulation over traditional approaches, both in speed and in accuracy. Using a model with 703 structural modes, a speed-up of almost two orders of magnitude was observed, while accuracy improved by up to 5 decimal places.
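
    The computational pattern being exploited, one sparse linear solve per frequency point for H(jω) = C(jωI − A)⁻¹B + D, can be sketched with SciPy's sparse LU factorization. This is a generic illustration on a toy modal-coordinate plant, not the authors' specialized algorithm; all sizes and parameter values are illustrative.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def freq_response(A, B, C, D, omegas):
        """H(jw) = C (jwI - A)^-1 B + D, one sparse LU solve per frequency point."""
        n = A.shape[0]
        I = sp.identity(n, format="csc")
        H = []
        for w in omegas:
            lu = spla.splu((1j * w * I - A).tocsc())   # exploits sparsity of A
            H.append(C @ lu.solve(B) + D)
        return np.array(H)

    # Toy plant: lightly damped modal (block-diagonal, hence sparse) state matrix
    n_modes = 200
    wn = np.linspace(1.0, 100.0, n_modes)              # natural frequencies
    zeta = 0.02                                        # modal damping ratio
    blocks = [np.array([[0.0, 1.0], [-w**2, -2*zeta*w]]) for w in wn]
    A = sp.block_diag(blocks, format="csc")
    B = np.zeros((2*n_modes, 1)); B[1::2, 0] = 1.0     # force enters velocity states
    C = np.zeros((1, 2*n_modes)); C[0, 0::2] = 1.0     # output reads positions
    D = np.zeros((1, 1))
    H = freq_response(A, B, C, D, omegas=np.linspace(0.5, 120.0, 50))
    print(H.shape)  # (50, 1, 1): one transfer-function value per frequency point
    ```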

  6. The Emergence of Organizing Structure in Conceptual Representation.

    PubMed

    Lake, Brenden M; Lawrence, Neil D; Tenenbaum, Joshua B

    2018-06-01

    Both scientists and children make important structural discoveries, yet their computational underpinnings are not well understood. Structure discovery has previously been formalized as probabilistic inference about the right structural form-where form could be a tree, ring, chain, grid, etc. (Kemp & Tenenbaum, 2008). Although this approach can learn intuitive organizations, including a tree for animals and a ring for the color circle, it assumes a strong inductive bias that considers only these particular forms, and each form is explicitly provided as initial knowledge. Here we introduce a new computational model of how organizing structure can be discovered, utilizing a broad hypothesis space with a preference for sparse connectivity. Given that the inductive bias is more general, the model's initial knowledge shows little qualitative resemblance to some of the discoveries it supports. As a consequence, the model can also learn complex structures for domains that lack intuitive description, as well as predict human property induction judgments without explicit structural forms. By allowing form to emerge from sparsity, our approach clarifies how both the richness and flexibility of human conceptual organization can coexist. Copyright © 2018 Cognitive Science Society, Inc.

  7. On the Development of an Efficient Parallel Hybrid Solver with Application to Acoustically Treated Aero-Engine Nacelles

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj

    2006-01-01

    A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.

  8. Trends in extreme learning machines: a review.

    PubMed

    Huang, Gao; Huang, Guang-Bin; Song, Shiji; You, Keyou

    2015-01-01

    Extreme learning machine (ELM) has gained increasing interest from various research fields recently. In this review, we aim to report the current state of theoretical research and practical advances on this subject. We first give an overview of ELM from the theoretical perspective, including the interpolation theory, universal approximation capability, and generalization ability. Then we focus on the various improvements made to ELM which further improve its stability, sparsity and accuracy under general or specific conditions. Apart from classification and regression, ELM has recently been extended to clustering, feature selection, representation learning and many other learning tasks. These newly emerging algorithms greatly expand the applications of ELM. From the implementation aspect, hardware implementation and parallel computation techniques have substantially sped up the training of ELM, making it feasible for big data processing and real-time reasoning. Due to its remarkable efficiency, simplicity, and impressive generalization performance, ELM has been applied in a variety of domains, such as biomedical engineering, computer vision, system identification, and control and robotics. In this review, we try to provide a comprehensive view of these advances in ELM together with its future perspectives.
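
    The basic ELM recipe surveyed here is short enough to state in full: draw the hidden layer at random, leave it fixed, and solve a linear least-squares problem for the output weights. A minimal regression sketch with illustrative sizes and names:

    ```python
    import numpy as np

    class ELM:
        """Minimal extreme learning machine for regression (illustrative)."""
        def __init__(self, n_hidden=100, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            n_features = X.shape[1]
            # Hidden layer is random and never trained
            self.W = self.rng.standard_normal((n_features, self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)
            # Only the output weights are learned, via linear least squares
            self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return self

        def predict(self, X):
            return np.tanh(X @ self.W + self.b) @ self.beta

    # Toy demo: fit a noisy sine
    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(400, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(400)
    model = ELM(n_hidden=50).fit(X, y)
    print("train MSE:", np.mean((model.predict(X) - y) ** 2))
    ```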

  9. Efficient evaluation of nonlocal operators in density functional theory

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Chih; Chen, Jing-Zhe; Michaud-Rioux, Vincent; Shi, Qing; Guo, Hong

    2018-02-01

    We present a method which combines plane waves (PW) and numerical atomic orbitals (NAO) to efficiently evaluate nonlocal operators in density functional theory with periodic boundary conditions. Nonlocal operators are first expanded using PW and then transformed to NAO so that the problem of distance truncation is avoided. The general formalism is implemented using the hybrid functional HSE06, where the nonlocal operator is the exact exchange. Comparison of the electronic structures of a wide range of semiconductors to a pure PW scheme validates the accuracy of our method. Due to the locality of NAO, and thus the sparsity of the matrix representations of the operators, the computational complexity of the method is asymptotically quadratic in the number of electrons. Finally, we apply the technique to investigate the electronic structure of the interface between single-layer black phosphorus and the high-κ dielectric material c-HfO2. We predict that the band offsets between the two materials are 1.29 eV and 2.18 eV for the valence and conduction band edges, respectively, and such offsets are suitable for 2D field-effect transistor applications.

  10. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, the wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on a paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability output of the classifiers into final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method that is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
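
    The feature-extraction step, a wavelet packet transform followed by an entropy-style summary of each sub-band, can be sketched with PyWavelets. In the sketch below, normalized sub-band energies and their Shannon entropy stand in for the paper's fuzzy-entropy features, which is an assumption made for brevity:

    ```python
    import numpy as np
    import pywt

    def wpt_energy_features(signal, wavelet="db4", level=3):
        """Wavelet packet decomposition -> normalized sub-band energies.
        A stand-in for the paper's fuzzy-entropy features (illustrative)."""
        wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order="natural")
        energies = np.array([np.sum(np.asarray(n.data) ** 2) for n in nodes])
        p = energies / energies.sum()
        # Return the feature vector and its Shannon entropy as a scalar summary
        return p, -np.sum(p * np.log(p + 1e-12))

    # Toy vibration-like signal: two tones plus noise
    t = np.linspace(0, 1, 2048, endpoint=False)
    rng = np.random.default_rng(0)
    x = (np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*300*t)
         + 0.1*rng.standard_normal(t.size))
    features, H = wpt_energy_features(x)
    print(len(features), "sub-band features, entropy =", round(H, 3))
    ```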

  11. Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices

    NASA Astrophysics Data System (ADS)

    Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando

    2017-10-01

    We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse approximate inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix ("mass" matrix) using successive sparsity-pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a vector-wave (curl-curl) equation of second order but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and axisymmetric vacuum electronic devices.

  12. Hierarchical Bayesian sparse image reconstruction with application to MRFM.

    PubMed

    Dobigeon, Nicolas; Hero, Alfred O; Tourneret, Jean-Yves

    2009-09-01

    This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument.

  13. DOA Estimation for Underwater Wideband Weak Targets Based on Coherent Signal Subspace and Compressed Sensing.

    PubMed

    Li, Jun; Lin, Qiu-Hua; Kang, Chun-Yu; Wang, Kai; Yang, Xiu-Ting

    2018-03-18

    Direction of arrival (DOA) estimation is the basis for underwater target localization and tracking using towed line array sonar devices. A method of DOA estimation for underwater wideband weak targets based on coherent signal subspace (CSS) processing and compressed sensing (CS) theory is proposed. Under the CSS processing framework, wideband frequency focusing is accompanied by a two-sided correlation transformation, allowing the DOA of underwater wideband targets to be estimated based on the spatial sparsity of the targets and the compressed sensing reconstruction algorithm. Through analysis and processing of simulation data and marine trial data, it is shown that this method can accomplish the DOA estimation of underwater wideband weak targets. Results also show that this method can considerably improve the spatial spectrum of weak target signals, enhancing the ability to detect them. It can solve the problems of low directional resolution and unreliable weak-target detection in traditional beamforming technology. Compared with the conventional minimum variance distortionless response (MVDR) beamformer, this method has many advantages, such as higher directional resolution, wider detection range, fewer required snapshots and more accurate detection of weak targets.
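
    The sparsity-based recovery step can be illustrated on a narrowband toy problem: discretize the bearing axis into a steering-vector dictionary and recover the few active directions with a greedy sparse solver. The sketch below uses orthogonal matching pursuit in place of the paper's CSS wideband focusing chain, and all array and signal parameters are illustrative.

    ```python
    import numpy as np

    def steering_matrix(n_sensors, angles_deg, d=0.5):
        """ULA steering vectors; element spacing d in wavelengths."""
        k = np.arange(n_sensors)[:, None]
        theta = np.deg2rad(np.asarray(angles_deg))[None, :]
        return np.exp(2j * np.pi * d * k * np.sin(theta))

    def omp(A, y, n_atoms):
        """Orthogonal matching pursuit for complex dictionaries."""
        residual, support = y.copy(), []
        for _ in range(n_atoms):
            support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        return support, coef

    n_sensors = 16
    grid = np.arange(-90, 90.5, 0.5)                 # bearing grid (degrees)
    A = steering_matrix(n_sensors, grid)
    truth = [-20.0, 15.0]                            # two targets on the grid
    rng = np.random.default_rng(2)
    noise = 0.05 * (rng.standard_normal(n_sensors)
                    + 1j * rng.standard_normal(n_sensors))
    y = steering_matrix(n_sensors, truth) @ np.array([1.0, 0.7]) + noise
    support, coef = omp(A, y, n_atoms=2)
    print("estimated DOAs:", sorted(grid[support]))
    ```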

  14. Presence of an SH2 domain in the actin-binding protein tensin.

    PubMed

    Davis, S; Lu, M L; Lo, S H; Lin, S; Butler, J A; Druker, B J; Roberts, T M; An, Q; Chen, L B

    1991-05-03

    The molecular cloning of the complementary DNA coding for a 90-kilodalton fragment of tensin, an actin-binding component of focal contacts and other submembraneous cytoskeletal structures, is reported. The derived amino acid sequence revealed the presence of a Src homology 2 (SH2) domain. This domain is shared by a number of signal transduction proteins including nonreceptor tyrosine kinases such as Abl, Fps, Src, and Src family members, the transforming protein Crk, phospholipase C-gamma 1, PI-3 (phosphatidylinositol) kinase, and guanosine triphosphatase-activating protein (GAP). Like the SH2 domain found in Src, Crk, and Abl, the SH2 domain of tensin bound specifically to a number of phosphotyrosine-containing proteins from v-src-transformed cells. Tensin was also found to be phosphorylated on tyrosine residues. These findings suggest that by possessing both actin-binding and phosphotyrosine-binding activities and being itself a target for tyrosine kinases, tensin may link signal transduction pathways with the cytoskeleton.

  15. Fair and Square Computation of Inverse "Z"-Transforms of Rational Functions

    ERIC Educational Resources Information Center

    Moreira, M. V.; Basilio, J. C.

    2012-01-01

    All methods presented in textbooks for computing inverse "Z"-transforms of rational functions have some limitation: 1) the direct division method does not, in general, provide enough information to derive an analytical expression for the time-domain sequence "x"("k") whose "Z"-transform is "X"("z"); 2) computation using the inversion integral…
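
    The direct division method mentioned in limitation 1) amounts to long division of the numerator by the denominator in powers of z^-1, which produces the samples x(0), x(1), x(2), ... one at a time; numerically this is just the impulse response of the corresponding difference equation. A sketch with an illustrative X(z) whose closed form is known, so the samples can be checked:

    ```python
    import numpy as np
    from scipy.signal import lfilter

    # X(z) = 1 / (1 - 1.5 z^-1 + 0.5 z^-2)   (illustrative rational function)
    b = [1.0]                      # numerator coefficients in z^-1
    a = [1.0, -1.5, 0.5]           # denominator coefficients in z^-1

    # Direct division == impulse response of the difference equation: feed a
    # unit impulse through the filter and read off x(0), x(1), x(2), ...
    n = 10
    impulse = np.zeros(n); impulse[0] = 1.0
    x = lfilter(b, a, impulse)
    print(x)

    # Partial fractions give the closed form x(k) = 2 - (1/2)^k for this X(z),
    # so the divided-out samples can be verified directly:
    k = np.arange(n)
    assert np.allclose(x, 2.0 - 0.5**k)
    ```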

  16. Image reconstruction by domain-transform manifold learning.

    PubMed

    Zhu, Bo; Liu, Jeremiah Z; Cauley, Stephen F; Rosen, Bruce R; Rosen, Matthew S

    2018-03-21

    Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction-automated transform by manifold approximation (AUTOMAP)-which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.

  17. Image reconstruction by domain-transform manifold learning

    NASA Astrophysics Data System (ADS)

    Zhu, Bo; Liu, Jeremiah Z.; Cauley, Stephen F.; Rosen, Bruce R.; Rosen, Matthew S.

    2018-03-01

    Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction—automated transform by manifold approximation (AUTOMAP)—which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.

  18. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    PubMed

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations, but most of those models need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by simultaneously optimizing two objectives, the reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.

  19. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    NASA Astrophysics Data System (ADS)

    Song, Chenchen; Martínez, Todd J.

    2016-05-01

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  20. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity.

    PubMed

    Song, Chenchen; Martínez, Todd J

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  1. A Kernel-Based Low-Rank (KLR) Model for Low-Dimensional Manifold Recovery in Highly Accelerated Dynamic MRI.

    PubMed

    Nakarmi, Ukash; Wang, Yanhua; Lyu, Jingyuan; Liang, Dong; Ying, Leslie

    2017-11-01

    While many low rank and sparsity-based approaches have been developed for accelerated dynamic magnetic resonance imaging (dMRI), they all use low rankness or sparsity in input space, overlooking the intrinsic nonlinear correlation in most dMRI data. In this paper, we propose a kernel-based framework to allow nonlinear manifold models in reconstruction from sub-Nyquist data. Within this framework, many existing algorithms can be extended to kernel framework with nonlinear models. In particular, we have developed a novel algorithm with a kernel-based low-rank model generalizing the conventional low rank formulation. The algorithm consists of manifold learning using kernel, low rank enforcement in feature space, and preimaging with data consistency. Extensive simulation and experiment results show that the proposed method surpasses the conventional low-rank-modeled approaches for dMRI.

  2. High resolution computational on-chip imaging of biological samples using sparsity constraint (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rivenson, Yair; Wu, Chris; Wang, Hongda; Zhang, Yibo; Ozcan, Aydogan

    2017-03-01

    Microscopic imaging of biological samples such as pathology slides is one of the standard diagnostic methods for screening various diseases, including cancer. These biological samples are usually imaged using traditional optical microscopy tools; however, the high cost, bulkiness and limited imaging throughput of traditional microscopes partially restrict their deployment in resource-limited settings. In order to mitigate this, we previously demonstrated a cost-effective and compact lens-less on-chip microscopy platform with a wide field-of-view of >20-30 mm^2. The lens-less microscopy platform has shown its effectiveness for imaging of highly connected biological samples, such as pathology slides of various tissue samples and smears, among others. This computational holographic microscope requires a set of super-resolved holograms acquired at multiple sample-to-sensor distances, which are used as input to an iterative phase recovery algorithm and holographic reconstruction process, yielding high-resolution images of the samples in phase and amplitude channels. Here we demonstrate that in order to reconstruct clinically relevant images with high resolution and image contrast, we require less than 50% of the previously reported nominal number of holograms acquired at different sample-to-sensor distances. This is achieved by incorporating a loose sparsity constraint as part of the iterative holographic object reconstruction. We demonstrate the success of this sparsity-based computational lens-less microscopy platform by imaging pathology slides of breast cancer tissue and Papanicolaou (Pap) smears.

  3. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    NASA Astrophysics Data System (ADS)

    Wen, Fang-Qing; Zhang, Gong; Ben, De

    2015-11-01

    This paper addresses the direction of arrival (DOA) estimation problem for co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to limited data support and low signal-to-noise ratio (SNR) scenes. Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), China.

  4. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    NASA Astrophysics Data System (ADS)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous-wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite-regularization-based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that the combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  5. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    NASA Astrophysics Data System (ADS)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed, in which the long filters in the ANC system are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is substantially reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both of the proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
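
    The baseline that the partitioned algorithms are compared against, the time-domain FXLMS recursion, fits in a few lines; the frequency-domain variants reorganize exactly this computation into blockwise FFTs. A minimal single-channel sketch in which, for clarity, the secondary path is assumed to be known exactly, and all signals and path coefficients are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 20000
    x = rng.standard_normal(N)                 # reference noise picked up upstream
    P = rng.standard_normal(64) * 0.1          # primary path (unknown to the controller)
    S = np.array([0.0, 0.5, 0.3, 0.1])         # secondary path; assumed perfectly estimated
    d = np.convolve(x, P)[:N]                  # disturbance reaching the error microphone

    L, mu = 128, 0.002
    w = np.zeros(L)                            # adaptive ANC filter
    xbuf = np.zeros(L)                         # reference history for the control filter
    fxbuf = np.zeros(L)                        # filtered-x history for the LMS update
    xs = np.zeros(len(S))                      # reference history for x'(n) = s(n)*x(n)
    ybuf = np.zeros(len(S))                    # anti-noise history for the secondary path
    e = np.zeros(N)

    for n in range(N):
        xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
        y = w @ xbuf                           # anti-noise emitted by the loudspeaker
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e[n] = d[n] - S @ ybuf                 # residual at the error microphone
        xs = np.roll(xs, 1); xs[0] = x[n]
        fxbuf = np.roll(fxbuf, 1); fxbuf[0] = S @ xs
        w += mu * e[n] * fxbuf                 # FXLMS weight update

    print("MSE, first 2000 samples:", np.mean(e[:2000]**2))
    print("MSE, last 2000 samples: ", np.mean(e[-2000:]**2))
    ```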

  6. 3D transient electromagnetic simulation using a modified correspondence principle for wave and diffusion fields

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ji, Y.; Egbert, G. D.

    2015-12-01

    The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems; it can reduce the computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite-difference time-stepping scheme, with convolutional PML (CPML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. Resistivity of the air layers is kept as low as possible, to compromise between efficiency (longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 is sufficient. (3) A modified Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain in the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to get the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.

  7. Transformation as a Design Process and Runtime Architecture for High Integrity Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bespalko, S.J.; Winter, V.L.

    1999-04-05

    We have discussed two aspects of creating high integrity software that greatly benefit from the availability of transformation technology, which in this case is manifest by the requirement for a sophisticated backtracking parser. First, because of the potential for correctly manipulating programs via small changes, an automated non-procedural transformation system can be a valuable tool for constructing high assurance software. Second, modeling the processing of translating data into information as a, perhaps, context-dependent grammar leads to an efficient, compact implementation. From a practical perspective, the transformation process should begin in the domain language in which a problem is initially expressed. Thus in order for a transformation system to be practical it must be flexible with respect to domain-specific languages. We have argued that transformation applied to specification results in a highly reliable system. We also attempted to briefly demonstrate that transformation technology applied to the runtime environment will result in a safe and secure system. We thus believe that the sophisticated multi-lookahead backtracking parsing technology is central to the task of being in a position to demonstrate the existence of HIS.

  8. How can healthcare organizations implement patient-centered care? Examining a large-scale cultural transformation.

    PubMed

    Bokhour, Barbara G; Fix, Gemmae M; Mueller, Nora M; Barker, Anna M; Lavela, Sherri L; Hill, Jennifer N; Solomon, Jeffrey L; Lukas, Carol VanDeusen

    2018-03-07

    Healthcare organizations increasingly are focused on providing care which is patient-centered rather than disease-focused. Yet little is known about how best to transform the culture of care in these organizations. We sought to understand key organizational factors for implementing patient-centered care cultural transformation through an examination of efforts in the US Department of Veterans Affairs. We conducted multi-day site visits at four US Department of Veterans Affairs medical centers designated as leaders in providing patient-centered care. We conducted qualitative semi-structured interviews with 108 employees (22 senior leaders, 42 middle managers, 37 front-line providers and 7 staff). Transcripts of audio recordings were analyzed using a priori codes based on the Consolidated Framework for Implementation Research. We used constant comparison analysis to synthesize codes into meaningful domains. Sites described actions taken to foster patient-centered care in seven domains: 1) leadership; 2) patient and family engagement; 3) staff engagement; 4) focus on innovations; 5) alignment of staff roles and priorities; 6) organizational structures and processes; 7) environment of care. Within each domain, we identified multi-faceted strategies for implementing change. These included efforts by all levels of organizational leaders who modeled patient-centered care in their interactions and fostered willingness to try novel approaches to care amongst staff. Alignment and integration of patient centered care within the organization, particularly surrounding roles, priorities and bureaucratic rules, remained major challenges. Transforming healthcare systems to focus on patient-centered care and better serve the "whole" patient is a complex endeavor. Efforts to transform healthcare culture require robust, multi-pronged efforts at all levels of the organization; leadership is only the beginning. Challenges remain for incorporating patient-centered approaches in the context of competing priorities and regulations. Through actions within each of the domains, organizations may begin to truly transform to patient-driven care.

  9. [Continuum based fast Fourier transform processing of infrared spectrum].

    PubMed

    Liu, Qing-Jie; Lin, Qi-Zhong; Wang, Qin-Jun; Li, Hui; Li, Shuai

    2009-12-01

    To recognize ground objects with infrared spectra, high-frequency noise removal is one of the most important phases in spectral feature analysis and extraction. A new method for infrared spectrum preprocessing is given combining spectrum continuum processing and the fast Fourier transform (CFFT). The continuum is first removed from the noise-polluted infrared spectrum to standardize the hyperspectra. Then the spectrum is transformed into the frequency domain (FD) with the fast Fourier transform (FFT), separating noise information from target information. After eliminating the noise from the useful information with a low-pass filter, the filtered FD spectrum is transformed back into the time domain (TD) with the inverse fast Fourier transform. Finally the continuum is restored to the spectrum, and the filtered infrared spectrum is obtained. An experiment was performed on a chlorite spectrum from the USGS library, polluted with two kinds of simulated white noise, to validate the filtering ability of CFFT by contrast with a five-point cubic smoothing function (CFFP) in the time domain and the traditional FFT in the frequency domain. A single pass of CFFP has limited filtering effect, so it must be run for many cycles, consuming more time, to achieve a better filtering result. As for the conventional FFT, the Gibbs phenomenon strongly affects the preprocessing result at the edge bands because of the special character of rock and mineral spectra, while it works well at the middle bands. The mean squared error of CFFT is 0.000012336 with a cut-off frequency of 150, while that of FFT and CFFP is 0.000061074 with a cut-off frequency of 150 and 0.000022963 with 150 working cycles, respectively. Moreover, the filtering result of CFFT can be improved by adjusting the filter cut-off frequency, with little effect on the working time. The CFFT method overcomes the Gibbs problem of the FFT in spectrum filtering, and can be more convenient, dependable, and effective than traditional TD filtering methods.
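
    The CFFT pipeline, remove the continuum, low-pass filter in the Fourier domain, transform back and restore the continuum, can be sketched as follows. For brevity a straight line between the band endpoints stands in for a proper convex-hull continuum, so treat the continuum step as a simplified assumption:

    ```python
    import numpy as np

    def cfft_denoise(spectrum, cutoff):
        """Continuum removal -> FFT low-pass -> inverse FFT -> continuum restore.
        The straight-line continuum is a simplification of the usual convex hull."""
        n = len(spectrum)
        continuum = np.linspace(spectrum[0], spectrum[-1], n)
        ratio = spectrum / continuum              # continuum-removed spectrum
        F = np.fft.rfft(ratio)
        F[cutoff:] = 0.0                          # zero out high-frequency noise
        return np.fft.irfft(F, n) * continuum     # back to TD, continuum restored

    # Toy reflectance spectrum: slope + absorption feature + white noise
    rng = np.random.default_rng(0)
    wl = np.linspace(0, 1, 1024)
    clean = 0.8 - 0.2*wl - 0.15*np.exp(-((wl - 0.5)/0.05)**2)
    noisy = clean + 0.01*rng.standard_normal(wl.size)
    filtered = cfft_denoise(noisy, cutoff=150)
    print("MSE noisy:   ", np.mean((noisy - clean)**2))
    print("MSE filtered:", np.mean((filtered - clean)**2))
    ```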

  10. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

    Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computation time prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. To reduce computation time in the past, typically sample size or scan resolution would have to be reduced. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold which enables the use of sparse methods through artificial sparsity. To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
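
    The spatial-domain inversion at the heart of this workflow is a non-negative least-squares fit of source moments to measured field values. The TNT algorithm itself is not sketched here; instead, scipy.optimize.nnls stands in on a small, simplified 1D dipole geometry to show the structure of the problem:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def dipole_kernel(x_src, x_obs, h):
        """Bz at observation points from unit vertical dipoles (up to constants):
        a simplified point-dipole geometry factor, (2h^2 - dx^2) / r^5."""
        dx = x_obs[:, None] - x_src[None, :]
        r2 = dx**2 + h**2
        return (2*h**2 - dx**2) / r2**2.5

    # Sources on a 1D line, sensor scan line at height h above it
    x_src = np.linspace(0, 1, 80)
    x_obs = np.linspace(0, 1, 200)
    A = dipole_kernel(x_src, x_obs, h=0.02)

    # Sparse, non-negative "magnetization" and noisy synthetic measurements
    m_true = np.zeros(80)
    m_true[[20, 45, 46, 60]] = [1.0, 0.5, 0.8, 0.3]
    b = A @ m_true + 1e-3 * np.random.default_rng(0).standard_normal(200)

    m_est, residual = nnls(A, b)               # enforces m >= 0 by construction
    print("recovered support:", np.nonzero(m_est > 0.05)[0])
    ```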

  11. A Frequency-Domain Implementation of a Sliding-Window Traffic Sign Detector for Large Scale Panoramic Datasets

    NASA Astrophysics Data System (ADS)

    Creusen, I. M.; Hazelhoff, L.; De With, P. H. N.

    2013-10-01

    In large-scale automatic traffic sign surveying systems, the primary computational effort is concentrated in the traffic sign detection stage. This paper focuses on reducing the computational load of the sliding-window object detection algorithm which is employed for traffic sign detection. Sliding-window object detectors often use a linear SVM to classify the features in a window. In this case, the classification can be seen as a convolution of the feature maps with the SVM kernel. It is well known that convolution can be efficiently implemented in the frequency domain, for kernels larger than a certain size. We show that by careful reordering of sliding-window operations, most of the frequency-domain transformations can be eliminated, leading to a substantial increase in efficiency. Additionally, we suggest using the overlap-add method to keep the memory use within reasonable bounds. This allows us to keep all the transformed kernels in memory, thereby eliminating even more domain transformations, and allows all scales in a multiscale pyramid to be processed using the same set of transformed kernels. For a typical sliding-window implementation, we have found that the detector execution performance improves by a factor of 5.3. As a bonus, many of the detector improvements from the literature, e.g. chi-squared kernel approximations, sub-class splitting algorithms etc., can be applied more easily and at a lower performance penalty because of the improved scalability.
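
    The identity the paper exploits, that linear-SVM sliding-window scoring is a cross-correlation of the feature map with the SVM weight planes and can therefore be computed in the frequency domain, can be sketched with scipy.signal.fftconvolve. The overlap-add bookkeeping and kernel caching described above are omitted; all sizes are illustrative:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(0)
    H, W, C = 480, 640, 8          # feature map size and number of feature channels
    kh, kw = 32, 16                # detection window size in feature cells

    feature_map = rng.standard_normal((H, W, C))
    svm_w = rng.standard_normal((kh, kw, C))   # linear SVM weights, one plane per channel
    svm_b = -1.0

    # Sliding-window score = sum over channels of 2D cross-correlation with the
    # SVM plane. Correlation == convolution with a flipped kernel, which
    # fftconvolve carries out in the frequency domain for kernels of this size.
    score = sum(
        fftconvolve(feature_map[:, :, c], svm_w[::-1, ::-1, c], mode="valid")
        for c in range(C)
    ) + svm_b

    print(score.shape)             # (H-kh+1, W-kw+1): one score per window position

    # Direct check at one window position:
    i, j = 100, 200
    direct = np.sum(feature_map[i:i+kh, j:j+kw, :] * svm_w) + svm_b
    assert np.allclose(score[i, j], direct)
    ```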

  12. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation

    PubMed Central

    Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei

    2017-01-01

    Depth image-based rendering (DIBR), which is used to render virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process inevitably leads to holes in the resulting 3D image as a result of newly exposed areas. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. First, our framework integrates hybrid constraints including scene structure, edge consistency and visual saliency information in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Different from other similar methods, the proposed method can simultaneously achieve hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it yields visually satisfactory results with less computational complexity for high-quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027
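
    The domain transform that this framework builds on (the edge-aware recursive filter of Gastal and Oliveira, 2011) is easy to show in its 1D form: warp the signal axis by the accumulated intensity variation, then apply a recursive exponential filter whose feedback decays with warped distance. A minimal sketch of that underlying filter only, not the proposed framework's structure, saliency and adaptivity terms; parameter values are illustrative:

    ```python
    import numpy as np

    def domain_transform_filter_1d(I, guide, sigma_s=20.0, sigma_r=0.1, iterations=3):
        """1D recursive edge-aware filter in the transformed domain
        (after Gastal & Oliveira); I is filtered, guide supplies the edges."""
        dIdx = np.abs(np.diff(guide))
        # Derivative of the domain transform: 1 + (sigma_s/sigma_r)|I'(x)|
        dct = 1.0 + (sigma_s / sigma_r) * dIdx
        J = I.astype(float).copy()
        for it in range(iterations):
            # Progressively narrower kernels across iterations, as in the paper
            sigma_H = (sigma_s * np.sqrt(3.0) * 2.0**(iterations - it - 1)
                       / np.sqrt(4.0**iterations - 1.0))
            a = np.exp(-np.sqrt(2.0) / sigma_H)
            V = a ** dct                           # per-sample feedback coefficients
            for i in range(1, len(J)):             # left-to-right pass
                J[i] += V[i - 1] * (J[i - 1] - J[i])
            for i in range(len(J) - 2, -1, -1):    # right-to-left pass
                J[i] += V[i] * (J[i + 1] - J[i])
        return J

    # Toy depth scanline: two flat regions, a sharp edge, and noise
    rng = np.random.default_rng(0)
    depth = np.concatenate([np.full(100, 1.0), np.full(100, 3.0)])
    noisy = depth + 0.1 * rng.standard_normal(200)
    smoothed = domain_transform_filter_1d(noisy, guide=noisy)
    print("edge preserved:", abs(smoothed[110] - smoothed[90]) > 1.5)
    ```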

  13. Co-autodisplay of Z-domains and bovine caseins on the outer membrane of E. coli.

    PubMed

    Yoo, Gu; Saenger, Thorsten; Bong, Ji-Hong; Jose, Joachim; Kang, Min-Jung; Pyun, Jae-Chul

    2015-12-01

    In this work, two proteins, Z-domains and bovine casein, were auto-displayed on the outer membrane of the same Escherichia coli cells by co-transformation with two different auto-display vectors. On the basis of SDS-PAGE densitometry, Z-domains and bovine casein were expressed at 3.12 × 10⁵ and 1.55 × 10⁵ proteins per E. coli cell, respectively. The co-auto-displayed Z-domains had antibody-binding activity, and the bovine casein had adhesive properties. E. coli with co-auto-displayed proteins were analyzed by fluorescence-activated cell sorting (FACS). E. coli with co-auto-displayed Z-domains and bovine casein aggregated due to hydrophobic interaction. For application to immunoassays, the Z-domain activity was estimated after (1) immobilizing the E. coli and (2) forming an outer membrane (OM) layer. E. coli co-auto-displaying the two proteins, immobilized on a polystyrene microplate, had the same antibody-binding activity as E. coli with auto-displayed Z-domains only. Antibody-binding activity measurements indicated that the OM layer from the co-transformed E. coli expressed Z-domains and bovine casein at a 1:2 ratio. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Expression of the high capacity calcium-binding domain of calreticulin increases bioavailable calcium stores in plants

    NASA Technical Reports Server (NTRS)

    Wyatt, Sarah E.; Tsou, Pei-Lan; Robertson, Dominique; Brown, C. S. (Principal Investigator)

    2002-01-01

    Modulation of cytosolic calcium levels in both plants and animals is achieved by a system of Ca2+-transport and storage pathways that include Ca2+ buffering proteins in the lumen of intracellular compartments. To date, most research has focused on the role of transporters in regulating cytosolic calcium. We used a reverse genetics approach to modulate calcium stores in the lumen of the endoplasmic reticulum. Our goals were two-fold: to use the low affinity, high capacity Ca2+ binding characteristics of the C-domain of calreticulin to selectively increase Ca2+ storage in the endoplasmic reticulum, and to determine if those alterations affected plant physiological responses to stress. The C-domain of calreticulin is a highly acidic region that binds 20-50 moles of Ca2+ per mole of protein and has been shown to be the major site of Ca2+ storage within the endoplasmic reticulum of plant cells. A 377-bp fragment encoding the C-domain and ER retention signal from the maize calreticulin gene was fused to a gene for the green fluorescent protein and expressed in Arabidopsis under the control of a heat shock promoter. Following induction on normal medium, the C-domain transformants showed delayed loss of chlorophyll after transfer to calcium depleted medium when compared to seedlings transformed with green fluorescent protein alone. Total calcium measurements showed a 9-35% increase for induced C-domain transformants compared to controls. The data suggest that ectopic expression of the calreticulin C-domain increases Ca2+ stores, and that this Ca2+ reserve can be used by the plant in times of stress.

  15. Integrated In Silico-In Vitro Identification and Characterization of the SH3-Mediated Interaction between Human PTTG and its Cognate Partners in Medulloblastoma.

    PubMed

    Liu, Jiangang; Wang, Dapeng; Li, Yanyan; Yao, Hui; Zhang, Nan; Zhang, Xuewen; Zhong, Fangping; Huang, Yulun

    2018-06-01

    The human pituitary tumor-transforming gene (PTTG) is an oncogenic protein which serves as a central hub in the cellular signaling network of medulloblastoma. The protein contains two vicinal PxxP motifs at its C terminus that are potential binding sites of peptide-recognition SH3 domains. Here, a synthetic protocol that integrated in silico analysis and in vitro assays is described to identify the SH3-binding partners of pituitary tumor-transforming gene in the gene expression profile of medulloblastoma. In the procedure, a variety of structurally diverse, non-redundant SH3 domains with high gene expression in medulloblastoma were compiled, and their three-dimensional structures were either manually retrieved from the protein data bank database or computationally modeled through bioinformatics techniques. The binding capability of these domains towards the two PxxP-containing peptides m1p: 161 LGPPSPVK 168 and m2p: 168 KMPSPPWE 175 of pituitary tumor-transforming gene was ranked by structure-based scoring and a fluorescence-based assay. Consequently, a number of SH3 domains, including MAP3K and PI3K, were found to have moderate or high affinity for m1p and/or m2p. Interestingly, the two overlapping peptides exhibit a distinct binding profile to these identified domain partners, suggesting that the binding selectivity of m1p and m2p is optimized across the medulloblastoma expression spectrum by competing for domain candidates. In addition, two redesigned versions of the m1p peptide were obtained via a structure-based rational mutation approach, which exhibited an increased affinity for the domain as compared to the native peptide.

  16. The PDZ domain binding motif (PBM) of human T-cell leukemia virus type 1 Tax can be substituted by heterologous PBMs from viral oncoproteins during T-cell transformation.

    PubMed

    Aoyagi, Tomoya; Takahashi, Masahiko; Higuchi, Masaya; Oie, Masayasu; Tanaka, Yuetsu; Kiyono, Tohru; Aoyagi, Yutaka; Fujii, Masahiro

    2010-04-01

    Several tumor viruses, such as human T-cell leukemia virus (HTLV), human papillomavirus (HPV), and human adenovirus, have high-oncogenic and low-oncogenic subtypes, and such subtype-specific oncogenesis is associated with the PDZ-domain binding motif (PBM) in their transforming proteins. HTLV-1, the causative agent of adult T-cell leukemia, encodes Tax1, a transforming protein with a PBM. The Tax1 PBM was substituted with those from other oncoviruses, and the transforming activity was examined. Tax1 mutants with the PBM from either HPV-16 E6 or adenovirus type 9 E4ORF1 are fully active in the transformation of a mouse T-cell line from interleukin-2-dependent growth to independent growth. Interestingly, one such Tax1 PBM mutant had an extra amino acid insertion derived from E6 between the PBM and the rest of Tax1, suggesting that the sequence and length of the peptide linking the PBM to the rest of Tax1 only slightly affect the function of the PBM in transformation. Tax1 and the Tax1 PBM mutants interacted with the tumor suppressors Dlg1 and Scribble through their PDZ domains. Unlike E6, the Tax1 PBM mutants, as well as Tax1, induced little or no degradation of Dlg1 and Scribble, but instead induced their subcellular translocation from the detergent-soluble fraction into the insoluble fraction, suggesting that the inactivation mechanism of these tumor suppressor proteins is distinct. The present results suggest that the PBMs of high-risk oncoviruses have a common function(s) required for these three tumor viruses to transform cells, which is likely associated with their subtype-specific oncogenesis.

  17. Methods of increasing secretion of polypeptides having biological activity

    DOEpatents

    Merino, Sandra

    2014-05-27

    The present invention relates to methods for producing a secreted polypeptide having biological activity, comprising: (a) transforming a fungal host cell with a fusion protein construct encoding a fusion protein, which comprises: (i) a first polynucleotide encoding a signal peptide; (ii) a second polynucleotide encoding at least a catalytic domain of an endoglucanase or a portion thereof; and (iii) a third polynucleotide encoding at least a catalytic domain of a polypeptide having biological activity; wherein the signal peptide and at least the catalytic domain of the endoglucanase increases secretion of the polypeptide having biological activity compared to the absence of at least the catalytic domain of the endoglucanase; (b) cultivating the transformed fungal host cell under conditions suitable for production of the fusion protein; and (c) recovering the fusion protein, a component thereof, or a combination thereof, having biological activity, from the cultivation medium.

  18. Methods of increasing secretion of polypeptides having biological activity

    DOEpatents

    Merino, Sandra

    2014-10-28

    The present invention relates to methods for producing a secreted polypeptide having biological activity, comprising: (a) transforming a fungal host cell with a fusion protein construct encoding a fusion protein, which comprises: (i) a first polynucleotide encoding a signal peptide; (ii) a second polynucleotide encoding at least a catalytic domain of an endoglucanase or a portion thereof; and (iii) a third polynucleotide encoding at least a catalytic domain of a polypeptide having biological activity; wherein the signal peptide and at least the catalytic domain of the endoglucanase increases secretion of the polypeptide having biological activity compared to the absence of at least the catalytic domain of the endoglucanase; (b) cultivating the transformed fungal host cell under conditions suitable for production of the fusion protein; and (c) recovering the fusion protein, a component thereof, or a combination thereof, having biological activity, from the cultivation medium.

  19. Methods of increasing secretion of polypeptides having biological activity

    DOEpatents

    Merino, Sandra

    2015-04-14

    The present invention relates to methods for producing a secreted polypeptide having biological activity, comprising: (a) transforming a fungal host cell with a fusion protein construct encoding a fusion protein, which comprises: (i) a first polynucleotide encoding a signal peptide; (ii) a second polynucleotide encoding at least a catalytic domain of an endoglucanase or a portion thereof; and (iii) a third polynucleotide encoding at least a catalytic domain of a polypeptide having biological activity; wherein the signal peptide and at least the catalytic domain of the endoglucanase increase secretion of the polypeptide having biological activity compared to the absence of at least the catalytic domain of the endoglucanase; (b) cultivating the transformed fungal host cell under conditions suitable for production of the fusion protein; and (c) recovering the fusion protein, a component thereof, or a combination thereof, having biological activity, from the cultivation medium.

  20. Methods of increasing secretion of polypeptides having biological activity

    DOEpatents

    Merino, Sandra

    2013-10-01

    The present invention relates to methods for producing a secreted polypeptide having biological activity, comprising: (a) transforming a fungal host cell with a fusion protein construct encoding a fusion protein, which comprises: (i) a first polynucleotide encoding a signal peptide; (ii) a second polynucleotide encoding at least a catalytic domain of an endoglucanase or a portion thereof; and (iii) a third polynucleotide encoding at least a catalytic domain of a polypeptide having biological activity; wherein the signal peptide and at least the catalytic domain of the endoglucanase increase secretion of the polypeptide having biological activity compared to the absence of at least the catalytic domain of the endoglucanase; (b) cultivating the transformed fungal host cell under conditions suitable for production of the fusion protein; and (c) recovering the fusion protein, a component thereof, or a combination thereof, having biological activity, from the cultivation medium.

  1. Signal processing method and system for noise removal and signal extraction

    DOEpatents

    Fu, Chi Yung; Petrich, Loren

    2009-04-14

    A signal processing method and system combining smooth-level wavelet pre-processing with artificial neural networks, all in the wavelet domain, for signal denoising and extraction. Upon receiving a signal corrupted with noise, an n-level decomposition of the signal is performed using a discrete wavelet transform to produce a smooth component and a rough component for each decomposition level. The nth-level smooth component is then input into a corresponding neural network pre-trained to filter out noise in that component by pattern recognition in the wavelet domain. Additional rough components, beginning at the highest level, may also be retained and input into corresponding neural networks pre-trained to filter out noise in those components, also by pattern recognition in the wavelet domain. In any case, an inverse discrete wavelet transform is performed on the combined output from all the neural networks to recover a clean signal back in the time domain.
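
    A minimal sketch of this decompose-denoise-reconstruct pipeline, assuming PyWavelets is available; the pre-trained neural networks are stood in for by a hypothetical denoise() callable, since the patent's networks are component-specific pattern recognizers:

        import numpy as np
        import pywt

        def denoise(component):
            # Hypothetical stand-in for a neural network pre-trained to
            # filter noise in one wavelet component.
            return pywt.threshold(component, value=0.1, mode='soft')

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 1024)
        signal = np.sin(2 * np.pi * 5 * t) + 0.2 * rng.standard_normal(t.size)

        # n-level discrete wavelet decomposition: one smooth (approximation)
        # component plus one rough (detail) component per level.
        coeffs = pywt.wavedec(signal, 'db4', level=4)
        smooth, roughs = coeffs[0], coeffs[1:]

        # Denoise the nth-level smooth component and the retained rough
        # components, then invert the transform to recover a clean signal.
        cleaned = [denoise(smooth)] + [denoise(r) for r in roughs]
        clean_signal = pywt.waverec(cleaned, 'db4')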

  2. Direct observation of magnetic domains by Kerr microscopy in a Ni-Mn-Ga magnetic shape-memory alloy

    NASA Astrophysics Data System (ADS)

    Perevertov, O.; Heczko, O.; Schäfer, R.

    2017-04-01

    The magnetic domains in a magnetic shape-memory Ni-Mn-Ga alloy were observed by magneto-optical Kerr microscopy using monochromatic blue LED light. The domains were observed for both single- and multivariant ferroelastic states of modulated martensite. The multivariant state with very fine twins formed spontaneously after transformation from high-temperature austenite. In both cases, bar domains separated by 180° domain walls were found and their dynamics were studied. A quasidomain model was applied to explain the domains in the multivariant state.

  3. Efficient processing of MPEG-21 metadata in the binary domain

    NASA Astrophysics Data System (ADS)

    Timmerer, Christian; Frank, Thomas; Hellwagner, Hermann; Heuer, Jörg; Hutter, Andreas

    2005-10-01

    XML-based metadata is widely adopted across the different communities, and plenty of commercial and open-source tools for processing and transforming it are available on the market. However, all of these tools have one thing in common: they operate on plain-text-encoded metadata, which may become a burden in constrained and streaming environments, i.e., when metadata needs to be processed together with multimedia content on the fly. In this paper we present an efficient approach for transforming such metadata encoded using MPEG's Binary Format for Metadata (BiM) without additional encoding/decoding overhead, i.e., within the binary domain. To this end, we have developed an event-based push parser for BiM-encoded metadata which transforms the metadata by a limited set of processing instructions - based on traditional XML transformation techniques - operating on bit patterns instead of cost-intensive string comparisons.

  4. AS Migration and Optimization of the Power Integrated Data Network

    NASA Astrophysics Data System (ADS)

    Zhou, Junjie; Ke, Yue

    2018-03-01

    In the transformation process of an integrated data network, the impact on running services has always been the most important factor in measuring the quality of the network transformation. Because the data network carries important services, specific design proposals must be put forward during the transformation and validated through extensive demonstration and practice to ensure that the transformation program meets the requirements of the enterprise data network. This paper mainly demonstrates the scheme used in the reconstruction of a power data network for migrating point-to-point access equipment, migrating the BGP autonomous system to the domain specified in the industrial standard, and smoothly migrating the intranet routing protocol from OSPF to IS-IS. Through the optimized design, the power data network ultimately achieved improved traffic-forwarding performance, optimized forwarding paths, greater extensibility, a lower risk of potential routing loops, improved network stability, and operational cost savings.

  5. A new class of sonic composites

    NASA Astrophysics Data System (ADS)

    Munteanu, Ligia; Chiroiu, Veturia; Donescu, Ştefania; Brişan, Cornel

    2014-03-01

    Transformation acoustics opens a new avenue toward the architecture, modeling, and simulation of a new class of sonic composites whose scatterers, made of various materials and having various shapes, are embedded in an epoxy matrix. The design of acoustic scatterers is based on the invariance of the Helmholtz equation under coordinate transformations, i.e., a specific spatial compression is equivalent to a new material in a new space. In this paper, noise suppression over a wide, full band gap of frequencies is discussed for spherical shell scatterers made of auxetic materials (materials with negative Poisson's ratio). The original domain consists of spheres made from conventional foams with positive Poisson's ratio. The spatial compression is controlled by the coordinate transformation and leads to an equivalent domain filled with an auxetic material. The coordinate transformation is strongly supported by the manufacturing of auxetics, which is based on pore-size reduction through radial compression molds.

  6. Motion compensation via redundant-wavelet multihypothesis.

    PubMed

    Fowler, James E; Cui, Suxia; Wang, Yonghui

    2006-10-01

    Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.

  7. Domain shape instabilities and dendrite domain growth in uniaxial ferroelectrics

    NASA Astrophysics Data System (ADS)

    Shur, Vladimir Ya.; Akhmatkhanov, Andrey R.

    2018-01-01

    The effects of domain wall shape instabilities and the formation of nanodomains in front of moving walls obtained in various uniaxial ferroelectrics are discussed. Special attention is paid to the formation of self-assembled nanoscale and dendrite domain structures under highly non-equilibrium switching conditions. All obtained results are considered in the framework of a unified kinetic approach to domain structure evolution based on the analogy with first-order phase transformation. This article is part of the theme issue "From atomistic interfaces to dendritic patterns".

  8. Thermal stabilization of static single-mirror Fourier transform spectrometers

    NASA Astrophysics Data System (ADS)

    Schardt, Michael; Schwaller, Christian; Tremmel, Anton J.; Koch, Alexander W.

    2017-05-01

    Fourier transform spectroscopy has become a standard method for the spectral analysis of infrared light. With this method, an interferogram is created by two-beam interference and is subsequently Fourier-transformed. Most Fourier transform spectrometers used today provide the interferogram in the temporal domain. In contrast, static Fourier transform spectrometers generate interferograms in the spatial domain. One example of this type of spectrometer is the static single-mirror Fourier transform spectrometer, which offers a high etendue in combination with a simple, miniaturized optics design. As no moving parts are required, it also features high vibration resistance and high measurement rates. However, it is susceptible to temperature variations. In this paper, we therefore discuss the main sources of temperature-induced errors in static single-mirror Fourier transform spectrometers: changes in the refractive index of the optical components used, variations of the detector sensitivity, and thermal expansion of the housing. As these errors manifest themselves in temperature-dependent wavenumber shifts and intensity shifts, they prevent static single-mirror Fourier transform spectrometers from delivering long-term stable spectra. To eliminate these shifts, we additionally present a working concept for the thermal stabilization of the spectrometer. With this stabilization, static single-mirror Fourier transform spectrometers are made suitable for infrared process spectroscopy under harsh thermal environmental conditions. As the static single-mirror Fourier transform spectrometer uses the so-called source-doubling principle, many of the findings mentioned are transferable to other designs of static Fourier transform spectrometers based on the same principle.

  9. Wavelet-domain de-noising of OCT images of human brain malignant glioma

    NASA Astrophysics Data System (ADS)

    Dolganova, I. N.; Aleksandrova, P. V.; Beshplav, S.-I. T.; Chernomyrdin, N. V.; Dubyanskaya, E. N.; Goryaynov, S. A.; Kurlov, V. N.; Reshetov, I. V.; Potapov, A. A.; Tuchin, V. V.; Zaytsev, K. I.

    2018-04-01

    We have proposed a wavelet-domain de-noising technique for imaging of human brain malignant glioma by optical coherence tomography (OCT). It involves decomposing the OCT image with the direct fast wavelet transform, thresholding the obtained wavelet spectrum, and applying the inverse fast wavelet transform for image reconstruction. By selecting both the wavelet basis and the thresholding procedure, we have found an optimal wavelet filter whose application improves differentiation of the considered brain tissue classes - i.e., malignant glioma and normal/intact tissue. Namely, it reduces the scattering noise in the OCT images while retaining the signal decrement for each tissue class. The observed results therefore reveal wavelet-domain de-noising as a prospective tool for improved characterization of biological tissue using OCT.
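
    A compact sketch of this decompose-threshold-reconstruct chain for a 2-D image, assuming PyWavelets; the 'sym8' basis and the universal threshold are illustrative choices, not the optimal filter identified by the authors:

        import numpy as np
        import pywt

        def wavelet_denoise(img, wavelet='sym8', level=3):
            coeffs = pywt.wavedec2(img, wavelet, level=level)
            # Noise level estimated from the finest diagonal subband,
            # then the Donoho-Johnstone universal threshold.
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(img.size))
            denoised = [coeffs[0]] + [
                tuple(pywt.threshold(c, thr, mode='soft') for c in detail)
                for detail in coeffs[1:]
            ]
            return pywt.waverec2(denoised, wavelet)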

  10. The magnifying glass - A feature space local expansion for visual analysis. [and image enhancement

    NASA Technical Reports Server (NTRS)

    Juday, R. D.

    1981-01-01

    The Magnifying Glass Transformation (MGT) technique is proposed as a multichannel spectral operation yielding visual imagery that is enhanced in a specified spectral vicinity, guided by the statistics of training samples. An example application is increasing the discrimination among spectral neighbors within an interactive display without altering the appearance of distant objects or the overall interpretation. A direct histogram-specification technique is applied to the channels within the multispectral image so that a subset of the spectral domain occupies an increased fraction of the domain. The transformation is carried out by obtaining the training information, establishing the condition of the covariance matrix, determining the influenced solid, and initializing the lookup table. Finally, the image is transformed.
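
    The histogram-specification step can be sketched per channel as a generic matching routine in numpy; the covariance conditioning and lookup-table stages of the MGT are not shown:

        import numpy as np

        def match_histogram(channel, template):
            # Remap the grey levels of 'channel' so its histogram
            # follows that of 'template' (direct specification).
            src, idx, counts = np.unique(channel.ravel(),
                                         return_inverse=True,
                                         return_counts=True)
            tmpl, t_counts = np.unique(template.ravel(), return_counts=True)
            src_cdf = np.cumsum(counts) / channel.size
            tmpl_cdf = np.cumsum(t_counts) / template.size
            mapped = np.interp(src_cdf, tmpl_cdf, tmpl)
            return mapped[idx].reshape(channel.shape)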

  11. Data characteristic analysis of air conditioning load based on fast Fourier transform

    NASA Astrophysics Data System (ADS)

    Li, Min; Zhang, Yanchi; Xie, Da

    2018-04-01

    With the development of the economy and the improvement of living standards, air-conditioning equipment is becoming increasingly common, and the influence of the air-conditioning load on the power grid is becoming ever more serious. In this context it is necessary to study the characteristics of the air-conditioning load. This paper analyzes air-conditioning power-consumption data from an office building. The data are processed with the fast Fourier transform in data-analysis software, and a series of plots is drawn from the transformed data. The characteristics of each plot are analyzed separately. The hidden patterns in these data, which are hard to find in the time domain, are thus mined from the frequency-domain perspective.
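
    A minimal sketch of this kind of frequency-domain analysis, assuming a regularly sampled consumption series; the hourly sampling and the synthetic daily and half-daily components are illustrative:

        import numpy as np

        # Hypothetical hourly power-consumption series for one month
        hours = np.arange(30 * 24)
        load = (50 + 20 * np.sin(2 * np.pi * hours / 24)    # daily cycle
                + 5 * np.sin(2 * np.pi * hours / 12)        # half-daily cycle
                + np.random.default_rng(1).normal(0, 2, hours.size))

        spectrum = np.fft.rfft(load - load.mean())
        freqs = np.fft.rfftfreq(load.size, d=1.0)           # cycles per hour

        # Periodicities hidden in the time domain stand out as spectral
        # peaks; here the strongest sits near 1/24 cycles per hour.
        peak = freqs[np.abs(spectrum).argmax()]
        print(f"dominant period = {1 / peak:.1f} hours")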

  12. Analytical Study on the Saturated Polarization Under Electric Field and Phase Equilibrium of Three-Phase Polycrystalline Ferroelectrics by Using the Generalized Inverse-Pole-Figure Model

    NASA Astrophysics Data System (ADS)

    Ju, Kyong-Sik; Ryo, Hyok-Su; Pak, Sung-Nam; Pak, Chang-Su; Ri, Sung-Guk; Ri, Dok-Hwan

    2018-07-01

    By using the generalized inverse-pole-figure model, the numbers of crystalline particles involved in different domain switchings near the tetragonal-rhombohedral-orthorhombic (T-R-O) triple points of three-phase polycrystalline ferroelectrics have been analytically calculated, and the domain switchings that can bring about phase transformations have been considered. Through polarization by an electric field, different numbers of crystalline particles can be involved in different phase transformations. According to the phase-equilibrium conditions, the phase-equilibrium compositions of the three phases coexisting near the T-R-O triple point have been evaluated from the numbers of crystalline particles involved in the different phase transformations.

  13. Representation of Complex Spectra in Auditory Cortex

    DTIC Science & Technology

    1997-01-01

    predict the response to any broadband dynamic sound. The spectro-temporal transform is the Fourier-transform pair ∫[·] exp(±2πjΩx ± 2πjwt), where x = log f, w is the "ripple velocity," and Ω is the "ripple frequency." Real functions in the spectro-temporal domain give rise to complex conjugate symmetric functions in the Fourier domain. (Institute for Systems Research, University of Maryland.)

  14. Exploring the Conceptual Compatibility of Transformative Learning Theory in Accounts of Christian Spiritual Renewal at Wheaton College in 1995

    ERIC Educational Resources Information Center

    McLaughlin, Richard J.

    2014-01-01

    This research explored the conceptual compatibility of Transformative Learning Theory in accounts of Christian spiritual renewal at Wheaton College in 1995. The literature review examined two domains: Transformative Learning Theory (TLT) and renewal of spiritual life in American students. TLT was applied as quadrants of experience, critical…

  15. Trigonometric Transforms for Image Reconstruction

    DTIC Science & Technology

    1998-06-01

    applying trigonometric transforms to image reconstruction problems. Many existing linear image reconstruction techniques rely on knowledge of ... ancestors. The research performed for this dissertation represents the first time the symmetric convolution-multiplication property of trigonometric ... Fourier domain. The traditional representation of these filters will be similar to new trigonometric transform versions derived in later chapters

  16. Irreversible transformation of ferromagnetic ordered stripe domains in single-shot infrared-pump/resonant-x-ray-scattering-probe experiments

    NASA Astrophysics Data System (ADS)

    Bergeard, Nicolas; Schaffert, Stefan; López-Flores, Víctor; Jaouen, Nicolas; Geilhufe, Jan; Günther, Christian M.; Schneider, Michael; Graves, Catherine; Wang, Tianhan; Wu, Benny; Scherz, Andreas; Baumier, Cédric; Delaunay, Renaud; Fortuna, Franck; Tortarolo, Marina; Tudu, Bharati; Krupin, Oleg; Minitti, Michael P.; Robinson, Joe; Schlotter, William F.; Turner, Joshua J.; Lüning, Jan; Eisebitt, Stefan; Boeglin, Christine

    2015-02-01

    The evolution of a magnetic domain structure upon excitation by an intense, femtosecond infrared (IR) laser pulse has been investigated using single-shot-based time-resolved resonant x-ray scattering at the x-ray free-electron laser LCLS. A well-ordered stripe domain pattern, as present in a thin CoPd alloy film, was used as a prototype magnetic domain structure for this study. The fluence of the IR laser pump pulse was sufficient to lead to an almost complete quenching of the magnetization within the ultrafast demagnetization process taking place within the first few hundred femtoseconds following the IR laser pump pulse excitation. On longer time scales this excitation gave rise to subsequent irreversible transformations of the magnetic domain structure. Under our specific experimental conditions, it took about 2 ns before the magnetization started to recover. After about 5 ns the previously ordered stripe domain structure had evolved into a disordered labyrinth domain structure. Surprisingly, after about 7 ns we observe the occurrence of a partially ordered stripe domain structure reoriented along a new direction. It is in this domain structure that the sample's magnetization stabilizes, as revealed by scattering patterns recorded long after the initial pump-probe cycle. Using micromagnetic simulations we can explain this observation by changes of the magnetic anisotropy accompanying heat dissipation in the film.

  17. Frequency-Domain Identification Of Aeroelastic Modes

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.; Tischler, Mark B.

    1991-01-01

    Report describes flight measurements and frequency-domain analyses of aeroelastic vibrational modes of the wings of the XV-15 tilt-rotor aircraft. Begins with description of flight-test methods, followed by brief discussion of methods of analysis, which include Fourier-transform computations using chirp z-transforms, use of coherence and other spectral functions, and methods and computer programs for obtaining frequencies and damping coefficients from measurements. Includes brief description of results of flight tests and comparisons among various experimental and theoretical results. Ends with section on conclusions and recommended improvements in techniques.
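
    The chirp z-transform lets the analysis zoom the spectrum into the narrow band where an aeroelastic mode is expected; a sketch with SciPy (scipy.signal.czt, available in SciPy 1.8 and later) on an illustrative decaying 6.2-Hz mode:

        import numpy as np
        from scipy.signal import czt  # SciPy >= 1.8

        fs = 250.0
        t = np.arange(0, 8, 1 / fs)
        x = np.exp(-0.3 * t) * np.sin(2 * np.pi * 6.2 * t)  # decaying mode

        # Evaluate the z-transform on m points along the unit circle
        # covering only the 4-9 Hz band of interest.
        f1, f2, m = 4.0, 9.0, 512
        w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))
        a = np.exp(2j * np.pi * f1 / fs)
        X = czt(x, m=m, w=w, a=a)
        freqs = f1 + np.arange(m) * (f2 - f1) / m
        print(freqs[np.abs(X).argmax()])  # peak near the 6.2 Hz mode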

  18. Time history solution program, L225 (TEV126). Volume 1: Engineering and usage

    NASA Technical Reports Server (NTRS)

    Kroll, R. I.; Tornallyay, A.; Clemmons, R. E.

    1979-01-01

    Volume 1 of a two-volume document is presented. The usage of the convolution program L225 (TEV126) is described. The program calculates the time response of a linear system by convolving the impulsive response function with the time-dependent excitation function. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. A brief description of the analysis used is presented.
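
    The convolution-as-frequency-domain-multiplication step can be sketched in a few lines of numpy, with an illustrative damped-oscillator impulse response and a rectangular pulse excitation:

        import numpy as np

        dt = 0.01
        t = np.arange(0, 20, dt)

        # Illustrative impulsive response and time-dependent excitation
        h = np.exp(-0.5 * t) * np.sin(2 * np.pi * 1.5 * t)
        f = np.where(t < 1.0, 1.0, 0.0)

        # Multiply in the frequency domain, then transform the product
        # back into the time domain (zero-padded to avoid wrap-around).
        n = 2 * t.size
        H, F = np.fft.rfft(h, n), np.fft.rfft(f, n)
        response = np.fft.irfft(H * F, n)[:t.size] * dt  # time history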

  19. Fourier transform coupled tryptophan scanning mutagenesis identifies a bending point on the lipid-exposed δM3 transmembrane domain of the Torpedo californica nicotinic acetylcholine receptor

    PubMed Central

    Caballero-Rivera, Daniel; Cruz-Nieves, Omar A; Oyola-Cintrón, Jessica; Torres-Núñez, David A; Otero-Cruz, José D

    2011-01-01

    The nicotinic acetylcholine receptor (nAChR) is a member of a family of ligand-gated ion channels that mediate diverse physiological functions, including fast synaptic transmission along the peripheral and central nervous systems. Several studies have made significant advances toward determining the structure and dynamics of the lipid-exposed domains of the nAChR. However, a high-resolution atomic structure of the nAChR still remains elusive. In this study, we extended the Fourier transform coupled tryptophan scanning mutagenesis (FT-TrpScanM) approach to gain insight into the secondary structure of the δM3 transmembrane domain of the Torpedo californica nAChR, to monitor conformational changes experienced by this domain during channel gating, and to identify which lipid-exposed positions are linked to the regulation of ion channel kinetics. The perturbations produced by periodic tryptophan substitutions along the δM3 transmembrane domain were characterized by two-electrode voltage clamp and 125I-labeled α-bungarotoxin binding assays. The periodicity profiles and Fourier transform spectra of this domain revealed similar helical structures for the closed- and open-channel states. However, changes in the oscillation patterns observed between positions Val-299 and Val-304 during the transition between the closed- and open-channel states can be explained by the structural effects caused by the presence of a bending point introduced by a Thr-Gly motif at positions 300-301. The changes in periodicity and localization of residues between the closed- and open-channel states could indicate a structural transition between helix types in this segment of the domain. Overall, the data further demonstrate a functional link between the lipid-exposed transmembrane domain and the nAChR gating machinery. PMID:21785268
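
    The periodicity analysis behind FT-TrpScanM reduces to a Fourier power spectrum of a per-residue perturbation profile, with an α-helix showing a peak near 100° per residue; a sketch on synthetic numbers (not the δM3 data):

        import numpy as np

        # Hypothetical per-residue perturbation values along a scanned segment
        rng = np.random.default_rng(2)
        residues = np.arange(20)
        profile = np.cos(np.deg2rad(100.0) * residues) \
                  + 0.3 * rng.standard_normal(residues.size)

        # Fourier power spectrum over angular frequencies 0-180 deg/residue
        omega = np.deg2rad(np.arange(0, 181))
        phases = np.exp(1j * np.outer(omega, residues))
        power = np.abs(phases @ (profile - profile.mean())) ** 2

        print(np.rad2deg(omega[power.argmax()]))  # peak near 100 deg -> helical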

  20. Accelerated High-Dimensional MR Imaging with Sparse Sampling Using Low-Rank Tensors

    PubMed Central

    He, Jingfei; Liu, Qiegen; Christodoulou, Anthony G.; Ma, Chao; Lam, Fan

    2017-01-01

    High-dimensional MR imaging often requires long data acquisition time, thereby limiting its practical applications. This paper presents a low-rank tensor based method for accelerated high-dimensional MR imaging using sparse sampling. This method represents high-dimensional images as low-rank tensors (or partially separable functions) and uses this mathematical structure for sparse sampling of the data space and for image reconstruction from highly undersampled data. More specifically, the proposed method acquires two datasets with complementary sampling patterns, one for subspace estimation and the other for image reconstruction; image reconstruction from highly undersampled data is accomplished by fitting the measured data with a sparsity constraint on the core tensor and a group sparsity constraint on the spatial coefficients jointly using the alternating direction method of multipliers. The usefulness of the proposed method is demonstrated in MRI applications; it may also have applications beyond MRI. PMID:27093543
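
    The partially separable idea is easiest to see in its two-way (matrix) form: estimate a temporal subspace from one dataset, then fit spatial coefficients to the other. The sketch below uses fully sampled synthetic data and omits the tensor structure and the sparsity constraints of the actual method:

        import numpy as np

        rng = np.random.default_rng(0)
        L = 4  # model order (rank)

        # Synthetic rank-L Casorati matrix: voxels x time frames
        C = rng.standard_normal((500, L)) @ rng.standard_normal((L, 120))

        # (1) Temporal subspace from a small, frequently sampled subset
        nav = C[:32, :]
        _, _, Vt = np.linalg.svd(nav, full_matrices=False)
        V = Vt[:L, :]                      # estimated temporal basis

        # (2) Spatial coefficients by least squares against the data
        U = C @ np.linalg.pinv(V)
        err = np.linalg.norm(C - U @ V) / np.linalg.norm(C)
        print(f"relative reconstruction error: {err:.2e}")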

  1. Deconfounding the effects of local element spatial heterogeneity and sparsity on processing dominance.

    PubMed

    Montoro, Pedro R; Luna, Dolores

    2009-10-01

    Previous studies on the processing of hierarchical patterns (Luna & Montoro, 2008) have shown that altering the spatial relationships between the local elements affects processing dominance by decreasing the global advantage. In the present article, the authors examine whether heterogeneity or a sparse distribution of the local elements was responsible for this effect. In Experiments 1 and 2, the distance between the local elements was increased in a similar way, but the between-element distance was homogeneous in Experiment 1 and heterogeneous in Experiment 2. In Experiment 3, the size of the local elements was varied by presenting global patterns composed of uniformly large or small local elements, or of mixed large and small elements. The results show that, rather than element sparsity, spatial heterogeneity that could change the appearance of the global form as well as the salience of the local elements was the main factor impairing global processing.

  2. Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.

    PubMed

    Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D

    2017-11-01

    We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the ℓ1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.

  3. Visual tracking based on the sparse representation of the PCA subspace

    NASA Astrophysics Data System (ADS)

    Chen, Dian-bing; Zhu, Ming; Wang, Hui-li

    2017-09-01

    We construct a collaborative model of sparse representation and subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace; we then employ an L1 regularization term to promote sparsity of the residual, an L2 regularization term on the representation coefficients, and an L2 norm to constrain the distance between the reconstruction and the target. We implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum over the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiments, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
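
    In a generic notation (ours, not necessarily the authors'), the per-candidate objective described above can be written as

        \min_{z,\,e} \; \| y - U z - e \|_2^2 \;+\; \lambda_1 \| e \|_1 \;+\; \lambda_2 \| z \|_2^2

    where y is the observed candidate, U holds the PCA subspace basis, z the representation coefficients, and e the sparse residual that absorbs occlusion.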

  4. Unsupervised Deep Learning Applied to Breast Density Segmentation and Mammographic Risk Scoring.

    PubMed

    Kallenberg, Michiel; Petersen, Kersten; Nielsen, Mads; Ng, Andrew Y; Pengfei Diao; Igel, Christian; Vachon, Celine M; Holland, Katharina; Winkel, Rikke Rass; Karssemeijer, Nico; Lillholm, Martin

    2016-05-01

    Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used as the input to a simple classifier, two different tasks can be addressed: (i) breast density segmentation and (ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the model's capacity, a novel sparsity regularizer is introduced that incorporates both lifetime and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy to apply and generalizes to many other segmentation and scoring problems.

  5. Variance based joint sparsity reconstruction of synthetic aperture radar data for speckle reduction

    NASA Astrophysics Data System (ADS)

    Scarnati, Theresa; Gelb, Anne

    2018-04-01

    In observing multiple synthetic aperture radar (SAR) images of the same scene, it is apparent that the brightness distributions of the images are not smooth, but rather composed of complicated granular patterns of bright and dark spots. Further, these brightness distributions vary from image to image. This salt-and-pepper-like feature of SAR images, called speckle, reduces the contrast in the images and negatively affects texture-based image analysis. This investigation uses the variance-based joint sparsity reconstruction method for forming SAR images from the multiple SAR images. In addition to reducing speckle, the method has the advantage of being non-parametric and can therefore be used in a variety of autonomous applications. Numerical examples include reconstructions of both simulated phase-history data that result in speckled images and images from the MSTAR T-72 database.

  6. Offset-sparsity decomposition for enhancement of color microscopic image of stained specimen in histopathology: further results

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Popović Hadžija, Marijana; Hadžija, Mirko; Aralica, Gorana

    2016-03-01

    Recently, we proposed a novel data-driven offset-sparsity decomposition (OSD) method to increase the colorimetric difference between tissue structures present in color microscopic images of stained specimens in histopathology. The OSD method performs an additive decomposition of the vectorized spectral image into an image-adapted offset term and a sparse term, with the sparse term representing the enhanced image. The method was tested on images of histological slides of human liver stained with hematoxylin and eosin, anti-CD34 monoclonal antibody, and Sudan III. Herein, we present further results on the increase of colorimetric difference between tissue structures present in images of human liver specimens with pancreatic carcinoma metastasis stained with Gomori, CK7, CDX2, and LCA, and with colon carcinoma metastasis stained with Gomori, CK20, and PAN CK. The obtained relative increase of colorimetric difference ranges from 19.36% to 103.94%.
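
    In a generic form (our notation, under the stated additive model), the decomposition of a vectorized spectral image x can be posed as

        x = o\,\mathbf{1} + s, \qquad
        (\hat{o}, \hat{s}) = \arg\min_{o,\,s} \tfrac{1}{2} \| x - o\,\mathbf{1} - s \|_2^2 + \lambda \| s \|_1

    where, for a fixed offset o, the minimizing sparse term is the soft-thresholding of x - o1, and the recovered s serves as the enhanced image.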

  7. Gain-Sparsity and Symmetry-Forced Rigidity in the Plane.

    PubMed

    Jordán, Tibor; Kaszanitzky, Viktória E; Tanigawa, Shin-Ichi

    We consider planar bar-and-joint frameworks with discrete point group symmetry in which the joint positions are as generic as possible subject to the symmetry constraint. We provide combinatorial characterizations for symmetry-forced rigidity of such structures with rotation symmetry or dihedral symmetry of order 2k with odd k, unifying and extending previous work on this subject. We also explore the matroidal background of our results and show that the matroids induced by the row independence of the orbit matrices of the symmetric frameworks are isomorphic to gain-sparsity matroids defined on the quotient graph of the framework, whose edges are labeled by elements of the corresponding symmetry group. The proofs are based on new Henneberg-type inductive constructions of the gain graphs that correspond to the bases of the matroids in question, which can also be seen as symmetry-preserving graph operations in the original graph.

  8. A collaborative filtering recommendation algorithm based on weighted SimRank and social trust

    NASA Astrophysics Data System (ADS)

    Su, Chang; Zhang, Butao

    2017-05-01

    Collaborative filtering is one of the most widely used recommendation technologies, but the data-sparsity and cold-start problems of collaborative filtering algorithms are difficult to solve effectively. To alleviate the data-sparsity problem, we first propose a weighted, improved SimRank algorithm to compute the rating similarity between users in the rating data set; the improved SimRank can find more nearest neighbors for target users by exploiting the transitivity of rating similarity. We then build a trust network and introduce a trust-degree calculation over the trust-relationship data set. Finally, we combine rating similarity and trust into a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of collaborative filtering.
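
    A minimal sketch of the final combination step, assuming precomputed rating-similarity and trust-degree matrices; the linear blend and its weight alpha are illustrative, not necessarily the paper's exact rule:

        import numpy as np

        def comprehensive_similarity(sim_rating, trust, alpha=0.5):
            # Blend weighted-SimRank rating similarity with trust degree;
            # alpha is a hypothetical mixing weight.
            return alpha * sim_rating + (1 - alpha) * trust

        sim_rating = np.array([[1.0, 0.6], [0.6, 1.0]])
        trust = np.array([[1.0, 0.2], [0.2, 1.0]])
        print(comprehensive_similarity(sim_rating, trust))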

  9. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to handle difficult challenges effectively, such as occlusion or image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for combining the L0- and L1-regularized representations to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474
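
    In generic notation (ours), the coefficients z over the incrementally updated basis D are regularized by both norms:

        \min_{z} \; \| y - D z \|_2^2 \;+\; \lambda_0 \| z \|_0 \;+\; \lambda_1 \| z \|_1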

  10. Sparsity of the normal matrix in the refinement of macromolecules at atomic and subatomic resolution.

    PubMed

    Jelsch, C

    2001-09-01

    The normal matrix in the least-squares refinement of macromolecules is very sparse when the resolution reaches atomic and subatomic levels. The elements of the normal matrix, related to coordinates, thermal motion and charge-density parameters, have a global tendency to decrease rapidly with the interatomic distance between the atoms concerned. For instance, in the case of the protein crambin at 0.54 Å resolution, the elements are reduced by two orders of magnitude for distances above 1.5 Å. The neglect a priori of most of the normal-matrix elements according to a distance criterion represents an approximation in the refinement of macromolecules, which is particularly valid at very high resolution. The analytical expressions of the normal-matrix elements, which have been derived for the coordinates and the thermal parameters, show that the degree of matrix sparsity increases with the diffraction resolution and the size of the asymmetric unit.

  11. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator - notably the one used in the seminal work of Kheradmand et al. - is better characterized by an extremely heavy-tailed, non-convex distribution. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  12. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Chenchen; Martínez, Todd J.

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  13. Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise

    PubMed Central

    Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang

    2015-01-01

    The total variation (TV) regularization method is an effective method for image deblurring that preserves edges. However, TV-based solutions usually suffer from staircase effects. To alleviate these effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1 fidelity term and a TV with overlapping group sparsity (OGS) regularization term. Moreover, we impose a box constraint on the proposed model to obtain more accurate solutions. The model is solved within the framework of the alternating direction method of multipliers (ADMM), with an inner loop nested inside the majorization-minimization (MM) iteration for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method can significantly improve the restoration quality, both in terms of peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860
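
    Collecting the pieces above in a generic notation (ours), the restoration model reads

        \min_{u \in [0,255]^n} \; \| K u - f \|_1 \;+\; \mu\, \Phi_{\mathrm{OGS}}(\nabla u)

    where K is the blurring operator, f the observed image, the ℓ1 fidelity suits impulse noise, Φ_OGS is the overlapping-group-sparsity TV regularizer, and the box constraint keeps intensities in range.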

  14. Critical Domains of Culturally Relevant Leadership Learning: A Call to Transform Leadership Programs.

    PubMed

    Jones, Tamara Bertrand; Guthrie, Kathy L; Osteen, Laura

    2016-12-01

    This chapter introduces the critical domains of culturally relevant leadership learning. The model explores how capacity, identity, and efficacy of student leaders interact with dimensions of campus climate.

  15. The space transformation in the simulation of multidimensional random fields

    USGS Publications Warehouse

    Christakos, G.

    1987-01-01

    Space transformations are proposed as a mathematically meaningful and practically comprehensive approach to simulate multidimensional random fields. Within this context the turning bands method of simulation is reconsidered and improved in both the space and frequency domains.

  16. Frequency and time domain three-dimensional inversion of electromagnetic data for a grounded-wire source

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang; Son, Jeong-Sul

    2015-01-01

    We present frequency- and time-domain three-dimensional (3-D) inversion approaches that can be applied to transient electromagnetic (TEM) data from a grounded-wire source using a PC. In the direct time-domain approach, the forward solution and sensitivity were obtained in the frequency domain using a finite-difference technique, and the frequency response was then Fourier-transformed using a digital filter technique. In the frequency-domain approach, TEM data were Fourier-transformed using a smooth-spectrum inversion method, and the recovered frequency response was then inverted. The synthetic examples show that for the time derivative of magnetic field, frequency-domain inversion of TEM data performs almost as well as time-domain inversion, with a significant reduction in computational time. In our synthetic studies, we also compared the resolution capabilities of the ground and airborne TEM and controlled-source audio-frequency magnetotelluric (CSAMT) data resulting from a common grounded wire. An airborne TEM survey at 200-m elevation achieved a resolution for buried conductors almost comparable to that of the ground TEM method. It is also shown that the inversion of CSAMT data was able to detect a 3-D resistivity structure better than the TEM inversion, suggesting an advantage of electric-field measurements over magnetic-field-only measurements.

  17. Direct interaction of Ski with either Smad3 or Smad4 is necessary and sufficient for Ski-mediated repression of transforming growth factor-beta signaling.

    PubMed

    Ueki, Nobuhide; Hayman, Michael J

    2003-08-29

    The oncoprotein Ski represses transforming growth factor (TGF)-beta signaling in an N-CoR-independent manner. However, the molecular mechanism(s) underlying this event has not been elucidated. Here, we identify an additional domain in Ski that mediates interaction with Smad3 which is important for this repression. This domain is distinct from the previously reported N-terminal Smad3 binding domain in Ski. Individual alanine substitution of several residues in the domain significantly affected Ski-Smad3 interaction. Furthermore, combined mutations within this domain, together with those in the previously identified Smad3 binding domain, can completely abolish the interaction of Ski with Smad3, while mutation in each domain alone retained partial interaction. By introducing those mutations that abolish direct interaction with Smad3 or Smad4 individually, or in combination, we show that interaction of Ski with either Smad3 or Smad4 is sufficient for Ski-mediated repression of TGF-beta signaling. Furthermore our results clearly demonstrate that Ski does not disrupt Smad3-Smad4 heteromer formation, and recruitment of Ski to the Smad3/4 complex through binding to either Smad3 or Smad4 is both necessary and sufficient for repression.

  18. Recurrent (2;2) and (2;8) Translocations in Rhabdomyosarcoma without the Canonical PAX-FOXO1 fuse PAX3 to Members of the Nuclear Receptor Transcriptional Coactivator (NCOA) Family

    PubMed Central

    Sumegi, Janos; Streblow, Renae; Frayer, Robert W.; Cin, Paola Dal; Rosenberg, Andrew; Meloni-Ehrig, Aurelia; Bridge, Julia A.

    2009-01-01

    The fusion oncoproteins PAX3-FOXO1 [t(2;13)(q35;q14)] and PAX7-FOXO1 [t(1;13)(p36;q14)] typify alveolar rhabdomyosarcoma (ARMS); however, 20-30% of cases lack these specific translocations. In this study, cytogenetic and/or molecular characterization, including FISH, RT-PCR and sequencing analyses, of five rhabdomyosarcomas [four ARMS and one embryonal rhabdomyosarcoma (ERMS)] with novel, recurrent t(2;2)(p23;q35) or t(2;8)(q35;q13) revealed that these non-canonical translocations fuse PAX3 to NCOA1 or NCOA2, respectively. The PAX3-NCOA1 and PAX3-NCOA2 transcripts encode chimeric proteins composed of the paired-box and homeodomain DNA-binding domains of PAX3, and the CID domain, the Q-rich region and the AD2 domain of NCOA1 or NCOA2. To investigate the biological function of these recurrent variant translocations, the coding regions of PAX3-NCOA1 and PAX3-NCOA2 cDNA constructs were introduced into expression vectors with tetracycline-regulated expression. Both fusion proteins showed transforming activity in the soft agar assay. Deletion of the AD2 portion of the PAX3-NCOA fusion proteins reduced the transforming activity of each chimeric protein. Similarly, but with greater impact, CID domain deletion fully abrogated the transforming activity of the chimeric protein. These studies: (1) expand our knowledge of PAX3 variant translocations in RMS with identification of a novel PAX3-NCOA2 fusion; (2) show that both PAX3-NCOA1 and PAX3-NCOA2 represent recurrent RMS rearrangements; (3) confirm the transforming activity of both translocation events and demonstrate the essentiality of intact AD2 and CID domains for optimal transforming activity; and (4) provide alternative approaches (FISH and RT-PCR) for detecting PAX-NCOA fusions in nondividing cells of RMS. The latter could potentially be utilized as aids in diagnostically challenging cases. PMID:19953635

  19. Spectral analysis using CCDs

    NASA Technical Reports Server (NTRS)

    Hewes, C. R.; Brodersen, R. W.; De Wit, M.; Buss, D. D.

    1976-01-01

    Charge-coupled devices (CCDs) are ideally suited for performing sampled-data transversal filtering operations in the analog domain. Two algorithms have been identified for performing spectral analysis in which the bulk of the computation can be performed in a CCD transversal filter; the chirp z-transform and the prime transform. CCD implementation of both these transform algorithms is presented together with performance data and applications.

  20. Application of higher-order cepstral techniques in problems of fetal heart signal extraction

    NASA Astrophysics Data System (ADS)

    Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.

    1996-10-01

    Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated ECG complexes of mothers and fetuses, obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the maternal complex after transformation to the cepstral domain, are first obtained; this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation to the time domain result in fetal complex recovery. However, three problems have been identified with second-order-based cepstral techniques that need rectification in this paper. These are: (1) errors resulting from the phase unwrapping algorithms, leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussianity to non-Gaussianity, due to the highly non-linear nature of the homomorphic transform, which warrants stringent noise cancellation routines; (3) due to the aforementioned problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e., the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order-based high-resolution differential cepstrum technique results in recovery of delays on the order of 120 milliseconds.
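
    The complex cepstrum at the heart of the method can be sketched directly from its definition in numpy; the phase unwrapping below is the simple np.unwrap, which is exactly the step the paper identifies as error-prone, and the liftering cutoff is illustrative:

        import numpy as np

        def complex_cepstrum(x):
            # Complex cepstrum: inverse FFT of the complex log spectrum.
            X = np.fft.fft(x)
            log_X = np.log(np.abs(X)) + 1j * np.unwrap(np.angle(X))
            return np.fft.ifft(log_X).real

        def homomorphic_lowpass(x, cutoff):
            # Keep only low-quefrency content, then map back to the time
            # domain - a crude liftering stand-in for the paper's filtering.
            X = np.fft.fft(x)
            ceps = np.fft.ifft(np.log(np.abs(X)) + 1j * np.unwrap(np.angle(X)))
            ceps[cutoff:-cutoff] = 0.0          # lifter (requires cutoff < n/2)
            return np.fft.ifft(np.exp(np.fft.fft(ceps))).real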

  1. Secret shared multiple-image encryption based on row scanning compressive ghost imaging and phase retrieval in the Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2017-09-01

    A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.

  2. Detailed Vibration Analysis of Pinion Gear with Time-Frequency Methods

    NASA Technical Reports Server (NTRS)

    Mosher, Marianne; Pryor, Anna H.; Lewicki, David G.

    2003-01-01

    In this paper, the authors show a detailed analysis of the vibration signal from the destructive testing of a spiral bevel gear and pinion pair containing seeded faults. The vibration signal is analyzed in the time domain, the frequency domain, and with four time-frequency transforms: the Short Time Frequency Transform (STFT), the Wigner-Ville Distribution with the Choi-Williams kernel (WV-CW), the Continuous Wavelet Transform (CWT), and the Discrete Wavelet Transform (DWT). Vibration data of bevel gear tooth fatigue cracks, under a variety of operating load levels and damage conditions, are analyzed using these methods. A new metric for automatic anomaly detection is developed that can be produced from any systematic numerical representation of the vibration signals. On this data set, this new metric reveals indications of gear damage with all of the time-frequency transforms, as well as with the time and frequency representations. Analysis with the CWT detects changes in the signal at low torque levels not found with the other transforms. The WV-CW and CWT use considerably more resources than the STFT and the DWT. More testing of the new metric is needed to determine its value for automatic anomaly detection and to develop fault-detection methods based on the metric.

  3. Viscoelastic damped response of cross-ply laminated shallow spherical shells subjected to various impulsive loads

    NASA Astrophysics Data System (ADS)

    Şahan, Mehmet Fatih

    2017-11-01

    In this paper, the viscoelastic damped response of cross-ply laminated shallow spherical shells is investigated numerically in a transformed Laplace space. In the proposed approach, the governing differential equations of the cross-ply laminated shallow spherical shell are derived using the dynamic version of the principle of virtual displacements. The Laplace transform is then employed in the transient analysis of the viscoelastic laminated shell problem; damping can also be incorporated with ease in the transformed domain. The transformed, time-independent equations in the spatial coordinate are solved numerically by Gauss elimination, and the results are numerically inverted back to the time domain by the modified Durbin method. Verification of the presented method is carried out by comparing the results with those obtained by the Newmark method and the ANSYS finite element software. Furthermore, the developed solution approach is applied to problems with several impulsive loads. The novelty of the present study lies in the fact that a combination of the Navier method and the Laplace transform is employed in the analysis of cross-ply laminated shallow spherical viscoelastic shells. The numerical results show that the presented method is highly accurate and efficient and can easily be applied to laminated viscoelastic shell problems.

  4. Patient-centered medical home transformation with payment reform: patient experience outcomes.

    PubMed

    Heyworth, Leonie; Bitton, Asaf; Lipsitz, Stuart R; Schilling, Thad; Schiff, Gordon D; Bates, David W; Simon, Steven R

    2014-01-01

    To examine changes in patient experience across key domains of the patient-centered medical home (PCMH) following practice transformation with Lean quality improvement methodology inclusive of payment reform. Pre-intervention/post-intervention analysis of intervention with a comparison group, a quasi-experimental design. We surveyed patients following office visits at the intervention (n = 2502) and control (n = 1622) practices during the 15-month period before and 14-month period after PCMH Lean transformation (April-October 2009). We measured and compared pre-intervention and post-intervention levels of patient satisfaction and other indicators of patient-centered care. Propensity weights adjusted for potential case-mix differences in intervention and control groups; propensity-adjusted proportions accounted for physician-level clustering. More intervention patients were very satisfied with their care after the PCMH Lean intervention (68%) compared with pre-intervention (62%). Among control patients, there was no corresponding increase in satisfaction (63% very satisfied pre-intervention vs 64% very satisfied post-intervention). This comparison resulted in a statistical trend (P = .10) toward greater overall satisfaction attributable to the intervention. Post-intervention, patients in the intervention practice consistently rated indicators of patient-centered care higher than patients in the control practice, particularly in the personal physician and communication domain. In this domain, intervention patients reported superior provider explanations, time spent, provider concern, and follow-up instructions compared with control participants, whereas control group ratings fell in the post-intervention period (P for difference <.05). In a pilot PCMH transformation including Lean enhancement with payment reform, patient experience was sustained or improved across key PCMH domains.

  5. A hybrid spatial-spectral denoising method for infrared hyperspectral images using 2DPCA

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Ma, Yong; Mei, Xiaoguang; Fan, Fan

    2016-11-01

    The traditional noise reduction methods for 3-D infrared hyperspectral images typically operate independently in either the spatial or spectral domain, and such methods overlook the relationship between the two domains. To address this issue, we propose a hybrid spatial-spectral method in this paper to link both domains. First, principal component analysis and bivariate wavelet shrinkage are performed in the 2-D spatial domain. Second, 2-D principal component analysis transformation is conducted in the 1-D spectral domain to separate the basic components from detail ones. The energy distribution of noise is unaffected by orthogonal transformation; therefore, the signal-to-noise ratio of each component is used as a criterion to determine whether a component should be protected from over-denoising or denoised with certain 1-D denoising methods. This study implements the 1-D wavelet shrinking threshold method based on Stein's unbiased risk estimator, and the quantitative results on publicly available datasets demonstrate that our method can improve denoising performance more effectively than other state-of-the-art methods can.

  6. Track monitoring from the dynamic response of a passing train: A sparse approach

    NASA Astrophysics Data System (ADS)

    Lederman, George; Chen, Siheng; Garrett, James H.; Kovačević, Jelena; Noh, Hae Young; Bielak, Jacobo

    2017-06-01

    Collecting vibration data from revenue-service trains could be a low-cost way to monitor railroad tracks more frequently, yet operational variability makes robust analysis a challenge. We propose a novel analysis technique for track monitoring that exploits the sparsity inherent in train-vibration data. This sparsity is based on the observation that large vertical train vibrations typically involve the excitation of the train's fundamental mode by track joints, switchgear, or other discrete hardware. Rather than try to model the entire rail profile, in this study we examine a sparse approach to solving an inverse problem in which (1) the roughness is constrained to a discrete and limited set of "bumps" and (2) the train system is idealized as a simple damped oscillator that models the train's vibration in the fundamental mode. We use an expectation-maximization (EM) approach to iteratively solve for the track profile and the train system properties, using orthogonal matching pursuit (OMP) to find the sparse approximation within each step. By enforcing sparsity, the inverse problem is well posed, and the train's position can be found relative to the sparse bumps, thus reducing the uncertainty in the GPS data. We validate the sparse approach on two sections of track monitored from an operational train over a 16-month period, one where track changes did not occur during this period and another where changes did occur. We show that this approach not only can detect when track changes occur but also offers insight into the type of such changes.
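
    As a rough illustration of the sparse-approximation step inside each EM iteration, here is a compact NumPy sketch of orthogonal matching pursuit; in this setting the dictionary columns would be damped-oscillator responses to candidate bump locations, which is an assumption of the sketch rather than the paper's exact construction.

        import numpy as np

        def omp(D, y, k):
            # Orthogonal matching pursuit: greedily pick k columns of the
            # dictionary D that best explain y, re-fitting all selected
            # coefficients by least squares at every step.
            residual, support = y.astype(float).copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
                support.append(j)
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef
            x = np.zeros(D.shape[1])
            x[support] = coef
            return x, support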

  7. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.

    PubMed

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer, and other pathologies. For continuous-wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo, including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite-regularization-based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image requiring less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also closely match the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Fast evaluation of scaled opposite spin second-order Møller-Plesset correlation energies using auxiliary basis expansions and exploiting sparsity.

    PubMed

    Jung, Yousung; Shao, Yihan; Head-Gordon, Martin

    2007-09-01

    The scaled opposite spin Møller-Plesset method (SOS-MP2) is an economical way of obtaining correlation energies that are computationally cheaper, and yet, in a statistical sense, of higher quality than standard MP2 theory, by introducing one empirical parameter. But SOS-MP2 still has a fourth-order scaling step that makes the method inapplicable to very large molecular systems. We reduce the scaling of SOS-MP2 by exploiting the sparsity of expansion coefficients and local integral matrices, by performing local auxiliary basis expansions for the occupied-virtual product distributions. To exploit sparsity of 3-index local quantities, we use a blocking scheme in which entire zero-rows and columns, for a given third global index, are deleted by comparison against a numerical threshold. This approach minimizes sparse matrix book-keeping overhead, and also provides sufficiently large submatrices after blocking, to allow efficient matrix-matrix multiplies. The resulting algorithm is formally cubic scaling, and requires only moderate computational resources (quadratic memory and disk space) and, in favorable cases, is shown to yield effective quadratic scaling behavior in the size regime we can apply it to. Errors associated with local fitting using the attenuated Coulomb metric and numerical thresholds in the blocking procedure are found to be insignificant in terms of the predicted relative energies. A diverse set of test calculations shows that the size of system where significant computational savings can be achieved depends strongly on the dimensionality of the system, and the extent of localizability of the molecular orbitals. Copyright 2007 Wiley Periodicals, Inc.
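
    The row/column deletion idea is easy to sketch on a single 2-D slice of a 3-index quantity; in the sketch below (NumPy assumed), rows and columns whose largest entry falls below a numerical threshold are deleted before the dense multiply, so only surviving blocks are multiplied. Matrix names and the threshold are illustrative.

        import numpy as np

        def block_and_multiply(A, B, tau=1e-8):
            # For a fixed third (global) index, drop entire rows/columns of
            # the local matrices that are numerically zero, then multiply
            # only the surviving dense blocks.
            rows = np.max(np.abs(A), axis=1) > tau      # surviving rows of A
            cols = np.max(np.abs(B), axis=0) > tau      # surviving cols of B
            inner = (np.max(np.abs(A), axis=0) > tau) & (np.max(np.abs(B), axis=1) > tau)
            C = np.zeros((A.shape[0], B.shape[1]))
            C[np.ix_(rows, cols)] = A[np.ix_(rows, inner)] @ B[np.ix_(inner, cols)]
            return C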

  9. Phylogenetic distribution and evolutionary dynamics of the sex determination genes doublesex and transformer in insects.

    PubMed

    Geuverink, E; Beukeboom, L W

    2014-01-01

    Sex determination in insects is characterized by a gene cascade that is conserved at the bottom but contains diverse primary signals at the top. The bottom master-switch gene doublesex is found in all insects. Its upstream regulator transformer is present in the orders Hymenoptera, Coleoptera, and Diptera, but has thus far not been found in Lepidoptera or in the basal lineages of Diptera. transformer is presumed to be ancestral to the holometabolous insects based on its shared domains and conserved features of autoregulation and sex-specific splicing. We interpret its absence in basal lineages of Diptera and its order-specific conserved domains as indicating multiple independent losses or recruitments into the sex determination cascade. Duplications of transformer are found in derived families within the Hymenoptera, characterized by their complementary sex determination mechanism. As duplications are not found in any other insect order, they appear linked to the haplodiploid reproduction of the Hymenoptera. Further phylogenetic analyses combined with functional studies are needed to understand the evolutionary history of the transformer gene among insects. © 2013 S. Karger AG, Basel.

  10. Twinning induced by the rhombohedral to orthorhombic phase transition in lanthanum gallate (LaGaO3)

    NASA Astrophysics Data System (ADS)

    Wang, W. L.; Lu, H. Y.

    2006-10-01

    Phase-transformation-induced twins in pressureless-sintered lanthanum gallate (LaGaO3) ceramics have been analysed using transmission electron microscopy (TEM). Twins are induced by solid-state phase transformation upon cooling from the rhombohedral (r, R-3c) to orthorhombic (o, Pnma) symmetry at ~145 °C. Three types of transformation twins, {101}o, {121}o, and {123}o, were found in grains containing multiple domains that represent orientation variants. Three orthorhombic orientation variants were distinguished from the transformation domains converging at a triple junction. These twins are of the reflection type, as confirmed by tilting experiments in the microscope. Although not related by a group-subgroup relation, the transformation twins generated by the rhombohedral-to-orthorhombic phase transition are consistent with those derived by taking the cubic Pm-3m aristotype, the lowest common supergroup symmetry, as an intermediate metastable structure. The r→o phase transition, first order in nature, may have occurred by a diffusionless, martensitic-type or discontinuous nucleation-and-growth mechanism.

  11. Accurate artificial boundary conditions for the semi-discretized linear Schrödinger and heat equations on rectangular domains

    NASA Astrophysics Data System (ADS)

    Ji, Songsong; Yang, Yibo; Pang, Gang; Antoine, Xavier

    2018-01-01

    The aim of this paper is to design accurate artificial boundary conditions for the semi-discretized linear Schrödinger and heat equations in rectangular domains. The Laplace transform in time and the discrete Fourier transform in space are applied to obtain Green's functions of the semi-discretized equations in unbounded domains with a single source. An algorithm is given to compute these Green's functions accurately through recurrence relations. Furthermore, the finite-difference method is used to discretize the reduced problem with the accurate boundary conditions. Numerical simulations are presented to illustrate the accuracy of the method for the linear Schrödinger and heat equations. It is shown that the reflection at the corners is correctly eliminated.

  12. Mechanism of the α -ɛ phase transformation in iron

    NASA Astrophysics Data System (ADS)

    Dewaele, A.; Denoual, C.; Anzellini, S.; Occelli, F.; Mezouar, M.; Cordier, P.; Merkel, S.; Véron, M.; Rausch, E.

    2015-05-01

    The α-Fe ↔ ε-Fe pressure-induced transformation under purely hydrostatic, static compression has been characterized with in situ x-ray diffraction using α-Fe single crystals as starting samples. The forward transition starts at 14.9 GPa and the reverse at 12 GPa, with the width of the α-ε coexistence domain of the order of 2 GPa. The elastic stress in the sample increases in this domain and partially relaxes after completion of the transformation. Orientation relations between parent α-Fe and child ε-Fe have been determined, which definitively validates the Burgers path for the direct transition. On the reverse transition, an unexpected variant selection is observed. X-ray diffraction data, complemented with ex situ microstructural observations, suggest that this selection is caused by defects and stresses accumulated during the direct transition.

  13. High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform

    PubMed Central

    Chan, Kenny K. H.; Tang, Shuo

    2010-01-01

    The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth-dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by greater than 5 dB, concurrently with a 30-fold decrease in processing time, compared to the fast Fourier transform with cubic-spline interpolation. NFFT can also improve the local signal-to-noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to have the ability to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at a camera-limited 100 frames per second on an ex-vivo squid eye. PMID:21258551
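
    For intuition, the operation that the NFFT accelerates is the nonuniform discrete Fourier transform below: spectral samples at nonuniform wavenumbers are mapped straight to a uniform depth grid with no resampling interpolation. This direct NumPy version costs O(N^2); NFFT-style libraries evaluate the same sum in roughly O(N log N). Names and the sign convention are assumptions of the sketch.

        import numpy as np

        def nudft_depth_profile(k, spectrum, n_depth):
            # Evaluate the depth profile directly from spectral samples
            # taken at nonuniform wavenumbers k; no uniform-grid resampling,
            # hence no interpolation-induced sensitivity fall-off.
            z = np.arange(n_depth)               # uniform depth indices
            phase = np.exp(1j * np.outer(z, k))  # (n_depth, n_samples)
            return phase @ spectrum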

  14. Analysis of spike-wave discharges in rats using discrete wavelet transform.

    PubMed

    Ubeyli, Elif Derya; Ilbay, Gül; Sahin, Deniz; Ateş, Nurbay

    2009-03-01

    A feature is a distinctive or characteristic measurement, transform, or structural component extracted from a segment of a pattern. Features are used to represent patterns with the goal of minimizing the loss of important information. The discrete wavelet transform (DWT) was used as a feature extraction method in representing the spike-wave discharge (SWD) records of Wistar Albino Glaxo/Rijswijk (WAG/Rij) rats. The SWD records of WAG/Rij rats were decomposed into time-frequency representations using the DWT, and statistical features were calculated to depict their distribution. The obtained wavelet coefficients were used to identify characteristics of the signal that were not apparent from the original time-domain signal. The present study demonstrates that the wavelet coefficients are useful in determining the dynamics of SWD records in the time-frequency domain.
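
    A minimal sketch of this kind of DWT feature extraction, assuming PyWavelets; the wavelet, decomposition level, and the particular statistics are illustrative rather than the study's exact choices.

        import numpy as np
        import pywt

        def dwt_features(signal, wavelet='db4', level=4):
            # Decompose a signal segment with the DWT and summarize each
            # sub-band by simple statistics as a compact feature vector.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            feats = []
            for c in coeffs:                    # approximation + detail bands
                feats += [np.mean(np.abs(c)),   # average magnitude
                          np.std(c),            # spread
                          np.mean(c ** 2)]      # sub-band energy
            return np.array(feats)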

  15. Degradation diagnosis of transformer insulating oils with terahertz time-domain spectroscopy

    NASA Astrophysics Data System (ADS)

    Kang, Seung Beom; Kim, Won-Seok; Chung, Dong Chul; Joung, Jong Man; Kwak, Min Hwan

    2017-12-01

    We report the frequency-dependent complex optical constants, refractive index and absorption, and complex dielectric properties over the frequency range from 0.2 to 3.0 THz for aged power-transformer mineral insulating oils. These results have been obtained using terahertz time-domain spectroscopy (THz-TDS) and demonstrate the double-Debye relaxation behavior of the mineral insulating oil. The measured complex optical and dielectric characteristics can be important benchmarks for liquid molecular dynamics and theoretical studies of insulating oils. Owing to the clear differences in the THz responses of aged mineral insulating oils, THz-TDS can be used as a novel on-site diagnostic technique to monitor the insulation condition in aged power transformers and may be a valuable alternative for characterizing other developing eco-friendly insulating oils and industrial liquids.
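
    The double-Debye relaxation form referred to above is straightforward to encode; the sketch below contains only the model, with placeholder parameter names rather than the paper's fitted values.

        import numpy as np

        def double_debye(freq_thz, eps_static, eps_2, eps_inf, tau1_ps, tau2_ps):
            # eps(w) = eps_inf + (eps_s - eps_2)/(1 + i*w*tau1)
            #                  + (eps_2 - eps_inf)/(1 + i*w*tau2)
            w = 2 * np.pi * freq_thz * 1e12          # angular frequency, rad/s
            tau1, tau2 = tau1_ps * 1e-12, tau2_ps * 1e-12
            return (eps_inf
                    + (eps_static - eps_2) / (1 + 1j * w * tau1)
                    + (eps_2 - eps_inf) / (1 + 1j * w * tau2))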

  16. Bispectral Inversion: The Construction of a Time Series from Its Bispectrum

    DTIC Science & Technology

    1988-04-13

    …take the inverse transform. Since the goal is to compute a time series given its bispectrum, it would also be nice to stay entirely in the frequency… domain and be able to go directly from the bispectrum to the Fourier transform of the time series without the need to inverse transform continuous… the picture. The approximations arise from representing the bicovariance, which is the inverse transform of a continuous function, by the inverse discrete…

  17. Spatio-temporal phase retrieval in speckle interferometry with Hilbert transform and two-dimensional phase unwrapping

    NASA Astrophysics Data System (ADS)

    Li, Xiangyu; Huang, Zhanhua; Zhu, Meng; He, Jin; Zhang, Hao

    2014-12-01

    The Hilbert transform (HT) is widely used in temporal speckle pattern interferometry, but errors from low modulations might propagate and corrupt the calculated phase. A spatio-temporal method for phase retrieval using the temporal HT and spatial phase unwrapping is presented. In the time domain, the wrapped phase difference between the initial and current states is directly determined by using the HT. To avoid the influence of low modulation intensity, the phase information between the two states is ignored. As a result, the phase unwrapping is shifted from the time domain to the space domain. A phase unwrapping algorithm based on the discrete cosine transform is adopted, taking advantage of the information in adjacent pixels. An experiment is carried out with a Michelson-type interferometer to study the out-of-plane deformation field. High-quality whole-field phase distribution maps with different fringe densities are obtained. Under the experimental conditions, the maximum number of fringes resolvable in a 416×416 frame is 30, which indicates a 15λ deformation along the direction of loading.
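
    As a schematic 1-D illustration of the temporal step, SciPy's analytic signal gives the wrapped phase difference between two states of a temporal intensity record; the DCT-based spatial unwrapping that follows in the paper is not shown, and signal names are placeholders.

        import numpy as np
        from scipy.signal import hilbert

        def temporal_wrapped_phase(intensity_t0, intensity_t1):
            # Analytic signals of the mean-removed temporal records; the
            # angle of their conjugate product is the wrapped phase
            # difference, confined to (-pi, pi].
            a0 = hilbert(intensity_t0 - intensity_t0.mean())
            a1 = hilbert(intensity_t1 - intensity_t1.mean())
            return np.angle(a1 * np.conj(a0))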

  18. Predicting chroma from luma with frequency domain intra prediction

    NASA Astrophysics Data System (ADS)

    Egge, Nathan E.; Valin, Jean-Marc

    2015-03-01

    This paper describes a technique for performing intra prediction of the chroma planes based on the reconstructed luma plane in the frequency domain. This prediction exploits the fact that while RGB to YUV color conversion has the property that it decorrelates the color planes globally across an image, there is still some correlation locally at the block level [1]. Previous proposals compute a linear model of the spatial relationship between the luma plane (Y) and the two chroma planes (U and V) [2]. In codecs that use lapped transforms this is not possible since transform support extends across the block boundaries [3], and thus neighboring blocks are unavailable during intra-prediction. We design a frequency domain intra predictor for chroma that exploits the same local correlation with lower complexity than the spatial predictor and which works with lapped transforms. We then describe a low-complexity algorithm that directly uses luma coefficients as a chroma predictor based on gain-shape quantization and band partitioning. An experiment is performed that compares these two techniques inside the experimental Daala video codec and shows the lower-complexity algorithm to be a better chroma predictor.
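
    In gain-shape terms the predictor is tiny: the decoded luma band supplies the shape (unit direction) and only a per-band chroma gain is coded. A NumPy sketch, with names assumed:

        import numpy as np

        def cfl_predict(luma_band, chroma_gain):
            # Gain-shape chroma-from-luma: reuse the luma band's "shape"
            # (unit-norm coefficient vector), scaled by the coded chroma gain.
            shape = luma_band / (np.linalg.norm(luma_band) + 1e-12)
            return chroma_gain * shape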

  19. Embedding multiple watermarks in the DFT domain using low- and high-frequency bands

    NASA Astrophysics Data System (ADS)

    Ganic, Emir; Dexter, Scott D.; Eskicioglu, Ahmet M.

    2005-03-01

    Although semi-blind and blind watermarking schemes based on Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT) are robust to a number of attacks, they fail in the presence of geometric attacks such as rotation, scaling, and translation. The Discrete Fourier Transform (DFT) of a real image is conjugate symmetric, resulting in a symmetric DFT spectrum. Because of this property, the popularity of DFT-based watermarking has increased in the last few years. In a recent paper, we generalized a circular watermarking idea to embed multiple watermarks in lower and higher frequencies. Nevertheless, a circular watermark is visible in the DFT domain, providing a potential hacker with valuable information about the location of the watermark. In this paper, our focus is on embedding multiple watermarks that are not visible in the DFT domain. Using several frequency bands increases the overall robustness of the proposed watermarking scheme. Specifically, our experiments show that the watermark embedded in lower frequencies is robust to one set of attacks, and the watermark embedded in higher frequencies is robust to a different set of attacks.
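
    A minimal NumPy sketch of embedding one watermark into a chosen DFT frequency band of a grayscale image; the ring radii, strength alpha, and bit-to-gain mapping are illustrative. Conjugate-symmetric bins are modified in pairs so the watermarked image stays real-valued; a second call with a different ring would embed the second watermark.

        import numpy as np

        def embed_ring_watermark(image, bits, r_lo, r_hi, alpha=0.08):
            F = np.fft.fftshift(np.fft.fft2(image.astype(float)))
            h, w = image.shape
            cy, cx = h // 2, w // 2
            yy, xx = np.mgrid[0:h, 0:w]
            r = np.hypot(yy - cy, xx - cx)
            # Upper half-ring only; each bin's mirror partner is scaled
            # identically to preserve conjugate symmetry (real output).
            mask = (r >= r_lo) & (r < r_hi) & (yy > cy) & (xx > 0)
            for n, (y, x) in enumerate(np.argwhere(mask)):
                g = 1.0 + alpha * (1 if bits[n % len(bits)] else -1)
                F[y, x] *= g
                F[2 * cy - y, 2 * cx - x] *= g
            return np.real(np.fft.ifft2(np.fft.ifftshift(F)))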

  20. Mass type-specific sparse representation for mass classification in computer-aided detection on mammograms

    PubMed Central

    2013-01-01

    Background: Breast cancer is the leading cancer in both incidence and mortality in the female population. For this reason, much research effort has been devoted to developing Computer-Aided Detection (CAD) systems for early detection of breast cancers on mammograms. In this paper, we propose a novel dictionary configuration underpinning sparse representation based classification (SRC). The key idea of the proposed algorithm is to improve the sparsity in terms of mass margins for the purpose of improving classification performance in CAD systems. Methods: The aim of the proposed SRC framework is to construct separate dictionaries according to the types of mass margins. The underlying idea is that the separated dictionaries can enhance the sparsity of the mass class (true positive), leading to improved performance in differentiating mammographic masses from normal tissues (false positive). When a mass sample is given for classification, the sparse solutions based on the corresponding dictionaries are solved separately and combined at the score level. Experiments were performed on both the Digital Database for Screening Mammography (DDSM) and a clinical Full-Field Digital Mammogram (FFDM) database (DB). In our experiments, the sparsity concentration in the true class (SCTC) and the area under the receiver operating characteristic (ROC) curve (AUC) were measured to compare the proposed method with a conventional single-dictionary approach. In addition, a support vector machine (SVM), a state-of-the-art classifier extensively used for mass classification, was used for comparison. Results: Compared with the conventional single-dictionary configuration, the proposed approach improves SCTC by up to 13.9% and 23.6% on the DDSM and FFDM DBs, respectively. Moreover, the proposed method improves AUC by 8.2% and 22.1% on the DDSM and FFDM DBs, respectively. Compared with the SVM classifier, the proposed method improves AUC by 2.9% and 11.6% on the DDSM and FFDM DBs, respectively. Conclusions: The proposed dictionary configuration is found to improve the sparsity of the dictionaries effectively, resulting in enhanced classification performance. Moreover, the results show that the proposed method is better than the conventional SVM classifier for classifying breast masses with various margins from normal tissues. PMID:24564973
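
    A toy sketch of the dictionary-per-margin-type idea, assuming scikit-learn's OMP solver for the sparse codes; the fusion here simply keeps the best (smallest-residual) dictionary score, a simplification of the paper's score-level combination.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        def margin_src_score(y, dictionaries, n_nonzero=20):
            # One dictionary per mass-margin type; code the ROI vector y
            # against each and score by reconstruction residual.
            scores = []
            for D in dictionaries:                    # D: (dim, n_atoms)
                omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                                fit_intercept=False)
                omp.fit(D, y)
                resid = np.linalg.norm(y - D @ omp.coef_)
                scores.append(-resid)                 # smaller residual, higher score
            return max(scores)                        # simple score-level fusion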

  1. New mechanism of structuring associated with quasi-merohedral twinning, by the example of Ca1-xLaxF2+x ordered solid solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maksimov, S. K., E-mail: maksimov-sk@comtv.ru; Maksimov, K. S., E-mail: kuros@rambler.ru; Sukhov, N. D.

    Merohedry is considered an inseparable property of atomic structures and is used in the refinement of structural data in the course of determining the structure of compounds. Transformation of faulty structures, stimulated by a decrease of the system's cumulative energy, leads to the generation of merohedral-type twinning. Ordering is accompanied by the origin of antiphase domains. If the ordering is of the CuAu type, it is accompanied by tetragonal distortions along different (100) directions. If a crystal consists of a mosaic of nanodimensional antiphase domains, the conjugation of antiphase domains with different tetragonality leads to monoclinic distortions, with the conjugated domains distorted mirror-wise. Such a system undergoes further transformation by means of quasi-merohedral twinning. As a result of quasi-merohedry, straight lattice lines with different monoclinic distortions are transformed into coherent broken lattice lines, providing minimization of the cumulative energy. The structuring is controlled by the regularities of self-organization. However, the stochasticity of the ordering predetermines the origin of areas where several domains with different tetragonality are in contact, which leads to the origin of faulty fields impeding the regular progress of structuring. The resulting crystal has been found to be structurally non-uniform; furthermore, this structural non-uniformity permits identifying the elements and stages of the process. There is, however, no precondition preventing the emergence of homogeneous states. The effect has been revealed in the Ca1-xLaxF2+x solid solution, but it can be expected that distortions of the regular alternation of ions, similar to antiphase domains, can be obtained under non-equilibrium conditions in other compounds, and a similar quasi-merohedry effect could falsify the results of structural analysis.

  2. Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.

    PubMed

    Reena Benjamin, J; Jayasree, T

    2018-02-01

    In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image that is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, preserve the edges, and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift-invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift-invariant wavelet transforms is proposed. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift-invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole-brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. The results demonstrate that the proposed method enhances the directional features as well as fine edge details, and reduces redundant details, artifacts, and distortions.
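
    The PCA stage of such a cascade reduces to weighting the two registered source images by the dominant eigenvector of their 2x2 intensity covariance. The NumPy sketch below shows only this stage; the shift-invariant wavelet stage and the maximum fusion rule are omitted.

        import numpy as np

        def pca_fusion(img_a, img_b):
            # Stack the flattened intensities and take the principal
            # eigenvector of the 2x2 covariance as the fusion weights.
            X = np.stack([img_a.ravel(), img_b.ravel()])
            cov = np.cov(X)
            vals, vecs = np.linalg.eigh(cov)
            v = np.abs(vecs[:, np.argmax(vals)])   # dominant eigenvector
            w = v / v.sum()                        # normalized weights
            return w[0] * img_a + w[1] * img_b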

  3. Structure-property relationships of multiferroic materials: A nano perspective

    NASA Astrophysics Data System (ADS)

    Bai, Feiming

    The integration of sensors, actuators, and control systems is an ongoing process in a wide range of applications covering the automotive, medical, military, and consumer electronics markets. Four major families of ceramic and metallic actuators are under development: piezoelectrics, electrostrictors, magnetostrictors, and shape-memory alloys. All of these materials undergo at least two phase transformations with coupled thermodynamic order parameters. These transformations lead to complex domain wall behaviors, which are driven by electric fields (ferroelectrics), magnetic fields (ferromagnetics), or mechanical stress (ferroelastics) as the materials transform from nonferroic to ferroic states, contributing to the sensing and actuating capabilities. This research focuses on two multiferroic crystals, Pb(Mg1/3Nb2/3)O3-PbTiO3 and Fe-Ga, which are characterized by the co-existence and coupling of ferroelectric polarization and ferroelastic strain, or of ferromagnetization and ferroelastic strain. These materials break the conventional boundary between piezoelectrics and electrostrictors, or between magnetostrictors and shape-memory alloys. Upon application of a field, or in a poled condition, they yield not only a large strain but also a large strain-over-field ratio, which is desirable and of great benefit for advanced actuator and sensor applications. In this thesis, particular attention has been given to understanding the structure-property relationships of these two types of materials from the atomic to the nano/macro scale. X-ray and neutron diffraction were used to obtain the lattice structure and phase transformation characteristics. Piezoresponse and magnetic force microscopy were performed to establish the dependence of domain configurations on composition, thermal history, and applied fields. It has been found that polar nano regions (PNRs) make significant contributions to the enhanced electromechanical properties of PMN-x%PT crystals by assisting the intermediate phase transformation. With increasing PT concentration, an evolution from PNRs to polar nano domains (PNDs) to micron domains to macro domains was found. In addition, a domain hierarchy was observed for compositions near a morphotropic phase boundary (MPB) on length scales ranging from nanometers to millimeters. The existence of a domain hierarchy down to the nm scale fulfills the requirement of low domain wall energy, which is necessary for polarization rotation. Thus, upon applying an E-field along <001> direction(s) in a composition near the MPB, low-symmetry phase transitions (monoclinic or orthorhombic) can easily be induced. For PMN-30%PT, a complete E-T (electric field vs temperature) diagram has been established. As for Fe-x at.% Ga alloys, short-range Ga pairs serve as both magnetic and magnetoelastic defects, coupling magnetic domains with the bulk elastic strain and contributing to enhanced magnetostriction. Such short-range ordering was evidenced by a clear 2θ peak broadening in neutron scattering profiles near the A2-DO3 phase boundary. In addition, a strong degree of preferred [100] orientation was found in the magnetic domains of Fe-12 at.% Ga and Fe-20 at.% Ga alloys with the A2 or A2+DO3 structures, which clearly indicates a deviation from cubic symmetry; however, no domain alignment was found in Fe-25 at.% Ga with the DO3 structure. Furthermore, an increasing degree of domain fluctuation was found during magnetization rotation, which may be related to short-range Ga-pair clusters with a large local anisotropy constant due to a lower-symmetry structure.

  4. Gearsketch: An Adaptive Drawing-Based Learning Environment for the Gears Domain

    ERIC Educational Resources Information Center

    Leenaars, Frank A.; Joolingen, Wouter R.; Gijlers, Hannie; Bollen, Lars

    2014-01-01

    GearSketch is a learning environment for the gears domain, aimed at students in the final years of primary school. It is designed for use with a touchscreen device and is based on ideas from drawing-based learning and research on cognitive tutors. At the heart of GearSketch is a domain model that is used to transform learners' strokes into…

  5. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of discontinuous model data with adjustable sharpness and structure. This work was supported by the Sandia National Laboratories Seniors’ Council LDRD (Laboratory Directed Research and Development) program. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Company, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
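
    In one dimension, the orthogonal-projection route amounts to projecting model evaluations onto Legendre polynomials with Gauss-Legendre quadrature; one such expansion would be constructed on each side of the inferred discontinuity. The sketch below assumes a single uniform input on [-1, 1]; the expansion order and quadrature size are illustrative.

        import numpy as np
        from numpy.polynomial import legendre

        def pc_coefficients(model, order, n_quad=32):
            # Gauss-Legendre nodes and weights on [-1, 1] (uniform germ).
            x, wq = legendre.leggauss(n_quad)
            fx = np.array([model(xi) for xi in x])
            coeffs = []
            for k in range(order + 1):
                Pk = legendre.Legendre.basis(k)(x)   # P_k at the nodes
                norm = 2.0 / (2 * k + 1)             # integral of P_k^2 on [-1, 1]
                coeffs.append(np.dot(wq, fx * Pk) / norm)
            return np.array(coeffs)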

  6. Application of the Laplace-Borel transformation to the representation of analytical solutions of Duffing's equation

    NASA Technical Reports Server (NTRS)

    Truong, K. V.; Unal, Aynur; Tobak, M.

    1989-01-01

    Various features of the solutions of Duffing's equation are described using a representation of the solutions in the Laplace-Borel transform domain. An application of this technique is illustrated for the symmetry-breaking bifurcation of a hard spring.

  7. Time-Domain Computation Of Electromagnetic Fields In MMICs

    NASA Technical Reports Server (NTRS)

    Lansing, Faiza S.; Rascoe, Daniel L.

    1995-01-01

    Maxwell's equations solved on three-dimensional, conformal orthogonal grids by finite-difference techniques. Method of computing frequency-dependent electrical parameters of monolithic microwave integrated circuit (MMIC) involves time-domain computation of propagation of electromagnetic field in response to excitation by single pulse at input terminal, followed by computation of Fourier transforms to obtain frequency-domain response from time-domain response. Parameters computed include electric and magnetic fields, voltages, currents, impedances, scattering parameters, and effective dielectric constants. Powerful and efficient means for analyzing performance of even complicated MMICs.
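
    The post-processing step described above reduces to a ratio of spectra; a minimal NumPy sketch, with signal names assumed:

        import numpy as np

        def pulse_frequency_response(v_in_t, v_out_t, dt):
            # One pulse excitation yields the broadband response: divide
            # the output spectrum by the input spectrum, bin by bin.
            V_in = np.fft.rfft(v_in_t)
            V_out = np.fft.rfft(v_out_t)
            freqs = np.fft.rfftfreq(len(v_in_t), d=dt)
            H = V_out / np.where(np.abs(V_in) > 1e-12, V_in, 1e-12)
            return freqs, H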

  8. Influence of a perturbation in the Gyrator domain for a joint transform correlator-based encryption system

    NASA Astrophysics Data System (ADS)

    Vilardy, Juan M.; Millán, María. S.; Pérez-Cabré, Elisabet

    2017-08-01

    We present the results of noise and occlusion tests in the Gyrator domain (GD) for a joint transform correlator-based encryption system. This encryption system was recently proposed and implemented by using a fully phase nonzero-order joint transform correlator (JTC) and the Gyrator transform (GT). The decryption system was based on two successive GTs. In this paper, we perform several numerical simulations in order to test the performance and robustness of the JTC-based encryption-decryption system in the GD when the encrypted image is corrupted by noise or occlusion. The encrypted image is affected by additive and multiplicative noise. We also test the effect of data loss due to partial occlusion of the encrypted information. Finally, we evaluate the performance and robustness of the encryption-decryption system in the GD by using the root mean square error (RMSE) between the original image and the decrypted image when the encrypted image is degraded by noise or modified by occlusion.
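
    The evaluation metric itself is a one-liner; a sketch of the RMSE used above, assuming same-size grayscale arrays:

        import numpy as np

        def rmse(original, decrypted):
            # Root mean square error between original and decrypted images.
            diff = original.astype(float) - decrypted.astype(float)
            return np.sqrt(np.mean(diff ** 2))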

  9. Structural and electronic transformation in low-angle twisted bilayer graphene

    NASA Astrophysics Data System (ADS)

    Gargiulo, Fernando; Yazyev, Oleg V.

    2018-01-01

    Experiments on bilayer graphene unveiled a fascinating realization of stacking disorder where triangular domains with well-defined Bernal stacking are delimited by a hexagonal network of strain solitons. Here we show by means of numerical simulations that this is a consequence of a structural transformation of the moiré pattern inherent to twisted bilayer graphene taking place at twist angles θ below a crossover angle θ* = 1.2°. The transformation is governed by the interplay between the interlayer van der Waals interaction and the in-plane strain field, and is revealed by a change in the functional form of the twist energy density. This transformation unveils an electronic regime characteristic of vanishing twist angles in which the charge density converges, though not uniformly, to that of ideal bilayer graphene with Bernal stacking. On the other hand, the stacking domain boundaries form a distinct charge density pattern that provides the STM signature of the hexagonal solitonic network.

  10. Electromagnetic field scattering by a triangular aperture.

    PubMed

    Harrison, R E; Hyman, E

    1979-03-15

    The multiple Laplace transform has been applied to analysis and computation of scattering by a double triangular aperture. Results are obtained which match far-field intensity distributions observed in experiments. Arbitrary polarization components, as well as in-phase and quadrature-phase components, may be determined, in the transform domain, as a continuous function of distance from near to far-field for any orientation, aperture, and transformable waveform. Numerical results are obtained by application of numerical multiple inversions of the fully transformed solution.

  11. Numerical inverse Laplace transformation for determining the system response of linear systems in the time domain

    NASA Technical Reports Server (NTRS)

    Friedrich, R.; Drewelow, W.

    1978-01-01

    An algorithm is described that is based on the method of breaking the Laplace transform down into partial fractions, which are then inverse-transformed separately. The sum of the resulting partial time functions is the desired time function. Problems caused by the form of the equation system are largely limited by appropriate normalization using an auxiliary parameter. The practical limits of program application are reached when the degree of the denominator of the Laplace transform is seven to eight.
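
    A modern equivalent of this partial-fraction route takes a few lines with SciPy: scipy.signal.residue performs the decomposition, and each simple fraction r/(s - p) inverse-transforms to r*exp(p*t). The sketch assumes simple poles and a strictly proper transform; repeated poles would require additional t^n terms.

        import numpy as np
        from scipy.signal import residue

        def invert_laplace(num, den, t):
            # Partial fractions: num/den = sum_i r_i / (s - p_i) + k(s).
            r, p, _ = residue(num, den)
            # Each simple fraction inverse-transforms to r_i * exp(p_i * t);
            # the sum of the partial time functions is the desired response.
            t = np.asarray(t, dtype=float)
            y = sum(ri * np.exp(pi * t) for ri, pi in zip(r, p))
            return np.real_if_close(y)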

  12. Mean-Square Error Due to Gradiometer Field Measuring Devices

    DTIC Science & Technology

    1991-06-01

    …convolving the gradiometer data with the inverse transform of 1/T(α, β), applying an appropriate… Hence (2) may be expressed in the transform domain as… the inverse transform of 1/T(α, β) will not be possible because its inverse does not exist, and because it is a high-pass function its use in an inverse transform technique… ("…frequency measurements," Superconductor Applications: SQUIDs and Machines, B. B. Schwartz and S. Foner, Eds. New York: Plenum Press)…

  13. Two-Port Representation of a Linear Transmission Line in the Time Domain.

    DTIC Science & Technology

    1980-01-01

    …which is a rational function. To use the Prony procedure it is necessary to inverse transform the admittance functions. For the transmission line, most… impulse is a constant, the inverse transform of Y0(s) contains an impulse of value … Therefore, if we were to numerically inverse transform Y0(s), we… would remove this impulse and inverse transform the remainder Ỹ(s) = Y0(s) − … (23). The Prony procedure would then be applied to the result. Of course, an impulse…

  14. Saddlepoint Approximations in Conditional Inference

    DTIC Science & Technology

    1990-06-11

    Then the inverse transform can be written as (X, Y) = (T, q(T, Z)) for some function q. When the transform is not one to one, the domain should be… general regularity conditions described at the beginning of this section hold and that the solution t1 in (9) exists. Denote the inverse transform by (X, Y… density hn(t0 | z) are desired. Then the inverse transform (X, Y) = (T, q(T, Z)) exists and the variable v in the cumulant generating function K(u, v…

  15. Face recognition using slow feature analysis and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan

    2018-04-01

    In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then takes advantage of slow feature analysis for facial feature extraction. We name the new method, which combines slow feature analysis and the contourlet transform, CT-SFA. The experimental results on international standard face databases demonstrate that the new face recognition method is effective and competitive.
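
    For orientation, linear SFA itself is compact: whiten the feature time series, then keep the directions along which the temporal derivative varies least. The NumPy sketch below makes no attempt to reproduce the contourlet front end; the input layout (time x dimension) and output count are assumptions.

        import numpy as np

        def linear_sfa(X, n_out):
            # Center and whiten the (time x dim) signal X.
            X = X - X.mean(axis=0)
            vals, vecs = np.linalg.eigh(np.cov(X.T))
            W = vecs / np.sqrt(vals + 1e-12)         # whitening matrix
            Z = X @ W
            # Slow directions: smallest-variance directions of the
            # discrete time derivative of the whitened signal.
            dZ = np.diff(Z, axis=0)
            dvals, dvecs = np.linalg.eigh(np.cov(dZ.T))
            P = dvecs[:, np.argsort(dvals)[:n_out]]
            return Z @ P, W @ P                      # slow features, projection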

  16. An ontological model of the practice transformation process.

    PubMed

    Sen, Arun; Sinha, Atish P

    2016-06-01

    The patient-centered medical home is defined as an approach for providing comprehensive primary care that facilitates partnerships between individual patients and their personal providers. The current state of the practice transformation process is ad hoc, and no methodological basis exists for transforming a practice into a patient-centered medical home. Practices and hospitals somehow accomplish the transformation and send the transformation information to a certification agency, such as the National Committee for Quality Assurance, completely ignoring the development and maintenance of the processes that keep the medical home concept alive. Many recent studies point out that such a transformation is hard, as it requires ambitious whole-practice reengineering and redesign. As a result, practices suffer change fatigue in getting the transformation done. In this paper, we focus on the complexities of the practice transformation process and present a robust ontological model for practice transformation. The objective of the model is to create an understanding of the practice transformation process in terms of key process areas and their activities. We describe how our ontology captures the knowledge of the practice transformation process, elicited from domain experts, and also discuss how, in the future, that knowledge could be diffused across stakeholders in a healthcare organization. Our research is the first effort in practice transformation process modeling. To build an ontological model for practice transformation, we adopt the Methontology approach. Based on the literature, we first identify the key process areas essential for a practice transformation process to achieve certification status. Next, we develop the practice transformation ontology by creating key activities and precedence relationships among the key process areas using process maturity concepts. At each step, we employ a panel of domain experts to verify the intermediate representations of the ontology. Finally, we implement a prototype of the practice transformation ontology using Protégé. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Deep Correlated Holistic Metric Learning for Sketch-Based 3D Shape Retrieval.

    PubMed

    Dai, Guoxian; Xie, Jin; Fang, Yi

    2018-07-01

    How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is straightforward but nontrivial, since people do not always have the desired 3D query model at hand. Recently, wide-screen electronic devices have become prevalent in our daily lives, which makes sketch-based 3D shape retrieval a promising candidate due to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketches and 3D shapes. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between the sketch and 3D shape domains. The proposed DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations that map features from both domains into a new feature space. The proposed loss, comprising a discriminative loss and a correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the deep transformed features and maximizes the inter-class distance to a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across the different domains. Different from existing deep metric learning methods with a loss only at the output layer, the proposed DCHML is trained with losses at both a hidden layer and the output layer, further improving performance by encouraging features in the hidden layer to also have the desired properties. Our proposed method is evaluated on three benchmarks, the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks, and the experimental results demonstrate its superiority over state-of-the-art methods.
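
    To illustrate the structure of such a composite objective, the sketch below evaluates a contrastive-style discriminative term plus a cross-domain correlation term on one pair of already-mapped embeddings. It collapses the paper's per-domain discriminative terms into a single pairwise term, so it illustrates the loss composition, not the authors' exact formulation.

        import numpy as np

        def dchml_style_loss(f_sketch, f_shape, same_class, margin=1.0, lam=0.5):
            # Distance between the two domain-mapped embeddings.
            d = np.linalg.norm(f_sketch - f_shape)
            if same_class:
                discriminative = d ** 2                     # pull matched pair together
                correlation = d ** 2                        # align the two domains
            else:
                discriminative = max(0.0, margin - d) ** 2  # push mismatched pair apart
                correlation = 0.0
            return discriminative + lam * correlation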

  18. Identification of amino acids in the transmembrane and juxtamembrane domains of the platelet-derived growth factor receptor required for productive interaction with the bovine papillomavirus E5 protein.

    PubMed

    Petti, L M; Reddy, V; Smith, S O; DiMaio, D

    1997-10-01

    The bovine papillomavirus E5 protein forms a stable complex with the cellular platelet-derived growth factor (PDGF) beta receptor, resulting in receptor activation and cell transformation. Amino acids in both the putative transmembrane domain and extracytoplasmic carboxyl-terminal domain of the E5 protein appear important for PDGF receptor binding and activation. Previous analysis indicated that the transmembrane domain of the receptor was also required for complex formation and receptor activation. Here we analyzed receptor chimeras and point mutants to identify specific amino acids in the PDGF beta receptor required for productive interaction with the E5 protein. These receptor mutants were analyzed in murine Ba/F3 cells, which do not express endogenous receptor. Our results confirmed the importance of the transmembrane domain of the receptor for complex formation, receptor tyrosine phosphorylation, and mitogenic signaling in response to the E5 protein and established that the threonine residue in this domain is required for these activities. In addition, a positive charge in the extracellular juxtamembrane domain of the receptor was required for E5 interaction and signaling, whereas replacement of the wild-type lysine with either a neutral or acidic amino acid inhibited E5-induced receptor activation and transformation. All of the receptor mutants defective for activation by the E5 protein responded to acute treatment with PDGF and to stable expression of v-Sis, a form of PDGF. The required juxtamembrane lysine and transmembrane threonine are predicted to align precisely on the same face of an alpha helix packed in a left-handed coiled-coil geometry. These results establish that the E5 protein and v-Sis recognize distinct binding sites on the PDGF beta receptor and further clarify the nature of the interaction between the viral transforming protein and its cellular target.

  19. Characterization of System Level Single Event Upset (SEU) Responses using SEU Data, Classical Reliability Models, and Space Environment Data

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; Label, Kenneth; Campola, Michael; Xapsos, Michael

    2017-01-01

    We propose a method for the application of single event upset (SEU) data towards the analysis of complex systems using transformed reliability models (from the time domain to the particle fluence domain) and space environment data.
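
    As a simple instance of the time-to-fluence mapping underlying such transformed models, assuming a constant environment flux and a constant per-bit upset cross-section (both simplifications, not the authors' full method):

        import numpy as np

        def fluence_domain_reliability(sigma_cm2, flux_cm2_s, t_seconds):
            # Accumulated fluence replaces time as the reliability variable:
            # R(t) = exp(-sigma * flux * t) becomes R(Phi) = exp(-sigma * Phi).
            phi = flux_cm2_s * np.asarray(t_seconds)   # fluence, particles/cm^2
            return np.exp(-sigma_cm2 * phi)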

  20. The Simulation Realization of Pavement Roughness in the Time Domain

    NASA Astrophysics Data System (ADS)

    XU, H. L.; He, L.; An, D.

    2017-10-01

    Given the needs of dynamic studies of the vehicle-pavement system and of simulated vibration table tests, realistic simulation of pavement roughness is an important guarantee that calculations and tests reflect the actual situation. Using the power spectral density function, the simulation of pavement roughness can be realized by the inverse Fourier transform. The main idea of this method is that the spectrum amplitude and a random phase are obtained separately from the power spectrum, and the pavement roughness is then obtained in the time domain through the inverse fast Fourier transform (IFFT). In the process, the sampling interval (Δl) was 0.1 m and the number of sampling points (N) was 4096, which satisfied the accuracy requirements. Using this method, simulated pavement roughness results (grades A-H) were obtained in the time domain.
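
    A minimal NumPy realization of the procedure just described, using the stated Δl = 0.1 m and N = 4096; the ISO 8608-style PSD form and the roughness coefficient Gd(n0) are assumptions of this sketch.

        import numpy as np

        def pavement_profile(N=4096, dl=0.1, Gd_n0=64e-6, n0=0.1, w=2.0, seed=0):
            # One-sided PSD Gd(n) = Gd(n0) * (n/n0)^(-w); Gd_n0 sets the grade.
            rng = np.random.default_rng(seed)
            n = np.fft.rfftfreq(N, d=dl)              # spatial frequencies, cycles/m
            Gd = np.zeros_like(n)
            Gd[1:] = Gd_n0 * (n[1:] / n0) ** (-w)     # no DC component
            dn = 1.0 / (N * dl)                       # frequency resolution
            amp = np.sqrt(2.0 * Gd * dn) * (N / 2)    # harmonic amplitude -> FFT bin
            phase = rng.uniform(0.0, 2.0 * np.pi, n.size)
            return np.fft.irfft(amp * np.exp(1j * phase), n=N)   # profile, meters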
