Science.gov

Sample records for adaptive wavelet denoising

  1. Baseline Adaptive Wavelet Thresholding Technique for sEMG Denoising

    NASA Astrophysics Data System (ADS)

    Bartolomeo, L.; Zecca, M.; Sessa, S.; Lin, Z.; Mukaeda, Y.; Ishii, H.; Takanishi, Atsuo

    2011-06-01

    The surface Electromyography (sEMG) signal is affected by different sources of noise: current technology is considerably robust to power-line interference and cable motion artifacts, but the baseline and movement artifact noise remain a limitation. In particular, these sources have frequency spectra that also include the low-frequency components of the sEMG spectrum; therefore, a standard all-bandwidth filtering could alter important information. The wavelet denoising method has been demonstrated to be a powerful solution for processing white Gaussian noise in biological signals. In this paper we introduce a new technique for denoising the sEMG signal: by using the baseline of the signal before the task, we estimate the thresholds to apply in the wavelet thresholding procedure. The experiments were performed on ten healthy subjects, by placing the electrodes on the Extensor Carpi Ulnaris and Triceps Brachii of the right lower and upper arm and performing a flexion and extension of the right wrist. An Inertial Measurement Unit, developed in our group, was used to recognize the movements of the hands in order to segment the exercise and the pre-task baseline. Finally, we show the better performance of the proposed method, in terms of noise cancellation and signal distortion, quantified by a newly suggested indicator of denoising quality, compared with the standard Donoho technique.
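
    The central idea of this record, deriving one threshold per detail level from the pre-task baseline segment instead of from a universal rule, can be sketched roughly as follows. The wavelet, the decomposition depth, the 3-sigma rule and soft thresholding are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import pywt

def baseline_thresholds(baseline, wavelet="db4", level=4):
    """Estimate one threshold per detail level from a pre-task baseline segment."""
    coeffs = pywt.wavedec(baseline, wavelet, level=level)
    # Assumption: use 3x the standard deviation of each baseline detail band.
    return [3.0 * np.std(d) for d in coeffs[1:]]

def denoise_with_baseline(signal, baseline, wavelet="db4", level=4):
    thresholds = baseline_thresholds(baseline, wavelet, level)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    denoised = [coeffs[0]] + [
        pywt.threshold(d, t, mode="soft")
        for d, t in zip(coeffs[1:], thresholds)
    ]
    return pywt.waverec(denoised, wavelet)

# Toy usage: a quiet pre-task baseline followed by a noisy task segment.
rng = np.random.default_rng(0)
baseline = 0.05 * rng.standard_normal(1024)
task = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1024)) + 0.05 * rng.standard_normal(1024)
clean = denoise_with_baseline(task, baseline)
```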

  2. [Adaptive de-noising of ECG signal based on stationary wavelet transform].

    PubMed

    Dong, Hong-sheng; Zhang, Ai-hua; Hao, Xiao-hong

    2009-03-01

    To address the limitations of wavelet threshold de-noising, we propose an algorithm combining the stationary wavelet transform with adaptive filtering. The stationary wavelet transform effectively suppresses the Gibbs phenomena of the traditional DWT, and an adaptive filter is introduced at the coarse-scale wavelet coefficients of the stationary wavelet transform. The method removes baseline wander while preserving the shape of the low-frequency, low-amplitude P wave, T wave and ST segment of the ECG signal, which is important for the subsequent analysis of other ECG features.
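
    As a rough illustration of pairing the stationary (undecimated) wavelet transform with level-dependent processing, the sketch below soft-thresholds the SWT detail coefficients and zeroes the coarsest approximation to reduce baseline wander. The wavelet, level count, threshold rule and the crude baseline treatment are illustrative assumptions, not the authors' adaptive filter.

```python
import numpy as np
import pywt

def swt_denoise(ecg, wavelet="sym4", level=5):
    # pywt.swt needs a length divisible by 2**level; pad at the end if necessary.
    pad = (-len(ecg)) % (2 ** level)
    x = np.pad(ecg, (0, pad), mode="edge")
    coeffs = pywt.swt(x, wavelet, level=level)            # [(cA_n, cD_n), ..., (cA_1, cD_1)]
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745     # noise scale from finest details
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    out = []
    for i, (cA, cD) in enumerate(coeffs):
        if i == 0:
            cA = np.zeros_like(cA)   # crude baseline-wander suppression at the coarsest scale
        out.append((cA, pywt.threshold(cD, thr, mode="soft")))
    return pywt.iswt(out, wavelet)[: len(ecg)]
```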

  3. Wavelet-based adaptive denoising and baseline correction for MALDI TOF MS.

    PubMed

    Shin, Hyunjin; Sampat, Mehul P; Koomen, John M; Markey, Mia K

    2010-06-01

    Proteomic profiling by MALDI TOF mass spectrometry (MS) is an effective method for identifying biomarkers from human serum/plasma, but the process is complicated by the presence of noise in the spectra. In MALDI TOF MS, the major noise source is chemical noise, which is defined as the interference from matrix material and its clusters. Because chemical noise is nonstationary and nonwhite, wavelet-based denoising is more effective than conventional noise reduction schemes based on Fourier analysis. However, current wavelet-based denoising methods for mass spectrometry do not fully consider the characteristics of chemical noise. In this article, we propose new wavelet-based high-frequency noise reduction and baseline correction methods that were designed based on the discrete stationary wavelet transform. The high-frequency noise reduction algorithm adaptively estimates the time-varying threshold for each frequency subband from multiple realizations of chemical noise and removes noise from mass spectra of samples using the estimated thresholds. The baseline correction algorithm computes the monotonically decreasing baseline in the highest approximation of the wavelet domain. The experimental results demonstrate that our algorithms effectively remove artifacts in mass spectra that are due to chemical noise while preserving informative features as compared to commonly used denoising methods.

  4. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391
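
    A minimal sketch of the two-stage idea described here (wavelet-packet thresholding followed by a band-pass filter over the frequency range of interest) is shown below. The wavelet, decomposition depth, threshold rule and the 1-8 kHz pass band are placeholder assumptions, not the paper's tuned settings.

```python
import numpy as np
import pywt
from scipy.signal import butter, sosfiltfilt

def denoise_birdsong(audio, fs, wavelet="dmey", maxlevel=5, band=(1000.0, 8000.0)):
    # Stage 1: soft-threshold the wavelet-packet leaf coefficients.
    wp = pywt.WaveletPacket(data=audio, wavelet=wavelet, mode="symmetric", maxlevel=maxlevel)
    leaves = wp.get_level(maxlevel, order="freq")
    sigma = np.median(np.abs(leaves[-1].data)) / 0.6745   # assumed noise-scale estimate
    thr = sigma * np.sqrt(2 * np.log(len(audio)))
    for node in leaves:
        node.data = pywt.threshold(node.data, thr, mode="soft")
    cleaned = wp.reconstruct(update=False)[: len(audio)]
    # Stage 2: band-pass to the assumed frequency range of the target species.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, cleaned)
```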

  5. Application of wavelet analysis in laser Doppler vibration signal denoising

    NASA Astrophysics Data System (ADS)

    Lan, Yu-fei; Xue, Hui-feng; Li, Xin-liang; Liu, Dan

    2010-10-01

    Numerous experiments show that external disturbances, excessive roughness of the measured surface and other factors cause the vibration signal detected by the laser Doppler technique to carry complex information with a low SNR, so that the Doppler frequency shift cannot be measured and the Doppler phase cannot be demodulated. This paper first analyzes the model and features of the laser Doppler signal in vibration testing, and then studies the three most commonly used wavelet denoising techniques: the modulus maxima wavelet denoising method, the spatial correlation denoising method and the wavelet threshold denoising method. We apply the three methods to measured vibration signals in MATLAB simulations. The results show that the wavelet modulus maxima method is advantageous at low SNR for laser Doppler vibration signals mixed with white noise and containing many singularities; the spatial correlation method is better suited to laser Doppler vibration signals whose noise level is not very high and has better edge-reconstruction capability; and the wavelet threshold method offers wide adaptability, computational efficiency and a good denoising effect. Specifically, in the wavelet threshold method we estimate the original noise variance by the spatial correlation method, use an adaptive threshold, and make certain practical amendments. Tests show that, compared with conventional threshold denoising, this method extracts the features of the laser Doppler vibration signal more effectively.

  6. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not yield good image quality because their thresholds cannot modify and remove many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold; it overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
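
    For context, a NeighShrink-style rule shrinks each detail coefficient by the factor max(0, 1 - lambda^2/S^2), where S^2 is the summed squared coefficients in a small window and lambda is the universal threshold. The sketch below applies that classical rule to the 2-D detail subbands; the 3x3 window, the MAD noise estimate and the wavelet choice are standard assumptions, not the thresholds of the method proposed in this record.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def neighshrink_subband(detail, lam):
    """Shrink one detail subband using the summed squares S^2 over a 3x3 neighbourhood."""
    s2 = uniform_filter(detail ** 2, size=3) * 9.0        # sum of squares in the window
    factor = np.clip(1.0 - lam ** 2 / np.maximum(s2, 1e-12), 0.0, None)
    return detail * factor

def neighshrink_denoise(image, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745     # MAD estimate from finest diagonal band
    lam = sigma * np.sqrt(2 * np.log(image.size))
    new_coeffs = [coeffs[0]]
    for (cH, cV, cD) in coeffs[1:]:
        new_coeffs.append(tuple(neighshrink_subband(sb, lam) for sb in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)
```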

  7. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing parallel processors and wavelet denoising techniques. Data are read and displayed in different formats. The data are partitioned into regions, and the regions are distributed onto the processors. Communication requirements among the processors are determined according to the wavelet denoising technique and the partitioning of the data. The data are transformed onto different multiresolution levels with the wavelet transform, according to the wavelet denoising technique and the communication requirements, producing transformed data containing wavelet coefficients. The denoised data are then transformed back into the original format for reading and display.

  8. An Adaptive Wavelet-Based Denoising Algorithm for Enhancing Speech in Non-stationary Noise Environment

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Ching

    Traditional wavelet-based speech enhancement algorithms are ineffective in the presence of highly non-stationary noise because the local noise spectrum is difficult to estimate accurately. In this paper, a simple noise estimation method employing a voice activity detector (VAD) is proposed, and the output of a wavelet-based speech enhancement algorithm in the presence of random noise bursts is improved according to the VAD decision. The noisy speech is first preprocessed using bark-scale wavelet packet decomposition (BSWPD) to convert the noisy signal into wavelet coefficients (WCs). A VAD based on a bark-scale spectral entropy parameter, called BS-Entropy, is found to be superior to energy-based approaches, especially under variable noise levels. The wavelet coefficient threshold (WCT) of each subband is then adjusted over time according to the VAD result. In a speech-dominated frame, the speech is categorized as either voiced or unvoiced: a voiced frame possesses a strong tone-like spectrum in the lower subbands, so the lower-band WCs must be preserved, whereas the lower-band WCT is increased if the frame is categorized as unvoiced. In a noise-dominated frame, the background noise can be almost completely removed by increasing the WCT. Objective and subjective experimental results are then used to evaluate the proposed system. The experiments show that the algorithm is effective under various noise conditions, especially colored and non-stationary noise.
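
    The frame-wise logic described here, an entropy-based voice-activity decision that lowers the threshold in the low bands of speech frames and raises it elsewhere, might be sketched along the following lines. The entropy cut-off, the scaling factors, the base threshold and the use of an ordinary wavelet packet in place of the paper's bark-scale decomposition are all simplifying assumptions.

```python
import numpy as np
import pywt

def frame_entropy(energies):
    p = energies / (energies.sum() + 1e-12)
    return -np.sum(p * np.log(p + 1e-12))

def vad_wavelet_denoise(speech, frame_len=512, wavelet="db4", maxlevel=4,
                        entropy_cut=1.8, base_thr=0.02):
    speech = np.asarray(speech, dtype=float)
    out = np.zeros(len(speech), dtype=float)
    for start in range(0, len(speech) - frame_len + 1, frame_len):
        frame = speech[start:start + frame_len]
        wp = pywt.WaveletPacket(data=frame, wavelet=wavelet, maxlevel=maxlevel)
        nodes = wp.get_level(maxlevel, order="freq")
        energies = np.array([np.sum(n.data ** 2) for n in nodes])
        speech_like = frame_entropy(energies) < entropy_cut   # tonal frames have low entropy
        for i, node in enumerate(nodes):
            thr = base_thr
            if speech_like and i < len(nodes) // 4:
                thr *= 0.25          # preserve low-band detail in speech-dominated frames
            elif not speech_like:
                thr *= 4.0           # suppress aggressively in noise-dominated frames
            node.data = pywt.threshold(node.data, thr, mode="soft")
        out[start:start + frame_len] = wp.reconstruct(update=False)[:frame_len]
    return out
```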

  9. Denoising seismic data using wavelet methods: a comparison study

    NASA Astrophysics Data System (ADS)

    Hloupis, G.; Vallianatos, F.

    2009-04-01

    To derive onset times, amplitudes or other useful characteristics from a seismogram, the usual denoising procedure involves a linear pass-band filter. This family of filters is zero-phase, which is desirable with respect to phase properties, but their efficiency is reduced when transients exist near the seismic signals. An alternative is the Wiener filter, which minimizes the mean square error between the recorded and the expected signal; its main disadvantage is the assumption that signal and noise are stationary. This assumption does not hold for seismic signals, which motivates denoising solutions that do not assume stationarity. Solutions based on the wavelet transform have proved effective for denoising problems in several areas. Here we present recent wavelet denoising methods (WDM) that will later be applied to seismic sequences of the Seismological Network of Crete. Wavelet denoising schemes have proved to be well adapted to several types of signals, and for non-stationary signals such as seismograms the use of linear and non-linear wavelet denoising methods seems promising. The contribution of this study is a comparison of wavelet denoising methods suitable for seismic signals, which previous studies have shown to be superior to appropriate conventional filtering techniques. The importance of wavelet denoising methods relies on two facts: they recover the seismic signals with fewer artifacts than conventional filters (for high-SNR seismograms), and at the same time they provide satisfactory representations for detecting the earthquake's primary arrival in low-SNR seismograms or microearthquakes. The latter is very important for the possible development of an automatic procedure for the regular daily detection of small or non-regional earthquakes, especially when the number of stations is large. Initially, their performance is measured over a database of synthetic seismic signals in order to evaluate the better wavelet

  10. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise and other human-made electromagnetic noise), which degrades the imaging quality used for data interpretation. Based on the characteristics of the GREATEM data and the major noises, we propose a de-noising algorithm utilizing a wavelet threshold method and exponential adaptive window width-fitting. First, the white noise in the measured data is filtered using the wavelet threshold method. Then the data are segmented using windows whose step lengths follow even logarithmic intervals. Within each window, the data polluted by electromagnetic noise are identified based on an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm is adopted to fit the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with the fitting results, so that the non-stationary electromagnetic noise is effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window width-fitting algorithm, which enhances the imaging quality.
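
    The window-fitting stage can be illustrated roughly as below: the decay curve is segmented into logarithmically spaced windows, windows flagged as noisy by a simple energy test are replaced by a fitted exponential, and the rest are kept. The energy criterion, the window count and the log-linear fit are assumptions for the sketch, not the algorithm's exact discriminating rule.

```python
import numpy as np

def fit_window_exponential(t, y):
    """Least-squares fit of y ~ a*exp(b*t) via a log-linear fit (assumes y > 0)."""
    b, log_a = np.polyfit(t, np.log(np.maximum(y, 1e-12)), 1)
    return np.exp(log_a) * np.exp(b * t)

def window_fit_denoise(t, decay, n_windows=20, energy_factor=3.0):
    edges = np.logspace(np.log10(t[1]), np.log10(t[-1]), n_windows + 1)
    out = decay.copy()
    slices, window_std = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((t >= lo) & (t < hi))[0]
        if len(idx) > 3:
            slices.append(idx)
            window_std.append(np.std(decay[idx]))
    ref = np.median(window_std)
    for idx, s in zip(slices, window_std):
        if s > energy_factor * ref:          # window judged to be polluted by EM noise
            out[idx] = fit_window_exponential(t[idx], decay[idx])
    return out
```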

  11. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, making the algorithm suitable for separating a pure ECG signal from noise when their frequency ranges overlap but their energy distributions differ. A stop criterion for the iterative decomposition process in the AFD is calculated from the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated with an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive white Gaussian noise. Simulation results show that the proposed method performs better, in terms of denoising and QRS detection, than major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition.

  12. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul; Janier, Josefina B.; Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or from data collection through observations. Collected data are usually a mixture of the true data and some error or noise, which may come from the measuring apparatus or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One efficient method for filtering the data is the wavelet transform. Because the received solar radiation data fluctuate over time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), the thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. Numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.

  13. Raman spectral data denoising based on wavelet analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    As a kind of molecular scattering spectroscopy, Raman spectroscopy (RS) is characterized by frequency shifts that carry information about the molecule. RS has broad applications in biological, chemical, environmental and industrial fields. However, signals in Raman spectral analysis often contain noise, which greatly influences the achievement of accurate analytical results, so de-noising of RS signals is an important part of spectral analysis. The wavelet transform has become established, alongside the Fourier transform, as a data-processing method in analytical fields; its main applications are de-noising, compression, variable reduction, and signal suppression. In de-noising of Raman spectroscopy, a wavelet is chosen to construct the de-noising function because of its excellent properties. In this paper, a biorthogonal (bior) wavelet is adopted to remove the noise in Raman spectra; it eliminates the noise clearly and the result is satisfactory. This method can provide a basis for practical de-noising of Raman spectra.

  14. Denoising time-domain induced polarisation data using wavelet techniques

    NASA Astrophysics Data System (ADS)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving signal to noise ratio in such environments is by using analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet based denoising techniques for processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be significantly avoided with the use of wavelet based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  15. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  16. Microarray image enhancement by denoising using stationary wavelet transform.

    PubMed

    Wang, X H; Istepanian, Robert S H; Song, Yong Hua

    2003-12-01

    Microarray imaging is considered an important tool for large-scale analysis of gene expression. The accuracy of the gene expression measurement depends on the experiment itself and on the subsequent image processing. It is well known that noise introduced during the experiment greatly affects the accuracy of gene expression, and eliminating the effect of this noise constitutes a challenging problem in microarray analysis. Traditionally, statistical methods are used to estimate the noise while the microarray images are being processed. In this paper, we present a new approach to deal with the noise inherent in the microarray image processing procedure: denoising the images before further processing using the stationary wavelet transform (SWT). The time-invariant characteristic of the SWT is particularly useful in image denoising. Testing on sample microarray images has shown an enhanced image quality, and the results also show superior performance over the conventional discrete wavelet transform and the widely used adaptive Wiener filter in this procedure.

  17. Adaptive Image Denoising by Mixture Adaptation

    NASA Astrophysics Data System (ADS)

    Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.

    2016-10-01

    We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.

  18. Impedance cardiography signal denoising using discrete wavelet transform.

    PubMed

    Chabchoub, Souhir; Mansouri, Sofienne; Salah, Ridha Ben

    2016-09-01

    Impedance cardiography (ICG) is a non-invasive technique for diagnosing cardiovascular diseases. In the acquisition procedure, the ICG signal is often affected by several kinds of noise which distort the determination of the hemodynamic parameters; as a result, doctors cannot recognize the ICG waveform correctly and the diagnosis of cardiovascular diseases becomes inaccurate. The aim of this work is to choose the most suitable method for denoising the ICG signal. To this end, different wavelet families are used to denoise the ICG signal. The Haar, Daubechies (db2, db4, db6, and db8), Symlet (sym2, sym4, sym6, sym8) and Coiflet (coif2, coif3, coif4, coif5) wavelet families are tested and evaluated in order to select the most suitable denoising method. The wavelet family with the best performance is compared with two denoising methods: one based on Savitzky-Golay filtering and the other based on median filtering. Each method is evaluated by means of the signal to noise ratio (SNR), the root mean square error (RMSE) and the percent difference root mean square (PRD). The results show that the Daubechies wavelet family (db8) has superior noise-reduction performance in comparison to the other methods. PMID:27376722
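
    The three evaluation metrics named here have standard definitions and can be computed, for example, as follows against a clean reference signal (a generic sketch; the paper's exact conventions are not restated).

```python
import numpy as np

def snr_db(reference, denoised):
    """Output signal-to-noise ratio in dB against a known clean reference."""
    residual = reference - denoised
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(residual ** 2))

def rmse(reference, denoised):
    """Root mean square error."""
    return np.sqrt(np.mean((reference - denoised) ** 2))

def prd(reference, denoised):
    """Percent root-mean-square difference."""
    return 100.0 * np.sqrt(np.sum((reference - denoised) ** 2) / np.sum(reference ** 2))
```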

  19. Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data

    NASA Technical Reports Server (NTRS)

    Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.

    1999-01-01

    Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then using wavelet denoising of the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-ν spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small mode widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
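
    A rough recipe for the pipeline described here, a Slepian-taper multitaper spectrum followed by wavelet denoising of its logarithm, is sketched below with SciPy's DPSS tapers and PyWavelets. The taper count, time-bandwidth product, wavelet and threshold are illustrative choices, not the GONG processing parameters.

```python
import numpy as np
import pywt
from scipy.signal.windows import dpss

def multitaper_spectrum(x, n_tapers=5, nw=3.0):
    tapers = dpss(len(x), NW=nw, Kmax=n_tapers)            # Slepian (DPSS) tapers
    spectra = [np.abs(np.fft.rfft(taper * x)) ** 2 for taper in tapers]
    return np.mean(spectra, axis=0)                        # averaged eigenspectra

def denoise_log_spectrum(power, wavelet="sym8", level=6):
    logp = np.log(power + 1e-30)                           # log spectrum is roughly Gaussian
    coeffs = pywt.wavedec(logp, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(logp)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return np.exp(pywt.waverec(coeffs, wavelet)[: len(logp)])
```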

  20. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    PubMed Central

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT module, a thresholding module, and an inverse DWT (IDWT) module. We also propose a novel adaptive thresholding scheme and incorporate it into our wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its noise-reduction ability could be further validated in actual practice. Simulation experiments performed by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit not only meets the requirement of real-time processing, but also achieves satisfactory noise-reduction performance, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz. PMID:26501290

  21. Bayesian wavelet-based image denoising using the Gauss-Hermite expansion.

    PubMed

    Rahman, S M Mahbubur; Ahmad, M Omair; Swamy, M N S

    2008-10-01

    The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. Conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only; consequently, such PDFs cannot be made to fit very well with the empirical PDF of the wavelet coefficients of an image, and a shrinkage function utilizing any of these density functions provides substandard denoising performance. In order for the probabilistic model of the image wavelet coefficients to incorporate an appropriate number of parameters that depend on the higher-order moments, a PDF using a series expansion in terms of the Hermite polynomials, which are orthogonal with respect to the standard Gaussian weight function, is introduced. A modification of the series function is introduced so that only a finite number of terms is needed to model the image wavelet coefficients while ensuring that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than some standard PDFs, such as the generalized Gaussian or the Bessel K-form. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, in both the subband-adaptive and locally adaptive conditions, performs better than most of the methods that use PDFs with a limited number of parameters.

  22. Application of Wavelet Analysis Technique in the Signal Denoising of Life Sign Detection

    NASA Astrophysics Data System (ADS)

    Zhen, Zhang; Fang, Liu

    In life sign detection, the radar echo signal is very weak and hard to extract. To solve this problem, de-noising of weak life signals based on the wavelet transform is studied. Through study of the wavelet threshold de-noising method, its use for de-noising weak life signals in a strong noise background, and verification by Matlab simulation, the results show that the wavelet threshold de-noising method can effectively remove the noise from weak life signals and is an effective de-noising and extraction method for weak life signals.

  23. Examining Alternatives to Wavelet Denoising for Astronomical Source Finding

    NASA Astrophysics Data System (ADS)

    Jurek, R.; Brown, S.

    2012-08-01

    The Square Kilometre Array and its pathfinders ASKAP and MeerKAT will produce prodigious amounts of data that necessitate automated source finding. The performance of automated source finders can be improved by pre-processing a dataset. In preparation for the WALLABY and DINGO surveys, we have used a test HI datacube constructed from actual Westerbork Telescope noise and WHISP HI galaxies to test the real world improvement of linear smoothing, the Duchamp source finder's wavelet denoising, iterative median smoothing and mathematical morphology subtraction, on intensity threshold source finding of spectral line datasets. To compare these pre-processing methods we have generated completeness-reliability performance curves for each method and a range of input parameters. We find that iterative median smoothing produces the best source finding results for ASKAP HI spectral line observations, but wavelet denoising is a safer pre-processing technique. In this paper we also present our implementations of iterative median smoothing and mathematical morphology subtraction.
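
    As a point of reference, iterative median smoothing can be sketched as repeated application of a small median filter until the result stabilises. The kernel size, iteration cap and convergence tolerance below are assumptions, not the parameters tuned in the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def iterative_median_smooth(cube, size=3, max_iter=20, tol=1e-4):
    """Repeatedly median-filter an array (e.g. an HI datacube) until changes become small."""
    smoothed = np.asarray(cube, dtype=float)
    for _ in range(max_iter):
        nxt = median_filter(smoothed, size=size)
        if np.max(np.abs(nxt - smoothed)) < tol * np.max(np.abs(smoothed)):
            smoothed = nxt
            break
        smoothed = nxt
    return smoothed
```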

  24. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good quality electrocardiogram (ECG) is utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises such as baseline wander, power-line interference, and electromagnetic interference during the gathering and recording process. As ECG signals are non-stationary physiological signals, the wavelet transform has been shown to be an effective tool for discarding noise from corrupted signals. A new compromise threshold function, a sigmoid-function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more efficient than existing algorithms for ECG signal denoising. The signal to noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the denoised ECG signals coincide with those of the original ECG signals when the new proposed method is employed.
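
    The paper's exact sigmoid threshold function is not reproduced in this abstract, so the sketch below shows one representative sigmoid shrinkage rule with the stated properties (continuous at plus/minus T, negligible bias for large coefficients), applied in a standard DWT denoising loop. The wavelet, level and steepness parameter are assumptions.

```python
import numpy as np
import pywt

def sigmoid_shrink(w, thr, k=10.0):
    """Sigmoid shrinkage: ~0 well below thr, ~w well above thr, continuous at +/-thr."""
    return w / (1.0 + np.exp(-k * (np.abs(w) - thr) / thr))

def ecg_denoise_sigmoid(ecg, wavelet="db6", level=5):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(ecg)))
    coeffs = [coeffs[0]] + [sigmoid_shrink(c, thr) for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(ecg)]
```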

  25. [An improved wavelet threshold algorithm for ECG denoising].

    PubMed

    Liu, Xiuling; Qiao, Lei; Yang, Jianli; Dong, Bin; Wang, Hongrui

    2014-06-01

    Owing to acquisition conditions and environmental factors, electrocardiogram (ECG) signals are usually contaminated by noise during signal acquisition, so eliminating noise from ECG signals is crucial for intelligent ECG analysis. On the basis of the wavelet transform, the threshold parameters were improved and a more appropriate threshold expression was proposed. The discrete wavelet coefficients were processed using the improved threshold parameters, accurate noise-free wavelet coefficients were obtained through the inverse discrete wavelet transform, and more of the original signal coefficients could thereby be preserved. The MIT-BIH arrhythmia database was used to validate the method. Simulation results showed that the improved method achieves a better denoising effect than the traditional ones. PMID:25219225

  26. Optimization of dynamic measurement of receptor kinetics by wavelet denoising.

    PubMed

    Alpert, Nathaniel M; Reilhac, Anthonin; Chio, Tat C; Selesnick, Ivan

    2006-04-01

    The most important technical limitation affecting dynamic measurements with PET is low signal-to-noise ratio (SNR). Several reports have suggested that wavelet processing of receptor kinetic data in the human brain can improve the SNR of parametric images of binding potential (BP). However, it is difficult to fully assess these reports because objective standards have not been developed to measure the tradeoff between accuracy (e.g. degradation of resolution) and precision. This paper employs a realistic simulation method that includes all major elements affecting image formation. The simulation was used to derive an ensemble of dynamic PET ligand (11C-raclopride) experiments that was subjected to wavelet processing. A method for optimizing wavelet denoising is presented and used to analyze the simulated experiments. Using optimized wavelet denoising, SNR of the four-dimensional PET data increased by about a factor of two and SNR of three-dimensional BP maps increased by about a factor of 1.5. Analysis of the difference between the processed and unprocessed means for the 4D concentration data showed that more than 80% of voxels in the ensemble mean of the wavelet processed data deviated by less than 3%. These results show that a 1.5x increase in SNR can be achieved with little degradation of resolution. This corresponds to injecting about twice the radioactivity, a maneuver that is not possible in human studies without saturating the PET camera and/or exposing the subject to more than permitted radioactivity.

  27. Energy-based wavelet de-noising of hydrologic time series.

    PubMed

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task because of the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte-Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in the series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method, and the suitable decomposition level should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or by WTD; however, such a series shows purely random rather than autocorrelated character, so de-noising is no longer needed.
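
    One rough reading of the energy-based idea: compare the energy of each decomposition level with the distribution obtained from Monte-Carlo realisations of pure noise, and keep only levels whose energy exceeds an upper confidence bound. The sketch below uses white Gaussian realisations, a MAD noise-scale estimate and a 95% bound as assumptions.

```python
import numpy as np
import pywt

def energy_based_denoise(series, wavelet="db4", level=6, n_mc=500, q=0.95, seed=0):
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec(series, wavelet, level=level)
    energies = np.array([np.mean(c ** 2) for c in coeffs[1:]])
    # Monte-Carlo background: per-level energy distribution of white noise of matched scale.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    mc = np.zeros((n_mc, level))
    for i in range(n_mc):
        noise = rng.normal(scale=sigma, size=len(series))
        nc = pywt.wavedec(noise, wavelet, level=level)
        mc[i] = [np.mean(c ** 2) for c in nc[1:]]
    bound = np.quantile(mc, q, axis=0)
    kept = [coeffs[0]] + [
        c if e > b else np.zeros_like(c)
        for c, e, b in zip(coeffs[1:], energies, bound)
    ]
    return pywt.waverec(kept, wavelet)[: len(series)]
```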

  28. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

    Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: low contrast between tissues, low spatial resolution and low signal-to-noise ratio. This Thesis studies the enhancement of these images, in particular denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise, and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis; in addition, the Wiener filter and the non-local means filter (NLM), operating in the image domain, are used as a reference. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of imaging equipment in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution is determined, as a function of the noise level and the quantization step, in the digitization process of films; and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better

  29. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D; Lanier, R

    2007-10-29

    The investigation of wavelet analysis techniques as a means of filtering the gross-count signal obtained from radiation detectors has shown promise. These signals are contaminated with high frequency statistical noise and significantly varying background radiation levels. Wavelet transforms allow a signal to be split into its constituent frequency components without losing relative timing information. Initial simulations and an injection study have been performed. Additionally, acquisition and analysis software has been written which allowed the technique to be evaluated in real-time under more realistic operating conditions. The technique performed well when compared to more traditional triggering techniques with its performance primarily limited by false alarms due to prominent features in the signal. An initial investigation into the potential rejection and classification of these false alarms has also shown promise.

  30. Improved deadzone modeling for bivariate wavelet shrinkage-based image denoising

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2016-05-01

    Modern image processing performed on-board low Size, Weight, and Power (SWaP) platforms must provide high performance while simultaneously reducing memory footprint, power consumption, and computational complexity. Image preprocessing, along with downstream image exploitation algorithms such as object detection and recognition, and georegistration, places a heavy burden on power and processing resources. Image preprocessing often includes image denoising to improve data quality for downstream exploitation algorithms. High-performance image denoising is typically performed in the wavelet domain, where noise generally spreads and the wavelet transform compactly captures high information-bearing image characteristics. In this paper, we improve the modeling fidelity of a previously developed, computationally efficient wavelet-based denoising algorithm. The modeling improvements enhance denoising performance without significantly increasing computational cost, thus making the approach suitable for low-SWaP platforms. Specifically, this paper presents modeling improvements to the Sendur-Selesnick model (SSM), which implements a bivariate wavelet shrinkage denoising algorithm that exploits interscale dependency between wavelet coefficients. We formulate optimization problems for the parameters controlling deadzone size, which leads to improved denoising performance. Two formulations are provided: one with a simple closed-form solution, which we use for numerical result generation, and the second an integral-equation formulation involving elliptic integrals. We generate image denoising performance results over different image sets drawn from public domain imagery, and investigate the effect of wavelet filter tap length on denoising performance. We demonstrate denoising performance improvement when using the enhanced modeling over the performance obtained with the baseline SSM model.
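
    For reference, the baseline Sendur-Selesnick bivariate shrinkage rule estimates a coefficient w1 from itself and its parent w2 as w1 * max(0, sqrt(w1^2 + w2^2) - sqrt(3)*sigma_n^2/sigma) / sqrt(w1^2 + w2^2). The sketch below applies this baseline rule to one subband, with the parent already resampled to the child's grid; the local variance window is an assumed choice, and the deadzone refinements proposed in this record are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bivariate_shrink(child, parent, sigma_n, win=7):
    """Sendur-Selesnick bivariate shrinkage for one detail subband and its upsampled parent."""
    # Local marginal signal std from the child's neighbourhood, corrected for noise variance.
    local_var = uniform_filter(child ** 2, size=win)
    sigma_s = np.sqrt(np.maximum(local_var - sigma_n ** 2, 1e-12))
    r = np.sqrt(child ** 2 + parent ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma_s, 0.0) / np.maximum(r, 1e-12)
    return child * gain
```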

  31. The application study of wavelet packet transformation in the de-noising of dynamic EEG data.

    PubMed

    Li, Yifeng; Zhang, Lihui; Li, Baohui; Wei, Xiaoyang; Yan, Guiding; Geng, Xichen; Jin, Zhao; Xu, Yan; Wang, Haixia; Liu, Xiaoyan; Lin, Rong; Wang, Quan

    2015-01-01

    This paper briefly describes the basic principle of wavelet packet analysis and, on this basis, introduces the general principle of wavelet packet transformation for signal de-noising. Dynamic EEG data recorded under +Gz acceleration are de-noised using the wavelet packet transformation, and the de-noising effects obtained with different thresholds are compared. The study verifies the validity and practical value of the wavelet packet threshold method for de-noising dynamic EEG data under +Gz acceleration. PMID:26405863

  32. Class of Fibonacci-Daubechies-4-Haar wavelets with applicability to ECG denoising

    NASA Astrophysics Data System (ADS)

    Smith, Christopher B.; Agaian, Sos S.

    2004-05-01

    The presented paper introduces a new class of wavelets that includes the simplest Haar wavelet (Daubechies-2) as well as the Daubechies-4 wavelet. This class is shown to have several properties similar to the Daubechies wavelets. In application, the new class of wavelets has been shown to effectively denoise ECG signals. In addition, the paper introduces a new polynomial soft threshold technique for denoising through wavelet shrinkage. The polynomial soft threshold technique is able to represent a wide class of polynomial behaviors, including classical soft thresholding.

  33. Wavelet denoising of multiframe optical coherence tomography data

    PubMed Central

    Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2012-01-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising of the final averaged image. Instead, it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-at-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103

  34. Wavelet-domain TI Wiener-like filtering for complex MR data denoising.

    PubMed

    Hu, Kai; Cheng, Qiaocui; Gao, Xieping

    2016-10-01

    Magnetic resonance (MR) images are affected by random noise, which degrades many image processing and analysis tasks. It has been shown that the noise in magnitude MR images follows a Rician distribution. Unlike additive Gaussian noise, this noise is signal-dependent and consequently difficult to reduce, especially in low signal-to-noise ratio (SNR) images. Wirestam et al. in [20] proposed a Wiener-like filtering technique in the wavelet domain to reduce noise before construction of the magnitude MR image. Based on Wirestam's study, we propose a wavelet-domain translation-invariant (TI) Wiener-like filtering algorithm for noise reduction in complex MR data. The proposed denoising algorithm offers the following improvements over Wirestam's method: (1) we introduce the TI property into the Wiener-like filtering in the wavelet domain to suppress artifacts caused by translations of the signal; (2) we integrate Stein's Unbiased Risk Estimator (SURE) thresholding with two Wiener-like filters to make the hard-thresholding scale adaptive; and (3) the first Wiener-like filter is used to filter the original noisy image, in which the noise obeys a Gaussian distribution, and it provides more reasonable results. The proposed algorithm is applied to denoise the real and imaginary parts of complex MR images. To evaluate our proposed algorithm, we conduct extensive denoising experiments using T1-weighted simulated MR images, diffusion-weighted (DW) phantom data and in vivo data. We compare our algorithm with other popular denoising methods, and the results demonstrate that our algorithm outperforms the others in terms of both efficiency and robustness. PMID:27238055

  35. GPU-based cone-beam reconstruction using wavelet denoising

    NASA Astrophysics Data System (ADS)

    Jin, Kyungchan; Park, Jungbyung; Park, Jongchul

    2012-03-01

    The scattering noise artifact resulting from low-dose projections in repetitive cone-beam CT (CBCT) scans decreases the image quality and lessens the accuracy of the diagnosis. To improve the image quality of low-dose CT imaging, statistical filtering is effective for noise reduction; however, filtering and enhancing the image throughout the entire reconstruction process is challenging because of the heavy computation involved. The general reconstruction algorithm for CBCT data is filtered back-projection, which for a volume of 512×512×512 takes up to a few minutes on a standard system. To speed up reconstruction, the massively parallel architecture of current graphics processing units (GPUs) is a platform suitable for accelerating the mathematical calculations. In this paper, we focus on accelerating wavelet denoising and Feldkamp-Davis-Kress (FDK) back-projection using parallel processing on the GPU, utilizing the compute unified device architecture (CUDA) platform, and we implement CBCT reconstruction based on the CUDA technique. Finally, we evaluate our implementation on clinical tooth data sets. The resulting implementation of wavelet denoising is able to process a 1024×1024 image within 2 ms, excluding the data loading process, and our GPU-based CBCT implementation reconstructs a 512×512×512 volume from 400 projections in less than 1 minute.

  20. Pulsar Signal Denoising Method Based on Laplace Distribution in No-subsampling Wavelet Packet Domain

    NASA Astrophysics Data System (ADS)

    Wenbo, Wang; Yanchao, Zhao; Xiangli, Wang

    2016-11-01

    In order to improve the denoising of pulsar signals, a new denoising method is proposed in the no-subsampling wavelet packet domain based on a local Laplace prior model. First, we characterize the wavelet packet coefficient distribution of the true noise-free pulsar signal and construct a Laplace probability density function model for the true signal's wavelet packet coefficients. Then, we estimate the denoised wavelet packet coefficients from the noisy pulsar wavelet coefficients using the maximum a posteriori criterion. Finally, we obtain the denoised pulsar signal through no-subsampling wavelet packet reconstruction of the estimated coefficients. The experimental results show that the proposed method performs better than the translation-invariant wavelet denoising method when calculating the pulsar time of arrival.
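
    The MAP shrinkage rule implied by a Laplacian signal prior in Gaussian noise has the well-known closed form of soft thresholding with threshold sqrt(2)*sigma_n^2/sigma_s. The Python sketch below applies that rule with a locally estimated signal spread; PyWavelets' stationary wavelet transform is used only as a convenient undecimated stand-in for the paper's no-subsampling wavelet packet transform, and the helper names, window size and db8 wavelet are assumptions.

      import numpy as np
      import pywt

      def laplace_map_shrink(d, sigma_n, win=33):
          # MAP estimate for Laplacian-distributed coefficients in Gaussian noise:
          # soft thresholding with threshold sqrt(2) * sigma_n**2 / sigma_s, where
          # sigma_s is estimated locally from a sliding-window variance.
          d = np.asarray(d, dtype=float)
          kernel = np.ones(win) / win
          local_var = np.convolve(d ** 2, kernel, mode="same")
          sigma_s = np.sqrt(np.maximum(local_var - sigma_n ** 2, 1e-12))
          thr = np.sqrt(2.0) * sigma_n ** 2 / sigma_s
          return np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

      def denoise_pulsar_profile(profile, wavelet="db8", level=4):
          # profile length should be a multiple of 2**level for pywt.swt
          coeffs = pywt.swt(profile, wavelet, level=level)
          sigma_n = np.median(np.abs(coeffs[-1][1])) / 0.6745  # MAD noise estimate
          shrunk = [(cA, laplace_map_shrink(cD, sigma_n)) for cA, cD in coeffs]
          return pywt.iswt(shrunk, wavelet)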

  1. Denoising of X-ray pulsar observed profile in the undecimated wavelet domain

    NASA Astrophysics Data System (ADS)

    Xue, Meng-fan; Li, Xiao-ping; Fu, Ling-zhong; Liu, Xiu-ping; Sun, Hai-feng; Shen, Li-rong

    2016-01-01

    The low intensity of the X-ray pulsar signal and the strong X-ray background radiation lead to low signal-to-noise ratio (SNR) of the X-ray pulsar observed profile obtained through epoch folding, especially when the observation time is not long enough. This signifies the necessity of denoising of the observed profile. In this paper, the statistical characteristics of the X-ray pulsar signal are studied, and a signal-dependent noise model is established for the observed profile. Based on this, a profile noise reduction method by performing a local linear minimum mean square error filtering in the un-decimated wavelet domain is developed. The detail wavelet coefficients are rescaled by multiplying their amplitudes by a locally adaptive factor, which is the local variance ratio of the noiseless coefficients to the noisy ones. All the nonstationary statistics needed in the algorithm are calculated from the observed profile, without a priori information. The results of experiments, carried out on simulated data obtained by the ground-based simulation system and real data obtained by Rossi X-Ray Timing Explorer satellite, indicate that the proposed method is excellent in both noise suppression and preservation of peak sharpness, and it also clearly outperforms four widely accepted and used wavelet denoising methods, in terms of SNR, Pearson correlation coefficient and root mean square error.
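
    The locally adaptive rescaling described above, where each detail coefficient is multiplied by the local ratio of noiseless to noisy variance, is essentially a local linear MMSE gain. A 1D Python sketch using PyWavelets' stationary wavelet transform as the undecimated transform is shown below; the window length, wavelet choice and the simple MAD-based noise variance are assumptions, whereas the paper derives the signal-dependent noise statistics from the observed profile itself.

      import numpy as np
      import pywt

      def llmmse_rescale(cD, noise_var, win=31):
          # Rescale detail coefficients by the local ratio var_signal / var_noisy.
          kernel = np.ones(win) / win
          var_noisy = np.convolve(cD ** 2, kernel, mode="same")   # local E[d^2]
          var_signal = np.maximum(var_noisy - noise_var, 0.0)     # noiseless part
          return cD * var_signal / np.maximum(var_noisy, 1e-12)

      def denoise_profile(profile, wavelet="sym4", level=3):
          # profile length must be a multiple of 2**level for the undecimated transform
          coeffs = pywt.swt(profile, wavelet, level=level)
          d_fine = coeffs[-1][1]
          noise_var = (np.median(np.abs(d_fine)) / 0.6745) ** 2   # MAD estimate
          out = [(cA, llmmse_rescale(cD, noise_var)) for cA, cD in coeffs]
          return pywt.iswt(out, wavelet)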

  2. Multiadaptive Bionic Wavelet Transform: Application to ECG Denoising and Baseline Wandering Reduction

    NASA Astrophysics Data System (ADS)

    Sayadi, Omid; Shamsollahi, Mohammad B.

    2007-12-01

    We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform (MABWT), that can be applied to ECG signals in order to remove noise from them under a wide range of variations for noise. By using the definition of bionic wavelet transform and adaptively determining both the center frequency of each scale and the corresponding adaptation function, the problem of desired signal decomposition is solved. Applying a new proposed thresholding rule works successfully in denoising the ECG. Moreover by using the multiadaptation scheme, lowpass noisy interference effects on the baseline of ECG will be removed as a direct task. The method was extensively clinically tested with real and simulated ECG signals which showed high performance of noise reduction, comparable to those of wavelet transform (WT). Quantitative evaluation of the proposed algorithm shows that the average SNR improvement of MABWT is 1.82 dB more than the WT-based results, for the best case. Also the procedure has largely proved advantageous over wavelet-based methods for baseline wandering cancellation, including both DC components and baseline drifts.

  3. Median Modified Wiener Filter for nonlinear adaptive spatial denoising of protein NMR multidimensional spectra

    PubMed Central

    Cannistraci, Carlo Vittorio; Abbas, Ahmed; Gao, Xin

    2015-01-01

    Denoising multidimensional NMR-spectra is a fundamental step in NMR protein structure determination. The state-of-the-art method uses wavelet-denoising, which may suffer when applied to non-stationary signals affected by Gaussian-white-noise mixed with strong impulsive artifacts, like those in multi-dimensional NMR-spectra. Regrettably, Wavelet's performance depends on a combinatorial search of wavelet shapes and parameters; and multi-dimensional extension of wavelet-denoising is highly non-trivial, which hampers its application to multidimensional NMR-spectra. Here, we endorse a diverse philosophy of denoising NMR-spectra: less is more! We consider spatial filters that have only one parameter to tune: the window-size. We propose, for the first time, the 3D extension of the median-modified-Wiener-filter (MMWF), an adaptive variant of the median-filter, and also its novel variation named MMWF*. We test the proposed filters and the Wiener-filter, an adaptive variant of the mean-filter, on a benchmark set that contains 16 two-dimensional and three-dimensional NMR-spectra extracted from eight proteins. Our results demonstrate that the adaptive spatial filters significantly outperform their non-adaptive versions. The performance of the new MMWF* on 2D/3D-spectra is even better than wavelet-denoising. Noticeably, MMWF* produces stable high performance almost invariant for diverse window-size settings: this signifies a consistent advantage in the implementation of automatic pipelines for protein NMR-spectra analysis. PMID:25619991
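
    A compact 2D sketch of one plausible reading of the MMWF is given below, assuming that the classic adaptive Wiener update is kept but the local mean is replaced by the local median, which is more robust to impulsive artifacts; the window size and the noise-variance default follow the usual adaptive-Wiener convention, and the MMWF* variant is not reproduced here.

      import numpy as np
      from scipy.ndimage import median_filter, uniform_filter

      def mmwf(spectrum, window=5, noise_var=None):
          # Median-modified Wiener filter (2D sketch): adaptive Wiener update
          # around the local median instead of the local mean.
          x = np.asarray(spectrum, dtype=float)
          med = median_filter(x, size=window)
          local_mean = uniform_filter(x, size=window)
          local_var = np.maximum(uniform_filter(x ** 2, size=window) - local_mean ** 2, 0.0)
          if noise_var is None:
              noise_var = np.mean(local_var)   # default noise estimate, as in adaptive Wiener filtering
          gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
          return med + gain * (x - med)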

  4. ECG signals denoising using wavelet transform and independent component analysis

    NASA Astrophysics Data System (ADS)

    Liu, Manjin; Hui, Mei; Liu, Ming; Dong, Liquan; Zhao, Zhu; Zhao, Yuejin

    2015-08-01

    A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis is proposed in this paper. First of all, two-channel exercise ECG signals are acquired. We decompose these two channels into eight levels and sum the useful wavelet coefficients separately, obtaining two-channel ECG signals free of baseline drift and other interference components; however, they still contain electrode movement noise, power frequency interference and other interferences. Secondly, we process these two denoised channels, together with one manually constructed channel, using independent component analysis to obtain the separated ECG signal, from which the residual noise is removed effectively. Finally, a comparative experiment is made between the same two-channel exercise ECG signals processed directly with independent component analysis and with the proposed method, which shows that the signal-to-noise ratio (SNR) increases by 21.916 and the root mean square error decreases by 2.522, proving that the proposed method has high reliability.

  5. Forecasting performance of denoising signal by Wavelet and Fourier Transforms using SARIMA model

    NASA Astrophysics Data System (ADS)

    Ismail, Mohd Tahir; Mamat, Siti Salwana; Hamzah, Firdaus Mohamad; Karim, Samsul Ariffin Abdul

    2014-07-01

    The goal of this research is to determine the forecasting performance of denoised signals. Monthly rainfall and the monthly number of rain days over 20 years (1990-2009) from the Bayan Lepas station are utilized as the case study. The Fast Fourier Transform (FFT) and Wavelet Transform (WT) are used to obtain the denoised signals. The denoised data obtained by each transform are then analyzed with a seasonal ARIMA model, and the best-fitted model is determined by the minimum MSE. The results indicate that the Wavelet Transform is more effective than the Fast Fourier Transform in denoising the monthly rainfall and number of rain days signals.
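
    A minimal Python sketch of the pipeline described above: threshold the monthly series in the wavelet domain, fit a seasonal ARIMA model, and score forecasts by MSE. The wavelet, threshold rule, SARIMA orders and the variable names (rain, train) are illustrative assumptions using PyWavelets and statsmodels, not the parameters selected in the paper.

      import numpy as np
      import pywt
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      def wavelet_denoise(y, wavelet="db4", level=3):
          coeffs = pywt.wavedec(y, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # MAD noise estimate
          thr = sigma * np.sqrt(2 * np.log(len(y)))                # universal threshold
          coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(y)]

      def fit_and_score(y, train=180, order=(1, 0, 1), seasonal=(1, 1, 1, 12)):
          # Fit SARIMA on the first `train` months, report out-of-sample MSE.
          model = SARIMAX(y[:train], order=order, seasonal_order=seasonal).fit(disp=False)
          forecast = model.forecast(steps=len(y) - train)
          return np.mean((forecast - y[train:]) ** 2)

      # Usage sketch: compare raw vs. denoised monthly rainfall series `rain`
      # mse_raw = fit_and_score(rain)
      # mse_dn  = fit_and_score(wavelet_denoise(rain))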

  6. Biomedical image and signal de-noising using dual tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Rizi, F. Yousefi; Noubari, H. Ahmadi; Setarehdan, S. K.

    2011-10-01

    Dual tree complex wavelet transform(DTCWT) is a form of discrete wavelet transform, which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The purposes of de-noising are reducing noise level and improving signal to noise ratio (SNR) without distorting the signal or image. This paper proposes a method for removing white Gaussian noise from ECG signals and biomedical images. The discrete wavelet transform (DWT) is very valuable in a large scope of de-noising problems. However, it has limitations such as oscillations of the coefficients at a singularity, lack of directional selectivity in higher dimensions, aliasing and consequent shift variance. The complex wavelet transform CWT strategy that we focus on in this paper is Kingsbury's and Selesnick's dual tree CWT (DTCWT) which outperforms the critically decimated DWT in a range of applications, such as de-noising. Each complex wavelet is oriented along one of six possible directions, and the magnitude of each complex wavelet has a smooth bell-shape. In the final part of this paper, we present biomedical image and signal de-noising by the means of thresholding magnitude of the wavelet coefficients.

  7. [Near infrared spectra (NIR) analysis of octane number by wavelet denoising-derivative method].

    PubMed

    Tian, Gao-you; Yuan, Hong-fu; Chu, Xiao-li; Liu, Hui-ying; Lu, Wan-zhen

    2005-04-01

    Differentiation can correct baseline effects but also increases the noise level. The wavelet transform has been proven an efficient tool for de-noising. This paper addresses the application of the wavelet transform and derivatives in the NIR analysis of octane number (RON). The derivative parameters, as well as their effects on the noise level and the analytic accuracy of RON, have been studied in detail. The results show that differentiation can correct baseline effects and increase analytic accuracy, but noise in the derivative spectra is highly detrimental to the analysis of RON. Wavelet-transform de-noising can increase the S/N and improve the analytical accuracy.

  8. Study on an improved wavelet threshold denoising for the time-resolved photoacoustic signals of the glucose solution

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2015-08-01

    Although time-domain, frequency-domain and wavelet-domain methods can be used for signal denoising or filtering, their effectiveness for denoising blood glucose photoacoustic signals is limited by their respective drawbacks. In this paper, an improved wavelet threshold denoising method is used to remove the noise from blood glucose photoacoustic signals. To overcome some drawbacks of classical wavelet threshold denoising, an improved wavelet threshold function is proposed. In simulation experiments, the denoising results of this improved threshold function are compared with those of other functions, and the improved function is then applied to denoising the time-resolved photoacoustic signals of glucose solutions. The experimental results verify that denoising with the improved wavelet threshold function is effective. The improved function is more flexible than others because it uses two threshold values and two factors. Therefore, the improved wavelet threshold function has potential value for denoising blood glucose photoacoustic signals.

  9. Autocorrelation based denoising of manatee vocalizations using the undecimated discrete wavelet transform.

    PubMed

    Gur, Berke M; Niezrecki, Christopher

    2007-07-01

    Recent interest in the West Indian manatee (Trichechus manatus latirostris) vocalizations has been primarily induced by an effort to reduce manatee mortality rates due to watercraft collisions. A warning system based on passive acoustic detection of manatee vocalizations is desired. The success and feasibility of such a system depends on effective denoising of the vocalizations in the presence of high levels of background noise. In the last decade, simple and effective wavelet domain nonlinear denoising methods have emerged as an alternative to linear estimation methods. However, the denoising performances of these methods degrades considerably with decreasing signal-to-noise ratio (SNR) and therefore are not suited for denoising manatee vocalizations in which the typical SNR is below 0 dB. Manatee vocalizations possess a strong harmonic content and a slow decaying autocorrelation function. In this paper, an efficient denoising scheme that exploits both the autocorrelation function of manatee vocalizations and effectiveness of the nonlinear wavelet transform based denoising algorithms is introduced. The suggested wavelet-based denoising algorithm is shown to outperform linear filtering methods, extending the detection range of vocalizations.

  10. Autocorrelation based denoising of manatee vocalizations using the undecimated discrete wavelet transform.

    PubMed

    Gur, Berke M; Niezrecki, Christopher

    2007-07-01

    Recent interest in the West Indian manatee (Trichechus manatus latirostris) vocalizations has been primarily induced by an effort to reduce manatee mortality rates due to watercraft collisions. A warning system based on passive acoustic detection of manatee vocalizations is desired. The success and feasibility of such a system depends on effective denoising of the vocalizations in the presence of high levels of background noise. In the last decade, simple and effective wavelet domain nonlinear denoising methods have emerged as an alternative to linear estimation methods. However, the denoising performances of these methods degrades considerably with decreasing signal-to-noise ratio (SNR) and therefore are not suited for denoising manatee vocalizations in which the typical SNR is below 0 dB. Manatee vocalizations possess a strong harmonic content and a slow decaying autocorrelation function. In this paper, an efficient denoising scheme that exploits both the autocorrelation function of manatee vocalizations and effectiveness of the nonlinear wavelet transform based denoising algorithms is introduced. The suggested wavelet-based denoising algorithm is shown to outperform linear filtering methods, extending the detection range of vocalizations. PMID:17614478

  11. Study on an improved wavelet shift-invariant threshold denoising for pulsed laser induced glucose photoacoustic signals

    NASA Astrophysics Data System (ADS)

    Wang, Zhengzi; Ren, Zhong; Liu, Guodong

    2015-10-01

    Noninvasive measurement of blood glucose concentration has become a research hotspot worldwide because it is convenient, rapid and non-destructive. Blood glucose monitoring based on the photoacoustic technique has attracted much attention because the detected signals are ultrasonic rather than optical. However, during acquisition the glucose photoacoustic signals are inevitably polluted by factors such as the pulsed laser, electronic noise and environmental noise. These disturbances degrade the measurement accuracy of the glucose concentration, so denoising the glucose photoacoustic signals is a key task. In this paper, a wavelet shift-invariant threshold denoising method is improved and a novel wavelet threshold function is proposed. The novel threshold function uses two threshold values and two different factors, and it is continuous with high-order derivatives, so it can be regarded as a compromise between wavelet soft-threshold and hard-threshold denoising. Simulation results illustrate that, compared with other wavelet threshold denoising methods, the improved shift-invariant threshold denoising achieves a higher signal-to-noise ratio (SNR) and a smaller root mean-square error (RMSE), and it also yields a better overall denoising effect. Therefore, the improved method has potential value for denoising glucose photoacoustic signals.

  12. Optimization of wavelet- and curvelet-based denoising algorithms by multivariate SURE and GCV

    NASA Astrophysics Data System (ADS)

    Mortezanejad, R.; Gholami, A.

    2016-06-01

    One of the most crucial challenges in seismic data processing is the reduction of noise in the data or improving the signal-to-noise ratio (SNR). Wavelet- and curvelet-based denoising algorithms have become popular to address random noise attenuation for seismic sections. Wavelet basis, thresholding function, and threshold value are three key factors of such algorithms, having a profound effect on the quality of the denoised section. Therefore, given a signal, it is necessary to optimize the denoising operator over these factors to achieve the best performance. In this paper a general denoising algorithm is developed as a multi-variant (variable) filter which performs in multi-scale transform domains (e.g. wavelet and curvelet). In the wavelet domain this general filter is a function of the type of wavelet, characterized by its smoothness, thresholding rule, and threshold value, while in the curvelet domain it is only a function of thresholding rule and threshold value. Also, two methods, Stein’s unbiased risk estimate (SURE) and generalized cross validation (GCV), evaluated using a Monte Carlo technique, are utilized to optimize the algorithm in both wavelet and curvelet domains for a given seismic signal. The best wavelet function is selected from a family of fractional B-spline wavelets. The optimum thresholding rule is selected from general thresholding functions which contain the most well known thresholding functions, and the threshold value is chosen from a set of possible values. The results obtained from numerical tests show high performance of the proposed method in both wavelet and curvelet domains in comparison to conventional methods when denoising seismic data.
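
    For reference, Stein's unbiased risk estimate for soft thresholding of coefficients x_i with noise variance sigma^2 is SURE(t) = N*sigma^2 - 2*sigma^2*#{i : |x_i| <= t} + sum_i min(x_i^2, t^2), and the threshold is chosen to minimize it. A short Python sketch is below; it covers only this single-subband threshold search, not the paper's joint optimization over wavelet type and thresholding rule, nor its Monte Carlo GCV evaluation.

      import numpy as np

      def sure_soft(coeffs, thresholds, sigma):
          # Stein's unbiased risk estimate for soft thresholding at each candidate threshold.
          coeffs = np.asarray(coeffs, dtype=float)
          x2 = coeffs ** 2
          n = coeffs.size
          risks = []
          for t in thresholds:
              risk = (n * sigma ** 2
                      - 2 * sigma ** 2 * np.sum(np.abs(coeffs) <= t)
                      + np.sum(np.minimum(x2, t ** 2)))
              risks.append(risk)
          return np.asarray(risks)

      def sure_threshold(coeffs, sigma):
          # Pick the threshold minimizing SURE over the coefficient magnitudes.
          candidates = np.sort(np.abs(coeffs))
          return candidates[np.argmin(sure_soft(coeffs, candidates, sigma))]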

  13. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    Comparison results are presented for different mother wavelets used to de-noise model and experimental data represented by absorption spectra profiles of exhaled air. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.

  14. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provides a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is, which technique provides a more accurate roughness estimates considering (i) wavelet transform (SWT or DWT), (ii) thresholding method (fixed-form or penalised low) and (iii) thresholding mode (soft or hard). The performance of denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on 20 × 30 cm rock joint sample, which are for the second analysis corrupted by different levels of noise. With such a controlled noise level experiments it is possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.

  15. A criterion for signal-based selection of wavelets for denoising intrafascicular nerve recordings.

    PubMed

    Kamavuako, Ernest Nlandu; Jensen, Winnie; Yoshida, Ken; Kurstjens, Mathijs; Farina, Dario

    2010-02-15

    In this paper we propose a novel method for denoising intrafascicular nerve signals with the aim of improving action potential (AP) detection. The method is based on the stationary wavelet transform and thresholding of the wavelet coefficients. Since the choice of the mother wavelet substantially impact the performance, a criterion is proposed for selecting the optimal wavelet. The criterion for selection was based on the root mean square of the average of the output signal triggered by the detected APs. The mother wavelet was parameterized through the scaling filter, which allowed optimization through the proposed criterion. The method was tested on simulated signals and on experimental neural recordings. Experimental signals were recorded from the tibial branch of the sciatic nerve of three anaesthetized New Zealand white rabbits during controlled muscle stretches. The simulation results showed that the proposed method had an equivalent effect on AP detection performance (percentage of correct detection at 6 dB signal-to-noise ratio, mean+/-SD, 95.3+/-5.2%) to the a-posteriori choice of the best wavelet (96.1+/-3.6). Moreover, the AP detection after the proposed denoising method resulted in a correlation of 0.94+/-0.02 between the estimated spike rate and the muscle length. Therefore, the study proposes an effective method for selecting the optimal mother wavelet for denoising neural signals with the aim of improving AP detection.

  16. [Wavelet analysis and its application in denoising the spectrum of hyperspectral image].

    PubMed

    Zhou, Dan; Wang, Qin-Jun; Tian, Qing-Jiu; Lin, Qi-Zhong; Fu, Wen-Xue

    2009-07-01

    In order to remove the sawtoothed noise in the spectrum of hyperspectral remote sensing and improve the accuracy of information extraction using spectrum in the present research, the spectrum of vegetation in the USGS (United States Geological Survey) spectrum library was used to simulate the performance of wavelet denoising. These spectra were measured by a custom-modified and computer-controlled Beckman spectrometer at the USGS Denver Spectroscopy Lab. The wavelength accuracy is about 5 nm in the NIR and 2 nm in the visible. In the experiment, noise with signal to noise ratio (SNR) 30 was first added to the spectrum, and then removed by the wavelet denoising approach. For the purpose of finding the optimal parameters combinations, the SNR, mean squared error (MSE), spectral angle (SA) and integrated evaluation coefficient eta were used to evaluate the approach's denoising effects. Denoising effect is directly proportional to SNR, and inversely proportional to MSE, SA and the integrated evaluation coefficient eta. Denoising results show that the sawtoothed noise in noisy spectrum was basically eliminated, and the denoised spectrum basically coincides with the original spectrum, maintaining a good spectral characteristic of the curve. Evaluation results show that the optimal denoising can be achieved by firstly decomposing the noisy spectrum into 3-7 levels using db12, db10, sym9 and sym6 wavelets, then processing the wavelet transform coefficients by soft-threshold functions, and finally estimating the thresholds by heursure threshold selection rule and rescaling using a single estimation of level noise based on first-level coefficients. However, this approach depends on the noise level, which means that for different noise level the optimal parameters combination is also diverse.

  17. Parameters optimization for wavelet denoising based on normalized spectral angle and threshold constraint machine learning

    NASA Astrophysics Data System (ADS)

    Li, Hao; Ma, Yong; Liang, Kun; Tian, Yong; Wang, Rui

    2012-01-01

    Wavelet parameters (e.g., wavelet type, level of decomposition) affect the performance of the wavelet denoising algorithm in hyperspectral applications. Current studies select the best wavelet parameters for a single spectral curve by comparing similarity criteria such as spectral angle (SA). However, the method to find the best parameters for a spectral library that contains multiple spectra has not been studied. In this paper, a criterion named normalized spectral angle (NSA) is proposed. By comparing NSA, the best combination of parameters for a spectral library can be selected. Moreover, a fast algorithm based on threshold constraint and machine learning is developed to reduce the time of a full search. After several iterations of learning, the combination of parameters that constantly surpasses a threshold is selected. The experiments proved that by using the NSA criterion, the SA values decreased significantly, and the fast algorithm could save 80% time consumption, while the denoising performance was not obviously impaired.
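
    The spectral angle between a reference spectrum and its denoised version is the basic quantity behind both the SA and NSA criteria. The short sketch below computes the SA and a simple library average; the paper's NSA applies its own normalization, so this is only an illustrative stand-in.

      import numpy as np

      def spectral_angle(a, b):
          # Spectral angle (radians) between two spectra.
          cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
          return np.arccos(np.clip(cos, -1.0, 1.0))

      def mean_spectral_angle(library_ref, library_denoised):
          # Average spectral angle over a spectral library (rows = spectra).
          # Used here as a stand-in score for comparing wavelet-parameter combinations.
          angles = [spectral_angle(r, d) for r, d in zip(library_ref, library_denoised)]
          return float(np.mean(angles))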

  18. Robust 4D Flow Denoising Using Divergence-Free Wavelet Transform

    PubMed Central

    Ong, Frank; Uecker, Martin; Tariq, Umar; Hsiao, Albert; Alley, Marcus T; Vasanawala, Shreyas S.; Lustig, Michael

    2014-01-01

    Purpose To investigate four-dimensional flow denoising using the divergence-free wavelet (DFW) transform and compare its performance with existing techniques. Theory and Methods DFW is a vector-wavelet that provides a sparse representation of flow in a generally divergence-free field and can be used to enforce “soft” divergence-free conditions when discretization and partial voluming result in numerical nondivergence-free components. Efficient denoising is achieved by appropriate shrinkage of divergence-free wavelet and nondivergence-free coefficients. SureShrink and cycle spinning are investigated to further improve denoising performance. Results DFW denoising was compared with existing methods on simulated and phantom data and was shown to yield better noise reduction overall while being robust to segmentation errors. The processing was applied to in vivo data and was demonstrated to improve visualization while preserving quantifications of flow data. Conclusion DFW denoising of four-dimensional flow data was shown to reduce noise levels in flow data both quantitatively and visually. PMID:24549830

  19. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics

    PubMed Central

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

    2010-01-01

    We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D de-noising functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional de-noising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients; it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module where we combine a new 3-D wavelet denoising approach with better signal separation properties of ICA in the wavelet domain, to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels in addition to significant reduction in false

  20. Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy

    PubMed Central

    Chang, Ching-Wei; Mycek, Mary-Ann

    2014-01-01

    We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging. PMID:22415891

  1. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, the method can segment blood cells. For that, we combine the wavelet transform with morphological operations, and the wavelet thresholding technique is used to eliminate noise and prepare the image for segmentation. In the wavelet denoising step we determine the best wavelet, namely the one that yields a segmentation with the largest area inside the cell. We study different wavelet families and conclude that the db1 wavelet is the best, and it can serve for later work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the algorithm, implemented in the MATLAB environment, is verified on a selection of blood cell images.

  2. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicon tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate nonradix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples. PMID:18495977

  3. NOTE: Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction

    NASA Astrophysics Data System (ADS)

    Holan, Scott H.; Viator, John A.

    2008-06-01

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicon tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate nonradix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.
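
    A minimal sketch of level-independent universal thresholding on an undecimated transform, in Python with PyWavelets. The record above uses the maximal overlap DWT (MODWT); pywt.swt is used here as a closely related undecimated stand-in, and the wavelet, level and MAD-based noise estimate are assumptions.

      import numpy as np
      import pywt

      def universal_denoise(signal, wavelet="db4", level=4):
          # Level-independent universal thresholding on an undecimated transform.
          # Signal length must be a multiple of 2**level for pywt.swt (pad or trim otherwise).
          coeffs = pywt.swt(signal, wavelet, level=level)
          d_fine = coeffs[-1][1]
          sigma = np.median(np.abs(d_fine)) / 0.6745           # MAD noise estimate
          thr = sigma * np.sqrt(2.0 * np.log(len(signal)))     # one threshold for all levels
          shrunk = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
          return pywt.iswt(shrunk, wavelet)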

  4. Adaptive Multilinear Tensor Product Wavelets.

    PubMed

    Weiss, Kenneth; Lindstrom, Peter

    2016-01-01

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.

  5. Improved total variation algorithms for wavelet-based denoising

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2007-04-01

    Many improvements of wavelet-based restoration techniques suggest the use of the total variation (TV) algorithm. The concept of combining wavelet and total variation methods seems effective but the reasons for the success of this combination have been so far poorly understood. We propose a variation of the total variation method designed to avoid artifacts such as oil painting effects and is more suited than the standard TV techniques to be implemented with wavelet-based estimates. We then illustrate the effectiveness of this new TV-based method using some of the latest wavelet transforms such as contourlets and shearlets.

  6. Adaptive wavelets and relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hirschmann, Eric; Neilsen, David; Anderson, Matthe; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets, which provide an adaptive implementation for simulations in multiple dimensions. The wavelet coefficients provide a measure of the local approximation error for the solution and place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.

  7. Comparative study of ECG signal denoising by wavelet thresholding in empirical and variational mode decomposition domains.

    PubMed

    Lahmiri, Salim

    2014-09-01

    Hybrid denoising models based on combining empirical mode decomposition (EMD) and discrete wavelet transform (DWT) were found to be effective in removing additive Gaussian noise from electrocardiogram (ECG) signals. Recently, variational mode decomposition (VMD) has been proposed as a multiresolution technique that overcomes some of the limits of the EMD. Two ECG denoising approaches are compared. The first is based on denoising in the EMD domain by DWT thresholding, whereas the second is based on noise reduction in the VMD domain by DWT thresholding. Using signal-to-noise ratio and mean of squared errors as performance measures, simulation results show that the VMD-DWT approach outperforms the conventional EMD-DWT. In addition, a non-local means approach used as a reference technique provides better results than the VMD-DWT approach. PMID:26609387

  8. Application of Wavelet Based Denoising for T-Wave Alternans Analysis in High Resolution ECG Maps

    NASA Astrophysics Data System (ADS)

    Janusek, D.; Kania, M.; Zaczek, R.; Zavala-Fernandez, H.; Zbieć, A.; Opolski, G.; Maniewski, R.

    2011-01-01

    T-wave alternans (TWA) allows for identification of patients at an increased risk of ventricular arrhythmia. Stress test, which increases heart rate in controlled manner, is used for TWA measurement. However, the TWA detection and analysis are often disturbed by muscular interference. The evaluation of wavelet based denoising methods was performed to find optimal algorithm for TWA analysis. ECG signals recorded in twelve patients with cardiac disease were analyzed. In seven of them significant T-wave alternans magnitude was detected. The application of wavelet based denoising method in the pre-processing stage increases the T-wave alternans magnitude as well as the number of BSPM signals where TWA was detected.

  9. Classical low-pass filter and real-time wavelet-based denoising technique implemented on a DSP: a comparison study

    NASA Astrophysics Data System (ADS)

    Dolabdjian, Ch.; Fadili, J.; Huertas Leyva, E.

    2002-11-01

    We have implemented a real-time numerical denoising algorithm, using the Discrete Wavelet Transform (DWT), on a TMS320C3x Digital Signal Processor (DSP). We also compared this post-processing approach, from both theoretical and practical viewpoints, to a more classical low-pass filter. The comparison was carried out using an ECG-type (electrocardiogram) signal. The denoising approach is an elegant and extremely fast alternative to the class of classical linear filters and is particularly suited to non-stationary signals such as those encountered in biological applications. The denoising substantially improves detection of such signals over Fourier-based techniques. This processing step is a vital element in our acquisition chain using high-sensitivity magnetic sensors, and it should enhance detection of cardiac-type magnetic signals or of magnetic particles in movement.
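
    Both branches of the comparison described above can be sketched in a few lines of Python: a classical Butterworth low-pass filter versus DWT soft thresholding with the universal threshold. The cutoff frequency, wavelet, level and variable names are illustrative assumptions and are unrelated to the DSP implementation itself.

      import numpy as np
      import pywt
      from scipy.signal import butter, filtfilt

      def lowpass(x, fs, cutoff=40.0, order=4):
          # Classical Butterworth low-pass filter (zero-phase).
          b, a = butter(order, cutoff / (fs / 2.0), btype="low")
          return filtfilt(b, a, x)

      def dwt_denoise(x, wavelet="db6", level=5):
          # Wavelet soft-threshold denoising with the universal threshold.
          coeffs = pywt.wavedec(x, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thr = sigma * np.sqrt(2 * np.log(len(x)))
          coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: len(x)]

      # Usage sketch: x is a noisy ECG-like trace sampled at fs Hz; compare
      # lowpass(x, fs) with dwt_denoise(x) around sharp QRS-like transients.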

  10. Denoising of arterial and venous Doppler signals using discrete wavelet transform: effect on clinical parameters.

    PubMed

    Tokmakçi, Mahmut; Erdoğan, Nuri

    2009-05-01

    In this paper, the effects of a wavelet transform based denoising strategy on clinical Doppler parameters are analyzed. The study scheme included: (a) Acquisition of arterial and venous Doppler signals by sampling the audio output of an ultrasound scanner from 20 healthy volunteers, (b) Noise reduction via decomposition of the signals through discrete wavelet transform, (c) Spectral analysis of noisy and noise-free signals with short time Fourier transform, (d) Curve fitting to spectrograms, (e) Calculation of clinical Doppler parameters, (f) Statistical comparison of parameters obtained from noisy and noise-free signals. The decomposition level was selected as the highest level at which the maximum power spectral density and its corresponding frequency were preserved. In all subjects, noise-free spectrograms had smoother trace with less ripples. In both arterial and venous spectrograms, denoising resulted in a significant decrease in the maximum (systolic) and mean frequency, with no statistical difference in the minimum (diastolic) frequency. In arterial signals, this leads to a significant decrease in the calculated parameters such as Systolic/Diastolic Velocity Ratio, Resistivity Index, Pulsatility Index and Acceleration Time. Acceleration Index did not change significantly. Despite a successful denoising, the effects of wavelet decomposition on high frequency components in the Doppler signal should be challenged by comparison with reference data, or, through clinical investigations. PMID:19470316

  11. Applications of wavelet analysis in differential propagation phase shift data de-noising

    NASA Astrophysics Data System (ADS)

    Hu, Zhiqun; Liu, Liping

    2014-07-01

    Using numerical simulation data of the forward differential propagation shift (ϕDP) of polarimetric radar, the principle and performing steps of noise reduction by wavelet analysis are introduced in detail. Profiting from the multiscale analysis, various types of noises can be identified according to their characteristics in different scales, and suppressed in different resolutions by a penalty threshold strategy through which a fixed threshold value is applied, a default threshold strategy through which the threshold value is determined by the noise intensity, or a ϕDP penalty threshold strategy through which a special value is designed for ϕDP de-noising. Then, a hard-or soft-threshold function, depending on the de-noising purpose, is selected to reconstruct the signal. Combining the three noise suppression strategies and the two signal reconstruction functions, and without loss of generality, two schemes are presented to verify the de-noising effect by dbN wavelets: (1) the penalty threshold strategy with the soft threshold function scheme (PSS); (2) the ϕDP penalty threshold strategy with the soft threshold function scheme (PPSS). Furthermore, the wavelet de-noising is compared with the mean, median, Kalman, and finite impulse response (FIR) methods with simulation data and two actual cases. The results suggest that both of the two schemes perform well, especially when ϕDP data are simultaneously polluted by various scales and types of noises. A slight difference is that the PSS method can retain more detail, and the PPSS can smooth the signal more successfully.

  12. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and others external causes can interfere in magnetic resonance image acquisition and produce noise. In this kind of images, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method to solve this fact. This method performs shrinkage of wavelet coefficients based on the conditioned probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  13. A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

    2014-05-01

    The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications especially for those associated with the study of climate changes, droughts, floods and other related hydrological processes. So far, Fourier based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al. 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the Entropy-Based Wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from the i) Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), in Greece (Hydrological Observatory of Athens) and in Australia (Oznet network), respectively. Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/ 2012GL052435 Su,C.-H.,D.Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time

  14. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is a very important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond repetition time and micro-joule power; photoacoustic signals containing the BGC information are generated through the thermoelastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by various noises, e.g., interference from background sounds and from the multiple components of blood. The quality of the BGC photoacoustic signal directly impacts the precision of the BGC measurement, so an improved wavelet denoising method is proposed to eliminate the noise contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function is proposed in this paper. Simulation results illustrate that the denoising result of this improved wavelet method is better than that of the traditional soft and hard threshold functions. To verify the feasibility of the improved function, actual photoacoustic BGC signals were tested; the results demonstrate that the signal-to-noise ratio (SNR) obtained with the improved function increases by about 40-80%, and its root-mean-square error (RMSE) decreases by about 38.7-52.8%.

  15. Neurochip based on light-addressable potentiometric sensor with wavelet transform de-noising*

    PubMed Central

    Liu, Qing-jun; Ye, Wei-wei; Yu, Hui; Hu, Ning; Du, Li-ping; Wang, Ping

    2010-01-01

    Neurochip based on light-addressable potentiometric sensor (LAPS), whose sensing elements are excitable cells, can monitor electrophysiological properties of cultured neuron networks with cellular signals well analyzed. Here we report a kind of neurochip with rat pheochromocytoma (PC12) cells hybrid with LAPS and a method of de-noising signals based on wavelet transform. Cells were cultured on LAPS for several days to form networks, and we then used LAPS system to detect the extracellular potentials with signals de-noised according to decomposition in the time-frequency space. The signal was decomposed into various scales, and coefficients were processed based on the properties of each layer. At last, signal was reconstructed based on the new coefficients. The results show that after de-noising, baseline drift is removed and signal-to-noise ratio is increased. It suggests that the neurochip of PC12 cells coupled to LAPS is stable and suitable for long-term and non-invasive measurement of cell electrophysiological properties with wavelet transform, taking advantage of its time-frequency localization analysis to reduce noise. PMID:20443210

  16. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM.

    PubMed

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. Therefore, it is significantly important to estimate the lithium-ion battery's remaining useful life (RUL), yet very difficult. One important reason is that the measured battery capacity data are often subject to the different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. Relevance vector machine (RVM) improved by differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. An experiment including battery 5 capacity prognostics case and battery 18 capacity prognostics case is conducted and validated that the proposed approach can predict the trend of battery capacity trajectory closely and estimate the battery RUL accurately.

  17. A wavelet denoising approach for signal action isolation in the ear canal.

    PubMed

    Vaidyanathan, Ravi; Wang, Shouyan; Gupta, Lalit

    2008-01-01

    The goal of this work was to develop and implement a new filtering strategy to denoise acoustic signals in the ear canal resulting from voluntary movement of the tongue (as a method of generating control input), as well as from other active actions, (speech, eating, drinking, smoking), and passive actions (swallowing, adjusting the jaw, physiological activity). The strategy is based on a denoising wavelet shrinkage approach that separates rhythmic bursting activity and white noise representing sustained tonic activity. While past work has addressed the discrimination of voluntary TMEP signals from one-another, no work has addressed acoustic artefact rejection within the ear. The results described here, combined with our past work in isolating critical components of tongue movement ear pressure (TMEP) signals, provide a basis for discriminating voluntary and involuntary actions of the tongue by monitoring pressure in the ear. At this time, the system has worked in real-time for assistive device control.

  18. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM

    PubMed Central

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. Therefore, it is significantly important to estimate the lithium-ion battery's remaining useful life (RUL), yet very difficult. One important reason is that the measured battery capacity data are often subject to the different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. Relevance vector machine (RVM) improved by differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. An experiment including battery 5 capacity prognostics case and battery 18 capacity prognostics case is conducted and validated that the proposed approach can predict the trend of battery capacity trajectory closely and estimate the battery RUL accurately. PMID:26413090

  19. [Research on ECG de-noising method based on ensemble empirical mode decomposition and wavelet transform using improved threshold function].

    PubMed

    Ye, Linlin; Yang, Dan; Wang, Xu

    2014-06-01

    A de-noising method for electrocardiogram (ECG) based on ensemble empirical mode decomposition (EEMD) and wavelet threshold de-noising theory is proposed in our school. We decomposed noised ECG signals with the proposed method using the EEMD and calculated a series of intrinsic mode functions (IMFs). Then we selected IMFs and reconstructed them to realize the de-noising for ECG. The processed ECG signals were filtered again with wavelet transform using improved threshold function. In the experiments, MIT-BIH ECG database was used for evaluating the performance of the proposed method, contrasting with de-noising method based on EEMD and wavelet transform with improved threshold function alone in parameters of signal to noise ratio (SNR) and mean square error (MSE). The results showed that the ECG waveforms de-noised with the proposed method were smooth and the amplitudes of ECG features did not attenuate. In conclusion, the method discussed in this paper can realize the ECG denoising and meanwhile keep the characteristics of original ECG signal. PMID:25219236

  1. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

    Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter on average 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation.

  2. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

    Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter on average 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation. PMID:21476661
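
    A minimal wavelet-packet shrinkage sketch in the spirit of the two records above is given below (Python with PyWavelets assumed; it uses a generic universal threshold per terminal node and does not reproduce the authors' adaptive filtering rule or the FALE comparison).

        # Minimal wavelet-packet shrinkage sketch (illustrative only, not the
        # authors' adaptive algorithm).
        import numpy as np
        import pywt

        def wavelet_packet_denoise(x, wavelet="db8", level=4):
            wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                                    maxlevel=level)
            # Threshold every terminal node with a universal threshold derived
            # from a robust (MAD) noise estimate of that node's coefficients.
            for node in wp.get_level(level, order="natural"):
                sigma = np.median(np.abs(node.data)) / 0.6745
                lam = sigma * np.sqrt(2.0 * np.log(max(node.data.size, 2)))
                node.data = pywt.threshold(node.data, lam, mode="soft")
            return wp.reconstruct(update=True)[: len(x)]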

  3. Research on biochemical spectrum denoising based on a novel wavelet threshold function and an improved translation-invariance method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Zeng, Lvming; Huang, Zhen; Huang, Shuanggen

    2008-12-01

    In this paper, an improved wavelet threshold denoising method combined with translation invariance (TI) is adopted to remove the noise present in biochemical spectra. Meanwhile, a novel wavelet threshold function and an optimal threshold determination algorithm are proposed. The new function is continuous and differentiable to high order; it can overcome the oscillation phenomena generated by the classical threshold functions and decrease the error of the reconstructed spectrum. It is therefore superior to frequency-domain filtering methods, the soft- and hard-threshold functions proposed by D. L. Donoho, and the semisoft threshold function proposed by Gao. The experimental results show that the improved translation-invariant wavelet threshold (TI-WT) denoising method can effectively eliminate the pseudo-Gibbs phenomena generated by traditional wavelet thresholding. At the same time, the improved wavelet threshold function and the TI-WT method yield a lower root mean square error (RMSE) and a higher signal-to-noise ratio (SNR) than frequency-domain filtering and classical soft- and hard-threshold denoising, with the SNR increasing from 17.3200 to 32.5609 and the RMSE decreasing from 4.0244 to 0.6257. Moreover, the improved denoising method not only makes the spectrum smooth, but also effectively preserves the edge characteristics of the original spectrum.
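
    Translation-invariant denoising is commonly implemented by cycle spinning: shift, threshold, unshift and average. The sketch below (Python with PyWavelets; soft thresholding is used as a placeholder for the paper's novel threshold function) illustrates the idea.

        # Sketch of translation-invariant (cycle-spinning) wavelet denoising: the
        # signal is circularly shifted, thresholded, unshifted and averaged, which
        # suppresses the pseudo-Gibbs artifacts of ordinary thresholding.
        import numpy as np
        import pywt

        def denoise_once(x, wavelet="sym8", level=4):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            lam = sigma * np.sqrt(2.0 * np.log(x.size))
            coeffs = [coeffs[0]] + [pywt.threshold(c, lam, "soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: x.size]

        def cycle_spin_denoise(x, n_shifts=16, **kwargs):
            acc = np.zeros_like(x, dtype=float)
            for s in range(n_shifts):
                shifted = np.roll(x, s)
                acc += np.roll(denoise_once(shifted, **kwargs), -s)
            return acc / n_shifts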

  4. Wavelet-Based Speech Enhancement Using Time-Adapted Noise Estimation

    NASA Astrophysics Data System (ADS)

    Lei, Sheau-Fang; Tung, Ying-Kai

    Spectral subtraction is commonly used for speech enhancement in single-channel systems because of the simplicity of its implementation. However, this algorithm introduces perceptually musical noise while suppressing the background noise. In this paper we propose a wavelet-based approach to background-noise suppression for single-channel speech enhancement. The wavelet packet transform, which emulates the human auditory system, is used to decompose the noisy signal into critical bands. Wavelet thresholding is then temporally adjusted to the noise power by time-adapted noise estimation. The proposed algorithm can efficiently suppress the noise while reducing speech distortion. Experimental results, including several objective measurements, show that the proposed wavelet-based algorithm outperforms spectral subtraction and other wavelet-based denoising approaches in nonstationary noise environments.

  5. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance.

  6. Streak image denoising and segmentation using adaptive Gaussian guided filter.

    PubMed

    Jiang, Zhuocheng; Guo, Baoping

    2014-09-10

    In streak tube imaging lidar (STIL), streak images are obtained using a CCD camera. However, noise in the captured streak images can greatly affect the quality of reconstructed 3D contrast and range images. The greatest challenge for streak image denoising is reducing the noise while preserving details. In this paper, we propose an adaptive Gaussian guided filter (AGGF) for noise removal and detail enhancement of streak images. The proposed algorithm is based on a guided filter (GF) and part of an adaptive bilateral filter (ABF). In the AGGF, the details are enhanced by optimizing the offset parameter. AGGF-denoised streak images are significantly sharper than those denoised by the GF. Moreover, the AGGF is a fast linear time algorithm achieved by recursively implementing a Gaussian filter kernel. Experimentally, AGGF demonstrates its capacity to preserve edges and thin structures and outperforms the existing bilateral filter and domain transform filter in terms of both visual quality and peak signal-to-noise ratio performance. PMID:25321679
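
    For reference, a Gaussian-guided variant of the standard guided filter can be sketched as below (Python with NumPy/SciPy); the AGGF's offset optimisation and exact parameterisation are not reproduced, so this is only an indicative edge-preserving smoother.

        # Sketch of a Gaussian guided filter in the spirit of the AGGF: the standard
        # guided filter with its box averages replaced by Gaussian smoothing.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaussian_guided_filter(guide, src, sigma=2.0, eps=1e-3):
            """Filter `src` using `guide` (both float 2-D arrays scaled to [0, 1])."""
            g = lambda a: gaussian_filter(a, sigma)
            mean_i, mean_p = g(guide), g(src)
            corr_ip, corr_ii = g(guide * src), g(guide * guide)
            var_i = corr_ii - mean_i * mean_i
            cov_ip = corr_ip - mean_i * mean_p
            a = cov_ip / (var_i + eps)          # local linear coefficient
            b = mean_p - a * mean_i             # local offset
            return g(a) * guide + g(b)          # smoothed coefficients applied per pixel

        # Self-guided use (guide == noisy image) behaves as an edge-preserving smoother:
        # denoised = gaussian_guided_filter(img, img, sigma=2.0, eps=1e-3)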

  7. The application of wavelet shrinkage denoising to magnetic Barkhausen noise measurements

    SciTech Connect

    Thomas, James

    2014-02-18

    The application of Magnetic Barkhausen Noise (MBN) as a non-destructive method of defect detection has proliferated throughout the manufacturing community. Instrument technology and measurement methodology have matured commensurately as applications have moved from R&D labs to the fully automated manufacturing environment. These new applications present a new set of challenges, including a range of error sources. A significant obstacle in many industrial applications is a decrease in signal-to-noise ratio due to (i) environmental EMI and (ii) compromises in sensor design made for the purposes of automation. The stochastic nature of MBN presents a challenge to any method of noise reduction. An application of wavelet shrinkage denoising is proposed as a method of decreasing extraneous noise in MBN measurements. The method is tested and yields marked improvement on measurements subject to EMI and grounding noise, and even on measurements made under ideal conditions.

  8. A blind detection scheme based on modified wavelet denoising algorithm for wireless optical communications

    NASA Astrophysics Data System (ADS)

    Li, Ruijie; Dang, Anhong

    2015-10-01

    This paper investigates a detection scheme that requires no channel state information for wireless optical communication (WOC) systems in turbulence-induced fading channels. The proposed scheme can effectively diminish the additive noise caused by background radiation and the photodetector, as well as the intensity scintillation caused by turbulence. The additive noise is mitigated significantly using the modified wavelet threshold denoising algorithm, and the intensity scintillation is then attenuated by exploiting the temporal correlation of the WOC channel. Moreover, to improve performance beyond that of the maximum likelihood decision, the maximum a posteriori probability (MAP) criterion is considered. Compared with the conventional blind detection algorithm, simulation results show that the proposed detection scheme improves the signal-to-noise ratio (SNR) performance by about 4.38 dB when the bit error rate and scintillation index (SI) are 1×10⁻⁶ and 0.02, respectively.

  9. Wavelet-based noise-model driven denoising algorithm for differential phase contrast mammography.

    PubMed

    Arboleda, Carolina; Wang, Zhentian; Stampanoni, Marco

    2013-05-01

    Traditional mammography can be positively complemented by phase contrast and scattering x-ray imaging, because these modalities can detect subtle differences in the electron density of a material and measure the local small-angle scattering power generated by microscopic density fluctuations in the specimen, respectively. The grating-based x-ray interferometry technique can produce absorption, differential phase contrast (DPC) and scattering signals of the sample in parallel, and works well with conventional x-ray sources; thus, it constitutes a promising method for more reliable breast cancer screening and diagnosis. Recently, our team showed that this novel technology can provide images superior to conventional mammography, and it was used to image whole native breast samples directly after mastectomy. The images acquired show high potential, but the noise level associated with the DPC and scattering signals is significant, so it must be removed in order to improve image quality and visualization. The noise models of the three signals have been investigated and the noise variance can be computed. In this work, a wavelet-based denoising algorithm using these noise models is proposed. It was evaluated with both simulated and experimental mammography data. The outcomes demonstrate that our method offers good denoising quality while simultaneously preserving the edges and important structural features. Therefore, it can help improve diagnosis and enable further post-processing techniques such as fusion of the three acquired signals.

  10. Non parametric denoising methods based on wavelets: Application to electron microscopy images in low exposure time

    SciTech Connect

    Soumia, Sid Ahmed; Messali, Zoubeida; Ouahabi, Abdeldjalil; Trepout, Sylvain; Messaoudi, Cedric; Marco, Sergio

    2015-01-13

    The 3D reconstruction of Cryo-Transmission Electron Microscopy (Cryo-TEM) and Energy-Filtering TEM (EFTEM) images is hampered by the noisy nature of these images, which makes their alignment difficult. This noise arises from the interaction between the frozen hydrated biological samples and the electron beam when the specimen is exposed to radiation for a long exposure time. This sensitivity to the electron beam has led specialists to acquire the specimen projection images at very low exposure time, which gives rise to a new problem: an extremely low signal-to-noise ratio (SNR). This paper investigates the problem of denoising TEM images acquired at very low exposure time. Our main objective is to enhance the quality of TEM images so as to improve the alignment process, which will in turn improve the three-dimensional tomographic reconstructions. We performed multiple tests on TEM images acquired at exposure times of 1 s, 0.5 s, 0.2 s and 0.1 s (i.e., with different values of SNR), with gold beads included to assist in the assessment step. We propose a framework to combine multiple noisy copies of the TEM images based on four different denoising methods: soft and hard wavelet thresholding, the bilateral filter as a non-linear technique able to preserve edges neatly, and a Bayesian approach in the wavelet domain in which context modeling is used to estimate the parameter for each coefficient. To ensure a high signal-to-noise ratio, we used the appropriate wavelet family at the appropriate level, choosing the "sym8" wavelet at level 3 as the most appropriate parameter. For the bilateral filtering, many tests were done in order to determine the proper filter parameters represented by the size of the filter, the range parameter and the

  11. Locally adaptive bilateral clustering for universal image denoising

    NASA Astrophysics Data System (ADS)

    Toh, K. K. V.; Mat Isa, N. A.

    2012-12-01

    This paper presents a novel and efficient locally adaptive denoising method based on clustering of pixels into regions of similar geometric and radiometric structures. Clustering is performed by adaptively segmenting pixels in the local kernel based on their augmented variational series. Then, noise pixels are restored by selectively considering the radiometric and spatial properties of every pixel in the formed clusters. The proposed method is exceedingly robust in conveying reliable local structural information even in the presence of noise. As a result, the proposed method substantially outperforms other state-of-the-art methods in terms of image restoration and computational cost. We support our claims with ample simulated and real data experiments. The relatively fast runtime from extensive simulations also suggests that the proposed method is suitable for a variety of image-based products — either embedded in image capturing devices or applied as image enhancement software.

  12. Wavelet Transform-Based De-Noising for Two-Photon Imaging of Synaptic Ca2+ Transients

    PubMed Central

    Tigaret, Cezar M.; Tsaneva-Atanasova, Krasimira; Collingridge, Graham L.; Mellor, Jack R.

    2013-01-01

    Postsynaptic Ca2+ transients triggered by neurotransmission at excitatory synapses are a key signaling step for the induction of synaptic plasticity and are typically recorded in tissue slices using two-photon fluorescence imaging with Ca2+-sensitive dyes. The signals generated are small with very low peak signal/noise ratios (pSNRs) that make detailed analysis problematic. Here, we implement a wavelet-based de-noising algorithm (PURE-LET) to enhance signal/noise ratio for Ca2+ fluorescence transients evoked by single synaptic events under physiological conditions. Using simulated Ca2+ transients with defined noise levels, we analyzed the ability of the PURE-LET algorithm to retrieve the underlying signal. Fitting single Ca2+ transients with an exponential rise and decay model revealed a distortion of τrise but improved accuracy and reliability of τdecay and peak amplitude after PURE-LET de-noising compared to raw signals. The PURE-LET de-noising algorithm also provided a ∼30-dB gain in pSNR compared to ∼16-dB pSNR gain after an optimized binomial filter. The higher pSNR provided by PURE-LET de-noising increased discrimination accuracy between successes and failures of synaptic transmission as measured by the occurrence of synaptic Ca2+ transients by ∼20% relative to an optimized binomial filter. Furthermore, in comparison to binomial filter, no optimization of PURE-LET de-noising was required for reducing arbitrary bias. In conclusion, the de-noising of fluorescent Ca2+ transients using PURE-LET enhances detection and characterization of Ca2+ responses at central excitatory synapses. PMID:23473483
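
    The exponential rise-and-decay transient model referred to above can be fitted, for example, with SciPy's curve_fit; the generic form below (amplitude, onset, tau_rise, tau_decay) is an assumption for illustration and not the paper's exact parameterisation.

        # Illustrative fit of an exponential rise-and-decay transient model used to
        # quantify tau_rise, tau_decay and peak amplitude.
        import numpy as np
        from scipy.optimize import curve_fit

        def transient(t, amp, t0, tau_rise, tau_decay):
            """Zero before onset t0, then exponential rise multiplied by exponential decay."""
            tt = np.clip(t - t0, 0.0, None)
            return amp * (1.0 - np.exp(-tt / tau_rise)) * np.exp(-tt / tau_decay)

        def fit_transient(t, y):
            p0 = [y.max(), t[np.argmax(y)] - 0.05, 0.01, 0.2]   # rough initial guesses (s)
            popt, _ = curve_fit(transient, t, y, p0=p0, maxfev=10000)
            return dict(zip(["amp", "t0", "tau_rise", "tau_decay"], popt))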

  13. Nonlinear denoising of functional magnetic resonance imaging time series with wavelets

    NASA Astrophysics Data System (ADS)

    Stausberg, Sven; Lehnertz, Klaus

    2009-04-01

    In functional magnetic resonance imaging (fMRI) the blood oxygenation level dependent (BOLD) effect is used to identify and delineate neuronal activity. The sensitivity of an fMRI-based detection of neuronal activation, however, strongly depends on the relative levels of signal and noise in the time series data, and a large number of different artifact and noise sources interfere with the weak signal changes of the BOLD response. Thus, noise reduction is important to allow an accurate estimation of single activation-related BOLD signals across brain regions. Techniques employed so far include filtering in the time or frequency domain, which, however, does not take into account possible nonlinearities of the BOLD response. Here we evaluate a previously proposed method for nonlinear denoising of short and transient signals, which combines the wavelet transform with techniques from nonlinear time series analysis. We adapt the method to the problem at hand and show that successful noise reduction and, more importantly, preservation of the shape of individual BOLD signals can be achieved even in the presence of in-band noise.

  14. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    PubMed

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract the fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm based on the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). The LMD method is then used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the fault feature signal is selected according to a correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method performs better than the standard LMD method, with a higher SNR and faster convergence.

  15. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    PubMed

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract the fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm based on the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). The LMD method is then used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the fault feature signal is selected according to a correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method performs better than the standard LMD method, with a higher SNR and faster convergence. PMID:26753616
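
    The post-decomposition steps described above (selecting the product function by correlation and inspecting its spectrum) might look roughly like the following NumPy sketch; the SGWT de-noising and LMD stages themselves are assumed to be provided elsewhere.

        # Sketch of PF selection by correlation coefficient followed by spectrum
        # computation; `pfs` is assumed to come from an LMD implementation and
        # `denoised` from the wavelet pre-treatment.
        import numpy as np

        def select_pf(pfs, denoised, fs):
            """pfs: array of shape (n_pf, n_samples); returns (best PF, freq, spectrum)."""
            corr = [abs(np.corrcoef(pf, denoised)[0, 1]) for pf in pfs]
            best = pfs[int(np.argmax(corr))]
            spectrum = np.abs(np.fft.rfft(best)) / best.size
            freq = np.fft.rfftfreq(best.size, d=1.0 / fs)
            return best, freq, spectrum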

  16. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  17. Adapted waveform analysis, wavelet packets, and local cosine libraries as a tool for image processing

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.; Woog, Lionel J.

    1995-09-01

    Adapted waveform analysis refers to a collection of FFT-like adapted transform algorithms. Given an image, these methods provide specially matched collections of templates (orthonormal bases) enabling an efficient coding of the image. Perhaps the closest well-known example of such a coding method is provided by musical notation, where each segment of music is represented by a musical score made up of notes (templates) characterised by their duration, pitch, location and amplitude; our method corresponds to transcribing the music in as few notes as possible. The extension to images and video is straightforward: we describe the image by collections of oscillatory patterns (paint-brush strokes) of various sizes, locations and amplitudes using a variety of orthogonal bases. These basis functions are chosen inside predefined libraries of oscillatory localized functions (trigonometric and wavelet-packet waveforms) so as to optimize the number of parameters needed to describe our object. These algorithms are of complexity N log N, opening the door to a large range of applications in signal and image processing, such as compression, feature extraction, denoising and enhancement. In particular we describe a class of special-purpose compressions for fingerprint images, as well as denoising tools for texture and noise extraction. We start by relating traditional Fourier methods to wavelet and wavelet-packet based algorithms using a recent refinement of the windowed sine and cosine transforms. We then derive an adapted local sine transform, show its relation to wavelet and wavelet-packet analysis, and describe an analysis toolkit illustrating the merits of different adaptive and nonadaptive schemes.

  18. Mouse EEG spike detection based on the adapted continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Tieng, Quang M.; Kharatishvili, Irina; Chen, Min; Reutens, David C.

    2016-04-01

    Objective. Electroencephalography (EEG) is an important tool in the diagnosis of epilepsy. Interictal spikes on EEG are used to monitor the development of epilepsy and the effects of drug therapy. EEG recordings are generally long and the data voluminous. Thus developing a sensitive and reliable automated algorithm for analyzing EEG data is necessary. Approach. A new algorithm for detecting and classifying interictal spikes in mouse EEG recordings is proposed, based on the adapted continuous wavelet transform (CWT). The construction of the adapted mother wavelet is founded on a template obtained from a sample comprising the first few minutes of an EEG data set. Main Result. The algorithm was tested with EEG data from a mouse model of epilepsy and experimental results showed that the algorithm could distinguish EEG spikes from other transient waveforms with a high degree of sensitivity and specificity. Significance. Differing from existing approaches, the proposed approach combines wavelet denoising, to isolate transient signals, with adapted CWT-based template matching, to detect true interictal spikes. Using the adapted wavelet constructed from a predefined template, the adapted CWT is calculated on small EEG segments to fit dynamical changes in the EEG recording.

  19. Nonlinear adaptive wavelet analysis of electrocardiogram signals

    NASA Astrophysics Data System (ADS)

    Yang, H.; Bukkapatnam, S. T.; Komanduri, R.

    2007-08-01

    Wavelet representation can provide an effective time-frequency analysis for nonstationary signals, such as the electrocardiogram (EKG) signals, which contain both steady and transient parts. In recent years, wavelet representation has been emerging as a powerful time-frequency tool for the analysis and measurement of EKG signals. The EKG signals contain recurring, near-periodic patterns of P, QRS, T, and U waveforms, each of which can have multiple manifestations. Identification and extraction of a compact set of features from these patterns is critical for effective detection and diagnosis of various disorders. This paper presents an approach to extract a fiducial pattern of EKG based on the consideration of the underlying nonlinear dynamics. The pattern, in a nutshell, is a combination of eigenfunctions of the ensembles created from a Poincaré section of EKG dynamics. The adaptation of wavelet functions to the fiducial pattern thus extracted yields two orders of magnitude (some 95%) more compact representation (measured in terms of Shannon signal entropy). Such a compact representation can facilitate the extraction of features that are less sensitive to extraneous noise and other variations. The adaptive wavelet can also lead to more efficient algorithms for beat detection and QRS cancellation as well as for the extraction of multiple classical EKG signal events, such as widths of QRS complexes and QT intervals.

  20. Powerline interference reduction in ECG signals using empirical wavelet transform and adaptive filtering.

    PubMed

    Singh, Omkar; Sunkaria, Ramesh Kumar

    2015-01-01

    Separating an information-bearing signal from the background noise is a general problem in signal processing. In a clinical environment, during acquisition of an electrocardiogram (ECG) signal, the ECG is corrupted by various noise sources such as powerline interference (PLI), baseline wander and muscle artifacts. This paper presents novel methods for the reduction of powerline interference in ECG signals using the empirical wavelet transform (EWT) and adaptive filtering. The proposed methods are compared with empirical mode decomposition (EMD) based PLI cancellation methods. A total of six methods for PLI reduction based on EMD and EWT are analysed and their results are presented in this paper. The EWT-based de-noising methods have lower computational complexity and are more efficient than the EMD-based de-noising methods. PMID:25412942
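
    As an illustration of adaptive filtering against powerline interference, the sketch below implements a generic LMS adaptive noise canceller with a 50 Hz sine/cosine reference pair (plain NumPy); it does not reproduce the EWT stage or the six specific methods compared in the paper.

        # Minimal LMS adaptive noise canceller for 50 Hz powerline interference
        # using a quadrature (sine/cosine) reference pair.
        import numpy as np

        def lms_powerline_cancel(ecg, fs, f0=50.0, mu=0.01):
            n = np.arange(ecg.size)
            ref = np.stack([np.sin(2 * np.pi * f0 * n / fs),
                            np.cos(2 * np.pi * f0 * n / fs)])   # quadrature references
            w = np.zeros(2)
            clean = np.empty_like(ecg, dtype=float)
            for k in range(ecg.size):
                y = w @ ref[:, k]              # current interference estimate
                e = ecg[k] - y                 # error = cleaned sample
                w += 2 * mu * e * ref[:, k]    # LMS weight update
                clean[k] = e
            return clean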

  1. Forecasting East Asian Indices Futures via a Novel Hybrid of Wavelet-PCA Denoising and Artificial Neural Network Models

    PubMed Central

    2016-01-01

    The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today’s increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to deduct the same noise from the open-high-low-close signal of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong’s Hang Seng futures, Japan’s NIKKEI 225 futures, Singapore’s MSCI futures, South Korea’s KOSPI 200 futures, and Taiwan’s TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN are more than the threshold buy-and-hold for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which insists that mechanical rules cannot outperform the threshold buy-and-hold. The findings, however, are consistent with literature that advocates technical analysis. PMID:27248692

  2. Forecasting East Asian Indices Futures via a Novel Hybrid of Wavelet-PCA Denoising and Artificial Neural Network Models.

    PubMed

    Chan Phooi M'ng, Jacinta; Mehralizadeh, Mohammadali

    2016-01-01

    The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today's increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to deduct the same noise from the open-high-low-close signal of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong's Hang Seng futures, Japan's NIKKEI 225 futures, Singapore's MSCI futures, South Korea's KOSPI 200 futures, and Taiwan's TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN are more than the threshold buy-and-hold for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which insists that mechanical rules cannot outperform the threshold buy-and-hold. The findings, however, are consistent with literature that advocates technical analysis.

  3. Forecasting East Asian Indices Futures via a Novel Hybrid of Wavelet-PCA Denoising and Artificial Neural Network Models.

    PubMed

    Chan Phooi M'ng, Jacinta; Mehralizadeh, Mohammadali

    2016-01-01

    The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today's increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to deduct the same noise from the open-high-low-close signal of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong's Hang Seng futures, Japan's NIKKEI 225 futures, Singapore's MSCI futures, South Korea's KOSPI 200 futures, and Taiwan's TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN are more than the threshold buy-and-hold for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which insists that mechanical rules cannot outperform the threshold buy-and-hold. The findings, however, are consistent with literature that advocates technical analysis. PMID:27248692

  4. A Neuro-Fuzzy Inference System Combining Wavelet Denoising, Principal Component Analysis, and Sequential Probability Ratio Test for Sensor Monitoring

    SciTech Connect

    Na, Man Gyun; Oh, Seungrohk

    2002-11-15

    A neuro-fuzzy inference system combined with wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor a relevant sensor using the information from other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals of the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to shorten the time necessary to train the neuro-fuzzy system, to simplify the structure of the neuro-fuzzy inference system, and to ease the selection of its input signals. Using the residual signals between the estimated and measured signals, the SPRT is applied to detect whether the sensors are degraded. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, pressurizer pressure, and hot-leg temperature sensors in pressurized water reactors.
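
    The SPRT stage applied to the residuals can be illustrated with a generic Wald test for a Gaussian mean shift, as sketched below (plain Python/NumPy; the error rates and the degradation hypothesis mu1 are user-chosen assumptions, not the paper's tuning).

        # Sketch of a Wald sequential probability ratio test on sensor residuals,
        # testing a mean shift mu1 against a healthy zero-mean Gaussian model.
        import numpy as np

        def sprt_mean_shift(residuals, sigma, mu1, alpha=0.01, beta=0.01):
            upper = np.log((1.0 - beta) / alpha)   # accept "degraded"
            lower = np.log(beta / (1.0 - alpha))   # accept "healthy", then restart
            llr, decisions = 0.0, []
            for r in residuals:
                llr += (mu1 / sigma**2) * (r - mu1 / 2.0)   # Gaussian log-likelihood ratio
                if llr >= upper:
                    decisions.append("degraded"); llr = 0.0
                elif llr <= lower:
                    decisions.append("healthy"); llr = 0.0
                else:
                    decisions.append("continue")
            return decisions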

  5. Adaptively Tuned Iterative Low Dose CT Image Denoising.

    PubMed

    Hashemi, SayedMasoud; Paul, Narinder S; Beheshti, Soosan; Cobbold, Richard S C

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  6. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of block matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972

  7. Adaptive wavelet Wiener filtering of ECG signals.

    PubMed

    Smital, Lukáš; Vítek, Martin; Kozumplík, Jiří; Provazník, Ivo

    2013-02-01

    In this study, we focused on the reduction of broadband myopotentials (EMG) in ECG signals using the wavelet Wiener filtering with noise-free signal estimation. We used the dyadic stationary wavelet transform (SWT) in the Wiener filter as well as in estimating the noise-free signal. Our goal was to find a suitable filter bank and to choose other parameters of the Wiener filter with respect to the signal-to-noise ratio (SNR) obtained. Testing was performed on artificially noised signals from the standard CSE database sampled at 500 Hz. When creating an artificial interference, we started from the generated white Gaussian noise, whose power spectrum was modified according to a model of the power spectrum of an EMG signal. To improve the filtering performance, we used adaptive setting parameters of filtering according to the level of interference in the input signal. We were able to increase the average SNR of the whole test database by about 10.6 dB. The proposed algorithm provides better results than the classic wavelet Wiener filter.

  8. Adaptive wavelet Wiener filtering of ECG signals.

    PubMed

    Smital, Lukáš; Vítek, Martin; Kozumplík, Jiří; Provazník, Ivo

    2013-02-01

    In this study, we focused on the reduction of broadband myopotentials (EMG) in ECG signals using the wavelet Wiener filtering with noise-free signal estimation. We used the dyadic stationary wavelet transform (SWT) in the Wiener filter as well as in estimating the noise-free signal. Our goal was to find a suitable filter bank and to choose other parameters of the Wiener filter with respect to the signal-to-noise ratio (SNR) obtained. Testing was performed on artificially noised signals from the standard CSE database sampled at 500 Hz. When creating an artificial interference, we started from the generated white Gaussian noise, whose power spectrum was modified according to a model of the power spectrum of an EMG signal. To improve the filtering performance, we used adaptive setting parameters of filtering according to the level of interference in the input signal. We were able to increase the average SNR of the whole test database by about 10.6 dB. The proposed algorithm provides better results than the classic wavelet Wiener filter. PMID:23192472
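
    A simplified wavelet-domain Wiener filter in the spirit of the two records above is sketched below (Python with PyWavelets); the dyadic DWT stands in for the stationary wavelet transform used in the paper, and the pilot estimate is obtained by ordinary hard thresholding rather than the authors' noise-free signal estimator.

        # Simplified wavelet-domain Wiener filter: a pilot estimate obtained by
        # thresholding supplies the signal power used in the Wiener gain that is
        # applied to the noisy coefficients.
        import numpy as np
        import pywt

        def wavelet_wiener(x, wavelet="db4", level=4):
            noisy = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(noisy[-1])) / 0.6745          # noise level (MAD)
            lam = sigma * np.sqrt(2.0 * np.log(x.size))
            pilot = [noisy[0]] + [pywt.threshold(d, lam, "hard") for d in noisy[1:]]
            out = [noisy[0]]
            for d_noisy, d_pilot in zip(noisy[1:], pilot[1:]):
                gain = d_pilot**2 / (d_pilot**2 + sigma**2)        # Wiener shrinkage
                out.append(gain * d_noisy)
            return pywt.waverec(out, wavelet)[: x.size]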

  9. Analysis of hydrological trend for radioactivity content in bore-hole water samples using wavelet based denoising.

    PubMed

    Paul, Sabyasachi; Suman, V; Sarkar, P K; Ranade, A K; Pulhani, V; Dafauti, S; Datta, D

    2013-08-01

    A wavelet transform based denoising methodology has been applied to detect the presence of any discernible trend in (137)Cs and (90)Sr activity levels in bore-hole water samples collected four times a year over a period of eight years, from 2002 to 2009, in the vicinity of typical nuclear facilities inside restricted access zones. The conventional non-parametric methods, viz. Mann-Kendall and Spearman's rho, along with linear regression, do not yield conclusive trend detection at 95% confidence for most of the samples when applied to the raw time series data. A stationary wavelet based hard-thresholding data pruning method with Haar as the analyzing wavelet was applied to remove the noise present in the same data. Results indicate that the confidence level of the established trend improves significantly after this pre-processing, to more than 98%, compared with applying the conventional non-parametric methods directly to the measurements.
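
    For reference, the Mann-Kendall trend test mentioned above can be computed as in the sketch below (Python with NumPy/SciPy, no tie correction); it would typically be applied to the activity time series after the wavelet pre-processing step.

        # Reference implementation of the Mann-Kendall trend test (no tie correction).
        import numpy as np
        from scipy.stats import norm

        def mann_kendall(x):
            x = np.asarray(x, dtype=float)
            n = x.size
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance of S without ties
            if s > 0:
                z = (s - 1) / np.sqrt(var_s)
            elif s < 0:
                z = (s + 1) / np.sqrt(var_s)
            else:
                z = 0.0
            p = 2.0 * (1.0 - norm.cdf(abs(z)))            # two-sided p-value
            return s, z, p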

  10. Adaptive approach for variable noise suppression on laser-induced breakdown spectroscopy responses using stationary wavelet transform.

    PubMed

    Schlenke, Jan; Hildebrand, Lars; Moros, Javier; Laserna, J Javier

    2012-11-19

    Spectral signals are often corrupted by noise during their acquisition and transmission. Signal processing refers to a variety of operations that can be carried out on measurements in order to enhance the quality of information. In this sense, signal denoising is used to reduce noise distortions while keeping alterations of the important signal features to a minimum. The minimization of noise is a highly critical task since, in many cases, there is no prior knowledge of the signal or of the noise. In the context of denoising, wavelet transformation has become a valuable tool. The present paper proposes a noise reduction technique for suppressing noise in laser-induced breakdown spectroscopy (LIBS) signals using the wavelet transform. An extension of Donoho's scheme, which uses a redundant form of wavelet transformation and an adaptive threshold estimation method, is suggested. Capabilities and results achieved on denoising processes of artificial signals and actual spectroscopic data, both corrupted by noise with changing intensities, are presented. In order to better consolidate the gains so far achieved by the proposed strategy, a comparison with alternative approaches, as well as with traditional techniques, is also made.

  11. Enhanced Terahertz Imaging of Small Forced Delamination in Woven Glass Fibre-reinforced Composites with Wavelet De-noising

    NASA Astrophysics Data System (ADS)

    Dong, Junliang; Locquet, Alexandre; Citrin, D. S.

    2016-03-01

    Terahertz (THz) reflection imaging is applied to characterize a woven glass fibre-reinforced composite laminate with a small region of forced delamination. The forced delamination is created by inserting a disk of 25-μm-thick Upilex film, which is below the THz axial resolution, resulting in one featured echo with small amplitude in the reflected THz pulses. Low-amplitude components of the temporal signal due to ambient water vapor produce features of comparable amplitude with features associated with the THz pulse reflected off the interfaces of the delamination and suppress the contrast of THz C- and B-scans. Wavelet shrinkage de-noising is performed to remove water-vapor features, leading to enhanced THz C- and B-scans to locate the delamination in three dimensions with high contrast.

  12. Adaptive wavelet simulation of global ocean dynamics

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-07-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for linearized one dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.

  13. Wavelet-based denoising of the Fourier metric in real-time wavefront correction for single molecule localization microscopy

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Mortensen, Luke J.; Kner, Peter

    2016-03-01

    Wavefront sensorless schemes for correction of aberrations induced by biological specimens require a time invariant property of an image as a measure of fitness. Image intensity cannot be used as a metric for Single Molecule Localization (SML) microscopy because the intensity of blinking fluorophores follows exponential statistics. Therefore a robust intensity-independent metric is required. We previously reported a Fourier Metric (FM) that is relatively intensity independent. The Fourier metric has been successfully tested on two machine learning algorithms, a Genetic Algorithm and Particle Swarm Optimization, for wavefront correction about 50 μm deep inside the Central Nervous System (CNS) of Drosophila. However, since the spatial frequencies that need to be optimized fall into regions of the Optical Transfer Function (OTF) that are more susceptible to noise, adding a level of denoising can improve performance. Here we present wavelet-based approaches to lower the noise level and produce a more consistent metric. We compare performance of different wavelets such as Daubechies, Bi-Orthogonal, and reverse Bi-orthogonal of different degrees and orders for pre-processing of images.

  14. A New Adaptive Mother Wavelet for Electromagnetic Transient Analysis

    NASA Astrophysics Data System (ADS)

    Guillén, Daniel; Idárraga-Ospina, Gina; Cortes, Camilo

    2016-01-01

    Wavelet Transform (WT) is a powerful signal processing technique, and its applications in power systems have been increasing for evaluating power system conditions such as faults, switching transients, and power quality issues, among others. Electromagnetic transients in power systems are due to changes in the network configuration and produce non-periodic signals, which have to be identified to avoid power outages in normal operation or under transient conditions. In this paper a methodology to develop a new adaptive mother wavelet for electromagnetic transient analysis is proposed. Classification is carried out with an innovative technique based on adaptive wavelets, in which the filter bank coefficients are adapted until a discriminant criterion is optimized. The corresponding filter coefficients are then used to obtain the new mother wavelet, named wavelet ET, which makes it possible to identify and distinguish the high-frequency information produced by different electromagnetic transients.

  15. Adaptive Tensor-Based Principal Component Analysis for Low-Dose CT Image Denoising

    PubMed Central

    Ai, Danni; Yang, Jian; Fan, Jingfan; Cong, Weijian; Wang, Yongtian

    2015-01-01

    Computed tomography (CT) has revolutionized diagnostic radiology but involves large radiation doses that directly impact image quality. In this paper, we propose an adaptive tensor-based principal component analysis (AT-PCA) algorithm for low-dose CT image denoising. Pixels in the image are represented by their nearby neighbors and modeled as patches. Adaptive searching windows are calculated to find similar patches as training groups for further processing. Tensor-based PCA is used to obtain transformation matrices, and coefficients are sequentially shrunk by the linear minimum mean square error. Reconstructed patches are obtained, and a denoised image is finally achieved by aggregating all of these patches. The experimental results on the standard test image show that the best results are obtained with two denoising rounds according to six quantitative measures. For the experiments on clinical images, the proposed AT-PCA method can suppress the noise, enhance the edges, and improve the image quality more effectively than the NLM and KSVD denoising methods. PMID:25993566

  16. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

    NASA Astrophysics Data System (ADS)

    Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

    2002-08-01

    We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.

  17. Incrementing data quality of multi-frequency echograms using the Adaptive Wiener Filter (AWF) denoising algorithm

    NASA Astrophysics Data System (ADS)

    Peña, M.

    2016-10-01

    Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, is raising the need for good denoising algorithms. The use of a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first increases the quality of the data with a variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle and salt-and-pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.

  18. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal

    PubMed Central

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-01-01

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise the fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at every time step to ensure the lowest noise level of the output; however, the inertia of the KF response increases in dynamic conditions. To decrease this inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal. PMID:26512665

  19. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    PubMed

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise the fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at every time step to ensure the lowest noise level of the output; however, the inertia of the KF response increases in dynamic conditions. To decrease this inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by an adaptive moving average (AMA). The AMA-RWE-DFAKF is applied to denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE-based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.
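
    As a baseline for the adaptive variants discussed in the two records above, a conventional scalar Kalman filter for a slowly varying drift signal can be sketched as follows (plain NumPy; the process and measurement variances are fixed by the user here rather than adaptively tuned by AMA/RWE).

        # Baseline scalar Kalman filter for a slowly varying drift signal, i.e. the
        # conventional KF that the adaptive variants improve upon.
        import numpy as np

        def scalar_kalman(z, q=1e-6, r=1e-2, x0=0.0, p0=1.0):
            x, p = x0, p0
            out = np.empty(len(z))
            for k, zk in enumerate(z):
                p = p + q                    # predict (random-walk state model)
                k_gain = p / (p + r)         # Kalman gain
                x = x + k_gain * (zk - x)    # update with measurement zk
                p = (1.0 - k_gain) * p
                out[k] = x
            return out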

  20. An adaptive morphological gradient lifting wavelet for detecting bearing defects

    NASA Astrophysics Data System (ADS)

    Li, Bing; Zhang, Pei-lin; Mi, Shuang-shan; Hu, Ren-xi; Liu, Dong-sheng

    2012-05-01

    This paper presents a novel wavelet decomposition scheme, named the adaptive morphological gradient lifting wavelet (AMGLW), for detecting bearing defects. The adaptability of the AMGLW lies in the fact that the scheme can select between two filters, namely the mean (average) filter and the morphological gradient filter, to update the approximation signal based on the local gradient of the analyzed signal. Both a simulated signal and vibration signals acquired from bearings are employed to evaluate and compare the proposed AMGLW scheme with the traditional linear wavelet transform (LWT) and another adaptive lifting wavelet (ALW) developed in the literature. Experimental results reveal that the AMGLW clearly outperforms the LWT and ALW for detecting bearing defects. The impulsive components can be enhanced and the noise suppressed simultaneously by the presented AMGLW scheme, so the fault characteristic frequencies of the bearing can be clearly identified. Furthermore, the AMGLW has an advantage over the LWT in computational efficiency, which makes it quite suitable for online condition monitoring of bearings and other rotating machinery.

  1. Application of a Multidimensional Wavelet Denoising Algorithm for the Detection and Characterization of Astrophysical Sources of Gamma Rays

    SciTech Connect

    Digel, S.W.; Zhang, B.; Chiang, J.; Fadili, J.M.; Starck, J.-L.; /Saclay /Stanford U., Statistics Dept.

    2005-12-02

    Zhang, Fadili, & Starck have recently developed a denoising procedure for Poisson data that offers advantages over other methods of intensity estimation in multiple dimensions. Their procedure, which is nonparametric, is based on thresholding wavelet coefficients. The restoration algorithm applied after thresholding provides good conservation of source flux. We present an investigation of the procedure of Zhang et al. for the detection and characterization of astrophysical sources of high-energy gamma rays, using realistic simulated observations with the Large Area Telescope (LAT). The LAT is to be launched in late 2007 on the Gamma-ray Large Area Space Telescope mission. Source detection in the LAT data is complicated by the low fluxes of point sources relative to the diffuse celestial background, the limited angular resolution, and the tremendous variation of that resolution with energy (from tens of degrees at ~30 MeV to 0.1° at 10 GeV). The algorithm is very fast relative to traditional likelihood model fitting, and permits immediate estimation of spectral properties. Astrophysical sources of gamma rays, especially active galaxies, are typically quite variable, and our current work may lead to a reliable method to quickly characterize the flaring properties of newly-detected sources.

  2. Rolling element bearing defect diagnosis under variable speed operation through angle synchronous averaging of wavelet de-noised estimate

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-05-01

    Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data
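
    A hedged Python sketch of the four-stage idea (angle-domain resampling, angle synchronous averaging, wavelet de-noising, envelope order spectrum) follows; the wavelet, threshold rule and sampling parameters are illustrative assumptions rather than the authors' settings:

        import numpy as np
        import pywt
        from scipy.signal import hilbert
        from scipy.interpolate import interp1d

        def envelope_order_spectrum(vib, shaft_angle, samples_per_rev=256,
                                    wavelet="db8", level=4):
            """Simplified four-stage pipeline.  vib is the vibration signal and
            shaft_angle the shaft position in revolutions (monotonic, starting
            at zero); all parameter values are illustrative assumptions."""
            # 1. angle-domain resampling
            n_rev = int(np.floor(shaft_angle[-1]))
            ang_grid = np.arange(n_rev * samples_per_rev) / samples_per_rev
            x_ang = interp1d(shaft_angle, vib)(ang_grid)

            # 2. angle-synchronous average over the available revolutions
            avg_cycle = x_ang.reshape(n_rev, samples_per_rev).mean(axis=0)

            # 3. wavelet de-noising with a universal soft threshold
            coeffs = pywt.wavedec(avg_cycle, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(avg_cycle.size))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
            den = pywt.waverec(coeffs, wavelet)[: avg_cycle.size]

            # 4. envelope (Hilbert) and its order spectrum
            env = np.abs(hilbert(den))
            spec = np.abs(np.fft.rfft(env - env.mean()))
            orders = np.fft.rfftfreq(env.size, d=1.0 / samples_per_rev)
            return orders, spec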

  3. Signal enhancement of a novel multi-address coding lidar backscatters based on a combined technique of demodulation and wavelet de-noising

    NASA Astrophysics Data System (ADS)

    Xu, Fan; Wang, Yuanqing

    2015-11-01

    Multi-address coding (MAC) lidar is a novel lidar system recently developed by our laboratory. By applying a new combined technique of multi-address encoding, multiplexing and decoding, range resolution is effectively improved. In data processing, a signal enhancement method involving laser signal demodulation and wavelet de-noising in the downlink is proposed to improve the signal-to-noise ratio (SNR) of the raw signal and the capability for remote applications. In this paper, the working mechanism of MAC lidar is introduced and the implementation of encoding and decoding is also illustrated. We focus on the signal enhancement method and provide the mathematical model and analysis of the algorithm, based on the combined method of demodulation and wavelet de-noising. The experimental results and analysis demonstrate that the signal enhancement approach improves the SNR of the raw data. Overall, compared with a conventional lidar system, MAC lidar achieves higher resolution and better de-noising performance in long-range detection.

  4. Adaptive non-local means filtering based on local noise level for CT denoising

    NASA Astrophysics Data System (ADS)

    Li, Zhoubo; Yu, Lifeng; Trzasko, Joshua D.; Fletcher, Joel G.; McCollough, Cynthia H.; Manduca, Armando

    2012-03-01

    Radiation dose from CT scans is an increasing health concern in the practice of radiology. Higher dose scans can produce clearer images with high diagnostic quality, but may increase the potential risk of radiation-induced cancer or other side effects. Lowering radiation dose alone generally produces a noisier image and may degrade diagnostic performance. Recently, CT dose reduction based on non-local means (NLM) filtering for noise reduction has yielded promising results. However, traditional NLM denoising operates under the assumption that image noise is spatially uniform, while in CT images the noise level varies significantly within and across slices. Therefore, applying NLM filtering to CT data using a global filtering strength cannot achieve optimal denoising performance. In this work, we have developed a technique for efficiently estimating the local noise level of CT images, and have modified the NLM algorithm to adapt to local variations in noise level. The local noise level estimation technique matches the true noise distribution determined from multiple repetitive scans of a phantom object very well. The modified NLM algorithm provides more effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with the clinical workflow.
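
    A rough Python sketch of locally adaptive NLM, using scikit-image's denoise_nl_means tile by tile with the filtering strength tied to a local noise map, conveys the idea; the tile size and the h-to-sigma ratio are assumptions, and the authors' noise-map estimation is not reproduced:

        import numpy as np
        from skimage.restoration import denoise_nl_means

        def adaptive_nlm(img, noise_map, tile=64, h_factor=0.8):
            """Tile-wise non-local means whose strength follows a per-pixel noise
            map (a crude stand-in for the slice-varying CT noise adaptation of
            the paper; tile and h_factor are illustrative and tiling introduces
            mild block boundaries)."""
            out = np.zeros(img.shape, dtype=float)
            for r in range(0, img.shape[0], tile):
                for c in range(0, img.shape[1], tile):
                    block = img[r:r + tile, c:c + tile].astype(float)
                    sigma = max(float(np.mean(noise_map[r:r + tile, c:c + tile])), 1e-6)
                    out[r:r + tile, c:c + tile] = denoise_nl_means(
                        block, patch_size=5, patch_distance=6,
                        h=h_factor * sigma, sigma=sigma, fast_mode=True)
            return out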

  5. Adaptive video compressed sampling in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi

    2016-07-01

    In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.

  6. Big data extraction with adaptive wavelet analysis (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Qu, Hongya; Chen, Genda; Ni, Yiqing

    2015-04-01

    Nondestructive evaluation and sensing technology have been increasingly applied to characterize material properties and detect local damage in structures. More often than not, they generate images or data strings in which physical features are difficult to see without novel data extraction techniques. In the literature, popular data analysis techniques include the Short-time Fourier Transform, the Wavelet Transform, and the Hilbert Transform for time efficiency and adaptive recognition. In this study, a new data analysis technique is proposed and developed by introducing an adaptive central frequency of the continuous Morlet wavelet transform so that both high frequency and time resolution can be maintained in a time-frequency window of interest. The new analysis technique is referred to as Adaptive Wavelet Analysis (AWA). This paper is organized in several sections. In the first section, finite time-frequency resolution limitations of the traditional wavelet transform are introduced. Such limitations would greatly distort transformed signals with a significant frequency variation over time. In the second section, the Short Time Wavelet Transform (STWT), similar to the Short Time Fourier Transform (STFT), is defined and developed to overcome this shortcoming of the traditional wavelet transform. In the third section, by utilizing the STWT and a time-variant central frequency of the Morlet wavelet, AWA can adapt the time-frequency resolution requirement to the signal variation over time. Finally, the advantage of the proposed AWA is demonstrated in Section 4 with a ground penetrating radar (GPR) image from a bridge deck, an analytical chirp signal with a large sinusoidal frequency change over time, and the train-induced acceleration responses of the Tsing-Ma Suspension Bridge in Hong Kong, China. The performance of the proposed AWA is compared with the STFT and the traditional wavelet transform.

  7. A Comprehensive Noise Robust Speech Parameterization Algorithm Using Wavelet Packet Decomposition-Based Denoising and Speech Feature Representation Techniques

    NASA Astrophysics Data System (ADS)

    Kotnik, Bojan; Kačič, Zdravko

    2007-12-01

    This paper concerns the problem of automatic speech recognition in noise-intense and adverse environments. The main goal of the proposed work is the definition, implementation, and evaluation of a novel noise robust speech signal parameterization algorithm. The proposed procedure is based on time-frequency speech signal representation using wavelet packet decomposition. A new modified soft thresholding algorithm based on time-frequency adaptive threshold determination was developed to efficiently reduce the level of additive noise in the input noisy speech signal. A two-stage Gaussian mixture model (GMM)-based classifier was developed to perform speech/nonspeech as well as voiced/unvoiced classification. The adaptive topology of the wavelet packet decomposition tree based on voiced/unvoiced detection was introduced to separately analyze voiced and unvoiced segments of the speech signal. The main feature vector consists of a combination of log-root compressed wavelet packet parameters, and autoregressive parameters. The final output feature vector is produced using a two-staged feature vector postprocessing procedure. In the experimental framework, the noisy speech databases Aurora 2 and Aurora 3 were applied together with corresponding standardized acoustical model training/testing procedures. The automatic speech recognition performance achieved using the proposed noise robust speech parameterization procedure was compared to the standardized mel-frequency cepstral coefficient (MFCC) feature extraction procedures ETSI ES 201 108 and ETSI ES 202 050.

  8. Rolling element bearings multi-fault classification based on the wavelet denoising and support vector machine

    NASA Astrophysics Data System (ADS)

    Abbasion, S.; Rafsanjani, A.; Farshidianfar, A.; Irani, N.

    2007-10-01

    Due to the importance of rolling bearings as one of the most widely used industrial machinery elements, development of a proper monitoring and fault diagnosis procedure to prevent malfunctioning and failure of these elements during operation is necessary. For rolling bearing fault detection, it is expected that a desired time-frequency analysis method has good computational efficiency and good resolution in both the time and frequency domains. The point of interest of this investigation is an effective method for multi-fault diagnosis in such systems, with optimized signal decomposition levels, using wavelet analysis and a support vector machine (SVM). The system under study is an electric motor with two rolling bearings, one next to the output shaft and the other next to the fan; for each of them there is one normal condition and three fault conditions, which makes eight conditions for study. The results obtained from wavelet analysis and SVM are fully in agreement with the empirical results.
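
    A minimal Python sketch of the wavelet-plus-SVM pipeline is given below; the relative sub-band energies used as features and the SVM settings are illustrative assumptions, not necessarily the features used in the paper:

        import numpy as np
        import pywt
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def wavelet_energy_features(sig, wavelet="db4", level=4):
            """Relative energy of each wavelet sub-band, a common feature vector
            for bearing-fault classification (an assumed choice of features)."""
            coeffs = pywt.wavedec(sig, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()

        def train_classifier(signals, labels):
            """signals: list of vibration segments; labels: one normal class plus
            three fault classes per bearing, as in the abstract."""
            X = np.vstack([wavelet_energy_features(s) for s in signals])
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
            return clf.fit(X, labels)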

  9. Evaluation of Wavelet and Non-Local Mean Denoising of Terrestrial Laser Scanning Data for Small-Scale Joint Roughness Estimation

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2016-06-01

    Terrestrial Laser Scanning (TLS) is a well-known remote sensing tool that enables precise 3D acquisition of surface morphology from distances of a few meters to a few kilometres. The morphological representations obtained are important in engineering geology and rock mechanics, where surface morphology details are of particular interest in rock stability problems and engineering construction. The actual size of the discernible surface detail depends on the instrument range error (noise effect) and the effective data resolution (smoothing effect). Range error can be (partly) removed by applying a denoising method. Based on the positive results from previous studies, two denoising methods, namely the 2D wavelet transform (WT) and the non-local mean (NLM), are tested here, with the goal of obtaining roughness estimations that are suitable in the context of rock engineering practice. Both methods are applied in two variants: conventional Discrete WT (DWT) and Stationary WT (SWT), classic NLM (NLM) and probabilistic NLM (PNLM). The noise effect and denoising performance are studied in relation to the TLS effective data resolution. Analyses are performed on reference data acquired by a highly precise Advanced TOpometric Sensor (ATOS) on a 20x30 cm rock joint sample. The roughness ratio is computed by comparing the noisy and denoised surfaces to the original ATOS surface. The roughness ratio indicates the success of all denoising methods. It also shows that the SWT oversmooths the surface and that the performance of the DWT, NLM and PNLM varies with the noise level and data resolution. The noise effect becomes less prominent when data resolution decreases.

  10. Improving the performance of the prony method using a wavelet domain filter for MRI denoising.

    PubMed

    Jaramillo, Rodney; Lentini, Marianela; Paluszny, Marco

    2014-01-01

    The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T2-weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm and we use synthetic images to show the ability of the new procedure to suppress noise and compare its performance with respect to the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated using a real sequence of T2 MR images, and the filter is applied to each image before using the variant of the Prony method.

  11. Wavelet De-noising of GNSS Based Bridge Health Monitoring Data

    NASA Astrophysics Data System (ADS)

    Ogundipe, Oluropo; Lee, Jae Kang; Roberts, Gethin Wyn

    2014-11-01

    GNSS signal multipath occurs when the GNSS signal reflects off objects in the antenna environment and arrives at the antenna via multiple paths. A bridge environment is one that is prone to multipath, with the bridge structure, as well as passing vehicles, providing static and dynamic sources of multipath. In this paper, the Wavelet Transform (WT) is applied to bridge data collected on the Machang cable-stayed bridge in Korea. The WT algorithm was applied to the GNSS-derived bridge deflection data at the mid-span. Up to 41% improvement in RMS was observed after wavelet shrinkage de-noising was applied. Application of this algorithm to the torsion data showed significant improvement, with the residual average and RMS decreased by 40% and 45% respectively. This method enabled the generation of more accurate information for bridge health monitoring systems in terms of the analysis of frequency, mode shape and three-dimensional deflections.
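
    The wavelet shrinkage step itself can be sketched in Python with a standard universal soft threshold; the wavelet, decomposition level and threshold rule below are assumptions, since the abstract does not specify them:

        import numpy as np
        import pywt

        def wavelet_shrinkage(x, wavelet="sym8", level=5):
            """Soft-threshold (VisuShrink-style) wavelet shrinkage, a generic
            stand-in for the de-noising applied to the GNSS deflection series."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(len(x)))          # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(x)]

        # usage: the RMS reduction quoted in the abstract would be computed as
        # 1 - np.std(denoised - reference) / np.std(raw - reference)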

  12. Improving the Performance of the Prony Method Using a Wavelet Domain Filter for MRI Denoising

    PubMed Central

    Lentini, Marianela; Paluszny, Marco

    2014-01-01

    The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T2 weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm and we use synthetic images to show the ability of the new procedure to suppress noise and compare its performance with respect to the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated using a real sequence of T2 MR images, and the filter is applied to each image before using the variant of the Prony method. PMID:24834108

  13. Adaptive nonlocal means filtering based on local noise level for CT denoising

    SciTech Connect

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.; Blezek, Daniel J.; Manduca, Armando; Yu, Lifeng; Fletcher, Joel G.; McCollough, Cynthia H.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). Graphical processing units (GPU) implementation of this noise map calculation and the adaptive NLM filtering were developed to meet demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the

  14. Subsidence measurement and DSM extraction of IFSAR data using anisotropic diffusion and wavelet denoising filters

    NASA Astrophysics Data System (ADS)

    Sartor, Kenneth; Allen, Josef De Vaughn; Ganthier, Emile; Rahmes, Mark; Tenali, Gnana Bhaskar; Kozaitis, Samuel

    2008-04-01

    The most commonly used smoothing algorithms for complex data processing are low pass filters. Unfortunately, an undesired side effect of these techniques is the blurring of scene discontinuities in the interferogram. For Digital Surface Map (DSM) extraction and subsidence measurement, the smoothing of scene discontinuities can cause inaccuracy in the final product. Our goal is to perform spatially non-uniform smoothing to overcome these disadvantages. We achieve this by using an Anisotropic Non-Linear Diffuser (ANDI). In this paper we show the utility of ANDI filtering on simulated and actual Interferometric Synthetic Aperture Radar (IFSAR) data for preprocessing, subsidence measurement and DSM extraction, overcoming the difficulties of typical filters. We also compare the results of the ANDI filter with a wavelet filter. Finally, we detail some of our results from the New Orleans IFSAR research project with the Canadian Space Agency, NASA, and USGS. The Harris LiteSite(TM) Urban 3D Modeling software is used to illustrate some of the results of our RADARSAT-1 processing.
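
    A Perona-Malik style diffusion sketch in Python illustrates the anisotropic non-linear smoothing idea; it is a simplified stand-in for ANDI, with illustrative parameters, and it ignores the complex-valued nature of interferograms:

        import numpy as np

        def anisotropic_diffusion(img, n_iter=20, kappa=0.1, gamma=0.2):
            """Perona-Malik style anisotropic diffusion on a real-valued image;
            kappa controls which gradients are treated as edges and gamma is the
            step size (both illustrative)."""
            u = img.astype(float).copy()
            for _ in range(n_iter):
                # one-sided differences in the four compass directions
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # edge-stopping conductivities: small across strong discontinuities
                cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
                ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
                u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
            return u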

  15. Simultaneous seismic data interpolation and denoising with a new adaptive method based on dreamlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Chen, Xiaohong; Li, Jingye

    2015-05-01

    Interpolation and random noise removal are a prerequisite for multichannel techniques because the irregularity and random noise in observed data can affect their performance. The Projection Onto Convex Sets (POCS) method can handle seismic data interpolation well if the data's signal-to-noise ratio (SNR) is high, while it has difficulty in noisy situations because it re-inserts the noisy observed seismic data in each iteration. The weighted POCS method can weaken the noise effects, but its performance is affected by the choice of weight factors and is still unsatisfactory. Thus, a new weighted POCS method is derived through the Iterative Hard Threshold (IHT) view, and in order to eliminate random noise, a new adaptive method is proposed to achieve simultaneous seismic data interpolation and denoising based on the dreamlet transform. The performances of the POCS method, the weighted POCS method and the proposed method are compared in simultaneous seismic data interpolation and denoising, and the recovered SNRs confirm that the proposed adaptive method is the most effective of the three. Numerical examples on synthetic and real data demonstrate the validity of the proposed adaptive method.
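
    The POCS-style iteration (threshold in a sparsifying transform, then re-insert the observed traces) can be sketched in Python; a generic 2-D wavelet stands in for the dreamlet transform, and the exponentially decaying threshold schedule is an assumption:

        import numpy as np
        import pywt

        def pocs_interpolate(obs, mask, wavelet="db6", n_iter=100):
            """POCS-style interpolation sketch.  obs is the zero-filled data and
            mask is 1 where traces were observed; wavelet and schedule are
            illustrative, not the dreamlet-based scheme of the paper."""
            x = obs.astype(float).copy()
            arr0, _ = pywt.coeffs_to_array(pywt.wavedec2(obs, wavelet))
            tmax = np.abs(arr0).max()
            for k in range(n_iter):
                thr = tmax * np.exp(-5.0 * k / n_iter)
                arr, slices = pywt.coeffs_to_array(pywt.wavedec2(x, wavelet))
                arr = pywt.threshold(arr, thr, mode="hard")
                coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
                x = pywt.waverec2(coeffs, wavelet)[: obs.shape[0], : obs.shape[1]]
                x = mask * obs + (1 - mask) * x      # keep the observed samples
            return x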

  16. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are based on the linear wavelet transform, and fewer are based on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. Then the watermark information is adaptively embedded into the original image at different resolutions, combining the features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.

  17. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

  18. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  19. Visualizing 3D Turbulence On Temporally Adaptive Wavelet Collocation Grids

    NASA Astrophysics Data System (ADS)

    Goldstein, D. E.; Kadlec, B. J.; Yuen, D. A.; Erlebacher, G.

    2005-12-01

    Today there is an explosion in data from high-resolution computations of nonlinear phenomena in many fields, including the geo- and environmental sciences. The efficient storage and subsequent visualization of these large data sets is a trade-off between storage costs and data quality. New dynamically adaptive simulation methodologies promise significant computational cost savings and have the added benefit of producing results on adapted grids that significantly reduce storage and data manipulation costs. Yet, with these adaptive simulation methodologies come new challenges in the visualization of temporally adaptive data sets. In this work turbulence data sets from Stochastic Coherent Adaptive Large Eddy Simulations (SCALES) are visualized with the open source tool ParaView, as a challenging case study. SCALES simulations use a temporally adaptive collocation grid defined by wavelet threshold filtering to resolve the most energetic coherent structures in a turbulence field. A subgrid scale model is used to account for the effect of unresolved subgrid scale modes. The results from the SCALES simulations are saved on a thresholded dyadic wavelet collocation grid, which by its nature does not include cell information. ParaView is an open source visualization package developed by Kitware that is based on the widely used VTK graphics toolkit. The efficient generation of cell information, required with current ParaView data formats, is explored using custom algorithms and VTK toolkit routines. Adaptive 3D visualizations using isosurfaces and volume visualizations are compared with non-adaptive visualizations. To explore the localized multiscale structures in the turbulent data sets, the wavelet coefficients are also visualized, showing the energy contained in local physical regions as well as in local wave number space.

  20. Solving Chemical Master Equations by an Adaptive Wavelet Method

    SciTech Connect

    Jahnke, Tobias; Galan, Steffen

    2008-09-01

    Solving chemical master equations is notoriously difficult due to the tremendous number of degrees of freedom. We present a new numerical method which efficiently reduces the size of the problem in an adaptive way. The method is based on a sparse wavelet representation and an algorithm which, in each time step, detects the essential degrees of freedom required to approximate the solution up to the desired accuracy.

  1. Combined self-learning based single-image super-resolution and dual-tree complex wavelet transform denoising for medical images

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David

    2016-03-01

    In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.

  2. Focal artifact removal from ongoing EEG--a hybrid approach based on spatially-constrained ICA and wavelet de-noising.

    PubMed

    Akhtar, Muhammad Tahir; James, Christopher J

    2009-01-01

    Detecting artifacts produced in electroencephalographic (EEG) data by muscle activity, eye blinks, electrical noise, etc., is an important problem in EEG signal processing research. These artifacts must be corrected before further analysis because they render subsequent analysis very error-prone. One solution is to reject the data segment if an artifact is present during the observation interval; however, the rejected data segment could contain important information masked by the artifact. It has already been demonstrated that independent component analysis (ICA) can be an effective and applicable method for EEG de-noising. The goal of this paper is to propose a framework, based on ICA and wavelet denoising (WD), to improve the pre-processing of EEG signals. In particular, we employ the concept of spatially-constrained ICA (SCICA) to extract artifact-only independent components (ICs) from the given EEG data, use WD to remove any brain activity from the extracted artifacts, and finally project back the artifacts to be subtracted from the EEG signals to obtain clean EEG data. The main advantage of the proposed approach is faster computation, as all ICs are not identified in the usual manner due to the square mixing assumption. Simulation results demonstrate the effectiveness of the proposed approach in removing focal artifacts that can be well separated by SCICA.
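
    A hedged Python sketch of the hybrid idea follows, with ordinary FastICA standing in for spatially-constrained ICA and a universal soft threshold for the wavelet de-noising; the artifact components are assumed to be already identified:

        import numpy as np
        import pywt
        from sklearn.decomposition import FastICA

        def remove_focal_artifacts(eeg, artifact_idx, wavelet="sym5", level=5):
            """eeg is channels x samples; artifact_idx lists the components judged
            to be artifact-only.  Both the plain FastICA and the thresholding rule
            are simplifying assumptions relative to the paper's SCICA + WD."""
            ica = FastICA(n_components=eeg.shape[0], random_state=0)
            sources = ica.fit_transform(eeg.T).T        # components x samples

            for i in artifact_idx:
                # de-noise the artifact component so that low-amplitude residual
                # brain activity is removed from it before back-projection
                coeffs = pywt.wavedec(sources[i], wavelet, level=level)
                sigma = np.median(np.abs(coeffs[-1])) / 0.6745
                thr = sigma * np.sqrt(2 * np.log(sources.shape[1]))
                coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                        for c in coeffs[1:]]
                sources[i] = pywt.waverec(coeffs, wavelet)[: sources.shape[1]]

            # project only the artifact components back and subtract them
            art_sources = np.zeros_like(sources)
            art_sources[artifact_idx] = sources[artifact_idx]
            artifact_signal = ica.mixing_ @ art_sources
            return eeg - artifact_signal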

  3. An 8.12 μW wavelet denoising chip for PPG detection and portable heart rate monitoring in 0.18 μm CMOS

    NASA Astrophysics Data System (ADS)

    Xiang, Li; Xu, Zhang; Peng, Li; Xiaohui, Hu; Hongda, Chen

    2016-05-01

    A low power wavelet denoising chip for photoplethysmography (PPG) detection and portable heart rate monitoring is presented. To eliminate noise and improve detection accuracy, the Haar wavelet transform (HWT) is chosen as the processing tool. An optimized finite impulse response structure is proposed to lower the computational complexity of the proposed algorithm, which is beneficial for reducing the power consumption of the proposed chip. The modulus maxima pair location module is designed to accurately locate the PPG peaks. A clock control unit is designed to further reduce the power consumption of the proposed chip. Fabricated with 0.18 μm N-well CMOS 1P6M technology, the power consumption of the proposed chip is only 8.12 μW with a 1 V supply voltage. Validated with PPG signals from the multiparameter intelligent monitoring in intensive care databases and signals acquired by the wrist photoelectric volume detection front end, the proposed chip can accurately detect PPG signals. The average sensitivity and positive prediction are 99.91% and 100%, respectively.

  4. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

    Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitations of atmospheric turbulence. In future adaptive optics modalities, like multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory.

  5. Efficient Combustion Simulation via the Adaptive Wavelet Collocation Method

    NASA Astrophysics Data System (ADS)

    Lung, Kevin; Brown-Dymkoski, Eric; Guerrero, Victor; Doran, Eric; Museth, Ken; Balme, Jo; Urberger, Bob; Kessler, Andre; Jones, Stephen; Moses, Billy; Crognale, Anthony

    Rocket engine development continues to be driven by the intuition and experience of designers, progressing through extensive trial-and-error test campaigns. Extreme temperatures and pressures frustrate direct observation, while high-fidelity simulation can be impractically expensive owing to the inherent multi-scale, multi-physics nature of the problem. To address this cost, an adaptive multi-resolution PDE solver has been designed which targets the high-performance, many-core architecture of GPUs. The adaptive wavelet collocation method is used to maintain a sparse-data representation of the high resolution simulation, greatly reducing the memory footprint while tightly controlling physical fidelity. The tensorial, stencil topology of wavelet-based grids lends itself to highly vectorized algorithms which are necessary to exploit the performance of GPUs. This approach permits efficient implementation of direct finite-rate kinetics, and improved resolution of steep thermodynamic gradients and the smaller mixing scales that drive combustion dynamics. Resolving these scales is crucial for accurate chemical kinetics, which are typically degraded or lost in statistical modeling approaches.

  6. Adaptive segmentation of wavelet transform coefficients for video compression

    NASA Astrophysics Data System (ADS)

    Wasilewski, Piotr

    2000-04-01

    This paper presents a video compression algorithm suitable for inexpensive real-time hardware implementation. The algorithm utilizes the Discrete Wavelet Transform (DWT) with the new Adaptive Spatial Segmentation Algorithm (ASSA). The algorithm was designed to obtain better or similar decompressed video quality compared to the H.263 recommendation and the MPEG standard using lower computational effort, especially at high compression rates. The algorithm was optimized for hardware implementation in low-cost Field Programmable Gate Array (FPGA) devices. The luminance and chrominance components of every frame are encoded with a 3-level Wavelet Transform with a biorthogonal filter bank. The low frequency subimage is encoded with an ADPCM algorithm. For the high frequency subimages the new Adaptive Spatial Segmentation Algorithm is applied. It divides images into rectangular blocks that may overlap each other. The width and height of the blocks are set independently. There are two kinds of blocks: Low Variance Blocks (LVB) and High Variance Blocks (HVB). The positions of the blocks and the values of the WT coefficients belonging to the HVB are encoded with modified zero-tree algorithms. LVB are encoded with their mean value. Obtained results show that the presented algorithm gives similar or better quality of decompressed images compared to H.263, by up to 5 dB in PSNR.

  7. Adaptive wavelet detection of transients using the bootstrap

    NASA Astrophysics Data System (ADS)

    Hewer, Gary A.; Kuo, Wei; Peterson, Lawrence A.

    1996-03-01

    A Daubechies wavelet-based bootstrap detection strategy based on the research of Carmona was applied to a set of test signals. The detector was a function of the d-scales. The adaptive detection statistics were derived using Efron's bootstrap methodology, which relieved us from having to make parametric assumptions about the underlying noise and offered a method of overcoming the constraints of modeling the detector statistics. The test set of signals used to evaluate the Daubechies/bootstrap pulse detector were generated with a Hewlett-Packard Fast Agile Signal Simulator (FASS). These video pulses, with varying signal-to-noise ratios (SNRs), included unmodulated, linear chirp, and Barker phase-code modulations baseband (IF) video pulses mixed with additive white Gaussian noise. Simulated examples illustrating the bootstrap methodology are presented, along with a complete set of constant false alarm rate (CFAR) detection statistics for the test signals. The CFAR curves clearly show that the wavelet bootstrap can adaptively detect transient pulses at low SNRs.
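
    The bootstrap thresholding idea can be sketched as follows; the detail-energy statistic, window length and false-alarm rate are illustrative assumptions rather than the detector used in the study:

        import numpy as np
        import pywt

        def detection_statistic(x, wavelet="db4", level=3):
            """Energy of the wavelet detail (d-scale) coefficients, used here as a
            simple transient-detection statistic (an assumed choice)."""
            coeffs = pywt.wavedec(x, wavelet, level=level)
            return sum(float(np.sum(d ** 2)) for d in coeffs[1:])

        def bootstrap_threshold(noise_segment, win=256, n_boot=2000, pfa=0.01,
                                seed=0):
            """Estimate a CFAR threshold by bootstrap resampling windows drawn
            from noise-only data, avoiding parametric noise assumptions."""
            rng = np.random.default_rng(seed)
            stats = np.empty(n_boot)
            for b in range(n_boot):
                idx = rng.integers(0, noise_segment.size, size=win)
                stats[b] = detection_statistic(noise_segment[idx])
            return np.quantile(stats, 1.0 - pfa)

        # usage: declare a detection when detection_statistic(window) exceeds the
        # bootstrap threshold estimated from a noise-only training segment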

  8. Edge preserved enhancement of medical images using adaptive fusion-based denoising by shearlet transform and total variation algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Deep; Anand, Radhey Shyam; Tyagi, Barjeev

    2013-10-01

    Edge preserved enhancement is of great interest in medical images. Noise present in medical images affects the quality, contrast resolution and, most importantly, texture information, and can also make post-processing difficult. An enhancement approach using an adaptive fusion algorithm is proposed which utilizes the features of the shearlet transform (ST) and the total variation (TV) approach. In the proposed method, three different denoised images (one processed with the TV method, one with shearlet denoising, and one formed from the edge information recovered from the remnant of the TV method and processed with the ST) are fused adaptively. The enhanced images produced by the proposed method show improved visibility and detectability of medical images. For the proposed method, different weights are evaluated from the variance maps of the individual denoised images and the edge information extracted from the remnant of the TV approach. The performance of the proposed method is evaluated by conducting various experiments on both standard images and different medical images such as computed tomography, magnetic resonance, and ultrasound. Experiments show that the proposed method provides an improvement not only in noise reduction but also in the preservation of more edges and image details as compared to the others.

  9. Adaptive directional wavelet transform based on directional prefiltering.

    PubMed

    Tanaka, Yuichi; Hasegawa, Madoka; Kato, Shigeo; Ikehara, Masaaki; Nguyen, Truong Q

    2010-04-01

    This paper proposes an efficient approach for the adaptive directional wavelet transform (WT) based on directional prefiltering. Although the adaptive directional WT is able to transform an image along diagonal orientations as well as the traditional horizontal and vertical directions, it sacrifices computation speed for good image coding performance. We present two efficient methods to find the best transform directions by prefiltering using a 2-D filter bank or a 1-D directional WT along two fixed directions. The proposed direction calculation methods achieve image coding performance comparable to the conventional one with less complexity. Furthermore, the transform direction data of the proposed method can be used for content-based image retrieval to increase the retrieval ratio. PMID:20028625

  10. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of the human body for disease diagnostics. Speckle noise corrupts ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first fold reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion. The degraded object restoration in the block-thresholded US image is carried out through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. Thus the proposed twofold methods are named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal to noise ratio (PSNR), normalized cross correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. Validation of the proposed method is done by comparing with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India. PMID:26697285

  11. Subspace based adaptive denoising of surface EMG from neurological injury patients

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Ying, Dongwen; Zev Rymer, William; Zhou, Ping

    2014-10-01

    Objective: After neurological injuries such as spinal cord injury, voluntary surface electromyogram (EMG) signals recorded from affected muscles are often corrupted by interferences, such as spurious involuntary spikes and background noise of physiological and extrinsic/accidental origin, which are difficult to mitigate with conventional methods and impose difficulties for signal processing. The aim of this study was to develop a subspace-based denoising method to suppress involuntary background spikes contaminating voluntary surface EMG recordings. Approach: The Karhunen-Loeve transform was utilized to decompose a noisy signal into a signal subspace and a noise subspace. An optimal estimate of the EMG signal is derived from the signal subspace and the noise power. Specifically, this estimator is capable of making a tradeoff between interference reduction and signal distortion. Since the estimator partially relies on the estimate of noise power, an adaptive method was presented to sequentially track the variation of interference power. The proposed method was evaluated using both semi-synthetic and real surface EMG signals. Main results: The experiments confirmed that the proposed method can effectively suppress interferences while keeping the distortion of the voluntary EMG signal at a low level. The proposed method can greatly facilitate further signal processing, such as onset detection of voluntary muscle activity. Significance: The proposed method provides a powerful tool for suppressing background spikes and noise contaminating voluntary surface EMG signals of paretic muscles after neurological injuries, which is of great importance for their multi-purpose applications.
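
    A compact Python sketch of Karhunen-Loeve (PCA) subspace denoising conveys the idea; the energy-based subspace split and Wiener-style gain are simplifying assumptions, and the adaptive interference-power tracking of the paper is not reproduced:

        import numpy as np

        def subspace_denoise(emg, frame=128, noise_power=None, keep_energy=0.95):
            """Frames of the signal are projected onto the dominant eigenvectors of
            their covariance; the noise-dominated subspace is discarded and the
            retained components are shrunk with a Wiener-like gain.  frame,
            keep_energy and the crude noise estimate are illustrative choices."""
            n = (emg.size // frame) * frame
            X = emg[:n].reshape(-1, frame)                    # frames x samples
            C = np.cov(X, rowvar=False)                       # frame covariance
            w, V = np.linalg.eigh(C)                          # ascending eigenvalues
            w, V = w[::-1], V[:, ::-1]
            if noise_power is None:
                noise_power = np.mean(w[frame // 2:])         # crude noise estimate
            k = int(np.searchsorted(np.cumsum(w) / np.sum(w), keep_energy)) + 1
            gain = np.clip((w[:k] - noise_power) / np.maximum(w[:k], 1e-12), 0.0, 1.0)
            Y = (X @ V[:, :k]) * gain @ V[:, :k].T            # project, shrink, back-project
            out = emg.astype(float).copy()
            out[:n] = Y.reshape(-1)
            return out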

  12. Computed Tomography Images De-noising using a Novel Two Stage Adaptive Algorithm

    PubMed Central

    Fadaee, Mojtaba; Shamsi, Mousa; Saberkari, Hamidreza; Sedaaghi, Mohammad Hossein

    2015-01-01

    In this paper, an optimal algorithm is presented for de-noising of medical images. The presented algorithm is based on an improved version of local pixel grouping and principal component analysis. In the local pixel grouping algorithm, block matching based on the L2 norm is utilized, which improves matching performance. To evaluate the performance of the proposed algorithm, the peak signal to noise ratio (PSNR) and structural similarity (SSIM) criteria have been used, which measure, respectively, the signal-to-noise ratio of the image and the structural similarity between two images. The proposed algorithm has two stages, de-noising and cleanup. The cleanup stage is carried out iteratively, being alternately repeated until the two conditions based on PSNR and SSIM are satisfied. Implementation results show that the presented algorithm has a significant superiority in de-noising. Furthermore, the SSIM and PSNR values are higher in comparison to other methods. PMID:26955565

  13. Study on torpedo fuze signal denoising method based on WPT

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Torpedo fuze signal denoising is an important step in ensuring reliable operation of the fuze. Based on the good characteristics of the wavelet packet transform (WPT) in signal denoising, the paper uses the WPT to denoise the fuze signal under complex background interference, and a simulation of the denoising results is performed in Matlab. The simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal with higher precision and less distortion, improving the reliability of torpedo fuze operation.
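
    A minimal wavelet packet denoising sketch in Python (using PyWavelets) illustrates the approach; the wavelet, depth and threshold rule are illustrative assumptions:

        import numpy as np
        import pywt

        def wpt_denoise(sig, wavelet="db6", maxlevel=4):
            """Soft-threshold every terminal wavelet-packet node with a universal
            threshold and reconstruct; parameters are illustrative choices."""
            wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, mode="symmetric",
                                    maxlevel=maxlevel)
            nodes = wp.get_level(maxlevel, order="natural")
            # noise scale estimated from a detail-dominated terminal node
            sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(sig)))
            for node in nodes:
                node.data = pywt.threshold(node.data, thr, mode="soft")
            return wp.reconstruct(update=False)[: len(sig)]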

  14. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets for fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. Therefore, the wavelet transform and the neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.

  15. Fast Fourier and Wavelet Transforms for Wavefront Reconstruction in Adaptive Optics

    SciTech Connect

    Dowla, F U; Brase, J M; Olivier, S S

    2000-07-28

    Wavefront reconstruction techniques using least-squares estimators are computationally quite expensive. We compare wavelet and Fourier transform techniques in addressing the computational issues of wavefront reconstruction in adaptive optics. It is shown that because the Fourier approach, unlike the wavelet method, is not simply a numerical approximation technique, the Fourier approach might have advantages in terms of numerical accuracy. However, strictly from a numerical computation viewpoint, the wavelet approximation method might have an advantage in terms of speed. To optimize the wavelet method, a statistical study might be necessary to choose the best basis functions or "approximation tree."

  16. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    NASA Astrophysics Data System (ADS)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

    A framework for evaluating wavelet based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet based watermarking schemes in a single generalised framework by considering a global parameter space, from which the optimum parameters for a specific algorithm may be chosen. WEBCAM considers the traversing of media content along various links and the required content adaptations at various nodes of media supply chains. In this paper, the content adaptation is emulated by JPEG2000 coded bit stream extraction for various spatial resolution and quality levels of the content. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet based watermarking algorithms, by picking and mixing available tools and finding the optimum design parameters.

  17. Denoising of high resolution small animal 3D PET data using the non-subsampled Haar wavelet transform

    NASA Astrophysics Data System (ADS)

    Ochoa Domínguez, Humberto de Jesús; Máynez, Leticia O.; Vergara Villegas, Osslan O.; Mederos, Boris; Mejía, José M.; Cruz Sánchez, Vianey G.

    2015-06-01

    PET allows functional imaging of the living tissue. However, one of the most serious technical problems affecting the reconstructed data is the noise, particularly in images of small animals. In this paper, a method for high-resolution small animal 3D PET data is proposed with the aim to reduce the noise and preserve details. The method is based on the estimation of the non-subsampled Haar wavelet coefficients by using a linear estimator. The procedure is applied to the volumetric images, reconstructed without correction factors (plane reconstruction). Results show that the method preserves the structures and drastically reduces the noise that contaminates the image.

  18. Haar wavelet processor for adaptive on-line image compression

    NASA Astrophysics Data System (ADS)

    Diaz, F. Javier; Buron, Angel M.; Solana, Jose M.

    2005-06-01

    An image coding processing scheme based on a variant of the Haar Wavelet Transform that uses only addition and subtraction is presented. After computing the transform, the selection and coding of the coefficients is performed using a methodology optimized to attain the lowest hardware implementation complexity. Coefficients are sorted into groups according to the number of pixels used in their computation. The idea behind this is to use a different threshold for each group of coefficients; these thresholds are obtained recurrently from an initial one. Parameter values used to achieve the desired compression level are established "on-line", adapting to each image, which leads to an improvement in the quality obtained for a preset compression level. Despite its adaptive characteristic, the coding scheme presented leads to a hardware implementation of markedly low circuit complexity. The compression reached for images of 512x512 pixels (256 grey levels) is over 22:1 (~0.4 bits/pixel) with an rmse of 8-10%. An image processor prototype (excluding memory) designed to compute the proposed transform has been implemented using FPGA chips. The processor for images of 256x256 pixels has been implemented using only one general-purpose low-cost FPGA chip, thus proving the design's reliability and its relative simplicity.
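
    The addition/subtraction-only transform and per-group thresholding can be sketched in Python as below; the single decomposition level, the sub-band grouping and the factor-of-two threshold recurrence are illustrative assumptions, not the processor's exact scheme:

        import numpy as np

        def haar_addsub_2d(img):
            """One level of an addition/subtraction-only Haar-like transform:
            pairwise sums and differences along rows, then along columns
            (integer arithmetic, no scaling; even image dimensions assumed)."""
            a = img.astype(np.int32)
            s, d = a[:, 0::2] + a[:, 1::2], a[:, 0::2] - a[:, 1::2]   # rows
            a = np.hstack([s, d])
            s, d = a[0::2, :] + a[1::2, :], a[0::2, :] - a[1::2, :]   # columns
            return np.vstack([s, d])

        def threshold_by_group(coeffs, t0=16):
            """Zero small detail coefficients with a separate threshold per
            coefficient group, derived recurrently from an initial threshold t0
            (the halving rule is an assumed recurrence)."""
            out = coeffs.copy()
            h, w = out.shape[0] // 2, out.shape[1] // 2
            thr = t0
            for sl in [(slice(h, None), slice(w, None)),   # diff-diff group
                       (slice(0, h), slice(w, None)),      # row-diff group
                       (slice(h, None), slice(0, w))]:     # column-diff group
                band = out[sl]
                band[np.abs(band) < thr] = 0
                thr //= 2
            return out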

  19. Morphology analysis of EKG R waves using wavelets with adaptive parameters derived from fuzzy logic

    NASA Astrophysics Data System (ADS)

    Caldwell, Max A.; Barrington, William W.; Miles, Richard R.

    1996-03-01

    Understanding of the EKG components P, QRS (R wave), and T is essential in recognizing cardiac disorders and arrhythmias. An estimation method is presented that models the R wave component of the EKG by adaptively computing wavelet parameters using fuzzy logic. The parameters are adaptively adjusted to minimize the difference between the original EKG waveform and the wavelet. The R wave estimate is derived by minimizing a combination of the mean squared error (MSE), amplitude difference, spread difference, and shift difference. We show that the MSE in both noise-free and additive-noise environments is lower using an adaptive wavelet than a static wavelet. Research to date has focused on the R wave component of the EKG signal. Extensions of this method to model P and T waves are discussed.
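
    A simplified, non-fuzzy version of the parameter adaptation can be sketched by fitting a wavelet-shaped template to the R wave with a standard optimizer; the Mexican-hat template, sampling rate and initial guesses are assumptions:

        import numpy as np
        from scipy.optimize import minimize

        def fit_r_wave(ekg, fs=360.0):
            """Fit amplitude, spread and shift of a Mexican-hat template to a beat
            by minimising the mean squared error (a stand-in for the fuzzy-logic
            adaptation described above)."""
            n = ekg.size
            t = np.arange(n)

            def template(amp, spread, shift):
                x = (t - shift) / spread
                return amp * (1.0 - x ** 2) * np.exp(-0.5 * x ** 2)

            def mse(p):
                return float(np.mean((ekg - template(*p)) ** 2))

            # initial guesses: R-peak amplitude/location and a ~20 ms spread
            x0 = np.array([ekg.max(), 0.02 * fs, float(np.argmax(ekg))])
            res = minimize(mse, x0, method="Nelder-Mead")
            amp, spread, shift = res.x
            return (amp, spread, shift), template(amp, spread, shift)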

  20. Adaptive bilateral filter for image denoising and its application to in-vitro Time-of-Flight data

    NASA Astrophysics Data System (ADS)

    Seitel, Alexander; dos Santos, Thiago R.; Mersmann, Sven; Penne, Jochen; Groch, Anja; Yung, Kwong; Tetzlaff, Ralf; Meinzer, Hans-Peter; Maier-Hein, Lena

    2011-03-01

    Image-guided therapy systems generally require registration of pre-operative planning data with the patient's anatomy. One common approach to achieve this is to acquire intra-operative surface data and match it to surfaces extracted from the planning image. Although increasingly popular for surface generation in general, the novel Time-of-Flight (ToF) technology has not yet been applied in this context. This may be attributed to the fact that ToF range images are subject to considerable noise. The contribution of this study is two-fold. Firstly, we present an adaptation of the well-known bilateral filter for denoising ToF range images based on the noise characteristics of the camera. Secondly, we assess the quality of organ surfaces generated from ToF range data with and without bilateral smoothing, using corresponding high resolution CT data as ground truth. According to an evaluation on five porcine organs, the root mean squared (RMS) distance between the denoised ToF data points and the reference computed tomography (CT) surfaces ranged from 3.0 mm (lung) to 9.0 mm (kidney). This corresponds to an error reduction of up to 36% compared to the error of the original ToF surfaces.
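
    A hedged Python sketch of a noise-adaptive bilateral filter follows; the per-pixel range sigma taken from a noise map is a stand-in for the camera noise model used in the study, and the kernel sizes are illustrative:

        import numpy as np

        def adaptive_bilateral(img, sigma_map, spatial_sigma=2.0, radius=3):
            """Bilateral filter whose range-kernel width follows a per-pixel noise
            estimate (sigma_map); slow reference implementation for clarity."""
            img = img.astype(float)
            p_img = np.pad(img, radius, mode="reflect")
            p_sig = np.pad(sigma_map.astype(float), radius, mode="reflect")
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial_w = np.exp(-(xs ** 2 + ys ** 2) / (2 * spatial_sigma ** 2))
            out = np.zeros_like(img)
            H, W = img.shape
            for i in range(H):
                for j in range(W):
                    patch = p_img[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    centre = p_img[i + radius, j + radius]
                    sr = max(p_sig[i + radius, j + radius], 1e-6)  # local range sigma
                    range_w = np.exp(-((patch - centre) ** 2) / (2 * sr ** 2))
                    w = spatial_w * range_w
                    out[i, j] = np.sum(w * patch) / np.sum(w)
            return out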

  1. An adaptive demodulation approach for bearing fault detection based on adaptive wavelet filtering and spectral subtraction

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Tang, Baoping; Liu, Ziran; Chen, Rengxiang

    2016-02-01

    Fault diagnosis of rolling element bearings is important for improving mechanical system reliability and performance. Vibration signals contain a wealth of complex information useful for state monitoring and fault diagnosis. However, any fault-related impulses in the original signal are often severely tainted by various noises and the interfering vibrations caused by other machine elements. Narrow-band amplitude demodulation has been an effective technique to detect bearing faults by identifying bearing fault characteristic frequencies. To achieve this, the key step is to remove the corrupting noise and interference, and to enhance the weak signatures of the bearing fault. In this paper, a new method based on adaptive wavelet filtering and spectral subtraction is proposed for fault diagnosis in bearings. First, to eliminate the frequency associated with interfering vibrations, the vibration signal is bandpass filtered with a Morlet wavelet filter whose parameters (i.e. center frequency and bandwidth) are selected in separate steps. An alternative and efficient method of determining the center frequency is proposed that utilizes the statistical information contained in the production functions (PFs). The bandwidth parameter is optimized using a local ‘greedy’ scheme along with Shannon wavelet entropy criterion. Then, to further reduce the residual in-band noise in the filtered signal, a spectral subtraction procedure is elaborated after wavelet filtering. Instead of resorting to a reference signal as in the majority of papers in the literature, the new method estimates the power spectral density of the in-band noise from the associated PF. The effectiveness of the proposed method is validated using simulated data, test rig data, and vibration data recorded from the transmission system of a helicopter. The experimental results and comparisons with other methods indicate that the proposed method is an effective approach to detecting the fault-related impulses

  2. Detailed resolution of the nonlinear Schrodinger equation using the full adaptive wavelet transform

    NASA Astrophysics Data System (ADS)

    Stedham, Mark A.; Banerjee, Partha P.

    2000-04-01

    The propagation of optical pulses in nonlinear optical fibers is described by the nonlinear Schrodinger (NLS) equation. This equation can generally be solved exactly using the inverse scattering method, or for more detailed analysis, through the use of numerical techniques. Perhaps the best known numerical technique for solving the NLS equation is the split-step Fourier method, which effects a solution by assuming that the dispersion and nonlinear effects act independently during pulse propagation along the fiber. In this paper we describe an alternative numerical solution to the NLS equation using an adaptive wavelet transform technique, done entirely in the wavelet domain. This technique differs from previous work involving wavelet solutions to the NLS equation in that these previous works used a 'split-step wavelet' method in which the linear analysis was performed in the wavelet domain while the nonlinear portion was done in the space domain. Our method takes full advantage of the set of wavelet coefficients, thus allowing the flexibility to investigate pulse propagation entirely in either the wavelet or the space domain. Additionally, this method is fully adaptive in that it is capable of accurately tracking steep gradients which may occur during the numerical simulation.

  3. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.

  4. Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture

    NASA Astrophysics Data System (ADS)

    Anandkumar, Janavikulam; Szu, Harold H.

    1995-04-01

    This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in a parallel fashion from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, it is possible to choose the best one by various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of any specialized quantizers and encoders, the compression ratio numbers look encouraging, with preliminary results indicating compression ratios ranging from 40:1 to 15:1 at percentage rms distortions ranging from about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or the signal deviations that may occur due to either noise or arrhythmias, only the one wavelet family that correlates best with that particular portion of the signal is chosen. The compression is achieved mainly because the chosen mother wavelet and its variations match the shape of the ECG and are able to efficiently transcribe the source with few wavelet coefficients. The adaptive template matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from the fine tuning of the wavelet selection process, which is based on localized constraints such as the shape of the signal and its energy.
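
    The core idea of energy-based coefficient thresholding can be sketched as follows. This minimal example keeps only the largest wavelet coefficients needed to retain a fixed fraction of a segment's energy; the db4 wavelet, the decomposition level and the 99.9% energy criterion are illustrative assumptions, and the paper's best-basis search over a large wavelet library plus quantization and encoding stages are omitted.

```python
import numpy as np
import pywt

def compress_ecg_segment(segment, wavelet='db4', level=5, energy_keep=0.999):
    """Zero all but the largest-magnitude wavelet coefficients whose
    cumulative energy reaches energy_keep of the segment energy.
    Returns the reconstructed segment and the fraction of kept coefficients."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    flat, slices = pywt.coeffs_to_array(coeffs)
    order = np.argsort(np.abs(flat))[::-1]               # largest first
    cum_energy = np.cumsum(flat[order]**2) / np.sum(flat**2)
    n_keep = int(np.searchsorted(cum_energy, energy_keep)) + 1
    mask = np.zeros(flat.shape, dtype=bool)
    mask[order[:n_keep]] = True
    flat[~mask] = 0.0                                     # discard small coefficients
    kept = pywt.array_to_coeffs(flat, slices, output_format='wavedec')
    return pywt.waverec(kept, wavelet)[:len(segment)], n_keep / flat.size
```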

  5. Serial identification of EEG patterns using adaptive wavelet-based analysis

    NASA Astrophysics Data System (ADS)

    Nazimov, A. I.; Pavlov, A. N.; Nazimova, A. A.; Grubov, V. V.; Koronovskii, A. A.; Sitnikova, E.; Hramov, A. E.

    2013-10-01

    The problem of recognizing specific oscillatory patterns in electroencephalograms with the continuous wavelet transform is discussed. Aiming to improve the abilities of wavelet-based tools, we propose a serial adaptive method for the sequential identification of EEG patterns such as sleep spindles and spike-wave discharges. This method provides an optimal selection of parameters based on objective functions and enables extraction of the most informative features of the recognized structures. Different ways of increasing the quality of pattern recognition within the proposed serial adaptive technique are considered.

  6. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical problem dependent. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these

  7. Weak transient fault feature extraction based on an optimized Morlet wavelet and kurtosis

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Xing, Jianfeng; Mao, Yongfang

    2016-08-01

    Aimed at solving the key problem in weak transient detection, the present study proposes a new transient feature extraction approach using the optimized Morlet wavelet transform, kurtosis index and soft-thresholding. Firstly, a fast optimization algorithm based on the Shannon entropy is developed to obtain the optimized Morlet wavelet parameter. Compared to the existing Morlet wavelet parameter optimization algorithm, this algorithm has lower computation complexity. After performing the optimized Morlet wavelet transform on the analyzed signal, the kurtosis index is used to select the characteristic scales and obtain the corresponding wavelet coefficients. From the time-frequency distribution of the periodic impulsive signal, it is found that the transient signal can be reconstructed by the wavelet coefficients at several characteristic scales, rather than the wavelet coefficients at just one characteristic scale, so as to improve the accuracy of transient detection. Due to the noise influence on the characteristic wavelet coefficients, the adaptive soft-thresholding method is applied to denoise these coefficients. With the denoised wavelet coefficients, the transient signal can be reconstructed. The proposed method was applied to the analysis of two simulated signals, and the diagnosis of a rolling bearing fault and a gearbox fault. The superiority of the method over the fast kurtogram method was verified by the results of simulation analysis and real experiments. It is concluded that the proposed method is extremely suitable for extracting the periodic impulsive feature from strong background noise.
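
    A simplified sketch of the scale-selection and thresholding steps is given below. It uses the standard real Morlet wavelet from PyWavelets rather than the paper's entropy-optimized Morlet, a universal threshold in place of the paper's adaptive soft threshold, and sums the selected coefficient rows as a rough approximation of the reconstruction; all of these are assumptions for illustration.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

def transient_extract(x, scales, wavelet='morl', n_keep=3):
    """Pick the CWT scales whose coefficients have the highest kurtosis
    (i.e. are most impulsive), soft-threshold them, and sum the
    thresholded rows as a rough estimate of the transient content."""
    scales = np.asarray(scales, dtype=float)
    coefs, _ = pywt.cwt(x, scales, wavelet)       # shape (n_scales, n_samples)
    k = kurtosis(coefs, axis=1)                   # impulsiveness per scale
    best = np.argsort(k)[-n_keep:]                # characteristic scales
    rec = np.zeros(len(x))
    for idx in best:
        row = coefs[idx]
        thr = np.median(np.abs(row)) / 0.6745 * np.sqrt(2 * np.log(row.size))
        rec += pywt.threshold(row, thr, mode='soft')
    return rec, scales[best]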

  8. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter

    PubMed Central

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-01-01

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto-regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of the zero-mean signals. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering on the FOG signals. Finally, static and dynamic experiments are carried out to verify the effectiveness. The filtering results are analyzed with the Allan variance. The analysis results show that the improved AR model has high fitting accuracy and strong adaptability, and the minimum fitting accuracy for a single noise term is 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, with an improvement of more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved. PMID:27420062

  11. Research of fetal ECG extraction using wavelet analysis and adaptive filtering.

    PubMed

    Wu, Shuicai; Shen, Yanni; Zhou, Zhuhuang; Lin, Lan; Zeng, Yanjun; Gao, Xiaofeng

    2013-10-01

    Extracting clean fetal electrocardiogram (ECG) signals is very important in fetal monitoring. In this paper, we proposed a new method for fetal ECG extraction based on wavelet analysis, the least mean square (LMS) adaptive filtering algorithm, and the spatially selective noise filtration (SSNF) algorithm. First, abdominal signals and thoracic signals were processed by stationary wavelet transform (SWT), and the wavelet coefficients at each scale were obtained. For each scale, the detail coefficients were processed by the LMS algorithm. The coefficient of the abdominal signal was taken as the original input of the LMS adaptive filtering system, and the coefficient of the thoracic signal as the reference input. Then, correlations of the processed wavelet coefficients were computed. The threshold was set and noise components were removed with the SSNF algorithm. Finally, the processed wavelet coefficients were reconstructed by inverse SWT to obtain fetal ECG. Twenty cases of simulated data and 12 cases of clinical data were used. Experimental results showed that the proposed method outperforms the LMS algorithm: (1) it shows improvement in case of superposition R-peaks of fetal ECG and maternal ECG; (2) noise disturbance is eliminated by incorporating the SSNF algorithm and the extracted waveform is more stable; and (3) the performance is proven quantitatively by SNR calculation. The results indicated that the proposed algorithm can be used for extracting fetal ECG from abdominal signals.
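
    The adaptive-filtering step at the heart of this approach can be illustrated with a small normalized LMS canceller. In the paper the filter operates on the SWT detail coefficients of each scale, with the abdominal coefficients as the primary input and the thoracic coefficients as the reference; the sketch below shows only the core LMS update on generic signals, and the tap count and step size are illustrative assumptions.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """Normalized LMS canceller: predict the component of `primary`
    correlated with `reference` (e.g. the maternal ECG seen on a
    thoracic lead) and return the residual as the fetal ECG estimate."""
    w = np.zeros(n_taps)
    residual = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]        # most recent reference samples
        y = np.dot(w, x)                          # estimated maternal component
        e = primary[n] - y                        # residual = fetal ECG + noise
        w += mu * e * x / (np.dot(x, x) + 1e-8)   # normalized LMS weight update
        residual[n] = e
    return residual
```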

  12. [A De-Noising Algorithm for Fluorescence Detection Signal of Mineral Oil in Water by SWT].

    PubMed

    Wang, Yu-tian; Cheng, Peng-fei; Hou, Pei-guo; Yang, Zhe

    2015-05-01

    Fluorescence analysis is an important means of detecting mineral oil pollutants in water because of its high sensitivity, selectivity, ease of design, etc. Noise generated by the photodetector affects the sensitivity of the fluorescence detection system, so the elimination of fluorescence signal noise has been a topic of active research. For the fluorescence signal, the increase in the length of the branch set produces some boundary issues. The dbN wavelet family can flexibly balance these border issues, retaining the useful signal and getting rid of noise; the de-noising effects of the dbN family members are compared, and the db7 wavelet is chosen as the optimal wavelet. The noisy fluorescence signal is decomposed into 5 levels via the stationary wavelet transform with the db7 wavelet, and the thresholds are chosen adaptively based on wavelet entropy theory. The pure fluorescence signal is obtained by reconstructing the approximation and detail coefficients after threshold quantization. Compared with the DWT, the signal de-noised via the SWT has the advantages of information integrity and translation invariance.
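
    A minimal sketch of SWT-based denoising with the db7 wavelet is shown below. The per-level universal threshold used here is a common stand-in assumption; the paper derives level-wise thresholds from wavelet entropy.

```python
import numpy as np
import pywt

def swt_denoise(signal, wavelet='db7', level=5):
    """Stationary wavelet denoising: soft-threshold the detail
    coefficients of each level and reconstruct with the inverse SWT.
    The signal length must be divisible by 2**level for pywt.swt."""
    coeffs = pywt.swt(signal, wavelet, level=level)
    denoised = []
    for cA, cD in coeffs:
        sigma = np.median(np.abs(cD)) / 0.6745             # noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
        denoised.append((cA, pywt.threshold(cD, thr, mode='soft')))
    return pywt.iswt(denoised, wavelet)
```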

  13. Research on denoising in WDM laser inter-satellites communication system

    NASA Astrophysics Data System (ADS)

    Wen, Chuanhua; Su, Yang; Li, Yuquan; Zhou, Li

    2006-09-01

    This paper proposes a wavelet analysis method for de-noising at the receiver in a WDM inter-satellite laser communication system. Background noise sources such as galactic noise and sunlight reduce the received power. The noisy signal is decomposed using wavelets and wavelet packets; it is transformed into wavelet coefficients, and the lower-order coefficients are removed by applying a soft threshold. The de-noised signal is obtained by reconstruction from the remaining coefficients. In this paper, we evaluate different wavelet analyses for de-noising at the receiver in inter-satellite laser communication. Simulation results indicate that the wavelet de-noising method, used with different wavelet analyzing functions, improves the signal-to-noise ratio (SNR) by about 2 dB when the signal frequency is 1.5 GHz.

  14. An image adaptive, wavelet-based watermarking of digital images

    NASA Astrophysics Data System (ADS)

    Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia

    2007-12-01

    In digital management, multimedia content and data can easily be used in an illegal way--being copied, modified and distributed again. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermark techniques are emerging as a valid solution. In this paper, we describe an algorithm--called WM2.0--for an invisible watermark: private, strong, wavelet-based and developed for digital image protection and authenticity. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency features and its good match with human visual system characteristics. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into high frequency DWT components of a specific sub-image and it is calculated in correlation with the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked image. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the scheme to be resistant against geometric, filtering and StirMark attacks with a low rate of false alarm.
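
    A toy sketch of the general embed/detect pattern (not the WM2.0 algorithm itself) is shown below: a pseudo-random watermark is added to high-frequency DWT coefficients and detected by normalized correlation. The Haar wavelet, the strength alpha and the choice of the diagonal sub-band are assumptions for illustration; the paper's feature-dependent embedding, re-synchronization and Neyman-Pearson threshold are not reproduced.

```python
import numpy as np
import pywt

def embed_watermark(image, watermark, alpha=0.05, wavelet='haar'):
    """Add a +/-1 pseudo-random watermark (at least as large as the
    detail sub-band) to the diagonal detail coefficients."""
    cA, (cH, cV, cD) = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
    wm = watermark[:cD.shape[0], :cD.shape[1]]
    cD_marked = cD + alpha * np.abs(cD) * wm          # strength scales with coefficient
    return pywt.idwt2((cA, (cH, cV, cD_marked)), wavelet)

def detect_watermark(image, watermark, wavelet='haar'):
    """Correlation detector: normalized correlation between the diagonal
    detail coefficients and the candidate watermark. Threshold selection
    is left to the caller."""
    _, (_, _, cD) = pywt.dwt2(np.asarray(image, dtype=float), wavelet)
    wm = watermark[:cD.shape[0], :cD.shape[1]]
    return float(np.sum(cD * wm) /
                 (np.linalg.norm(cD) * np.linalg.norm(wm) + 1e-12))
```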

  15. Isotropic boundary adapted wavelets for coherent vorticity extraction in turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Farge, Marie; Sakurai, Teluo; Yoshimatsu, Katsunori; Schneider, Kai; Morishita, Koji; Ishihara, Takashi

    2015-11-01

    We present a construction of isotropic boundary adapted wavelets, which are orthogonal and yield a multi-resolution analysis. We analyze DNS data of turbulent channel flow computed at a friction-velocity based Reynolds number of 395 and investigate the role of coherent vorticity. Thresholding of the wavelet coefficients allows the flow to be split into two parts, coherent and incoherent vorticity. The statistics of the former, i.e., energy and enstrophy spectra, are close to those of the total flow, and moreover the nonlinear energy budgets are well preserved. The remaining incoherent part, represented by the large majority of the weak wavelet coefficients, corresponds to a structureless, i.e., noise-like, background flow and exhibits an almost equi-distribution of energy.

  16. Wavelet analysis deformation monitoring data of high-speed railway bridge

    NASA Astrophysics Data System (ADS)

    Tang, ShiHua; Huang, Qing; Zhou, Conglin; Xu, HongWei; Liu, YinTao; Li, FeiDa

    2015-12-01

    Deformation monitoring data of high-speed railway bridges are inevitably affected by noise. A deformation monitoring point of a high-speed railway bridge was measured with a Sokkia SDL30 electronic level over a long period, yielding a large number of deformation monitoring observations that contain considerable noise. On the MATLAB software platform, 120 groups of deformation monitoring data were used for wavelet denoising analysis. The sym6 and db6 wavelet basis functions were selected to analyze and remove the noise. The original signal was decomposed into three wavelet levels containing high-frequency and low-frequency coefficients; the high-frequency coefficients carry most of the noise. Adaptive soft- and hard-threshold methods were applied to the high-frequency coefficients, and the thresholded high-frequency coefficients were then combined with the low-frequency coefficients to reconstruct the signal. Root Mean Square Error (RMSE) and Signal-To-Noise Ratio (SNR) were used as evaluation indices of denoising: a smaller RMSE and a larger SNR indicate a better denoising effect. The experimental analysis shows that the db6 wavelet basis function with an adaptive soft-threshold method performs best, giving the minimum RMSE and the maximum SNR. Moreover, the reconstructed signal is smoother than the original after wavelet denoising, with the noise removed and the useful signal retained. Compared to the other three methods, this approach not only retains the useful signal but also removes the noise, so it has strong practical value in actual deformation monitoring
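
    The workflow described above can be sketched in a few lines of Python (as a stand-in for the MATLAB analysis). The universal threshold used here is an illustrative substitute for the paper's specific adaptive soft/hard rules, and the RMSE/SNR are computed against the noisy input as in the paper's evaluation; substitute a reference signal if one is available.

```python
import numpy as np
import pywt

def denoise_and_score(noisy, wavelet='db6', level=3):
    """Three-level db6 decomposition, soft threshold on the detail
    coefficients, reconstruction, and RMSE / SNR of the denoised signal
    relative to the noisy input."""
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(noisy)))            # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    rec = pywt.waverec(coeffs, wavelet)[:len(noisy)]
    rmse = np.sqrt(np.mean((noisy - rec) ** 2))
    snr = 10 * np.log10(np.sum(rec ** 2) / np.sum((noisy - rec) ** 2))
    return rec, rmse, snr
```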

  17. A study of interceptor attitude control based on adaptive wavelet neural networks

    NASA Astrophysics Data System (ADS)

    Li, Da; Wang, Qing-chao

    2005-12-01

    This paper studies the 3-DOF attitude control problem of a kinetic interceptor. When the kinetic interceptor enters terminal guidance it has to maneuver with large angles. The interceptor attitude system is nonlinear, strongly coupled and MIMO. An inverse control approach based on adaptive wavelet neural networks is proposed in this paper. Instead of using one complex neural network as the controller, the nonlinear dynamics of the interceptor are first approximated by three independent subsystems by applying exact feedback linearization, and then controllers for each subsystem are designed using adaptive wavelet neural networks. This method avoids computing a large number of weights and biases in one massive neural network, and the control parameters can be adaptively changed online. Simulation results show that the proposed controller performs remarkably well.

  18. Adaptive inpainting algorithm based on DCT induced wavelet regularization.

    PubMed

    Li, Yan-Ran; Shen, Lixin; Suter, Bruce W

    2013-02-01

    In this paper, we propose an image inpainting optimization model whose objective function is a smoothed ℓ1 norm of the weighted nondecimated discrete cosine transform (DCT) coefficients of the underlying image. By identifying the objective function of the proposed model as a sum of a differentiable term and a nondifferentiable term, we present a basic algorithm inspired by Beck and Teboulle's recent work on the model. Based on this basic algorithm, we propose an automatic way to determine the weights involved in the model and update them in each iteration. The DCT as an orthogonal transform is used in various applications. We view the rows of a DCT matrix as the filters associated with a multiresolution analysis. Nondecimated wavelet transforms with these filters are explored in order to analyze the images to be inpainted. Our numerical experiments verify that under the proposed framework, the filters from a DCT matrix demonstrate promise for the task of image inpainting.

  19. Multi-focus image fusion algorithm based on adaptive PCNN and wavelet transform

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-guo; Wang, Ming-jia; Han, Guang-liang

    2011-08-01

    Being an efficient method of information fusion, image fusion has been used in many fields such as machine vision, medical diagnosis, military applications and remote sensing. In this paper, the Pulse Coupled Neural Network (PCNN) is introduced into this research field for its interesting properties in image processing, including segmentation and target recognition, and a novel algorithm based on the PCNN and the wavelet transform for multi-focus image fusion is proposed. First, the two original images are decomposed by the wavelet transform. Then, based on the PCNN, a fusion rule in the wavelet domain is given. This algorithm uses the wavelet coefficient in each frequency band as the linking strength, so that its value can be chosen adaptively. Wavelet coefficients are mapped to the image gray-scale range, and the output threshold function attenuates to the minimum gray level over time, so that eventually all pixels of the image fire. The output of the PCNN at each iteration is therefore the set of wavelet coefficients that exceed the threshold at that time, and the firing sequence of the wavelet coefficients represents the firing time of each neuron. The firing times of the neurons are mapped to the corresponding image gray-scale range, producing a firing-time map from which it can be judged whether the targets in each neuron are salient features or not. The fusion coefficients are decided by a compare-selection operator on the firing-time gradient maps, and the fused image is reconstructed by the inverse wavelet transform. Furthermore, in order to sufficiently reflect the order of the firing times, the threshold adjusting constant αΘ is estimated from an appointed iteration number, so that after the iterations are completed every wavelet coefficient has been activated. In order to verify the effectiveness of the proposed rules, experiments on multi-focus images are carried out. Moreover

  20. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

  1. Multiresolution Bilateral Filtering for Image Denoising

    PubMed Central

    Zhang, Ming; Gunturk, Bahadir K.

    2008-01-01

    The bilateral filter is a nonlinear filter that does spatial averaging without smoothing edges; it has been shown to be an effective image denoising technique. An important issue with the application of the bilateral filter is the selection of the filter parameters, which affect the results significantly. There are two main contributions of this paper. The first contribution is an empirical study of the optimal bilateral filter parameter selection in image denoising applications. The second contribution is an extension of the bilateral filter: the multiresolution bilateral filter, where bilateral filtering is applied to the approximation (low-frequency) subbands of a signal decomposed using a wavelet filter bank. The multiresolution bilateral filter is combined with wavelet thresholding to form a new image denoising framework, which turns out to be very effective in eliminating noise in real noisy images. Experimental results with both simulated and real data are provided. PMID:19004705
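
    A skeleton of this framework, under simplifying assumptions, is sketched below: at each level the detail sub-bands are soft-thresholded and a user-supplied bilateral filter (any callable mapping an image to an image, e.g. the bilateral sketch given earlier in this document) is applied to the approximation sub-band before the next decomposition. The db2 wavelet, two levels and the 3-sigma threshold are illustrative choices, not the paper's tuned parameters, and the image dimensions are assumed divisible by 2**level so that the periodized reconstruction shapes line up.

```python
import numpy as np
import pywt

def multires_bilateral_denoise(image, bilateral, wavelet='db2', level=2,
                               thr_scale=3.0):
    """Soft-threshold detail sub-bands and bilaterally filter the
    approximation sub-band at every level, then reconstruct."""
    img = np.asarray(image, dtype=float)
    detail_stack = []
    for _ in range(level):
        cA, (cH, cV, cD) = pywt.dwt2(img, wavelet, mode='periodization')
        sigma = np.median(np.abs(cD)) / 0.6745              # noise estimate
        thr = thr_scale * sigma
        detail_stack.append(tuple(pywt.threshold(c, thr, mode='soft')
                                  for c in (cH, cV, cD)))
        img = bilateral(cA)                                  # denoise approximation
    for details in reversed(detail_stack):
        img = pywt.idwt2((img, details), wavelet, mode='periodization')
    return img
```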

  2. Refinement trajectory and determination of eigenstates by a wavelet based adaptive method

    SciTech Connect

    Pipek, Janos; Nagy, Szilvia

    2006-11-07

    The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.

  3. On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

    2014-05-01

    Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique properties of wavelet analysis to unambiguously identify and isolate localized dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented and the feasibility/potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach that addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving forward and adjoint problems in the entire space-time domain on a near optimal adaptive computational mesh that automatically adapts to spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve both of them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach to variational data assimilation is demonstrated for the advection diffusion problem in 1D-t and 2D-t dimensions.

  4. A new time-adaptive discrete bionic wavelet transform for enhancing speech from adverse noise environment

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui

    2012-04-01

    Automatic speech processing systems are widely used in everyday life, for example in mobile communication, speech and speaker recognition, and for assisting the hearing impaired. In speech communication systems, the quality and intelligibility of speech is of utmost importance for ease and accuracy of information exchange. To obtain an intelligible speech signal that is also pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech from additive non-stationary noises which occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement, which may lead to a new path in speech processing. In the proposed technique, discrete BWT is first applied to noisy speech to derive the TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach has shown better performance than existing algorithms at lower input SNR due to modified soft level-dependent thresholding on the time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique is also evaluated for a speaker recognition task under noisy environments. The recognition results show that the TADBWT technique yields better performance when compared to alternate methods, specifically at lower input SNR.

  5. A robust adaptive denoising framework for real-time artifact removal in scalp EEG measurements

    NASA Astrophysics Data System (ADS)

    Kilicarslan, Atilla; Grossman, Robert G.; Contreras-Vidal, Jose Luis

    2016-04-01

    Objective. Non-invasive measurement of human neural activity based on the scalp electroencephalogram (EEG) allows for the development of biomedical devices that interface with the nervous system for scientific, diagnostic, therapeutic, or restorative purposes. However, EEG recordings are prone to physiological and non-physiological artifacts of different types and frequency characteristics. Among them, ocular artifacts and signal drifts represent major sources of EEG contamination, particularly in real-time closed-loop brain-machine interface (BMI) applications, which require effective handling of these artifacts across sessions and in natural settings. Approach. We extend the usage of a robust adaptive noise cancelling (ANC) scheme (H∞ filtering) for removal of eye blinks, eye motions, amplitude drifts and recording biases simultaneously. We also characterize the volume conduction by estimating the signal propagation levels across all EEG scalp recording areas due to ocular artifact generators. We find that the amplitude and spatial distribution of ocular artifacts vary greatly depending on the electrode location. Therefore, fixed filtering parameters for all recording areas would naturally hinder the true overall performance of an ANC scheme for artifact removal. We treat each electrode as a separate sub-system to be filtered, and without loss of generality, they are assumed to be uncorrelated and uncoupled. Main results. Our results show over 95-99.9% correlation between the raw and processed signals at non-ocular artifact regions, and depending on the contamination profile, 40-70% correlation when ocular artifacts are dominant. We also compare our results with the offline independent component analysis and artifact subspace reconstruction methods, and show that some local quantities are handled better by our sample-adaptive real-time framework. Decoding performance is also compared with multi-day experimental data from 2 subjects

  6. A method of adaptive wavelet filtering of the peripheral blood flow oscillations under stationary and non-stationary conditions.

    PubMed

    Tankanag, Arina V; Chemeris, Nikolay K

    2009-10-01

    The paper describes an original method for analysis of the peripheral blood flow oscillations measured with the laser Doppler flowmetry (LDF) technique. The method is based on the continuous wavelet transform and adaptive wavelet theory and applies an adaptive wavelet filtering to the LDF data. The method developed allows one to examine the dynamics of amplitude oscillations in a wide frequency range (from 0.007 to 2 Hz) and to process both stationary and non-stationary short (6 min) signals. The capabilities of the method have been demonstrated by analyzing LDF signals registered in the state of rest and upon humeral occlusion. The paper shows the main advantage of the method proposed, which is the significant reduction of 'border effects', as compared to the traditional wavelet analysis. It was found that the low-frequency amplitudes obtained by adaptive wavelets are significantly higher than those obtained by non-adaptive ones. The method suggested would be useful for the analysis of low-frequency components of the short-living transitional processes under the conditions of functional tests. The method of adaptive wavelet filtering can be used to process stationary and non-stationary biomedical signals (cardiograms, encephalograms, myograms, etc), as well as signals studied in the other fields of science and engineering.

  7. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    SciTech Connect

    Webster, Clayton G; Zhang, Guannan; Gunzburger, Max D

    2012-10-01

    Accurate predictive simulations of complex real world applications require numerical approximations that, first, oppose the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting and thus, optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  8. Adaptive Threshold Neural Spike Detector Using Stationary Wavelet Transform in CMOS.

    PubMed

    Yang, Yuning; Boling, C Sam; Kamboh, Awais M; Mason, Andrew J

    2015-11-01

    Spike detection is an essential first step in the analysis of neural recordings. Detection at the frontend eases the bandwidth requirement for wireless data transfer of multichannel recordings to extra-cranial processing units. In this work, a low power digital integrated spike detector based on the lifting stationary wavelet transform is presented and developed. By monitoring the standard deviation of wavelet coefficients, the proposed detector can adaptively set a threshold value online for each channel independently without requiring user intervention. A prototype 16-channel spike detector was designed and tested in an FPGA. The method enables spike detection with nearly 90% accuracy even when the signal-to-noise ratio is as low as 2. The design was mapped to 130 nm CMOS technology and shown to occupy 0.014 mm(2) of area and dissipate 1.7 μW of power per channel, making it suitable for implantable multichannel neural recording systems. PMID:25955990
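
    The detection rule can be illustrated with a short software sketch: an adaptive threshold is derived per channel from the spread of the finest-scale SWT detail coefficients, as described above. The Haar wavelet, the three-level decomposition and the multiplier k are illustrative assumptions, not the constants implemented in the reported chip.

```python
import numpy as np
import pywt

def detect_spikes(channel, wavelet='haar', level=3, k=4.0):
    """Adaptive-threshold spike candidate detection for one channel.
    The threshold is k times the standard deviation of the finest SWT
    detail band. Channel length must be divisible by 2**level."""
    coeffs = pywt.swt(channel, wavelet, level=level)
    detail = coeffs[-1][1]                     # finest-scale detail coefficients
    thr = k * np.std(detail)                   # channel-specific adaptive threshold
    above = np.flatnonzero(np.abs(detail) > thr)
    if above.size == 0:
        return above
    # keep only the first sample of each run of consecutive crossings
    return above[np.insert(np.diff(above) > 1, 0, True)]
```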

  10. Adaptive Wavelet Techniques, Wigner Distributions and the Direct Simulation of the Vlasov Equation

    NASA Astrophysics Data System (ADS)

    Afeyan, Bedros; Douglas, Melissa; Spielman, Rick

    2000-10-01

    The formal analogy between the quantum Liouville equation satisfied by the Wigner function in Quantum Mechanics, and the Vlasov equation satisfied by the single particle distribution function in plasma physics is exploited in order to study the long term evolution of nonlinear electrostatic wave phenomena dictated by the Vlasov-Poisson equations. Adaptive wavelet techniques are used to tile phase space in an optimal manner so as to minimize computational domain sizes and simultaneously to retain accuracy over disparate scales. Traditional MHD calculations will also be analyzed with our wavelet techniques to show the favorable data compression and feature extraction capabilities of multiresolution analysis. Specifically Z51 and Z179 will be compared to show the nature of the improvements in double wire array (Z179) implosions on Z to those obtained with a single wire array (Z51).

  11. Design of adaptive fuzzy wavelet neural sliding mode controller for uncertain nonlinear systems.

    PubMed

    Shahriari kahkeshi, Maryam; Sheikholeslam, Farid; Zekri, Maryam

    2013-05-01

    This paper proposes a novel adaptive fuzzy wavelet neural sliding mode controller (AFWN-SMC) for a class of uncertain nonlinear systems. The main contribution of this paper is to design a smooth sliding mode control (SMC) for a class of high-order nonlinear systems while the structure of the system is unknown and no prior knowledge about the uncertainty is available. The proposed scheme is composed of an Adaptive Fuzzy Wavelet Neural Controller (AFWNC) to construct the equivalent control term and an Adaptive Proportional-Integral (A-PI) controller implementing the switching term to provide a smooth control input. Asymptotic stability of the closed loop system is guaranteed using the Lyapunov direct method. To show the efficiency of the proposed scheme, some numerical examples are provided. To validate the results obtained by the proposed approach, some other methods are adopted from the literature and applied for comparison. Simulation results show the superiority and capability of the proposed controller in improving the steady state performance and transient response specifications while using fewer fuzzy rules and on-line adaptive parameters in comparison to other methods. Furthermore, the control effort has considerably decreased and the chattering phenomenon has been completely removed.

  12. Translation invariant directional framelet transform combined with Gabor filters for image denoising.

    PubMed

    Shi, Yan; Yang, Xiaoyuan; Guo, Yuhua

    2014-01-01

    This paper is devoted to the study of a directional lifting transform for wavelet frames. A nonsubsampled lifting structure is developed to maintain the translation invariance as it is an important property in image denoising. Then, the directionality of the lifting-based tight frame is explicitly discussed, followed by a specific translation invariant directional framelet transform (TIDFT). The TIDFT has two framelets ψ1, ψ2 with vanishing moments of order two and one respectively, which are able to detect singularities in a given direction set. It provides an efficient and sparse representation for images containing rich textures along with properties of fast implementation and perfect reconstruction. In addition, an adaptive block-wise orientation estimation method based on Gabor filters is presented instead of the conventional minimization of residuals. Furthermore, the TIDFT is utilized to exploit the capability of image denoising, incorporating the MAP estimator for multivariate exponential distribution. Consequently, the TIDFT is able to eliminate the noise effectively while preserving the textures simultaneously. Experimental results show that the TIDFT outperforms some other frame-based denoising methods, such as contourlet and shearlet, and is competitive to the state-of-the-art denoising approaches.

  13. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.

  14. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, followed by differentiation of this polynomial and finally evaluation of the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or, perhaps, exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid and this grid is refined locally based on wavelet analysis.

  15. Lifting wavelet method of target detection

    NASA Astrophysics Data System (ADS)

    Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin

    2009-11-01

    Image target recognition plays a very important role in the areas of scientific exploration, aeronautics and space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference in complex environments have always affected the stability of recognition algorithms. In this paper, to address the real-time performance, accuracy and anti-interference problems of target detection, a lifting-wavelet image target detection method is used. First, histogram equalization and a difference method are used to obtain the target region, and adaptive thresholding together with mathematical morphology operations eliminates background errors. Second, a multi-channel wavelet filter is used to de-noise and enhance the original image, overcoming the sensitivity of general algorithms to noise and reducing the misjudgment rate. The multi-resolution character of the wavelet and the lifting framework can be designed directly in the spatial domain and used in target detection and target feature extraction. The experimental results show that the designed lifting wavelet overcomes the detection difficulties caused by target motion in complex backgrounds; it can effectively suppress noise and improve the efficiency and speed of detection.

  16. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive high resolution, high contrast cross-sectional anatomic images through the body. The data of conventional MRI are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) in MR imaging has been proposed to exploit the sparsity of MR images, showing great potential to reduce the scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to the SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for those disadvantages, this paper first introduces an undersampling scheme named the significance map for sparse wavelet-encoded k-space to speed up data acquisition as well as allowing for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time desirable for medical applications. The simulation and experimental results are presented showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high quality.

  17. Adaptive wavelet simulation of global ocean dynamics using a new Brinkman volume penalization

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-12-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for linearized one-dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.

  18. An image denoising application using shearlets

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt

    2013-10-01

    Medical imaging is a multidisciplinary field related to computer science, electrical/electronic engineering, physics, mathematics and medicine. There has been a dramatic increase in the variety, availability and resolution of medical imaging devices over the last half century. For proper medical imaging, highly trained technicians and clinicians are needed to extract clinically pertinent information from medical data correctly. Artificial systems must be designed to analyze medical data sets either in a partially or even a fully automatic manner to fulfil this need. For this purpose there is much ongoing research into finding optimal representations in image processing and computer vision [1, 18]. Medical images almost always contain artefacts and it is crucial to remove these artefacts to obtain reliable results. Out of the many methods for denoising images, in this paper two denoising methods, wavelets and shearlets, have been applied to mammography images. Comparing these two methods, shearlets give better results for denoising such data.

  19. [DR image denoising based on Laplace-Impact mixture model].

    PubMed

    Feng, Guo-Dong; He, Xiang-Bin; Zhou, He-Qin

    2009-07-01

    A novel DR image denoising algorithm based on a Laplace-Impact mixture model in the dual-tree complex wavelet domain is proposed in this paper. It uses the local variance to build the probability density function of the Laplace-Impact model, which fits the distribution of high-frequency subband coefficients well. Within the Laplace-Impact framework, this paper describes a novel method for image denoising based on designing minimum mean squared error (MMSE) estimators, which relies on the strong correlation between amplitudes of nearby coefficients. The experimental results show that the algorithm proposed in this paper outperforms several state-of-the-art denoising methods such as the Bayes least squares Gaussian scale mixture and the Laplace prior.

  20. Determination and Visualization of pH Values in Anaerobic Digestion of Water Hyacinth and Rice Straw Mixtures Using Hyperspectral Imaging with Wavelet Transform Denoising and Variable Selection

    PubMed Central

    Zhang, Chu; Ye, Hui; Liu, Fei; He, Yong; Kong, Wenwen; Sheng, Kuichuan

    2016-01-01

    Biomass energy represents a huge supplement for meeting current energy demands. A hyperspectral imaging system covering the spectral range of 874–1734 nm was used to determine the pH value of anaerobic digestion liquid produced by water hyacinth and rice straw mixtures used for methane production. Wavelet transform (WT) was used to reduce noises of the spectral data. Successive projections algorithm (SPA), random frog (RF) and variable importance in projection (VIP) were used to select 8, 15 and 20 optimal wavelengths for the pH value prediction, respectively. Partial least squares (PLS) and a back propagation neural network (BPNN) were used to build the calibration models on the full spectra and the optimal wavelengths. As a result, BPNN models performed better than the corresponding PLS models, and SPA-BPNN model gave the best performance with a correlation coefficient of prediction (rp) of 0.911 and root mean square error of prediction (RMSEP) of 0.0516. The results indicated the feasibility of using hyperspectral imaging to determine pH values during anaerobic digestion. Furthermore, a distribution map of the pH values was achieved by applying the SPA-BPNN model. The results in this study would help to develop an on-line monitoring system for biomass energy producing process by hyperspectral imaging. PMID:26901202

  1. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov Maxwell system

    NASA Astrophysics Data System (ADS)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain; Sonnendrücker, Eric; Bertrand, Pierre

    2008-08-01

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. Therefore, the multiscale expansion of the distribution function allows a sparse representation of the data and thus saves memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wave breaking. However, the wavelet based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh due to the substantial overhead that the method introduces. Nonetheless it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase-space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  2. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    SciTech Connect

    Goffin, Mark A.; Buchan, Andrew G.; Dargaville, Steven; Pain, Christopher C.; Smith, Paul N.; Smedley-Stevenson, Richard P.

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. -- Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.

  3. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.

  4. Iterated oversampled filter banks and wavelet frames

    NASA Astrophysics Data System (ADS)

    Selesnick, Ivan W.; Sendur, Levent

    2000-12-01

    This paper takes up the design of wavelet tight frames that are analogous to Daubechies orthonormal wavelets - that is, the design of minimal length wavelet filters satisfying certain polynomial properties, but now in the oversampled case. The oversampled dyadic DWT considered in this paper is based on a single scaling function and two distinct wavelets. Having more wavelets than necessary gives a closer spacing between adjacent wavelets within the same scale. As a result, the transform is nearly shift-invariant and can be used to improve denoising. Because the associated time-frequency lattice preserves the dyadic structure of the critically sampled DWT, it can be used with tree-based denoising algorithms that exploit parent-child correlation.

  5. A Wavelet-Based ECG Delineation Method: Adaptation to an Experimental Electrograms with Manifested Global Ischemia.

    PubMed

    Hejč, Jakub; Vítek, Martin; Ronzhina, Marina; Nováková, Marie; Kolářová, Jana

    2015-09-01

    We present a novel wavelet-based ECG delineation method with robust classification of P wave and T wave. The work is aimed at adapting the method to long-term experimental electrograms (EGs) measured on isolated rabbit hearts and at evaluating the effect of global ischemia in experimental EGs on delineation performance. The algorithm was tested on a set of 263 rabbit EGs with established reference points and on human signals using the Common Standards for Quantitative Electrocardiography Standard Database (CSEDB). On CSEDB, the standard deviation (SD) of measured errors satisfies the given criteria in each point and the results are comparable to other published works. In rabbit signals, our QRS detector reached a sensitivity of 99.87% and a positive predictivity of 99.89% despite an overlap of the spectral components of the QRS complex, P wave and power line noise. The algorithm shows great performance in suppressing J-point elevation and reached low overall error in both QRS onset (SD = 2.8 ms) and QRS offset (SD = 4.3 ms) delineation. T wave offset is detected with acceptable error (SD = 12.9 ms) and a sensitivity of nearly 99%. The variance of the errors during global ischemia remains relatively stable; however, more failures in the detection of T wave and P wave occur. Due to differences in spectral and timing characteristics, the parameters of the rabbit-based algorithm have to be highly adaptable and set more precisely than for human ECG signals to reach acceptable performance. PMID:26577367

  6. Study on De-noising Technology of Radar Life Signal

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Fang; Wang, Lian-Huan; Ma, Jiang-Fei; Wang, Pei-Pei

    2016-05-01

    Radar detection is a novel life detection technology that can be applied to medical monitoring, anti-terrorism, disaster relief, street fighting, etc. As the radar life signal is very weak, it is often submerged in noise. Because of the non-stationarity and randomness of these clutter signals, efficient de-noising is necessary before the useful signal can be extracted and separated. This paper improves the theoretical continuous-wave model of the radar life signal, performs de-noising by introducing the lifting wavelet transform, and determines the best threshold function by comparing the de-noising effects of different threshold functions. The results indicate that both the SNR and the MSE of the signal are better than with the traditional methods when the lifting wavelet transform and the new improved soft-threshold de-noising method are used.
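
    The sketch below illustrates the general idea of threshold-function selection on wavelet coefficients. It uses an ordinary DWT from PyWavelets rather than the lifting implementation in the paper, and the "improved" threshold function shown is a generic hard/soft compromise assumed for illustration, not the authors' exact function.

```python
# Sketch of wavelet-threshold denoising with a standard soft threshold and a
# generic "improved" threshold function (an assumption -- the paper's exact
# function is not reproduced here). Uses an ordinary DWT, not a lifting scheme.
import numpy as np
import pywt

def improved_threshold(c, thr, alpha=0.5):
    """Compromise between hard and soft thresholding (illustrative only)."""
    return np.where(np.abs(c) > thr,
                    np.sign(c) * (np.abs(c) - alpha * thr),
                    0.0)

def denoise(signal, wavelet="db6", level=4, func="improved"):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))      # universal threshold
    new = [coeffs[0]]
    for c in coeffs[1:]:
        if func == "soft":
            new.append(pywt.threshold(c, thr, mode="soft"))
        else:
            new.append(improved_threshold(c, thr))
    return pywt.waverec(new, wavelet)[: len(signal)]

noisy = np.random.randn(2048)            # placeholder radar life signal
clean = denoise(noisy)
```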

  7. Accurate single-trial detection of movement intention made possible using adaptive wavelet transform.

    PubMed

    Chamanzar, Alireza; Malekmohammadi, Alireza; Bahrani, Masih; Shabany, Mahdi

    2015-01-01

    The outlook for brain-computer interfacing (BCI) is very bright. The real-time, accurate detection of a motor movement task is critical in BCI systems. The poor signal-to-noise ratio (SNR) of EEG signals and the ambiguity of noise generator sources in the brain render this task quite challenging. In this paper, we demonstrate a novel algorithm for precise detection of the onset of a motor movement through identification of event-related desynchronization (ERD) patterns. Using an adaptive matched-filter technique implemented with an optimized continuous wavelet transform and an appropriately selected basis, we can detect single-trial ERDs. Moreover, we use a maximum-likelihood (ML) electrooculography (EOG) artifact removal method to remove eye-related artifacts and significantly improve the detection performance. We have applied this technique to our locally recorded Emotiv(®) data set of 6 healthy subjects, where an average detection selectivity of 85 ± 6% and sensitivity of 88 ± 7.7% is achieved, with a temporal precision in the range of -1250 to 367 ms in onset detection of single trials.
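
    A minimal sketch of the time-frequency side of such an approach is given below: a continuous wavelet transform of one EEG channel and a crude band-power ERD indicator in the mu band. It is not the authors' matched-filter detector; the sampling rate, wavelet, scales and baseline window are assumptions.

```python
# Sketch: continuous wavelet transform of a single EEG channel and band-power
# tracking in the 8-12 Hz range as a crude ERD indicator. Wavelet, scales,
# sampling rate and baseline window are illustrative assumptions.
import numpy as np
import pywt

fs = 128.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
eeg = np.random.randn(t.size)                # placeholder EEG channel

freqs_target = np.arange(6.0, 30.0, 0.5)     # Hz
scales = pywt.central_frequency("morl") * fs / freqs_target
coefs, freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1 / fs)

mu_band = (freqs >= 8) & (freqs <= 12)
mu_power = np.mean(np.abs(coefs[mu_band, :]) ** 2, axis=0)

baseline = np.mean(mu_power[: int(2 * fs)])  # first 2 s taken as baseline
erd_percent = 100.0 * (mu_power - baseline) / baseline
print("minimum ERD (%):", erd_percent.min())
```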

  8. CW-THz image contrast enhancement using wavelet transform and Retinex

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Zhang, Min; Hu, Qi-fan; Huang, Ying-Xue; Liang, Hua-Wei

    2015-10-01

    To enhance the contrast of continuous wave terahertz (CW-THz) scanning images and to reduce their noise, a method based on the wavelet transform and Retinex theory is proposed. In this paper, the factors affecting the quality of CW-THz images were first analysed. Second, an approach combining the discrete wavelet transform (DWT) with a designed nonlinear function in the wavelet domain was applied for contrast enhancement. Then, the Retinex algorithm was incorporated for further contrast enhancement. To evaluate the effectiveness of the proposed method qualitatively and quantitatively, it was compared with the adaptive histogram equalization method, the homomorphic filtering method and the SSR (Single-Scale Retinex) method. Experimental results demonstrate that the presented algorithm can effectively enhance the contrast of CW-THz images and obtain a better visual effect.
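
    The wavelet-domain enhancement step can be sketched as follows, with a simple power-law gain standing in for the designed nonlinear function and the Retinex stage omitted; the wavelet, level and gain parameters are assumptions.

```python
# Sketch: DWT-domain contrast enhancement of a 2-D image by applying a simple
# nonlinear gain to detail coefficients (a placeholder for the designed
# function in the paper; the Retinex step is omitted). Uses PyWavelets.
import numpy as np
import pywt

def enhance(img, wavelet="sym4", level=2, gamma=0.7, gain=1.5):
    def boost(c):
        # power-law gain: relatively expands small detail coefficients
        return gain * np.sign(c) * np.abs(c) ** gamma

    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]] + [tuple(boost(c) for c in details) for details in coeffs[1:]]
    return pywt.waverec2(out, wavelet)

img = np.random.rand(128, 128)   # placeholder CW-THz image
enhanced = enhance(img)
```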

  9. Adaptive Wavelet-Based Direct Numerical Simulations of Rayleigh-Taylor Instability

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott J.

    The compressible Rayleigh-Taylor instability (RTI) occurs when a fluid of low molar mass supports a fluid of higher molar mass against a gravity-like body force or in the presence of an accelerating front. Intrinsic to the problem are highly stratified background states, acoustic waves, and a wide range of physical scales. The objective of this thesis is to develop a specialized computational framework that addresses these challenges and to apply the advanced methodologies for direct numerical simulations of compressible RTI. Simulations are performed using the Parallel Adaptive Wavelet Collocation Method (PAWCM). Due to the physics-based adaptivity and direct error control of the method, PAWCM is ideal for resolving the wide range of scales present in RTI growth. Characteristics-based non-reflecting boundary conditions are developed for highly stratified systems to be used in conjunction with PAWCM. This combination allows for extremely long domains, which is necessary for observing the late time growth of compressible RTI. Initial conditions that minimize acoustic disturbances are also developed. The initialization is consistent with linear stability theory, where the background state consists of two diffusively mixed stratified fluids of differing molar masses. The compressibility effects on the departure from the linear growth, the onset of strong non-linear interactions, and the late-time behavior of the fluid structures are investigated. It is discovered that, for the thermal equilibrium case, the background stratification acts to suppress the instability growth when the molar mass difference is small. A reversal in this monotonic behavior is observed for large molar mass differences, where stratification enhances the bubble growth. Stratification also affects the vortex creation and the associated induced velocities. The enhancement and suppression of the RTI growth has important consequences for a detailed understanding of supernovae flame front

  10. Combining interior and exterior characteristics for remote sensing image denoising

    NASA Astrophysics Data System (ADS)

    Peng, Ni; Sun, Shujin; Wang, Runsheng; Zhong, Ping

    2016-04-01

    Remote sensing image denoising faces many challenges since a remote sensing image usually covers a wide area and thus contains complex contents. Using patch-based statistical characteristics is a flexible way to improve denoising performance. There are usually two kinds of statistical characteristics available: interior and exterior characteristics. Different statistical characteristics have their own strengths in restoring specific image contents. Combining different statistical characteristics to use their strengths together may therefore have the potential to improve denoising results. This work proposes a method that combines the two kinds of statistical characteristics and adaptively selects between them for different image contents. The proposed approach is implemented through a new characteristics selection criterion learned over training data. Moreover, with the proposed combination method, this work develops a denoising algorithm for remote sensing images. Experimental results show that our method can make full use of the advantages of interior and exterior characteristics for different image contents and thus improve the denoising performance.

  11. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulations of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical

  12. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimation may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme used to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, an adaptive interval thresholding is conducted to set to zero all the components in the intrinsic mode functions which are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has a good capability in denoising and detail preservation.
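
    A simplified sketch of thresholding precomputed IMFs is shown below. The IMFs are assumed to come from any EMD implementation, and the sample-wise threshold rule is illustrative rather than the paper's exact interval-thresholding scheme.

```python
# Sketch: thresholding of precomputed intrinsic mode functions (IMFs). The
# IMFs are assumed to come from any EMD routine; the noise-level estimate and
# per-IMF threshold rule are illustrative, not the paper's interval scheme.
import numpy as np

def threshold_imfs(imfs, keep_last=1):
    """Zero small samples in each IMF except the last `keep_last` trend modes."""
    denoised = []
    for k, imf in enumerate(imfs):
        if k >= len(imfs) - keep_last:
            denoised.append(imf)                       # keep low-frequency trend
            continue
        sigma = np.median(np.abs(imf)) / 0.6745        # robust noise level
        thr = sigma * np.sqrt(2 * np.log(imf.size))
        denoised.append(np.where(np.abs(imf) > thr, imf, 0.0))
    return np.sum(denoised, axis=0)

# imfs: array of shape (n_imfs, n_samples) from an EMD routine (placeholder here)
imfs = np.random.randn(6, 1024)
clean = threshold_imfs(imfs)
```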

  13. Surface quality monitoring for process control by on-line vibration analysis using an adaptive spline wavelet algorithm

    NASA Astrophysics Data System (ADS)

    Luo, G. Y.; Osypiw, D.; Irle, M.

    2003-05-01

    The dynamic behaviour of wood machining processes affects the surface finish quality of machined workpieces. In order to meet the requirements of increased production efficiency and improved product quality, surface quality information is needed for enhanced process control. However, current methods, which rely on expensive devices or sophisticated designs, may not be suitable for industrial real-time application. This paper presents a novel approach to surface quality evaluation by on-line vibration analysis using an adaptive spline wavelet algorithm, which is based on the excellent time-frequency localization of B-spline wavelets. A series of experiments has been performed to extract the feature, namely the correlation between the amplitude change in the relevant vibration frequency band(s) and the surface quality. The graphs of the experimental results demonstrate that the change of amplitude in the selected frequency bands with variable resolution (linear and non-linear) reflects the quality of the surface finish, and that the root sum square of the wavelet power spectrum is a good indicator of surface quality. Thus, surface quality can be estimated and quantified at an average level in real time. The results can be used to regulate and optimize the machine's feed speed, maintaining a constant spindle motor speed during cutting. This will lead to higher-level control and machining rates while keeping dimensional integrity and surface finish within specification.
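
    The root-sum-square indicator can be sketched as below, using an ordinary DWT in place of the adaptive B-spline wavelet algorithm; the wavelet, level and selected bands are assumptions.

```python
# Sketch: root-sum-square of wavelet sub-band power from a vibration signal,
# used as a scalar surface-quality indicator. An ordinary DWT stands in for
# the paper's B-spline wavelet algorithm; band selection is an assumption.
import numpy as np
import pywt

def band_rss(vibration, wavelet="db8", level=5, bands=(2, 3)):
    coeffs = pywt.wavedec(vibration, wavelet, level=level)
    # coeffs = [cA_level, cD_level, ..., cD_1]; pick the detail bands of interest
    powers = [np.sum(coeffs[b] ** 2) for b in bands]
    return np.sqrt(np.sum(powers))

vib = np.random.randn(4096)              # placeholder vibration record
print("surface-quality indicator:", band_rss(vib))
```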

  14. Image Denoising via Bayesian Estimation of Statistical Parameter Using Generalized Gamma Density Prior in Gaussian Noise Model

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-03-01

    The application of image processing in industry has shown remarkable success over the last decade, for example, in security and telecommunication systems. The denoising of natural image corrupted by Gaussian noise is a classical problem in image processing. So, image denoising is an indispensable step during image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of the Bayesian image denoising algorithms is to estimate the statistical parameter of the image. Here, we employ maximum a posteriori (MAP) estimation to calculate local observed variance with generalized Gamma density prior for local observed variance and Laplacian or Gaussian distribution for noisy wavelet coefficients. Evidently, our selection of prior distribution is motivated by efficient and flexible properties of generalized Gamma density. The experimental results show that the proposed method yields good denoising results.

  15. Portal imaging: Performance improvement in noise reduction by means of wavelet processing.

    PubMed

    González-López, Antonio; Morales-Sánchez, Juan; Larrey-Ruiz, Jorge; Bastida-Jumilla, María-Consuelo; Verdú-Monedero, Rafael

    2016-01-01

    This paper discusses the suitability, in terms of noise reduction, of various methods which can be applied to an image type often used in radiation therapy: the portal image. Among these methods, the analysis focuses on those operating in the wavelet domain. Wavelet-based methods tested on natural images--such as the thresholding of the wavelet coefficients, the minimization of the Stein unbiased risk estimator on a linear expansion of thresholds (SURE-LET), and the Bayes least-squares method using as a prior a Gaussian scale mixture (BLS-GSM method)--are compared with other methods that operate on the image domain--an adaptive Wiener filter and a nonlocal mean filter (NLM). For the assessment of the performance, the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), the Pearson correlation coefficient, and the Spearman rank correlation (ρ) coefficient are used. The performance of the wavelet filters and the NLM method are similar, but wavelet filters outperform the Wiener filter in terms of portal image denoising. It is shown how BLS-GSM and NLM filters produce the smoothest image, while keeping soft-tissue and bone contrast. As for the computational cost, filters using a decimated wavelet transform (decimated thresholding and SURE-LET) turn out to be the most efficient, with calculation times around 1 s. PMID:26602966
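
    The four figures of merit can be computed as in the sketch below, assuming scikit-image and SciPy and placeholder image arrays; this is only the assessment step, not any of the compared filters.

```python
# Sketch: the four figures of merit used in the comparison (PSNR, SSIM,
# Pearson r, Spearman rho), computed on a reference image and a denoised
# estimate. The arrays are placeholders.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from scipy.stats import pearsonr, spearmanr

reference = np.random.rand(256, 256)          # placeholder ground truth
denoised = reference + 0.01 * np.random.randn(256, 256)

psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
r, _ = pearsonr(reference.ravel(), denoised.ravel())
rho, _ = spearmanr(reference.ravel(), denoised.ravel())
print(f"PSNR={psnr:.2f} dB  SSIM={ssim:.3f}  r={r:.3f}  rho={rho:.3f}")
```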

  16. A new performance evaluation scheme for jet engine vibration signal denoising

    NASA Astrophysics Data System (ADS)

    Sadooghi, Mohammad Saleh; Esmaeilzadeh Khadem, Siamak

    2016-08-01

    Denoising of a cargo-plane jet engine compressor vibration signal is investigated in this article. The discrete wavelet transform and two families of thresholding, Donoho-Johnston and parameter methods, are applied to the vibration signal. Eighty-four combinations of wavelet thresholding and mother wavelet are evaluated. A new performance evaluation scheme for the optimal selection of the mother wavelet and thresholding method combination is proposed in this paper, which makes a trade-off between four performance criteria: signal-to-noise ratio, percentage root mean square difference, cross-correlation, and mean square error. The Dmeyer mother wavelet (dmey) combined with Rigorous SURE thresholding has the maximum trade-off value and was selected as the most appropriate combination for denoising the signal. It was shown that an inappropriate combination leads to data loss. The higher performance of the proposed trade-off with respect to the other criteria was also demonstrated graphically.
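
    A minimal sketch of the four criteria and an equally weighted trade-off score over candidate combinations is given below; the paper's exact trade-off formulation is not reproduced, so the equal weighting and normalisation are assumptions.

```python
# Sketch: the four denoising criteria (SNR, PRD, cross-correlation, MSE) and a
# simple equally weighted trade-off score over candidate wavelet/threshold
# combinations. Equal weighting of normalised criteria is an assumption.
import numpy as np

def criteria(clean, denoised):
    err = clean - denoised
    mse = np.mean(err ** 2)
    snr = 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
    prd = 100 * np.sqrt(np.sum(err ** 2) / np.sum(clean ** 2))
    xcorr = np.corrcoef(clean, denoised)[0, 1]
    return snr, prd, xcorr, mse

def tradeoff(results):
    """results: dict {name: (snr, prd, xcorr, mse)} -> name of best combination."""
    arr = np.array(list(results.values()), dtype=float)
    # normalise each criterion to [0, 1]; SNR and xcorr are "higher is better",
    # PRD and MSE are "lower is better"
    norm = (arr - arr.min(axis=0)) / (np.ptp(arr, axis=0) + 1e-12)
    score = norm[:, 0] + norm[:, 2] + (1 - norm[:, 1]) + (1 - norm[:, 3])
    return list(results)[int(np.argmax(score))]
```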

  17. A new study on mammographic image denoising using multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Dong, Min; Guo, Ya-Nan; Ma, Yi-De; Ma, Yu-run; Lu, Xiang-yu; Wang, Ke-ju

    2015-12-01

    Mammography is the simplest and most effective technology for the early detection of breast cancer. However, lesion areas of the breast are difficult to detect because mammograms are contaminated with noise. This work focuses on various multiresolution denoising techniques, including classical methods based on the wavelet and contourlet transforms; the emerging multiresolution methods are also investigated. In this work, a new denoising method based on the dual-tree contourlet transform (DCT) is proposed; the DCT possesses the advantages of approximate shift invariance, directionality and anisotropy. The proposed denoising method is applied to mammograms, and the experimental results show that the emerging multiresolution method succeeds in maintaining the edges and texture details, and can obtain better performance than the other methods both in visual effect and in terms of the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Structure Similarity (SSIM) values.

  18. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry

    NASA Astrophysics Data System (ADS)

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-01

    This paper introduces a new image denoising, fusion and enhancement framework for combining and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image to obtain a final image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC allows the most relevant features of all three images to be combined in one image while reducing the noise and enhancing adaptively the relevant image features. The newly developed framework may be used in technical and medical applications.
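
    The fusion step can be sketched with a one-level stationary wavelet transform and a max-absolute-coefficient rule, as below; the Wiener denoising and adaptive histogram equalisation stages are omitted, and the wavelet choice and averaging of the approximation bands are assumptions.

```python
# Sketch: one-level shift-invariant (stationary) wavelet fusion of two
# pre-registered images with a max-absolute-coefficient rule. The Wiener
# denoising and histogram-equalisation steps of the framework are omitted.
import numpy as np
import pywt

def swt_fuse(img_a, img_b, wavelet="db2"):
    cA_a, (cH_a, cV_a, cD_a) = pywt.swt2(img_a, wavelet, level=1)[0]
    cA_b, (cH_b, cV_b, cD_b) = pywt.swt2(img_b, wavelet, level=1)[0]

    def pick(a, b):
        return np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail

    fused = [(0.5 * (cA_a + cA_b),                      # average approximations
              (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b)))]
    return pywt.iswt2(fused, wavelet)

ac = np.random.rand(256, 256)    # placeholder attenuation-contrast image
dpc = np.random.rand(256, 256)   # placeholder differential-phase image
fused = swt_fuse(ac, dpc)
```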

  19. Wavelets in medical imaging

    NASA Astrophysics Data System (ADS)

    Zahra, Noor e.; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-01

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of the EEG, one of the most important biomedical signals, is carried out. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by the SWT.
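
    A minimal sketch of SWT-based denoising of an EEG trace is given below, assuming PyWavelets; the wavelet, level and threshold rule are illustrative choices rather than those used in the chapter.

```python
# Sketch: EEG denoising with the stationary wavelet transform (SWT) and soft
# thresholding. Wavelet, level and threshold rule are illustrative choices.
import numpy as np
import pywt

def swt_denoise(eeg, wavelet="sym5", level=4):
    n = len(eeg) - len(eeg) % (2 ** level)          # SWT needs length % 2^level == 0
    x = np.asarray(eeg[:n], dtype=float)
    coeffs = pywt.swt(x, wavelet, level=level)      # list of (cA, cD) per level
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # noise from finest details
    thr = sigma * np.sqrt(2 * np.log(n))
    coeffs = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
    return pywt.iswt(coeffs, wavelet)

eeg = np.random.randn(5000)                         # placeholder EEG samples
clean = swt_denoise(eeg)
```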

  20. Wavelets in medical imaging

    SciTech Connect

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-17

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of the EEG, one of the most important biomedical signals, is carried out. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by the SWT.

  1. Adaptive dynamic inversion robust control for BTT missile based on wavelet neural network

    NASA Astrophysics Data System (ADS)

    Li, Chuanfeng; Wang, Yongji; Deng, Zhixiang; Wu, Hao

    2009-10-01

    A new nonlinear control strategy incorporating the dynamic inversion method with wavelet neural networks is presented for the nonlinear coupling system of a Bank-to-Turn (BTT) missile in the reentry phase. The basic control law is designed using the dynamic inversion feedback linearization method, and an online-learning wavelet neural network is used to compensate the inversion error due to aerodynamic parameter errors, modeling imprecision and external disturbance, in view of the time-frequency localization properties of the wavelet transform. Weight adjustment laws are derived according to Lyapunov stability theory, which guarantees the boundedness of all signals in the whole system. Furthermore, robust stability of the closed-loop system under this tracking law is proved. Finally, six degree-of-freedom (6DOF) simulation results show that the attitude angles can track the anticipated commands precisely in the presence of external disturbance and parameter uncertainty. This means that the dependence of the dynamic inversion method on the model is reduced and the robustness of the control system is enhanced by using the wavelet neural network (WNN) to reconstruct the inversion error on-line.

  2. Anatomically-adapted graph wavelets for improved group-level fMRI activation mapping.

    PubMed

    Behjat, Hamid; Leonardi, Nora; Sörnmo, Leif; Van De Ville, Dimitri

    2015-12-01

    A graph based framework for fMRI brain activation mapping is presented. The approach exploits the spectral graph wavelet transform (SGWT) for the purpose of defining an advanced multi-resolutional spatial transformation for fMRI data. The framework extends wavelet based SPM (WSPM), which is an alternative to the conventional approach of statistical parametric mapping (SPM), and is developed specifically for group-level analysis. We present a novel procedure for constructing brain graphs, with subgraphs that separately encode the structural connectivity of the cerebral and cerebellar gray matter (GM), and address the inter-subject GM variability by the use of template GM representations. Graph wavelets tailored to the convoluted boundaries of GM are then constructed as a means to implement a GM-based spatial transformation on fMRI data. The proposed approach is evaluated using real as well as semi-synthetic multi-subject data. Compared to SPM and WSPM using classical wavelets, the proposed approach shows superior type-I error control. The results on real data suggest a higher detection sensitivity as well as the capability to capture subtle, connected patterns of brain activity.

  3. Denoising in digital speckle pattern interferometry using wave atoms.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2007-05-15

    We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented.

  4. A wavelet-based Projector Augmented-Wave (PAW) method: Reaching frozen-core all-electron precision with a systematic, adaptive and localized wavelet basis set

    NASA Astrophysics Data System (ADS)

    Rangel, T.; Caliste, D.; Genovese, L.; Torrent, M.

    2016-11-01

    We present a Projector Augmented-Wave (PAW) method based on a wavelet basis set. We implemented our wavelet-PAW method as a PAW library in the ABINIT package [http://www.abinit.org] and into BigDFT [http://www.bigdft.org]. We test our implementation in prototypical systems to illustrate the potential usage of our code. By using the wavelet-PAW method, we can simulate charged and special boundary condition systems with frozen-core all-electron precision. Furthermore, our work paves the way to large-scale and potentially order-N simulations within a PAW method.

  5. A wavelet-MRA-based adaptive semi-Lagrangian method for the relativistic Vlasov-Maxwell system

    SciTech Connect

    Besse, Nicolas Latu, Guillaume Ghizzo, Alain Sonnendruecker, Eric Bertrand, Pierre

    2008-08-10

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. Therefore, the multiscale expansion of the distribution function makes it possible to obtain a sparse representation of the data and thus save memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase-space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where these thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to

  6. Numerical solution of multi-dimensional compressible reactive flow using a parallel wavelet adaptive multi-resolution method

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle

    The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that is able to efficiently solve multi-dimensional compressible reactive flow problems. This work demonstrates the great potential of the method for performing direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. For these solutions, it is necessary to combine advanced numerical techniques with modern computational resources.

  7. Image denoising filter based on patch-based difference refinement

    NASA Astrophysics Data System (ADS)

    Park, Sang Wook; Kang, Moon Gi

    2012-06-01

    In the denoising literature, much research based on the nonlocal means (NLM) filter has been done, and there have been many variations and improvements regarding the weight function and parameter optimization. Here, an NLM filter with patch-based difference (PBD) refinement is presented. PBD refinement, which is the weighted average of the PBD values, is performed with respect to the difference images of all the locations in a refinement kernel. With refined and denoised PBD values, a pattern-adaptive smoothing threshold and noise-suppressed NLM filter weights are calculated. Owing to the refinement of the PBD values, the patterns are divided into flat regions and texture regions by comparing the sorted values in the PBD domain to a threshold value that includes the noise standard deviation. Then, two different smoothing thresholds are utilized for denoising each region, respectively, and the NLM filter is applied finally. Experimental results of the proposed scheme are shown in comparison with several state-of-the-art NLM-based denoising methods.

  8. A new method based on Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for recognition of urine cells from microscopic images independent of rotation and scaling.

    PubMed

    Avci, Derya; Leblebicioglu, Mehmet Kemal; Poyraz, Mustafa; Dogantekin, Esin

    2014-02-01

    Analysis and classification of urine cell numbers has become an important topic for the medical diagnosis of some diseases. Therefore, in this study, we suggest a new technique based on an Adaptive Discrete Wavelet Entropy Energy and Neural Network Classifier (ADWEENN) for the recognition of urine cells from microscopic images independent of rotation and scaling. Some digital image processing methods such as noise reduction, contrast enhancement, segmentation, and morphological processing are used in the feature extraction stage of ADWEENN in this study. Nowadays, image processing and pattern recognition topics have come into prominence. Image processing includes the operation and design of systems that recognize patterns in data sets. In past years, a major difficulty in the classification of microscopic images was the lack of adequate methods for characterization. Lately, it has been seen that multi-resolution image analysis methods such as Gabor filters and discrete wavelet decompositions are superior to other classic methods for the analysis of these microscopic images. In this study, the ADWEENN method is composed of four stages: a preprocessing stage, a feature extraction stage, a classification stage and a testing stage. The Discrete Wavelet Transform (DWT) and adaptive wavelet entropy and energy are used for adaptive feature extraction in the feature extraction stage to strengthen the premium features of the Artificial Neural Network (ANN) classifier in this study. The efficiency of the developed ADWEENN method was tested, showing that an average recognition success of 97.58% was obtained.
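
    A generic wavelet energy/entropy feature extractor of the kind that could feed such a classifier is sketched below; it is not the ADWEENN pipeline, and the wavelet, level and placeholder input are assumptions.

```python
# Sketch: wavelet energy and entropy features from a 1-D profile (e.g. a row
# scan of a segmented cell image), of the kind fed to a neural-network
# classifier. Generic feature extractor, not the ADWEENN method itself.
import numpy as np
import pywt

def wavelet_energy_entropy(signal, wavelet="db3", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    rel = energies / energies.sum()                      # relative subband energy
    entropy = -np.sum(rel * np.log2(rel + 1e-12))        # Shannon entropy (bits)
    return np.concatenate([rel, [entropy]])              # feature vector

profile = np.random.rand(512)                            # placeholder profile
features = wavelet_energy_entropy(profile)
print(features.shape)                                    # (level + 2,)
```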

  9. Wavelet-based pavement image compression and noise reduction

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2005-08-01

    For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.

  10. A wavelet-based Markov random field segmentation model in segmenting microarray experiments.

    PubMed

    Athanasiadis, Emmanouil; Cavouras, Dionisis; Kostopoulos, Spyros; Glotsos, Dimitris; Kalatzis, Ioannis; Nikiforidis, George

    2011-12-01

    In the present study, an adaptation of the Markov Random Field (MRF) segmentation model, by means of the stationary wavelet transform (SWT), applied to complementary DNA (cDNA) microarray images is proposed (WMRF). A 3-level decomposition scheme of the initial microarray image was performed, followed by a soft thresholding filtering technique. With the inverse process, a Denoised image was created. In addition, by using the Amplitudes of the filtered wavelet Horizontal and Vertical images at each level, three different Magnitudes were formed. These images were combined with the Denoised one to create the proposed SMRF segmentation model. For numerical evaluation of the segmentation accuracy, the segmentation matching factor (SMF), the Coefficient of Determination (r(2)), and the concordance correlation (p(c)) were calculated on the simulated images. In addition, the SMRF performance was contrasted to the Fuzzy C Means (FCM), Gaussian Mixture Models (GMM), Fuzzy GMM (FGMM), and the conventional MRF techniques. Indirect accuracy performances were also tested on the experimental images by means of the Mean Absolute Error (MAE) and the Coefficient of Variation (CV). In the latter case, SPOT and SCANALYZE software results were also tested. In the former case, SMRF attained the best SMF, r(2), and p(c) scores (92.66%, 0.923, and 0.88, respectively), whereas in the latter case it scored MAE and CV values of 497 and 0.88, respectively. The results support the performance superiority of the SMRF algorithm in segmenting cDNA images.

  11. The Assessment of Muscular Effort, Fatigue, and Physiological Adaptation Using EMG and Wavelet Analysis

    PubMed Central

    Graham, Ryan B.; Wachowiak, Mark P.; Gurd, Brendon J.

    2015-01-01

    Peroxisome proliferator-activated receptor gamma coactivator 1 alpha (PGC-1α) is a transcription factor co-activator that helps coordinate mitochondrial biogenesis within skeletal muscle following exercise. While evidence gleaned from submaximal exercise suggests that intracellular pathways associated with the activation of PGC-1α, as well as the expression of PGC-1α itself are activated to a greater extent following higher intensities of exercise, we have recently shown that this effect does not extend to supramaximal exercise, despite corresponding increases in muscle activation amplitude measured with electromyography (EMG). Spectral analyses of EMG data may provide a more in-depth assessment of changes in muscle electrophysiology occurring across different exercise intensities, and therefore the goal of the present study was to apply continuous wavelet transforms (CWTs) to our previous data to comprehensively evaluate: 1) differences in muscle electrophysiological properties at different exercise intensities (i.e. 73%, 100%, and 133% of peak aerobic power), and 2) muscular effort and fatigue across a single interval of exercise at each intensity, in an attempt to shed mechanistic insight into our previous observations that the increase in PGC-1α is dissociated from exercise intensity following supramaximal exercise. In general, the CWTs revealed that localized muscle fatigue was only greater than the 73% condition in the 133% exercise intensity condition, which directly matched the work rate results. Specifically, there were greater drop-offs in frequency, larger changes in burst power, as well as greater changes in burst area under this intensity, which were already observable during the first interval. As a whole, the results from the present study suggest that supramaximal exercise causes extreme localized muscular fatigue, and it is possible that the blunted PGC-1α effects observed in our previous study are the result of fatigue-associated increases in

  12. Relevant modes selection method based on Spearman correlation coefficient for laser signal denoising using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Duan, Yabo; Song, Chengtian

    2016-10-01

    Empirical mode decomposition (EMD) is a recently proposed nonlinear and nonstationary laser signal denoising method. A noisy signal is broken down using EMD into oscillatory components that are called intrinsic mode functions (IMFs). Thresholding-based denoising and correlation-based partial reconstruction of IMFs are the two main research directions for EMD-based denoising. Similar to other decomposition-based denoising approaches, EMD-based denoising methods require a reliable threshold to determine which IMFs are noise components and which IMFs are noise-free components. In this work, we propose a new approach in which each IMF is first denoised using EMD interval thresholding (EMD-IT), and then a robust thresholding process based on Spearman correlation coefficient is used for relevant modes selection. The proposed method tackles the problem using a thresholding-based denoising approach coupled with partial reconstruction of the relevant IMFs. Other traditional denoising methods, including correlation-based EMD partial reconstruction (EMD-Correlation), discrete Fourier transform and wavelet-based methods, are investigated to provide a comparison with the proposed technique. Simulation and test results demonstrate the superior performance of the proposed method when compared with the other methods.
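
    The relevant-mode selection step can be sketched as below, with IMFs assumed to be precomputed by any EMD routine; the correlation cut-off is illustrative, and the per-IMF interval-thresholding stage (EMD-IT) is omitted.

```python
# Sketch: relevant-mode selection by Spearman correlation between each IMF and
# the noisy signal, followed by partial reconstruction. IMFs are assumed to be
# precomputed; the cut-off value is illustrative, and the per-IMF interval
# thresholding step of the paper is omitted.
import numpy as np
from scipy.stats import spearmanr

def select_relevant_imfs(signal, imfs, cutoff=0.2):
    rhos = np.array([spearmanr(signal, imf)[0] for imf in imfs])
    keep = np.abs(rhos) >= cutoff
    return np.sum(np.asarray(imfs)[keep], axis=0), rhos

noisy = np.random.randn(2048)           # placeholder noisy laser signal
imfs = np.random.randn(7, 2048)         # placeholder IMFs from an EMD routine
reconstructed, rhos = select_relevant_imfs(noisy, imfs)
print("Spearman rho per IMF:", np.round(rhos, 3))
```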

  13. Voxel-Wise Functional Connectomics Using Arterial Spin Labeling Functional Magnetic Resonance Imaging: The Role of Denoising.

    PubMed

    Liang, Xiaoyun; Connelly, Alan; Calamante, Fernando

    2015-11-01

    The objective of this study was to investigate voxel-wise functional connectomics using arterial spin labeling (ASL) functional magnetic resonance imaging (fMRI). Since the ASL signal has an intrinsically low signal-to-noise ratio (SNR), the role of denoising is evaluated; in particular, a novel denoising method, the dual-tree complex wavelet transform (DT-CWT) combined with the nonlocal means (NLM) algorithm, is implemented and evaluated. Simulations were conducted to evaluate the performance of the proposed method in denoising images and in detecting functional networks from noisy data (including the accuracy and sensitivity of detection). In addition, denoising was applied to in vivo ASL datasets, followed by network analysis using graph theoretical approaches. An efficiency-cost measure was used to evaluate the performance of denoising in detecting functional networks from in vivo ASL fMRI data. Simulations showed that denoising is effective in detecting voxel-wise functional networks from low SNR data and/or from data with a small total number of time points. The capability of denoised voxel-wise functional connectivity analysis was also demonstrated with in vivo data. We concluded that denoising is important for voxel-wise functional connectivity using ASL fMRI and that the proposed DT-CWT-NLM method should be a useful ASL preprocessing step.

  14. Modeling and control of nonlinear systems using novel fuzzy wavelet networks: The output adaptive control approach

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyyed Hossein; Noroozi, Navid; Safavi, Ali Akbar; Ebadat, Afrooz

    2011-09-01

    This paper proposes an observer based self-structuring robust adaptive fuzzy wave-net (FWN) controller for a class of nonlinear uncertain multi-input multi-output systems. The control signal is comprised of two parts. The first part arises from an adaptive fuzzy wave-net based controller that approximates the system structural uncertainties. The second part comes from a robust H∞ based controller that is used to attenuate the effect of function approximation error and disturbance. Moreover, a new self structuring algorithm is proposed to determine the location of basis functions. Simulation results are provided for a two DOF robot to show the effectiveness of the proposed method.

  15. The use of ensemble empirical mode decomposition as a novel denoising technique

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2016-04-01

    Denoising is of high importance in geophysical data processing. This paper suggests a new denoising technique based on the Ensemble Empirical Mode Decomposition (EEMD). The technique has been compared with discrete wavelet transform (DWT) thresholding. First, both methods were implemented on synthetic signals with diverse waveforms ('blocks', 'heavy sine', 'Doppler', and 'mishmash'). The EEMD denoising method proved to be the most efficient for the 'blocks', 'heavy sine' and 'mishmash' signals for all the considered signal-to-noise ratio (SNR) values. However, the results obtained using DWT thresholding are the most reliable for the 'Doppler' signal, and the difference between the mean square error (MSE) values calculated for the studied methods is slight and decreases as the SNR value gets smaller. Second, the denoising methods were applied to real seismic traces recorded in the Algerian Sahara. It is shown that the proposed technique outperforms DWT thresholding. In conclusion, the EEMD technique can provide a powerful tool for denoising seismic signals. Keywords: Ensemble Empirical Mode Decomposition (EEMD), Discrete Wavelet Transform (DWT), seismic signal.

  16. Multicomponent MR Image Denoising

    PubMed Central

    Manjón, José V.; Thacker, Neil A.; Lull, Juan J.; Garcia-Martí, Gracian; Martí-Bonmatí, Luís; Robles, Montserrat

    2009-01-01

    Magnetic Resonance images are normally corrupted by random noise from the measurement process, complicating the automatic feature extraction and analysis of clinical data. For this reason, denoising methods have traditionally been applied to improve MR image quality. Many of these methods use the information of a single image without taking into consideration the intrinsic multicomponent nature of MR images. In this paper we propose a new filter to reduce random noise in multicomponent MR images by spatially averaging similar pixels, using information from all available image components to perform the denoising process. The proposed algorithm also uses a local Principal Component Analysis decomposition as a postprocessing step to remove more noise by using information not only in the spatial domain but also in the intercomponent domain, resulting in higher noise reduction without significantly affecting the original image resolution. The proposed method has been compared with similar state-of-the-art methods over synthetic and real clinical multicomponent MR images, showing an improved performance in all cases analyzed. PMID:19888431

  17. Empirical Mode Decomposition Technique with Conditional Mutual Information for Denoising Operational Sensor Data

    SciTech Connect

    Omitaomu, Olufemi A; Protopopescu, Vladimir A; Ganguly, Auroop R

    2011-01-01

    A new approach is developed for denoising signals using the Empirical Mode Decomposition (EMD) technique and the Information-theoretic method. The EMD technique is applied to decompose a noisy sensor signal into the so-called intrinsic mode functions (IMFs). These functions are of the same length and in the same time domain as the original signal. Therefore, the EMD technique preserves varying frequency in time. Assuming the given signal is corrupted by high-frequency Gaussian noise implies that most of the noise should be captured by the first few modes. Therefore, our proposition is to separate the modes into high-frequency and low-frequency groups. We applied an information-theoretic method, namely mutual information, to determine the cut-off for separating the modes. A denoising procedure is applied only to the high-frequency group using a shrinkage approach. Upon denoising, this group is combined with the original low-frequency group to obtain the overall denoised signal. We illustrate our approach with simulated and real-world data sets. The results are compared to two popular denoising techniques in the literature, namely discrete Fourier transform (DFT) and discrete wavelet transform (DWT). We found that our approach performs better than DWT and DFT in most cases, and comparatively to DWT in some cases in terms of: (i) mean square error, (ii) recomputed signal-to-noise ratio, and (iii) visual quality of the denoised signals.

  18. Patch-based and multiresolution optimum bilateral filters for denoising images corrupted by Gaussian noise

    NASA Astrophysics Data System (ADS)

    Kishan, Harini; Seelamantula, Chandra Sekhar

    2015-09-01

    We propose optimal bilateral filtering techniques for Gaussian noise suppression in images. To achieve maximum denoising performance via optimal filter parameter selection, we adopt Stein's unbiased risk estimate (SURE)-an unbiased estimate of the mean-squared error (MSE). Unlike MSE, SURE is independent of the ground truth and can be used in practical scenarios where the ground truth is unavailable. In our recent work, we derived SURE expressions in the context of the bilateral filter and proposed SURE-optimal bilateral filter (SOBF). We selected the optimal parameters of SOBF using the SURE criterion. To further improve the denoising performance of SOBF, we propose variants of SOBF, namely, SURE-optimal multiresolution bilateral filter (SMBF), which involves optimal bilateral filtering in a wavelet framework, and SURE-optimal patch-based bilateral filter (SPBF), where the bilateral filter parameters are optimized on small image patches. Using SURE guarantees automated parameter selection. The multiresolution and localized denoising in SMBF and SPBF, respectively, yield superior denoising performance when compared with the globally optimal SOBF. Experimental validations and comparisons show that the proposed denoisers perform on par with some state-of-the-art denoising techniques.

  19. Identification of a self-paced hitting task in freely moving rats based on adaptive spike detection from multi-unit M1 cortical signals

    PubMed Central

    Hammad, Sofyan H. H.; Farina, Dario; Kamavuako, Ernest N.; Jensen, Winnie

    2013-01-01

    Invasive brain–computer interfaces (BCIs) may prove to be a useful rehabilitation tool for severely disabled patients. Although some systems have shown to work well in restricted laboratory settings, their usefulness must be tested in less controlled environments. Our objective was to investigate if a specific motor task could reliably be detected from multi-unit intra-cortical signals from freely moving animals. Four rats were trained to hit a retractable paddle (defined as a “hit”). Intra-cortical signals were obtained from electrodes placed in the primary motor cortex. First, the signal-to-noise ratio was increased by wavelet denoising. Action potentials were then detected using an adaptive threshold, counted in three consecutive time intervals and were used as features to classify either a “hit” or a “no-hit” (defined as an interval between two “hits”). We found that a “hit” could be detected with an accuracy of 75 ± 6% when wavelet denoising was applied whereas the accuracy dropped to 62 ± 5% without prior denoising. We compared our approach with the common daily practice in BCI that consists of using a fixed, manually selected threshold for spike detection without denoising. The results showed the feasibility of detecting a motor task in a less restricted environment than commonly applied within invasive BCI research. PMID:24298254
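
    A rough sketch of the processing chain (wavelet denoising, adaptive amplitude threshold, spike counts in three consecutive windows) is given below; the threshold rule, window length, sampling rate and wavelet are assumptions, not the recording parameters of the study.

```python
# Sketch: wavelet denoising of a multi-unit trace, adaptive threshold spike
# detection, and spike counts over three consecutive windows as features for a
# hit / no-hit classifier. All parameters below are illustrative assumptions.
import numpy as np
import pywt

def denoise(trace, wavelet="db4", level=4):
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(trace)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(trace)]

def spike_count_features(trace, fs=24414, window_s=0.1, k=4.0):
    x = denoise(trace)
    thr = k * np.median(np.abs(x)) / 0.6745          # adaptive amplitude threshold
    # negative-going threshold crossings taken as spike events
    crossings = np.flatnonzero((x[1:] < -thr) & (x[:-1] >= -thr))
    win = int(window_s * fs)
    edges = [0, win, 2 * win, 3 * win]
    return [int(np.sum((crossings >= a) & (crossings < b)))
            for a, b in zip(edges, edges[1:])]

trace = np.random.randn(3 * 24414)                   # placeholder 3 s recording
print(spike_count_features(trace))
```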

  20. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  1. Multiscale viscoacoustic waveform inversion with the second generation wavelet transform and adaptive time-space domain finite-difference method

    NASA Astrophysics Data System (ADS)

    Ren, Zhiming; Liu, Yang; Zhang, Qunshan

    2014-05-01

    Full waveform inversion (FWI) has the potential to provide preferable subsurface model parameters. The main barrier to its application to real seismic data is the heavy computational cost. Numerical modelling methods are involved in both forward modelling and backpropagation of wavefield residuals, and they account for most of the computational time in FWI. We develop a time-space domain finite-difference (FD) method and an adaptive variable-length spatial operator scheme for numerical simulation of the viscoacoustic equation and extend them to viscoacoustic FWI. Compared with conventional FD methods, different operator lengths are adopted for different velocities and quality factors, which can reduce the amount of computation without reducing accuracy. Inversion algorithms also play a significant role in FWI. Conventional single-scale methods are likely to converge to local minima, especially when the initial model is far from the real model. To tackle the problem, we introduce the second generation wavelet transform to implement multiscale FWI. Compared to other multiscale methods, our method has the advantages of ease of implementation and better time-frequency local analysis ability. The L2 norm is widely used in FWI but gives invalid model estimates when the data are contaminated with strong non-uniform noise. We apply the L1-norm and Huber-norm criteria in the time-domain FWI to improve its robustness to noise. Our strategies have been successfully applied in synthetic experiments to both onshore and offshore reflection seismic data. The results of the viscoacoustic Marmousi example indicate that our new FWI scheme consumes fewer computer resources. In addition, the viscoacoustic Overthrust example shows its better convergence and more reasonable velocity and quality factor structures. All these results demonstrate that our method can improve the inversion accuracy and computational efficiency of FWI.

  2. The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images

    SciTech Connect

    Pita-Machado, Reinado; Perez-Diaz, Marlen Lorenzo-Ginori, Juan V. Bravo-Pino, Rolando

    2014-11-07

    Wavelet transform based de-noising, such as wavelet shrinkage, gives good results in CT and affects the spatial resolution very little. Some applications are reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has an unknown distribution. Methods that work in the sinogram space do not have this problem, because at that point they always work on a known noise distribution. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise that are not eliminated during the reconstruction procedure. This can lead to false-positive evaluations. The purpose of the present work is to compare different wavelet shrinkage de-noising filters applied in the sinogram space to reduce noise, particularly in images of the posterior fossa within CT scans. This work describes an experimental search for the best wavelets to reduce Poisson noise in Computed Tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.
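    The sketch below shows one common way to apply wavelet shrinkage to Poisson-distributed sinogram data: stabilize the variance with an Anscombe transform, soft-threshold the 2D wavelet coefficients, and invert the transform. The Anscombe step, the "sym8" wavelet, and the universal threshold are assumptions for illustration; the paper compares several shrinkage filters rather than prescribing this exact pipeline.

        import numpy as np
        import pywt

        def anscombe(x):
            # Variance-stabilizing transform: Poisson noise becomes roughly unit-variance Gaussian.
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            return (y / 2.0) ** 2 - 3.0 / 8.0

        def denoise_sinogram(sinogram, wavelet="sym8", level=3):
            stabilized = anscombe(sinogram.astype(float))
            coeffs = pywt.wavedec2(stabilized, wavelet, level=level)
            thr = np.sqrt(2.0 * np.log(stabilized.size))  # universal threshold, sigma ~ 1 after Anscombe
            shrunk = [coeffs[0]] + [
                tuple(pywt.threshold(d, thr, mode="soft") for d in detail) for detail in coeffs[1:]
            ]
            restored = pywt.waverec2(shrunk, wavelet)[: sinogram.shape[0], : sinogram.shape[1]]
            return inverse_anscombe(restored)

        noisy = np.random.poisson(50.0, size=(180, 256)).astype(float)
        clean = denoise_sinogram(noisy)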

  3. Wavelets and electromagnetics

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.

    1992-01-01

    Wavelets are an exciting new topic in applied mathematics and signal processing. This paper will provide a brief review of wavelets, which are also known as families of functions, with an emphasis on interpretation rather than rigor. We will derive an indirect use of wavelets for the solution of integral equations, based on techniques adapted from image processing. Examples for resistive strips will be given, illustrating the effect of these techniques as well as their promise in dramatically reducing the requirements for solving an integral equation for large bodies. We will also present a direct implementation of wavelets to solve an integral equation. Both methods suggest future research topics and may hold promise for a variety of uses in computational electromagnetics.

  4. Comparison of de-noising techniques for FIRST images

    SciTech Connect

    Fodor, I K; Kamath, C

    2001-01-22

    Data obtained through scientific observations are often contaminated by noise and artifacts from various sources. As a result, a first step in mining these data is to isolate the signal of interest by minimizing the effects of the contaminations. Once the data have been cleaned or de-noised, data mining can proceed as usual. In this paper, we describe our work in denoising astronomical images from the Faint Images of the Radio Sky at Twenty-Centimeters (FIRST) survey. We are mining this survey to detect radio-emitting galaxies with a bent-double morphology. This task is made difficult by the noise in the images caused by the processing of the sensor data. We compare three different approaches to de-noising: thresholding of wavelet coefficients advocated in the statistical community, traditional filtering methods used in the image processing community, and a simple thresholding scheme proposed by FIRST astronomers. While each approach has its merits and pitfalls, we found that for our purpose the simple thresholding scheme worked relatively well for the FIRST dataset.

  5. Patch-ordering-based wavelet frame and its use in inverse problems.

    PubMed

    Ram, Idan; Cohen, Israel; Elad, Michael

    2014-07-01

    In our previous work [1] we introduced a redundant tree-based wavelet transform (RTBWT), originally designed to represent functions defined on high dimensional data clouds and graphs. We further showed that RTBWT can be used as a highly effective image-adaptive redundant transform that operates on an image using orderings of its overlapped patches. The resulting transform is robust to corruptions in the image and is thus able to efficiently represent the unknown target image even when it is computed from its corrupted version. In this paper, we utilize this redundant transform as a powerful sparsity-promoting regularizer in inverse problems in image processing. We show that the image representation obtained with this transform is a frame expansion, and derive the analysis and synthesis operators associated with it. We explore the use of these frame operators in image denoising and deblurring, and demonstrate state-of-the-art results in both cases.

  6. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  7. Denoising human cardiac diffusion tensor magnetic resonance images using sparse representation combined with segmentation.

    PubMed

    Bao, L J; Zhu, Y M; Liu, W Y; Croisille, P; Pu, Z B; Robini, M; Magnin, I E

    2009-03-21

    Cardiac diffusion tensor magnetic resonance imaging (DT-MRI) is noise sensitive, and the noise can induce numerous systematic errors in subsequent parameter calculations. This paper proposes a sparse representation-based method for denoising cardiac DT-MRI images. The method first generates a dictionary of multiple bases according to the features of the observed image. A segmentation algorithm based on a nonstationary degree detector is then introduced to make the selection of atoms in the dictionary adapted to the image's features. The denoising is achieved by gradually approximating the underlying image using the atoms selected from the generated dictionary. The results on both simulated images and real cardiac DT-MRI images from ex vivo human hearts show that the proposed denoising method performs better than conventional denoising techniques by preserving image contrast and fine structures. PMID:19218737

  8. Simultaneous Fusion and Denoising of Panchromatic and Multispectral Satellite Images

    NASA Astrophysics Data System (ADS)

    Ragheb, Amr M.; Osman, Heba; Abbas, Alaa M.; Elkaffas, Saleh M.; El-Tobely, Tarek A.; Khamis, S.; Elhalawany, Mohamed E.; Nasr, Mohamed E.; Dessouky, Moawad I.; Al-Nuaimy, Waleed; Abd El-Samie, Fathi E.

    2012-12-01

    To identify objects in satellite images, multispectral (MS) images with high spectral resolution and low spatial resolution, and panchromatic (Pan) images with high spatial resolution and low spectral resolution need to be fused. Several fusion methods such as the intensity-hue-saturation (IHS), the discrete wavelet transform, the discrete wavelet frame transform (DWFT), and the principal component analysis have been proposed in recent years to obtain images with both high spectral and spatial resolutions. In this paper, a hybrid fusion method for satellite images comprising both the IHS transform and the DWFT is proposed. This method tries to achieve the highest possible spectral and spatial resolutions with as little distortion in the fused image as possible. A comparison study between the proposed hybrid method and the traditional methods is presented in this paper. Different MS and Pan images from Landsat-5, Spot, Landsat-7, and IKONOS satellites are used in this comparison. The effect of noise on the proposed hybrid fusion method as well as the traditional fusion methods is studied. Experimental results show the superiority of the proposed hybrid method to the traditional methods. The results also show that a wavelet denoising step is required when fusion is performed at low signal-to-noise ratios.

  9. A de-noising algorithm to improve SNR of segmented gamma scanner for spectrum analysis

    NASA Astrophysics Data System (ADS)

    Li, Huailiang; Tuo, Xianguo; Shi, Rui; Zhang, Jinzhao; Henderson, Mark Julian; Courtois, Jérémie; Yan, Minhao

    2016-05-01

    An improved-threshold, shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and reduce the signal distortion resulting from pseudo-Gibbs artificial fluctuations. This algorithm was applied to a segmented gamma scanning system with large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated with the improved, shift-invariant and traditional wavelet transform algorithms. The improved wavelet transform method generated significantly enhanced performance in terms of the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in the segmented gamma scanning assays. We also found from the spectrum analysis that the gamma energy spectrum can be viewed as a superposition of a low-frequency signal and high-frequency noise. Moreover, a smoothed spectrum can be appropriate for straightforward automated quantitative analysis.
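    A minimal sketch of shift-invariant wavelet denoising by cycle spinning (averaging soft-thresholded reconstructions over circular shifts), which is one standard way to suppress pseudo-Gibbs fluctuations; the wavelet, the number of shifts, and the threshold rule are illustrative assumptions rather than the authors' improved algorithm.

        import numpy as np
        import pywt

        def soft_denoise(x, wavelet="sym6", level=5):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(x)))
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(x)]

        def cycle_spin_denoise(spectrum, n_shifts=16):
            # Average the denoised result over circular shifts to approximate shift invariance.
            out = np.zeros(len(spectrum))
            for s in range(n_shifts):
                out += np.roll(soft_denoise(np.roll(spectrum, s)), -s)
            return out / n_shifts

        spectrum = np.random.poisson(20.0, size=4096).astype(float)
        smoothed = cycle_spin_denoise(spectrum)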

  10. Efficient denoising algorithms for large experimental datasets and their applications in Fourier transform ion cyclotron resonance mass spectrometry

    PubMed Central

    Chiron, Lionel; van Agthoven, Maria A.; Kieffer, Bruno; Rolando, Christian; Delsuc, Marc-André

    2014-01-01

    Modern scientific research produces datasets of increasing size and complexity that require dedicated numerical methods to be processed. In many cases, the analysis of spectroscopic data involves the denoising of raw data before any further processing. Current efficient denoising algorithms require the singular value decomposition of a matrix with a size that scales up as the square of the data length, preventing their use on very large datasets. Taking advantage of recent progress on random projection and probabilistic algorithms, we developed a simple and efficient method for the denoising of very large datasets. Based on the QR decomposition of a matrix randomly sampled from the data, this approach allows a gain of nearly three orders of magnitude in processing time compared with classical singular value decomposition denoising. This procedure, called urQRd (uncoiled random QR denoising), strongly reduces the computer memory footprint and allows the denoising algorithm to be applied to virtually unlimited data size. The efficiency of these numerical tools is demonstrated on experimental data from high-resolution broadband Fourier transform ion cyclotron resonance mass spectrometry, which has applications in proteomics and metabolomics. We show that robust denoising is achieved in 2D spectra whose interpretation is severely impaired by scintillation noise. These denoising procedures can be adapted to many other data analysis domains where the size and/or the processing time are crucial. PMID:24390542
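    The core numerical idea, replacing a full singular value decomposition with a QR factorization of a randomly projected matrix, can be sketched as a generic randomized low-rank approximation (in the spirit of the randomized range finder of Halko et al.). This is a simplified stand-in for urQRd, which additionally operates on a Hankel matrix built from the signal; the rank and oversampling values below are assumptions.

        import numpy as np

        def randomized_lowrank_denoise(X, rank, oversample=10, seed=0):
            # Randomized range finder: multiply X by a random test matrix, orthonormalize with QR,
            # then keep only the component of X lying in that estimated column space.
            rng = np.random.default_rng(seed)
            omega = rng.standard_normal((X.shape[1], rank + oversample))
            Q, _ = np.linalg.qr(X @ omega)
            return Q @ (Q.T @ X)

        # Toy example: a rank-5 matrix buried in noise.
        rng = np.random.default_rng(1)
        clean = rng.standard_normal((2000, 5)) @ rng.standard_normal((5, 400))
        noisy = clean + 0.5 * rng.standard_normal(clean.shape)
        denoised = randomized_lowrank_denoise(noisy, rank=5)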

  11. Removal of correlated noise by modeling the signal of interest in the wavelet domain.

    PubMed

    Goossens, Bart; Pizurica, Aleksandra; Philips, Wilfried

    2009-06-01

    Images captured with digital imaging devices often contain noise. In the literature, many algorithms exist for the removal of white uncorrelated noise, but they usually fail when applied to images with correlated noise. In this paper, we design a new denoising method for the removal of correlated noise by modeling the significance of the noise-free wavelet coefficients in a local window, using a new significance measure that defines the "signal of interest" and that is applicable to correlated noise. We combine the intrascale model with a hidden Markov tree model to capture the interscale dependencies between the wavelet coefficients. We propose a denoising method based on the combined model and a less redundant wavelet transform. We present results showing that the new method performs as well as the state-of-the-art wavelet-based methods, while having a lower computational complexity.

  12. Wavelet analysis on vibration modal frequency measurement at a low level of strain of the turbine blade using FBG sensors

    NASA Astrophysics Data System (ADS)

    Huang, Xue-feng; Liu, Yan; Luo, Dan; Wang, Guan-qing; Ding, Ning; Xu, Jiang-rong; Huang, Xue-jing

    2010-01-01

    Vibration modal frequency measurement of a turbine blade was investigated using fiber Bragg grating (FBG) sensors because of their suitability for measuring low strain levels at high frequency. However, the signal-to-noise ratio was so low that it was very difficult to identify the dominant modal frequencies from the raw acquired signal. An attempt was made to solve this problem. First, a bi-linear transform elliptic filter with pass-band and stop-band was proposed to remove electromagnetic interference noise. Second, a discrete stationary wavelet transform with a soft threshold was utilized to de-noise the high-frequency part. Third, the wavelet packet transform was exploited to decompose the time signal, to obtain the power spectrum and to identify the dominant modal frequencies. Experimental and analytic results demonstrated that four stages of dominant modal frequencies of the turbine blade without any constraint condition (free vibration) and three stages of dominant modal frequencies with a constraint condition (A-type vibration) were obtained, respectively. To verify the validity of the analytic results, a professional computer random signal and vibration analysis system (CRAS) was used to measure the modal frequencies. The results illustrated that the modal frequencies obtained by the CRAS platform were in close agreement with those based on FBG sensors. Evidently, wavelet analysis can be successfully employed to extract vibration modal frequencies from the raw signal of FBG sensors.

  13. A novel de-noising method for B ultrasound images

    NASA Astrophysics Data System (ADS)

    Tian, Da-Yong; Mo, Jia-qing; Yu, Yin-Feng; Lv, Xiao-Yi; Yu, Xiao; Jia, Zhen-Hong

    2015-12-01

    B-mode ultrasound is a kind of ultrasonic imaging that has become an indispensable diagnostic method in clinical medicine. However, the presence of speckle noise in ultrasound images greatly reduces image quality and interferes with the accuracy of diagnosis. Therefore, constructing a method that can eliminate speckle noise effectively while preserving image details is the research target of current ultrasonic image de-noising. This paper is intended to remove the inherent speckle noise of B ultrasound images. The novel algorithm proposed is based on both wavelet transformation and data fusion of B ultrasound images, and achieves a smaller mean squared error (MSE) and greater signal-to-noise ratio (SNR) than other algorithms. The method can effectively remove speckle noise from B ultrasound images while preserving details and edge information well, producing better visual effects.

  14. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    SciTech Connect

    Merlin, Thibaut; Visvikis, Dimitris; Fernandez, Philippe; Lamare, Frederic

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy–Richardson deconvolution algorithm applied to the current estimate of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as the relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
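    A bare-bones version of the Lucy–Richardson update used in the deconvolution step is sketched below with an assumed Gaussian point spread function; this is the classic image-space iteration, not the authors' integration into list-mode OSEM.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=20, eps=1e-12):
            # Multiplicative update: estimate *= correlate(image / convolve(estimate, psf), psf)
            estimate = np.full_like(image, image.mean(), dtype=float)
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                estimate *= fftconvolve(image / (blurred + eps), psf_mirror, mode="same")
            return estimate

        # Assumed Gaussian PSF applied to a toy "phantom".
        y, x = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * 2.0 ** 2))
        psf /= psf.sum()
        phantom = np.zeros((64, 64))
        phantom[24:40, 24:40] = 10.0
        blurred = fftconvolve(phantom, psf, mode="same") + 0.01 * np.random.rand(64, 64)
        recovered = richardson_lucy(blurred, psf)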

  15. Sonar target enhancement by shrinkage of incoherent wavelet coefficients.

    PubMed

    Hunter, Alan J; van Vossen, Robbert

    2014-01-01

    Background reverberation can obscure useful features of the target echo response in broadband low-frequency sonar images, adversely affecting detection and classification performance. This paper describes a resolution and phase-preserving means of separating the target response from the background reverberation noise using a coherence-based wavelet shrinkage method proposed recently for de-noising magnetic resonance images. The algorithm weights the image wavelet coefficients in proportion to their coherence between different looks under the assumption that the target response is more coherent than the background. The algorithm is demonstrated successfully on experimental synthetic aperture sonar data from a broadband low-frequency sonar developed for buried object detection.

  16. Orthogonal wavelet moments and their multifractal invariants

    NASA Astrophysics Data System (ADS)

    Uchaev, Dm. V.; Uchaev, D. V.; Malinnikov, V. A.

    2015-02-01

    This paper introduces a new family of moments, namely orthogonal wavelet moments (OWMs), which are an orthogonal realization of wavelet moments (WMs). In contrast to WMs with a nonorthogonal kernel function, these moments can be used for multiresolution image representation and image reconstruction. The paper also introduces multifractal invariants (MIs) of OWMs which can be used instead of OWMs. Reconstruction tests performed with noise-free and noisy images demonstrate that MIs of OWMs can also be used for image smoothing, sharpening and denoising. It is established that the reconstruction quality for MIs of OWMs can be better than that of the corresponding orthogonal moments (OMs) and reduces to the reconstruction quality of the OMs if the zero scale level is used.

  17. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    SciTech Connect

    Sheng, Ke Qi, Sharon X.; Gou, Shuiping; Wu, Jiaolong

    2014-10-15

    Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomies, but their usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from the low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers as well as local denoising methods has not significantly improved the soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and the saliency map, postprocessed MVCT images show remarkable improvements in imaging contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion

  18. HARDI denoising using nonlocal means on S2

    NASA Astrophysics Data System (ADS)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in the detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the problem of denoising HARDI data of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant to both spatial rotations and the particular sampling scheme in use. We also provide a detailed description of the proposed filtering procedure, its efficient implementation, and experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.

  19. Noise reduction of FBG sensor signal by using a wavelet transform

    NASA Astrophysics Data System (ADS)

    Cho, Yo-Han; Song, Minho

    2011-05-01

    We constructed an FBG (fiber Bragg grating) sensor system based on a fiber-optic Sagnac interferometer. A fiber-optic laser source is used as a strong light source to attain a high signal-to-noise ratio. However, the unstable output power and coherence noise of the fiber laser made it hard to separate the FBG signals from the interference signals of the fiber coils. To reduce the noise and extract the FBG sensor signals, we used Gaussian curve fitting and a wavelet transform. The wavelet transform is a useful tool for analyzing and denoising output signals. The feasibility of the wavelet transform denoising process is demonstrated with preliminary experimental results, which showed much better accuracy than the case with only the Gaussian curve-fitting algorithm.

  20. A wavelet method for modeling and despiking motion artifacts from resting-state fMRI time series

    PubMed Central

    Patel, Ameera X.; Kundu, Prantik; Rubinov, Mikail; Jones, P. Simon; Vértes, Petra E.; Ersche, Karen D.; Suckling, John; Bullmore, Edward T.

    2014-01-01

    The impact of in-scanner head movement on functional magnetic resonance imaging (fMRI) signals has long been established as undesirable. These effects have been traditionally corrected by methods such as linear regression of head movement parameters. However, a number of recent independent studies have demonstrated that these techniques are insufficient to remove motion confounds, and that even small movements can spuriously bias estimates of functional connectivity. Here we propose a new data-driven, spatially-adaptive, wavelet-based method for identifying, modeling, and removing non-stationary events in fMRI time series, caused by head movement, without the need for data scrubbing. This method involves the addition of just one extra step, the Wavelet Despike, in standard pre-processing pipelines. With this method, we demonstrate robust removal of a range of different motion artifacts and motion-related biases including distance-dependent connectivity artifacts, at a group and single-subject level, using a range of previously published and new diagnostic measures. The Wavelet Despike is able to accommodate the substantial spatial and temporal heterogeneity of motion artifacts and can consequently remove a range of high and low frequency artifacts from fMRI time series, that may be linearly or non-linearly related to physical movements. Our methods are demonstrated by the analysis of three cohorts of resting-state fMRI data, including two high-motion datasets: a previously published dataset on children (N = 22) and a new dataset on adults with stimulant drug dependence (N = 40). We conclude that there is a real risk of motion-related bias in connectivity analysis of fMRI data, but that this risk is generally manageable, by effective time series denoising strategies designed to attenuate synchronized signal transients induced by abrupt head movements. The Wavelet Despiking software described in this article is freely available for download at www

  1. Improved extreme value weighted sparse representational image denoising with random perturbation

    NASA Astrophysics Data System (ADS)

    Xuan, Shibin; Han, Yulan

    2015-11-01

    Research into the removal of mixed noise is a hot topic in the field of image denoising. Currently, weighted encoding with sparse nonlocal regularization represents an excellent mixed noise removal method. To make the fitting function closer to the requirements of a robust estimation technique, an extreme value technique is used that allows the fitting function to satisfy three conditions of robust estimation on a larger interval. Moreover, a random disturbance sequence is integrated into the denoising model to prevent the iterative solving process from falling into local optima. A radon transform-based noise detection algorithm and an adaptive median filter are used to obtain a high-quality initial solution for the iterative procedure of the image denoising model. Experimental results indicate that this improved method efficiently enhances the weighted encoding with a sparse nonlocal regularization model. The proposed method can effectively remove mixed noise from corrupted images, while better preserving the edges and details of the processed image.

  2. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
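    A small illustration of the spatial-frequency relation above: the frequency associated with DWT level L at display resolution r is r 2^(-L). The display resolution of 32 pixels/degree below is an assumed value.

        # Spatial frequency of DWT level L at display resolution r (pixels/degree): f = r * 2**(-L)
        r = 32.0  # assumed display visual resolution, pixels/degree
        for level in range(1, 6):
            print(f"level {level}: {r * 2.0 ** (-level):.2f} cycles/degree")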

  3. Inversion of monthly GRACE potentials for climate-based mass transports by a novel technique with locally adapted and increased resolution

    NASA Astrophysics Data System (ADS)

    Michel, Volker; Fischer, Doreen

    2013-04-01

    We show the applicability of a novel method called the Regularized Functional Matching Pursuit (RFMP) to the local analysis of mass transports. We consider monthly GRACE potentials for South America during one year and subtract a temporal mean. The resulting difference fields are denoised with Freeden's spherical wavelets. Finally, the obtained monthly potential anomalies are inverted for volumetric mass density anomalies with the RFMP. The calculated results clearly show seasonal variations in the mass density distribution in the Amazon area. For another application, we consider the detection of droughts and a flood in the summers of 2005 to 2010. The novel technique combines the advantages of global basis functions (spherical harmonics) and local trial functions (splines or wavelets) and yields a resolution which is locally adapted to the detail structure of the solution. We believe that this can contribute to an increased spatial resolution of the result.

  4. ECG Signal Analysis and Arrhythmia Detection using Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Kaur, Inderbir; Rajni, Rajni; Marwaha, Anupma

    2016-06-01

    Electrocardiogram (ECG) is used to record the electrical activity of the heart. The ECG signal is non-stationary in nature, which makes the analysis and interpretation of the signal very difficult. Hence, accurate analysis of the ECG signal with a powerful tool like the discrete wavelet transform (DWT) becomes imperative. In this paper, the ECG signal is denoised to remove the artifacts and analyzed using the wavelet transform to detect the QRS complex and arrhythmia. This work is implemented in MATLAB software for the MIT/BIH Arrhythmia database and yields a sensitivity of 99.85 %, positive predictivity of 99.92 % and detection error rate of 0.221 % with the wavelet transform. It is also inferred that the DWT outperforms the principal component analysis technique in the detection of the ECG signal.
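    A rough Python analogue of this processing chain (the paper's implementation is in MATLAB) is sketched below: wavelet soft-thresholding for denoising followed by a simple R-peak search. The wavelet, the decision to zero the approximation band to suppress baseline wander, and the peak-search parameters are assumptions for illustration.

        import numpy as np
        import pywt
        from scipy.signal import find_peaks

        def denoise_ecg(ecg, wavelet="db6", level=6):
            coeffs = pywt.wavedec(ecg, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2.0 * np.log(len(ecg)))
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            coeffs[0][:] = 0.0  # drop the approximation band to remove baseline wander (assumed choice)
            return pywt.waverec(coeffs, wavelet)[: len(ecg)]

        def detect_r_peaks(ecg, fs):
            clean = denoise_ecg(ecg)
            # R peaks: prominent maxima at least 250 ms apart (assumed refractory interval).
            peaks, _ = find_peaks(clean, distance=int(0.25 * fs), prominence=np.std(clean))
            return peaks

        fs = 360  # MIT-BIH sampling rate
        t = np.arange(10 * fs) / fs
        ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + 0.05 * np.random.randn(t.size)  # crude synthetic beats
        print(detect_r_peaks(ecg, fs)[:5])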

  5. Image denoising in bidimensional empirical mode decomposition domain: the role of Student's probability distribution function.

    PubMed

    Lahmiri, Salim

    2016-03-01

    Hybridisation of the bi-dimensional empirical mode decomposition (BEMD) with denoising techniques has been proposed in the literature as an effective approach for image denoising. In this Letter, the Student's probability density function is introduced in the computation of the mean envelope of the data during the BEMD sifting process to make it robust to values that are far from the mean. The resulting BEMD is denoted tBEMD. In order to show the effectiveness of the tBEMD, several image denoising techniques in tBEMD domain are employed; namely, fourth order partial differential equation (PDE), linear complex diffusion process (LCDP), non-linear complex diffusion process (NLCDP), and the discrete wavelet transform (DWT). Two biomedical images and a standard digital image were considered for experiments. The original images were corrupted with additive Gaussian noise with three different levels. Based on peak-signal-to-noise ratio, the experimental results show that PDE, LCDP, NLCDP, and DWT all perform better in the tBEMD than in the classical BEMD domain. It is also found that tBEMD is faster than classical BEMD when the noise level is low. When it is high, the computational cost in terms of processing time is similar. The effectiveness of the presented approach makes it promising for clinical applications. PMID:27222723

  6. OPTICAL COHERENCE TOMOGRAPHY HEART TUBE IMAGE DENOISING BASED ON CONTOURLET TRANSFORM.

    PubMed

    Guo, Qing; Sun, Shuifa; Dong, Fangmin; Gao, Bruce Z; Wang, Rui

    2012-01-01

    Optical Coherence Tomography (OCT) is gradually becoming a very important imaging technology in the biomedical field for its noninvasive, nondestructive and real-time properties. However, the interpretation and application of OCT images are limited by ubiquitous noise. In this paper, a denoising algorithm based on the contourlet transform for OCT heart tube images is proposed. A bivariate function is constructed to model the joint probability density function (pdf) of a coefficient and its cousin in the contourlet domain. A bivariate shrinkage function is derived to denoise the image by maximum a posteriori (MAP) estimation. Three metrics, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL), are used to evaluate the image denoised with the proposed algorithm. The results show that the signal-to-noise ratio is improved while the edges of objects are preserved by the proposed algorithm. Systematic comparisons with other conventional algorithms, such as the mean filter, median filter, RKT filter, Lee filter, as well as the bivariate shrinkage function for a wavelet-based algorithm, are conducted. The advantage of the proposed algorithm over these methods is illustrated. PMID:25364626
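    The bivariate shrinkage rule itself (in the style of Sendur and Selesnick) can be written in a few lines; here it is shown on plain coefficient arrays with a crude local signal-variance estimate, rather than inside a contourlet decomposition, and the noise level is assumed known.

        import numpy as np

        def bivariate_shrink(w, w_parent, sigma_n):
            # Shrink a coefficient using the joint magnitude of (coefficient, parent/cousin).
            sigma = np.sqrt(np.maximum(np.mean(w ** 2) - sigma_n ** 2, 1e-12))  # crude signal std estimate
            r = np.sqrt(w ** 2 + w_parent ** 2)
            gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / (r + 1e-12)
            return w * gain

        rng = np.random.default_rng(0)
        child = 2.0 * rng.standard_normal(1000) + 0.5 * rng.standard_normal(1000)
        parent = 0.8 * child + 0.5 * rng.standard_normal(1000)
        denoised = bivariate_shrink(child, parent, sigma_n=0.5)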

  7. Detection of motor imagery of swallow EEG signals based on the dual-tree complex wavelet transform and adaptive model selection

    NASA Astrophysics Data System (ADS)

    Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai

    2014-06-01

    Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper firstly investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and there being fewer movement artifacts in performing tongue movements compared to swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-classes distances versus within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subjectwise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrated the feasibility of detecting MI-SW from MI-Ton models.

  8. Iterative Regularization Denoising Method Based on OSV Model for BioMedical Image Denoising

    NASA Astrophysics Data System (ADS)

    Guan-nan, Chen; Rong, Chen; Zu-fang, Huang; Ju-qiang, Lin; Shang-yuan, Feng; Yong-zeng, Li; Zhong-jian, Teng

    2011-01-01

    Biomedical image denoising algorithms based on gradient-dependent energy functionals often compromise biomedical image features like textures or certain details. This paper proposes an iterative regularization denoising method based on the OSV model for biomedical image denoising. By using iterative regularization, the oscillating patterns of texture and detail are added back to fit and compute the original OSV model, and the iterative behavior avoids excessive smoothing while denoising, preserving the features of textures and details to a certain extent. In addition, the iterative procedure is described in this paper and the convergence of the proposed algorithm is proved. Experimental results show that the proposed method achieves better results for biomedical image denoising, preserving not only the features of textures but also the details.

  9. Customized maximal-overlap multiwavelet denoising with data-driven group threshold for condition monitoring of rolling mill drivetrain

    NASA Astrophysics Data System (ADS)

    Chen, Jinglong; Wan, Zhiguo; Pan, Jun; Zi, Yanyang; Wang, Yu; Chen, Binqiang; Sun, Hailiang; Yuan, Jing; He, Zhengjia

    2016-02-01

    Timely fault identification of a rolling mill drivetrain is significant for guaranteeing product quality and realizing long-term safe operation. A condition monitoring system for the rolling mill drivetrain has therefore been designed and developed. However, because compound-fault and weak-fault feature information is usually submerged in heavy background noise, this task still faces challenges. This paper provides a possibility for fault identification of rolling mill drivetrains by proposing a customized maximal-overlap multiwavelet denoising method. The effectiveness of a wavelet denoising method mainly relies on the appropriate selection of the wavelet basis, the transform strategy and the threshold rule. First, in order to realize exact matching and accurate detection of fault features, a customized multiwavelet basis function is constructed via a symmetric lifting scheme, and the vibration signal is then processed by the maximal-overlap multiwavelet transform. Next, based on the spatial dependency of multiwavelet transform coefficients, a spatial neighboring coefficient data-driven group threshold shrinkage strategy is developed for the denoising process by choosing the optimal group length and threshold via the minimum of Stein's Unbiased Risk Estimate. The effectiveness of the proposed method is first demonstrated through compound fault identification of a reduction gearbox on a rolling mill. It is then applied to weak fault identification of a dedusting fan bearing on a rolling mill, and the results support its feasibility.
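    For the threshold-selection step, a generic SURE-based choice of soft threshold on a vector of coefficients (noise standard deviation assumed known) is sketched below; the paper's grouped, data-driven variant for multiwavelet coefficients is more elaborate.

        import numpy as np

        def sure_soft_threshold(coeffs, sigma=1.0):
            # Choose the soft threshold minimizing Stein's Unbiased Risk Estimate (SURE).
            x = np.abs(coeffs / sigma)
            n = x.size
            best_t, best_risk = 0.0, np.inf
            for t in np.sort(x):
                risk = n - 2.0 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2)
                if risk < best_risk:
                    best_t, best_risk = t, risk
            return best_t * sigma

        rng = np.random.default_rng(0)
        signal = np.zeros(512)
        signal[::32] = 5.0                      # sparse "fault features"
        noisy = signal + rng.standard_normal(512)
        t = sure_soft_threshold(noisy, sigma=1.0)
        denoised = np.sign(noisy) * np.maximum(np.abs(noisy) - t, 0.0)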

  10. Translation-invariant multiwavelet denoising using improved neighbouring coefficients and its application on rolling bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Hailiang, Sun; Yanyang, Zi; Zhengjia, He; Xiaodong, Wang; Jing, Yuan

    2011-07-01

    The deficiencies of conventional neighbouring-coefficient denoising are the fixed neighbouring window size and the global threshold; as a result, it cannot accurately represent the locally concentrated energy of signals collected in engineering applications. An improved neighbouring-coefficient method, named Neighbouring Coefficients Dependent on Level (NCDL), is proposed. The size of the neighbouring window varies with the decomposition level and the threshold is chosen according to the neighbourhood. The translation-invariant method can effectively weaken some visual artifacts, for example the Gibbs phenomenon in the neighbourhood of discontinuities. Multiwavelets have two or more scaling and wavelet functions. Compared with scalar wavelets, multiwavelets offer several excellent properties such as symmetry, orthogonality, compact support and a higher order of vanishing moments. A novel denoising method - translation-invariant multiwavelet denoising with improved neighbouring coefficients - is presented. A simulated signal proves the validity of the presented method. The method is then applied to the fault diagnosis of a locomotive rolling bearing. The results show that the presented method can effectively extract the fault characteristic frequency of a slight scrape on the outer race of the rolling bearing.
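    A generic neighbouring-coefficient shrinkage step is sketched below: each detail coefficient is scaled by a factor computed from the energy of a sliding window of its neighbours. The window length and threshold constant are assumptions; the NCDL method described above additionally varies the window and threshold with the decomposition level.

        import numpy as np

        def neigh_coeff_shrink(detail, sigma, window=3, lam=None):
            # Scale each coefficient by max(1 - lam*sigma^2 / S^2, 0), where S^2 is the
            # energy of its neighbourhood (NeighCoeff-style rule).
            n = detail.size
            if lam is None:
                lam = 2.0 * np.log(n)  # assumed threshold constant
            padded = np.pad(detail, window // 2, mode="reflect")
            out = np.empty(n)
            for i in range(n):
                s2 = np.sum(padded[i:i + window] ** 2)
                out[i] = detail[i] * max(1.0 - lam * sigma ** 2 / s2, 0.0)
            return out

        rng = np.random.default_rng(2)
        clean = np.zeros(256)
        clean[100:110] = 4.0
        noisy = clean + rng.standard_normal(256)
        shrunk = neigh_coeff_shrink(noisy, sigma=1.0)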

  11. Discrete shearlet transform on GPU with applications in anomaly detection and denoising

    NASA Astrophysics Data System (ADS)

    Gibert, Xavier; Patel, Vishal M.; Labate, Demetrio; Chellappa, Rama

    2014-12-01

    Shearlets have emerged in recent years as one of the most successful methods for the multiscale analysis of multidimensional signals. Unlike wavelets, shearlets form a pyramid of well-localized functions defined not only over a range of scales and locations, but also over a range of orientations and with highly anisotropic supports. As a result, shearlets are much more effective than traditional wavelets in handling the geometry of multidimensional data, and this was exploited in a wide range of applications from image and signal processing. However, despite their desirable properties, the wider applicability of shearlets is limited by the computational complexity of current software implementations. For example, denoising a single 512 × 512 image using a current implementation of the shearlet-based shrinkage algorithm can take between 10 s and 2 min, depending on the number of CPU cores, and much longer processing times are required for video denoising. On the other hand, due to the parallel nature of the shearlet transform, it is possible to use graphics processing units (GPU) to accelerate its implementation. In this paper, we present an open source stand-alone implementation of the 2D discrete shearlet transform using CUDA C++ as well as GPU-accelerated MATLAB implementations of the 2D and 3D shearlet transforms. We have instrumented the code so that we can analyze the running time of each kernel under different GPU hardware. In addition to denoising, we describe a novel application of shearlets for detecting anomalies in textured images. In this application, computation times can be reduced by a factor of 50 or more, compared to multicore CPU implementations.

  12. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and boundaries of anomalies. In this paper, we propose a novel patch based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best matching candidates for every noisy sample, and the denoised value is computed based on a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground truth, noise free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with state of the art denoising methods while outperforming them in preserving the critical clinically relevant structures.

  13. Local Wavelet-Based Filtering of Electromyographic Signals to Eliminate the Electrocardiographic-Induced Artifacts in Patients with Spinal Cord Injury

    PubMed Central

    Nitzken, Matthew; Bajaj, Nihit; Aslan, Sevda; Gimel’farb, Georgy; Ovechkin, Alexander

    2013-01-01

    Surface Electromyography (EMG) is a standard method used in clinical practice and research to assess motor function in order to help with the diagnosis of neuromuscular pathology in human and animal models. EMG recorded from trunk muscles involved in the activity of breathing can be used as a direct measure of respiratory motor function in patients with spinal cord injury (SCI) or other disorders associated with motor control deficits. However, EMG potentials recorded from these muscles are often contaminated with heart-induced electrocardiographic (ECG) signals. Elimination of these artifacts plays a critical role in the precise measurement of respiratory muscle electrical activity. This study was undertaken to find an optimal approach to eliminate the ECG artifacts from EMG recordings. Conventional global filtering can be used to decrease the ECG-induced artifact. However, this method can alter the EMG signal and change physiologically relevant information. We hypothesize that, unlike global filtering, localized removal of ECG artifacts will not change the original EMG signals. We develop an approach to remove the ECG artifacts without altering the amplitude and frequency components of the EMG signal by using an externally recorded ECG signal as a mask to locate areas of the ECG spikes within the EMG data. These segments containing ECG spikes were decomposed into 128 sub-wavelets by a custom-scaled Morlet Wavelet Transform. The ECG-related sub-wavelets at the ECG spike location were removed and a de-noised EMG signal was reconstructed. Validity of the proposed method was proven using mathematically simulated synthetic signals and EMG obtained from SCI patients. We compare the root-mean-square error and the relative change in variance between this method and global, notch and adaptive filters. The results show that the localized wavelet-based filtering has the benefit of not introducing error in the native EMG signal and accurately removing ECG artifacts from EMG signals

  14. Computed tomography perfusion imaging denoising using Gaussian process regression

    NASA Astrophysics Data System (ADS)

    Zhu, Fan; Carpenter, Trevor; Rodriguez Gonzalez, David; Atkinson, Malcolm; Wardlaw, Joanna

    2012-06-01

    Brain perfusion weighted images acquired using dynamic contrast studies have an important clinical role in acute stroke diagnosis and treatment decisions. However, computed tomography (CT) images suffer from low contrast-to-noise ratios (CNR) as a consequence of limiting the patient's exposure to radiation. As a consequence, the development of methods for improving the CNR is valuable. The majority of existing approaches for denoising CT images are optimized for 3D (spatial) information, including spatial decimation (spatially weighted mean filters) and techniques based on wavelet and curvelet transforms. However, perfusion imaging data are 4D, as they also contain temporal information. Our approach uses Gaussian process regression (GPR), which takes advantage of the temporal information, to reduce the noise level. Over the entire image, GPR gains a 99% CNR improvement over the raw images and also improves the quality of haemodynamic maps, allowing better identification of edges and detailed information. At the level of individual voxels, GPR provides a stable baseline, helps us to identify key parameters from tissue time-concentration curves and reduces the oscillations in the curve. GPR is superior to the comparable techniques used in this study.
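    The per-voxel temporal smoothing can be sketched with scikit-learn's Gaussian process regressor on a single time-concentration curve; the RBF-plus-white-noise kernel and its hyperparameters are assumptions for illustration, not the parameters used in the study.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Synthetic time-concentration curve for one voxel: gamma-variate-like bolus plus noise.
        t = np.arange(0.0, 50.0, 1.0)  # seconds
        curve = 5.0 * (t / 10.0) ** 2 * np.exp(-t / 5.0)
        noisy = curve + 0.6 * np.random.randn(t.size)

        # The RBF kernel models the smooth haemodynamic response; the WhiteKernel absorbs the noise.
        kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.3)
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(t.reshape(-1, 1), noisy)
        smoothed, std = gpr.predict(t.reshape(-1, 1), return_std=True)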

  15. Denoising of Ultrasound Cervix Image Using Improved Anisotropic Diffusion Filter

    PubMed Central

    Rose, R Jemila; Allwin, S

    2015-01-01

    ABSTRACT Objective: The purpose of this study was to evaluate an improved oriented speckle reducing anisotropic diffusion (IADF) filter that suppresses speckle noise in ultrasound B-mode images and shows better results than previous filters such as anisotropic diffusion, wavelet denoising and local statistics. Methods: The clinical ultrasound images of the cervix were obtained with an ATL HDI 5000 ultrasound machine from the Regional Cancer Centre, Medical College campus, Thiruvananthapuram. The images were stored in bmp format with dimensions of 256 × 256 and processed with the improved oriented speckle reducing anisotropic diffusion filter. For analysis, 24 ultrasound cervix images were tested and the performance measured. Results: The filter provides quality metrics of a maximum peak signal-to-noise ratio (PSNR) of 31 dB, a structural similarity index map (SSIM) of 0.88 and an edge preservation accuracy of 88%. Conclusion: The IADF filter is the optimal method; it is capable of strong speckle suppression with low computational complexity. PMID:26624591

  16. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF) while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. Pixels in the group are analyzed by probability statistics and the gray value with maximum probability is used as the estimate of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real 8-gray-scale range image of coherent ladar are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.

  17. Magnetic resonance image denoising using multiple filters

    NASA Astrophysics Data System (ADS)

    Ai, Danni; Wang, Jinjuan; Miwa, Yuichi

    2013-07-01

    We introduced and compared ten denoising filters, all of which were proposed during the last fifteen years. In particular, the state-of-the-art denoising algorithms, NLM and BM3D, have attracted much attention, and several extensions have been proposed to improve the noise reduction of these two algorithms. On the other hand, optimal dictionaries, sparse representations and appropriate shapes of the transform's support are also considered for image denoising. The comparison among the various filters is implemented by measuring the SNR of a phantom image and the denoising effectiveness on a clinical image. The computational time is finally evaluated.

  18. Analysis the application of several denoising algorithm in the astronomical image denoising

    NASA Astrophysics Data System (ADS)

    Jiang, Chao; Geng, Ze-xun; Bao, Yong-qiang; Wei, Xiao-feng; Pan, Ying-feng

    2014-02-01

    Image denoising is an important preprocessing method and one of the frontier topics in the fields of computer graphics and computer vision. Astronomical target imaging is highly vulnerable to atmospheric turbulence and noise interference; in order to reconstruct a high-quality image of the target, we need to restore the high-frequency signal of the image, but noise also belongs to the high-frequency signal, so noise amplification occurs in the reconstruction process. To avoid this phenomenon, adding image denoising to the reconstruction process is a feasible solution. This paper mainly studies the principles of four classic denoising algorithms, TV, BLS-GSM, NLM and BM3D. We use simulated data for image denoising to analyse the performance of the four algorithms; experiments demonstrate that all four algorithms can remove the noise, and that the BM3D algorithm not only achieves high denoising quality but is also the most efficient.

  19. Fuzzy logic recursive change detection for tracking and denoising of video sequences

    NASA Astrophysics Data System (ADS)

    Zlokolica, Vladimir; De Geyter, Matthias; Schulte, Stefan; Pizurica, Aleksandra; Philips, Wilfried; Kerre, Etienne

    2005-03-01

    In this paper we propose a fuzzy logic recursive scheme for motion detection and temporal filtering that can deal with Gaussian noise and unsteady illumination conditions in both the temporal and spatial directions. Our focus is on applications concerning tracking and denoising of image sequences. We process an input noisy sequence with fuzzy logic motion detection in order to determine the degree of motion confidence. The proposed motion detector combines the membership degrees appropriately using defined fuzzy rules, where the membership degree of motion for each pixel in a 2D sliding window is determined by the proposed membership function. Both the fuzzy membership function and the fuzzy rules are defined in such a way that the performance of the motion detector is optimized in terms of its robustness to noise and unsteady lighting conditions. We perform tracking and recursive adaptive temporal filtering simultaneously, where the amount of filtering is inversely proportional to the confidence in the existence of motion. Finally, temporally filtered frames are further processed by the proposed spatial filter in order to obtain the denoised image sequence. The main contribution of this paper is the robust novel fuzzy recursive scheme for motion detection and temporal filtering. We evaluate the proposed motion detection algorithm using two criteria: robustness to noise and changing illumination conditions, and motion blur in temporal recursive denoising. Additionally, we make comparisons in terms of noise reduction with other state-of-the-art video denoising techniques.

  20. [Denoising and assessing method of additive noise in the ultraviolet spectrum of SO2 in flue gas].

    PubMed

    Zhou, Tao; Sun, Chang-Ku; Liu, Bin; Zhao, Yu-Mei

    2009-11-01

    The problem of denoising and assessing the ultraviolet absorption spectrum of SO2 in flue gas was studied based on DOAS. The denoising procedure for the additive noise in the spectrum was divided into two parts: reducing the additive noise and enhancing the useful signal. When obtaining the absorption features of the measured gas, a multi-resolution preprocessing of the original spectrum was adopted for denoising by DWT (discrete wavelet transform). The signal energy operators at different scales were used to choose the denoising threshold and separate the useful signal from the noise. On the other hand, because there is no sudden change in the flue gas spectra over time, the useful signal component was enhanced according to the time dependence of the signal. The standard absorption cross section was used to build an ideal absorption spectrum at the measured gas temperature and pressure; this ideal spectrum was used as the desired signal, instead of the original spectrum, in the assessment method to evaluate the SNR (signal-to-noise ratio). Proof tests were carried out in two different environments: in the lab and in the field. In the lab, SO2 was measured several times with a system using the method described above. The average deviation was less than 1.5%, while the repeatability was less than 1%, and the short-range experimental data were better than the large-range data. At a power plant site, where the flue gas concentration varied over a large range, the maximum deviation of this method was 2.31% in the 18 groups of comparison data. The experimental results show that the denoising effect on the field spectrum was better than that on the lab spectrum. This means that the method can effectively improve the SNR of a spectrum that is seriously polluted by additive noise. PMID:20101989
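
    A minimal sketch of the DWT part of this procedure is shown below with PyWavelets: per-level thresholds are estimated from a segment assumed to contain only noise (the paper instead derives scale-dependent thresholds from signal energy operators), so the wavelet choice, level and threshold rule are simplifying assumptions.

        # Sketch: DWT denoising of a 1D absorption spectrum with per-level thresholds
        # estimated from a noise-only segment (simplified relative to the paper).
        import numpy as np
        import pywt

        def denoise_spectrum(spectrum, noise_segment, wavelet='db6', level=5):
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            noise_coeffs = pywt.wavedec(noise_segment, wavelet, level=level)
            out = [coeffs[0]]                              # keep the approximation untouched
            for c, n in zip(coeffs[1:], noise_coeffs[1:]):
                sigma = np.median(np.abs(n)) / 0.6745      # robust noise estimate at this scale
                thr = sigma * np.sqrt(2 * np.log(len(spectrum)))
                out.append(pywt.threshold(c, thr, mode='soft'))
            return pywt.waverec(out, wavelet)[:len(spectrum)]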

  1. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is the display visual resolution in pixels/degree and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
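
    The level-to-frequency mapping above is easy to compute; the snippet below pairs it with a generic log-parabola threshold model of the kind the paper constructs, where the numeric constants are placeholders for illustration rather than the published fit.

        # Illustrative only: spatial frequency per DWT level and a placeholder threshold model.
        import numpy as np

        def wavelet_spatial_frequency(r_pixels_per_degree, level):
            """Spatial frequency of a DWT level: f = r * 2**(-level), in cycles/degree."""
            return r_pixels_per_degree * 2.0 ** (-level)

        def detection_threshold(f, t_min=0.02, f_peak=2.0, k=1.0):
            """Generic log-parabola: thresholds rise as frequency moves away from f_peak."""
            return t_min * 10.0 ** (k * (np.log10(f) - np.log10(f_peak)) ** 2)

        r = 32.0                                           # display resolution, pixels/degree (example)
        for level in range(1, 6):
            f = wavelet_spatial_frequency(r, level)
            print(f"level {level}: f = {f:5.2f} c/deg, threshold ~ {detection_threshold(f):.4f}")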

  2. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
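
    The compression idea can be sketched as follows: take the 2D wavelet transform of a correlation field, keep only the largest few percent of coefficients, and check how much energy and structure survive. The field below is synthetic, and the wavelet and retention percentage are illustrative rather than the values used in the assimilation experiments.

        # Sketch: truncate a 2D wavelet representation of a (synthetic) correlation field.
        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        x = np.linspace(-1, 1, 128)
        field = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 0.1) \
                + 0.05 * rng.standard_normal((128, 128))

        coeffs = pywt.wavedec2(field, 'db4', level=4)
        arr, slices = pywt.coeffs_to_array(coeffs)

        keep = 0.03                                        # keep the largest 3% of coefficients
        thr = np.quantile(np.abs(arr), 1 - keep)
        arr_small = np.where(np.abs(arr) >= thr, arr, 0.0)

        recon = pywt.waverec2(pywt.array_to_coeffs(arr_small, slices, output_format='wavedec2'), 'db4')
        energy_kept = np.sum(arr_small ** 2) / np.sum(arr ** 2)
        print(f"retained energy: {100 * energy_kept:.1f}%, "
              f"max reconstruction error: {np.max(np.abs(recon - field)):.3f}")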

  3. Review of wavelet transforms for pattern recognitions

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1996-03-01

    After relating the adaptive wavelet transform to the human visual and hearing systems, we exploit the synergism between such smart sensor processing and brain-style neural network computing. The freedom of choosing an appropriate kernel of a linear transform, which is given to us by the recent mathematical foundation of the wavelet transform, is exploited fully and is generally called the adaptive wavelet transform (WT). There are several levels of adaptivity: (1) optimum coefficients: adjustable transform coefficients chosen with respect to a fixed mother kernel for better invariant signal representation; (2) super-mother: grouping different scales of daughter wavelets of the same or different mother wavelets at different shift locations into a new family, called a superposition mother kernel, for better speech signal classification; (3) variational calculus to determine ab initio a constrained-optimization mother for a specific task. The tradeoff between the mathematical rigor of complete orthonormality and order-N speed on the one hand, and adaptive flexibility on the other, is finally up to the user's needs. Then, to illustrate (1), a new invariant optoelectronic architecture of a wedge-shaped filter in the WT domain is given for scale-invariant signal classification by neural networks.

  4. Predicting apple tree leaf nitrogen content based on hyperspectral applying wavelet and wavelet packet analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yao; Zheng, Lihua; Li, Minzan; Deng, Xiaolei; Sun, Hong

    2012-11-01

    The visible and NIR spectral reflectance of apple leaves was measured with a spectrophotometer in the fruit-bearing, fruit-falling and fruit-maturing periods, respectively, and the nitrogen content of each sample was measured in the lab. The correlation between the nitrogen content of apple tree leaves and their hyperspectral data was analyzed. The low-frequency signal and the high-frequency noise-reduced signal were then extracted using the wavelet packet decomposition algorithm. At the same time, the original spectral reflectance was denoised using wavelet filtering. Principal component spectra were then obtained by PCA (Principal Component Analysis). The model built on the noise-reduced principal component spectra reached higher accuracy than the other three models in the fruit-bearing and physiological fruit-maturing periods, with calibration R2 of 0.9529 and 0.9501 and validation R2 of 0.7285 and 0.7303, respectively. In the fruit-falling period, the model based on the low-frequency principal component spectra reached the highest accuracy, with a calibration R2 of 0.9921 and a validation R2 of 0.6234. The results showed that the wavelet packet algorithm is an effective way to improve the prediction of apple tree nitrogen content from hyperspectral data.

  5. Performance evaluation and optimization of BM4D-AV denoising algorithm for cone-beam CT images

    NASA Astrophysics Data System (ADS)

    Huang, Kuidong; Tian, Xiaofei; Zhang, Dinghua; Zhang, Hua

    2015-12-01

    The broadening application of cone-beam Computed Tomography (CBCT) in medical diagnostics and nondestructive testing necessitates advanced denoising algorithms for its 3D images. The block-matching and four-dimensional filtering algorithm with adaptive variance (BM4D-AV) is applied to 3D image denoising in this research. To optimize it, the key filtering parameters of the BM4D-AV algorithm are first assessed on simulated CBCT images, and a table of optimized filtering parameters is obtained. Then, considering the complexity of the noise in realistic CBCT images, possible noise standard deviations in BM4D-AV are evaluated to establish a selection principle for realistic denoising. The corresponding experiments demonstrate that the BM4D-AV algorithm with optimized parameters presents an excellent denoising effect on realistic 3D CBCT images.

  6. Digital audio signal filtration based on the dual-tree wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, A. S.; Pavlov, A. N.

    2015-07-01

    A new method of digital audio signal filtration based on the dual-tree wavelet transform is described. An adaptive approach is proposed that allows the parameters of the wavelet filter to be adjusted automatically for optimal performance. A significant improvement in the quality of signal filtration is demonstrated in comparison with traditionally used filters based on the discrete wavelet transform.

  7. Rotation and Scale Invariant Wavelet Feature for Content-Based Texture Image Retrieval.

    ERIC Educational Resources Information Center

    Lee, Moon-Chuen; Pun, Chi-Man

    2003-01-01

    Introduces a rotation and scale invariant log-polar wavelet texture feature for image retrieval. The underlying feature extraction process involves a log-polar transform followed by an adaptive row shift invariant wavelet packet transform. Experimental results show that this rotation and scale invariant wavelet feature is quite effective for image…

  8. Sonar target enhancement by shrinkage of incoherent wavelet coefficients.

    PubMed

    Hunter, Alan J; van Vossen, Robbert

    2014-01-01

    Background reverberation can obscure useful features of the target echo response in broadband low-frequency sonar images, adversely affecting detection and classification performance. This paper describes a resolution and phase-preserving means of separating the target response from the background reverberation noise using a coherence-based wavelet shrinkage method proposed recently for de-noising magnetic resonance images. The algorithm weights the image wavelet coefficients in proportion to their coherence between different looks under the assumption that the target response is more coherent than the background. The algorithm is demonstrated successfully on experimental synthetic aperture sonar data from a broadband low-frequency sonar developed for buried object detection. PMID:24437766

  9. Wavelet based image visibility enhancement of IR images

    NASA Astrophysics Data System (ADS)

    Jiang, Qin; Owechko, Yuri; Blanton, Brendan

    2016-05-01

    Enhancing the visibility of infrared images obtained in a degraded visibility environment is very important for many applications such as surveillance, visual navigation in bad weather, and helicopter landing in brownout conditions. In this paper, we present an IR image visibility enhancement system based on adaptively modifying the wavelet coefficients of the images. In our proposed system, input images are first filtered by a histogram-based dynamic range filter in order to remove sensor noise and convert the input images into 8-bit dynamic range for efficient processing and display. By utilizing a wavelet transformation, we modify the image intensity distribution and enhance image edges simultaneously. In the wavelet domain, low frequency wavelet coefficients contain original image intensity distribution while high frequency wavelet coefficients contain edge information for the original images. To modify the image intensity distribution, an adaptive histogram equalization technique is applied to the low frequency wavelet coefficients while to enhance image edges, an adaptive edge enhancement technique is applied to the high frequency wavelet coefficients. An inverse wavelet transformation is applied to the modified wavelet coefficients to obtain intensity images with enhanced visibility. Finally, a Gaussian filter is used to remove blocking artifacts introduced by the adaptive techniques. Since wavelet transformation uses down-sampling to obtain low frequency wavelet coefficients, histogram equalization of low-frequency coefficients is computationally more efficient than histogram equalization of the original images. We tested the proposed system with degraded IR images obtained from a helicopter landing in brownout conditions. Our experimental results show that the proposed system is effective for enhancing the visibility of degraded IR images.
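
    A rough sketch of this pipeline is given below using a single-level DWT, adaptive histogram equalization on the approximation band and a simple gain on the detail bands; the paper's histogram-based dynamic-range filter and adaptive edge enhancement are simplified here, and the gain and smoothing values are illustrative.

        # Sketch: wavelet-domain visibility enhancement of an 8-bit IR image (simplified).
        import numpy as np
        import pywt
        from skimage import exposure, filters

        def enhance_ir(image_8bit, wavelet='haar', detail_gain=1.5):
            img = image_8bit.astype(np.float64) / 255.0
            cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)

            # Adaptive histogram equalization on the low-frequency band (intensity distribution)
            lo, hi = cA.min(), cA.max()
            cA_eq = exposure.equalize_adapthist((cA - lo) / (hi - lo + 1e-12))
            cA_eq = cA_eq * (hi - lo) + lo

            # Simple edge boost on the high-frequency bands
            enhanced = pywt.idwt2((cA_eq, (detail_gain * cH, detail_gain * cV, detail_gain * cD)),
                                  wavelet)

            # Light Gaussian smoothing to suppress artifacts introduced by the adaptive steps
            return filters.gaussian(np.clip(enhanced, 0, 1), sigma=0.8)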

  10. Performance comparison of denoising filters for source camera identification

    NASA Astrophysics Data System (ADS)

    Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.

    2011-02-01

    Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
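
    The fingerprint-extraction step can be sketched as follows: the noise residual of each image is the difference between the image and its denoised version, and averaging residuals over many images from one camera suppresses scene content while retaining the PRNU. A generic wavelet denoiser stands in here for the filters compared in the paper.

        # Sketch: PRNU-style fingerprint estimation and a simple correlation detector.
        import numpy as np
        from skimage.restoration import denoise_wavelet

        def camera_fingerprint(images):
            """images: iterable of float grayscale arrays (same size) from one camera."""
            residuals = [img - denoise_wavelet(img, mode='soft') for img in images]
            return np.mean(residuals, axis=0)      # averaging suppresses content, keeps PRNU

        def correlation(fingerprint, test_image):
            residual = test_image - denoise_wavelet(test_image, mode='soft')
            a = fingerprint - fingerprint.mean()
            b = residual - residual.mean()
            return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))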

  11. Local denoising of digital speckle pattern interferometry fringes by multiplicative correlation and weighted smoothing splines.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2005-05-10

    We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.

  12. Nonlocal Markovian models for image denoising

    NASA Astrophysics Data System (ADS)

    Salvadeo, Denis H. P.; Mascarenhas, Nelson D. A.; Levada, Alexandre L. M.

    2016-01-01

    Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is considered for better image modeling, resulting in an improved quality of filtering. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarities between the patches corresponding to each pair. Also, a maximum pseudolikelihood estimation of the spatial dependency parameter (β) for these models is presented here. To evaluate this proposal, these models are used as an a priori model in a maximum a posteriori estimation to denoise additive white Gaussian noise in images. Finally, results display a notable improvement in both quantitative and qualitative terms in comparison with the local MRFs.

  13. Efficient fourier-wavelet super-resolution.

    PubMed

    Robinson, M Dirk; Toth, Cynthia A; Lo, Joseph Y; Farsiu, Sina

    2010-10-01

    Super-resolution (SR) is the process of combining multiple aliased low-quality images to produce a high-resolution high-quality image. Aside from registration and fusion of low-resolution images, a key process in SR is the restoration and denoising of the fused images. We present a novel extension of the combined Fourier-wavelet deconvolution and denoising algorithm ForWaRD to the multiframe SR application. Our method first uses a fast Fourier-based multiframe image restoration to produce a sharp, yet noisy estimate of the high-resolution image. Our method then applies a space-variant nonlinear wavelet thresholding that addresses the nonstationarity inherent in resolution-enhanced fused images. We describe a computationally efficient method for implementing this space-variant processing that leverages the efficiency of the fast Fourier transform (FFT) to minimize complexity. Finally, we demonstrate the effectiveness of this algorithm for regular imagery as well as in digital mammography. PMID:20460208

  14. Identification of formation interfaces by using wavelet and Fourier transforms

    NASA Astrophysics Data System (ADS)

    Mukherjee, Bappa; Srivardhan, V.; Roy, P. N. S.

    2016-05-01

    The identification of formation interfaces from well log data is of prime importance. The interfaces are not clearly discernible due to the presence of high- and low-frequency noise in the log response. Accurate bed boundary information is crucial in hydrocarbon exploration, and the problem has received considerable attention, with many techniques proposed. Frequency-spectrum-based filtering techniques aid interpretation but usually lead to inaccurate amplification of unwanted components of the log response. The wavelet transform is very effective in denoising the log response and can be used to filter the low- and high-frequency components of the signal. The use of Fourier and wavelet transforms in denoising log data to obtain formation interfaces is demonstrated in this work. The feasibility of the proposed technique is tested so that it can be used in industry to decipher formation interfaces. The workflow is demonstrated on wells from the Upper Assam Basin, using self-potential, gamma ray and resistivity log responses.

  15. Speckle filtering of medical ultrasonic images using wavelet and guided filter.

    PubMed

    Zhang, Ju; Lin, Guangkuo; Wu, Lili; Cheng, Yun

    2016-02-01

    Speckle noise is an inherent artifact in medical ultrasound images that significantly degrades image quality and restricts accuracy in automatic diagnostic techniques. Speckle reduction is therefore an important step prior to the analysis and processing of ultrasound images. A new de-noising method based on an improved wavelet filter and a guided filter is proposed in this paper. According to the characteristics of medical ultrasound images in the wavelet domain, an improved threshold function based on the universal wavelet threshold function is developed. The wavelet coefficients of the speckle noise and of the noise-free signal are modeled as a Rayleigh distribution and a generalized Gaussian distribution, respectively. Bayesian maximum a posteriori estimation is applied to obtain a new wavelet shrinkage algorithm. The coefficients of the low-frequency sub-band in the wavelet domain are filtered by the guided filter. The filtered image is then obtained using the inverse wavelet transform. Experiments comparing the proposed method with seven other de-speckling filters are conducted. The results show that the proposed method not only has a strong de-speckling ability, but also preserves image details, such as the edge of a lesion. PMID:26489484

  16. [A novel denoising approach to SVD filtering based on DCT and PCA in CT image].

    PubMed

    Feng, Fuqiang; Wang, Jun

    2013-10-01

    Because of various effects of the imaging mechanism, noise is inevitably introduced in the medical CT imaging process. Noise greatly degrades image quality and brings difficulties to clinical diagnosis. This paper presents a new method to improve the performance of singular value decomposition (SVD) filtering in CT images. Filters based on SVD can effectively analyze the characteristics of the image in the horizontal (and/or vertical) directions. According to the features of CT images, the discrete cosine transform (DCT) can be used to extract the region of interest and to mask uninteresting regions, so as to extract the structural characteristics of the image. SVD filtering was then applied to the DCT-processed image, and a weighting function was constructed for adaptively weighted image reconstruction. The proposed denoising approach was applied to CT image denoising, and the experimental results showed that the new method can effectively improve the performance of SVD filtering.

  17. Infrared image denoising by nonlocal means filtering

    NASA Astrophysics Data System (ADS)

    Dee-Noor, Barak; Stern, Adrian; Yitzhaky, Yitzhak; Kopeika, Natan

    2012-05-01

    The recently introduced non-local means (NLM) image denoising technique broke the traditional paradigm according to which image pixels are processed only by their immediate surroundings. The NLM technique has been demonstrated to outperform state-of-the-art denoising techniques when applied to images in the visible range. It is even more powerful when applied to low-contrast images, which makes it attractive for denoising infrared (IR) images. In this work we investigate the performance of NLM applied to infrared images. We also present a new technique designed to speed up the NLM filtering process. The main drawback of NLM is the large computational time required by the search for similar patches, and several techniques have been developed in recent years to reduce this computational burden. Here we present a new technique designed to reduce the computational cost while sustaining the filtering quality of NLM. We show that the new technique, which we call Multi-Resolution Search NLM (MRS-NLM), significantly reduces the computational cost of the filtering process, and we present a study of its performance on IR images.

  18. Denoising techniques combined to Monte Carlo simulations for the prediction of high-resolution portal images in radiotherapy treatment verification

    NASA Astrophysics Data System (ADS)

    Lazaro, D.; Barat, E.; Le Loirec, C.; Dautremer, T.; Montagu, T.; Guérin, L.; Batalla, A.

    2013-05-01

    This work investigates the possibility of combining Monte Carlo (MC) simulations with a denoising algorithm for the accurate prediction of images acquired using amorphous silicon (a-Si) electronic portal imaging devices (EPIDs). An accurate MC model of the Siemens OptiVue1000 EPID was first developed using the PENELOPE code, integrating non-uniform backscatter modelling. Two existing denoising algorithms were then applied to simulated portal images, namely the iterative reduction of noise (IRON) method and the locally adaptive Savitzky-Golay (LASG) method. A third denoising method, based on a nonparametric Bayesian framework and called DPGLM (for Dirichlet process generalized linear model), was also developed. Performances of the IRON, LASG and DPGLM methods, in terms of smoothing capabilities and computation time, were compared for portal images computed for different values of the RMS pixel noise (up to 10%) in three different configurations: a heterogeneous phantom irradiated by a non-conformal 15 × 15 cm2 field, a conformal beam from a pelvis treatment plan, and an IMRT beam from a prostate treatment plan. For all configurations, DPGLM outperforms both IRON and LASG by providing better smoothing performance and demonstrating better robustness with respect to noise. Additionally, no parameter tuning is required by DPGLM, which makes the denoising step very generic and easy to handle for any portal image. Concerning the computation time, the denoising of 1024 × 1024 images takes about 1 h 30 min, 2 h and 5 min using DPGLM, IRON, and LASG, respectively. This paper shows the feasibility of predicting, within a few hours and at the same resolution as real images, accurate portal images by combining MC simulations with the DPGLM denoising algorithm.

  19. Visibility of Wavelet Quantization Noise

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  20. Lidar signal de-noising by singular value decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Huanxue; Liu, Jianguo; Zhang, Tianshu

    2014-11-01

    Signal de-noising remains an important problem in lidar signal processing. This paper presents a de-noising method based on singular value decomposition. Experimental results on simulated and real lidar signals show that the proposed algorithm not only improves the signal-to-noise ratio effectively, but also preserves more detailed information.
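
    The record does not spell out the exact SVD construction, but one common way to apply SVD de-noising to a 1D return is sketched below: embed the signal in a Hankel trajectory matrix, keep the leading singular components, and reconstruct by averaging anti-diagonals; the window length and rank are illustrative choices.

        # Sketch: SVD (subspace) de-noising of a 1D signal via a Hankel trajectory matrix.
        import numpy as np

        def svd_denoise(signal, window=50, rank=3):
            n = len(signal)
            k = n - window + 1
            hankel = np.array([signal[i:i + window] for i in range(k)])   # k x window
            u, s, vt = np.linalg.svd(hankel, full_matrices=False)
            approx = (u[:, :rank] * s[:rank]) @ vt[:rank, :]              # low-rank approximation

            # Average the anti-diagonals back into a 1D signal
            out = np.zeros(n)
            counts = np.zeros(n)
            for i in range(k):
                out[i:i + window] += approx[i]
                counts[i:i + window] += 1
            return out / counts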

  1. Evaluation of optical properties for real photonic crystal fiber based on total variation in wavelet domain

    NASA Astrophysics Data System (ADS)

    Shen, Yan; Wang, Xin; Lou, Shuqin; Lian, Zhenggang; Zhao, Tongtong

    2016-09-01

    An evaluation method based on the total variation (TV) model in the wavelet domain is proposed for modeling the optical properties of real photonic crystal fibers (PCFs). The TV model in the wavelet domain is set up to suppress the noise of the original image effectively and to rebuild the cross-section images of real PCFs with high accuracy. The optical properties of three PCFs are evaluated, including two kinds of PCFs supplied by Crystal Fiber A/S and a homemade side-leakage PCF, by combining the proposed model with the finite element method. Numerical results demonstrate that the proposed method achieves a high noise suppression ratio and effectively reduces the noise in the cross-section images of PCFs, which leads to an accurate evaluation of the optical properties of real PCFs. To the best of our knowledge, this is the first time the cross-section images of PCFs have been denoised with the TV model in the wavelet domain.

  2. [Application of kalman filtering based on wavelet transform in ICP-AES].

    PubMed

    Qin, Xia; Shen, Lan-sun

    2002-12-01

    Kalman filtering is a recursive algorithm, which has been proposed as an attractive alternative to correct overlapping interferences in ICP-AES. However, the noise in ICP-AES contaminates the signal arising from the analyte and hence limits the accuracy of kalman filtering. Wavelet transform is a powerful technique in signal denoising due to its multi-resolution characteristics. In this paper, first, the effect of noise on kalman filtering is discussed. Then we apply the wavelet-transform-based soft-thresholding as the pre-processing of kalman filtering. The simulation results show that the kalman filtering based on wavelet transform can effectively reduce the noise and increase the accuracy of the analysis. PMID:12914186
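
    A minimal sketch of this two-stage idea is given below: wavelet soft-thresholding as a pre-processing step, followed by a simple scalar (random-walk) Kalman filter; the state model and parameter values are illustrative and do not reproduce the ICP-AES interference-correction model.

        # Sketch: wavelet soft-thresholding followed by a scalar Kalman filter (illustrative).
        import numpy as np
        import pywt

        def wavelet_prefilter(y, wavelet='sym8', level=4):
            coeffs = pywt.wavedec(y, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest scale
            thr = sigma * np.sqrt(2 * np.log(len(y)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(y)]

        def kalman_smooth(y, process_var=1e-4, meas_var=1e-2):
            x, p = y[0], 1.0
            out = np.empty_like(y)
            for i, z in enumerate(y):
                p = p + process_var                 # predict (random-walk state)
                k = p / (p + meas_var)              # Kalman gain
                x = x + k * (z - x)                 # update with measurement z
                p = (1 - k) * p
                out[i] = x
            return out

        # filtered = kalman_smooth(wavelet_prefilter(noisy_spectrum))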

  3. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    NASA Astrophysics Data System (ADS)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce the noise in chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is first to decompose the chaotic signal and construct multidimensional input vectors on the basis of EMD and its translation invariance; second, independent component analysis is performed on the input vectors, which amounts to a self-adaptive denoising of the intrinsic mode functions (IMFs) of the chaotic signal; finally, the IMFs are combined to form the new denoised chaotic signal. Experiments were carried out on a Lorenz chaotic signal contaminated with different levels of Gaussian noise and on the monthly observed chaotic sunspot sequence. The results show that the proposed method is effective for denoising chaotic signals. Moreover, it can correct the center point in phase space effectively, which brings the reconstruction closer to the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No.2015III015-B02).

  4. A New Image Denoising Algorithm that Preserves Structures of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Bressert, Eli; Edmonds, P.; Kowal Arcand, K.

    2007-05-01

    We have processed numerous X-ray data sets using several well-known algorithms, such as Gaussian and adaptive smoothing, for public image releases. These algorithms are used to denoise/smooth images while retaining the overall structure of the observed objects. Recently, a new PDE algorithm and program, provided by Dr. David Tschumperle and referred to as GREYCstoration, has been tested and is in the process of being implemented by the Chandra EPO imaging group. Results of GREYCstoration will be presented and compared to the currently used methods for X-ray and multiple-wavelength images. What distinguishes Tschumperle's algorithm from the algorithms currently used by the EPO imaging group is its ability to strongly preserve the main structures of an image while reducing noise. In addition to denoising images, GREYCstoration can be used to erase artifacts accumulated during the observation and mosaicing stages. GREYCstoration produces results that are comparable to, and in some cases preferable to, the current denoising/smoothing algorithms. From our early stages of testing, the results of the new algorithm provide insight into its capabilities on multiple-wavelength astronomy data sets.

  5. Noise Smoothing for Structural Vibration Test Signals Using an Improved Wavelet Thresholding Technique

    PubMed Central

    Yi, Ting-Hua; Li, Hong-Nan; Zhao, Xiao-Yan

    2012-01-01

    In structural vibration tests, one of the main factors disturbing the reliability and accuracy of the results is the noise encountered. To overcome this deficiency, this paper presents a discrete wavelet transform (DWT) approach to denoise the measured signals. The denoising performance of DWT is discussed with respect to several processing parameters, including the type of wavelet, the decomposition level, the thresholding method, and the threshold selection rule. To overcome the disadvantages of the traditional hard- and soft-thresholding methods, an improved thresholding technique called the sigmoid-function-based thresholding scheme is presented. The procedure is validated using four benchmark signals with three degrees of degradation as well as a real measured signal obtained from a three-story reinforced concrete scale model shaking table experiment. The performance of the proposed method is evaluated by computing the signal-to-noise ratio (SNR) and the root-mean-square error (RMSE) after denoising. Results reveal that the proposed method offers performance superior to the traditional methods whether the signals are heavily or lightly contaminated by noise. PMID:23112652
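
    The exact thresholding function used in the paper is not reproduced here, but the sketch below shows one plausible sigmoid-shaped rule that interpolates smoothly between soft and hard thresholding as the steepness parameter grows, alongside the two traditional rules for comparison.

        # Sketch: a sigmoid-shaped wavelet thresholding rule (one plausible form, not the paper's).
        import numpy as np

        def sigmoid_threshold(w, thr, k=10.0):
            """Shrink coefficients: ~0 well below thr, ~unchanged well above it."""
            weight = 1.0 / (1.0 + np.exp(-k * (np.abs(w) - thr) / thr))
            return w * weight

        def hard_threshold(w, thr):
            return np.where(np.abs(w) > thr, w, 0.0)

        def soft_threshold(w, thr):
            return np.sign(w) * np.maximum(np.abs(w) - thr, 0.0)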

  6. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  7. An adaptive integrated algorithm for noninvasive fetal ECG separation and noise reduction based on ICA-EEMD-WS.

    PubMed

    Liu, Guangchen; Luan, Yihui

    2015-11-01

    High-resolution fetal electrocardiogram (FECG) plays an important role in assisting physicians to detect fetal changes in the womb and to make clinical decisions. However, in real situations, a clear FECG is difficult to extract because it is usually overwhelmed by the dominant maternal ECG and other contaminating noise such as baseline wander and high-frequency noise. In this paper, we propose a novel integrated adaptive algorithm based on independent component analysis (ICA), ensemble empirical mode decomposition (EEMD), and wavelet shrinkage (WS) denoising, denoted as ICA-EEMD-WS, for FECG separation and noise reduction. First, the ICA algorithm is used to separate the mixed abdominal ECG signal and obtain the noisy FECG. Second, the noise in the FECG is reduced by a three-step integrated algorithm comprising EEMD, statistical inference on the useful subcomponents with WS processing, and partial reconstruction for baseline wander reduction. Finally, we evaluate the proposed algorithm using simulated data sets. The results indicate that the proposed ICA-EEMD-WS outperforms the conventional algorithms in signal denoising. PMID:26429348

  8. Application of wavelet packet transform to compressing Raman spectra data

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    The wavelet transform has been established, alongside the Fourier transform, as a data-processing method in analytical fields. Its main fields of application are related to de-noising, compression, variable reduction, and signal suppression. Raman spectroscopy (RS) is characterized by frequency shifts that carry molecular information. Every substance has its own characteristic Raman spectrum, from which the structure, components, concentrations and other properties of a sample can be analyzed easily. RS is a powerful analytical tool for detection and identification, and many RS databases exist, but Raman spectral data require large storage space and long search times. In this paper, the wavelet packet transform is chosen to compress Raman spectra of some benzene-series compounds. The obtained results show that the energy retained is as high as 99.9% after compression, while the percentage of zero coefficients is 87.50%. It was concluded that the wavelet packet transform is significant for compressing RS data.
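
    A small sketch of wavelet-packet compression of a 1D spectrum with PyWavelets is shown below: packet coefficients below a magnitude threshold are zeroed, and the retained energy and zero fraction are reported. The wavelet, decomposition depth and kept fraction are illustrative, not the settings used for the benzene-series data in the paper.

        # Sketch: wavelet-packet compression of a 1D spectrum (illustrative parameters).
        import numpy as np
        import pywt

        def wp_compress(spectrum, wavelet='db8', maxlevel=5, keep=0.10):
            wp = pywt.WaveletPacket(data=spectrum, wavelet=wavelet, maxlevel=maxlevel)
            nodes = wp.get_level(maxlevel, order='natural')
            coeffs = np.concatenate([n.data for n in nodes])
            thr = np.quantile(np.abs(coeffs), 1 - keep)    # keep the largest 10% of coefficients

            total_energy = np.sum(coeffs ** 2)
            kept_energy, zeros = 0.0, 0
            for n in nodes:
                mask = np.abs(n.data) >= thr
                kept_energy += np.sum(n.data[mask] ** 2)
                zeros += np.count_nonzero(~mask)
                n.data = np.where(mask, n.data, 0.0)

            print(f"energy retained: {100 * kept_energy / total_energy:.2f}%, "
                  f"zero coefficients: {100 * zeros / coeffs.size:.2f}%")
            return wp.reconstruct(update=False)[:len(spectrum)]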

  9. Periodized Daubechies wavelets

    SciTech Connect

    Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.

    1996-03-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed, along with their counterparts which form a basis for L^2(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution for inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.

  10. EEG Artifact Removal Using a Wavelet Neural Network

    NASA Technical Reports Server (NTRS)

    Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    In this paper we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks with the time/frequency localization property of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.

  11. A wavelet approach to binary blackholes with asynchronous multitasking

    NASA Astrophysics Data System (ADS)

    Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice by combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.

  12. Spike detection in human muscle sympathetic nerve activity using the kurtosis of stationary wavelet transform coefficients

    PubMed Central

    Brychta, Robert J.; Shiavi, Richard; Robertson, David; Diedrich, André

    2007-01-01

    The accurate assessment of autonomic sympathetic function is important in the diagnosis and study of various autonomic and cardiovascular disorders. Sympathetic function in humans can be assessed by recording the muscle sympathetic nerve activity, which is characterized by synchronous neuronal discharges separated by periods of neural silence dominated by colored Gaussian noise. The raw nerve activity is generally rectified, integrated, and quantified using the integrated burst rate or area. We propose an alternative quantification involving spike detection using a two-stage stationary wavelet transform (SWT) de-noising method. The SWT coefficients are first separated into noise-related and burst-related coefficients on the basis of their local kurtosis. The noise-related coefficients are then used to establish a threshold to identify spikes within the bursts. This method demonstrated better detection performance than an unsupervised amplitude discriminator and similar wavelet-based methods when confronted with simulated data of varying burst rate and signal to noise ratio. Additional validation on data acquired during a graded head-up tilt protocol revealed a strong correlation between the mean spike rate and the mean integrated burst rate (r = 0.85) and burst area rate (r = 0.91). In conclusion, the kurtosis-based wavelet de-noising technique is a potentially useful method of studying sympathetic nerve activity in humans. PMID:17083982

  13. Spike detection in human muscle sympathetic nerve activity using the kurtosis of stationary wavelet transform coefficients.

    PubMed

    Brychta, Robert J; Shiavi, Richard; Robertson, David; Diedrich, André

    2007-03-15

    The accurate assessment of autonomic sympathetic function is important in the diagnosis and study of various autonomic and cardiovascular disorders. Sympathetic function in humans can be assessed by recording the muscle sympathetic nerve activity, which is characterized by synchronous neuronal discharges separated by periods of neural silence dominated by colored Gaussian noise. The raw nerve activity is generally rectified, integrated, and quantified using the integrated burst rate or area. We propose an alternative quantification involving spike detection using a two-stage stationary wavelet transform (SWT) de-noising method. The SWT coefficients are first separated into noise-related and burst-related coefficients on the basis of their local kurtosis. The noise-related coefficients are then used to establish a threshold to identify spikes within the bursts. This method demonstrated better detection performance than an unsupervised amplitude discriminator and similar wavelet-based methods when confronted with simulated data of varying burst rate and signal to noise ratio. Additional validation on data acquired during a graded head-up tilt protocol revealed a strong correlation between the mean spike rate and the mean integrated burst rate (r=0.85) and burst area rate (r=0.91). In conclusion, the kurtosis-based wavelet de-noising technique is a potentially useful method of studying sympathetic nerve activity in humans.
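
    The two-stage idea shared by the two records above can be sketched as follows: split the SWT detail coefficients into noise-like and burst-like windows by their kurtosis, derive an amplitude threshold from the noise-like windows, and mark samples exceeding it as spikes. The window length, kurtosis cut-off and threshold multiple are illustrative choices.

        # Sketch: kurtosis-guided SWT spike detection (illustrative parameters).
        import numpy as np
        import pywt
        from scipy.stats import kurtosis

        def detect_spikes(x, wavelet='db4', level=3, win=256, kurt_cut=1.0, n_sigma=4.0):
            n = len(x) - len(x) % 2 ** level                         # SWT needs length divisible by 2**level
            details = pywt.swt(x[:n], wavelet, level=level)[-1][1]   # finest-scale detail band

            noise_segments = []
            for start in range(0, n - win + 1, win):
                seg = details[start:start + win]
                if kurtosis(seg) < kurt_cut:                         # near-Gaussian window -> noise-like
                    noise_segments.append(seg)

            sigma = np.std(np.concatenate(noise_segments)) if noise_segments else np.std(details)
            return np.flatnonzero(np.abs(details) > n_sigma * sigma) # candidate spike indices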

  14. Stacked Denoising Autoencoders Applied to Star/Galaxy Classification

    NASA Astrophysics Data System (ADS)

    Qin, H. R.; Lin, J. M.; Wang, J. Y.

    2016-05-01

    In recent years, deep learning has become more and more popular because it adapts well to complex problems and achieves high accuracy, but it has not been used in astronomy. In order to address the problem that star/galaxy classification accuracy is high on the bright source set but low on the faint source set of the Sloan Digital Sky Survey (SDSS), we introduce the deep learning method of SDA (stacked denoising autoencoders) together with the dropout technique, which can greatly improve robustness and noise resistance. We randomly selected bright and faint source sets from DR12 and DR7 with spectroscopic measurements and preprocessed them. We then randomly selected training and testing sets, without replacement, from the bright and faint sets. Finally, we used the obtained training sets to train SDA models for SDSS-DR7 and SDSS-DR12. We compared the testing results with those of the Library for Support Vector Machines (LibSVM), J48, Logistic Model Trees (LMT), Support Vector Machine (SVM), Logistic Regression, and Decision Stump algorithms on the SDSS-DR12 testing set, and with the results of six kinds of decision trees on the SDSS-DR7 testing set. The simulations show that SDA achieves better classification accuracy than the other machine learning algorithms. When the completeness function is used as the test metric, the accuracy is improved by about 15% on the faint set of SDSS-DR7.

  15. A new method for mobile phone image denoising

    NASA Astrophysics Data System (ADS)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.

  16. Edge-preserving image denoising via optimal color space projection.

    PubMed

    Lian, Nai-Xiang; Zagorodnov, Vitali; Tan, Yap-Peng

    2006-09-01

    Denoising of color images can be done on each color component independently. Recent work has shown that exploiting strong correlation between high-frequency content of different color components can improve the denoising performance. We show that for typical color images high correlation also means similarity, and propose to exploit this strong intercolor dependency using an optimal luminance/color-difference space projection. Experimental results confirm that performing denoising on the projected color components yields superior denoising performance, both in peak signal-to-noise ratio and visual quality sense, compared to that of existing solutions. We also develop a novel approach to estimate directly from the noisy image data the image and noise statistics, which are required to determine the optimal projection.

  17. Short-term forecasting of groundwater levels under conditions of mine-tailings recharge using wavelet ensemble neural network models

    NASA Astrophysics Data System (ADS)

    Khalil, Bahaa; Broda, Stefan; Adamowski, Jan; Ozga-Zielinski, Bogdan; Donohoe, Amanda

    2015-02-01

    Several groundwater-level forecasting studies have shown that data-driven models are simpler, faster to develop, and provide more accurate and precise results than physical or numerical-based models. Five data-driven models were examined for the forecasting of groundwater levels as a result of recharge via tailings from an abandoned mine in Quebec, Canada, for lead times of 1 day, 1 week and 1 month. The five models are: a multiple linear regression (MLR); an artificial neural network (ANN); two models that are based on de-noising the model predictors using the wavelet-transform (W-MLR, W-ANN); and a W-ensemble ANN (W-ENN) model. The tailing recharge, total precipitation, and mean air temperature were used as predictors. The ANN models performed better than the MLR models, and both MLR and ANN models performed significantly better after de-noising the predictors using wavelet-transforms. Overall, the W-ENN model performed best for each of the three lead times. These results highlight the ability of wavelet-transforms to decompose non-stationary data into discrete wavelet-components, highlighting cyclic patterns and trends in the time-series at varying temporal scales, rendering the data readily usable in forecasting. The good performance of the W-ENN model highlights the usefulness of ensemble modeling, which ensures model robustness along with improved reliability by reducing variance.

  18. Wavelets meet genetic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Ping

    2005-08-01

    Genetic image analysis is an interdisciplinary area, which combines microscope image processing techniques with the use of biochemical probes for the detection of genetic aberrations responsible for cancers and genetic diseases. Recent years have witnessed parallel and significant progress in both image processing and genetics. On one hand, revolutionary multiscale wavelet techniques have been developed in signal processing and applied mathematics in the last decade, providing sophisticated tools for genetic image analysis. On the other hand, reaping the fruit of genome sequencing, high resolution genetic probes have been developed to facilitate accurate detection of subtle and cryptic genetic aberrations. In the meantime, however, they bring about computational challenges for image analysis. In this paper, we review the fruitful interaction between wavelets and genetic imaging. We show how wavelets offer a perfect tool to address a variety of chromosome image analysis problems. In fact, the same word "subband" has been used in the nomenclature of cytogenetics to describe the multiresolution banding structure of the chromosome, even before its appearance in the wavelet literature. The application of wavelets to chromosome analysis holds great promise in addressing several computational challenges in genetics. A variety of real world examples such as the chromosome image enhancement, compression, registration and classification will be demonstrated. These examples are drawn from fluorescence in situ hybridization (FISH) and microarray (gene chip) imaging experiments, which indicate the impact of wavelets on the diagnosis, treatments and prognosis of cancers and genetic diseases.

  19. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models. WANFIS7-sym10 yields the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that the model performance depends on the input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
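
    The general wavelet-plus-neural-network scheme can be sketched as below: decompose the series into approximation and detail components, build lagged features from each component, and fit a small MLP to forecast one step ahead. Decomposing the full record at once (done here for simplicity) lets future samples leak into the features, and the lag count, wavelet and network size are illustrative, not the paper's settings.

        # Sketch: wavelet-decomposed lag features feeding a small MLP forecaster (illustrative).
        import numpy as np
        import pywt
        from sklearn.neural_network import MLPRegressor

        def wavelet_features(series, wavelet='db10', level=3, lags=4):
            # Reconstruct each component at full length so it aligns with the original series
            coeffs = pywt.wavedec(series, wavelet, level=level)
            comps = []
            for i in range(len(coeffs)):
                keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
                comps.append(pywt.waverec(keep, wavelet)[:len(series)])

            X, y = [], []
            for t in range(lags, len(series) - 1):
                X.append(np.concatenate([c[t - lags:t] for c in comps]))
                y.append(series[t + 1])
            return np.array(X), np.array(y)

        # X, y = wavelet_features(water_level_series)
        # model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X[:-100], y[:-100])
        # print(model.score(X[-100:], y[-100:]))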

  20. Principal component analysis in the wavelet domain: new features for underwater object recognition

    NASA Astrophysics Data System (ADS)

    Okimoto, Gordon S.; Lemonds, David W.

    1999-08-01

    Principal component analysis (PCA) in the wavelet domain provides powerful features for underwater object recognition applications. The multiresolution analysis of the Morlet wavelet transform (MWT) is used to pre-process echo returns from targets ensonified by biologically motivated broadband signals. PCA is then used to compress and denoise the resulting time-scale signal representation for presentation to a hierarchical neural network for object classification. Wavelet/PCA features combined with multi-aspect data fusion and neural networks have resulted in impressive underwater object recognition performance using backscatter data generated by simulated dolphin echolocation clicks and bat-like linear frequency-modulated upsweeps. For example, wavelet/PCA features extracted from LFM echo returns have resulted in correct classification rates of 98.6 percent over a six-target suite, which includes two mine simulators and four clutter objects. For the same data, ROC analysis of the two-class mine-like versus non-mine-like problem resulted in a probability of detection of 0.981 and a probability of false alarm of 0.032 at the 'optimal' operating point. The wavelet/PCA feature extraction algorithm is currently being implemented in VLSI for use in small, unmanned underwater vehicles designed for mine-hunting operations in shallow water environments.

  1. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.

  2. An online novel adaptive filter for denoising time series measurements.

    PubMed

    Willis, Andrew J

    2006-04-01

    A nonstationary form of the Wiener filter based on principal components analysis is described for filtering time series data possibly derived from noisy instrumentation. The theory of the filter is developed, implementation details are presented and two examples are given. The filter operates online, approximating the maximum a posteriori optimal Bayes reconstruction of a signal with arbitrarily distributed and nonstationary statistics. PMID:16649562

  3. Denoising of B1+ field maps for noise-robust image reconstruction in electrical properties tomography

    SciTech Connect

    Michel, Eric; Hernandez, Daniel; Cho, Min Hyoung; Lee, Soo Yeol

    2014-10-15

    Purpose: To validate the use of adaptive nonlinear filters in reconstructing conductivity and permittivity images from the noisy B1+ maps in electrical properties tomography (EPT). Methods: In EPT, electrical property images are computed by taking the Laplacian of the B1+ maps. To mitigate the noise amplification in computing the Laplacian, the authors applied adaptive nonlinear denoising filters to the measured complex B1+ maps. After the denoising process, they computed the Laplacian by central differences. They performed EPT experiments on phantoms and a human brain at 3 T along with corresponding EPT simulations on finite-difference time-domain models. They evaluated the EPT images comparing them with the ones obtained by previous EPT reconstruction methods. Results: In both the EPT simulations and experiments, the nonlinear filtering greatly improved the EPT image quality when evaluated in terms of the mean and standard deviation of the electrical property values at the regions of interest. The proposed method also improved the overall similarity between the reconstructed conductivity images and the true shapes of the conductivity distribution. Conclusions: The nonlinear denoising enabled us to obtain better-quality EPT images of the phantoms and the human brain at 3 T.

  4. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple volumes of interest (VOIs) were selected across multiple high-resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures was used to assess the quality of supervised clusters in the original and filtered spaces. The resultant rank orders were analyzed using the Borda criteria to find the denoising/similarity-measure combination that gives the best cluster quality. Our exhaustive analysis reveals that (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.

  5. Wavelet Approach for Operational Gamma Spectral Peak Detection - Preliminary Assessment

    SciTech Connect

    2012-02-01

    Gamma spectroscopy for radionuclide identifications typically involves locating spectral peaks and matching the spectral peaks with known nuclides in the knowledge base or database. Wavelet analysis, due to its ability to fit localized features, offers the potential for automatic detection of spectral peaks. Past studies of wavelet technologies for gamma spectra analysis essentially focused on direct fitting of raw gamma spectra. Although most of those studies demonstrated the potential of peak detection using wavelets, they often failed to produce new benefits to operational adaptations for radiological surveys. This work presents a different approach whose operational objective is to detect only the nuclides that do not exist in the environment (anomalous nuclides). With this operational objective, the raw-count spectrum collected by a detector is first converted to a count-rate spectrum, followed by background subtraction prior to wavelet analysis. The experimental results suggest that this preprocess is independent of detector type and background radiation, and is capable of improving the peak detection rates using wavelets. This process opens the door to practical adaptation of wavelet technologies for gamma spectral surveying devices.
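
    A minimal sketch of the preprocessing step described above (raw counts to count rate, background subtraction, then wavelet peak search). SciPy's continuous-wavelet peak detector stands in for the paper's wavelet analysis, and the width range is an assumed parameter.

    ```python
    import numpy as np
    from scipy.signal import find_peaks_cwt

    def anomalous_peaks(raw_counts, live_time, background_rate, widths=np.arange(2, 12)):
        """Convert the raw-count spectrum to a count-rate spectrum, subtract the
        background rate, then run a wavelet-based peak search on the residual."""
        rate = raw_counts / live_time            # counts -> count rate
        residual = rate - background_rate        # suppress nuclides already in the environment
        residual = np.clip(residual, 0.0, None)
        # SciPy's ricker-wavelet peak detector stands in for the paper's wavelet analysis.
        return find_peaks_cwt(residual, widths)
    ```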

  6. A fast optimization transfer algorithm for image inpainting in wavelet domains.

    PubMed

    Chan, Raymond H; Wen, You-Wei; Yip, Andy M

    2009-07-01

    A wavelet inpainting problem refers to the problem of filling in missing wavelet coefficients in an image. A variational approach was used by Chan et al. The resulting functional was minimized by the gradient descent method. In this paper, we use an optimization transfer technique, which replaces their univariate functional with a bivariate functional obtained by adding an auxiliary variable. Our bivariate functional can be minimized easily by alternating minimization: for the auxiliary variable, the minimum has a closed form solution, and for the original variable, the minimization problem can be formulated as a classical total variation (TV) denoising problem and, hence, can be solved efficiently using a dual formulation. We show that our bivariate functional is equivalent to the original univariate functional. We also show that our alternating minimization is convergent. Numerical results show that the proposed algorithm is very efficient and outperforms that of Chan et al.
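
    A schematic statement of the optimization transfer idea described above, with notation assumed for illustration (P_Λ masks the known wavelet coefficients d, W is the wavelet transform, β the coupling weight); the exact functional of the paper may differ.

    ```latex
    % Original (univariate) wavelet-inpainting functional and its bivariate surrogate.
    F(u) = \tfrac{1}{2}\,\| P_{\Lambda}(W u - d) \|_2^2 + \lambda\,\mathrm{TV}(u),
    \qquad
    G(u, v) = \tfrac{1}{2}\,\| P_{\Lambda}(W v - d) \|_2^2 + \tfrac{\beta}{2}\,\| u - v \|_2^2 + \lambda\,\mathrm{TV}(u).
    % Alternating minimization: the v-step is quadratic (closed form); the u-step is a
    % classical TV denoising problem with data v, solvable by a dual method.
    ```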

  7. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. In order to make a quantitative assessment of the algorithms, the Peak Signal to Noise Ratio (PSNR) index and the Structural Similarity (SSIM) index are calculated to assess the denoising effect in terms of gray-level fidelity and structure-level fidelity, respectively. Quantitative analysis of the experimental results, which is consistent with the visual quality of the denoised images, shows that the GMCA algorithm is highly effective for remote sensing image denoising. Indeed, the image recovered by GMCA is visually hard to distinguish from the original noiseless image.

  8. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.

  9. Evaluation of denoising algorithms for biological electron tomography.

    PubMed

    Narasimha, Rajesh; Aganj, Iman; Bennett, Adam E; Borgnia, Mario J; Zabransky, Daniel; Sapiro, Guillermo; McLaughlin, Steven W; Milne, Jacqueline L S; Subramaniam, Sriram

    2008-10-01

    Tomograms of biological specimens derived using transmission electron microscopy can be intrinsically noisy due to the use of low electron doses, the presence of a "missing wedge" in most data collection schemes, and inaccuracies arising during 3D volume reconstruction. Before tomograms can be interpreted reliably, for example, by 3D segmentation, it is essential that the data be suitably denoised using procedures that can be individually optimized for specific data sets. Here, we implement a systematic procedure to compare various nonlinear denoising techniques on tomograms recorded at room temperature and at cryogenic temperatures, and establish quantitative criteria to select a denoising approach that is most relevant for a given tomogram. We demonstrate that using an appropriate denoising algorithm facilitates robust segmentation of tomograms of HIV-infected macrophages and Bdellovibrio bacteria obtained from specimens at room and cryogenic temperatures, respectively. We validate this strategy of automated segmentation of optimally denoised tomograms by comparing its performance with manual extraction of key features from the same tomograms.

  10. Signal quality enhancement using higher order wavelets for ultrasonic TOFD signals from austenitic stainless steel welds.

    PubMed

    Praveen, Angam; Vijayarekha, K; Abraham, Saju T; Venkatraman, B

    2013-09-01

    The time of flight diffraction (TOFD) technique is a well-developed ultrasonic non-destructive testing (NDT) method and has been applied successfully for accurate sizing of defects in metallic materials. The technique was developed in the early 1970s as a means for accurate sizing and positioning of cracks in nuclear components; it became very popular in the late 1990s and is today widely used in various industries for weld inspection. One of the main advantages of TOFD is that, apart from being a fast technique, it provides a higher probability of detection for linear defects. Since TOFD is based on diffraction of sound waves from the extremities of the defect, rather than reflection from planar faces as in pulse echo and phased array, the resultant signal is quite weak and the signal-to-noise ratio (SNR) low. In many cases the defect signal is submerged in this noise, making detection, positioning and sizing difficult. Several signal processing methods such as digital filtering, Split Spectrum Processing (SSP), Hilbert Transform and Correlation techniques have been developed in order to suppress unwanted noise and enhance the quality of the defect signal, which can thus be used for characterization of defects and the material. Wavelet Transform based thresholding techniques have been applied largely for de-noising of ultrasonic signals. In this paper, higher order wavelets are used to analyze the de-noising performance for TOFD signals obtained from austenitic stainless steel welds. It is observed that higher order wavelets give greater SNR improvement compared to the lower order wavelets.
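
    The comparison of wavelet orders can be reproduced in outline with a standard soft-threshold denoiser; the sketch below (PyWavelets, universal threshold, 'db2' versus 'db8' or higher) is an assumed generic scheme, not the exact procedure used by the authors.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet='db8', level=4):
        """Universal-threshold soft shrinkage; compare e.g. wavelet='db2' against
        'db8' or 'db20' to reproduce the low- versus high-order comparison in outline."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest scale
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))    # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(signal)]

    def snr_db(clean, estimate):
        """SNR in dB of a denoised estimate against a clean reference trace."""
        return 10.0 * np.log10(np.sum(clean ** 2) / np.sum((clean - estimate) ** 2))
    ```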

  11. Non-local MRI denoising using random sampling.

    PubMed

    Hu, Jinrong; Zhou, Jiliu; Wu, Xi

    2016-09-01

    In this paper, we propose a random sampling non-local mean (SNLM) algorithm to eliminate noise in 3D MRI datasets. Non-local means (NLM) algorithms have been implemented efficiently for MRI denoising, but are always limited by high computational complexity. Compared to conventional methods, which raster through the entire search window when computing similarity weights, the proposed SNLM algorithm randomly selects a small subset of voxels, which dramatically decreases the computational burden while yielding competitive denoising results. Moreover, a structure tensor, which encapsulates higher-order information, was introduced as an optimal sampling pattern for further improvement. Numerical experiments demonstrated that the proposed SNLM method achieves a good balance between denoising quality and computational efficiency. At a relative sampling ratio of ξ=0.05, SNLM can remove noise as effectively as full NLM, while the running time is reduced to 1/20 of NLM's. PMID:27114338
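
    A toy 2D sketch of the random-sampling idea described above: only a random fraction of the search window contributes to the non-local-means weights. Patch size, search radius, filtering parameter h and the sampling ratio are illustrative, and the structure-tensor-guided sampling refinement is omitted.

    ```python
    import numpy as np

    def snlm_pixel(image, idx, patch=2, search=7, h=0.1, ratio=0.05, rng=None):
        """Random-sampling non-local means for a single 2D pixel: only a random
        subset of the search window contributes similarity weights. Assumes `idx`
        lies far enough from the image border for all windows to fit."""
        rng = rng or np.random.default_rng()
        i, j = idx
        ref = image[i - patch:i + patch + 1, j - patch:j + patch + 1]
        # All candidate patch centres inside the search window.
        cand = [(r, c) for r in range(i - search, i + search + 1)
                       for c in range(j - search, j + search + 1)]
        keep = rng.choice(len(cand), size=max(1, int(ratio * len(cand))), replace=False)
        num = den = 0.0
        for k in keep:
            r, c = cand[k]
            blk = image[r - patch:r + patch + 1, c - patch:c + patch + 1]
            w = np.exp(-np.mean((ref - blk) ** 2) / h ** 2)   # patch-similarity weight
            num += w * image[r, c]
            den += w
        return num / den
    ```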

  12. Total Variation Denoising and Support Localization of the Gradient

    NASA Astrophysics Data System (ADS)

    Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.

    2016-10-01

    This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges, but at the same time, might introduce some staircasing (i.e. “fake” edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piecewise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes and, in some cases, an upper bound on the convergence rate is given.
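
    For orientation, the total variation denoising model under discussion is the Rudin-Osher-Fatemi functional; the weighting convention below is one common form and may differ from the paper's notation.

    ```latex
    % Total variation (ROF) denoising of a noisy image f on a domain Omega:
    u_{\lambda} \in \arg\min_{u \in \mathrm{BV}(\Omega)} \; \tfrac{1}{2}\int_{\Omega} (u - f)^2 \,\mathrm{d}x \;+\; \lambda\,|Du|(\Omega),
    % where |Du|(Omega) denotes the total variation of u and lambda > 0 balances
    % fidelity against regularity.
    ```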

  13. Wavelet evolutionary network for complex-constrained portfolio rebalancing

    NASA Astrophysics Data System (ADS)

    Suganya, N. C.; Vijayalakshmi Pai, G. A.

    2012-07-01

    The portfolio rebalancing problem deals with resetting the proportion of different assets in a portfolio with respect to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class and proportional transaction cost. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of the complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportion of investment in the portfolio of assets and also to rebalance them after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The results obtained using WEN are compared with those of its only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and verify that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are carried out to prove the robustness and efficiency of WEN over the HEN strategy.

  14. Quantitative improvement in geology interpretation from remotely sensed data by denoising the haze

    NASA Astrophysics Data System (ADS)

    Yang, Hai-ping; Liu, Xiu-guo; Liu, Fu-jiang; Wu, Guo-ping; Shi, Jin-ping; Mei, Lin-lu

    2008-12-01

    The development of remote geological interpretation technology has boomed in recent years. However, there is a significant obstacle to extracting geology information from remote sensing imagery--the presence of clouds and their shadows. Diverse techniques have been proposed to solve the problem, including filtering algorithms and multi-temporal cloud-removal algorithms. This paper presents a modified solution to denoise the haze, based on ETM+ imagery. First of all, a wavelet transform is applied to the Band1, Band2 and Band3 imagery to determine the clear region and different levels of cloud regions. Then all pixels of the ETM+ imagery are classified into specific cover types after cluster analysis of Band4, Band5 and Band7. At last, mean reflectance matching is performed in the first three bands separately according to the different cover types in both the clear region and the cloud region. The method is implemented in IDL. The results show that this modified method can not only quantitatively determine the cloud area but also remove cloud from the imagery efficiently. Moreover, compared with the homomorphic filtering method, the results of the proposed method are much more satisfactory for geology interpretation.

  15. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  16. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-05-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.

  17. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    PubMed

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing and thus inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on the benchmark images and Berkeley segmentation data sets demonstrate that the DPLG algorithm improves the structural similarity (a measure of perceptual visual quality) of denoised images and produces peak signal-to-noise ratio values competitive with popular image denoising algorithms.

  18. Nonlocal transform-domain filter for volumetric data denoising and reconstruction.

    PubMed

    Maggioni, Matteo; Katkovnik, Vladimir; Egiazarian, Karen; Foi, Alessandro

    2013-01-01

    We present an extension of the BM3D filter to volumetric data. The proposed algorithm, BM4D, implements the grouping and collaborative filtering paradigm, where mutually similar d-dimensional patches are stacked together in a (d+1)-dimensional array and jointly filtered in transform domain. While in BM3D the basic data patches are blocks of pixels, in BM4D we utilize cubes of voxels, which are stacked into a 4-D "group." The 4-D transform applied on the group simultaneously exploits the local correlation present among voxels in each cube and the nonlocal correlation between the corresponding voxels of different cubes. Thus, the spectrum of the group is highly sparse, leading to very effective separation of signal and noise through coefficient shrinkage. After inverse transformation, we obtain estimates of each grouped cube, which are then adaptively aggregated at their original locations. We evaluate the algorithm on denoising of volumetric data corrupted by Gaussian and Rician noise, as well as on reconstruction of volumetric phantom data with non-zero phase from noisy and incomplete Fourier-domain (k-space) measurements. Experimental results demonstrate the state-of-the-art denoising performance of BM4D, and its effectiveness when exploited as a regularizer in volumetric data reconstruction. PMID:22868570

  19. [A novel method to determine the redshifts of active galaxies based on wavelet transform].

    PubMed

    Tu, Liang-Ping; Luo, A-Li; Jiang, Bin; Wei, Peng; Zhao, Yong-Heng; Liu, Rong

    2012-10-01

    Automatically determining the redshifts of galaxies is very important for astronomical research on large samples, such as studies of the large-scale structure of cosmological significance. Galaxies are generally divided into normal galaxies and active galaxies, and the spectra of active galaxies mostly have more obvious emission lines. In the present paper, the authors present a novel method, based mainly on the wavelet transform, to rapidly determine the spectral redshifts of active galaxies; it does not need to extract line information accurately. The method includes the following steps: Firstly, the spectrum to be processed is denoised; Secondly, the low-frequency spectrum is extracted by the wavelet transform, and the residual spectrum is obtained by subtracting the low-frequency spectrum from the denoised spectrum; Thirdly, the standard deviation of the residual spectrum is calculated and a threshold value T is determined, retaining the set of wavelengths whose corresponding flux is greater than T; Fourthly, according to the wavelengths of all the standard lines, all the candidate redshifts are calculated; Finally, utilizing density estimation based on a Parzen window, the redshift point with maximum density is determined, and the average value of its neighborhood is taken as the final redshift of the spectrum. Experiments on simulated data and real data from SDSS-DR7 show that this method is robust and its correct rate is encouraging, and it can be expected to be applied in the LAMOST project.
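
    A compressed sketch of steps two to four above (continuum removal via the wavelet approximation, thresholding of the residual, candidate redshifts from all peak/rest-line pairs); the wavelet, level and threshold factor are assumed, and the final Parzen-window density step that selects the most likely redshift is omitted.

    ```python
    import numpy as np
    import pywt

    def candidate_redshifts(wavelength, flux, rest_lines, wavelet='db4', level=6, k=3.0):
        """Continuum removal, residual thresholding and candidate redshifts
        z = lambda_obs / lambda_rest - 1 for every (peak, rest line) pair.
        `wavelength` and `flux` are 1D numpy arrays of equal length."""
        coeffs = pywt.wavedec(flux, wavelet, level=level)
        # Low-frequency spectrum reconstructed from the approximation coefficients only.
        continuum = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                                 wavelet)[:len(flux)]
        residual = flux - continuum
        thr = k * np.std(residual)                # threshold T from the residual spread
        peaks = wavelength[residual > thr]        # wavelengths retained above T
        z = np.array([obs / rest - 1.0 for obs in peaks for rest in rest_lines])
        return z[z >= 0.0]
    ```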

  20. GPU-Accelerated Denoising in 3D (GD3D)

    2013-10-01

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. This software addresses two facets of this promising application: what tuning is necessary to achieve optimal performance on a modern GPU? And what parameters yield the best denoising results in practice? To answer the first question, the software performs an autotuning step to empirically determine optimal memory blocking on the GPU. To answer the second, it performs a sweep of algorithm parameters to determine the combination that best reduces the mean squared error relative to a noiseless reference image.

  1. Iterative PET Image Reconstruction Using Translation Invariant Wavelet Transform

    PubMed Central

    Zhou, Jian; Senhadji, Lotfi; Coatrieux, Jean-Louis; Luo, Limin

    2009-01-01

    The present work describes a Bayesian maximum a posteriori (MAP) method using a statistical multiscale wavelet prior model. Rather than using the orthogonal discrete wavelet transform (DWT), this prior is built on the translation invariant wavelet transform (TIWT). The statistical modeling of wavelet coefficients relies on the generalized Gaussian distribution. Image reconstruction is performed in spatial domain with a fast block sequential iteration algorithm. We study theoretically the TIWT MAP method by analyzing the Hessian of the prior function to provide some insights on noise and resolution properties of image reconstruction. We adapt the key concept of local shift invariance and explore how the TIWT MAP algorithm behaves with different scales. It is also shown that larger support wavelet filters do not offer better performance in contrast recovery studies. These theoretical developments are confirmed through simulation studies. The results show that the proposed method is more attractive than other MAP methods using either the conventional Gibbs prior or the DWT-based wavelet prior. PMID:21869846
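
    Schematically, the MAP objective implied by the abstract combines a Poisson log-likelihood for the PET data with a generalized Gaussian prior on the translation invariant wavelet coefficients; the notation below is illustrative rather than the paper's own.

    ```latex
    % y: measured PET data, A: system matrix, W: translation-invariant wavelet transform,
    % beta: prior weight, p: shape parameter of the generalized Gaussian prior.
    \hat{u} \;=\; \arg\max_{u \ge 0} \; \sum_{i} \Big( y_i \log [A u]_i - [A u]_i \Big) \;-\; \beta \sum_{j} \big| [W u]_j \big|^{\,p}.
    ```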

  2. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.

  3. Pixon Based Image Denoising Scheme by Preserving Exact Edge Locations

    NASA Astrophysics Data System (ADS)

    Srikrishna, Atluri; Reddy, B. Eswara; Pompapathi, Manasani

    2016-09-01

    Denoising of an image is an essential step in many image processing applications. In any image de-noising algorithm, it is a major concern to keep interesting structures of the image, such as abrupt changes in image intensity values (edges). In this paper an efficient algorithm for image de-noising is proposed that recovers a coherent, continuous approximation of the original image from a noisy image using diffusion equations in the pixon domain. The process mainly consists of two steps. In the first step, the pixons for the noisy image are obtained by a K-means clustering process, and the next step applies diffusion equations to the pixonal model of the image to obtain new intensity values for the restored image. The process has been applied to a variety of standard images and the objective fidelity has been compared with existing algorithms. The experimental results show that the proposed algorithm performs better, preserving edge details, in terms of Figure of Merit and improved peak signal-to-noise ratio. The proposed method brings out a denoising technique which preserves edge details.

  4. Denoising and dimensionality reduction of genomic data

    NASA Astrophysics Data System (ADS)

    Capobianco, Enrico

    2005-05-01

    Genomics represents a challenging research field for many quantitative scientists, and recently a vast variety of statistical techniques and machine learning algorithms have been proposed and inspired by cross-disciplinary work with computational and systems biologists. In genomic applications, the researcher deals with noisy and complex high-dimensional feature spaces; a wealth of genes, whose expression levels are experimentally measured, can often be observed for just a few time points, thus limiting the available samples. This unbalanced combination suggests that it might be hard for standard statistical inference techniques to come up with good general solutions, likewise for machine learning algorithms to avoid heavy computational work. Thus, one naturally turns to two major aspects of the problem: sparsity and intrinsic dimensionality. These two aspects are studied in this paper, where for both denoising and dimensionality reduction, a very efficient technique, i.e., Independent Component Analysis, is used. The numerical results are very promising, and lead to a very good quality of gene feature selection, due to the signal separation power enabled by the decomposition technique. We investigate how the use of replicates can improve these results, and deal with noise through a stabilization strategy which combines the estimated components and extracts the most informative biological information from them. Exploiting the inherent level of sparsity is a key issue in genetic regulatory networks, where the connectivity matrix needs to account for the real links among genes and discard many redundancies. Most experimental evidence suggests that real gene-gene connections represent indeed a subset of what is usually mapped onto either a huge gene vector or a typically dense and highly structured network. Inferring gene network connectivity from the expression levels represents a challenging inverse problem that is at present stimulating key research in biomedical

  5. Dictionary-based image denoising for dual energy computed tomography

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Allner, Sebastian; Mei, Kai; Pfeiffer, Franz; Noël, Peter B.

    2016-03-01

    Compared to conventional computed tomography (CT), dual energy CT allows for improved material decomposition by conducting measurements at two distinct energy spectra. Since radiation exposure is a major concern in clinical CT, there is a need for tools to reduce the noise level in images while preserving diagnostic information. One way to achieve this goal is the application of image-based denoising algorithms after an analytical reconstruction has been performed. We have developed a modified dictionary denoising algorithm for dual energy CT aimed at exploiting the high spatial correlation between images obtained from different energy spectra. Both the low- and high-energy images are partitioned into small patches which are subsequently normalized. Combined patches with improved signal-to-noise ratio are formed by a weighted addition of corresponding normalized patches from both images. Assuming that corresponding low- and high-energy image patches are related by a linear transformation, the signal in both patches is added coherently while noise is neglected. Conventional dictionary denoising is then performed on the combined patches. Compared to conventional dictionary denoising and bilateral filtering, our algorithm achieved superior performance in terms of qualitative and quantitative image quality measures. We demonstrate, in simulation studies, that this approach can produce 2d-histograms of the high- and low-energy reconstruction which are characterized by significantly improved material features and separation. Moreover, in comparison to other approaches that attempt denoising without simultaneously using both energy signals, superior similarity to the ground truth can be found with our proposed algorithm.

  6. Market turning points forecasting using wavelet analysis

    NASA Astrophysics Data System (ADS)

    Bai, Limiao; Yan, Sen; Zheng, Xiaolian; Chen, Ben M.

    2015-11-01

    Based on the system adaptation framework we previously proposed, a frequency domain based model is developed in this paper to forecast the major turning points of stock markets. This system adaptation framework has its internal model and adaptive filter to capture the slow and fast dynamics of the market, respectively. The residue of the internal model is found to contain rich information about the market cycles. In order to extract and restore its informative frequency components, we use wavelet multi-resolution analysis with time-varying parameters to decompose this internal residue. An empirical index is then proposed based on the recovered signals to forecast the market turning points. This index is successfully applied to US, UK and China markets, where all major turning points are well forecasted.

  7. Denoising surface renewal flux density measurements

    NASA Astrophysics Data System (ADS)

    Shapland, T.; Paw U, K.; Snyder, R. L.; McElrone, A.; Calderon Orellana, A.; Williams, L.

    2012-12-01

    When combined with net radiation and ground heat flux density measurements, surface renewal sensible heat flux density measurements can be used to obtain latent heat flux density, and therefore evapotranspiration, via the energy balance residual. Surface renewal is based on analyzing the energy and mass budget of air parcels that interact with plant canopies. The air parcels are manifested as ramp-like shapes in turbulent scalar time series data, and the amplitude and period of the ramps are used to calculate the flux densities. The root mean square error between calibrated surface renewal and eddy covariance is generally twice the root mean square error between two eddy covariance systems. In this presentation, we evaluate the efficacy of various methods for reducing the random error in surface renewal sensible heat flux density measurements. These methods include signal de-spiking, conventional low-pass filtering, wavelet-based filtering, ramp signal to noise thresholds, ramp period scaling, novel rearrangements of the Van Atta procedure (Arch Mech 29:161-171, 1977) for resolving the ramp amplitude and ramp period, sensor replication, and optimization of sensor placement.

  8. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.

  9. A generalized wavelet extrema representation

    SciTech Connect

    Lu, Jian; Lades, M.

    1995-10-01

    The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in reconstructed signals.

  10. Finite element-wavelet hybrid algorithm for atmospheric tomography.

    PubMed

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2014-03-01

    Reconstruction of the refractive index fluctuations in the atmosphere, or atmospheric tomography, is an underlying problem of many next generation adaptive optics (AO) systems, such as the multiconjugate adaptive optics or multiobject adaptive optics (MOAO). The dimension of the problem for the extremely large telescopes, such as the European Extremely Large Telescope (E-ELT), suggests the use of iterative schemes as an alternative to the matrix-vector multiply (MVM) methods. Recently, an algorithm based on the wavelet representation of the turbulence has been introduced in [Inverse Probl.29, 085003 (2013)] by the authors to solve the atmospheric tomography using the conjugate gradient iteration. The authors also developed an efficient frequency-dependent preconditioner for the wavelet method in a later work. In this paper we study the computational aspects of the wavelet algorithm. We introduce three new techniques, the dual domain discretization strategy, a scale-dependent preconditioner, and a ground layer multiscale method, to derive a method that is globally O(n), parallelizable, and compact with respect to memory. We present the computational cost estimates and compare the theoretical numerical performance of the resulting finite element-wavelet hybrid algorithm with the MVM. The quality of the method is evaluated in terms of an MOAO simulation for the E-ELT on the European Southern Observatory (ESO) end-to-end simulation system OCTOPUS. The method is compared to the ESO version of the Fractal Iterative Method [Proc. SPIE7736, 77360X (2010)] in terms of quality.

  12. Wavelet transform based on the optimal wavelet pairs for tunable diode laser absorption spectroscopy signal processing.

    PubMed

    Li, Jingsong; Yu, Benli; Fischer, Horst

    2015-04-01

    This paper presents a novel methodology based on the discrete wavelet transform (DWT) and the choice of optimal wavelet pairs to adaptively process tunable diode laser absorption spectroscopy (TDLAS) spectra for quantitative analysis, such as molecular spectroscopy and trace gas detection. The proposed methodology aims to construct an optimal calibration model for a TDLAS spectrum, regardless of its background structural characteristics, thus facilitating the application of TDLAS as a powerful tool for analytical chemistry. The performance of the proposed method is verified by analysis of both synthetic and observed signals, characterized by different noise levels and baseline drift. Both fitting precision and signal-to-noise ratio are improved significantly using the proposed method.

  13. Spectral Laplace-Beltrami wavelets with applications in medical images.

    PubMed

    Tan, Mingzhen; Qiu, Anqi

    2015-05-01

    The spectral graph wavelet transform (SGWT) has recently been developed to compute wavelet transforms of functions defined on non-Euclidean spaces such as graphs. By capitalizing on the established framework of the SGWT, we adopt a fast and efficient computation of a discretized Laplace-Beltrami (LB) operator that allows its extension from arbitrary graphs to differentiable and closed 2-D manifolds (smooth surfaces embedded in the 3-D Euclidean space). This particular class of manifolds is widely used in bioimaging to characterize the morphology of cells, tissues, and organs. They are often discretized into triangular meshes, providing additional geometric information apart from simple nodes and weighted connections in graphs. In comparison with the SGWT, the wavelet bases constructed with the LB operator are spatially localized with a more uniform "spread" with respect to the underlying curvature of the surface. In our experiments, we first use synthetic data to show that traditional applications of wavelets in smoothing and edge detection can be done using the wavelet bases constructed with the LB operator. Second, we show that the multi-resolutional capabilities of the proposed framework are applicable to the classification of Alzheimer's patients versus normal subjects using hippocampal shapes. Wavelet transforms of the hippocampal shape deformations at finer resolutions registered higher sensitivity (96%) and specificity (90%) than the classification results obtained from the direct usage of hippocampal shape deformations. In addition, the Laplace-Beltrami method consistently requires a smaller number of principal components (to retain a fixed variance) at higher resolution as compared to the binary and weighted graph Laplacians, demonstrating the potential of the wavelet bases in adapting to the geometry of the underlying manifold.

  14. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
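
    The feature-extraction step described above can be sketched as per-scale energies of the discrete wavelet decomposition of an AE segment; the wavelet and decomposition level are assumptions, and the genetic clustering stage is not shown.

    ```python
    import numpy as np
    import pywt

    def wavelet_energy_features(ae_segment, wavelet='db4', level=5):
        """Normalized per-scale energies of one AE signal segment; these serve as the
        discriminant features that a clustering stage (not shown) would consume."""
        coeffs = pywt.wavedec(ae_segment, wavelet, level=level)
        energy = np.array([np.sum(c ** 2) for c in coeffs])
        return energy / energy.sum()
    ```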

  15. Wavelet Leaders: A new method to estimate the multifractal singularity spectra

    NASA Astrophysics Data System (ADS)

    Serrano, E.; Figliola, A.

    2009-07-01

    Wavelet Leaders is a novel alternative based on wavelet analysis for estimating the multifractal spectrum. It was proposed by Jaffard and co-workers, improving on the usual wavelet methods. In this work, we analyze and compare it with the well known Multifractal Detrended Fluctuation Analysis. The latter is a comprehensible and well adapted method for natural and weakly stationary signals. Alternatively, Wavelet Leaders exploits the wavelet self-similarity structures combined with the multiresolution analysis scheme. We give a brief introduction to the multifractal formalism and the particular implementation of the above methods, and we compare their effectiveness. We examine several cases: Cantor measures, binomial multiplicative cascades and also natural series from a tonic-clonic epileptic seizure. We analyze the results and draw conclusions.
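
    For reference, the standard wavelet-leader estimate of the multifractal spectrum proceeds through leader structure functions and a Legendre transform; this is the generic formalism, not notation taken from the paper.

    ```latex
    % L_{j,k}: wavelet leaders at scale 2^j, n_j: number of leaders at that scale.
    S(q, j) \;=\; \frac{1}{n_j}\sum_{k} L_{j,k}^{\,q} \;\sim\; 2^{\,j\,\zeta(q)} \quad \text{(fine scales)},
    \qquad
    D(h) \;\le\; \inf_{q \neq 0}\,\big( 1 + q\,h - \zeta(q) \big).
    ```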

  16. An Introduction to Wavelet Theory and Analysis

    SciTech Connect

    Miner, N.E.

    1998-10-01

    This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.

  17. Wavelet application to the time series analysis of DORIS station coordinates

    NASA Astrophysics Data System (ADS)

    Bessissi, Zahia; Terbeche, Mekki; Ghezali, Boualem

    2009-06-01

    The topic developed in this article relates to the analysis of residual time series of DORIS station coordinates using the wavelet transform. Several analysis techniques, already developed in other disciplines, were employed in the statistical study of the geodetic time series of stations. The wavelet transform allows one, on the one hand, to characterize the residual signals in time and frequency, and on the other hand, to determine and quantify systematic signals such as periodicity and tendency. Tendency is the change in the signal over the short or long term; it is an average curve which represents the general pace of the signal evolution. Periodicity, on the other hand, is a process which repeats itself, identical to itself, after a time interval called the period. In this context, the aim of this article is, on the one hand, to determine the systematic signals by wavelet analysis of the time series of DORIS station coordinates, and on the other hand, to apply wavelet-packet denoising to the signal, which makes it possible to obtain a well-filtered signal, smoother than the original signal. The DORIS data used in the treatment are a set of weekly residual time series from 1993 to 2004 from eight stations: DIOA, COLA, FAIB, KRAB, SAKA, SODB, THUB and SYPB. It is the ign03wd01 solution expressed in stcd format, which is derived by the IGN/JPL analysis center. Although these data are not very recent, the goal of this study is to assess the contribution of the wavelet analysis method on the DORIS data, compared to the other analysis methods already studied.

  18. Diffusion weighted image denoising using overcomplete local PCA.

    PubMed

    Manjón, José V; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process that complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
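
    The core shrinkage operation of the filter can be sketched for a single local block of multicomponent data (voxels by diffusion directions); the hard threshold τ on the singular values and the absence of block aggregation are simplifications of the authors' overcomplete scheme.

    ```python
    import numpy as np

    def local_pca_denoise_block(block, tau):
        """Shrink the less-significant principal components of one local block of
        multicomponent data (rows: voxels, columns: diffusion-weighted directions)."""
        mean = block.mean(axis=0)
        U, s, Vt = np.linalg.svd(block - mean, full_matrices=False)
        s[s < tau] = 0.0                   # discard low-variance (noise) components
        return U @ np.diag(s) @ Vt + mean
    ```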

  20. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
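
    A rough sketch of the pipeline above, assuming SciPy's thin-plate Rbf as the radial basis fit and a simple out-of-bag error estimate for selecting the smoothing parameter; the k-nearest-neighbour search and the exact bootstrap estimator of the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    def smooth_point_set(points, smooth_candidates, n_boot=20, seed=0):
        """Pick a thin-plate smoothing parameter from out-of-bag error on bootstrap
        resamples, then project the heights onto the fitted surface.
        `points` is an (n, 3) array of distinct (x, y, z) samples."""
        rng = np.random.default_rng(seed)
        x, y, z = points.T
        n = len(x)
        best_s, best_err = smooth_candidates[0], np.inf
        for s in smooth_candidates:
            errs = []
            for _ in range(n_boot):
                idx = np.unique(rng.integers(0, n, n))   # unique indices avoid duplicate RBF nodes
                oob = np.setdiff1d(np.arange(n), idx)    # held-out points for the error estimate
                f = Rbf(x[idx], y[idx], z[idx], function='thin_plate', smooth=s)
                errs.append(np.mean((f(x[oob], y[oob]) - z[oob]) ** 2))
            if np.mean(errs) < best_err:
                best_s, best_err = s, np.mean(errs)
        f = Rbf(x, y, z, function='thin_plate', smooth=best_s)
        return np.column_stack([x, y, f(x, y)])
    ```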

  2. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  3. Improvement of wavelet threshold filtered back-projection image reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-11-01

    Image reconstruction techniques have been applied in many fields, including medical imaging such as X-ray computed tomography (X-CT), positron emission tomography (PET) and nuclear magnetic resonance imaging (MRI), but the reconstruction results are still not satisfactory because the original projection data are inevitably polluted by noise in the process of image reconstruction. Although some traditional filters, e.g., the Shepp-Logan (SL) and Ram-Lak (RL) filters, have the ability to filter some noise, the Gibbs oscillation phenomenon is generated and the artifacts caused by back-projection are not greatly improved. Wavelet threshold denoising can overcome the interference of noise with image reconstruction. Since some inherent defects exist in the traditional soft and hard threshold functions, an improved wavelet threshold function combined with the filtered back-projection (FBP) algorithm is proposed in this paper. Four different reconstruction algorithms were compared in simulated experiments. Experimental results demonstrate that this improved algorithm largely eliminates the discontinuity and large distortion of the traditional threshold functions as well as the Gibbs oscillation. Finally, the effectiveness of the improved algorithm was verified by comparing two evaluation criteria, mean square error (MSE) and peak signal-to-noise ratio (PSNR), among the four algorithms, and the optimum dual threshold values of the improved wavelet threshold function were obtained.
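
    The paper's improved dual-threshold function is not reproduced here; as a stand-in, the sketch below thresholds the wavelet detail coefficients of a single projection with the non-negative garrote rule (a common compromise between hard and soft thresholding) before the projection is passed to FBP. Wavelet, level and threshold are assumed values.

    ```python
    import numpy as np
    import pywt

    def garrote_threshold(c, thr):
        """Non-negative garrote rule, a common compromise between hard and soft thresholding."""
        out = np.zeros_like(c)
        keep = np.abs(c) > thr
        out[keep] = c[keep] - thr ** 2 / c[keep]
        return out

    def denoise_projection(proj, wavelet='sym8', level=4):
        """Threshold the detail coefficients of a single projection before it enters FBP."""
        coeffs = pywt.wavedec(proj, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate from finest scale
        thr = sigma * np.sqrt(2.0 * np.log(len(proj)))      # universal threshold
        coeffs[1:] = [garrote_threshold(c, thr) for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[:len(proj)]
    ```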

  4. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    NASA Astrophysics Data System (ADS)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission via a telecommunication channel. Basically, the proposed ECG compression algorithm is built on the use of the wavelet transform, leading to low/high frequency component separation, high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal, and next a linear predictive coding filter is applied to the wavelet coefficients, producing a lower variance signal. This latter signal is then coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, with the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.

  5. Peak finding using biorthogonal wavelets

    SciTech Connect

    Tan, C.Y.

    2000-02-01

    The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian-like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigation) wavelets when used for peak finding in noisy data. They show that in this instance, their filters perform much better than the FBI wavelets.

  6. Why are wavelets so effective

    SciTech Connect

    Resnikoff, H.L. )

    1993-01-01

    The theory of compactly supported wavelets is now 4 yr old. In that short period, it has stimulated significant research in pure mathematics; has been the source of new numerical methods for the solution of nonlinear partial differential equations, including Navier-Stokes; and has been applied to digital signal-processing problems, ranging from signal detection and classification to signal compression for speech, audio, images, seismic signals, and sonar. Wavelet channel coding has even been proposed for code division multiple access digital telephony. In each of these applications, prototype wavelet solutions have proved to be competitive with established methods, and in many cases they are already superior.

  7. Detection and extraction of orientation-and-scale-dependent information from two-dimensional GPR data with tuneable directional wavelet filters

    NASA Astrophysics Data System (ADS)

    Tzanis, Andreas

    2013-02-01

    The Ground Probing Radar (GPR) is a valuable tool for near surface geological, geotechnical, engineering, environmental, archaeological and other work. GPR images of the subsurface frequently contain geometric information (constant or variable-dip reflections) from various structures such as bedding, cracks, fractures, etc. Such features are frequently the target of the survey; however, they are usually not good reflectors and they are highly localized in time and in space. Their scale is therefore a factor significantly affecting their detectability. At the same time, the GPR method is very sensitive to broadband noise from buried small objects, electromagnetic anthropogenic activity and systemic factors, which frequently blurs the reflections from such targets. This paper introduces a method to de-noise GPR data and extract geometric information from scale-and-dip dependent structural features, based on one-dimensional B-Spline Wavelets, two-dimensional directional B-Spline Wavelet (BSW) Filters and two-dimensional Gabor Filters. A directional BSW Filter is built by sidewise arranging s identical one-dimensional wavelets of length L, tapering the s-parallel direction (span) with a suitable window function and rotating the resulting matrix to the desired orientation. The length L of the wavelet defines the temporal and spatial scale to be isolated and the span determines the length over which to smooth (spatial resolution). The Gabor Filter is generated by multiplying an elliptical Gaussian by a complex plane wave; at any orientation the temporal or spatial scale(s) to be isolated are determined by the wavelength λ of the plane wave and the spatial resolution by the spatial aspect ratio γ, which specifies the ellipticity of the support of the Gabor function. At any orientation, both types of filter may be tuned at any frequency or spatial wavenumber by varying the length or the wavelength respectively. The filters can be applied directly to two
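
    A minimal sketch of the Gabor kernel described above, assuming NumPy; the function name and default parameters are illustrative, not taken from the paper. The kernel is an elliptical Gaussian envelope multiplied by a complex plane wave, tuned by the wavelength, the orientation and the spatial aspect ratio.

        import numpy as np

        def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5, psi=0.0):
            """Complex 2-D Gabor kernel: an elliptical Gaussian envelope times a
            complex plane wave.  wavelength sets the isolated scale, gamma (spatial
            aspect ratio) the ellipticity, theta the orientation."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
            x_t = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
            y_t = -x * np.sin(theta) + y * np.cos(theta)
            envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2.0 * sigma ** 2))
            carrier = np.exp(1j * (2.0 * np.pi * x_t / wavelength + psi))
            return envelope * carrier

        # Applying the filter: convolve a GPR section with the real part of the kernel,
        # e.g. scipy.signal.fftconvolve(section, np.real(kernel), mode="same").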

  8. BOOK REVIEW: The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    NASA Astrophysics Data System (ADS)

    Ng, J.; Kingsbury, N. G.

    2004-02-01

    wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this (and subsequent) chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. The treatment on wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. The

  9. Wavelet entropy of stochastic processes

    NASA Astrophysics Data System (ADS)

    Zunino, L.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.

    2007-06-01

    We compare two different definitions for the wavelet entropy associated to stochastic processes. The first one, the normalized total wavelet entropy (NTWS) family [S. Blanco, A. Figliola, R.Q. Quiroga, O.A. Rosso, E. Serrano, Time-frequency analysis of electroencephalogram series, III. Wavelet packets and information cost function, Phys. Rev. E 57 (1998) 932-940; O.A. Rosso, S. Blanco, J. Yordanova, V. Kolev, A. Figliola, M. Schürmann, E. Başar, Wavelet entropy: a new tool for analysis of short duration brain electrical signals, J. Neurosci. Method 105 (2001) 65-75] and a second introduced by Tavares and Lucena [Physica A 357(1) (2005) 71-78]. In order to understand their advantages and disadvantages, exact results obtained for fractional Gaussian noise (-1 < α < 1) and fractional Brownian motion (1 < α < 3) are assessed. We find out that the NTWS family performs better as a characterization method for these stochastic processes.
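
    A sketch of the normalized total wavelet entropy computed from relative wavelet energies, assuming PyWavelets; an ordinary Brownian-motion path is used as a stand-in signal, and the wavelet, decomposition level and normalization choices here are illustrative rather than those of the cited papers.

        import numpy as np
        import pywt

        def normalized_total_wavelet_entropy(x, wavelet="db2", level=6):
            """NTWS: Shannon entropy of the relative wavelet energies p_j = E_j / E_total,
            normalized by ln(number of resolution levels)."""
            details = pywt.wavedec(x, wavelet, level=level)[1:]      # detail bands only
            energies = np.array([np.sum(d ** 2) for d in details])
            p = energies / energies.sum()
            return -np.sum(p * np.log(p)) / np.log(len(p))

        x = np.cumsum(np.random.randn(4096))   # ordinary Brownian motion as a stand-in
        print(normalized_total_wavelet_entropy(x))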

  10. Wavelet theory and its applications

    SciTech Connect

    Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.

    1996-07-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed- parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.

  11. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  12. Wavelet multiscale processing of remote sensing data

    NASA Astrophysics Data System (ADS)

    Bagmanov, Valeriy H.; Kharitonov, Svyatoslav V.; Meshkov, Ivan K.; Sultanov, Albert H.

    2008-12-01

    A comparative analysis of methods for estimating the Hurst index (the index of self-similarity) and a comparative analysis of wavelet types used for image decomposition are offered. Five wavelet families are compared: Haar wavelets, Daubechies wavelets, discrete Meyer wavelets, symlets and coiflets. The Meyer and Haar wavelets give the best quality of the restored image, because they are characterised by the smallest reconstruction errors; however, their compression ratio is smaller than that of the Daubechies wavelets, symlets and coiflets, which in turn achieve lower decompression precision. When the complexity of implementing a wavelet transform on digital signal processors (DSP) is taken into account, the Haar wavelet transform is the simplest method.
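
    As the abstract notes, the Haar transform is the cheapest of these to implement on a DSP; a single analysis/synthesis level reduces to sums and differences, as in the NumPy sketch below (the function names are mine).

        import numpy as np

        def haar_analysis(x):
            """One level of the orthonormal Haar transform: scaled pairwise sums and differences."""
            x = np.asarray(x, dtype=float)
            approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
            detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
            return approx, detail

        def haar_synthesis(approx, detail):
            x = np.empty(2 * approx.size)
            x[0::2] = (approx + detail) / np.sqrt(2.0)
            x[1::2] = (approx - detail) / np.sqrt(2.0)
            return x

        a, d = haar_analysis([4.0, 2.0, 5.0, 5.0])
        print(np.allclose(haar_synthesis(a, d), [4.0, 2.0, 5.0, 5.0]))   # True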

  13. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
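
    A sketch of the Stokes computation from the six intensity measurements listed above, assuming NumPy; the function name and the derived degree-of-linear-polarization map are illustrative additions, and the wavelet step of the WPA method itself is only indicated in the closing comment.

        import numpy as np

        def stokes_parameters(i0, i45, i90, i135, i_rc, i_lc):
            """Stokes images from the six intensity measurements named in the abstract."""
            s0 = i0 + i90                      # total intensity
            s1 = i0 - i90                      # horizontal vs vertical preference
            s2 = i45 - i135                    # +45 deg vs -45 deg preference
            s3 = i_rc - i_lc                   # right vs left circular preference
            dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)  # degree of linear polarization
            return s0, s1, s2, s3, dolp

        # Each Stokes image (or the DoLP map) can then be decomposed with a 2-D DWT,
        # e.g. pywt.wavedec2(dolp, "db4", level=3), before thresholding and segmentation.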

  14. Low-Oscillation Complex Wavelets

    NASA Astrophysics Data System (ADS)

    ADDISON, P. S.; WATSON, J. N.; FENG, T.

    2002-07-01

    In this paper we explore the use of two low-oscillation complex wavelets—Mexican hat and Morlet—as powerful feature detection tools for data analysis. These wavelets, which have been largely ignored to date in the scientific literature, allow for a decomposition which is more “temporal than spectral” in wavelet space. This is shown to be useful for the detection of small amplitude, short duration signal features which are masked by much larger fluctuations. Wavelet transform-based methods employing these wavelets (based on both wavelet ridges and modulus maxima) are developed and applied to sonic echo NDT signals used for the analysis of structural elements. A new mobility scalogram and associated reflectogram is defined for analysis of impulse response characteristics of structural elements and a novel signal compression technique is described in which the pertinent signal information is contained within a few modulus maxima coefficients. As an example of its usefulness, the signal compression method is employed as a pre-processor for a neural network classifier. The authors believe that low oscillation complex wavelets have wide applicability to other practical signal analysis problems. Their possible application to two such problems is discussed briefly—the interrogation of arrhythmic ECG signals and the detection and characterization of coherent structures in turbulent flow fields.

  15. Wavelet Transform for Real-Time Detection of Action Potentials in Neural Signals

    PubMed Central

    Quotb, Adam; Bornat, Yannick; Renaud, Sylvie

    2011-01-01

    We present a study on wavelet detection methods of neuronal action potentials (APs). Our final goal is to implement the selected algorithms on custom integrated electronics for on-line processing of neural signals; therefore we take real-time computing as a hard specification and silicon area as a price to pay. Using simulated neural signals including APs, we characterize an efficient wavelet method for AP extraction by evaluating its detection rate and its implementation cost. We compare software implementation for three methods: adaptive threshold, discrete wavelet transform (DWT), and stationary wavelet transform (SWT). We evaluate detection rate and implementation cost for detection functions dynamically comparing a signal with an adaptive threshold proportional to its SD, where the signal is the raw neural signal, respectively: (i) non-processed; (ii) processed by a DWT; (iii) processed by a SWT. We also use different mother wavelets and test different data formats to set an optimal compromise between accuracy and silicon cost. Detection accuracy is evaluated together with false negative and false positive detections. Simulation results show that for on-line AP detection implemented on a configurable digital integrated circuit, APs underneath the noise level can be detected using SWT with a well-selected mother wavelet, combined to an adaptive threshold. PMID:21811455

  16. Wavelet transform for real-time detection of action potentials in neural signals.

    PubMed

    Quotb, Adam; Bornat, Yannick; Renaud, Sylvie

    2011-01-01

    We present a study on wavelet detection methods of neuronal action potentials (APs). Our final goal is to implement the selected algorithms on custom integrated electronics for on-line processing of neural signals; therefore we take real-time computing as a hard specification and silicon area as a price to pay. Using simulated neural signals including APs, we characterize an efficient wavelet method for AP extraction by evaluating its detection rate and its implementation cost. We compare software implementation for three methods: adaptive threshold, discrete wavelet transform (DWT), and stationary wavelet transform (SWT). We evaluate detection rate and implementation cost for detection functions dynamically comparing a signal with an adaptive threshold proportional to its SD, where the signal is the raw neural signal, respectively: (i) non-processed; (ii) processed by a DWT; (iii) processed by a SWT. We also use different mother wavelets and test different data formats to set an optimal compromise between accuracy and silicon cost. Detection accuracy is evaluated together with false negative and false positive detections. Simulation results show that for on-line AP detection implemented on a configurable digital integrated circuit, APs underneath the noise level can be detected using SWT with a well-selected mother wavelet, combined to an adaptive threshold.
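
    A minimal sketch of the SWT-plus-adaptive-threshold detector described in the abstract, assuming PyWavelets and NumPy on a simulated trace; the mother wavelet, level, threshold factor k and the use of the finest detail band are illustrative choices, not the hardware-oriented settings studied by the authors.

        import numpy as np
        import pywt

        def swt_spike_samples(signal, wavelet="sym4", level=3, k=4.0):
            """Flag samples whose SWT detail coefficients exceed an adaptive threshold
            proportional to a robust SD estimate (median absolute deviation)."""
            n = len(signal) - len(signal) % (2 ** level)   # SWT needs a length divisible by 2**level
            coeffs = pywt.swt(np.asarray(signal[:n], dtype=float), wavelet, level=level)
            detail = coeffs[-1][1]                         # finest-scale detail band (cD1)
            sigma = np.median(np.abs(detail)) / 0.6745
            return np.flatnonzero(np.abs(detail) > k * sigma)

        rng = np.random.default_rng(0)
        trace = rng.normal(0.0, 1.0, 4096)
        trace[1000:1003] += [6.0, -9.0, 5.0]               # a crude simulated action potential
        print(swt_spike_samples(trace)[:10])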

  17. Wavelet-based compression of medical images: filter-bank selection and evaluation.

    PubMed

    Saffor, A; bin Ramli, A R; Ng, K H

    2003-06-01

    Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect reconstruction filter-bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated the best types of filters suitable for medical images in providing low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The discrete wavelet transform (DWT), which provides an efficient multi-resolution framework, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on the selected images. The code was written using the wavelet and image processing toolboxes of MATLAB (version 6.1). The results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and biorthogonal filters, which are the best among all. MAE values achieved by these filters were 5 x 10^-14 to 12 x 10^-14 for both CT brain and abdomen images at different decomposition levels, indicating that with these filters a very small error (approximately 7 x 10^-14) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than for the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively depending on the statistical properties of the image being coded to achieve a higher compression ratio. PMID:12956184

  18. Tests for Wavelets as a Basis Set

    NASA Astrophysics Data System (ADS)

    Baker, Thomas; Evenbly, Glen; White, Steven

    A wavelet transformation is a special type of filter usually reserved for image processing and other applications. We develop metrics to evaluate wavelets for general problems on test one-dimensional systems. The goal is to eventually use a wavelet basis in electronic structure calculations. We compare a variety of orthogonal wavelets such as coiflets, symlets, and Daubechies wavelets. We also evaluate a new type of orthogonal wavelet with dilation factor three which is both symmetric and compact in real space. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC008696.

  19. Singularity detection by wavelet approach: application to electrocardiogram signal

    NASA Astrophysics Data System (ADS)

    Jalil, Bushra; Beya, Ouadi; Fauvet, Eric; Laligant, Olivier

    2010-01-01

    In signal processing, the regions of abrupt change contain most of the useful information about the nature of the signal. The regions or points where these changes occur are often termed singular points or singular regions. Singularity is considered an important characteristic of a signal, as it refers to the discontinuities and interruptions present in the signal, and the main purpose of detecting such singular points is to identify the existence, location and size of those singularities. The electrocardiogram (ECG) signal is used to analyze cardiovascular activity in the human body. However, the presence of noise due to several causes limits the doctor's decision and prevents accurate identification of different pathologies. In this work we analyze the ECG signal with an energy-based approach and some heuristic methods to segment and identify different signatures inside the signal. The ECG signal is initially denoised by an empirical wavelet shrinkage approach based on Stein's Unbiased Risk Estimate (SURE). At the second stage, the ECG signal is analyzed by the Mallat approach based on modulus maxima and Lipschitz exponent computation. The results from both approaches are discussed and important aspects are highlighted. In order to evaluate the algorithm, the analysis has been done on the MIT-BIH Arrhythmia database, a set of ECG data records sampled at a rate of 360 Hz with 11-bit resolution over a 10 mV range. The results have been examined and approved by medical doctors.
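
    The SURE-based shrinkage used for the initial denoising step can be sketched as follows, assuming PyWavelets and NumPy; this is a generic SureShrink-style soft threshold applied per detail band, not the authors' exact empirical procedure, and the wavelet and level are illustrative.

        import numpy as np
        import pywt

        def sure_threshold(d, sigma):
            """Threshold minimizing Stein's Unbiased Risk Estimate for one band of
            (approximately Gaussian) wavelet coefficients, soft-thresholding style."""
            x = np.abs(d / sigma)
            candidates = np.sort(x)
            risks = [x.size - 2 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2)
                     for t in candidates]               # O(N^2), fine for a sketch
            return sigma * candidates[int(np.argmin(risks))]

        def sure_denoise(signal, wavelet="sym8", level=5):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            coeffs = [coeffs[0]] + [pywt.threshold(d, sure_threshold(d, sigma), mode="soft")
                                    for d in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[: len(signal)]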

  20. Blind source separation based x-ray image denoising from an image sequence.

    PubMed

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without prior knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are assumed to be different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order-statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image quality improves when more frames are included in the x-ray image sequence, but at a higher computational cost, so a trade-off must be made between denoising performance and runtime when choosing the number of frames included in the sequence. PMID:26429442
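
    A sketch of the fixed-point ICA route, assuming scikit-learn's FastICA and NumPy; stacking the frames as observations, using two components and selecting the "stable" component by correlation with the multi-frame average are illustrative choices, not the paper's exact pipeline.

        import numpy as np
        from sklearn.decomposition import FastICA

        def bss_denoise(frames, n_components=2):
            """frames: array (n_frames, H, W) of repeated x-ray exposures.
            Separate a stable image component from random noise with FastICA and keep
            the component most correlated with the multi-frame average."""
            n, h, w = frames.shape
            X = frames.reshape(n, h * w)                      # one observation per frame
            ica = FastICA(n_components=n_components, random_state=0)
            sources = ica.fit_transform(X.T)                  # shape (pixels, components)
            mean_frame = X.mean(axis=0)
            corr = [abs(np.corrcoef(s, mean_frame)[0, 1]) for s in sources.T]
            best = sources[:, int(np.argmax(corr))]
            # Note: ICA components have arbitrary sign/scale; rescale against mean_frame if needed.
            return best.reshape(h, w)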

  1. Blind source separation based x-ray image denoising from an image sequence

    NASA Astrophysics Data System (ADS)

    Yu, Chun-Yu; Li, Yan; Fei, Bin; Li, Wei-Liang

    2015-09-01

    Blind source separation (BSS) based x-ray image denoising from an image sequence is proposed. Without prior knowledge, the useful image signal can be separated from an x-ray image sequence, since the original images are assumed to be different combinations of a stable image signal and random image noise. BSS algorithms such as fixed-point independent component analysis and second-order-statistics singular value decomposition are used and compared with multi-frame averaging, a common algorithm for improving an image's signal-to-noise ratio (SNR). Denoising performance is evaluated in terms of SNR, standard deviation, entropy, and runtime. The analysis indicates that BSS is applicable to image denoising; the denoised image quality improves when more frames are included in the x-ray image sequence, but at a higher computational cost, so a trade-off must be made between denoising performance and runtime when choosing the number of frames included in the sequence.

  2. Phase-aware candidate selection for time-of-flight depth map denoising

    NASA Astrophysics Data System (ADS)

    Hach, Thomas; Seybold, Tamara; Böttcher, Hendrik

    2015-03-01

    This paper presents a new pre-processing algorithm for Time-of-Flight (TOF) depth map denoising. Typically, denoising algorithms use the raw depth map as it comes from the sensor. Systematic artifacts due to the measurement principle are not taken into account which degrades the denoising results. For phase measurement TOF sensing, a major artifact is observed as salt-and-pepper noise caused by the measurement's ambiguity. Our pre-processing algorithm is able to isolate and unwrap affected pixels deploying the physical behavior of the capturing system yielding Gaussian noise. Using this pre-processing method before applying the denoising step clearly improves the parameter estimation for the denoising filter together with its final results.

  3. A New Method for Nonlocal Means Image Denoising Using Multiple Images

    PubMed Central

    Wang, Xingzheng; Wang, Haoqian; Yang, Jiangfeng; Zhang, Yongbing

    2016-01-01

    The basic principle of nonlocal means is to denoise a pixel using the weighted average of the neighbourhood pixels, where the weight is decided by the similarity of these pixels. The key issue of the nonlocal means method is how to select similar patches and design their weights. This paper makes two main contributions. The first is that we use two images to denoise each pixel; these two noisy images have the same noise standard deviation, and the weight is calculated from both of them rather than from a single image. After the first denoising pass, we obtain a pre-denoised image and a residual image. The second contribution is that we combine the nonlocal similarity between the residual image and the pre-denoised image. The improved nonlocal means method pays more attention to similarity than the original one, which turns out to be very effective in eliminating Gaussian noise. Experimental results with simulated data are provided. PMID:27459293

  4. Research on power-law acoustic transient signal detection based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Han, Jian-hui; Yang, Ri-jie; Wang, Wei

    2007-11-01

    The acoustic transient signals emitted by anti-submarine weapons dropped into water (torpedoes, aerial sonobuoys, rocket-assisted depth charges, etc.) are characterized by short duration, low SNR, abruptness and instability. Building on the traditional power-law detector, a new method to detect such acoustic transient signals is proposed. First, the wavelet transform is used to de-noise the signal, removing random spectral components and improving the SNR; a power-law detector is then applied to detect the transient signal. Simulation results show that the method can effectively extract the envelope characteristics of transient signals under low-SNR conditions, and the performance of the WT-power-law detector markedly exceeds that of the traditional power-law detection method.

  5. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

    Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.

  6. Prediction of total volatile basic nitrogen contents using wavelet features from visible/near-infrared hyperspectral images of prawn (Metapenaeus ensis).

    PubMed

    Dai, Qiong; Cheng, Jun-Hu; Sun, Da-Wen; Zhu, Zhiwei; Pu, Hongbin

    2016-04-15

    A visible/near-infrared hyperspectral imaging (HSI) system (400-1000 nm) coupled with wavelet analysis was used to determine the total volatile basic nitrogen (TVB-N) contents of prawns during cold storage. Spectral information was denoised by wavelet analysis and the uninformative variable elimination (UVE) algorithm, and then three wavelet features (energy, entropy and modulus maxima) were extracted. Quantitative models were established between the wavelet features and the reference TVB-N contents by using three regression algorithms. As a result, the LS-SVM model with modulus maxima features was considered the best model for determining the TVB-N contents of prawns, with an excellent Rp2 of 0.9547, RMSEP = 0.7213 mg N/100 g and RPD = 4.799. Finally, an image processing algorithm was developed for generating a TVB-N distribution map. This study demonstrated the possibility of applying the HSI system in combination with wavelet analysis to the monitoring of TVB-N values in prawns. PMID:26616948

  7. A novel method for image denoising of fluorescence molecular imaging based on fuzzy C-Means clustering

    NASA Astrophysics Data System (ADS)

    An, Yu; Liu, Jie; Ye, Jinzuo; Mao, Yamin; Yang, Xin; Jiang, Shixin; Chi, Chongwei; Tian, Jie

    2015-03-01

    As an important molecular imaging modality, fluorescence molecular imaging (FMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with a fluorophore, FMI can noninvasively obtain the distribution of the fluorophore in vivo. However, because the fluorescence spectrum lies in the visible range, there is substantial autofluorescence from the surface of biological tissues, which is a major disturbing factor in FMI. Meanwhile, the high dark current of the charge-coupled device (CCD) camera and other influencing factors can also produce considerable background noise. In this paper, a novel method for image denoising of FMI based on fuzzy C-means clustering (FCM) is proposed, exploiting the fact that the fluorescent signal is the major component of the fluorescence images, while the intensity of autofluorescence and other background signals is relatively low. First, the fluorescence image is smoothed by sliding-neighborhood operations to initially suppress the noise. Then, the wavelet transform (WLT) is performed on the fluorescence images to obtain the major component of the fluorescent signals. After that, the FCM method is adopted to separate the major component and the background of the fluorescence images. Finally, the proposed method was validated on data obtained from an in vivo implanted-fluorophore experiment, and the results show that the proposed method can effectively recover the fluorescence signal while eliminating the background noise, which can increase the quality of fluorescence images.

  8. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.

  9. PDE-based Non-Linear Diffusion Techniques for Denoising Scientific and Industrial Images: An Empirical Study

    SciTech Connect

    Weeratunga, S K; Kamath, C

    2001-12-20

    Removing noise from data is often the first step in data analysis. Denoising techniques should not only reduce the noise, but do so without blurring or changing the location of the edges. Many approaches have been proposed to accomplish this; in this paper, they focus on one such approach, namely the use of non-linear diffusion operators. This approach has been studied extensively from a theoretical viewpoint ever since the 1987 work of Perona and Malik showed that non-linear filters outperformed the more traditional linear Canny edge detector. They complement this theoretical work by investigating the performance of several isotropic diffusion operators on test images from scientific domains. They explore the effects of various parameters such as the choice of diffusivity function, explicit and implicit methods for the discretization of the PDE, and approaches for the spatial discretization of the non-linear operator etc. They also compare these schemes with simple spatial filters and the more complex wavelet-based shrinkage techniques. The empirical results show that, with an appropriate choice of parameters, diffusion-based schemes can be as effective as competitive techniques.
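
    A minimal sketch of the Perona-Malik scheme with an explicit time step, assuming NumPy; the exponential diffusivity, the value of kappa, the step size and the periodic border handling via np.roll are illustrative choices among those explored in the study.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=15.0, dt=0.2):
            """Nonlinear (Perona-Malik) diffusion: smooth within regions while limiting
            diffusion across strong edges via the conductance g = exp(-(|grad|/kappa)^2).
            Borders are handled periodically by np.roll, which is adequate for a sketch."""
            u = img.astype(float).copy()
            g = lambda d: np.exp(-(d / kappa) ** 2)
            for _ in range(n_iter):
                dn = np.roll(u, -1, axis=0) - u          # differences to the four neighbours
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u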

  10. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefixfree codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent runlengths, and takes the difference between this average
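
    A sketch of the two code families, assuming Python; the Golomb-Rice form below is the power-of-two special case of the Golomb code (a general divisor would use truncated binary for the remainder), and the pairing of exponential-Golomb codes for zero-run lengths with Golomb codes for nonzero values follows the description above.

        def golomb_rice(n, k):
            """Golomb-Rice code (Golomb code with divisor m = 2**k) for n >= 0:
            unary-coded quotient, a terminating 0, then k remainder bits."""
            q, r = n >> k, n & ((1 << k) - 1)
            remainder = format(r, "0{}b".format(k)) if k > 0 else ""
            return "1" * q + "0" + remainder

        def exp_golomb(n, k=0):
            """Order-k exponential-Golomb code for n >= 0."""
            prefix_val = (n >> k) + 1
            prefix = "0" * (prefix_val.bit_length() - 1) + format(prefix_val, "b")
            suffix = format(n & ((1 << k) - 1), "0{}b".format(k)) if k > 0 else ""
            return prefix + suffix

        # Run-length coding of a quantized wavelet band would emit, for each
        # (zero-run length, nonzero value) pair, exp_golomb(run) then golomb_rice(value_index, k).
        print(exp_golomb(4), golomb_rice(4, 2))   # -> 00101 1000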

  11. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures.

    PubMed

    Kaloop, Mosbeh R; Hu, Jong Wan

    2015-09-22

    The Global Positioning System (GPS) has recently come into wide use for monitoring structures and in other applications. Nevertheless, GPS accuracy still suffers from errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, multi-filter methods were utilized to remove the displacement errors. This paper presents a novel application of neural network prediction models to improve GPS monitoring time-series data. Four prediction models with neural network solutions are applied: back-propagation, cascade-forward back-propagation, adaptive filtering and extended Kalman filtering, in order to assess which model can be recommended. Simulated noise and the short-period GPS displacement component of a bridge, monitored at a one Hz sampling frequency, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is recommended for de-noising the observations, specifically the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures with respect to low-frequency responses and measurement content.

  12. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures.

    PubMed

    Kaloop, Mosbeh R; Hu, Jong Wan

    2015-01-01

    The Global Positioning System (GPS) has recently come into wide use for monitoring structures and in other applications. Nevertheless, GPS accuracy still suffers from errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, multi-filter methods were utilized to remove the displacement errors. This paper presents a novel application of neural network prediction models to improve GPS monitoring time-series data. Four prediction models with neural network solutions are applied: back-propagation, cascade-forward back-propagation, adaptive filtering and extended Kalman filtering, in order to assess which model can be recommended. Simulated noise and the short-period GPS displacement component of a bridge, monitored at a one Hz sampling frequency, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is recommended for de-noising the observations, specifically the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures with respect to low-frequency responses and measurement content. PMID:26402687

  13. Optimizing the De-Noise Neural Network Model for GPS Time-Series Monitoring of Structures

    PubMed Central

    Kaloop, Mosbeh R.; Hu, Jong Wan

    2015-01-01

    The Global Positioning System (GPS) has recently come into wide use for monitoring structures and in other applications. Nevertheless, GPS accuracy still suffers from errors afflicting the measurements, particularly for the short-period displacement of structural components. Previously, multi-filter methods were utilized to remove the displacement errors. This paper presents a novel application of neural network prediction models to improve GPS monitoring time-series data. Four prediction models with neural network solutions are applied: back-propagation, cascade-forward back-propagation, adaptive filtering and extended Kalman filtering, in order to assess which model can be recommended. Simulated noise and the short-period GPS displacement component of a bridge, monitored at a one Hz sampling frequency, are used to validate the four models and the previous method. The results show that the adaptive neural network filter is recommended for de-noising the observations, specifically the GPS displacement components of structures. This model is also expected to have a significant influence on the design of structures with respect to low-frequency responses and measurement content. PMID:26402687

  14. Inconsistent Denoising and Clustering Algorithms for Amplicon Sequence Data.

    PubMed

    Koskinen, Kaisa; Auvinen, Petri; Björkroth, K Johanna; Hultman, Jenni

    2015-08-01

    Natural microbial communities have been studied for decades using the 16S rRNA gene as a marker. In recent years, the application of second-generation sequencing technologies has revolutionized our understanding of the structure and function of microbial communities in complex environments. Using these highly parallel techniques, a detailed description of community characteristics is constructed, and even the rare biosphere can be detected. The new approaches carry numerous advantages and avoid many of the features that skewed results obtained with traditional techniques, but we still face serious bias and a lack of reliable comparability between produced results. Here, we contrasted publicly available amplicon sequence data analysis algorithms by using two different data sets, one with a defined clone-based structure and one from a well-studied food spoilage community. We aimed to assess which software and parameters produce results that resemble the benchmark community best, how large the differences between methods are, and whether these differences are statistically significant. The results suggest that commonly accepted denoising and clustering methods used in different combinations produce significantly different outcomes: the clustering method has a large impact on the number of operational taxonomic units (OTUs), while the denoising algorithm has more influence on taxonomic affiliations. The magnitude of the OTU number difference was up to 40-fold, and the disparity between results seemed highly dependent on community structure and diversity. Statistically significant differences in taxonomies between methods were seen even at the phylum level. However, the application of an effective denoising method seemed to even out the differences produced by clustering. PMID:25525895

  15. Group-normalized wavelet packet signal processing

    NASA Astrophysics Data System (ADS)

    Shi, Zhuoer; Bao, Zheng

    1997-04-01

    Since traditional wavelet and wavelet packet coefficients do not exactly represent the strength of signal components at each time(space)-frequency tile, the group-normalized wavelet packet transform (GNWPT) is presented for nonlinear signal filtering and extraction from clutter or noise, together with a space(time)-frequency masking technique. An extended F-entropy improves the performance of the GNWPT. For perception-based imaging, soft-logic masking is emphasized to remove aliasing while preserving edges. Lawton's method for constructing complex-valued wavelets is extended to generate complex-valued, compactly supported wavelet packets for radar signal extraction. These wavelet packets are symmetric and unitarily orthogonal. Well-suited wavelet packets are chosen from an analysis of their time-frequency characteristics. For real-valued signal processing, such as images and ECG signals, compactly supported spline or biorthogonal wavelet packets are preferred for their de-noising and filtering qualities.

  16. A Mellin transform approach to wavelet analysis

    NASA Astrophysics Data System (ADS)

    Alotta, Gioacchino; Di Paola, Mario; Failla, Giuseppe

    2015-11-01

    The paper proposes a fractional calculus approach to continuous wavelet analysis. Upon introducing a Mellin transform expression of the mother wavelet, it is shown that the wavelet transform of an arbitrary function f(t) can be given a fractional representation involving a suitable number of Riesz integrals of f(t), and corresponding fractional moments of the mother wavelet. This result serves as a basis for an original approach to wavelet analysis of linear systems under arbitrary excitations. In particular, using the proposed fractional representation for the wavelet transform of the excitation, it is found that the wavelet transform of the response can readily be computed by a Mellin transform expression, with fractional moments obtained from a set of algebraic equations whose coefficient matrix applies for any scale a of the wavelet transform. The robustness and computational efficiency of the proposed approach are demonstrated in the paper.

  17. [Application of a modified method of wavelet noise removing to noisy ICP-AES spectra].

    PubMed

    Ma, Xiao-guo; Zhang, Zhan-xia

    2003-06-01

    A new method for noise removal from signal by the wavelet transform was developed. Compared with analytical signal, noise has higher frequency and smaller amplitude. By the new wavelet filtering method, the high frequency components were first removed, and then the small ones in the remaining transformed vectors were discarded. The proposed approach was evaluated by the processing of simulated and experimental noisy ICP-AES spectra. Different amounts of noise were added to a Gaussian peak to obtain a series of noisy ICP spectra. The simulated noisy spectra with R (signal to noise ratio) = 6 and N (data number) = 51, and with R = 6 and N = 17 were used to illustrate the feasibility of the proposed method. The performances of noise removal by the wavelet smoothing, the wavelet denoising and the proposed technique were compared. It was found that using the new approach, the relative errors of peak height would be no more than 5% for spectra with normal sampling points and R > or = 2. Moreover, the baseline could be easily defined, which was helpful to the accurate measurement of peak height. Experimental spectra of Al and V at low concentrations were processed by the proposed method. Intense noises were efficiently removed and the spectra became smoother without underestimating the analytical signal. The distortion of V 303.310 nm line was substantially rectified. The linear correlation coefficients between the peak heights in the reconstructed spectra and the concentrations were found to be 0.9953 for Al and 0.9836 for V, respectively. PMID:12953539

  18. Wavelet-based Evapotranspiration Forecasts

    NASA Astrophysics Data System (ADS)

    Bachour, R.; Maslova, I.; Ticlavilca, A. M.; McKee, M.; Walker, W.

    2012-12-01

    Providing a reliable short-term forecast of evapotranspiration (ET) could be a valuable element for improving the efficiency of irrigation water delivery systems. In the last decade, wavelet transform has become a useful technique for analyzing the frequency domain of hydrological time series. This study shows how wavelet transform can be used to access statistical properties of evapotranspiration. The objective of the research reported here is to use wavelet-based techniques to forecast ET up to 16 days ahead, which corresponds to the LANDSAT 7 overpass cycle. The properties of the ET time series, both physical and statistical, are examined in the time and frequency domains. We use the information about the energy decomposition in the wavelet domain to extract meaningful components that are used as inputs for ET forecasting models. Seasonal autoregressive integrated moving average (SARIMA) and multivariate relevance vector machine (MVRVM) models are coupled with the wavelet-based multiresolution analysis (MRA) results and used to generate short-term ET forecasts. Accuracy of the models is estimated and model robustness is evaluated using the bootstrap approach.

  19. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on discrete cosine transform for a wide class of images corrupted by additive noise are obtained. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. TID2013 image database and some additional images are taken as test images. Conventional DCT filter and BM3D are used as denoising techniques. Denoising efficiency is described by PSNR and PSNR-HVS-M metrics. Within hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, denoising efficiency for them. Results of denoising efficiency are fitted for such statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy of prediction of denoising efficiency.
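
    A sketch of hard-threshold DCT denoising, assuming NumPy and scipy.fft; non-overlapping 8x8 blocks (image dimensions assumed to be multiples of the block size) and the threshold factor are simplifications of the conventional sliding-window DCT filter referenced in the abstract.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_hard_threshold(img, sigma, block=8, beta=2.7):
            """Block-wise DCT hard-thresholding: zero spectrum coefficients whose
            magnitude falls below beta*sigma in each block (the DC term is kept)."""
            out = np.zeros_like(img, dtype=float)
            h, w = img.shape
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    spec = dctn(img[i:i + block, j:j + block], norm="ortho")
                    mask = np.abs(spec) >= beta * sigma
                    mask[0, 0] = True                     # always keep the DC coefficient
                    out[i:i + block, j:j + block] = idctn(spec * mask, norm="ortho")
            return out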

  20. Wavelet Representation of Contour Sets

    SciTech Connect

    Bertram, M; Laney, D E; Duchaineau, M A; Hansen, C D; Hamann, B; Joy, K I

    2001-07-19

    We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression, introducing high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.

  1. Wavelets for sign language translation

    NASA Astrophysics Data System (ADS)

    Wilson, Beth J.; Anspach, Gretel

    1993-10-01

    Wavelet techniques are applied to help extract the relevant parameters of sign language from video images of a person communicating in American Sign Language or Signed English. The compression and edge detection features of two-dimensional wavelet analysis are exploited to enhance the algorithms under development to classify the hand motion, hand location with respect to the body, and handshape. These three parameters have different processing requirements and complexity issues. The results are described for applying various quadrature mirror filter designs to a filterbank implementation of the desired wavelet transform. The overall project is to develop a system that will translate sign language to English to facilitate communication between deaf and hearing people.

  2. Vibration Sensor Data Denoising Using a Time-Frequency Manifold for Machinery Fault Diagnosis

    PubMed Central

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2014-01-01

    Vibration sensor data from a mechanical system are often associated with important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method intends to realize data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Due to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal would have satisfactory denoising effects, as well as inherent time-frequency structure keeping. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods. PMID:24379045

  3. Parameter optimization for image denoising based on block matching and 3D collaborative filtering

    NASA Astrophysics Data System (ADS)

    Pedada, Ramu; Kugu, Emin; Li, Jiang; Yue, Zhanfeng; Shen, Yuzhong

    2009-02-01

    Clinical MRI images are generally corrupted by random noise during acquisition with blurred subtle structure features. Many denoising methods have been proposed to remove noise from corrupted images at the expense of distorted structure features. Therefore, there is always compromise between removing noise and preserving structure information for denoising methods. For a specific denoising method, it is crucial to tune it so that the best tradeoff can be obtained. In this paper, we define several cost functions to assess the quality of noise removal and that of structure information preserved in the denoised image. Strength Pareto Evolutionary Algorithm 2 (SPEA2) is utilized to simultaneously optimize the cost functions by modifying parameters associated with the denoising methods. The effectiveness of the algorithm is demonstrated by applying the proposed optimization procedure to enhance the image denoising results using block matching and 3D collaborative filtering. Experimental results show that the proposed optimization algorithm can significantly improve the performance of image denoising methods in terms of noise removal and structure information preservation.

  4. Vibration sensor data denoising using a time-frequency manifold for machinery fault diagnosis.

    PubMed

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2013-12-27

    Vibration sensor data from a mechanical system are often associated with important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature from the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method intends to realize data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Due to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal would have satisfactory denoising effects, as well as inherent time-frequency structure keeping. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods.

  5. Denoising Stimulated Raman Spectroscopic Images by Total Variation Minimization

    PubMed Central

    Liao, Chien-Sheng; Choi, Joon Hee; Zhang, Delong; Chan, Stanley H.; Cheng, Ji-Xin

    2016-01-01

    High-speed coherent Raman scattering imaging is opening a new avenue to unveiling the cellular machinery by visualizing the spatio-temporal dynamics of target molecules or intracellular organelles. By extracting signals from the laser at MHz modulation frequency, current stimulated Raman scattering (SRS) microscopy has reached shot noise limited detection sensitivity. The laser-based local oscillator in SRS microscopy not only generates high levels of signal, but also delivers a large shot noise which degrades image quality and spectral fidelity. Here, we demonstrate a denoising algorithm that removes the noise in both spatial and spectral domains by total variation minimization. The signal-to-noise ratio of SRS spectroscopic images was improved by up to 57 times for diluted dimethyl sulfoxide solutions and by 15 times for biological tissues. Weak Raman peaks of target molecules originally buried in the noise were unraveled. Coupling the denoising algorithm with multivariate curve resolution allowed discrimination of fat stores from protein-rich organelles in C. elegans. Together, our method significantly improved detection sensitivity without frame averaging, which can be useful for in vivo spectroscopic imaging. PMID:26955400
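
    A minimal sketch of (smoothed) total-variation denoising for a single 2-D frame, assuming NumPy; gradient descent on the ROF energy is used here for brevity, whereas the paper denoises jointly in the spatial and spectral domains, and the parameters below are illustrative.

        import numpy as np

        def tv_denoise(f, lam=0.1, n_iter=200, tau=0.2, eps=1e-6):
            """Smoothed-TV (ROF) denoising by gradient descent:
            minimize 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
            u = f.astype(float).copy()
            for _ in range(n_iter):
                ux = np.roll(u, -1, axis=1) - u          # forward differences
                uy = np.roll(u, -1, axis=0) - u
                norm = np.sqrt(ux ** 2 + uy ** 2 + eps)
                px, py = ux / norm, uy / norm
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= tau * ((u - f) - lam * div)          # gradient step on the ROF energy
            return u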

  6. Sparsity-based Poisson denoising with dictionary learning.

    PubMed

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist that convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model built on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR. PMID:25312930
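
    The high-SNR route mentioned above (variance stabilization before Gaussian denoising) can be sketched with the Anscombe transform, assuming NumPy; this illustrates the transformation-based strategy that the paper argues breaks down at low SNR, not the sparse-coding scheme the authors propose.

        import numpy as np

        def anscombe(x):
            """Anscombe variance-stabilizing transform: approximately converts Poisson
            counts into unit-variance Gaussian data (accurate for mean counts >~ 4)."""
            return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

        def inverse_anscombe(y):
            """Simple algebraic inverse; unbiased inverses exist for the low-count regime."""
            return (y / 2.0) ** 2 - 3.0 / 8.0

        counts = np.random.poisson(lam=20.0, size=100000)
        print(np.var(anscombe(counts)))     # close to 1 for this mean count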

  7. Noise distribution and denoising of current density images

    PubMed Central

    Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-01-01

    Abstract. Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that can be used to study current pathways inside tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of the phase measurements, leading to imprecise current variations. The outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We have shown that the residual noise distribution of the phase is Gaussian-like and that the noise in CDI images can be approximated as Gaussian. This finding matches experimental results. We further investigated this finding by performing a comparative analysis of denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied to the current density (J). The minimum gain in noise power by BM3D applied to J, compared with the next best technique in the analysis, was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction. PMID:26158100

  8. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. PMID:23074149
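
    A minimal sketch of the classical Rician bias correction that the paper refines: under the Rician model the expected squared magnitude satisfies E[M²] = A² + 2σ², so a simple correction subtracts 2σ² from the squared denoised magnitude. The paper's own regression/Monte-Carlo formula is more elaborate; the data and noise level below are assumptions.

```python
import numpy as np

def rician_bias_correct(denoised_magnitude, sigma):
    # Classical correction: subtract the 2*sigma^2 bias, clipped at zero.
    corrected_sq = np.maximum(denoised_magnitude ** 2 - 2.0 * sigma ** 2, 0.0)
    return np.sqrt(corrected_sq)

m = np.abs(np.random.randn(64, 64)) + 5.0   # stand-in denoised magnitude image
sigma_hat = 1.0                              # assumed noise standard deviation
corrected = rician_bias_correct(m, sigma_hat)
```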

  9. Recent advances in wavelet technology

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    Wavelet research has been developing rapidly over the past five years, and in particular in the academic world there has been significant activity at numerous universities. In the industrial world, there have been developments at Aware, Inc., Lockheed, Martin-Marietta, TRW, Kodak, Exxon, and many others. The government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many other agencies. The literature of the past five years includes a book that indexes citations on this subject over the past decade, containing over 1,000 references and abstracts.

  10. Automatic Denoising and Unmixing in Hyperspectral Image Processing

    NASA Astrophysics Data System (ADS)

    Peng, Honghong

    This thesis addresses two important aspects of hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing in remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been driven by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands spanning the visible to the long-wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred to minimize human workload and achieve optimal results. Two of the most heavily researched processing steps in this automation effort are hyperspectral image denoising, which is an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis also introduces an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing the Stein

  11. A parallel splitting wavelet method for 2D conservation laws

    NASA Astrophysics Data System (ADS)

    Schmidt, Alex A.; Kozakevicius, Alice J.; Jakobsson, Stefan

    2016-06-01

    The current work presents a parallel formulation using the MPI protocol for an adaptive high order finite difference scheme to solve 2D conservation laws. Adaptivity is achieved at each time iteration by the application of an interpolating wavelet transform in each space dimension. High order approximations for the numerical fluxes are computed by ENO and WENO schemes. Since time evolution is made by a TVD Runge-Kutta space splitting scheme, the problem is naturally suitable for parallelization. Numerical simulations and speedup results are presented for Euler equations in gas dynamics problems.

  12. A flexible patch based approach for combined denoising and contrast enhancement of digital X-ray images.

    PubMed

    Irrera, Paolo; Bloch, Isabelle; Delplanque, Maurice

    2016-02-01

    Denoising and contrast enhancement play key roles in optimizing the trade-off between image quality and X-ray dose. However, these tasks present multiple challenges raised by noise level, low visibility of fine anatomical structures, heterogeneous conditions due to different exposure parameters, and patient characteristics. This work proposes a new method to address these challenges. We first introduce a patch-based filter adapted to the properties of the noise corrupting X-ray images. The filtered images are then used as oracles to define nonparametric noise containment maps that, when applied in a multiscale contrast enhancement framework, allow optimizing the trade-off between improvement of the visibility of anatomical structures and noise reduction. A large number of tests on both phantoms and clinical images have shown that the proposed method is better suited than others for visual inspection for diagnosis, even when compared to an algorithm used to process low-dose images in clinical routine. PMID:26716719

  13. Wavelet library for constrained devices

    NASA Astrophysics Data System (ADS)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast-wavelet-transform (FWT) implementation and several wavelet filters more suitable for constrained devices. Such constraints are typically found on mobile (cell) phones or personal digital assistants (PDAs). These constraints can be a combination of limited memory, slow floating-point operations (compared with integer operations, most often as a result of no hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing/analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands. We present experimental results to substantiate these claims. Finally, since this library is intended to be of real practical use, we considered several well-known differences among common embedded operating system platforms, such as a lack of common routines or functions, stack limitations, etc. This makes HeatWave suitable for a range of applications and research projects.

  14. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, where the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings in this tree-like structure are made from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.

  15. Computer-assisted counting of retinal cells by automatic segmentation after TV denoising

    PubMed Central

    2013-01-01

    Background Quantitative evaluation of mosaics of photoreceptors and neurons is essential in studies on development, aging and degeneration of the retina. Manual counting of samples is a time-consuming procedure, while attempts at automation are subject to various restrictions arising from biological and preparation variability, leading to both over- and underestimation of cell numbers. Here we present an adaptive algorithm to overcome many of these problems. Digital micrographs were obtained from cone photoreceptor mosaics visualized by anti-opsin immunocytochemistry in retinal wholemounts from a variety of mammalian species including primates. Segmentation of photoreceptors (from background, debris, blood vessels, other cell types) was performed by a procedure based on Rudin-Osher-Fatemi total variation (TV) denoising. Once 3 parameters are manually adjusted based on a sample, similarly structured images can be batch processed. The module is implemented in MATLAB and fully documented online. Results The object recognition procedure was tested on samples with a typical range of signal and background variations. We obtained results with error ratios of less than 10% in 16 of 18 samples and a mean error of less than 6% compared to manual counts. Conclusions The presented method provides a traceable module for automated acquisition of retinal cell density data. Remaining errors, including addition of background items and splitting or merging of objects, might be further reduced by introducing additional parameters. The module may be integrated into extended environments with features such as 3D acquisition and recognition. PMID:24138794
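
    A minimal sketch of the counting pipeline described above: TV denoising, a global threshold, and connected-component labelling. The function names, parameter values and synthetic image are illustrative assumptions; the published module is implemented in MATLAB with three manually tuned parameters.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from scipy import ndimage

def count_cells(image, tv_weight=0.1, threshold=0.5, min_size=20):
    smoothed = denoise_tv_chambolle(image, weight=tv_weight)   # TV denoising step
    mask = smoothed > threshold                                # segment bright objects
    labels, n_objects = ndimage.label(mask)                    # connected components
    # Discard components smaller than min_size pixels (debris, noise specks).
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_objects + 1))
    return int(np.count_nonzero(sizes >= min_size))

micrograph = np.random.rand(512, 512)  # stand-in for an anti-opsin micrograph
print(count_cells(micrograph))
```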

  16. Noise reduction of nuclear magnetic resonance (NMR) transversal data using improved wavelet transform and exponentially weighted moving average (EWMA)

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Fan, Yiren; Li, Jiangtao; Wang, Yang; Deng, Shaogui

    2015-02-01

    NMR logging and core NMR signals act as an effective means of pore structure evaluation and fluid discrimination, but they are greatly contaminated by noise for samples with low magnetic resonance intensity. The transversal relaxation time (T2) spectrum obtained by inversion of decay signals induced by the Carr-Purcell-Meiboom-Gill (CPMG) sequence may deviate from the truth if the signal-to-noise ratio (SNR) is imperfect. A method combining improved wavelet thresholding with the EWMA is proposed for noise reduction of the decay data. First, the wavelet basis function and decomposition level are optimized in consideration of information entropy and white noise estimation. Then a hybrid threshold function is developed to avoid the drawbacks of the hard and soft threshold functions. To achieve the best thresholding values at different levels, a nonlinear objective function based on SNR and mean square error (MSE) is constructed, transforming the problem into a task of finding optimal solutions. Particle swarm optimization (PSO) is used to ensure stability and global convergence. EWMA is carried out to eliminate unwanted peaks and sawtooth artifacts from the wavelet-denoised signal. Validated by numerical simulations and experiments, the proposed approach is shown to reduce the noise of T2 decay data effectively.
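
    A minimal sketch of the EWMA smoothing step applied to a wavelet-denoised CPMG-like decay, s[t] = λ·x[t] + (1-λ)·s[t-1]. The smoothing constant and synthetic signal are assumptions; the paper's PSO-optimized hybrid wavelet threshold is not reproduced here.

```python
import numpy as np

def ewma(x, lam=0.2):
    # Exponentially weighted moving average along a 1-D signal.
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = lam * x[t] + (1.0 - lam) * s[t - 1]
    return s

t = np.linspace(0.0, 1.0, 500)
decay = np.exp(-t / 0.1) + 0.05 * np.random.randn(t.size)  # noisy CPMG-like decay
smoothed = ewma(decay, lam=0.15)
```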

  17. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  18. Independent component analysis (ICA) using wavelet subband orthogonality

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Yamakawa, Takeshi

    1998-03-01

    There are two kinds of RRP: (1) invertible ones, such as the global Fourier transform (FT), local wavelet transform (WT), and adaptive wavelet transform (AWT); and (2) non-invertible ones, e.g. ICA, including global principal component analysis (PCA). The invertible FT and WT can be related to the non-invertible ICA when the continuous transforms are approximated by discrete matrix-vector operations. The landmark accomplishment of ICA is to obtain, by an unsupervised learning algorithm, the edge map as the image feature, as shown by Helsinki researchers using the fourth-order statistic, the kurtosis K(u), derived from information-theoretic first principles; here this is augmented by the orthogonality property of the DWT subbands that are in any case used for standard image compression. If we take advantage of the subband decorrelation, we can potentially make efficient use of a pair of communication channels by sending several more mixed subband images through that pair of channels.

  19. Segmentation of complementary DNA microarray images by wavelet-based Markov random field model.

    PubMed

    Athanasiadis, Emmanouil I; Cavouras, Dionisis A; Glotsos, Dimitris Th; Georgiadis, Pantelis V; Kalatzis, Ioannis K; Nikiforidis, George C

    2009-11-01

    A wavelet-based modification of the Markov random field (WMRF) model is proposed for segmenting complementary DNA (cDNA) microarray images. For evaluation purposes, five simulated and a set of five real microarray images were used. The one-level stationary wavelet transform (SWT) of each microarray image was used to form two images, a denoised image, using hard thresholding filter, and a magnitude image, from the amplitudes of the horizontal and vertical components of SWT. Elements from these two images were suitably combined to form the WMRF model for segmenting spots from their background. The WMRF was compared against the conventional MRF and the Fuzzy C means (FCM) algorithms on simulated and real microarray images and their performances were evaluated by means of the segmentation matching factor (SMF) and the coefficient of determination (r2). Additionally, the WMRF was compared against the SPOT and SCANALYZE, and performances were evaluated by the mean absolute error (MAE) and the coefficient of variation (CV). The WMRF performed more accurately than the MRF and FCM (SMF: 92.66, 92.15, and 89.22, r2 : 0.92, 0.90, and 0.84, respectively) and achieved higher reproducibility than the MRF, SPOT, and SCANALYZE (MAE: 497, 1215, 1180, and 503, CV: 0.88, 1.15, 0.93, and 0.90, respectively).

  20. Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement.

    PubMed

    Wang, Junjie; He, Xiufeng; Ferreira, Vagner G

    2015-08-07

    Monitoring ocean waves plays a crucial role in, for example, coastal environmental and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical GPS-mounted carrier displacements is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average skill, high pass filter and wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method.

  1. Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement

    PubMed Central

    Wang, Junjie; He, Xiufeng; Ferreira, Vagner G.

    2015-01-01

    Monitoring ocean waves plays a crucial role in, for example, coastal environmental and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical GPS-mounted carrier displacements is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average skill, high pass filter and wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method. PMID:26262620

  2. Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter

    2002-01-01

    Our approach exploits the close relation between signal de-noising and regression problems concerned with estimating functions that reflect the dependency between a set of inputs and dependent outputs corrupted by some level of noise.

  3. Application of adaptive subband coding for noisy bandlimited ECG signal processing

    NASA Astrophysics Data System (ADS)

    Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.

    1996-03-01

    An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm performs well in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using the Daubechies D4 wavelet without the bandlimited adaptive wavelet transform.

  4. Exploiting the self-similarity in ERP images by nonlocal means for single-trial denoising.

    PubMed

    Strauss, Daniel J; Teuber, Tanja; Steidl, Gabriele; Corona-Strauss, Farah I

    2013-07-01

    Event related potentials (ERPs) represent a noninvasive and widely available means to analyze neural correlates of sensory and cognitive processing. Recent developments in neural and cognitive engineering proposed completely new application fields of this well-established measurement technique when using an advanced single-trial processing. We have recently shown that 2-D diffusion filtering methods from image processing can be used for the denoising of ERP single-trials in matrix representations, also called ERP images. In contrast to conventional 1-D transient ERP denoising techniques, the 2-D restoration of ERP images allows for an integration of regularities over multiple stimulations into the denoising process. Advanced anisotropic image restoration methods may require directional information for the ERP denoising process. This is especially true if there is a lack of a priori knowledge about possible traces in ERP images. However, due to the use of event-related experimental paradigms, ERP images are characterized by a high degree of self-similarity over the individual trials. In this paper, we propose the simple and easy to apply nonlocal means method for ERP image denoising in order to exploit this self-similarity rather than focusing on the edge-based extraction of directional information. Using measured and simulated ERP data, we compare our method to conventional approaches in ERP denoising. It is concluded that the self-similarity in ERP images can be exploited for single-trial ERP denoising by the proposed approach. This method might be promising for a variety of evoked and event-related potential applications, including nonstationary paradigms such as changing exogenous stimulus characteristics or endogenous states during the experiment. As presented, the proposed approach is for the a posteriori denoising of single-trial sequences.
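
    A minimal sketch of nonlocal means applied to an ERP image (trials x time samples), using scikit-image's implementation. The patch sizes, filter strength and synthetic data are assumptions, not the paper's measured ERP datasets.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

trials, samples = 100, 400
# Synthetic ERP image: a repeated evoked waveform buried in trial-wise noise.
erp_image = 0.5 * np.sin(np.linspace(0, 3 * np.pi, samples))[None, :] \
            + 1.0 * np.random.randn(trials, samples)

sigma_est = estimate_sigma(erp_image)
denoised = denoise_nl_means(erp_image, patch_size=5, patch_distance=7,
                            h=0.8 * sigma_est, sigma=sigma_est)
```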

  5. Optical wavelet transform for fingerprint identification

    NASA Astrophysics Data System (ADS)

    MacDonald, Robert P.; Rogers, Steven K.; Burns, Thomas J.; Fielding, Kenneth H.; Warhola, Gregory T.; Ruck, Dennis W.

    1994-03-01

    The Federal Bureau of Investigation (FBI) has recently sanctioned a wavelet fingerprint image compression algorithm developed for reducing storage requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical fingerprint identification process. Wavelet filters are created from computer- generated holograms of biorthogonal wavelets, the same wavelets implemented in the FBI algorithm. Using a detour phase holographic technique, a complex binary filter mask is created with both symmetry and linear phase. The wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial Light Modulator and the biorthogonal wavelet filters. A telescopic lens combination scales the transformed fingerprint onto the filters, providing a means of adjusting the biorthogonal wavelet filter dilation continuously. The wavelet transformed fingerprint is then applied to an optical fingerprint identification process. Comparison between normal fingerprints and wavelet transformed fingerprints shows improvement in the optical identification process, in terms of rotational invariance.

  6. Non-local mean denoising in diffusion tensor space

    PubMed Central

    SU, BAIHAI; LIU, QIANG; CHEN, JIE; WU, XI

    2014-01-01

    The aim of the present study was to present a novel non-local mean (NLM) method to denoise diffusion tensor imaging (DTI) data in the tensor space. Compared with the original NLM method, which uses intensity similarity to weigh the voxel, the proposed method weighs the voxel using tensor similarity measures in the diffusion tensor space. Euclidean distance with rotational invariance, and Riemannian distance and Log-Euclidean distance with affine invariance were implemented to compare the geometric and orientation features of the diffusion tensor comprehensively. The accuracy and efficacy of the proposed novel NLM method using these three similarity measures in DTI space, along with unbiased novel NLM in diffusion-weighted image space, were compared quantitatively and qualitatively in the present study. PMID:25009599

  7. Denoised Wigner distribution deconvolution via low-rank matrix completion.

    PubMed

    Lee, Justin; Barbastathis, George

    2016-09-01

    Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object's phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise. PMID:27607616

  8. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data.

    PubMed

    Pnevmatikakis, Eftychios A; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M; Peterka, Darcy S; Yuste, Rafael; Paninski, Liam

    2016-01-20

    We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multi-neuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160

  9. MRI noise estimation and denoising using non-local PCA.

    PubMed

    Manjón, José V; Coupé, Pierrick; Buades, Antonio

    2015-05-01

    This paper proposes a novel method for MRI denoising that exploits both the sparseness and self-similarity properties of MR images. The proposed method is a two-stage approach that first filters the noisy image using a non-local PCA thresholding strategy, automatically estimating the local noise level present in the image, and second uses this filtered image as a guide image within a rotationally invariant non-local means filter. The proposed method internally estimates the amount of local noise present in the images, which enables applying it automatically to images with spatially varying noise levels, and it also corrects the Rician noise-induced bias locally. The proposed approach has been compared with related state-of-the-art methods, showing competitive results in all the studied cases. PMID:25725303

  10. Analysis of photonic Doppler velocimetry data based on the continuous wavelet transform

    SciTech Connect

    Liu Shouxian; Wang Detian; Li Tao; Chen Guanghua; Li Zeren; Peng Qixian

    2011-02-15

    The short time Fourier transform (STFT) cannot resolve rapid velocity changes in most photonic Doppler velocimetry (PDV) data. A practical analysis method based on the continuous wavelet transform (CWT) is presented to overcome this difficulty. The adaptability of the wavelet family means that the continuous wavelet transform uses an adaptive time window to estimate the instantaneous frequency of a signal. The local frequencies of the signal are accurately determined by finding the ridge in the CWT spectrogram and are then converted to target velocity according to the Doppler effect. A performance comparison between the CWT and STFT is demonstrated on data from a plate-impact experiment. The results illustrate that the new method is automatic and adequate for the analysis of PDV data.
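
    A minimal sketch of the ridge-following idea: compute a complex Morlet scalogram with PyWavelets, take the ridge as the scale of maximum magnitude at each time instant, and convert the ridge frequency to velocity via the Doppler relation v = λ·f_beat/2. The wavelet name, laser wavelength, sampling rate and synthetic chirp are assumptions, not the paper's experimental data.

```python
import numpy as np
import pywt

fs = 1e9                                        # sampling rate (assumed)
t = np.arange(0, 5e-6, 1.0 / fs)
f_beat = 50e6 + 4e12 * t                        # synthetic accelerating beat frequency
signal = np.cos(2 * np.pi * np.cumsum(f_beat) / fs)

scales = np.arange(4, 128)
coefs, freqs = pywt.cwt(signal, scales, 'cmor1.5-1.0', sampling_period=1.0 / fs)

ridge_idx = np.argmax(np.abs(coefs), axis=0)    # ridge: dominant scale at each time
inst_freq = freqs[ridge_idx]                    # instantaneous beat frequency (Hz)
wavelength = 1550e-9                            # probe laser wavelength (assumed)
velocity = wavelength * inst_freq / 2.0         # Doppler conversion to velocity
```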

  11. New fuzzy wavelet network for modeling and control: The modeling approach

    NASA Astrophysics Data System (ADS)

    Ebadat, Afrooz; Noroozi, Navid; Safavi, Ali Akbar; Mousavi, Seyyed Hossein

    2011-08-01

    In this paper, a fuzzy wavelet network is proposed to approximate arbitrary nonlinear functions based on the theory of multiresolution analysis (MRA) of the wavelet transform and fuzzy concepts. The presented network combines TSK fuzzy models with the wavelet transform and the ROLS learning algorithm while still preserving the property of linearity in parameters. In order to reduce the number of fuzzy rules, fuzzy clustering is invoked. In the clustering algorithm, those wavelets that are closer to each other in the sense of the Euclidean norm are placed in a group and are used in the consequent part of a fuzzy rule. The antecedent parts of the rules are Gaussian membership functions. Determination of the deviation parameter is performed with the help of the gold partition method. Here, the mean of each function is derived by averaging the centers of all wavelets that are related to that particular rule. The overall developed fuzzy wavelet network is called fuzzy wave-net, and simulation results show superior performance over previous networks. The present work is complemented by a second part, which focuses on the control aspects and is to be published in this journal [17]; that paper proposes an observer-based self-structuring robust adaptive fuzzy wave-net (FWN) controller for a class of nonlinear uncertain multi-input multi-output systems.

  12. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally performs best on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  13. Robust rate-control for wavelet-based image coding via conditional probability models.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-03-01

    Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented using two invocations of the coder that estimates the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution, as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease down to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.

  14. Wavelet analysis in two-dimensional tomography

    NASA Astrophysics Data System (ADS)

    Burkovets, Dimitry N.

    2002-02-01

    The diagnostic possibilities of wavelet analysis of coherent images of connective tissue are considered for the diagnosis of its pathological changes. The effectiveness of polarization selection in obtaining images of wavelet coefficients is also shown. The wavelet structures characterizing skin psoriasis and bone-tissue osteoporosis have been analyzed, using histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue.

  15. Wavelet analysis of epileptic spikes

    NASA Astrophysics Data System (ADS)

    Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.

    2003-05-01

    Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been the long standing problem in EEG analysis, especially after long-term monitoring became common in investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.

  16. Radiation Dose Reduction in Pediatric Body CT Using Iterative Reconstruction and a Novel Image-Based Denoising Method

    PubMed Central

    Yu, Lifeng; Fletcher, Joel G.; Shiung, Maria; Thomas, Kristen B.; Matsumoto, Jane M.; Zingula, Shannon N.; McCollough, Cynthia H.

    2016-01-01

    OBJECTIVE The objective of this study was to evaluate the radiation dose reduction potential of a novel image-based denoising technique in pediatric abdominopelvic and chest CT examinations and compare it with a commercial iterative reconstruction method. MATERIALS AND METHODS Data were retrospectively collected from 50 (25 abdominopelvic and 25 chest) clinically indicated pediatric CT examinations. For each examination, a validated noise-insertion tool was used to simulate half-dose data, which were reconstructed using filtered back-projection (FBP) and sinogram-affirmed iterative reconstruction (SAFIRE) methods. A newly developed denoising technique, adaptive nonlocal means (aNLM), was also applied. For each of the 50 patients, three pediatric radiologists evaluated four datasets: full dose plus FBP, half dose plus FBP, half dose plus SAFIRE, and half dose plus aNLM. For each examination, the order of preference for the four datasets was ranked. The organ-specific diagnosis and diagnostic confidence for five primary organs were recorded. RESULTS The mean (± SD) volume CT dose index for the full-dose scan was 5.3 ± 2.1 mGy for abdominopelvic examinations and 2.4 ± 1.1 mGy for chest examinations. For abdominopelvic examinations, there was no statistically significant difference between the half dose plus aNLM dataset and the full dose plus FBP dataset (3.6 ± 1.0 vs 3.6 ± 0.9, respectively; p = 0.52), and aNLM performed better than SAFIRE. For chest examinations, there was no statistically significant difference between the half dose plus SAFIRE and the full dose plus FBP (4.1 ± 0.6 vs 4.2 ± 0.6, respectively; p = 0.67), and SAFIRE performed better than aNLM. For all organs, there was more than 85% agreement in organ-specific diagnosis among the three half-dose configurations and the full dose plus FBP configuration. CONCLUSION Although a novel image-based denoising technique performed better than a commercial iterative reconstruction method in pediatric

  17. Haar Wavelet Analysis of Climatic Time Series

    NASA Astrophysics Data System (ADS)

    Zhang, Zhihua; Moore, John; Grinsted, Aslak

    2014-05-01

    In order to extract the intrinsic information of climatic time series from background red noise, we first give an analytic formula for the distribution of Haar wavelet power spectra of red noise in a rigorous statistical framework. The relation between scale a and Fourier period T for the Morlet wavelet is a = 0.97T; for the Haar wavelet, the corresponding formula is a = 0.37T. Since, for any time series of time step δt and total length Nδt, the range of scales in wavelet-based time series analysis runs from the smallest resolvable scale 2δt to the largest scale Nδt, the Haar wavelet analysis can extract more low-frequency intrinsic information. Finally, we use our method to analyze the Arctic Oscillation (AO), which is a key aspect of climate variability in the Northern Hemisphere, and discover a great change in the fundamental properties of the AO, commonly called a regime shift or tipping point. Our partial results have been published as follows: [1] Z. Zhang, J.C. Moore and A. Grinsted, Haar wavelet analysis of climatic time series, Int. J. Wavelets, Multiresol. & Inf. Process., in press, 2013. [2] Z. Zhang, J.C. Moore, Comment on "Significance tests for the wavelet power and the wavelet power spectrum", Ann. Geophys., 30:12, 2012.

  18. Entangled Husimi Distribution and Complex Wavelet Transformation

    NASA Astrophysics Data System (ADS)

    Hu, Li-Yun; Fan, Hong-Yi

    2010-05-01

    Similar in spirit to the preceding work (Int. J. Theor. Phys. 48:1539, 2009), where the relationship between the wavelet transformation and the Husimi distribution function is revealed, we extend this kind of relationship to the entangled case. We find that the optical complex wavelet transformation can be used to study the entangled Husimi distribution function in the phase space theory of quantum optics. We prove that, up to a Gaussian function, the entangled Husimi distribution function of a two-mode quantum state |ψ⟩ is just the modulus square of the complex wavelet transform of e^{-|η|²/2}, with ψ(η) being the mother wavelet.

  19. Wavelet Analysis of Soil Reflectance for the Characterization of Soil Properties

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Wavelet analysis has proven to be effective in many fields including signal processing and digital image analysis. Recently, it has been adapted to spectroscopy, where the reflectance of various materials is measured with respect to wavelength (nm) or wave number (cm-1). Spectra can cover broad wave...

  20. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies by Grote and Huckle and by Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collection. Nonetheless, a drawback is that the approach requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential, but have not yet developed a highly refined and efficient algorithm.

  1. Mass spectrometry data processing using zero-crossing lines in multi-scale of Gaussian derivative wavelet

    PubMed Central

    Nguyen, Nha; Huang, Heng; Oraintara, Soontorn; Vo, An

    2010-01-01

    Motivation: Peaks are the key information in mass spectrometry (MS), which has been increasingly used to discover disease-related proteomic patterns. Peak detection is an essential step for MS-based proteomic data analysis. Recently, several peak detection algorithms have been proposed. However, these algorithms have three major deficiencies: (i) because the noise is often removed, the true signal could also be removed; (ii) the baseline removal step may get rid of true peaks and create new false peaks; (iii) in the peak quantification step, a threshold on the signal-to-noise ratio (SNR) is usually used to remove false peaks; however, noise estimates in the SNR calculation are often inaccurate in either the time or the wavelet domain. In this article, we propose new algorithms to solve these problems. First, we use a bivariate shrinkage estimator in the stationary wavelet domain to avoid removing true peaks in the denoising step. Second, without baseline removal, zero-crossing lines across multiple scales of Gaussian derivative wavelets are investigated with a Gaussian mixture to estimate discriminative parameters of peaks. Third, in the quantification step, the frequency, SD, height and rank of peaks are used to detect both high- and small-energy peaks with robustness to noise. Results: We propose a novel Gaussian Derivative Wavelet (GDWavelet) method to more accurately detect true peaks with a lower false discovery rate than existing methods. The proposed GDWavelet method has been applied to a real Surface-Enhanced Laser Desorption/Ionization Time-Of-Flight (SELDI-TOF) spectrum with known polypeptide positions and to two synthetic datasets with Gaussian and real noise. All experimental results demonstrate that our method outperforms other commonly used methods. Standard receiver operating characteristic (ROC) curves are used to evaluate the experimental results. Availability: http://ranger.uta.edu/∼heng/MS/GDWavelet.html or http://www.naaan.org/nhanguyen/archive.htm Contact: heng
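
    A minimal sketch of the zero-crossing idea behind such Gaussian-derivative peak detection: smooth a spectrum with first-order Gaussian derivatives at several scales and locate sign changes, which trace peak positions across scales. Only the core concept is shown; the bivariate shrinkage denoising and Gaussian-mixture quantification steps are not reproduced, and the scales and synthetic spectrum are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def zero_crossings_multiscale(spectrum, sigmas=(2, 4, 8, 16)):
    crossings = {}
    for sigma in sigmas:
        d = gaussian_filter1d(spectrum, sigma=sigma, order=1)  # Gaussian-derivative smoothing
        idx = np.where(np.diff(np.sign(d)) < 0)[0]             # + to - crossings mark maxima
        crossings[sigma] = idx
    return crossings

mz = np.linspace(0, 1, 5000)
spectrum = np.exp(-((mz - 0.3) / 0.002) ** 2) \
           + 0.5 * np.exp(-((mz - 0.6) / 0.004) ** 2) \
           + 0.05 * np.random.randn(mz.size)
peaks_by_scale = zero_crossings_multiscale(spectrum)
```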

  2. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems. PMID:24880269

  3. Daubechies wavelets for linear scaling density functional theory

    SciTech Connect

    Mohr, Stephan; Ratcliff, Laura E.; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Boulanger, Paul; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10 000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.

  4. A new wavelet-based thin plate element using B-spline wavelet on the interval

    NASA Astrophysics Data System (ADS)

    Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang

    2008-01-01

    By combining wavelet theory with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction of the wavelet-based plate element, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into physical degrees of freedom in finite element space via the corresponding two-dimensional C1-type transformation matrix. Then, based on the associated generalized potential energy functional of thin plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multi-scale finite element approximation basis, so as to construct the BSWI plate element via the variational principle. The BSWI plate element combines the accuracy of B-spline function approximation with the advantages of various wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.

  5. Edge-preserving image denoising via group coordinate descent on the GPU

    PubMed Central

    McGaffin, Madison G.; Fessler, Jeffrey A.

    2015-01-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel one-dimensional pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation (TV). Both algorithms use the majorize-minimize (MM) framework to solve the one-dimensional pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time. PMID:25675454

  6. Total variation-regularized weighted nuclear norm minimization for hyperspectral image mixed denoising

    NASA Astrophysics Data System (ADS)

    Wu, Zhaojun; Wang, Qiang; Wu, Zhenghua; Shen, Yi

    2016-01-01

    Many nuclear norm minimization (NNM)-based methods have been proposed for hyperspectral image (HSI) mixed denoising due to the low-rank (LR) characteristics of clean HSI. However, the NNM-based methods regularize each eigenvalue equally, which is unsuitable for the denoising problem, where each eigenvalue carries a specific physical meaning and should be regularized differently. Moreover, the NNM-based methods exploit only the high spectral correlation, while ignoring the local structure of the HSI, resulting in spatial distortions. To address these problems, a total variation (TV)-regularized weighted nuclear norm minimization (TWNNM) method is proposed. To obtain the desired denoising performance, two issues are addressed. First, to exploit the high spectral correlation, the HSI is restricted to be LR, and different eigenvalues are minimized with different weights based on the WNNM. Second, to preserve the local structure of the HSI, TV regularization is incorporated, and the alternating direction method of multipliers is used to solve the resulting optimization problem. Both simulated and real data experiments demonstrate that the proposed TWNNM approach produces superior denoising results for the mixed noise case in comparison with several state-of-the-art denoising methods.
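
    A minimal sketch of the weighted singular-value thresholding step that underlies WNNM-type low-rank denoising: soft-threshold each singular value with its own weight, shrinking large singular values less. The TV regularization and ADMM splitting of the full TWNNM method are not shown, and the 1/(singular value + eps) weighting is a common heuristic assumed here, not the paper's exact scheme.

```python
import numpy as np

def weighted_svt(matrix, c=1.0, eps=1e-6):
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    weights = c / (s + eps)                  # larger singular values get smaller weights
    s_shrunk = np.maximum(s - weights, 0.0)  # weighted soft thresholding
    return (u * s_shrunk) @ vt

# Example: unfold an HSI cube into a (pixels x bands) Casorati matrix and shrink it.
hsi = np.random.rand(64, 64, 30)
casorati = hsi.reshape(-1, hsi.shape[-1])
low_rank = weighted_svt(casorati, c=5.0).reshape(hsi.shape)
```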

  7. Edge-preserving image denoising via group coordinate descent on the GPU.

    PubMed

    McGaffin, Madison Gray; Fessler, Jeffrey A

    2015-04-01

    Image denoising is a fundamental operation in image processing, and its applications range from the direct (photographic enhancement) to the technical (as a subproblem in image reconstruction algorithms). In many applications, the number of pixels has continued to grow, while the serial execution speed of computational hardware has begun to stall. New image processing algorithms must exploit the power offered by massively parallel architectures like graphics processing units (GPUs). This paper describes a family of image denoising algorithms well-suited to the GPU. The algorithms iteratively perform a set of independent, parallel 1D pixel-update subproblems. To match GPU memory limitations, they perform these pixel updates in-place and only store the noisy data, denoised image, and problem parameters. The algorithms can handle a wide range of edge-preserving roughness penalties, including differentiable convex penalties and anisotropic total variation. Both algorithms use the majorize-minimize framework to solve the 1D pixel update subproblem. Results from a large 2D image denoising problem and a 3D medical imaging denoising problem demonstrate that the proposed algorithms converge rapidly in terms of both iteration and run-time.

  8. Iterative weighted maximum likelihood denoising with probabilistic patch-based weights.

    PubMed

    Deledalle, Charles-Alban; Denis, Loïc; Tupin, Florence

    2009-12-01

    Image denoising is an important problem in image processing since noise may interfere with visual or automatic interpretation. This paper presents a new approach for image denoising in the case of a known uncorrelated noise model. The proposed filter is an extension of the nonlocal means (NL means) algorithm introduced by Buades et al., which performs a weighted average of the values of similar pixels. Pixel similarity is defined in NL means as the Euclidean distance between patches (rectangular windows centered on each of the two pixels). In this paper, a more general and statistically grounded similarity criterion is proposed which depends on the noise distribution model. The denoising process is expressed as a weighted maximum likelihood estimation problem where the weights are derived in a data-driven way. These weights can be iteratively refined based on both the similarity between noisy patches and the similarity of patches extracted from the previous estimate. We show that this iterative process noticeably improves the denoising performance, especially in the case of low signal-to-noise ratio images such as synthetic aperture radar (SAR) images. Numerical experiments illustrate that the technique can be successfully applied to the classical case of additive Gaussian noise but also to cases such as multiplicative speckle noise. The proposed denoising technique seems to improve on state-of-the-art performance in that latter case.

  9. [An Improved Empirical Mode Decomposition Algorithm for Phonocardiogram Signal De-noising and Its Application in S1/S2 Extraction].

    PubMed

    Gong, Jing; Nie, Shengdong; Wang, Yuanjun

    2015-10-01

    In this paper, an improved empirical mode decomposition (EMD) algorithm for phonocardiogram (PCG) signal de-noising is proposed. Based on PCG signal processing theory, the S1/S2 components can be extracted by combining the improved EMD-Wavelet algorithm with the Shannon energy envelope algorithm. Firstly, by applying the EMD-Wavelet algorithm for pre-processing, the PCG signal was well filtered. Then, the filtered PCG signal was saved and used in the following processing steps. Secondly, the time domain features, frequency domain features and energy envelope of each intrinsic mode function (IMF) were computed. Based on the time-frequency domain features of the PCG's IMF components extracted by the EMD algorithm and the energy envelope of the PCG, the S1/S2 components were pinpointed accurately. Meanwhile, a correction method based on time domain processing was proposed to amend the detection results. Finally, to test the performance of the algorithm proposed in this paper, a series of experiments was designed. Thirty samples were tested to validate the effectiveness of the new method. Results of the test experiments revealed that the accuracy for recognizing S1/S2 components was as high as 99.75%. Comparing the results of the method proposed in this paper with those of the traditional algorithm, the detection accuracy was increased by 5.56%. The detection results showed that the algorithm described in this paper was effective and accurate. The work described in this paper will be utilized in further studies on identity recognition.

  10. A fast-convergence POCS seismic denoising and reconstruction method

    NASA Astrophysics Data System (ADS)

    Ge, Zi-Jian; Li, Jing-Ye; Pan, Shu-Lin; Chen, Xiao-Hong

    2015-06-01

    The efficiency, precision, and denoising capabilities of reconstruction algorithms are critical to seismic data processing. Based on the Fourier-domain projection onto convex sets (POCS) algorithm, we propose an inversely proportional threshold model that defines the optimum threshold, in which the descent rate is larger than that of the exponential threshold in the large-coefficient section and slower in the small-coefficient section. Thus, the computational efficiency of the POCS seismic reconstruction greatly improves without affecting the reconstruction precision of weak reflections. To improve the flexibility of the inversely proportional threshold, we obtain the optimal threshold by using an adjustable dependent variable in the denominator of the inversely proportional threshold model. For random noise attenuation while completing the missing traces in seismic data reconstruction, we present a weighted reinsertion strategy based on a data-driven model obtained from the percentage of the data-driven threshold in each iteration of the thresholding stage. We apply the proposed POCS reconstruction method to 3D synthetic and field data. The results suggest that the inversely proportional threshold model improves the computational efficiency and precision compared with the traditional threshold models; furthermore, the proposed reinsertion weight strategy increases the SNR of the reconstructed data.
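
    The sketch below contrasts an exponential threshold schedule with an inversely proportional one across POCS iterations. The exact functional form used here is an illustrative assumption, chosen only so that the threshold decays quickly at early iterations (large coefficients) and slowly at late ones (small coefficients), as the abstract describes; it is not the paper's precise model.

```python
# Minimal sketch of two iteration-dependent threshold schedules for POCS-style
# reconstruction. Both interpolate between t_max (first iteration) and t_min (last).
import numpy as np

def exponential_threshold(k, n_iter, t_max, t_min):
    """Classic exponentially decaying threshold over iterations k = 1..n_iter."""
    return t_max * (t_min / t_max) ** ((k - 1) / (n_iter - 1))

def inv_proportional_threshold(k, n_iter, t_max, t_min):
    """An inversely proportional decay: fast at first, slow near the end."""
    x = (k - 1) / (n_iter - 1)
    return t_max * t_min / (t_min + (t_max - t_min) * x)

ks = np.arange(1, 11)
print([round(exponential_threshold(k, 10, 1.0, 0.01), 3) for k in ks])
print([round(inv_proportional_threshold(k, 10, 1.0, 0.01), 3) for k in ks])
```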

  11. Unsupervised dealiasing and denoising of color-Doppler data.

    PubMed

    Muth, Stéphan; Dort, Sarah; Sebag, Igal A; Blais, Marie-Josée; Garcia, Damien

    2011-08-01

    Color Doppler imaging (CDI) is the premiere modality to analyze blood flow in clinical practice. In the prospect of producing new CDI-based tools, we developed a fast unsupervised denoiser and dealiaser (DeAN) algorithm for color Doppler raw data. The proposed technique uses robust and automated image post-processing techniques that make the DeAN clinically compliant. The DeAN includes three consecutive advanced and hands-off numerical tools: (1) statistical region merging segmentation, (2) recursive dealiasing process, and (3) regularized robust smoothing. The performance of the DeAN was evaluated using Monte-Carlo simulations on mock Doppler data corrupted by aliasing and inhomogeneous noise. Fifty aliased Doppler images of the left ventricle acquired with a clinical ultrasound scanner were also analyzed. The analytical study demonstrated that color Doppler data can be reconstructed with high accuracy despite the presence of strong corruption. The normalized RMS error on the numerical data was less than 8% even with signal-to-noise ratio as low as 10dB. The algorithm also allowed us to recover highly reliable Doppler flows in clinical data. The DeAN is fast, accurate and not observer-dependent. Preliminary results showed that it is also directly applicable to 3-D data. This will offer the possibility of developing new tools to better decipher the blood flow dynamics in cardiovascular diseases.

  12. Load identification approach based on basis pursuit denoising algorithm

    NASA Astrophysics Data System (ADS)

    Ginsberg, D.; Ruby, M.; Fritzen, C. P.

    2015-07-01

    The information about the external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system outputs and the impulse response functions are the known quantities. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution this knowledge is used to develop a more suitable force reconstruction method, which allows identifying the time history and the force location simultaneously while employing significantly fewer sensors compared to other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads based on noisy structural measurement signals is demonstrated by considering two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out and thereafter an experimental investigation of a real beam is performed.
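
    Below is a minimal sketch of basis pursuit denoising solved with iterative soft thresholding (ISTA): min_x 0.5*||Ax - b||^2 + lam*||x||_1. In a load-identification setting the columns of A would hold sampled impulse-response functions and the sparse vector x the force time history and location. The random system, the regularization weight, and the iteration count are illustrative assumptions.

```python
# Minimal ISTA sketch for basis pursuit denoising (BPDN).
import numpy as np

def ista_bpdn(A, b, lam=0.1, n_iter=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = largest eigenvalue of A^T A
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))              # underdetermined: 60 samples, 200 unknowns
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [1.0, -0.5, 2.0]        # sparse "load" vector
b = A @ x_true + 0.01 * rng.standard_normal(60)
print(np.flatnonzero(np.abs(ista_bpdn(A, b)) > 0.1))   # recovered support
```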

  13. Generalized non-local means filtering for image denoising

    NASA Astrophysics Data System (ADS)

    Dolui, Sudipto; Salgado Patarroyo, Iván. C.; Michailovich, Oleg V.

    2014-02-01

    Non-local means (NLM) filtering has been shown to outperform alternative denoising methodologies under the model of additive white Gaussian noise contamination. Recently, several theoretical frameworks have been developed to extend this class of algorithms to more general types of noise statistics. However, many of these frameworks are specifically designed for a single noise contamination model, and are far from optimal across varying noise statistics. The NLM filtering techniques rely on the definition of a similarity measure, which quantifies the similarity of two neighbourhoods along with their respective centroids. The key to the unification of the NLM filter for different noise statistics lies in the definition of a universal similarity measure which is guaranteed to provide favourable performance irrespective of the statistics of the noise. Accordingly, the main contribution of this work is to provide a rigorous statistical framework to derive such a universal similarity measure, while highlighting some of its theoretical and practical favourable characteristics. Additionally, the closed form expressions of the proposed similarity measure are provided for a number of important noise scenarios and the practical utility of the proposed similarity measure is demonstrated through numerical experiments.

  14. Fast fractal image compression with triangulation wavelets

    NASA Astrophysics Data System (ADS)

    Hebert, D. J.; Soundararajan, Ezekiel

    1998-10-01

    We address the problem of improving the performance of wavelet based fractal image compression by applying efficient triangulation methods. We construct iterative function systems (IFS) in the tradition of Barnsley and Jacquin, using non-uniform triangular range and domain blocks instead of uniform rectangular ones. We search for matching domain blocks in the manner of Zhang and Chen, performing a fast wavelet transform on the blocks and eliminating low resolution mismatches to gain speed. We obtain further improvements by the efficiencies of binary triangulations (including the elimination of affine and symmetry calculations and reduced parameter storage), and by pruning the binary tree before construction of the IFS. Our wavelets are triangular Haar wavelets and `second generation' interpolation wavelets as suggested by Sweldens' recent work.

  15. An Investigation of Wavelet Bases for Grid-Based Multi-Scale Simulations Final Report

    SciTech Connect

    Baty, R.S.; Burns, S.P.; Christon, M.A.; Roach, D.W.; Trucano, T.G.; Voth, T.E.; Weatherby, J.R.; Womble, D.E.

    1998-11-01

    The research summarized in this report is the result of a two-year effort that has focused on evaluating the viability of wavelet bases for the solution of partial differential equations. The primary objective for this work has been to establish a foundation for hierarchical/wavelet simulation methods based upon numerical performance, computational efficiency, and the ability to exploit the hierarchical adaptive nature of wavelets. This work has demonstrated that hierarchical bases can be effective for problems with a dominant elliptic character. However, the strict enforcement of orthogonality was found to be less desirable than weaker semi-orthogonality or bi-orthogonality for solving partial differential equations. This conclusion has led to the development of a multi-scale linear finite element based on a hierarchical change of basis. The reproducing kernel particle method has been found to yield extremely accurate phase characteristics for hyperbolic problems while providing a convenient framework for multi-scale analyses.

  16. Improved DCT-based nonlocal means filter for MR images denoising.

    PubMed

    Hu, Jinrong; Pu, Yifei; Wu, Xi; Zhang, Yi; Zhou, Jiliu

    2012-01-01

    The nonlocal means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter based on the discrete cosine transform (DCT). Instead of computing similarity weights using the gray level information directly, the proposed method calculates similarity weights in the DCT subspace of the neighborhood. Due to promising characteristics of the DCT, such as low data correlation and high energy compaction, the proposed filter is naturally endowed with a more accurate estimation of the weights and thus enhances denoising effectively. The performance of the proposed filter is evaluated qualitatively and quantitatively together with two other NLM filters, namely, the original NLM filter and the unbiased NLM (UNLM) filter. Experimental results demonstrate that the proposed filter achieves better denoising performance in MRI compared to the others.
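
    The sketch below illustrates the central idea of computing NLM similarity weights in a DCT subspace: each patch is reduced to its low-frequency DCT coefficients before the distance is taken. The patch size, the number of retained coefficients, and the bandwidth h are illustrative assumptions.

```python
# Minimal sketch of DCT-subspace patch similarity for an NLM-style weight.
import numpy as np
from scipy.fft import dctn

def dct_feature(patch, keep=4):
    """2-D DCT of a patch, keeping only the top-left (low-frequency) block."""
    c = dctn(patch, norm="ortho")
    return c[:keep, :keep].ravel()

def dct_weight(patch_a, patch_b, h=0.1):
    d = dct_feature(patch_a) - dct_feature(patch_b)
    return np.exp(-np.mean(d ** 2) / h ** 2)

a = np.random.rand(7, 7)
b = a + 0.05 * np.random.randn(7, 7)
print(dct_weight(a, b))   # close to 1 for similar patches, near 0 for dissimilar ones
```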

  17. Application of Conjunctive Nonlinear Model Based on Wavelet Transforms and Artificial Neural Networks to Drought Forecasting

    NASA Astrophysics Data System (ADS)

    Abrishamchi, A.; Mehdikhani, H.; Tajrishy, M.; Marino, M. A.; Abrishamchi, A.

    2007-12-01

    Drought forecasting plays an important role in the mitigation of economic, environmental and social impacts of drought. Traditional statistical time series methods have a limited ability to capture non-stationarities and nonlinearities in data. Artificial Neural Networks (ANNs), as highly flexible function estimators with self-learning and self-adaptive features, have shown great ability in forecasting nonlinear and nonstationary time series in hydrology. Recently, wavelet transforms have become a common tool for analyzing local variation in time series. Wavelet transforms provide a useful decomposition of a signal, or time series; therefore, hybrid models have been proposed for forecasting a time series based on a wavelet transform preprocessing. Wavelet-transformed data aid in improving the ability of forecasting models by diagnosing the signal's main frequency components and abstracting local information of the original time series at various resolution levels. This paper presents a conjunctive nonlinear model using wavelet transforms and an artificial neural network. Application of the model in the Zayandeh-Rood River basin (Iran) shows that the conjunctive model significantly improves the ability of artificial neural networks for 1, 3, 6 and 9 months ahead forecasting of EDI (effective drought index) time series. Improved forecasts allow water resources decision makers to develop drought preparedness plans far in advance.

  18. Directional dual-tree complex wavelet packet transforms for processing quadrature signals.

    PubMed

    Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin

    2016-03-01

    Quadrature signals containing in-phase and quadrature-phase components are used in many signal processing applications in every field of science and engineering. Specifically, Doppler ultrasound systems used to evaluate cardiovascular disorders noninvasively also produce signals in quadrature format. In order to obtain directional blood flow information, the quadrature outputs have to be preprocessed using methods such as asymmetrical and symmetrical phasing filter techniques. These resultant directional signals can be employed to detect asymptomatic embolic signals caused by small emboli, which are indicators of a possible future stroke, in the cerebral circulation. Various transform-based methods such as Fourier and wavelet have frequently been used in processing embolic signals. However, most of the time the Fourier and discrete wavelet transforms are not appropriate for the analysis of embolic signals due to their non-stationary time-frequency behavior. Alternatively, the discrete wavelet packet transform can perform an adaptive decomposition of the time-frequency axis. In this study, directional discrete wavelet packet transforms, which have the ability to map directional information while processing quadrature signals and have less computational complexity than the existing wavelet packet-based methods, are introduced. The performance of the proposed methods is examined in detail by using single-frequency, synthetic narrow-band, and embolic quadrature signals.

  19. Fast multi-scale edge detection algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zang, Jie; Song, Yanjun; Li, Shaojuan; Luo, Guoyun

    2011-11-01

    Traditional edge detection algorithms amplify noise to a certain extent, introducing large errors, so their edge detection ability is limited. When analyzing the low-frequency content of an image, wavelet analysis theory can reduce the time resolution; for the high-frequency content of the image, it can focus, at high time resolution, on the transient characteristics of the signal while reducing the frequency resolution. Because of this adaptivity to the signal, the wavelet transform can extract useful information from the edges of an image. The wavelet transform operates at various scales, and the transform at each scale provides certain edge information, hence the term multi-scale edge detection. In multi-scale edge detection, the original signal is first smoothed at different scales, and the mutations of the original signal, which correspond to edges, are then detected from the first or second derivative of the smoothed signal. The edge detection is equivalent to signal detection in different frequency bands after wavelet decomposition. This article uses this algorithm, which takes into account both the details and the profile of the image, to detect the mutations of the signal at different scales; it provides the edge information needed for image analysis, target recognition and machine vision, and achieves good results.

  20. Evaluating image denoising methods in myocardial perfusion single photon emission computed tomography (SPECT) imaging

    NASA Astrophysics Data System (ADS)

    Skiadopoulos, S.; Karatrantou, A.; Korfiatis, P.; Costaridou, L.; Vassilakos, P.; Apostolopoulos, D.; Panayiotakis, G.

    2009-10-01

    The statistical nature of single photon emission computed tomography (SPECT) imaging, due to the Poisson noise effect, results in the degradation of image quality, especially in the case of lesions of low signal-to-noise ratio (SNR). A variety of well-established single-scale denoising methods applied on projection raw images have been incorporated in SPECT imaging applications, while multi-scale denoising methods with promising performance have been proposed. In this paper, a comparative evaluation study is performed between a multi-scale platelet denoising method and the well-established Butterworth filter applied as a pre- and post-processing step on images reconstructed without and/or with attenuation correction. Quantitative evaluation was carried out employing (i) a cardiac phantom containing two different size cold defects, utilized in two experiments conducted to simulate conditions without and with photon attenuation from myocardial surrounding tissue and (ii) a pilot-verified clinical dataset of 15 patients with ischemic defects. Image noise, defect contrast, SNR and defect contrast-to-noise ratio (CNR) metrics were computed for both phantom and patient defects. In addition, an observer preference study was carried out for the clinical dataset, based on rankings from two nuclear medicine clinicians. Without photon attenuation conditions, denoising by platelet and Butterworth post-processing methods outperformed Butterworth pre-processing for large size defects, while for small size defects, as well as with photon attenuation conditions, all methods have demonstrated similar denoising performance. Under both attenuation conditions, the platelet method showed improved performance with respect to defect contrast, SNR and defect CNR in the case of images reconstructed without attenuation correction, however not statistically significant (p > 0.05). Quantitative as well as preference results obtained from clinical data showed similar performance of the

  1. MR images denoising using DCT-based unbiased nonlocal means filter

    NASA Astrophysics Data System (ADS)

    Zheng, Xiuqing; Hu, Jinrong; Zhou, Jiuliu

    2013-03-01

    The non-local means (NLM) filter has been proven to be an efficient feature-preserving denoising method and can be applied to remove noise in magnetic resonance (MR) images. To suppress noise more efficiently, we present a novel NLM filter that uses a low-pass filtered and low-dimensional version of the neighborhood for calculating the similarity weights. The discrete cosine transform (DCT) is used as a smoothing kernel, allowing both improvements in similarity estimation and computational speed-up. Experimental results show that the proposed filter achieves better denoising performance in MR images compared to other filters, such as the recently proposed NLM filter and the unbiased NLM (UNLM) filter.

  2. Imaging system of wavelet optics described by the Gaussian linear frequency-modulated complex wavelet.

    PubMed

    Tan, Liying; Ma, Jing; Wang, Guangming

    2005-12-01

    The image formation and the point-spread function of an optical system are analyzed by use of the wavelet basis function. The image described by a wavelet is no longer an indivisible whole image. It is, rather, a complex image consisting of many wavelet subimages, which arise from changes of the different scale parameters a and c, while the parameters b and d give the positions of the wavelet subimages at different scales. A Gaussian frequency-modulated complex-valued wavelet function is introduced to express the point-spread function of an optical system and used to describe the image formation. The analysis, for the case of illumination with a monochromatic plane light wave, shows that using the theory of wavelet optics to describe the image formation of an optical system is feasible.

  3. Imaging system of wavelet optics described by the Gaussian linear frequency-modulated complex wavelet

    NASA Astrophysics Data System (ADS)

    Tan, Liying; Ma, Jing; Wang, Guangming

    2005-12-01

    The image formation and the point-spread function of an optical system are analyzed by use of the wavelet basis function. The image described by a wavelet is no longer an indivisible whole image. It is, rather, a complex image consisting of many wavelet subimages, which arise from changes of the different scale parameters a and c, while the parameters b and d give the positions of the wavelet subimages at different scales. A Gaussian frequency-modulated complex-valued wavelet function is introduced to express the point-spread function of an optical system and used to describe the image formation. The analysis, for the case of illumination with a monochromatic plane light wave, shows that using the theory of wavelet optics to describe the image formation of an optical system is feasible.

  4. Applications of a fast, continuous wavelet transform

    SciTech Connect

    Dress, W.B.

    1997-02-01

    A fast, continuous, wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2-D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.
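
    A minimal sketch of a frequency-space continuous wavelet transform is shown below: the data are Fourier transformed once, and each scale is obtained by multiplying with a frequency-sampled mother wavelet and inverting. The Morlet-style window and its centre frequency are illustrative assumptions, not the specific wavelets used in the report.

```python
# Minimal sketch of a CWT computed by sampling the mother wavelet in the frequency domain.
import numpy as np

def freq_domain_cwt(signal, scales, fs=1.0, w0=6.0):
    n = signal.size
    freqs = np.fft.fftfreq(n, d=1.0 / fs) * 2 * np.pi   # angular frequencies
    sig_hat = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for k, a in enumerate(scales):
        # Frequency-domain Morlet-like window centred at w0/a (analytic: positive freqs only).
        psi_hat = np.where(freqs > 0, np.exp(-0.5 * (a * freqs - w0) ** 2), 0.0)
        out[k] = np.fft.ifft(sig_hat * psi_hat)
    return out

t = np.linspace(0, 1, 512, endpoint=False)
x = np.sin(2 * np.pi * 20 * t) + 0.3 * np.random.randn(t.size)
coeffs = freq_domain_cwt(x, scales=np.geomspace(0.01, 0.2, 16), fs=512)
print(coeffs.shape)   # (n_scales, n_samples)
```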

  5. [Hyper spectral estimation method for soil alkali hydrolysable nitrogen content based on discrete wavelet transform and genetic algorithm in combination with partial least squares (DWT-GA-PLS)].

    PubMed

    Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling

    2013-11-01

    Taking Qihe County in Shandong Province of East China as the study area, soil samples were collected from the field. Based on the hyperspectral reflectance measurements of the soil samples and a first-derivative transformation, the spectra were denoised and compressed by the discrete wavelet transform (DWT), the variables for the soil alkali hydrolysable nitrogen quantitative estimation models were selected by a genetic algorithm (GA), and the estimation models for the soil alkali hydrolysable nitrogen content were built by using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm in combination with partial least squares (DWT-GA-PLS) could not only compress the spectrum variables and reduce the number of model variables, but also improve the quantitative estimation accuracy of the soil alkali hydrolysable nitrogen content. Based on the level 1-2 low-frequency coefficients of the discrete wavelet transform, and with a large-scale reduction of the spectrum variables, the calibration models could achieve prediction accuracy equal to or higher than that of the full soil spectra. The model based on the second-level low-frequency coefficients had the highest precision, with a prediction R2 of 0.85, an RMSE of 8.11 mg x kg(-1), and an RPD of 2.53, indicating the effectiveness of the DWT-GA-PLS method in estimating soil alkali hydrolysable nitrogen content.
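
    The sketch below covers only the DWT-compression and PLS-regression stages of a DWT-GA-PLS pipeline: spectra are reduced to low-frequency approximation coefficients and regressed with partial least squares. The genetic-algorithm variable selection is omitted, and the wavelet, decomposition level, and synthetic data are illustrative assumptions.

```python
# Minimal sketch of the DWT + PLS stages (GA variable selection omitted).
import numpy as np
import pywt
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.random((60, 512))                            # 60 samples x 512 reflectance bands
target = spectra[:, 100] * 3.0 + rng.normal(0, 0.05, 60)   # synthetic "nitrogen" values

# Level-2 approximation coefficients act as the compressed spectral variables.
compressed = np.array([pywt.wavedec(s, "db4", level=2)[0] for s in spectra])

model = PLSRegression(n_components=5).fit(compressed, target)
print(compressed.shape, round(model.score(compressed, target), 3))
```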

  6. Push-Broom-Type Very High-Resolution Satellite Sensor Data Correction Using Combined Wavelet-Fourier and Multiscale Non-Local Means Filtering

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Seo, Doochun; Jeong, Jaeheon; Paik, Joonki

    2015-01-01

    In very high-resolution (VHR) push-broom-type satellite sensor data, both destriping and denoising methods have become chronic problems and attracted major research advances in the remote sensing fields. Since the estimation of the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared to various existing methods on a set of push-broom-type sensor data acquired by Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise, and the results of the proposed method show significantly improved enhancement results over existing state-of-the-art methods in terms of both qualitative and quantitative assessments. PMID:26378532

  7. Wavelet-based multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Dian-Ting; Zhou, Xiao-Dan; Wang, Cheng-Wen

    2008-09-01

    This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies the combination of Gabor and Fisherfaces methods to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to variations in expression and illumination. The classification performance is improved by combining the multispectral information coming from the subbands that individually attain low equal error rates. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms previous multispectral image fusion methods as well as monospectral methods.

  8. Wavelet Applications for Flight Flutter Testing

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.

    1999-01-01

    Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.

  9. Selection of Mother Wavelet Functions for Multi-Channel EEG Signal Analysis during a Working Memory Task

    PubMed Central

    Al-Qazzaz, Noor Kamal; Hamid Bin Mohd Ali, Sawal; Ahmad, Siti Anom; Islam, Mohd Shabiul; Escudero, Javier

    2015-01-01

    We performed a comparative study to select the efficient mother wavelet (MWT) basis functions that optimally represent the signal characteristics of the electrical activity of the human brain during a working memory (WM) task recorded through electro-encephalography (EEG). Nineteen EEG electrodes were placed on the scalp following the 10–20 system. These electrodes were then grouped into five recording regions corresponding to the scalp area of the cerebral cortex. Sixty-second WM task data were recorded from ten control subjects. Forty-five MWT basis functions from orthogonal families were investigated. These functions included Daubechies (db1–db20), Symlets (sym1–sym20), and Coiflets (coif1–coif5). Using ANOVA, we determined the MWT basis functions with the most significant differences in the ability of the five scalp regions to maximize their cross-correlation with the EEG signals. The best results were obtained using “sym9” across the five scalp regions. Therefore, the most compatible MWT with the EEG signals should be selected to achieve wavelet denoising, decomposition, reconstruction, and sub-band feature extraction. This study provides a reference of the selection of efficient MWT basis functions. PMID:26593918

  10. Application of wavelet analysis on thin-film wideband monitoring system

    NASA Astrophysics Data System (ADS)

    Han, Jun; Wang, Song; Shang, Xiaoyan; An, Yuying

    2009-05-01

    In a thin-film thickness wideband monitoring system, accurate measurement of the spectral intensity signal is critical to improving the coating precision. Because of the electron gun, ion source and baking, the vacuum chamber is a complex environment with background light; together with the inherent noise of the linear CCD and the quantization noise of A/D conversion, these are the main factors affecting accurate measurement of the spectral intensity. This paper uses the time-frequency multi-resolution properties of the wavelet transform, and an adaptive threshold adjustment method is designed. According to the different characteristics of the signal and the random noise under the wavelet transform at different scales, a fine adjustment factor is added when the threshold is determined: on the one hand, it decreases the adaptive threshold for wavelet coefficients with a positive Lipschitz exponent, which helps preserve the wavelet coefficients of the real signal; on the other hand, it increases the threshold for coefficients with a negative Lipschitz exponent, which helps filter out the noise. By this method, both the probability of rejecting true signal and the probability of false detection are reduced, the random noise is suppressed effectively, a very good filtering result is achieved, and finally the analysis accuracy of the spectral signal and the precision of the system's pause decision are improved.
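
    A minimal sketch of level-adaptive wavelet thresholding of a spectral intensity trace is given below: a universal threshold is scaled by a per-level adjustment factor so that coarse, signal-dominated levels are thresholded more gently than fine, noise-dominated ones. The wavelet, the adjustment factors, and the synthetic signal are illustrative assumptions and stand in for the paper's Lipschitz-exponent-based fine adjustment.

```python
# Minimal sketch of wavelet thresholding with a per-level fine adjustment factor.
import numpy as np
import pywt

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 1024)
signal = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)       # a smooth spectral peak
noisy = signal + 0.05 * rng.standard_normal(x.size)

coeffs = pywt.wavedec(noisy, "sym8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from the finest level
base = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
adjust = np.linspace(0.6, 1.2, len(coeffs) - 1)       # gentler at coarse levels, stronger at fine
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, base * a, mode="soft")
    for c, a in zip(coeffs[1:], adjust)
]
denoised = pywt.waverec(denoised_coeffs, "sym8")
print(round(np.std(noisy - signal), 4), round(np.std(denoised[:noisy.size] - signal), 4))
```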

  11. A novel approach for removing ECG interferences from surface EMG signals using a combined ANFIS and wavelet.

    PubMed

    Abbaspour, Sara; Fallah, Ali; Lindén, Maria; Gholamhosseini, Hamid

    2016-02-01

    In recent years, the removal of electrocardiogram (ECG) interference from electromyogram (EMG) signals has received considerable attention. Where the quality of the EMG signal is of interest, it is important to remove ECG interference from the EMG signals. In this paper, an efficient method based on a combination of an adaptive neuro-fuzzy inference system (ANFIS) and the wavelet transform is proposed to effectively eliminate ECG interference from surface EMG signals. The proposed approach is compared with other common methods such as a high-pass filter, an artificial neural network, an adaptive noise canceller, the wavelet transform, the subtraction method and ANFIS. It is found that the performance of the proposed ANFIS-wavelet method is superior to the other methods, with a signal-to-noise ratio of 14.97 dB, a relative error of 0.02, and a significantly higher correlation coefficient (p<0.05). PMID:26643795

  12. Stationary wavelet transform and higher order statistical analyses of intrafascicular nerve recordings.

    PubMed

    Qiao, Shaoyu; Torkamani-Azar, Mastaneh; Salama, Paul; Yoshida, Ken

    2012-10-01

    Nerve signals were recorded from the sciatic nerve of rabbits in acute experiments with multi-channel thin-film longitudinal intrafascicular electrodes. Sequences of 5.5 s of quiescent and high-level nerve activity were spectrally decomposed by applying a ten-level stationary wavelet transform with the Daubechies 10 (Db10) mother wavelet. Then, the statistical distributions of the raw and subband-decomposed sequences were estimated and used to fit a fourth-order Pearson distribution as well as to check for normality. The results indicated that the raw and decomposed background and high-level nerve activity distributions were nearly zero-mean and non-skew. All distributions with frequency content above 187.5 Hz were leptokurtic except for the first-level decomposition representing frequencies in the subband between 12 and 24 kHz, which was Gaussian. This suggests that nerve activity acts to change the statistical distribution of the recording. The results further demonstrated that the quiescent recording contained a mixture of an underlying pink noise and low-level nerve activity that could not be silenced. The signal-to-noise ratios based upon the standard deviation (SD) and kurtosis were estimated, and the latter was found to be an effective measure for monitoring the nerve activity residing in different frequency subbands. The nerve activity modulated kurtosis along with SD, suggesting that the joint use of SD and kurtosis could improve the stability and detection accuracy of spike-detection algorithms. Finally, synthesizing the reconstructed subband signals following denoising based upon the higher order statistics of the subband-decomposed coefficient sequences allowed us to effectively purify the signal without distorting spike shape.

  13. Extracting information in spike time patterns with wavelets and information theory.

    PubMed

    Lopes-dos-Santos, Vítor; Panzeri, Stefano; Kayser, Christoph; Diamond, Mathew E; Quian Quiroga, Rodrigo

    2015-02-01

    We present a new method to assess the information carried by temporal patterns in spike trains. The method first performs a wavelet decomposition of the spike trains, then uses Shannon information to select a subset of coefficients carrying information, and finally assesses timing information in terms of decoding performance: the ability to identify the presented stimuli from spike train patterns. We show that the method allows: 1) a robust assessment of the information carried by spike time patterns even when this is distributed across multiple time scales and time points; 2) an effective denoising of the raster plots that improves the estimate of stimulus tuning of spike trains; and 3) an assessment of the information carried by temporally coordinated spikes across neurons. Using simulated data, we demonstrate that the Wavelet-Information (WI) method performs better and is more robust to spike time-jitter, background noise, and sample size than well-established approaches, such as principal component analysis, direct estimates of information from digitized spike trains, or a metric-based method. Furthermore, when applied to real spike trains from monkey auditory cortex and from rat barrel cortex, the WI method allows extracting larger amounts of spike timing information. Importantly, the fact that the WI method incorporates multiple time scales makes it robust to the choice of partly arbitrary parameters such as temporal resolution, response window length, number of response features considered, and the number of available trials. These results highlight the potential of the proposed method for accurate and objective assessments of how spike timing encodes information.

  14. Velocity and Object Detection Using Quaternion Wavelets

    SciTech Connect

    Traversoni, Leonardo; Xu Yi

    2007-09-06

    Starting from stereoscopic films, we detect corresponding objects in both views and establish an epipolar geometry; corresponding moving objects are also detected and their movement described, all using quaternion wavelets and quaternion phase-space decomposition.

  15. Wavelet analysis for characterizing human electroencephalogram signals

    NASA Astrophysics Data System (ADS)

    Li, Bai-Lian; Wu, Hsin-i.

    1995-04-01

    Wavelet analysis is a recently developed mathematical theory and computational method for decomposing a nonstationary signal into components that have good localization properties both in the time and frequency domains and hierarchical structures. The wavelet transform provides local information and a multiresolution decomposition of a signal that cannot be obtained using traditional methods such as Fourier transforms and distribution-based statistical methods. Hence changes in complex biological signals can be detected. We use wavelet analysis in this paper as an innovative method for identifying and characterizing multiscale electroencephalogram signals. We develop a wavelet-based stationary phase transition method to extract instantaneous frequencies of the signal that vary in time. The results under different clinical situations show that the brain triggers small bursts of either low or high frequency immediately prior to changing to that behavior on the global scale. This information could be used as a diagnostic for detecting the onset of an epileptic seizure.

  16. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

  17. Applications of a fast continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Dress, William B.

    1997-04-01

    A fast, continuous, wavelet transform, justified by appealing to Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and from the standard treatment of the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as representing the continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Although more computationally costly and not represented by an orthogonal basis, the inherent flexibility and shift invariance of the frequency-space wavelets are advantageous for certain applications. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2D representation of the magnitude of the transformed signals. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, different features may be extracted from voice data.

  18. Wavelet neural networks for stock trading

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxing; Fataliyev, Kamaladdin; Wang, Lipo

    2013-05-01

    This paper explores the application of a wavelet neural network (WNN), whose hidden layer is comprised of neurons with adjustable wavelets as activation functions, to stock prediction. We discuss some basic rationales behind technical analysis, and based on which, inputs of the prediction system are carefully selected. This system is tested on Istanbul Stock Exchange National 100 Index and compared with traditional neural networks. The results show that the WNN can achieve very good prediction accuracy.

  19. Texture preservation in de-noising UAV surveillance video through multi-frame sampling

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Fevig, Ronald A.; Schultz, Richard R.

    2009-02-01

    Image de-noising is a widely-used technology in modern real-world surveillance systems. Methods can seldom do both de-noising and texture preservation very well without a direct knowledge of the noise model. Most of the neighborhood fusion-based de-noising methods tend to over-smooth the images, which causes a significant loss of detail. Recently, a new non-local means method has been developed, which is based on the similarities among the different pixels. This technique results in good preservation of the textures; however, it also causes some artifacts. In this paper, we utilize the scale-invariant feature transform (SIFT) [1] method to find the corresponding region between different images, and then reconstruct the de-noised images by a weighted sum of these corresponding regions. Both hard and soft criteria are chosen in order to minimize the artifacts. Experiments applied to real unmanned aerial vehicle thermal infrared surveillance video show that our method is superior to popular methods in the literature.

  20. Fast and Memory-Efficient Topological Denoising of 2D and 3D Scalar Fields.

    PubMed

    Günther, David; Jacobson, Alec; Reininghaus, Jan; Seidel, Hans-Peter; Sorkine-Hornung, Olga; Weinkauf, Tino

    2014-12-01

    Data acquisition, numerical inaccuracies, and sampling often introduce noise in measurements and simulations. Removing this noise is often necessary for efficient analysis and visualization of this data, yet many denoising techniques change the minima and maxima of a scalar field. For example, the extrema can appear or disappear, spatially move, and change their value. This can lead to wrong interpretations of the data, e.g., when the maximum temperature over an area is falsely reported being a few degrees cooler because the denoising method is unaware of these features. Recently, a topological denoising technique based on a global energy optimization was proposed, which allows the topology-controlled denoising of 2D scalar fields. While this method preserves the minima and maxima, it is constrained by the size of the data. We extend this work to large 2D data and medium-sized 3D data by introducing a novel domain decomposition approach. It allows processing small patches of the domain independently while still avoiding the introduction of new critical points. Furthermore, we propose an iterative refinement of the solution, which decreases the optimization energy compared to the previous approach and therefore gives smoother results that are closer to the input. We illustrate our technique on synthetic and real-world 2D and 3D data sets that highlight potential applications. PMID:26356972

  1. Subject-specific patch-based denoising for contrast-enhanced cardiac MR images

    NASA Astrophysics Data System (ADS)

    Ma, Lorraine; Ebrahimi, Mehran; Pop, Mihaela

    2016-03-01

    Many patch-based techniques in imaging, e.g., Non-local means denoising, require tuning parameters to yield optimal results. In real-world applications, e.g., denoising of MR images, ground truth is not generally available and the process of choosing an appropriate set of parameters is a challenge. Recently, Zhu et al. proposed a method to define an image quality measure, called Q, that does not require ground truth. In this manuscript, we evaluate the effect of various parameters of the NL-means denoising on this quality metric Q. Our experiments are based on the late-gadolinium enhancement (LGE) cardiac MR images that are inherently noisy. Our described exhaustive evaluation approach can be used in tuning parameters of patch-based schemes. Even in the case that an estimation of optimal parameters is provided using another existing approach, our described method can be used as a secondary validation step. Our preliminary results suggest that denoising parameters should be case-specific rather than generic.

  2. a Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, working as an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it has the ability to provide the atmospheric vertical profile. However, the appearance of noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics but also certain limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm is proposed to enhance the SNR of a ground-based lidar signal, which is based on signal segmentation and reconstruction. The signal segmentation, serving as the keystone of the algorithm, segments the lidar signal into three different parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and a real dual field-of-view lidar signal shows the feasibility of the universal de-noising algorithm.

  3. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating between group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  4. A multi-scale non-local means algorithm for image de-noising

    NASA Astrophysics Data System (ADS)

    Nercessian, Shahan; Panetta, Karen A.; Agaian, Sos S.

    2012-06-01

    A highly studied problem in image processing, and in the field of electrical engineering in general, is the recovery of a true signal from its noisy version. Images can be corrupted by noise during their acquisition or transmission stages. As noisy images are visually very poor in quality and complicate further processing stages of computer vision systems, it is imperative to develop algorithms which effectively remove noise from images. In practice, it is a difficult task to effectively remove the noise while simultaneously retaining the edge structures within the image. Accordingly, many de-noising algorithms have been proposed that attempt to intelligently smooth the image while still preserving its details. Recently, a non-local means (NLM) de-noising algorithm was introduced, which exploits the redundant nature of images to achieve image de-noising. The algorithm was shown to outperform current de-noising standards, including Gaussian filtering, anisotropic diffusion, total variation minimization, and multi-scale transform coefficient thresholding. However, the NLM algorithm was developed in the spatial domain and therefore does not leverage the fact that multi-scale transforms provide a framework in which signals can be better distinguished from noise. Accordingly, in this paper, a multi-scale NLM (MS-NLM) algorithm is proposed, which combines the advantages of the NLM algorithm and multi-scale image processing techniques. Experimental results via computer simulations illustrate that the MS-NLM algorithm outperforms the NLM, both visually and quantitatively.

  5. Denoising of hyperspectral images by best multilinear rank approximation of a tensor

    NASA Astrophysics Data System (ADS)

    Marin-McGee, Maider; Velez-Reyes, Miguel

    2010-04-01

    The hyperspectral image cube can be modeled as a three-dimensional array. Tensors and the tools of multilinear algebra provide a natural framework to deal with this type of mathematical object. Singular value decomposition (SVD) and its variants have been used by the HSI community for denoising of hyperspectral imagery. Denoising of HSI using SVD is achieved by finding a low rank approximation of a matrix representation of the hyperspectral image cube. This paper investigates similar concepts in hyperspectral denoising by using a low multilinear rank approximation of the given HSI tensor representation. The Best Multilinear Rank Approximation (BMRA) of a given tensor A is the lower multilinear rank tensor B that is as close as possible to A in the Frobenius norm. Different numerical methods to compute the BMRA, using the Alternating Least Squares (ALS) method and Newton's method over a product of Grassmann manifolds, are presented. The effects of the multilinear rank, the numerical method used to compute the BMRA, and different parameter choices in those methods are studied. Results show that comparable results are achievable with both ALS and Newton-type methods. Also, classification results using the filtered tensor are better than those obtained either with denoising using SVD or MNF.
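
    The sketch below computes a best multilinear rank approximation with higher-order orthogonal iteration, an ALS-type scheme in which the factor of each mode is refreshed from the leading singular vectors of the unfolded, projected tensor. The ranks, iteration count, and random test cube are illustrative assumptions; the Newton-on-Grassmann variant mentioned in the abstract is not shown.

```python
# Minimal sketch of a best multilinear rank approximation via higher-order
# orthogonal iteration (an ALS-type scheme) using only NumPy.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def project(T, M, mode):
    """Multiply tensor T along `mode` by matrix M (M acts on that mode's axis)."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hooi(T, ranks, n_iter=10):
    # Initialise factors with the leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    for _ in range(n_iter):
        for m in range(T.ndim):
            # Project along all modes except m, then refresh U[m].
            G = T
            for k in range(T.ndim):
                if k != m:
                    G = project(G, U[k].T, k)
            U[m] = np.linalg.svd(unfold(G, m), full_matrices=False)[0][:, :ranks[m]]
    # Core tensor and low-multilinear-rank reconstruction.
    core = T
    for k in range(T.ndim):
        core = project(core, U[k].T, k)
    approx = core
    for k in range(T.ndim):
        approx = project(approx, U[k], k)
    return approx

cube = np.random.rand(20, 20, 16)                 # rows x cols x bands toy HSI
approx = hooi(cube, ranks=(10, 10, 5))
print(round(np.linalg.norm(cube - approx) / np.linalg.norm(cube), 3))
```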

  6. Denoising peptide tandem mass spectra for spectral libraries: a Bayesian approach.

    PubMed

    Shao, Wenguang; Lam, Henry

    2013-07-01

    With the rapid accumulation of data from shotgun proteomics experiments, it has become feasible to build comprehensive and high-quality spectral libraries of tandem mass spectra of peptides. A spectral library condenses experimental data into a retrievable format and can be used to aid peptide identification by spectral library searching. A key step in spectral library building is spectrum denoising, which is best accomplished by merging multiple replicates of the same peptide ion into a consensus spectrum. However, this approach cannot be applied to "singleton spectra," for which only one observed spectrum is available for the peptide ion. We developed a method, based on a Bayesian classifier, for denoising peptide tandem mass spectra. The classifier accounts for relationships between peaks, and can be trained on the fly from consensus spectra and immediately applied to denoise singleton spectra, without hard-coded knowledge about peptide fragmentation. A linear regression model was also trained to predict the number of useful "signal" peaks in a spectrum, thereby obviating the need for arbitrary thresholds for peak filtering. This Bayesian approach accumulates weak evidence systematically to boost the discrimination power between signal and noise peaks, and produces readily interpretable conditional probabilities that offer valuable insights into peptide fragmentation behaviors. By cross validation, spectra denoised by this method were shown to retain more signal peaks, and have higher spectral similarities to replicates, than those filtered by intensity only.

  7. The Continuous wavelet in airborne gravimetry

    NASA Astrophysics Data System (ADS)

    Liang, X.; Liu, L.

    2013-12-01

    Airborne gravimetry is an efficient method to recover the medium and high frequency bands of the Earth's gravity field over any region, especially inaccessible areas. It can measure gravity data with high accuracy, high resolution and broad coverage in a rapid and economical way, and it will play an important role in geoid determination and geophysical exploration. Filtering to reduce high-frequency errors is critical to the success of airborne gravimetry, because the aircraft acceleration is determined from GPS. Traditional filters used in airborne gravimetry include FIR and IIR filters. This study recommends an improved continuous wavelet to process airborne gravity data. Here we focus on how to construct the continuous wavelet filters and show their working principle. In particular, the technical parameters (window width parameter and scale parameter) of the filters are tested. Then the raw airborne gravity data from the first Chinese airborne gravimetry campaign are filtered using an FIR low-pass filter and continuous wavelet filters to remove the noise. A comparison with reference data is performed to determine the external accuracy, which shows that the continuous wavelet filters applied to airborne gravity data in this work perform well. The advantages of the continuous wavelet filters over digital filters are also introduced. The effectiveness of the continuous wavelet filters for airborne gravimetry is demonstrated through real data computation.

  8. Speckle reduction process based on digital filtering and wavelet compounding in optical coherence tomography for dermatology

    NASA Astrophysics Data System (ADS)

    Gómez Valverde, Juan J.; Ortuño, Juan E.; Guerra, Pedro; Hermann, Boris; Zabihian, Behrooz; Rubio-Guivernau, José L.; Santos, Andrés.; Drexler, Wolfgang; Ledesma-Carbayo, Maria J.

    2015-07-01

    Optical Coherence Tomography (OCT) has shown great potential as a complementary imaging tool in the diagnosis of skin diseases. Speckle noise is the most prominent artifact present in OCT images and can limit interpretation and detection capabilities. In this work we propose a new speckle reduction process and compare it with various denoising filters with high edge-preserving potential, using several sets of dermatological OCT B-scans. To validate the performance we used a custom-designed spectral domain OCT and two different data set groups. The first group consisted of five datasets of a single B-scan captured N times (with N<20); the second consisted of five 3D volumes of 25 B-scans. As quality metrics we used the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and equivalent number of looks (ENL). Our results show that a process based on a combination of a 2D enhanced sigma digital filter and a wavelet compounding method achieves the best results in terms of the improvement of the quality metrics. In the first group of individual B-scans we achieved improvements in SNR, CNR and ENL of 16.87 dB, 2.19 and 328 respectively; for the 3D volume datasets the improvements were 15.65 dB, 3.44 and 1148. Our results suggest that the proposed enhancement process may significantly reduce speckle, increasing SNR, CNR and ENL and reducing the number of extra acquisitions of the same frame.

  9. An improved automated ultrasonic NDE system by wavelet and neuron networks.

    PubMed

    Bettayeb, Fairouz; Rachedi, Tarek; Benbartaoui, Hamid

    2004-04-01

    Despite the widespread and increasing use of digitized signals, the ultrasonic testing community has not yet realized the full potential of electronic processing. The performance of an ultrasonic flaw detection method is evaluated by its success in distinguishing flaw echoes from those scattered by microstructures. De-noising of ultrasonic signals is therefore extremely important for correctly identifying smaller defects, because the probability of detection usually decreases as the defect size decreases, while the probability of false calls increases. In this paper, the wavelet transform is successfully applied to suppress noise and to enhance flaw location in the ultrasonic signal, with good defect localization. The result is then passed to an automatic Artificial Neural Network algorithm for classification and learning of defects from A-scan data. Since there is some uncertainty connected with the testing technique, the system needs numerical modelling: knowing the technical characteristics of the transducer, we can predict which defects the experimental inspection should find. Indeed, the system performs a simulation of ultrasonic wave propagation in the material and provides a very helpful tool for gaining information and understanding the physical phenomena, which can contribute to a suitable prediction of the service life of the component.
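
    A minimal sketch of the wavelet de-noising stage only (the neural-network classification and the wave-propagation simulation are not reproduced): soft thresholding of a simulated A-scan with a universal threshold estimated from the finest-scale coefficients. The wavelet choice, decomposition level and synthetic flaw echo are assumptions.

```python
import numpy as np
import pywt

def denoise_ascan(ascan, wavelet="sym6", level=5):
    """Soft-threshold wavelet denoising of an ultrasonic A-scan (universal threshold)."""
    coeffs = pywt.wavedec(ascan, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
    thr = sigma * np.sqrt(2 * np.log(len(ascan)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Simulated A-scan: a small flaw echo buried in grain-scattering noise
t = np.linspace(0, 1, 2048)
flaw_echo = np.exp(-((t - 0.6) / 0.01) ** 2) * np.sin(2 * np.pi * 80 * t)
rng = np.random.default_rng(3)
noisy = flaw_echo + 0.3 * rng.standard_normal(t.size)

clean = denoise_ascan(noisy)[:t.size]
print("Peak location (sample index):", int(np.argmax(np.abs(clean))))
```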

  10. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large-scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage incurs a correspondingly large penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which have a rather simple setting, computational field simulation data need more careful treatment when applying the multiresolution technique. While image data sit on a regularly spaced grid, simulation data usually reside on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. These data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, numerical solutions have smoothness almost everywhere and discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of
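
    The sketch below illustrates the general multiresolution compression idea on a regular grid using ordinary Haar wavelets, not the Beam-Warming supercompact multi-wavelets or the curvilinear-grid handling discussed above: decompose a 2D field, keep only the largest-magnitude coefficients, and reconstruct.

```python
import numpy as np
import pywt

def compress_field(field, keep_fraction=0.05, wavelet="haar", level=4):
    """Keep only the largest-magnitude wavelet coefficients of a 2D field."""
    coeffs = pywt.wavedec2(field, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thr = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr_compressed = np.where(np.abs(arr) >= thr, arr, 0.0)
    rec = pywt.waverec2(
        pywt.array_to_coeffs(arr_compressed, slices, output_format="wavedec2"), wavelet)
    return rec, np.count_nonzero(arr_compressed) / arr.size

# Smooth "flow field" with a localised sharp feature (shock-like jump)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
field = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y) + (x > 0.7).astype(float)

rec, ratio = compress_field(field)
err = np.max(np.abs(rec[:256, :256] - field))
print(f"Stored coefficients: {ratio:.1%}, max reconstruction error: {err:.3f}")
```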

  11. Automated quantitative analysis of in-situ NaI measured spectra in the marine environment using a wavelet-based smoothing technique.

    PubMed

    Tsabaris, Christos; Prospathopoulos, Aristides

    2011-10-01

    An algorithm for automated analysis of in-situ NaI γ-ray spectra in the marine environment is presented. A standard wavelet denoising technique is implemented for obtaining a smoothed spectrum, while the stability of the energy spectrum is achieved by taking advantage of the permanent presence of two energy lines in the marine environment. The automated analysis provides peak detection, net area calculation, energy autocalibration, radionuclide identification and activity calculation. The results of the algorithm performance, presented for two different cases, show that analysis of short-term spectra with poor statistical information is considerably improved and that incorporation of further advancements could allow the use of the algorithm in early-warning marine radioactivity systems. PMID:21742510
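
    A minimal sketch of the smoothing and peak-detection steps on a simulated short-acquisition spectrum; the energy autocalibration, radionuclide identification and activity calculation stages are not reproduced, and all peak positions, amplitudes and thresholds are invented for the demo.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def smooth_spectrum(counts, wavelet="sym8", level=4):
    """Soft-threshold wavelet smoothing of a gamma-ray spectrum."""
    coeffs = pywt.wavedec(counts.astype(float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(counts)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(counts)]

# Simulated short-acquisition spectrum: two photopeaks on a falling continuum
ch = np.arange(1024)
truth = 200 * np.exp(-ch / 300.0)
for centre, amp in [(460, 60), (700, 35)]:
    truth += amp * np.exp(-0.5 * ((ch - centre) / 8.0) ** 2)
counts = np.random.default_rng(4).poisson(truth)

smoothed = smooth_spectrum(counts)
peaks, _ = find_peaks(smoothed, prominence=15)
print("Detected peak channels:", peaks.tolist())
```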

  12. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low-bit-rate image coding. The embedding algorithm attempts to order the bits in the bit stream by numerical importance, so that a given code contains all lower-rate encodings produced by the same algorithm. Therefore, precise bit-rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral-redundancy-removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
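
    The sketch below illustrates the joint spectral/spatial idea under simplifying assumptions: the bands of a toy multispectral cube are decorrelated with a data-derived KLT (eigenvectors of the band covariance), and each KLT component is wavelet transformed, with simple magnitude thresholding standing in for the actual EZW zerotree bit-plane coding.

```python
import numpy as np
import pywt

def klt_bands(cube):
    """Decorrelate spectral bands with the Karhunen-Loeve transform (PCA over bands)."""
    bands, rows, cols = cube.shape
    flat = cube.reshape(bands, -1)
    mean = flat.mean(axis=1, keepdims=True)
    cov = np.cov(flat - mean)
    _, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, ::-1]                      # sort eigenvectors by descending variance
    return (vecs.T @ (flat - mean)).reshape(bands, rows, cols), vecs, mean

# Toy 4-band "Landsat-like" cube with strongly correlated bands
rng = np.random.default_rng(5)
base = rng.random((64, 64))
cube = np.stack([(0.5 + 0.1 * b) * base + 0.02 * rng.standard_normal((64, 64))
                 for b in range(4)])

components, vecs, mean = klt_bands(cube)
kept = 0
for comp in components:
    arr, _ = pywt.coeffs_to_array(pywt.wavedec2(comp, "bior4.4", level=3))
    kept += np.count_nonzero(np.abs(arr) >= 0.05 * np.abs(arr).max())
print("Coefficients above threshold across all KLT components:", kept)
```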

  13. Background Subtraction Based on Three-Dimensional Discrete Wavelet Transform

    PubMed Central

    Han, Guang; Wang, Jinkuan; Cai, Xi

    2016-01-01

    Background subtraction without a separate training phase has become a critical task, because a sufficiently long and clean training sequence is usually unavailable, and people generally thirst for immediate detection results from the first frame of a video. Without a training phase, we propose a background subtraction method based on three-dimensional (3D) discrete wavelet transform (DWT). Static backgrounds with few variations along the time axis are characterized by intensity temporal consistency in the 3D space-time domain and, hence, correspond to low-frequency components in the 3D frequency domain. Enlightened by this, we eliminate low-frequency components that correspond to static backgrounds using the 3D DWT in order to extract moving objects. Owing to the multiscale analysis property of the 3D DWT, the elimination of low-frequency components in sub-bands of the 3D DWT is equivalent to performing a pyramidal 3D filter. This 3D filter brings advantages to our method in reserving the inner parts of detected objects and reducing the ringing around object boundaries. Moreover, we make use of wavelet shrinkage to remove disturbance of intensity temporal consistency and introduce an adaptive threshold based on the entropy of the histogram to obtain optimal detection results. Experimental results show that our method works effectively in situations lacking training opportunities and outperforms several popular techniques. PMID:27043570
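
    A minimal sketch of the central idea, assuming a short grayscale clip held in memory as a 3D array: the approximation sub-band of a 3D DWT (the static, low-frequency background) is zeroed before reconstruction, and the residual is thresholded to obtain a foreground mask. The fixed threshold below replaces the paper's entropy-based adaptive threshold, and the wavelet and level are arbitrary choices.

```python
import numpy as np
import pywt

def moving_object_mask(clip, wavelet="haar", level=2, thr=0.2):
    """Suppress the 3D-DWT approximation (static background) and threshold the rest."""
    coeffs = pywt.wavedecn(clip.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])          # remove low-frequency (static) content
    residual = pywt.waverecn(coeffs, wavelet)
    residual = residual[: clip.shape[0], : clip.shape[1], : clip.shape[2]]
    return np.abs(residual) > thr

# Synthetic clip: static gradient background plus a small square moving left to right
frames = 16
clip = np.tile(np.linspace(0, 1, 64)[None, None, :], (frames, 64, 1))
for t in range(frames):
    clip[t, 28:36, 4 * t: 4 * t + 8] += 1.0

mask = moving_object_mask(clip)
print("Foreground pixels in frame 8:", int(mask[8].sum()))
```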

  14. Wavelet transforms for electroencephalographic spike and seizure detection

    NASA Astrophysics Data System (ADS)

    Schiff, Steven J.; Milton, John G.

    1993-11-01

    The application of wavelet transforms (WT) to experimental data from the nervous system has been hindered by the lack of a straightforward method to handle noise. A noise reduction technique, developed recently for use in wavelet cluster analysis in cosmology and astronomy, is here adapted for electroencephalographic (EEG) time-series data. Noise is filtered using control surrogate data sets generated from randomized aspects of the original time series. In this study, WT were applied to EEG data from human patients undergoing brain mapping with implanted subdural electrodes for the localization of epileptic seizure foci. 1D EEG data were analyzed from individual electrodes, and 2D data from electrode grids. These techniques are a powerful means to identify epileptic spikes in such data, and offer a method to identify the onset and spatial extent of epileptic seizure foci. The method is readily applied to the detection of structure in stationary and non-stationary time series from a variety of physical systems.
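
    A hypothetical sketch of surrogate-based wavelet thresholding in the spirit of the abstract (not the authors' exact cluster-analysis procedure): per-level thresholds are taken from the coefficient magnitudes of phase-randomized surrogates of the trace, and only original coefficients exceeding them are kept.

```python
import numpy as np
import pywt

def phase_randomised_surrogate(x, rng):
    """Surrogate with the same power spectrum but randomised Fourier phases."""
    spec = np.fft.rfft(x)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, spec.size))
    phases[0] = 1.0                               # keep the DC component real
    return np.fft.irfft(spec * phases, n=x.size)

def surrogate_threshold_denoise(x, wavelet="db4", level=5, n_surrogates=20, q=0.99):
    rng = np.random.default_rng(6)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    surr_coeffs = [pywt.wavedec(phase_randomised_surrogate(x, rng), wavelet, level=level)
                   for _ in range(n_surrogates)]
    denoised = [coeffs[0]]
    for lev in range(1, level + 1):
        # Per-level threshold: the q-quantile of surrogate coefficient magnitudes
        thr = np.quantile(np.abs(np.concatenate([s[lev] for s in surr_coeffs])), q)
        denoised.append(pywt.threshold(coeffs[lev], thr, mode="hard"))
    return pywt.waverec(denoised, wavelet)[: x.size]

# Synthetic EEG-like trace: background noise plus an "epileptic spike"
t = np.arange(0, 4, 1 / 256.0)
eeg = np.random.default_rng(7).standard_normal(t.size)
eeg[520:530] += np.hanning(10) * 8.0
cleaned = surrogate_threshold_denoise(eeg)
print("Largest retained deflection at sample:", int(np.argmax(np.abs(cleaned))))
```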

  16. Multiparameter radar analysis using wavelets

    NASA Astrophysics Data System (ADS)

    Tawfik, Ben Bella Sayed

    Multiparameter radars have been used in the interpretation of many meteorological phenomena, and rainfall estimates can be obtained from multiparameter radar measurements. Studying and analyzing the spatial variability of different rainfall algorithms, namely R(ZH), the algorithm based on reflectivity; R(ZH, ZDR), the algorithm based on reflectivity and differential reflectivity; R(KDP), the algorithm based on specific differential phase; and R(KDP, ZDR), the algorithm based on specific differential phase and differential reflectivity, is important for radar applications. The data used in this research were collected using the CSU-CHILL, CP-2, and S-POL radars. In this research, multiple objectives are addressed using wavelet analysis, namely: (1) space/time variability of various rainfall algorithms, (2) separation of convective and stratiform storms based on reflectivity measurements, and (3) detection of features such as bright bands. The bright band is a multiscale edge detection problem; in this research, the technique of multiscale edge detection is applied to the radar data collected with the CP-2 radar on August 23, 1991 to detect the melting layer. In the analysis of the space/time variability of rainfall algorithms, the wavelet variance gives an idea of the statistics of the radar field. In addition, multiresolution analyses of different rainfall estimates based on the four algorithms, namely R(ZH), R(ZH, ZDR), R(KDP), and R(KDP, ZDR), are carried out. The flood data of July 29, 1997 collected by the CSU-CHILL radar were used for this analysis, as were S-POL radar data collected on May 2, 1997 at Wichita, Kansas. At each level of approximation, the detail and the approximation components are analyzed, and on this basis the rainfall algorithms can be judged. From this analysis, an important result was obtained: the Z-R algorithms that are widely used do not show the full spatial variability of rainfall. In addition another intuitively obvious result
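
    As a small illustration of using the wavelet variance as a spatial-variability statistic (the rainfall algorithms themselves are not implemented here), the sketch below compares the per-scale variance of two synthetic rain-rate transects, one smooth and one more variable.

```python
import numpy as np
import pywt

def wavelet_variance(series, wavelet="haar", level=5):
    """Variance of detail coefficients at each scale (coarsest to finest)."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    return {f"level {level - i + 1}": float(np.var(c))
            for i, c in enumerate(coeffs[1:], start=1)}

# Two synthetic rain-rate transects standing in for different rainfall estimators
x = np.linspace(0, 50, 1024)                      # distance (km)
rng = np.random.default_rng(8)
smooth_estimate = 10 * np.exp(-((x - 25) / 8) ** 2) + 0.5 * rng.standard_normal(x.size)
variable_estimate = smooth_estimate + 3 * rng.standard_normal(x.size)

for name, series in [("R1 (smooth)", smooth_estimate), ("R2 (variable)", variable_estimate)]:
    print(name, wavelet_variance(series))
```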

  17. ECG De-noising: A comparison between EEMD-BLMS and DWT-NN algorithms.

    PubMed

    Kærgaard, Kevin; Jensen, Søren Hjøllund; Puthusserypady, Sadasivan

    2015-08-01

    Electrocardiogram (ECG) is a widely used non-invasive method to study the rhythmic activity of the heart and thereby to detect abnormalities. However, these signals are often obscured by artifacts from various sources, and minimization of these artifacts is of paramount importance. This paper proposes two adaptive techniques, namely the EEMD-BLMS (Ensemble Empirical Mode Decomposition in conjunction with the Block Least Mean Square algorithm) and DWT-NN (Discrete Wavelet Transform followed by Neural Network) methods, for minimizing the artifacts in recorded ECG signals, and compares their performance. These methods were first compared on two types of simulated noise-corrupted ECG signals: Type-I (desired ECG plus noise frequencies outside the ECG frequency band) and Type-II (ECG plus noise frequencies both inside and outside the ECG frequency band). Subsequently, they were tested on real ECG recordings. Results clearly show that both methods work equally well on Type-I signals; however, on Type-II signals the DWT-NN performed better. In the case of real ECG data, although both methods performed similarly, the DWT-NN method was slightly better in terms of minimizing the high-frequency artifacts. PMID:26737124

  19. An open-source Matlab code package for improved rank-reduction 3D seismic data denoising and reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Yangkang; Huang, Weilin; Zhang, Dong; Chen, Wei

    2016-10-01

    Simultaneous seismic data denoising and reconstruction is currently a popular research subject in modern reflection seismology. Traditional rank-reduction-based 3D seismic data denoising and reconstruction algorithms cause strong residual noise in the reconstructed data and thus affect subsequent processing and interpretation tasks. In this paper, we propose an improved rank-reduction method by modifying the truncated singular value decomposition (TSVD) formula used in the traditional method. The proposed approach can achieve nearly perfect reconstruction performance even in the case of a low signal-to-noise ratio (SNR). The proposed algorithm is tested on one synthetic example and one field-data example. Considering that seismic data interpolation and denoising source packages are seldom available in the public domain, we also provide an open-source Matlab package that serves as a program template for the rank-reduction-based simultaneous denoising and reconstruction algorithm.
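
    The package itself is written in Matlab; as a language-neutral illustration, the sketch below shows only the plain TSVD rank-reduction baseline that the paper modifies, applied to a synthetic gather of flat events plus random noise. The improved truncation formula described in the abstract is not reproduced.

```python
import numpy as np

def tsvd_denoise(data, rank):
    """Plain truncated-SVD rank reduction: keep the first 'rank' singular components."""
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    s[rank:] = 0.0
    return (u * s) @ vt

# Synthetic gather: three flat events (low-rank structure) plus random noise
n_t, n_x = 256, 64
t = np.arange(n_t)[:, None]
x = np.arange(n_x)[None, :]
events = [(60, 1.0), (120, 0.8), (190, 0.6)]
clean = sum(amp * np.exp(-0.5 * ((t - t0) / 3.0) ** 2) * (1.0 + 0.2 * np.cos(2 * np.pi * x / n_x))
            for t0, amp in events)
noisy = clean + 0.3 * np.random.default_rng(9).standard_normal((n_t, n_x))

denoised = tsvd_denoise(noisy, rank=3)
snr = lambda est: 10 * np.log10(np.sum(clean ** 2) / np.sum((est - clean) ** 2))
print(f"SNR before: {snr(noisy):.1f} dB, after TSVD: {snr(denoised):.1f} dB")
```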

  20. The study of real-time denoising algorithm based on parallel computing for the MEMS IR imager

    NASA Astrophysics Data System (ADS)

    Gong, Cheng; Hui, Mei; Dong, Liquan; Zhao, Yuejin

    2011-11-01

    In recent years, MEMS-based optical-readout infrared imaging technology has become a research hotspot. Studies show that the MEMS-based optical-readout infrared imager features a high frame rate. Considering the high data throughput and the computational complexity of the denoising algorithm, it is difficult to ensure real-time image processing. In order to improve processing speed and achieve real-time operation, we conducted a study of a denoising algorithm based on parallel computing using an FPGA (Field Programmable Gate Array). In this paper, we analyze the imaging characteristics of the MEMS-based optical-readout infrared imager and design parallel computing methods for real-time denoising using a hardware description language. The experiment shows that the parallel-computing denoising algorithm can improve infrared image processing speed to meet the real-time requirement.