Sample records for threshold denoising method

  1. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    In order to overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical threshold SWT chaotic signal denoising method is proposed. Firstly, a new SWT threshold function, which is twice continuously differentiable, is constructed based on Stein's unbiased risk estimate. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of the threshold for each SWT layer is obtained. Experimental results on a simulated chaotic signal and measured sunspot signals show that the proposed method filters the noise of chaotic signals well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
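
    The record contains no code; as a rough illustration of per-level threshold selection by Stein's unbiased risk estimate (SURE), the sketch below applies a SURE-optimal soft threshold to each detail level of an ordinary discrete wavelet transform (PyWavelets), standing in for the synchrosqueezed transform actually used in the paper. All function names and parameters are illustrative, not the authors'.

    ```python
    import numpy as np
    import pywt

    def sure_threshold(coeffs):
        """Pick the soft threshold minimizing Stein's unbiased risk estimate."""
        c = np.abs(coeffs)
        sigma = np.median(c) / 0.6745 + 1e-12          # robust noise scale (MAD)
        x = np.sort(c / sigma)
        n = x.size
        cum = np.cumsum(x ** 2)
        # SURE risk evaluated at each candidate threshold t = x[k]
        risks = (n - 2 * np.arange(1, n + 1)
                 + cum + (n - 1 - np.arange(n)) * x ** 2) / n
        return sigma * x[np.argmin(risks)]

    def hierarchical_sure_denoise(signal, wavelet="sym8", level=5):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Threshold each detail layer with its own SURE-optimal value
        denoised = [coeffs[0]] + [
            pywt.threshold(d, sure_threshold(d), mode="soft") for d in coeffs[1:]
        ]
        return pywt.waverec(denoised, wavelet)[: signal.size]

    t = np.linspace(0, 10, 2048)
    noisy = np.sin(t) * np.cos(3 * t) + 0.2 * np.random.randn(t.size)
    clean = hierarchical_sure_denoise(noisy)
    ```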

  2. Photoacoustic signals denoising of the glucose aqueous solutions using an improved wavelet threshold method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Xiong, Zhihua

    2016-10-01

    Denoising of the photoacoustic signals of glucose is one of the most important steps in the quality identification of fruit, because the real-time photoacoustic signals of glucose are easily interfered with by all kinds of noise. To remove the noise and some useless information, an improved wavelet threshold function was proposed. Compared with the traditional wavelet hard and soft threshold functions, the improved wavelet threshold function can overcome the pseudo-oscillation effect in the denoised photoacoustic signals owing to its continuity, and the error between the denoised signals and the original signals is decreased. To validate the feasibility of denoising with the improved wavelet threshold function, denoising simulation experiments based on MATLAB programming were performed. In the simulation experiments, a standard test signal was used, and three other denoising methods were compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and the root-mean-square error (RMSE) were used to evaluate denoising performance. The experimental results demonstrate that the SNR of the improved wavelet threshold function is the largest and its RMSE is the smallest, which verifies that denoising with the improved wavelet threshold function is feasible. Finally, the improved wavelet threshold function was used to remove the noise from the photoacoustic signals of the glucose solutions, also with a very good denoising effect. Therefore, the improved wavelet threshold function denoising proposed in this paper has potential value in the field of photoacoustic signal denoising.
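
    The record does not give the paper's exact threshold function. The sketch below (Python, assuming PyWavelets) uses one plausible continuous compromise between hard and soft thresholding and reproduces the SNR/RMSE evaluation loop described above; the functional form, wavelet, and test signal are illustrative assumptions.

    ```python
    import numpy as np
    import pywt

    def improved_threshold(x, T, a=1.0):
        """Illustrative continuous compromise between hard and soft
        thresholding: zero below T, then rising smoothly toward the
        identity, so there is no jump at |x| = T and less bias than soft."""
        shrink = np.sign(x) * (np.abs(x) - T * np.exp(-a * (np.abs(x) - T)))
        return np.where(np.abs(x) > T, shrink, 0.0)

    def denoise(sig, thresh_fn, wavelet="db6", level=4):
        coeffs = pywt.wavedec(sig, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale (MAD)
        T = sigma * np.sqrt(2 * np.log(sig.size))        # universal threshold
        rec = pywt.waverec([coeffs[0]] +
                           [thresh_fn(d, T) for d in coeffs[1:]], wavelet)
        return rec[: sig.size]

    def snr(ref, est):
        return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - est) ** 2))

    def rmse(ref, est):
        return np.sqrt(np.mean((ref - est) ** 2))

    t = np.linspace(0, 1, 1024)
    clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
    noisy = clean + 0.2 * np.random.randn(t.size)
    est = denoise(noisy, improved_threshold)
    print(f"SNR = {snr(clean, est):.2f} dB, RMSE = {rmse(clean, est):.4f}")
    ```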

  3. Adaptive threshold shearlet transform for surface microseismic data denoising

    NASA Astrophysics Data System (ADS)

    Tang, Na; Zhao, Xian; Li, Yue; Zhu, Dan

    2018-06-01

    Random noise suppression plays an important role in microseismic data processing. Microseismic data are often corrupted by strong random noise, which directly influences the identification and location of microseismic events. The shearlet transform is a new multiscale transform that can effectively process low-magnitude microseismic data. In the shearlet domain, valid signals and random noise have different distributions, so shearlet coefficients can be shrunk by a threshold; the threshold is therefore vital in suppressing random noise. Conventional threshold denoising algorithms usually apply the same threshold to all coefficients, which causes inefficient noise suppression or loss of valid signals. To solve these problems, we propose the adaptive threshold shearlet transform (ATST) for surface microseismic data denoising. In the new algorithm, we first calculate a fundamental threshold for each direction subband. Within each direction subband, an adjustment factor is obtained from each coefficient and its neighboring coefficients, adaptively regulating the fundamental threshold for different shearlet coefficients. Finally, we apply the adaptive threshold to the different shearlet coefficients. Experimental denoising results on synthetic records and field data illustrate that the proposed method performs better in suppressing random noise and preserving valid signals than the conventional shearlet denoising method.
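
    No common Python shearlet package is assumed here; the sketch below (NumPy/SciPy) shows the neighborhood-adjusted thresholding idea on a generic 2-D subband, with the adjustment derived from local energy. The clipping range, window size, and soft shrinkage are illustrative choices, not the paper's.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_threshold_subband(band, base_T, win=3):
        """Scale a fundamental threshold per coefficient using local energy.

        Coefficients in an energetic neighborhood are likely signal, so
        their threshold is lowered; isolated coefficients get a higher one.
        A stand-in for the direction subbands of a shearlet transform."""
        local_energy = uniform_filter(band ** 2, size=win)
        mean_energy = local_energy.mean() + 1e-12
        factor = np.sqrt(mean_energy / (local_energy + 1e-12))
        T = np.clip(factor, 0.5, 2.0) * base_T       # adjusted threshold field
        return np.sign(band) * np.maximum(np.abs(band) - T, 0.0)  # soft shrink

    band = np.random.randn(64, 64)                   # pretend subband
    band[30:34, 30:34] += 5.0                        # embedded "event"
    den = adaptive_threshold_subband(band, base_T=1.0)
    ```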

  4. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiogram (ECG) recordings are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises such as baseline wander, power line interference, and electromagnetic interference during gathering and recording. As ECG signals are non-stationary physiological signals, the wavelet transform has been investigated as an effective tool to discard noise from corrupted signals. A new compromising threshold function, a sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising proves more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of ECG signals denoised by the proposed method coincide with those of the original ECG signals.
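
    The record does not state the paper's exact sigmoid form; the snippet below shows one plausible sigmoid-based shrinkage for comparison against hard and soft rules. The logistic weight and steepness parameter are assumptions.

    ```python
    import numpy as np

    def sigmoid_threshold(x, T, a=10.0):
        """Illustrative sigmoid-based shrinkage: the logistic weight rises
        smoothly from 0 to 1 around |x| = T, avoiding the jump of hard
        thresholding while shrinking large coefficients less than soft
        thresholding does."""
        weight = 1.0 / (1.0 + np.exp(-a * (np.abs(x) - T)))
        return weight * x

    x = np.linspace(-3, 3, 601)
    y = sigmoid_threshold(x, T=1.0)   # compare with np.sign(x)*max(|x|-T, 0)
    ```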

  5. Multi-threshold de-noising of electrical imaging logging data based on the wavelet packet transform

    NASA Astrophysics Data System (ADS)

    Xie, Fang; Xiao, Chengwen; Liu, Ruilin; Zhang, Lili

    2017-08-01

    A key problem in effectiveness evaluation of fractured-vuggy carbonate reservoirs is how to accurately extract fracture and vug information from electrical imaging logging data. Drill bits vibrate during drilling, producing rugged borehole walls and thus conductivity fluctuations in electrical imaging logging data. These conductivity fluctuations (formation background noise) directly affect fracture/vug information extraction and reservoir effectiveness evaluation. We present a multi-threshold de-noising method based on the wavelet packet transform to eliminate the influence of rugged borehole walls. The noise appears as fluctuations in button-electrode conductivity curves and as pockmarked responses in electrical imaging logging static images; it has responses at various scales and frequency ranges and low conductivity compared with fractures or vugs. Our de-noising method decomposes the data with the wavelet packet transform on a quadratic spline basis, shrinks high-frequency wavelet packet coefficients at different resolutions with the minimax threshold and a hard-threshold function, and finally reconstructs the thresholded coefficients. We use electrical imaging logging data collected from a fractured-vuggy Ordovician carbonate reservoir in the Tarim Basin to verify the validity of the multi-threshold de-noising method. Segmentation results and extracted parameters are shown as well to prove the effectiveness of the de-noising procedure.
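
    A minimal sketch of wavelet packet decomposition with minimax hard thresholding, assuming PyWavelets; the wavelet, depth, and the choice of leaf used for the noise estimate are illustrative (the paper uses a quadratic spline basis, which is not bundled with `pywt`).

    ```python
    import numpy as np
    import pywt

    def minimax_threshold(n, sigma):
        """Minimax threshold rule (same form as MATLAB's thselect 'minimaxi')."""
        return sigma * (0.3936 + 0.1829 * np.log2(n)) if n > 32 else 0.0

    def wp_denoise(signal, wavelet="db4", maxlevel=4):
        wp = pywt.WaveletPacket(signal, wavelet, maxlevel=maxlevel)
        nodes = wp.get_level(maxlevel, order="natural")
        # Robust noise scale from the all-detail leaf
        sigma = np.median(np.abs(nodes[-1].data)) / 0.6745
        T = minimax_threshold(len(signal), sigma)
        for node in nodes[1:]:            # keep the approximation leaf intact
            node.data = pywt.threshold(node.data, T, mode="hard")
        return wp.reconstruct(update=False)[: len(signal)]

    curve = np.cumsum(np.random.randn(1024)) + 0.5 * np.random.randn(1024)
    denoised = wp_denoise(curve)
    ```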

  6. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    PubMed Central

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice, and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series that has the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD, but such a series exhibits purely random rather than autocorrelated character, so de-noising is no longer needed. PMID:25360533
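
    A minimal sketch of the Monte Carlo background-energy idea, assuming PyWavelets and a series standardized so its noise is approximately unit-variance white noise; the quantile, trial count, and zero-or-keep rule are simplifying assumptions, not the paper's procedure.

    ```python
    import numpy as np
    import pywt

    def background_energy(n, wavelet="db4", level=5, trials=500, q=0.95):
        """Monte Carlo estimate of per-level detail energy of pure white
        noise, returning the q-quantile as an upper confidence bound."""
        energies = np.empty((trials, level))
        for t in range(trials):
            coeffs = pywt.wavedec(np.random.randn(n), wavelet, level=level)
            energies[t] = [np.sum(d ** 2) for d in coeffs[1:]]
        return np.quantile(energies, q, axis=0)

    def energy_denoise(series, wavelet="db4", level=5):
        coeffs = pywt.wavedec(series, wavelet, level=level)
        bound = background_energy(series.size, wavelet, level)
        # Zero out detail levels whose energy is indistinguishable from noise
        details = [d if np.sum(d ** 2) > b else np.zeros_like(d)
                   for d, b in zip(coeffs[1:], bound)]
        return pywt.waverec([coeffs[0]] + details, wavelet)[: series.size]
    ```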

  7. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which, for the second analysis, are corrupted by different levels of noise. Such controlled noise-level experiments make it possible to evaluate the methods' performance for different amounts of noise that might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of the denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.
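
    A minimal DWT-versus-SWT comparison sketch using PyWavelets. MATLAB's penalised-low threshold rule is not replicated here; the fixed-form (universal) threshold stands in for both, and the wavelet, level, and test profile are illustrative.

    ```python
    import numpy as np
    import pywt

    def dwt_denoise(x, wavelet="db4", level=4, mode="hard"):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        T = sigma * np.sqrt(2 * np.log(x.size))         # fixed-form threshold
        return pywt.waverec(
            [coeffs[0]] + [pywt.threshold(d, T, mode=mode) for d in coeffs[1:]],
            wavelet)[: x.size]

    def swt_denoise(x, wavelet="db4", level=4, mode="hard"):
        coeffs = pywt.swt(x, wavelet, level=level)      # needs len(x) % 2**level == 0
        sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
        T = sigma * np.sqrt(2 * np.log(x.size))
        thr = [(cA, pywt.threshold(cD, T, mode=mode)) for cA, cD in coeffs]
        return pywt.iswt(thr, wavelet)

    profile = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.3 * np.random.randn(1024)
    d_dwt, d_swt = dwt_denoise(profile), swt_denoise(profile)
    ```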

  8. Study on De-noising Technology of Radar Life Signal

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Fang; Wang, Lian-Huan; Ma, Jiang-Fei; Wang, Pei-Pei

    2016-05-01

    Radar detection is a novel life detection technology that can be applied to medical monitoring, anti-terrorism, disaster relief, street fighting, etc. As the radar life signal is very weak, it is often submerged in noise. Because of the non-stationarity and randomness of these clutter signals, it is necessary to de-noise efficiently before extracting and separating the useful signal. This paper improves the theoretical continuous-wave model of the radar life signal, performs de-noising by introducing the lifting wavelet transform, and determines the best threshold function by comparing the de-noising effects of different threshold functions. The results indicate that, by introducing the lifting wavelet transform and using a new improved soft-threshold de-noising method, both the SNR and the MSE of the signal are better than with the traditional methods.

  9. Wavelet median denoising of ultrasound images

    NASA Astrophysics Data System (ADS)

    Macey, Katherine E.; Page, Wyatt H.

    2002-05-01

    Ultrasound images are contaminated with both additive and multiplicative noise, modeled by Gaussian and speckle noise respectively. Distinguishing small features such as fallopian tubes in the female genital tract in this noisy environment is problematic. A new method for noise reduction, Wavelet Median Denoising, is presented. It consists of performing a standard noise reduction technique, median filtering, in the wavelet domain. The new method is tested on 126 images, comprising 9 original images each with 14 levels of Gaussian or speckle noise. Results for both separable and non-separable wavelets are evaluated, relative to soft-thresholding in the wavelet domain, using the signal-to-noise ratio and subjective assessment. The performance of Wavelet Median Denoising is comparable to that of soft-thresholding. Both methods are more successful in removing Gaussian noise than speckle noise, but Wavelet Median Denoising outperforms soft-thresholding in a larger number of cases of speckle noise reduction than of Gaussian noise reduction. Noise reduction is more successful using non-separable wavelets than separable wavelets. When both methods are applied to ultrasound images obtained from a phantom of the female genital tract a small improvement is seen; however, a substantial improvement is required prior to clinical use.
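
    A minimal sketch of median filtering in the wavelet domain, assuming PyWavelets and SciPy with a separable wavelet (the paper also evaluates non-separable ones, which `pywt` does not provide); wavelet, depth, and window size are illustrative.

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import median_filter

    def wavelet_median_denoise(image, wavelet="db2", level=2, size=3):
        """Median-filter the detail subbands instead of thresholding them."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        out = [coeffs[0]]
        for (cH, cV, cD) in coeffs[1:]:
            out.append(tuple(median_filter(b, size=size) for b in (cH, cV, cD)))
        rec = pywt.waverec2(out, wavelet)
        return rec[: image.shape[0], : image.shape[1]]

    img = np.random.rand(128, 128)
    img[40:80, 40:80] += 1.0
    noisy = img * (1 + 0.2 * np.random.randn(128, 128))  # speckle-like noise
    den = wavelet_median_denoise(noisy)
    ```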

  10. Noise reduction in Lidar signal using correlation-based EMD combined with soft thresholding and roughness penalty

    NASA Astrophysics Data System (ADS)

    Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo

    2018-01-01

    Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for Lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes, and the roughness penalty technique is applied to the relevant modes to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise, and the denoising performance was compared to that of other techniques, such as correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and the wavelet transform. Applying EMD-STRP to the measured Lidar signal efficiently suppressed the noise, with an improved signal-to-noise ratio of 22.25 dB and an extended detection range of 11 km.
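
    A rough sketch of the correlation-based split, assuming the PyEMD package (`pip install EMD-signal`) and SciPy. A Savitzky-Golay smoother stands in for the paper's roughness penalty, and the correlation cutoff is an illustrative assumption.

    ```python
    import numpy as np
    from PyEMD import EMD
    from scipy.signal import savgol_filter

    def emd_strp(signal, cc_cut=0.2):
        """Soft-threshold the IMFs weakly correlated with the signal and
        lightly smooth the strongly correlated (relevant) ones."""
        imfs = EMD()(signal)
        out = np.zeros_like(signal)
        for imf in imfs:
            cc = np.corrcoef(imf, signal)[0, 1]
            if abs(cc) < cc_cut:     # irrelevant mode: soft thresholding
                T = (np.median(np.abs(imf)) / 0.6745
                     * np.sqrt(2 * np.log(imf.size)))
                out += np.sign(imf) * np.maximum(np.abs(imf) - T, 0.0)
            else:                    # relevant mode: roughness-penalty stand-in
                out += savgol_filter(imf, 11, 3)
        return out

    t = np.linspace(0, 1, 2000)
    x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
    clean = emd_strp(x)
    ```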

  11. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions and then, inspired by the thresholding scheme in wavelet analysis, conducts adaptive interval thresholding to set to zero all components of the intrinsic mode functions that are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method has good capability in denoising and detail preservation.

  12. Improving wavelet denoising based on an in-depth analysis of the camera color processing

    NASA Astrophysics Data System (ADS)

    Seybold, Tamara; Plichta, Mathias; Stechele, Walter

    2015-02-01

    While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set, with additive white Gaussian noise (AWGN) as the usual noise model. This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps of the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods, improving wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it can be implemented with very low computational complexity and can process HD video sequences in real time on an FPGA.

  13. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum scale, which takes both efficiency and accuracy into account. Based on the noise variance estimate, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, a novel wavelet threshold de-noising method is put forward. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on noise variance estimation performs well on the test signals of the electro-mechanical transmission system: it can effectively eliminate transient interference in the voltage, current, and oil pressure signals while maintaining the dynamic characteristics of the signals.
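
    A sketch of the two-state mixture idea, assuming PyWavelets and scikit-learn: fit a two-component Gaussian mixture to the finest-scale detail coefficients and read the narrow component as noise. The paper's exact classification rule is not given in this record, so this is only an approximation of the approach.

    ```python
    import numpy as np
    import pywt
    from sklearn.mixture import GaussianMixture

    def gmm_noise_sigma(signal, wavelet="db4"):
        """Estimate the noise standard deviation from the finest-scale
        detail coefficients via a two-state Gaussian mixture."""
        _, d1 = pywt.dwt(signal, wavelet)            # finest-scale details
        gmm = GaussianMixture(n_components=2, random_state=0)
        gmm.fit(d1.reshape(-1, 1))
        # The component with the smaller variance models the noise state
        return float(np.sqrt(gmm.covariances_.ravel().min()))

    x = np.sin(np.linspace(0, 20, 4096)) + 0.1 * np.random.randn(4096)
    print(gmm_noise_sigma(x))    # close to 0.1, up to estimation error
    ```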

  14. Denoising in digital speckle pattern interferometry using wave atoms.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2007-05-15

    We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented.

  15. A method for predicting DCT-based denoising efficiency for grayscale images corrupted by AWGN and additive spatially correlated noise

    NASA Astrophysics Data System (ADS)

    Rubel, Aleksey S.; Lukin, Vladimir V.; Egiazarian, Karen O.

    2015-03-01

    Results of denoising based on the discrete cosine transform are obtained for a wide class of images corrupted by additive noise. Three types of noise are analyzed: additive white Gaussian noise and additive spatially correlated Gaussian noise with middle and high correlation levels. The TID2013 image database and some additional images are taken as test images. A conventional DCT filter and BM3D are used as the denoising techniques, and denoising efficiency is described by the PSNR and PSNR-HVS-M metrics. Within the hard-thresholding denoising mechanism, DCT-spectrum coefficient statistics are used to characterize images and, subsequently, the denoising efficiency for them. Denoising efficiency results are fitted to these statistics and efficient approximations are obtained. It is shown that the obtained approximations provide high accuracy in predicting denoising efficiency.
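
    For context on the prediction target above, the sketch below (NumPy/SciPy) shows the basic non-overlapping block DCT hard-thresholding filter whose efficiency the paper predicts; the block size, noise level, and 3-sigma threshold rule are common illustrative choices, not the paper's settings (practical DCT filters use overlapping blocks).

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_denoise(image, block=8, sigma=10.0):
        """Non-overlapping block DCT hard thresholding with T = 3*sigma."""
        T = 3.0 * sigma
        h, w = image.shape
        out = np.zeros_like(image, dtype=float)
        for i in range(0, h - h % block, block):
            for j in range(0, w - w % block, block):
                patch = dctn(image[i:i+block, j:j+block], norm="ortho")
                dc = patch[0, 0]                     # never kill the block mean
                patch[np.abs(patch) < T] = 0.0       # hard thresholding
                patch[0, 0] = dc
                out[i:i+block, j:j+block] = idctn(patch, norm="ortho")
        return out

    img = np.tile(np.linspace(0, 255, 256), (256, 1))
    noisy = img + 10.0 * np.random.randn(256, 256)   # AWGN, sigma = 10
    den = dct_denoise(noisy)
    ```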

  16. An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Langston, C. A.

    2016-12-01

    Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise, which makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening the time-frequency picture; noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of thresholding has been improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard-thresholding. In doing so, both the accuracy and the speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either remove the noise (white or colored) and keep the signal, or remove the signal and keep the noise; hence, it can be used in normal denoising applications or in ambient noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising/de-signaling of local microseismic and ocean-bottom seismic data. References: Mousavi, S.M., C.A. Langston, and S.P. Horton (2016), Automatic microseismic denoising and onset detection using the synchrosqueezed continuous wavelet transform, Geophysics, 81, V341-V355, doi:10.1190/GEO2015-0598.1. Mousavi, S.M., and C.A. Langston (2016a), Hybrid seismic denoising using higher-order statistics and improved wavelet block thresholding, Bull. Seismol. Soc. Am., 106, doi:10.1785/0120150345. Mousavi, S.M., and C.A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi:10.1016/j.jappgeo.2016.06.008.

  17. A hybrid spatial-spectral denoising method for infrared hyperspectral images using 2DPCA

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Ma, Yong; Mei, Xiaoguang; Fan, Fan

    2016-11-01

    Traditional noise reduction methods for 3-D infrared hyperspectral images typically operate independently in either the spatial or the spectral domain and overlook the relationship between the two. To address this issue, we propose a hybrid spatial-spectral method that links both domains. First, principal component analysis and bivariate wavelet shrinkage are performed in the 2-D spatial domain. Second, a 2-D principal component analysis transformation is conducted in the 1-D spectral domain to separate the basic components from the detail components. The energy distribution of noise is unaffected by orthogonal transformation; therefore, the signal-to-noise ratio of each component is used as a criterion to determine whether a component should be protected from over-denoising or denoised with certain 1-D denoising methods. This study implements the 1-D wavelet shrinkage threshold method based on Stein's unbiased risk estimator, and quantitative results on publicly available datasets demonstrate that our method improves denoising performance more effectively than other state-of-the-art methods.

  18. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    PubMed

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce diagnostic accuracy and hinder the physician's correct decisions on patients. The dual-tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform; however, threshold tuning of this method for noise removal from ECG signals has not been investigated yet. In this work, we provide a comprehensive study of the impact of the choice of threshold algorithm, threshold value, and wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm's response. The evaluation on synthetic ECG signals corrupted by various types of noise showed that the modified unified threshold and the wavelet hyperbolic threshold de-noising methods perform better under realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results show that the proposed method achieves higher performance than the ordinary dual-tree wavelet transform for all kinds of noise removal from ECG signals. The simulation results indicate that the algorithm is robust to all kinds of noise with varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.
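
    A minimal sketch of the tuning loop described above, assuming PyWavelets; an ordinary DWT stands in for the dual-tree transform, and the two threshold rules shown (universal and minimax) are common stand-ins for the paper's modified unified and hyperbolic rules.

    ```python
    import numpy as np
    import pywt

    def denoise(x, wavelet, level, rule, mode):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        n = x.size
        T = {"universal": sigma * np.sqrt(2 * np.log(n)),
             "minimax": sigma * (0.3936 + 0.1829 * np.log2(n))}[rule]
        return pywt.waverec(
            [coeffs[0]] + [pywt.threshold(d, T, mode=mode) for d in coeffs[1:]],
            wavelet)

    def tune(clean, noisy, wavelet="sym6"):
        """Grid-search decomposition level, threshold rule and mode by SNR."""
        best = None
        for level in (2, 3, 4, 5):
            for rule in ("universal", "minimax"):
                for mode in ("soft", "hard"):
                    est = denoise(noisy, wavelet, level, rule, mode)[: clean.size]
                    snr = 10 * np.log10(np.sum(clean ** 2)
                                        / np.sum((clean - est) ** 2))
                    if best is None or snr > best[0]:
                        best = (snr, level, rule, mode)
        return best   # (best SNR, level, rule, mode)
    ```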

  19. OBS Data Denoising Based on Compressed Sensing Using Fast Discrete Curvelet Transform

    NASA Astrophysics Data System (ADS)

    Nan, F.; Xu, Y.

    2017-12-01

    OBS (Ocean Bottom Seismometer) data denoising is an important step in OBS data processing and inversion; it is necessary to obtain clearer seismic phases for further velocity structure analysis. Traditional methods for OBS data denoising include the band-pass filter, the Wiener filter, and deconvolution (Liu, 2015), and most of these filtering methods are based on the Fourier transform (FT). Recently, multiscale transform methods such as the wavelet transform (WT) and the curvelet transform (CvT) have been widely used for data denoising in various applications. The FT, WT and CvT can represent a signal sparsely and separate noise in the transform domain, and they suit different cases. The FT suffers from the Gibbs phenomenon and cannot handle point discontinuities well. The WT is well localized and multiscale, but it has poor orientation selectivity and cannot handle discontinuities along curves well. The CvT is a multiscale directional transform that can represent curves with only a small number of coefficients; it provides an optimal sparse representation of objects with singularities along smooth curves, which is suitable for seismic data processing. Different seismic phases in OBS data appear as discontinuous curves in the time domain. Hence, we propose to analyze OBS data via the CvT and separate the noise in the CvT domain. In this paper, our sparsity-promoting inversion approach is constrained by an L1 condition, and we solve this L1 problem using modified iterative thresholding. Results show that the proposed method suppresses the noise well and gives sparse results in the curvelet domain. Figure 1 compares the curvelet denoising method with the wavelet method for the same iterations and threshold on a synthetic example: (a) original data; (b) noise-added data; (c) denoised data using the CvT; (d) denoised data using the WT. The CvT eliminates the noise well and gives a better result than the WT. We further applied the CvT denoising method to OBS data processing. Figure 2a is a common receiver gather collected in the Bohai Sea, China; the whole profile is 120 km long with 987 shots, the horizontal axis is shot number, and the vertical axis is travel time reduced by 6 km/s. We used our method to process the data and obtained the denoised profile of Figure 2b. After denoising, most of the high-frequency noise was suppressed and the seismic phases became clearer.
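
    A generic iterative soft-thresholding (ISTA) sketch for the L1 problem described above. A wavelet basis (PyWavelets) stands in for the curvelet frame, since no widely packaged Python curvelet transform is assumed here; the step size, threshold, and toy masking operator are illustrative.

    ```python
    import numpy as np
    import pywt

    def soft(c, T):
        return np.sign(c) * np.maximum(np.abs(c) - T, 0.0)

    def ista(y, A, At, wavelet="db6", level=4, n_iter=50, lam=0.1, step=1.0):
        """Iterative soft thresholding for min ||y - A x||^2 + lam*||W x||_1.
        A and At are the forward operator and its adjoint."""
        x = At(y)
        for _ in range(n_iter):
            g = x + step * At(y - A(x))                   # gradient step
            coeffs = pywt.wavedec(g, wavelet, level=level)
            coeffs = [coeffs[0]] + [soft(d, lam * step) for d in coeffs[1:]]
            x = pywt.waverec(coeffs, wavelet)[: y.size]   # proximal step
        return x

    # Toy inversion: recover a trace observed with 30% dead samples.
    rng = np.random.default_rng(0)
    mask = (rng.random(1024) > 0.3).astype(float)
    trace = np.sin(np.linspace(0, 6 * np.pi, 1024))
    y = mask * (trace + 0.2 * rng.standard_normal(1024))
    x_hat = ista(y, A=lambda v: mask * v, At=lambda v: mask * v)
    ```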

  20. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  1. Cloud-Scale Genomic Signals Processing for Robust Large-Scale Cancer Genomic Microarray Data Analysis.

    PubMed

    Harvey, Benjamin Simeon; Ji, Soo-Yeon

    2017-01-01

    As microarray data available to scientists continues to increase in size and complexity, it has become overwhelmingly important to find multiple ways to bring forth oncological inference to the bioinformatics community through the analysis of large-scale cancer genomic (LSCG) DNA and mRNA microarray data. Though there have been many attempts to elucidate the issue of bringing forth biological interpretation by means of wavelet preprocessing and classification, no research effort has focused on a cloud-scale distributed parallel (CSDP) separable 1-D wavelet decomposition technique for denoising through differential expression thresholding and classification of LSCG microarray data. This research presents a novel methodology that utilizes a CSDP separable 1-D method for wavelet-based transformation in order to initialize a threshold that retains significantly expressed genes through the denoising process for robust classification of cancer patients. The overall study was implemented within a CSDP environment. Cloud computing and wavelet-based thresholding for denoising were used for the classification of samples within the Global Cancer Map, the Cancer Cell Line Encyclopedia, and The Cancer Genome Atlas. The results show that separable 1-D parallel distributed wavelet denoising in the cloud and differential expression thresholding increased computational performance and enabled the generation of higher-quality LSCG microarray datasets, which led to more accurate classification results.

  2. Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2017-02-01

    Axles are important parts of railway locomotives and vehicles, and periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR), so the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum-searching process. Based on the positive correlation between the correlation coefficient and the SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables of the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods on real data. The statistics of the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.

  3. Near-Infrared Spectrum Detection of Wheat Gluten Protein Content Based on a Combined Filtering Method.

    PubMed

    Cai, Jian-Hua

    2017-09-01

    To eliminate the random error of the derivative near-IR (NIR) spectrum and to improve model stability and the prediction accuracy of the gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum based on both empirical mode decomposition and the wavelet soft-threshold method. The principle and the steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated based on the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method. Experimental results show that the proposed combined method is effective in completing pretreatment of the NIR spectrum, and the proposed method improves the accuracy of detection of wheat gluten protein content from the NIR spectrum.

  4. Improved CEEMDAN-wavelet transform de-noising method and its application in well logging noise reduction

    NASA Astrophysics Data System (ADS)

    Zhang, Jingxia; Guo, Yinghai; Shen, Yulin; Zhao, Difei; Li, Mi

    2018-06-01

    The use of geophysical logging data to identify lithology is important groundwork in logging interpretation. Inevitably, noise is mixed in during data collection due to the equipment and other external factors, and this affects further lithological identification and other logging interpretation. Therefore, to obtain a more accurate lithological identification it is necessary to adopt de-noising methods. In this study, a new de-noising method, namely improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN)-wavelet transform, is proposed, which integrates the strengths of improved CEEMDAN and the wavelet transform. Improved CEEMDAN, an effective self-adaptive multi-scale analysis method, is used to decompose non-stationary signals such as logging data into intrinsic mode functions (IMFs) of N different scales and one residual. Moreover, a self-adaptive scale selection method is used to determine the reconstruction scale k. Simultaneously, given the possible frequency aliasing problem between adjacent IMFs, a wavelet transform threshold de-noising method is used to reduce the noise of the (k-1)th IMF. Subsequently, the de-noised logging data are reconstructed from the de-noised (k-1)th IMF, the remaining low-frequency IMFs, and the residual. Finally, empirical mode decomposition, improved CEEMDAN, the wavelet transform and the proposed method are applied to analyze simulated and actual data. Results show diverse performance of these de-noising methods with regard to the accuracy of lithological identification; compared with the other methods, the proposed method has the best self-adaptability and accuracy in lithological identification.
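
    A rough sketch of the hybrid scheme, assuming the PyEMD package (`pip install EMD-signal`) and PyWavelets: decompose with CEEMDAN, wavelet-threshold the (k-1)th IMF bordering the noisy scales, and rebuild from it plus the lower-frequency IMFs and residue. The scale k is assumed to come from a separate selection step and is chosen by hand here; wavelet and level are illustrative.

    ```python
    import numpy as np
    import pywt
    from PyEMD import CEEMDAN

    def ceemdan_wavelet_denoise(signal, k, wavelet="db4"):
        imfs = CEEMDAN()(signal)                     # rows: IMF_1 ... IMF_N
        residue = signal - imfs.sum(axis=0)          # whatever CEEMDAN left over
        border = imfs[k - 2]                         # the (k-1)th IMF
        coeffs = pywt.wavedec(border, wavelet, level=3)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        T = sigma * np.sqrt(2 * np.log(border.size))
        coeffs = [coeffs[0]] + [pywt.threshold(d, T, "soft") for d in coeffs[1:]]
        border_dn = pywt.waverec(coeffs, wavelet)[: border.size]
        # de-noised border IMF + low-frequency IMFs + residue
        return border_dn + imfs[k - 1:].sum(axis=0) + residue

    t = np.linspace(0, 1, 2048)
    x = np.sin(2 * np.pi * 4 * t) + 0.4 * np.random.randn(t.size)
    clean = ceemdan_wavelet_denoise(x, k=4)
    ```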

  5. Denoising solar radiation data using coiflet wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Janier, Josefina B., E-mail: josefinajanier@petronas.com.my; Muthuvalu, Mohana Sundaram, E-mail: mohana.muthuvalu@petronas.com.my

    Signal denoising and smoothing play an important role in processing a given signal, whether from experiment or from data collection through observations. Collected data are usually a mixture of true data and some error or noise, which might come from the apparatus used to measure or collect the data or from human error in handling the data. Normally, before the data are used for further processing purposes, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because the received solar radiation data fluctuate over time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used for developing a mathematical model. In order to apply denoising using the wavelet transform (WT), the thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. Numerical results show clearly that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.

  6. Research on Ship-Radiated Noise Denoising Using Secondary Variational Mode Decomposition and Correlation Coefficient.

    PubMed

    Li, Yuxing; Li, Yaan; Chen, Xiao; Yu, Jing

    2017-12-26

    The sound signal of ships obtained by sensors, called ship-radiated noise (SN), contains many significant characteristics of ships, so research into denoising algorithms and their application is of great significance. Exploiting the advantages of variational mode decomposition (VMD) combined with the correlation coefficient (CC) for denoising, a hybrid secondary denoising algorithm is proposed using secondary VMD combined with the CC. First, different kinds of simulation signals are decomposed into several bandwidth-limited intrinsic mode functions (IMFs) using VMD, where the decomposition number for VMD is set equal to that of empirical mode decomposition (EMD); then, the CCs between the IMFs and the simulation signal are calculated. The noise IMFs are identified by a CC threshold and the remaining IMFs are reconstructed, realizing the first denoising pass. Secondary denoising of the simulation signal is then accomplished by repeating the above steps of decomposition, screening and reconstruction, and the final denoising result is determined according to the CC threshold. The denoising effect is compared under different signal-to-noise ratios and numbers of VMD decompositions. Experimental results show the validity of the proposed denoising algorithm using secondary VMD (2VMD) combined with CC compared to EMD denoising, ensemble EMD (EEMD) denoising, VMD denoising and cubic VMD (3VMD) denoising, as well as two recently presented denoising algorithms. The proposed denoising algorithm is applied to feature extraction and classification of SN signals, and can effectively improve the recognition rate of different kinds of ships.
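
    A rough sketch of one VMD + correlation-coefficient pass (the paper applies two), assuming the `vmdpy` package; the VMD parameter values shown are typical defaults from that package's documentation, and the CC cutoff is an illustrative assumption, not the paper's setting.

    ```python
    import numpy as np
    from vmdpy import VMD

    def vmd_cc_denoise(signal, K=6, cc_cut=0.2):
        """Keep only the IMFs whose correlation with the input exceeds
        cc_cut, then sum them as the denoised signal."""
        # alpha: bandwidth constraint; tau: noise tolerance; DC/init/tol: std.
        u, _, _ = VMD(signal, alpha=2000, tau=0.0, K=K, DC=0, init=1, tol=1e-7)
        keep = [m for m in u
                if abs(np.corrcoef(m, signal[: m.size])[0, 1]) > cc_cut]
        return np.sum(keep, axis=0)

    t = np.linspace(0, 1, 2000)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    once = vmd_cc_denoise(x)
    twice = vmd_cc_denoise(once)     # the "secondary" pass
    ```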

  7. Communication in a noisy environment: Perception of one's own voice and speech enhancement

    NASA Astrophysics Data System (ADS)

    Le Cocq, Cecile

    Workers in noisy industrial environments are often confronted with communication problems. Lots of workers complain about not being able to communicate easily with their coworkers when they wear hearing protectors. In consequence, they tend to remove their protectors, which exposes them to the risk of hearing loss. In fact this communication problem is a double one: first, the hearing protectors modify the perception of one's own voice; second, they interfere with understanding speech from others. This double problem is examined in this thesis. When wearing hearing protectors, the modification of the perception of one's own voice is partly due to the occlusion effect which is produced when an earplug is inserted in the ear canal. This occlusion effect has two main consequences: first, physiological noises at low frequencies are better perceived; second, the perception of one's own voice is modified. In order to gain a better understanding of this phenomenon, the literature results are analyzed systematically, and a new method to quantify the occlusion effect is developed. Instead of stimulating the skull with a bone vibrator or asking the subject to speak, as is usually done in the literature, it was decided to excite the buccal cavity with an acoustic wave. The experiment was designed in such a way that the acoustic wave which excites the buccal cavity does not directly excite the external ear or the rest of the body. The measurement of the hearing threshold with open and occluded ear was used to quantify the subjective occlusion effect for an acoustic wave in the buccal cavity. These experimental results, as well as those reported in the literature, have led to a better understanding of the occlusion effect and an evaluation of the role of each internal path from the acoustic source to the internal ear. The intelligibility of speech from others is altered by both the high sound levels of noisy industrial environments and the speech signal attenuation due to hearing protectors. A possible solution to this problem is to denoise the speech signal and transmit it under the hearing protector. Many denoising techniques are available and are often used for denoising speech in telecommunications. In the framework of this thesis, denoising by wavelet thresholding is considered. A first study of "classical" wavelet denoising techniques was conducted in order to evaluate their performance in noisy industrial environments. The tested speech signals were altered by industrial noises over a wide range of signal-to-noise ratios, and the denoised speech signals were evaluated with four criteria. A large database was obtained and analyzed with a selection algorithm designed for this purpose. This first study identified the influence of the different parameters of the wavelet denoising method on its quality and identified the "classical" method which gave the best performance in terms of denoising quality. It also generated ideas for designing a new thresholding rule suitable for speech wavelet denoising in a noisy industrial environment. In a second study, this new thresholding rule is presented and evaluated. Its performance is better than the "classical" method found in the first study when the signal-to-noise ratio of the speech signal is between -10 dB and 15 dB.

  8. A simple filter circuit for denoising biomechanical impact signals.

    PubMed

    Subramaniam, Suba R; Georgakis, Apostolos

    2009-01-01

    We present a simple scheme for denoising non-stationary biomechanical signals with the aim of accurately estimating their second derivative (acceleration). The method is based on filtering in fractional Fourier domains using well-known low-pass filters in a way that amounts to a time-varying cut-off threshold. The resulting algorithm is linear and its design is facilitated by the relationship between the fractional Fourier transform and joint time-frequency representations. The implemented filter circuit employs only three low-order filters while its efficiency is further supported by the low computational complexity of the fractional Fourier transform. The results demonstrate that the proposed method can denoise the signals effectively and is more robust against noise as compared to conventional low-pass filters.

  9. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is an important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond repetition time and microjoule power; photoacoustic signals containing the BGC information are generated through the thermo-elastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by various noises, e.g., interference from background sounds and from the multiple components of blood, and the quality of the photoacoustic signal directly impacts the precision of BGC measurement. Therefore, an improved wavelet denoising method was proposed to eliminate the noise in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function is proposed in this paper. Simulation results illustrate that the denoising result of the improved wavelet method is better than that of the traditional soft and hard threshold functions. To verify the feasibility of the improved function, actual photoacoustic BGC signals were tested; the test results demonstrate that the signal-to-noise ratio (SNR) with the improved function increases by about 40-80%, and its root-mean-square error (RMSE) decreases by about 38.7-52.8%.

  10. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    PubMed Central

    Zhang, Weina; Quan, Wei; Guo, Lei

    2012-01-01

    The precision of star point location is significant for identifying the star map and acquiring the aircraft attitude with star sensors. Under dynamic conditions, star images are not only corrupted by various noises but also blurred due to the angular rate of the star sensor. According to the different angular rates under dynamic conditions, a novel method is proposed in this article, which includes a denoising method based on an adaptive wavelet threshold and a restoration method for large angular rates. The adaptive threshold is adopted for denoising the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the star map blurred by a large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666

  11. Improved wavelet de-noising method of rail vibration signal for wheel tread detection

    NASA Astrophysics Data System (ADS)

    Zhao, Quan-ke; Gao, Xiao-rong; Luo, Lin

    2011-12-01

    The irregularities of a wheel tread can be detected by processing the acceleration vibration signal of the rail. Various kinds of noise from different sources, such as wheel-rail resonance, bad weather and human factors, are the key factors influencing detection accuracy. A method which uses wavelet threshold de-noising is investigated to reduce noise in the detection signal, and an improved signal processing algorithm based on it is established. The results of simulations and field experiments show that the proposed method can increase the signal-to-noise ratio (SNR) of the rail vibration signal effectively and improve the detection accuracy.

  12. Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.

    PubMed

    Zhou, Weidong; Gotman, Jean

    2004-01-01

    In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on higher-order statistics and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that the electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
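
    A minimal sketch of the wavelet-plus-ICA combination, assuming PyWavelets and scikit-learn's FastICA (standing in for the extended ICA algorithm of the paper); which components are artifacts is assumed to be decided separately, e.g. by visual inspection or a kurtosis rule.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import FastICA

    def wavelet_soft(x, wavelet="db4", level=4):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        T = sigma * np.sqrt(2 * np.log(x.size))
        return pywt.waverec(
            [coeffs[0]] + [pywt.threshold(d, T, "soft") for d in coeffs[1:]],
            wavelet)[: x.size]

    def remove_artifacts(channels, artifact_idx):
        """channels: EEG array (n_channels, n_samples); artifact_idx:
        indices of ICA components judged to be EMG/ECG artifacts."""
        den = np.array([wavelet_soft(ch) for ch in channels])
        ica = FastICA(n_components=den.shape[0], random_state=0)
        sources = ica.fit_transform(den.T)            # (n_samples, n_comp)
        sources[:, artifact_idx] = 0.0                # drop artifact sources
        return ica.inverse_transform(sources).T
    ```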

  13. Remote sensing image denoising application by generalized morphological component analysis

    NASA Astrophysics Data System (ADS)

    Yu, Chong; Chen, Xiong

    2014-12-01

    In this paper, we introduce a remote sensing image denoising method based on generalized morphological component analysis (GMCA). This algorithm extends the morphological component analysis (MCA) algorithm to the blind source separation framework. The iterative thresholding strategy adopted by the GMCA algorithm first works on the most significant features in the image, and then progressively incorporates smaller features to finely tune the parameters of the whole model. A mathematical analysis of the computational complexity of the GMCA algorithm is provided. Several comparison experiments with state-of-the-art denoising algorithms are reported. For quantitative assessment of the algorithms, the peak signal-to-noise ratio (PSNR) index and the structural similarity (SSIM) index are calculated to assess the denoising effect from the gray-level fidelity aspect and the structure-level fidelity aspect, respectively. Quantitative analysis of the experimental results, consistent with the visual effect of the denoised images, shows that the GMCA algorithm possesses excellent remote sensing image denoising effectiveness; indeed, it is hard to visually distinguish the original noiseless image from the image recovered by the GMCA algorithm.

  14. A signal processing method for the friction-based endpoint detection system of a CMP process

    NASA Astrophysics Data System (ADS)

    Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang

    2010-12-01

    A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the behavior of the Kalman filter innovation sequence during the CMP process. Applying this signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the method can judge the endpoint of the Cu CMP process.
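
    A minimal sketch of the innovation-sequence idea on a scalar random-walk Kalman filter (NumPy only); the process/measurement noise variances and the argmax endpoint rule are illustrative assumptions, not the paper's tuning.

    ```python
    import numpy as np

    def kalman_innovation(z, q=1e-4, r=1e-2):
        """Scalar random-walk Kalman filter; returns the innovation
        sequence z_k - x_pred used as the endpoint feature."""
        x, p = z[0], 1.0
        innov = np.zeros_like(z)
        for k, zk in enumerate(z):
            p = p + q                       # predict (random-walk model)
            innov[k] = zk - x               # innovation before update
            K = p / (p + r)                 # Kalman gain
            x = x + K * innov[k]            # update state estimate
            p = (1 - K) * p
        return innov

    # A level shift in the (denoised) friction signal shows up as a burst
    # in the innovation sequence, flagging a candidate endpoint.
    sig = np.concatenate([np.ones(300), 0.6 * np.ones(300)])
    sig += 0.02 * np.random.randn(sig.size)
    inn = kalman_innovation(sig)
    endpoint = int(np.argmax(np.abs(inn[1:]))) + 1      # close to sample 300
    ```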

  15. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality, and a twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to pixels in the wavelet domain with 8, 16, 32 and 64 non-overlapping block sizes. This first-fold process reduces speckle well but also blurs the object of interest. The second fold then restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF), introducing the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. The peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.

  16. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and the spectral domain; however, most prevailing denoising techniques process the imagery in only one specific domain and do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinkage algorithm with an improved threshold function is used to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more edge detail and weakens pseudo-Gibbs oscillations. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and artifacts that may have been introduced during the spatial denoising; with the filter window width appropriately selected according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is tested on a set of Hyperion imagery acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise ratio improvement than traditional spatial or spectral methods, while better preserving local spectral absorption features.
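
    A minimal sketch of the two stages, assuming PyWavelets and SciPy: BayesShrink thresholding per spatial subband, then cubic Savitzky-Golay smoothing along the spectral axis. Plain soft thresholding stands in for the paper's reshaped threshold function, and the wavelet, level, and window width are illustrative.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import savgol_filter

    def bayes_shrink_band(band, wavelet="sym4", level=3):
        """BayesShrink per subband: threshold = sigma^2 / sigma_x."""
        coeffs = pywt.wavedec2(band, wavelet, level=level)
        sigma2 = (np.median(np.abs(coeffs[-1][-1])) / 0.6745) ** 2
        out = [coeffs[0]]
        for subbands in coeffs[1:]:
            new = []
            for d in subbands:
                sx = np.sqrt(max(d.var() - sigma2, 1e-12))
                new.append(pywt.threshold(d, sigma2 / sx, "soft"))
            out.append(tuple(new))
        rec = pywt.waverec2(out, wavelet)
        return rec[: band.shape[0], : band.shape[1]]

    def spectral_smooth(cube, window=9, order=3):
        """Cubic Savitzky-Golay smoothing along the spectral axis."""
        return savgol_filter(cube, window, order, axis=-1)

    cube = np.random.rand(64, 64, 50) + 0.05 * np.random.randn(64, 64, 50)
    spatial = np.stack([bayes_shrink_band(cube[:, :, b])
                        for b in range(cube.shape[2])], axis=-1)
    clean = spectral_smooth(spatial)
    ```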

  17. Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter

    DOE PAGES

    Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...

    2018-04-06

    Thresholding filters operating in a sparse domain are highly effective in removing Gaussian random noise under the Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to high-amplitude, non-Gaussian noise, i.e., the erratic noise, which makes the suppression of this type of noise extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter in the presented method: random noise is described by the mean square, while erratic noise is downweighted through a damped weight. Unlike in conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Lastly, tests with several data sets demonstrate that, compared with conventional denoising approaches based on the LS criterion, the proposed denoising filter successfully attenuates the erratic noise without damaging the useful signal.
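
    The iterative scheme can be illustrated as follows, under loud assumptions: an orthonormal DCT stands in for the paper's sparse domain, and a plain ISTA step with Huber reweighting stands in for its LS solver; parameter values are arbitrary.

      import numpy as np
      from scipy.fft import dct, idct

      def huber_weights(r, eps):
          """Huber IRLS weights: quadratic inside eps, down-weighted
          (damped) outside, which suppresses erratic high-amplitude noise."""
          a = np.abs(r)
          return np.where(a <= eps, 1.0, eps / np.maximum(a, 1e-12))

      def robust_sparse_denoise(d, lam=0.1, eps=1.0, n_iter=20):
          """Iteratively reweighted soft thresholding in an orthonormal
          DCT domain (a stand-in for the paper's sparse domain)."""
          x = dct(d, norm="ortho")                      # sparse-domain coefficients
          for _ in range(n_iter):
              r = d - idct(x, norm="ortho")             # data residual
              W = huber_weights(r, eps)
              x = x + dct(W * r, norm="ortho")          # linearized LS step
              x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)   # soft threshold
          return idct(x, norm="ortho")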

  19. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    PubMed Central

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction in medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT circuit, a thresholding circuit, and an inverse DWT (IDWT) circuit. We also propose a novel adaptive thresholding scheme and incorporate it into the wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was implemented by downloading the Verilog code to a field programmable gate array (FPGA) based platform so that its noise-reduction ability could be further validated in actual practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit not only meets the requirement of real-time processing, but also achieves satisfactory noise reduction while preserving the sharp features of the ECG signals. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design achieves a clock frequency of 200 MHz with a power consumption of only 17.4 mW. PMID:26501290

  1. Signal-Noise Identification of Magnetotelluric Signals Using Fractal-Entropy and Clustering Algorithm for Targeted De-Noising

    NASA Astrophysics Data System (ADS)

    Li, Jin; Zhang, Xian; Gong, Jinzhe; Tang, Jingtian; Ren, Zhengyong; Li, Guang; Deng, Yanli; Cai, Jin

    A new technique is proposed for signal-noise identification and targeted de-noising of magnetotelluric (MT) signals. The method is based on fractal entropy and a clustering algorithm, which automatically identifies signal sections corrupted by common interference (square, triangle and pulse waves), enabling targeted de-noising and preventing the loss of useful information in filtering. To implement the technique, four characteristic parameters — fractal box dimension (FBD), Higuchi fractal dimension (HFD), fuzzy entropy (FuEn) and approximate entropy (ApEn) — are extracted from the MT time series. The fuzzy c-means (FCM) clustering technique is used to analyze the characteristic parameters and automatically distinguish signals with strong interference from the rest. The wavelet threshold (WT) de-noising method is used only to suppress the identified strong interference in the selected signal sections. The technique is validated on signal samples with known interference before being applied to a set of field-measured MT/audio-magnetotelluric (AMT) data. Compared with the conventional de-noising strategy that blindly applies the filter to the overall dataset, the proposed method automatically identifies and purposefully suppresses the intermittent interference in the MT/AMT signal. The resulting apparent resistivity-phase curve is more continuous and smooth, and the slow-changing trend in the low-frequency range is more precisely preserved. Moreover, the characteristics of the target-filtered MT/AMT signal are close to the essential characteristics of the natural field, and the result more accurately reflects the inherent electrical structure of the measured site.
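
    Of the four characteristic parameters, the Higuchi fractal dimension is the simplest to sketch; below is a standard HFD implementation in Python (the paper's exact settings, e.g. kmax, are assumptions here).

      import numpy as np

      def higuchi_fd(x, kmax=8):
          """Higuchi fractal dimension of a 1-D time series: one of the four
          characteristic parameters fed to the fuzzy c-means clustering."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          lk, ln_inv_k = [], []
          for k in range(1, kmax + 1):
              lengths = []
              for m in range(k):
                  idx = np.arange(m, n, k)
                  if len(idx) < 2:
                      continue
                  # normalized curve length for offset m and delay k
                  dist = np.abs(np.diff(x[idx])).sum()
                  norm = (n - 1) / ((len(idx) - 1) * k)
                  lengths.append(dist * norm / k)
              lk.append(np.log(np.mean(lengths)))
              ln_inv_k.append(np.log(1.0 / k))
          # slope of log L(k) versus log(1/k) estimates the fractal dimension
          return np.polyfit(ln_inv_k, lk, 1)[0]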

  2. Wavelet-domain de-noising of OCT images of human brain malignant glioma

    NASA Astrophysics Data System (ADS)

    Dolganova, I. N.; Aleksandrova, P. V.; Beshplav, S.-I. T.; Chernomyrdin, N. V.; Dubyanskaya, E. N.; Goryaynov, S. A.; Kurlov, V. N.; Reshetov, I. V.; Potapov, A. A.; Tuchin, V. V.; Zaytsev, K. I.

    2018-04-01

    We have proposed a wavelet-domain de-noising technique for imaging of human brain malignant glioma by optical coherence tomography (OCT). It involves OCT image decomposition using the direct fast wavelet transform, thresholding of the obtained wavelet spectrum, and an inverse fast wavelet transform for image reconstruction. By selecting both the wavelet basis and the thresholding procedure, we have found an optimal wavelet filter whose application improves differentiation of the considered brain tissue classes, i.e. malignant glioma and normal/intact tissue. Namely, it reduces the scattering noise in the OCT images while retaining the signal decrement for each tissue class. The observed results therefore reveal wavelet-domain de-noising as a prospective tool for improved characterization of biological tissues using OCT.

  3. Wavelet analysis techniques applied to removing varying spectroscopic background in calibration model for pear sugar content

    NASA Astrophysics Data System (ADS)

    Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping

    2005-11-01

    A new method is proposed to eliminate varying background and noise simultaneously for multivariate calibration of Fourier transform near-infrared (FT-NIR) spectral signals. An ideal spectrum signal prototype was constructed based on the FT-NIR spectrum of fruit sugar content measurement. The performances of wavelet-based threshold de-noising approaches with different combinations of wavelet base functions were compared. Three families of wavelet base functions (Daubechies, Symlets and Coiflets) were applied in a series of experiments to assess the performance of the wavelet bases and threshold selection rules. The experimental results show that the best de-noising performance is reached with the Daubechies 4 or Symlet 4 wavelet base functions. Based on the optimized parameters, wavelet regression models for the sugar content of pear were also developed, resulting in a smaller prediction error than a traditional partial least squares regression (PLSR) model.

  4. Denoising time-resolved microscopy image sequences with singular value thresholding.

    PubMed

    Furnival, Tom; Leary, Rowan K; Midgley, Paul A

    2017-07-01

    Time-resolved imaging in microscopy is important for the direct observation of a range of dynamic processes in both the physical and life sciences. However, the image sequences are often corrupted by noise, either as a result of high frame rates or a need to limit the radiation dose received by the sample. Here we exploit both spatial and temporal correlations using low-rank matrix recovery methods to denoise microscopy image sequences. We also make use of an unbiased risk estimator to address the issue of how much thresholding to apply in a robust and automated manner. The performance of the technique is demonstrated using simulated image sequences, as well as experimental scanning transmission electron microscopy data, where surface adatom motion and nanoparticle structural dynamics are recovered at rates of up to 32 frames per second. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
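
    The low-rank step can be sketched as singular value soft-thresholding on the Casorati matrix of the sequence; here a fixed threshold tau is assumed, whereas the paper selects the amount of thresholding automatically with an unbiased risk estimator.

      import numpy as np

      def svt_denoise(frames, tau):
          """Singular value (soft-)thresholding sketch: frames of shape
          (T, H, W) are flattened into the columns of a Casorati matrix,
          the singular values are shrunk by tau, and the resulting
          low-rank matrix is reshaped back into frames."""
          t, h, w = frames.shape
          casorati = frames.reshape(t, h * w).T           # (pixels x time)
          u, s, vt = np.linalg.svd(casorati, full_matrices=False)
          s = np.maximum(s - tau, 0.0)                    # shrink singular values
          low_rank = (u * s) @ vt
          return low_rank.T.reshape(t, h, w)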

  5. Rejection of the maternal electrocardiogram in the electrohysterogram signal.

    PubMed

    Leman, H; Marque, C

    2000-08-01

    The electrohysterogram (EHG) signal is mainly corrupted by the mother's electrocardiogram (ECG), which remains present despite analog filtering during acquisition. Wavelets are a powerful denoising tool and have already proved their efficiency on the EHG. In this paper, we propose a new method that employs the redundant wavelet packet transform. We first study wavelet packet coefficient histograms and propose an algorithm to automatically detect the histogram mode number. Using a new criterion, we compute a best basis adapted to the denoising. After EHG wavelet packet coefficient thresholding in the selected basis, the inverse transform is applied. The ECG seems to be very efficiently removed.
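
    A simplified sketch of wavelet packet thresholding with PyWavelets follows; it uses a universal threshold on the terminal nodes rather than the paper's histogram-driven best basis and mode detection, which are beyond a short example.

      import numpy as np
      import pywt

      def wp_threshold_denoise(x, wavelet="db4", level=4, mode="soft"):
          """Soft-thresholds the terminal wavelet-packet coefficients of a
          1-D signal, estimating the noise level from the highest-frequency
          packet (an assumed, generic choice)."""
          wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
          nodes = wp.get_level(level, order="natural")
          sigma = np.median(np.abs(nodes[-1].data)) / 0.6745   # robust noise estimate
          thr = sigma * np.sqrt(2 * np.log(len(x)))
          for node in nodes:
              node.data = pywt.threshold(node.data, thr, mode=mode)
          return wp.reconstruct(update=True)[:len(x)]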

  6. An improved method based on wavelet coefficient correlation to filter noise in Doppler ultrasound blood flow signals

    NASA Astrophysics Data System (ADS)

    Wan, Renzhi; Zu, Yunxiao; Shao, Lin

    2018-04-01

    The blood echo signal obtained by medical ultrasound Doppler devices always includes the vascular wall pulsation signal. The traditional method of removing the wall signal is high-pass filtering, which also removes the low-frequency part of the blood flow signal. Some scholars have put forward a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. This method uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, but it is in fact a kind of wavelet threshold de-noising method whose effect is not ideal. To achieve a better result, this paper proposes an improved method based on wavelet coefficient correlation to separate the blood signal and the wall signal, and simulates the algorithm by computer to verify its validity.

  7. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    NASA Astrophysics Data System (ADS)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been used extensively in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals aids fault diagnosis by giving meaningful results. The wavelet transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme to prevent the loss of signal information along with the noise. This work demonstrates the effectiveness of principal component analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, empirical mode decomposition (EMD) and NeighCoeff (NC), have been compared with adaptive thresholding (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the four denoising schemes mentioned above. The fault identification capability, as well as the SNR, kurtosis and RMSE, of the four denoising schemes have been compared. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes were evaluated based on the performance of the ANN models, and the best denoising scheme was identified from the classification accuracy results. PCA proved to be the best denoising scheme in all these regards.

  8. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification allows handling a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multi-component denoising approach to increase the accuracy of a subsequent classification, computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an extended morphological profile (EMP), and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by thresholding. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced to real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.

  9. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    PubMed

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. This paper presents a model-based method for the restoration of MRIs. A reduced-order model, in which the full system response is projected onto a subspace of lower dimensionality, is used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value thresholding (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared with direct MRI restoration methods via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems; the sparsity implicit in MRIs makes it possible to recover an image from significantly undersampled k-space. The challenge, however, is that random undersampling introduces incoherent artifacts, adding noise-like interference to the sparsely represented image, and the recovery algorithms in the literature are not capable of fully removing these artifacts. It is therefore necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value thresholding algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with the removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank by selecting a smaller number of dominant singular values. The singular value thresholding algorithm is performed by minimizing the nuclear norm of the difference between the sampled image and the recovered image. It is illustrated that this algorithm improves the ability of previous image reconstruction algorithms to remove noise artifacts while significantly improving the quality of MRI recovery.

  10. Customized Multiwavelets for Planetary Gearbox Fault Detection Based on Vibration Sensor Signals

    PubMed Central

    Sun, Hailiang; Zi, Yanyang; He, Zhengjia; Yuan, Jing; Wang, Xiaodong; Chen, Lue

    2013-01-01

    Planetary gearboxes exhibit complicated dynamic responses which are more difficult to detect in vibration signals than fixed-axis gear trains because of the special gear transmission structures. Diverse advanced methods have been developed for this challenging task to reduce or avoid unscheduled breakdown and catastrophic accidents. It is feasible to make fault features distinct by using multiwavelet denoising which depends on the feature separation and the threshold denoising. However, standard and fixed multiwavelets are not suitable for accurate fault feature detections because they are usually independent of the measured signals. To overcome this drawback, a method to construct customized multiwavelets based on the redundant symmetric lifting scheme is proposed in this paper. A novel indicator which combines kurtosis and entropy is applied to select the optimal multiwavelets, because kurtosis is sensitive to sharp impulses and entropy is effective for periodic impulses. The improved neighboring coefficients method is introduced into multiwavelet denoising. The vibration signals of a planetary gearbox from a satellite communication antenna on a measurement ship are captured under various motor speeds. The results show the proposed method could accurately detect the incipient pitting faults on two neighboring teeth in the planetary gearbox. PMID:23334609
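
    The kurtosis-entropy indicator can be illustrated with a toy fusion rule (the paper's exact combination is not reproduced here): kurtosis flags sharp isolated impulses, while low energy entropy flags periodic impulse trains.

      import numpy as np
      from scipy.stats import kurtosis

      def kurtosis_entropy_indicator(x, eps=1e-12):
          """Illustrative combined indicator: reward high kurtosis
          (sharp impulses) and low Shannon entropy of the normalized
          energy distribution (periodic impulses)."""
          k = kurtosis(x, fisher=False)
          p = np.asarray(x, dtype=float) ** 2
          p /= p.sum() + eps
          h = -(p * np.log(p + eps)).sum()
          return k / (h + eps)    # one of many possible fusion rules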

  11. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    Heart sound is one of the most important physiological signals, but the process of acquiring it can be interfered with by many external factors. The heart sound is a weak signal, and even weak external noise may lead to misjudgment of the pathological and physiological information it carries, and thus to misdiagnosis. Removing the noise mixed with the heart sound is therefore key. In this paper, a systematic study of heart sound denoising based on MATLAB is presented. The study first uses the signal processing functions of MATLAB to transform noisy heart sound signals into the wavelet domain and decompose them at multiple levels. Then, soft thresholding is applied to the detail coefficients using wavelet threshold denoising to eliminate noise, so that the signal denoising is significantly improved. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
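
    The described pipeline maps closely onto standard tools; a compact Python equivalent of the MATLAB procedure (multi-level DWT, soft thresholding of details, reconstruction, then notch filtering) might look as follows, with the wavelet, level and Q factor as assumed choices.

      import numpy as np
      import pywt
      from scipy.signal import iirnotch, filtfilt

      def denoise_heart_sound(x, fs=2000.0, wavelet="db6", level=5):
          """Multi-level DWT, soft thresholding of the detail coefficients,
          reconstruction, then 50 Hz and 35 Hz notch filters."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate
          thr = sigma * np.sqrt(2 * np.log(len(x)))
          coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          y = pywt.waverec(coeffs, wavelet)[:len(x)]
          for f0 in (50.0, 35.0):              # power-line and mechanical hum
              b, a = iirnotch(f0, Q=30.0, fs=fs)
              y = filtfilt(b, a, y)
          return y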

  12. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties in the field parameters, noise in the observed data and unknown boundary conditions are the main factors that limit the modeling and simulation of groundwater level (GL) time series. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling was compared for an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). In the second step, both the ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall and runoff, considering various input scenarios, to predict the GL one month ahead. In the final step, the GLs simulated in the second step were used as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where there is no observation. To evaluate and compare the GL patterns at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more reliable than the ANN-RBF model in both the calibration and validation steps.

  13. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, most conventional denoising methods presume that the noisy data are sampled on a uniform grid, making them unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing noisy data from a non-uniform grid onto a specified uniform grid is proposed. First, the denoising is performed on every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) is achieved through the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients are calculated using the spectral projected-gradient inversion algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, yielding the effective curvelet coefficients for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients, which completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data demonstrate the effectiveness of the proposed approach for noise attenuation in non-uniformly sampled data, compared with the conventional FDCT method and the wavelet transform.

  14. A wavelet-based estimator of the degrees of freedom in denoised fMRI time series for probabilistic testing of functional connectivity and brain graphs.

    PubMed

    Patel, Ameera X; Bullmore, Edward T

    2016-11-15

    Connectome mapping using techniques such as functional magnetic resonance imaging (fMRI) has become a focus of systems neuroscience. There remain many statistical challenges in analysis of functional connectivity and network architecture from BOLD fMRI multivariate time series. One key statistic for any time series is its (effective) degrees of freedom, df, which will generally be less than the number of time points (or nominal degrees of freedom, N). If we know the df, then probabilistic inference on other fMRI statistics, such as the correlation between two voxel or regional time series, is feasible. However, we currently lack good estimators of df in fMRI time series, especially after the degrees of freedom of the "raw" data have been modified substantially by denoising algorithms for head movement. Here, we used a wavelet-based method both to denoise fMRI data and to estimate the (effective) df of the denoised process. We show that seed voxel correlations corrected for locally variable df could be tested for false positive connectivity with better control over Type I error and greater specificity of anatomical mapping than probabilistic connectivity maps using the nominal degrees of freedom. We also show that wavelet despiked statistics can be used to estimate all pairwise correlations between a set of regional nodes, assign a P value to each edge, and then iteratively add edges to the graph in order of increasing P. These probabilistically thresholded graphs are likely more robust to regional variation in head movement effects than comparable graphs constructed by thresholding correlations. Finally, we show that time-windowed estimates of df can be used for probabilistic connectivity testing or dynamic network analysis so that apparent changes in the functional connectome are appropriately corrected for the effects of transient noise bursts. Wavelet despiking is both an algorithm for fMRI time series denoising and an estimator of the (effective) df of denoised fMRI time series. Accurate estimation of df offers many potential advantages for probabilistically thresholding functional connectivity and network statistics tested in the context of spatially variant and non-stationary noise. Code for wavelet despiking, seed correlational testing and probabilistic graph construction is freely available to download as part of the BrainWavelet Toolbox at www.brainwavelet.org. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned aerial vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and effectively classifying the target audio signal are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as adaptive least mean squares (LMS), wavelet denoising, time-frequency block thresholding and Wiener filtering, were implemented and their performance evaluated. The denoising algorithms were evaluated with the average signal-to-noise ratio (SNR), segmental SNR (SSNR), log likelihood ratio (LLR) and log spectral distance (LSD) metrics. To evaluate the effect of the denoising algorithms on the classification of target audio, support vector machine (SVM) and naive Bayes classification algorithms were implemented. Simulation results demonstrate that the LMS and discrete wavelet transform (DWT) denoising algorithms offered performance superior to the other algorithms. Finally, the LMS and DWT algorithms were implemented on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance in classifying target audio signals is robust, compared with DWT, across various noise types.
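
    For reference, the core of an LMS adaptive noise canceller is only a few lines; this generic sketch assumes a separate noise reference channel and illustrative values for the tap count and step size.

      import numpy as np

      def lms_cancel(primary, reference, n_taps=32, mu=0.005):
          """LMS adaptive noise canceller: the reference channel (e.g., a
          microphone near the UAV motors) is filtered to match the noise in
          the primary channel; the error output is the cleaned signal."""
          w = np.zeros(n_taps)
          out = np.zeros(len(primary))
          for n in range(n_taps, len(primary)):
              u = reference[n - n_taps:n][::-1]     # tapped delay line
              y = w @ u                             # noise estimate
              e = primary[n] - y                    # cleaned sample
              w += 2 * mu * e * u                   # gradient-descent update
              out[n] = e
          return out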

  16. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, blood cells can be segmented with this method. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising step we determine the best wavelet, namely the one that yields a segmentation with the largest area within the cell. We study different wavelet families and conclude that the db1 wavelet is the best; it can serve for future work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.

  17. Denoising forced-choice detection data.

    PubMed

    García-Pérez, Miguel A

    2010-02-01

    Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.

  19. Forecasting East Asian Indices Futures via a Novel Hybrid of Wavelet-PCA Denoising and Artificial Neural Network Models.

    PubMed

    Chan Phooi M'ng, Jacinta; Mehralizadeh, Mohammadali

    2016-01-01

    The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today's increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to deduct the same noise from the open-high-low-close signal of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong's Hang Seng futures, Japan's NIKKEI 225 futures, Singapore's MSCI futures, South Korea's KOSPI 200 futures, and Taiwan's TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN are more than the threshold buy-and-hold for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which insists that mechanical rules cannot outperform the threshold buy-and-hold. The findings, however, are consistent with literature that advocates technical analysis.
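
    A stripped-down sketch of the wavelet-plus-PCA denoising idea (not the WPCA-NN model itself) on a T x 4 open-high-low-close matrix: wavelet-threshold each channel, then keep only the leading principal components shared across the channels.

      import numpy as np
      import pywt

      def wpca_denoise(ohlc, wavelet="db4", level=3, n_keep=2):
          """Wavelet-PCA denoising sketch for an open-high-low-close matrix
          (T x 4): soft-threshold each channel in the wavelet domain, then
          retain the common components via PCA and discard the rest."""
          t = ohlc.shape[0]
          chans = []
          for c in range(ohlc.shape[1]):
              coeffs = pywt.wavedec(ohlc[:, c], wavelet, level=level)
              sigma = np.median(np.abs(coeffs[-1])) / 0.6745
              thr = sigma * np.sqrt(2 * np.log(t))
              coeffs[1:] = [pywt.threshold(d, thr, mode="soft") for d in coeffs[1:]]
              chans.append(pywt.waverec(coeffs, wavelet)[:t])
          x = np.column_stack(chans)
          mean = x.mean(axis=0)
          u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
          s[n_keep:] = 0.0                          # drop minor (noise) components
          return (u * s) @ vt + mean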

  20. Multisensor signal denoising based on matching synchrosqueezing wavelet transform for mechanical fault condition assessment

    NASA Astrophysics Data System (ADS)

    Yi, Cancan; Lv, Yong; Xiao, Han; Huang, Tao; You, Guanghui

    2018-04-01

    Since it is difficult to obtain the accurate running status of mechanical equipment with only one sensor, multisensor measurement technology has attracted extensive attention. In the field of mechanical fault diagnosis and condition assessment based on vibration signal analysis, multisensor signal denoising has emerged as an important tool for improving the reliability of the measurement result. A reassignment technique termed the synchrosqueezing wavelet transform (SWT) has obvious superiority in slowly time-varying signal representation and denoising for fault diagnosis applications. The SWT uses a time-frequency reassignment scheme, which can provide signal properties in 2D domains (time and frequency). However, when the measured signal contains strong noise components and a fast-varying instantaneous frequency, the performance of SWT-based analysis still depends on the accuracy of the instantaneous frequency estimation. In this paper, a matching synchrosqueezing wavelet transform (MSWT) is investigated as a potential candidate to replace the conventional synchrosqueezing transform for denoising and fault feature extraction. The improved technique utilizes comprehensive instantaneous frequency estimation via chirp rate estimation to achieve a highly concentrated time-frequency representation, so that the signal resolution can be significantly improved. To exploit inter-channel dependencies, the multisensor denoising strategy partitions the time-frequency domain using a modulated multivariate oscillation model; the common characteristics of the multivariate data can then be effectively identified. Furthermore, a modified universal threshold is utilized to remove noise components while retaining the signal components of interest. Thus, a novel MSWT-based multisensor signal denoising algorithm is proposed in this paper. The validity of this method is verified by numerical simulation and by experiments on a rolling bearing system and a gear system. The results show that the proposed multisensor matching synchrosqueezing wavelet transform (MMSWT) is superior to existing methods.

  1. Wavelet-domain de-noising technique for THz pulsed spectroscopy

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Gavdush, Arsenii A.; Fokina, Irina N.; Karasik, Valeriy E.; Reshetov, Igor V.; Kudrin, Konstantin G.; Nosov, Pavel A.; Yurchenko, Stanislav O.

    2014-09-01

    De-noising of terahertz (THz) pulsed spectroscopy (TPS) data is an essential problem, since noise in TPS data prevents correct reconstruction of the sample's spectral dielectric properties and hinders study of the sample's internal structure. There are certain regions of the TPS signal's Fourier spectrum where the Fourier-domain signal-to-noise ratio is relatively small. Effective de-noising might potentially expand the spectrometer's range of spectral sensitivity and reduce the waveform registration time, which is an essential concern for biomedical applications of TPS. This work shows how recent progress in wavelet-domain signal processing can be used for de-noising TPS waveforms, and demonstrates effective de-noising of TPS data using the fast wavelet transform (FWT). The results of selecting the optimal wavelet basis and the wavelet-domain thresholding technique are reported. The developed technique is applied to reconstruct the spectral characteristics of in vivo healthy and diseased skin samples in the THz frequency range.

  2. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM) on an unmanned aircraft offers considerable prospecting depth, lateral resolution and detection efficiency, and in recent years it has become an important technical means of rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of GREATEM data and the major noise sources, we propose a de-noising algorithm utilizing the wavelet threshold method and exponential adaptive window-width fitting. First, the white noise in the measured data is filtered using the wavelet threshold method. The data are then segmented using windows whose step lengths follow even logarithmic intervals. Within each window, the data polluted by electromagnetic noise are identified based on an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm fits the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with the fitting results, so that the non-stationary electromagnetic noise is effectively removed. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both the stationary white noise and the non-stationary electromagnetic noise in the GREATEM signal can be filtered effectively using the wavelet threshold-exponential adaptive window-width-fitting algorithm, which enhances the imaging quality.
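
    The window-replacement step reduces to fitting a two-parameter exponential decay to each flagged window; a sketch with SciPy follows (the decay model and initial guesses are assumptions for illustration).

      import numpy as np
      from scipy.optimize import curve_fit

      def fit_decay_window(t, v):
          """Replaces one noisy data window with its fitted exponential
          decay a * exp(-b * t); windows flagged by the energy detector
          would be substituted by this fit."""
          decay = lambda t, a, b: a * np.exp(-b * t)
          p0 = (max(v[0], 1e-12), 1.0)          # crude initial guess
          params, _ = curve_fit(decay, t, v, p0=p0, maxfev=10000)
          return decay(t, *params)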

  3. Speckle reduction in optical coherence tomography images based on wave atoms

    PubMed Central

    Du, Yongzhao; Liu, Gangjun; Feng, Guoying; Chen, Zhongping

    2014-01-01

    Optical coherence tomography (OCT) is an emerging noninvasive imaging technique based on low-coherence interferometry. OCT images suffer from speckle noise, which reduces image contrast. A shrinkage filter based on the wave atoms transform is proposed for speckle reduction in OCT images. The wave atoms transform is a new multiscale geometric analysis tool that offers a sparser expansion and a better representation for images containing oscillatory patterns and textures than traditional transforms such as the wavelet and curvelet transforms. Cycle-spinning technology is introduced to avoid visual artifacts, such as the Gibbs-like phenomenon, and to develop a translation-invariant wave atoms denoising scheme. The degree of speckle suppression in the denoised images is controlled by an adjustable parameter that determines the threshold in the wave atoms domain. The experimental results show that the proposed method can effectively remove speckle noise and improve OCT image quality. Signal-to-noise ratio, contrast-to-noise ratio, average equivalent number of looks, and cross-correlation (XCOR) values are obtained, and the results are compared with wavelet and curvelet thresholding techniques. PMID:24825507

  4. Blind One-Bit Compressive Sampling

    DTIC Science & Technology

    2013-01-17

    [14] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse... methods for nonconvex optimization on the unit sphere that have provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0

  5. Comparison of automatic denoising methods for phonocardiograms with extraction of signal parameters via the Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-05-01

    Phonocardiograms (PCGs) have many advantages over traditional auscultation (listening to the heart) because they may be replayed, may be analyzed for spectral and frequency content, and may record frequencies inaudible to the human ear. However, various sources of noise may pollute a PCG, including lung sounds, environmental noise and noise generated by contact between the recording device and the skin. Because PCG signals are known to be nonlinear and it is often not possible to determine their noise content, traditional de-noising methods may not be applied effectively. However, other methods, including wavelet de-noising, wavelet packet de-noising and averaging, can be employed to de-noise the PCG. This study examines and compares these de-noising methods. It addresses questions such as which de-noising method gives a better SNR, how much signal information is lost in the de-noising process, and the appropriate uses of the different methods, down to such specifics as which wavelets and decomposition levels give the best results in wavelet and wavelet packet de-noising. In general, wavelet and wavelet packet de-noising performed roughly equally, with optimal de-noising occurring at 3-5 levels of decomposition. Averaging also proved a highly useful de-noising technique; however, in some cases averaging is not appropriate. The Hilbert transform is used to illustrate the results of the de-noising process and to extract instantaneous features, including instantaneous amplitude, frequency and phase.

  6. Seismic data interpolation and denoising by learning a tensor tight frame

    NASA Astrophysics Data System (ADS)

    Liu, Lina; Plonka, Gerlind; Ma, Jianwei

    2017-10-01

    Seismic data interpolation and denoising play a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of the data in a tensor-product way, taking advantage of the structure contained in all the different modes (dimensions) simultaneously. In order to overcome the limitations of the usual tensor-product approach, we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem and includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each mode. The dictionary learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection-onto-convex-sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it substantially more efficient.

  7. ECG denoising with adaptive bionic wavelet transform.

    PubMed

    Sayadi, Omid; Shamsollahi, Mohammad Bagher

    2006-01-01

    In this paper a new ECG denoising scheme is proposed using a novel adaptive wavelet transform, named the bionic wavelet transform (BWT), which was first developed based on a model of the active auditory system. The BWT has some outstanding features, such as nonlinearity, high sensitivity and frequency selectivity, a concentrated energy distribution and the ability to reconstruct the signal via the inverse transform; but its most distinguishing characteristic is that its resolution in the time-frequency domain can be adjusted adaptively, not only by the signal frequency but also by the signal's instantaneous amplitude and its first-order differential. Moreover, by optimizing the BWT parameters in parallel with modifying a new threshold value, ECG denoising can be handled with results comparable to those of the wavelet transform (WT). Preliminary tests of the BWT applied to ECG denoising were conducted on signals from the MIT-BIH database and showed high noise-reduction performance.

  8. Simultaneous-Fault Diagnosis of Gearboxes Using Probabilistic Committee Machine

    PubMed Central

    Zhong, Jian-Hua; Wong, Pak Kin; Yang, Zhi-Xin

    2016-01-01

    This study combines signal de-noising, feature extraction, two pairwise-coupled relevance vector machines (PCRVMs) and particle swarm optimization (PSO) for parameter optimization to form an intelligent diagnostic framework for gearbox fault detection. First, the sensor signals are de-noised using the wavelet threshold method to lower the noise level. Then, the Hilbert-Huang transform (HHT) and energy pattern calculation are applied to extract the fault features from the de-noised signals. After that, an eleven-dimensional vector, which consists of the energies of nine intrinsic mode functions (IMFs), the maximum value of the HHT marginal spectrum and its corresponding frequency component, is obtained to represent the features of each gearbox fault. The two PCRVMs serve as two different fault detection committee members, trained on vibration and sound signals, respectively. The individual diagnostic result from each committee member is then combined by applying a new probabilistic ensemble method, which can improve the overall diagnostic accuracy and increase the number of detectable faults compared to individual classifiers acting alone. The effectiveness of the proposed framework is experimentally verified using test cases. The experimental results show the proposed framework is superior to existing single classifiers in terms of diagnostic accuracy for both single and simultaneous faults in the gearbox. PMID:26848665

  10. Incrementing data quality of multi-frequency echograms using the Adaptive Wiener Filter (AWF) denoising algorithm

    NASA Astrophysics Data System (ADS)

    Peña, M.

    2016-10-01

    Achieving an acceptable signal-to-noise ratio (SNR) can be difficult when working in sparsely populated waters and/or when species have low scattering, such as fluid-filled animals. The increasing use of higher frequencies and the study of deeper depths in fisheries acoustics, as well as the use of commercial vessels, raise the need for good denoising algorithms. Using a lower Sv threshold to remove noise or unwanted targets is not suitable in many cases and increases the relative background-noise component in the echogram, demanding more effectiveness from denoising algorithms. The Adaptive Wiener Filter (AWF) denoising algorithm is presented in this study. The technique is based on the AWF commonly used in digital photography and video enhancement. The algorithm first improves the quality of the data with variance-dependent smoothing, before estimating the noise level as the envelope of the Sv minima. The AWF denoising algorithm outperforms existing algorithms in the presence of Gaussian, speckle and salt-and-pepper noise, although impulse noise needs to be removed beforehand. Cleaned echograms present homogeneous echotraces with outlined edges.
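
    SciPy ships a local-variance Wiener filter that can serve as a stand-in for the AWF's smoothing stage; note this sketch uses SciPy's default noise estimate rather than the paper's envelope-of-Sv-minima estimate, and assumes the echogram is given in dB.

      import numpy as np
      from scipy.signal import wiener

      def awf_denoise(sv, size=(5, 5)):
          """Adaptive Wiener filtering of an echogram: Sv values in dB are
          converted to the linear domain, smoothed with a locally
          variance-dependent Wiener filter, and converted back to dB."""
          linear = 10.0 ** (sv / 10.0)
          filtered = wiener(linear, mysize=size)
          return 10.0 * np.log10(np.maximum(filtered, 1e-12))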

  11. Identification of a self-paced hitting task in freely moving rats based on adaptive spike detection from multi-unit M1 cortical signals

    PubMed Central

    Hammad, Sofyan H. H.; Farina, Dario; Kamavuako, Ernest N.; Jensen, Winnie

    2013-01-01

    Invasive brain–computer interfaces (BCIs) may prove to be a useful rehabilitation tool for severely disabled patients. Although some systems have shown to work well in restricted laboratory settings, their usefulness must be tested in less controlled environments. Our objective was to investigate if a specific motor task could reliably be detected from multi-unit intra-cortical signals from freely moving animals. Four rats were trained to hit a retractable paddle (defined as a “hit”). Intra-cortical signals were obtained from electrodes placed in the primary motor cortex. First, the signal-to-noise ratio was increased by wavelet denoising. Action potentials were then detected using an adaptive threshold, counted in three consecutive time intervals and were used as features to classify either a “hit” or a “no-hit” (defined as an interval between two “hits”). We found that a “hit” could be detected with an accuracy of 75 ± 6% when wavelet denoising was applied whereas the accuracy dropped to 62 ± 5% without prior denoising. We compared our approach with the common daily practice in BCI that consists of using a fixed, manually selected threshold for spike detection without denoising. The results showed the feasibility of detecting a motor task in a less restricted environment than commonly applied within invasive BCI research. PMID:24298254
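
    The denoise-then-threshold chain can be sketched as follows; the wavelet, level, and the multiplier k on the robust noise estimate (sigma = median(|x|)/0.6745) are assumed values, not those of the study.

      import numpy as np
      import pywt

      def detect_spikes(x, fs, wavelet="sym4", level=4, k=4.0):
          """Wavelet denoising followed by an adaptive amplitude threshold
          for spike detection; returns spike times in seconds."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate
          uthr = sigma * np.sqrt(2 * np.log(len(x)))
          coeffs[1:] = [pywt.threshold(c, uthr, mode="soft") for c in coeffs[1:]]
          y = pywt.waverec(coeffs, wavelet)[:len(x)]
          thr = k * np.median(np.abs(y)) / 0.6745            # adaptive spike threshold
          crossings = np.flatnonzero((y[1:] > thr) & (y[:-1] <= thr))
          return crossings / fs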

  12. Sparse representations via learned dictionaries for x-ray angiogram image denoising

    NASA Astrophysics Data System (ADS)

    Shang, Jingfan; Huang, Zhenghua; Li, Qian; Zhang, Tianxu

    2018-03-01

    X-ray angiogram image denoising is an active research topic in the field of computer vision. In particular, the denoising performance of many existing methods has been greatly improved by the widespread use of nonlocal similar patches. However, methods based only on nonlocal self-similar (NSS) patches can still be improved and extended. In this paper, we propose an image denoising model based on the sparsity of the NSS patches to obtain high denoising performance and high-quality images. In order to represent the NSS patches sparsely at every location of the image and to solve the image denoising model more efficiently, we obtain dictionaries as a global image prior by applying the K-SVD algorithm over the processed image; then the alternating direction method of multipliers (ADMM) is used to solve the image denoising model. The results of extensive synthetic experiments demonstrate that, owing to the dictionaries learned by the K-SVD algorithm, the proposed sparsely augmented Lagrangian image denoising (SALID) model performs effectively and obtains state-of-the-art denoising performance and higher-quality images. Moreover, we also give some denoising results for clinical X-ray angiogram images.
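    A rough sketch of patch-based dictionary denoising in the spirit of the paper follows; scikit-learn's MiniBatchDictionaryLearning stands in for K-SVD, and plain OMP coding replaces the ADMM solver, so this is an approximation rather than the SALID model itself.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def dict_denoise(noisy, patch=8, atoms=128, nonzero=4):
    P = extract_patches_2d(noisy, (patch, patch))
    X = P.reshape(len(P), -1)
    mean = X.mean(axis=1, keepdims=True)     # remove patch DC component
    X = X - mean
    dico = MiniBatchDictionaryLearning(n_components=atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=nonzero)
    code = dico.fit(X).transform(X)          # sparse codes over the dictionary
    rec = code @ dico.components_ + mean     # reconstruct patches
    return reconstruct_from_patches_2d(rec.reshape(P.shape), noisy.shape)
```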

  13. A Fast and On-Machine Measuring System Using the Laser Displacement Sensor for the Contour Parameters of the Drill Pipe Thread.

    PubMed

    Dong, Zhixu; Sun, Xingwei; Chen, Changzheng; Sun, Mengnan

    2018-04-13

    The inconvenient loading and unloading of a long and heavy drill pipe gives rise to difficulty in measuring the contour parameters of its threads at both ends. To solve this problem, in this paper we take the SCK230 drill pipe thread-repairing machine tool as a carrier to design and implement a fast, on-machine measuring system based on a laser probe. This system drives a laser displacement sensor to acquire the contour data of a given axial section of the thread by using the servo function of a CNC machine tool. To correct the sensor's measurement errors caused by the measuring point inclination angle, an inclination error model is built to compensate the data in real time. To better suppress random error interference and preserve real contour information, a new wavelet threshold function is proposed for wavelet threshold denoising of the data. The discrete data after denoising are segmented according to the geometrical characteristics of the drill pipe thread, and the regression model of the contour data in each section is fitted by the method of weighted total least squares (WTLS). The thread parameters are then calculated in real time to judge the machining quality. Inclination error experiments show that the proposed compensation model is accurate and effective, and it can improve the data acquisition accuracy of the sensor. Simulation results indicate that the improved threshold function has better continuity and self-adaptability, which guarantees the denoising effect while avoiding the complete elimination of real data distorted by random errors. Additionally, NC50 thread-testing experiments show that the proposed on-machine measuring system can complete the measurement of a 25 mm thread in 7.8 s, with a measurement accuracy of ±8 μm and a repeatability limit ≤ 4 μm (high repeatability); hence the accuracy and efficiency of measurement are both improved.
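    The improved threshold function itself is not given in the abstract; as an example of a shrinkage rule that, unlike hard thresholding, is continuous, the sketch below implements the classical non-negative garrote.

```python
import numpy as np

def garrote(w, thr):
    # Continuous shrinkage: zero inside [-thr, thr], w - thr**2 / w outside.
    w = np.asarray(w, dtype=float)
    shrunk = w - thr ** 2 / np.where(w == 0.0, np.inf, w)
    return np.where(np.abs(w) > thr, shrunk, 0.0)
```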

  15. Fractional domain varying-order differential denoising method

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran

    2014-10-01

    Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove noise from a corrupted image while retaining edges and other detailed features as much as possible. Recently, denoising in the fractional domain has become an active research topic. The fractional-order anisotropic diffusion method can reduce blocky effects and preserve edges in image denoising, and it has received much interest in the literature. Based on this method, we propose a new method for image denoising in which a fractional varying-order differential, rather than a constant-order differential, is used. The theoretical analysis and experimental results show that, compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional varying-order differential denoising model can preserve structure and texture well while quickly removing noise, and yields good visual effects and a better peak signal-to-noise ratio.
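    For orientation, the sketch below implements the classical integer-order Perona-Malik anisotropic diffusion that this family of methods generalizes; the fractional varying-order scheme of the paper is substantially more involved.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=20.0, dt=0.2):
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # finite differences in the
        ds = np.roll(u, 1, axis=0) - u        # four compass directions
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```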

  16. Vibration Sensor Data Denoising Using a Time-Frequency Manifold for Machinery Fault Diagnosis

    PubMed Central

    He, Qingbo; Wang, Xiangxiang; Zhou, Qiang

    2014-01-01

    Vibration sensor data from a mechanical system often carry important measurement information useful for machinery fault diagnosis. However, in practice the existence of background noise makes it difficult to identify the fault signature in the sensing data. This paper introduces the time-frequency manifold (TFM) concept into sensor data denoising and proposes a novel denoising method for reliable machinery fault diagnosis. The TFM signature reflects the intrinsic time-frequency structure of a non-stationary signal. The proposed method realizes data denoising by synthesizing the TFM using time-frequency synthesis and phase space reconstruction (PSR) synthesis. Owing to the merits of the TFM in noise suppression and resolution enhancement, the denoised signal achieves satisfactory denoising effects while keeping the inherent time-frequency structure. Moreover, this paper presents a clustering-based statistical parameter to evaluate the proposed method, and also presents a new diagnostic approach, called frequency probability time series (FPTS) spectral analysis, to show its effectiveness in fault diagnosis. The proposed TFM-based data denoising method has been employed to deal with a set of vibration sensor data from defective bearings, and the results verify that for machinery fault diagnosis the method is superior to two traditional denoising methods. PMID:24379045

  17. Deep Marginalized Sparse Denoising Auto-Encoder for Image Denoising

    NASA Astrophysics Data System (ADS)

    Ma, Hongqiang; Ma, Shiping; Xu, Yuelei; Zhu, Mingming

    2018-01-01

    The Stacked Sparse Denoising Auto-Encoder (SSDA) has been successfully applied to image denoising. As a deep network, the SSDA network, with its powerful data feature learning ability, is superior to traditional image denoising algorithms. However, the algorithm has high computational complexity and a slow convergence rate in training. To address this limitation, we present an image denoising method based on a Deep Marginalized Sparse Denoising Auto-Encoder (DMSDA). The loss function of the Sparse Denoising Auto-Encoder is marginalized so that it satisfies both sparseness and marginality. The experimental results show that the proposed algorithm not only outperforms SSDA in convergence speed and training time, but also achieves better denoising performance than current leading denoising algorithms, in both subjective and objective evaluations of image denoising.

  18. Optical coherence tomography retinal image reconstruction via nonlocal weighted sparse representation

    NASA Astrophysics Data System (ADS)

    Abbasi, Ashkan; Monadjemi, Amirhassan; Fang, Leyuan; Rabbani, Hossein

    2018-03-01

    We present a nonlocal weighted sparse representation (NWSR) method for the reconstruction of retinal optical coherence tomography (OCT) images. To reconstruct high signal-to-noise ratio, high-resolution OCT images, the use of efficient denoising and interpolation algorithms is necessary, especially when the original data were subsampled during acquisition. However, OCT images suffer from a high level of noise, which makes the estimation of sparse representations a difficult task. Thus, the proposed NWSR method merges sparse representations of multiple similar noisy and denoised patches to better estimate a sparse representation for each patch. First, the sparse representation of each patch is independently computed over an overcomplete dictionary, and then a nonlocal weighted sparse coefficient is computed by averaging the representations of similar patches. Since sparsity can reveal relevant information in noisy patches, combining the representations of noisy and denoised patches is beneficial for obtaining a more robust estimate of the unknown sparse representation. The denoised patches are obtained by applying an off-the-shelf image denoising method, and our method provides an efficient way to exploit information from both noisy and denoised patch representations. Experimental results on denoising and interpolation of spectral-domain OCT images demonstrate the effectiveness of the proposed NWSR method over existing state-of-the-art methods.

  19. Multiadaptive Bionic Wavelet Transform: Application to ECG Denoising and Baseline Wandering Reduction

    NASA Astrophysics Data System (ADS)

    Sayadi, Omid; Shamsollahi, Mohammad B.

    2007-12-01

    We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform (MABWT), that can be applied to ECG signals in order to remove noise from them under a wide range of noise variations. By using the definition of the bionic wavelet transform and adaptively determining both the center frequency of each scale and the adaptation function, the problem of desired signal decomposition is solved. Applying a newly proposed thresholding rule works successfully in denoising the ECG. Moreover, by using the multiadaptation scheme, low-pass noisy interference effects on the baseline of the ECG are removed as a direct task. The method was extensively tested with real and simulated ECG signals, and showed high noise-reduction performance, comparable to that of the wavelet transform (WT). Quantitative evaluation of the proposed algorithm shows that the average SNR improvement of MABWT is 1.82 dB more than the WT-based results in the best case. The procedure has also proved largely advantageous over wavelet-based methods for baseline wandering cancellation, including both DC components and baseline drifts.

  20. A multichannel block-matching denoising algorithm for spectral photon-counting CT images.

    PubMed

    Harrison, Adam P; Xu, Ziyue; Pourmorteza, Amir; Bluemke, David A; Mollura, Daniel J

    2017-06-01

    We present a denoising algorithm designed for a whole-body prototype photon-counting computed tomography (PCCT) scanner with up to 4 energy thresholds and associated energy-binned images. Spectral PCCT images can exhibit low signal-to-noise ratios (SNRs) due to the limited photon counts in each simultaneously-acquired energy bin. To help address this, our denoising method exploits the correlation and exact alignment between energy bins, adapting the highly effective block-matching 3D (BM3D) denoising algorithm for PCCT. The original single-channel BM3D algorithm operates patch-by-patch. For each small patch in the image, a patch grouping action collects similar patches from the rest of the image, which are then collaboratively filtered together. The resulting performance hinges on accurate patch grouping. Our improved multi-channel version, called BM3D_PCCT, incorporates two improvements. First, BM3D_PCCT uses a more accurate shared patch grouping based on the image reconstructed from photons detected in all 4 energy bins. Second, BM3D_PCCT performs a cross-channel decorrelation, adding a further dimension to the collaborative filtering process. These two improvements produce a more effective algorithm for PCCT denoising. Preliminary results compare BM3D_PCCT against BM3D_Naive, which denoises each energy bin independently. Experiments use a three-contrast PCCT image of a canine abdomen. Within five regions of interest, selected from paraspinal muscle, liver, and visceral fat, BM3D_PCCT reduces the noise standard deviation by 65.0%, compared to 40.4% for BM3D_Naive. Attenuation values of the contrast agents in calibration vials also cluster much more tightly around their respective lines of best fit. Mean angular differences (in degrees) for the original, BM3D_Naive, and BM3D_PCCT images, respectively, were 15.61, 7.34, and 4.45 (iodine); 12.17, 7.17, and 4.39 (gadolinium); and 12.86, 6.33, and 3.96 (bismuth). We outline a multi-channel denoising algorithm tailored for spectral PCCT images, demonstrating improved performance over an independent, yet state-of-the-art, single-channel approach. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  1. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We apply the discrete wavelet transform to denoise photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we used a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in a 22% improvement in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction in applications such as burn depth imaging, depth profiling of vascular lesions in skin, and the detection of single cancer cells in blood samples.
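    A minimal sketch of level-independent universal thresholding on an undecimated transform follows; PyWavelets' stationary wavelet transform (swt) is used here as a stand-in for the MODWT, and the wavelet choice is an assumption.

```python
import numpy as np
import pywt

def modwt_denoise(x, wavelet="sym8", level=4):
    n = len(x)
    pad = (-n) % (2 ** level)        # swt requires length % 2**level == 0
    xp = np.pad(x, (0, pad), mode="edge")
    coeffs = pywt.swt(xp, wavelet, level=level)
    # Noise scale from the finest-level details; universal threshold.
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(xp)))
    coeffs = [(cA, pywt.threshold(cD, thr, mode="soft")) for cA, cD in coeffs]
    return pywt.iswt(coeffs, wavelet)[:n]
```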

  2. Image denoising by exploring external and internal correlations.

    PubMed

    Yue, Huanjing; Sun, Xiaoyan; Yang, Jingyu; Wu, Feng

    2015-06-01

    Single image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme that explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy image and web images, respectively. We then propose reducing noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch matching accuracy in external denoising. The internal denoising is frequency truncation on internal cubes. By combining the internal and external denoising patches, we obtain a preliminary denoising result. In the second stage, we propose reducing noise by filtering the external and internal cubes, respectively, in the transform domain. In this stage, the preliminary denoising result not only enhances the patch matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements; e.g., it achieves >2 dB gain compared with BM3D at a wide range of noise levels.

  3. Measuring Glial Metabolism in Repetitive Brain Trauma and Alzheimer’s Disease

    DTIC Science & Technology

    2016-09-01

    Six denoising methods for dynamic MRS were developed and compared: singular value decomposition (SVD), wavelet denoising, sliding window, sliding window with Gaussian weighting, spline fitting and spectral improvement. This work supported the project by improving the software required for data analysis through the development of the six denoising methods, and by assisting with the testing.

  4. Single image super resolution algorithm based on edge interpolation in NSCT domain

    NASA Astrophysics Data System (ADS)

    Zhang, Mengqun; Zhang, Wei; He, Xinyu

    2017-11-01

    In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate the threshold value; coefficients below the threshold are classified as noise or signal according to the correlation among sub-bands at the same scale, and the noise is removed. An anisotropic diffusion filter is used to effectively enhance weak targets in regions of low contrast between target and background. Finally, the low-frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high-frequency sub-band coefficients after denoising and small-target enhancement, and the inverse NSCT is used to obtain the desired-resolution image. In order to verify the effectiveness of the proposed algorithm, it is compared with several common image reconstruction methods on synthetic images, motion-blurred images and hyperspectral images. The experimental results show that, compared with traditional single-image super-resolution algorithms, the proposed algorithm obtains smooth edges and good texture features, the reconstructed image structure is well preserved, and the noise is suppressed to some extent.

  5. A sequential solution for anisotropic total variation image denoising with interval constraints

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Noo, Frédéric

    2017-09-01

    We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV-penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails first finding the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here, uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficients in the patient body. Our results are simple yet seem to be unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
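    The sequential solution can be illustrated in a few lines: solve the unconstrained TV denoising problem, then project onto the uniform interval constraint. Note that scikit-image's Chambolle solver is isotropic TV, so this only approximates the anisotropic setting of the paper.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

noisy = np.random.rand(128, 128) + 0.1 * np.random.randn(128, 128)
u = denoise_tv_chambolle(noisy, weight=0.1)   # unconstrained TV solve
u = np.clip(u, 0.0, 1.0)                      # then enforce the interval [0, 1]
```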

  6. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM

    PubMed Central

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. It is therefore important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that the measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. Experiments on the capacity prognostics of battery 5 and battery 18 validate that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL. PMID:26413090

  7. Filtering of high noise breast thermal images using fast non-local means.

    PubMed

    Suganthi, S S; Ramakrishnan, S

    2014-01-01

    Analysis of breast thermograms is still a challenging task, primarily due to limitations such as low contrast, low signal-to-noise ratio and the absence of clear edges. Therefore, preprocessing techniques are always required before any quantitative analysis. In this work, a noise removal framework using the fast non-local means algorithm, method noise and a median filter was used to denoise breast thermograms. The images considered were subjected to the Anscombe transformation to convert the noise distribution from Poisson to Gaussian. The pre-denoised image was obtained by subjecting the transformed image to fast non-local means filtering. The method noise, which is the difference between the original and pre-denoised images, was observed to contain the noise component merged with a few structures and fine details of the image. The image details present in the method noise were extracted by smoothing the noise part using the median filter. The retrieved image detail was added to the pre-denoised image to obtain the final denoised image. The performance of this technique was compared with that of the Wiener and SUSAN filters. The results show that all the filters considered are able to remove the noise component. The proposed denoising framework performs well in preserving detail and removing noise, and its method noise contains negligible image detail. The Wiener filter produces a denoised image with no noise but smoothed edges, and its method noise contains a few structures and image details. The SUSAN filter produces a blurred denoised image with a little residual noise, and its method noise contains extensive structures and image details. Hence, the proposed denoising framework is able to preserve edge information and generate clear images that could help enhance the diagnostic relevance of breast thermograms.
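    A hedged sketch of the described framework follows: Anscombe transform, fast non-local means, then recovery of detail from the method noise with a median filter. OpenCV's denoiser expects 8-bit input, so the stabilized data are rescaled, and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np
from scipy.ndimage import median_filter

def denoise_thermogram(img):
    a = 2.0 * np.sqrt(img.astype(float) + 3.0 / 8.0)   # Anscombe VST
    a8 = cv2.normalize(a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    pre = cv2.fastNlMeansDenoising(a8, None, h=10,
                                   templateWindowSize=7, searchWindowSize=21)
    method_noise = a8.astype(float) - pre.astype(float)
    detail = median_filter(method_noise, size=3)       # salvage image detail
    return pre.astype(float) + detail
```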

  8. Denoising Medical Images using Calculus of Variations

    PubMed Central

    Kohan, Mahdi Nakhaie; Behnam, Hamid

    2011-01-01

    We propose a method for medical image denoising using the calculus of variations and local variance estimation by shaped windows. This method reduces additive noise while preserving small patterns and edges of images. A pyramid structure-texture decomposition of images is used to separate the noise and texture components based on local variance measures. The experimental results show that the proposed method gives visual improvement as well as better SNR, RMSE and PSNR than common medical image denoising methods. Experimental results on denoising a sample magnetic resonance image show that the SNR, PSNR and RMSE were improved by 19, 9 and 21 percent, respectively. PMID:22606674

  9. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising.

    PubMed

    Zhang, Kai; Zuo, Wangmeng; Chen, Yunjin; Meng, Deyu; Zhang, Lei

    2017-07-01

    Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace progress in very deep architectures, learning algorithms, and regularization methods for image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Unlike existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model not only exhibits high effectiveness in several general image denoising tasks, but can also be efficiently implemented by benefiting from GPU computing.

  10. From heuristic optimization to dictionary learning: a review and comprehensive comparison of image denoising algorithms.

    PubMed

    Shao, Ling; Yan, Ruomei; Li, Xuelong; Liu, Yan

    2014-07-01

    Image denoising is a well-explored topic in the field of image processing. In the past several decades, progress in image denoising has benefited from improved modeling of natural images. In this paper, we introduce a new taxonomy based on image representations for a better understanding of state-of-the-art image denoising techniques. Within each category, several representative algorithms are selected for evaluation and comparison. The experimental results are discussed and analyzed to determine the overall advantages and disadvantages of each category. In general, the nonlocal methods within each category produce better denoising results than local ones. In addition, methods based on overcomplete representations using learned dictionaries perform better than others. The comprehensive study in this paper serves as a good reference and should stimulate new research ideas in image denoising.

  11. External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Zhang, Lei; Zhang, David

    2018-06-01

    Most existing image denoising methods learn image priors from either external data or the noisy image itself to remove noise. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe by simple distributions such as the Gaussian, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.

  12. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

    When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using deep learning methods has shown excellent performance compared to conventional algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate its clinical applicability by comparison with existing image denoising algorithms. We train the proposed CDAE model using 3,000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including the median filter, total variation (TV) minimization, and non-local means (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed denoising model, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm achieves a superior noise-reduction effect in chest radiograms compared to the TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of the CDAE were at least 10% higher than those of the conventional denoising algorithms. In conclusion, the image denoising algorithm developed using the CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
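    A minimal convolutional denoising autoencoder in Keras is sketched below to illustrate the CDAE idea; the abstract does not specify the architecture or training set-up, so all layer sizes and hyperparameters here are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(256, 256, 1))           # assumed radiogram size
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D(2, padding="same")(x)    # encoder
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)                    # decoder
out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

cdae = keras.Model(inp, out)
cdae.compile(optimizer="adam", loss="mse")
# cdae.fit(noisy_radiograms, clean_radiograms, epochs=50, batch_size=16)
```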

  13. Green Channel Guiding Denoising on Bayer Image

    PubMed Central

    Zhang, Maojun

    2014-01-01

    Denoising is an indispensable function for digital cameras. Because noise is diffused during demosaicking, denoising ought to work directly on Bayer data. The difficulty of denoising a Bayer image lies in the interlaced mosaic pattern of red, green, and blue. The guided filter is a time-efficient explicit filter kernel which can incorporate additional information from a guidance image, but it has not previously been applied to Bayer images. In this work, we observe that the green channel of the Bayer pattern is higher in both sampling rate and signal-to-noise ratio (SNR) than the red and blue ones. Therefore the green channel can be used to guide denoising. This kind of guidance integrates the different color channels together. Experiments on both actual and simulated Bayer images indicate that the green channel acts well as the guidance signal, and the proposed method is competitive with other popular filter-kernel denoising methods. PMID:24741370
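    The following is a minimal grayscale guided filter (in the style of He et al.) using box means, with an interpolated green channel as guidance for a red or blue plane; the paper's handling of the interlaced mosaic layout is more involved than shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    # Local linear model: src ~ a * guide + b within each window.
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = (corr_gs - mean_g * mean_s) / (var_g + eps)
    b = mean_s - a * mean_g
    # Average the per-window coefficients, then apply to the guide.
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```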

  14. Pipeline for effective denoising of digital mammography and digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Borges, Lucas R.; Bakic, Predrag R.; Foi, Alessandro; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2017-03-01

    Denoising can be used as a tool to enhance image quality and support low radiation doses in X-ray medical imaging. The effectiveness of denoising techniques relies on the validity of the underlying noise model. In full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT), calibration steps like the detector offset and flat-fielding can affect assumptions made by most denoising techniques. Furthermore, the quantum noise found in X-ray images is signal-dependent and can only be treated by specific filters. In this work we propose a pipeline for FFDM and DBT image denoising that accounts for the calibration steps and simplifies the modeling of the noise statistics through variance-stabilizing transformations (VST). The performance of a state-of-the-art denoising method was tested with and without the proposed pipeline. To evaluate the method, objective metrics such as the normalized root mean square error (N-RMSE), noise power spectrum, modulation transfer function (MTF) and frequency signal-to-noise ratio (SNR) were analyzed. Preliminary tests show that the pipeline improves denoising. When the pipeline is not used, bright pixels of the denoised image are under-filtered and dark pixels are over-smoothed due to the assumption of a signal-independent Gaussian model. The pipeline improved denoising by up to 20% in terms of spatial N-RMSE and up to 15% in terms of frequency SNR. Besides improving the denoising, the pipeline does not increase signal smoothing significantly, as shown by the MTF. Thus, the proposed pipeline can be used with state-of-the-art denoising techniques to improve the quality of DBT and FFDM images.
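    A compact sketch of a VST-based pipeline follows: stabilize the signal-dependent (Poisson-like) noise with the Anscombe transform, apply a Gaussian denoiser, then invert. A plain Gaussian blur stands in for the state-of-the-art denoiser tested in the paper, and the simple algebraic inverse replaces the exact unbiased inverse.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vst_denoise(counts):
    stab = 2.0 * np.sqrt(counts + 3.0 / 8.0)   # Anscombe forward VST
    den = gaussian_filter(stab, sigma=1.0)     # placeholder Gaussian denoiser
    return (den / 2.0) ** 2 - 3.0 / 8.0        # algebraic inverse VST
```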

  15. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and develop a practical solution. We attempt to remove the noise in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in a spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the contour coefficients. Using these denoised images, the brain 3D visualization is produced through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.

  16. [Laser Raman spectrum analysis of carbendazim pesticide].

    PubMed

    Wang, Xiao-bin; Wu, Rui-mei; Liu, Mu-hua; Zhang, Lu-ling; Lin, Lei; Yan, Lin-yuan

    2014-06-01

    Raman signals of solid and liquid carbendazim pesticide were collected by a laser Raman spectrometer. The acquired Raman spectrum of solid carbendazim was preprocessed by the wavelet analysis method, and the optimal combination of wavelet denoising parameters was selected through a mixed orthogonal test. The results showed that the best effect, with a signal-to-noise ratio (SNR) of 62.483, was obtained with the db2 wavelet function, a decomposition level of 2, the 'rigrsure' threshold selection scheme and the 'sln' rescaling mode. According to the vibration modes of different functional groups, the de-noised Raman bands could be divided into three regions: 1 400-2 000, 700-1 400 and 200-700 cm(-1). The de-noised Raman bands were then assigned and analyzed, and the characteristic vibrational modes were obtained in the different wavenumber ranges. Strong Raman signals were observed at 619, 725, 964, 1 022, 1 265, 1 274 and 1 478 cm(-1); these are characteristic Raman peaks of solid carbendazim pesticide. Characteristic Raman peaks were found at 629, 727, 1 001, 1 219, 1 258 and 1 365 cm(-1) in the Raman spectrum of liquid carbendazim; these peaks basically tally with those of the solid carbendazim. The results can provide a basis for the rapid screening of pesticide residues in food and agricultural products based on Raman spectra.

  17. Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis

    PubMed Central

    Noureldin, Aboelmagd; Armstrong, Justin; El-Shafie, Ahmed; Karamat, Tashfeen; McGaughey, Don; Korenberg, Michael; Hussain, Aini

    2012-01-01

    In both military and civilian applications, the inertial navigation system (INS) and the global positioning system (GPS) are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency) inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS) algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.

  18. Geodesic denoising for optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Shahrian Varnousfaderani, Ehsan; Vogl, Wolf-Dieter; Wu, Jing; Gerendas, Bianca S.; Simader, Christian; Langs, Georg; Waldstein, Sebastian M.; Schmidt-Erfurth, Ursula

    2016-03-01

    Optical coherence tomography (OCT) is an optical signal acquisition method capturing micrometer-resolution, cross-sectional three-dimensional images. OCT images are used widely in ophthalmology to diagnose and monitor retinal diseases such as age-related macular degeneration (AMD) and glaucoma. While OCT allows the visualization of retinal structures such as vessels and retinal layers, image quality and contrast are reduced by speckle noise, obfuscating small, low-intensity structures and structural boundaries. Existing denoising methods for OCT images may remove clinically significant image features such as texture and the boundaries of anomalies. In this paper, we propose a novel patch-based denoising method, Geodesic Denoising. The method reduces noise in OCT images while preserving clinically significant, although small, pathological structures, such as fluid-filled cysts in diseased retinas. Our method selects optimal image patch distribution representations based on geodesic patch similarity to noisy samples. Patch distributions are then randomly sampled to build a set of best-matching candidates for every noisy sample, and the denoised value is computed as a geodesic weighted average of the best candidate samples. Our method is evaluated qualitatively on real pathological OCT scans and quantitatively on a proposed set of ground-truth, noise-free synthetic OCT scans with artificially added noise and pathologies. Experimental results show that the performance of our method is comparable with state-of-the-art denoising methods while outperforming them in preserving critical, clinically relevant structures.

  19. Denoising time-domain induced polarisation data using wavelet techniques

    NASA Astrophysics Data System (ADS)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving signal to noise ratio in such environments is by using analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet based denoising techniques for processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be significantly avoided with the use of wavelet based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  20. Analysis of Non Local Image Denoising Methods

    NASA Astrophysics Data System (ADS)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non-Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications to their proposal. In this work we analyze those methods, trying to understand their properties, while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
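    The non-local means estimate averages pixels whose surrounding patches look alike, with weights proportional to exp(-||P_i - P_j||^2 / h^2). A one-call illustration via scikit-image follows; the parameter values are arbitrary.

```python
import numpy as np
from skimage.restoration import denoise_nl_means

noisy = np.random.rand(128, 128) + 0.05 * np.random.randn(128, 128)
# Patch size, search window and filtering strength h are assumptions.
out = denoise_nl_means(noisy, patch_size=5, patch_distance=6, h=0.08)
```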

  1. A denoising algorithm for CT image using low-rank sparse coding

    NASA Astrophysics Data System (ADS)

    Lei, Yang; Xu, Dong; Zhou, Zhengyang; Wang, Tonghe; Dong, Xue; Liu, Tian; Dhabaan, Anees; Curran, Walter J.; Yang, Xiaofeng

    2018-03-01

    We propose a denoising method for CT images based on low-rank sparse coding. The proposed method constructs an adaptive dictionary of image patches and estimates the sparse coding regularization parameters using a Bayesian interpretation. A low-rank approximation approach is used to simultaneously construct the dictionary and achieve sparse representation through clustering similar image patches. A variable-splitting scheme and a quadratic optimization are used to reconstruct the CT image based on the achieved sparse coefficients. We tested this denoising technology using phantom, brain and abdominal CT images. The experimental results showed that the proposed method delivers state-of-the-art denoising performance, both in terms of objective criteria and visual quality.

  2. Reduction of speckle noise from optical coherence tomography images using multi-frame weighted nuclear norm minimization method

    NASA Astrophysics Data System (ADS)

    Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2015-12-01

    In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame framework, since an adequately denoised image cannot be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, and then denoises and averages them together to obtain a high signal-to-noise ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.

  3. Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.

    2008-07-01

    Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong “staircase” artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (pBH) provide a good approximation to those of Haar (pH) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that pBH are essentially upper-bounded by pH. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.

  4. A Shearlet-based algorithm for quantum noise removal in low-dose CT images

    NASA Astrophysics Data System (ADS)

    Zhang, Aguan; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Yang, Xiaopeng

    2016-03-01

    Low-dose CT (LDCT) scanning is a potential way to reduce the population's radiation exposure from X-rays, so it is necessary to improve the quality of low-dose CT images. In this paper, we propose an effective algorithm for quantum noise removal in LDCT images using the shearlet transform. Because quantum noise can be modeled by a Poisson process, we first transform the quantum noise by using the Anscombe variance-stabilizing transform (VST), producing approximately Gaussian noise with unit variance. Second, the noise-free shearlet coefficients are estimated by adaptive hard-threshold processing in the shearlet domain. Third, we reconstruct the de-noised image using the inverse shearlet transform. Finally, an inverse Anscombe transform is applied to the de-noised image, producing the improved image. The main contribution is the combination of the Anscombe VST with the shearlet transform. In this way, edge coefficients and noise coefficients can be separated effectively from the high-frequency sub-bands. A number of experiments are performed on LDCT images using the proposed method. Both quantitative and visual results show that the proposed method can effectively reduce the quantum noise while enhancing subtle details. It has certain value in clinical application.

  5. Fractional Diffusion, Low Exponent Lévy Stable Laws, and 'Slow Motion' Denoising of Helium Ion Microscope Nanoscale Imagery.

    PubMed

    Carasso, Alfred S; Vladár, András E

    2012-01-01

    Helium ion microscopes (HIM) are capable of acquiring images with better than 1 nm resolution, and HIM images are particularly rich in morphological surface details. However, such images are generally quite noisy. A major challenge is to denoise these images while preserving delicate surface information. This paper presents a powerful slow motion denoising technique, based on solving linear fractional diffusion equations forward in time. The method is easily implemented computationally, using fast Fourier transform (FFT) algorithms. When applied to actual HIM images, the method is found to reproduce the essential surface morphology of the sample with high fidelity. In contrast, such highly sophisticated methodologies as Curvelet Transform denoising, and Total Variation denoising using split Bregman iterations, are found to eliminate vital fine scale information, along with the noise. Image Lipschitz exponents are a useful image metrology tool for quantifying the fine structure content in an image. In this paper, this tool is applied to rank order the above three distinct denoising approaches, in terms of their texture preserving properties. In several denoising experiments on actual HIM images, it was found that fractional diffusion smoothing performed noticeably better than split Bregman TV, which in turn, performed slightly better than Curvelet denoising.
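    The core of the slow-motion technique, solving a linear fractional diffusion equation forward in time via the FFT, can be sketched compactly: each Fourier coefficient decays as exp(-t|k|^alpha), reducing to Gaussian-like smoothing at alpha = 2. The step size and exponent below are illustrative assumptions.

```python
import numpy as np

def fractional_diffusion(img, alpha=1.2, t=0.5):
    # Angular spatial frequencies on the image grid.
    ky = 2 * np.pi * np.fft.fftfreq(img.shape[0])[:, None]
    kx = 2 * np.pi * np.fft.fftfreq(img.shape[1])[None, :]
    k_mag = np.sqrt(kx ** 2 + ky ** 2)
    decay = np.exp(-t * k_mag ** alpha)      # fractional diffusion kernel
    return np.real(np.fft.ifft2(np.fft.fft2(img) * decay))
```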

  6. Exploring the impact of wavelet-based denoising in the classification of remote sensing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco

    2016-10-01

    The classification of remote sensing hyperspectral images for land cover applications is a very active topic. In the case of supervised classification, Support Vector Machines (SVMs) play a dominant role. Recently, the Extreme Learning Machine algorithm (ELM) has also been extensively used. The classification scheme previously published by the authors, called WT-EMP, introduces spatial information into the classification process by means of an Extended Morphological Profile (EMP) that is created from features extracted by wavelets. In addition, the hyperspectral image is denoised in the 2-D spatial domain, also using wavelets, and is joined to the EMP via a stacked vector. In this paper, the scheme is improved to achieve two goals. The first is to reduce the classification time while preserving the accuracy of the classification by using ELM instead of SVM. The second is to improve accuracy by performing not only a 2-D denoising for every spectral band, but also an additional prior 1-D spectral-signature denoising applied to each pixel vector of the image. For each denoising, the image is transformed by applying a 1-D or 2-D wavelet transform, and then NeighShrink thresholding is applied. Improvements in classification accuracy are obtained, especially for images with close regions in the classification reference map, because in these cases the accuracy of the classification at the edges between classes is more relevant.

  7. Network-based de-noising improves prediction from microarray data.

    PubMed

    Kato, Tsuyoshi; Murata, Yukio; Miura, Koh; Asai, Kiyoshi; Horton, Paul B; Tsuda, Koji; Fujibuchi, Wataru

    2006-03-20

    Prediction of human cell responses to anti-cancer drugs (compounds) from microarray data is a challenging problem, due to the noise properties of microarrays as well as the high variance of living cell responses to drugs. Hence there is a strong need for methods that are more practical and robust than standard methods for real-value prediction. We devised an extended version of the off-subspace noise-reduction (de-noising) method to incorporate heterogeneous network data, such as sequence similarity or protein-protein interactions, into a single framework. Using that method, we first de-noise the gene expression data for the training and test data, and also the drug-response data for the training data. Then we predict the unknown responses of each drug from the de-noised input data. To ascertain whether de-noising improves prediction, we carry out 12-fold cross-validation and assess the prediction performance using Pearson's correlation coefficient between the true and predicted response values. De-noising improves the prediction performance for 65% of drugs. Furthermore, we found that this noise reduction method is robust and effective even when a large amount of artificial noise is added to the input data. We conclude that our extended off-subspace noise-reduction method combining heterogeneous biological data is successful and quite useful for improving the prediction of human cancer cell drug responses from microarray data.

  8. Movie denoising by average of warped lines.

    PubMed

    Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro

    2007-09-01

    Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for close similar samples along the level surface passing through it. With these similar samples, we estimate the denoised pixel. Close similar samples are found by warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm based on a method for epipolar line matching in stereo pairs which has per-line complexity O(N), where N is the number of columns in the image. In this way, when applied to an image sequence, our algorithm is computationally efficient, having a complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is suited to denoising image sequences with additive white noise while respecting the visual details of the movie frames. We have also experimented with other types of noise with satisfactory results.

  9. MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?

    PubMed

    Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence

    2017-09-01

    Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel, and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of the SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing, with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from ongoing progress in Gaussian denoising, and offering several speckle reduction results whose method-specific artifacts can be dismissed by comparing results across methods.

  10. Adaptively Tuned Iterative Low Dose CT Image Denoising

    PubMed Central

    Hashemi, SayedMasoud; Paul, Narinder S.; Beheshti, Soosan; Cobbold, Richard S. C.

    2015-01-01

    Improving image quality is a critical objective in low dose computed tomography (CT) imaging and is the primary focus of CT image denoising. State-of-the-art CT denoising algorithms are mainly based on iterative minimization of an objective function, in which the performance is controlled by regularization parameters. To achieve the best results, these parameters should be chosen carefully. However, the parameter selection is typically performed in an ad hoc manner, which can cause the algorithms to converge slowly or become trapped in a local minimum. To overcome these issues a noise confidence region evaluation (NCRE) method is used, which evaluates the denoising residuals iteratively and compares their statistics with those produced by additive noise. It then updates the parameters at the end of each iteration to achieve a better match to the noise statistics. By combining NCRE with the fundamentals of the block-matching and 3D filtering (BM3D) approach, a new iterative CT image denoising method is proposed. It is shown that this new denoising method improves the BM3D performance in terms of both the mean square error and a structural similarity index. Moreover, simulations and patient results show that this method preserves the clinically important details of low dose CT images together with a substantial noise reduction. PMID:26089972
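
    A schematic sketch of the residual-matching idea, with a Gaussian filter standing in for the actual iterative reconstruction and BM3D machinery; the helper name and update rule are illustrative assumptions, not the published NCRE test:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def tune_denoiser(noisy, sigma_noise, n_iter=20, lr=0.25):
            """Adjust a smoothing parameter so the residual variance matches sigma_noise**2."""
            strength = 1.0                                        # initial regularization strength
            for _ in range(n_iter):
                denoised = gaussian_filter(noisy, sigma=strength)  # stand-in denoiser
                residual_var = np.var(noisy - denoised)
                # if we removed less variance than the noise has, smooth harder (and vice versa)
                strength *= 1.0 + lr * np.tanh((sigma_noise**2 - residual_var) / sigma_noise**2)
                strength = float(np.clip(strength, 0.1, 10.0))
            return denoised, strength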

  11. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    PubMed

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2017-09-01

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from a camera raw image. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise of camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed to find similar blocks by the use of a type-2 fuzzy logic system (FLS). Then, these similar blocks are averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed algorithm effectively improves the performance of image denoising. Furthermore, the average performance of the proposed method is better than those of two state-of-the-art image denoising algorithms in subjective and objective measures.
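
    The variance stabilization step mentioned above is commonly implemented with the Anscombe transform, which maps Poisson-like counts to approximately unit-variance Gaussian data; this generic sketch is not the authors' pipeline:

        import numpy as np

        def anscombe(x):
            """Poisson counts -> approximately Gaussian with unit variance."""
            return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

        def inverse_anscombe(y):
            """Simple algebraic inverse (slightly biased; exact unbiased inverses also exist)."""
            return (y / 2.0) ** 2 - 3.0 / 8.0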

  12. Optimal wavelet denoising for smart biomonitor systems

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart systems promise many benefits for biomedical diagnostics. The ideal is a simple portable system that displays and interprets information from smart integrated probes or MEMS-based devices. In this paper, we discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds, and the problem of removing noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG, including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert transform to heart sound analysis are discussed.
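
    The decompose-threshold-reconstruct loop described here takes only a few lines with PyWavelets; the wavelet family, decomposition level, and threshold rule below are illustrative defaults, exactly the choices the study sets out to compare:

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=5):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # universal threshold, with sigma estimated from the finest detail coefficients
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
            coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(signal)]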

  13. Denoising by coupled partial differential equations and extracting phase by backpropagation neural networks for electronic speckle pattern interferometry.

    PubMed

    Tang, Chen; Lu, Wenjing; Chen, Song; Zhang, Zhen; Li, Botao; Wang, Wenping; Han, Lin

    2007-10-20

    We extend and refine previous work [Appl. Opt. 46, 2907 (2007)]. Combining the coupled nonlinear partial differential equations (PDEs) denoising model with the ordinary differential equations enhancement method, we propose a new denoising and enhancing model for electronic speckle pattern interferometry (ESPI) fringe patterns. Meanwhile, we propose a backpropagation neural network (BPNN) method to obtain unwrapped phase values based on a skeleton map instead of traditional interpolations. We test the introduced methods on computer-simulated speckle ESPI fringe patterns and an experimentally obtained fringe pattern, respectively. The experimental results show that the coupled nonlinear PDEs denoising model is capable of effectively removing noise, and the unwrapped phase values obtained by the BPNN method are much more accurate than those obtained by the well-known traditional interpolation. In addition, the accuracy of the BPNN method is adjustable by changing the parameters of the networks, such as the number of neurons.

  14. Pulsar signal denoising method based on Laplace distribution in no-subsampling wavelet packet domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo, Wang; Yanchao, Zhao; Xiangli, Wang

    2016-11-01

    In order to improve the denoising effect for the pulsar signal, a new denoising method is proposed in the no-subsampling wavelet packet domain based on a local Laplace prior model. First, we characterize the wavelet packet coefficient distribution of the noise-free pulsar signal and construct a Laplace probability density function model for the signal coefficients. Then, we estimate the denoised wavelet packet coefficients from the noisy coefficients using the maximum a posteriori criterion. Finally, we obtain the denoised pulsar signal through no-subsampling wavelet packet reconstruction of the estimated coefficients. The experimental results show that the proposed method performs better when calculating the pulsar time of arrival than the translation-invariant wavelet denoising method.
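
    For intuition, under a Laplace prior p(w) proportional to exp(-|w|/b) and additive Gaussian noise of variance sigma^2, the maximum a posteriori estimate of a coefficient reduces to soft thresholding with threshold sigma^2/b; this illustrative sketch is not the paper's exact local estimator:

        import numpy as np

        def laplace_map_shrink(noisy_coeffs, sigma, b):
            """MAP estimate under a Laplace(b) prior and N(0, sigma^2) noise:
            soft thresholding with threshold sigma**2 / b."""
            t = sigma**2 / b
            return np.sign(noisy_coeffs) * np.maximum(np.abs(noisy_coeffs) - t, 0.0)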

  15. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveals that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture-of-Poisson denoising method to remove the denoising artifacts without affecting image details, such as edges and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.

  16. A comparative study of new and current methods for dental micro-CT image denoising

    PubMed Central

    Lashgari, Mojtaba; Qin, Jie; Swain, Michael

    2016-01-01

    Objectives: The aim of the current study was to evaluate the application of two advanced noise-reduction algorithms for dental micro-CT images and to implement a comparative analysis of the performance of new and current denoising algorithms. Methods: Denoising was performed using Gaussian and median filters as the current filtering approaches and the block-matching and three-dimensional (BM3D) method and total variation method as the proposed new filtering techniques. The performance of the denoising methods was evaluated quantitatively using contrast-to-noise ratio (CNR), edge preserving index (EPI) and blurring indexes, as well as qualitatively using the double-stimulus continuous quality scale procedure. Results: The BM3D method had the best performance with regard to preservation of fine textural features (CNR-Edge), non-blurring of the whole image (blurring index), the clinical visual score in images with very fine features and the overall visual score for all types of images. On the other hand, the total variation method provided the best results with regard to smoothing of images in texture-free areas (CNR-Tex-free) and in preserving the edges and borders of image features (EPI). Conclusions: The BM3D method is the most reliable technique for denoising dental micro-CT images with very fine textural details, such as shallow enamel lesions, in which the preservation of the texture and fine features is of the greatest importance. On the other hand, the total variation method is the technique of choice for denoising images without very fine textural details in which the clinician or researcher is interested mainly in anatomical features and structural measurements. PMID:26764583
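
    For reference, the contrast-to-noise ratio used above is typically computed from a feature region and a background region as in this generic sketch (the paper's edge and texture-free variants differ mainly in where the regions are drawn):

        import numpy as np

        def cnr(image, roi_mask, background_mask):
            """Contrast-to-noise ratio between a region of interest and background."""
            roi, bg = image[roi_mask], image[background_mask]
            return abs(roi.mean() - bg.mean()) / bg.std()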

  17. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoised DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.

  18. Nonlocal maximum likelihood estimation method for denoising multiple-coil magnetic resonance images.

    PubMed

    Rajan, Jeny; Veraart, Jelle; Van Audekerke, Johan; Verhoye, Marleen; Sijbers, Jan

    2012-12-01

    Effective denoising is vital for proper analysis and accurate quantitative measurements from magnetic resonance (MR) images. Even though many methods have been proposed to denoise MR images, only a few deal with the estimation of the true signal from MR images acquired with phased-array coils. If the magnitude data from phased-array coils are reconstructed as the root sum of squares, in the absence of noise correlations and subsampling, the data are assumed to follow a noncentral χ distribution. However, when the k-space is subsampled to increase the acquisition speed (as in GRAPPA-like methods), the noise becomes spatially varying. In this note, we propose a method to denoise multiple-coil acquired MR images. Both the noncentral χ distribution and the spatially varying nature of the noise are taken into account in the proposed method. Experiments were conducted on both simulated and real data sets to validate and to demonstrate the effectiveness of the proposed method. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Denoising imaging polarimetry by adapted BM3D method.

    PubMed

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
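
    As background for the last claim, the degree of linear polarization follows from the Stokes parameters, which four polarizer-oriented intensity images (0°, 45°, 90°, 135°) determine directly; noise in any of these images propagates into the ratio, which is why denoising matters. This is the standard formula, not anything specific to PBM3D:

        import numpy as np

        def degree_of_linear_polarization(i0, i45, i90, i135):
            """Stokes parameters from four polarizer orientations, then DoLP."""
            s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
            s1 = i0 - i90
            s2 = i45 - i135
            return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)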

  20. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Because the degradation characteristic information is weak during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak-characteristic-information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, apply LMD and μ-SVD denoising to the noisy part of the signal, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
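
    A minimal sketch of the SVD denoising step with the order chosen by cumulative contribution rate (Hankel embedding of a 1-D signal; the embedding size and energy level are illustrative assumptions, and the paper's μ-SVD weighting and LMD stage are not reproduced):

        import numpy as np
        from scipy.linalg import hankel, svd

        def svd_denoise(signal, embed_dim=64, energy=0.9):
            """Embed the signal in a Hankel matrix (embed_dim < len(signal)), keep the
            leading singular values whose cumulative contribution rate reaches
            `energy` (the denoising order), and reconstruct."""
            H = hankel(signal[:embed_dim], signal[embed_dim - 1:])
            U, s, Vt = svd(H, full_matrices=False)
            k = int(np.searchsorted(np.cumsum(s) / s.sum(), energy)) + 1
            Hd = (U[:, :k] * s[:k]) @ Vt[:k]
            # average along anti-diagonals to map the matrix back to a 1-D signal
            return np.array([np.mean(Hd[::-1].diagonal(i))
                             for i in range(-Hd.shape[0] + 1, Hd.shape[1])])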

  1. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K.

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  2. Dictionary Pair Learning on Grassmann Manifolds for Image Denoising.

    PubMed

    Zeng, Xianhua; Bian, Wei; Liu, Wei; Shen, Jialie; Tao, Dacheng

    2015-11-01

    Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. Traditional patch-based and sparse coding-driven image denoising methods convert 2D image patches into 1D vectors for further processing, and thus inevitably break down the inherent 2D geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a 2D image denoising model, namely, the dictionary pair learning (DPL) model, and we design a corresponding algorithm called the DPL on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through a sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove the noise. Consequently, the DPLG algorithm not only preserves the inherent 2D geometric structure of natural images but also performs manifold smoothing in the 2D sparse coding space. Experimental evaluations on the benchmark images and Berkeley segmentation data sets demonstrate that the DPLG algorithm improves the structural similarity (SSIM) values of the perceptual visual quality of denoised images. Moreover, DPLG also produces peak signal-to-noise ratio values competitive with those of popular image denoising algorithms.

  3. Robust estimation approach for blind denoising.

    PubMed

    Rabie, Tamer

    2005-11-01

    This work develops a new robust statistical framework for blind image denoising. Robust statistics addresses the problem of estimation when the idealized assumptions about a system are occasionally violated. The contaminating noise in an image is considered as a violation of the assumption of spatial coherence of the image intensities and is treated as an outlier random variable. A denoised image is estimated by fitting a spatially coherent stationary image model to the available noisy data using a robust estimator-based regression method within an optimal-size adaptive window. The robust formulation aims at eliminating the noise outliers while preserving the edge structures in the restored image. Several examples demonstrating the effectiveness of this robust denoising technique are reported and a comparison with other standard denoising filters is presented.

  4. Speckle Noise Reduction in Optical Coherence Tomography Using Two-dimensional Curvelet-based Dictionary Learning.

    PubMed

    Esmaeili, Mahdad; Dehnavi, Alireza Mehri; Rabbani, Hossein; Hajizadeh, Fedra

    2017-01-01

    The process of interpretation of high-speed optical coherence tomography (OCT) images is restricted due to the large speckle noise. To address this problem, this paper proposes a new method using a two-dimensional (2D) curvelet-based K-SVD algorithm for speckle noise reduction and contrast enhancement of intra-retinal layers of 2D spectral-domain OCT images. For this purpose, we take the curvelet transform of the noisy image. In the next step, noisy sub-bands of different scales and rotations are separately thresholded with an adaptive data-driven thresholding method; then, each thresholded sub-band is denoised based on K-SVD dictionary learning, with a variable-size initial dictionary dependent on the size of the curvelet coefficient matrix in each sub-band. We also modify each coefficient matrix to enhance intra-retinal layers with noise suppression at the same time. We demonstrate the ability of the proposed algorithm in speckle noise reduction on 100 publicly available OCT B-scans with and without non-neovascular age-related macular degeneration (AMD); improvements of the contrast-to-noise ratio from 1.27 to 5.12 and of the mean-to-standard-deviation ratio from 3.20 to 14.41 are obtained.

  5. Total Variation Denoising and Support Localization of the Gradient

    NASA Astrophysics Data System (ADS)

    Chambolle, A.; Duval, V.; Peyré, G.; Poon, C.

    2016-10-01

    This paper describes the geometrical properties of the solutions to the total variation denoising method. A folklore statement is that this method is able to restore sharp edges but, at the same time, might introduce some staircasing (i.e. “fake” edges) in flat areas. Quite surprisingly, numerical evidence aside, almost no theoretical results are available to back up these claims. The first contribution of this paper is a precise mathematical definition of the “extended support” (associated to the noise-free image) of TV denoising. This is intuitively the region which is unstable and will suffer from the staircasing effect. Our main result shows that the TV denoising method indeed restores a piecewise constant image outside a small tube surrounding the extended support. Furthermore, the radius of this tube shrinks toward zero as the noise level vanishes, and in some cases an upper bound on the convergence rate is given.
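
    The TV model analyzed in the paper is available off the shelf, and the trade-off the authors formalize is easy to reproduce: a larger regularization weight denoises more strongly while making the staircasing in flat areas more visible. A short sketch using scikit-image (parameter values are illustrative):

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_tv_chambolle

        image = img_as_float(data.camera())
        rng = np.random.default_rng(0)
        noisy = image + 0.1 * rng.standard_normal(image.shape)

        mild = denoise_tv_chambolle(noisy, weight=0.05)
        strong = denoise_tv_chambolle(noisy, weight=0.3)   # edges stay sharp, staircasing appears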

  6. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, thereby making this algorithm suitable for separating pure ECG signal and noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated on the basis of the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on a synthetic ECG signal generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better on denoising and QRS detection compared with major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise via bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm can obtain better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.
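
    A bare-bones projected Landweber iteration with a soft-thresholding projection, the backbone that QBCS-EPL augments with its entropy-aware threshold model; this generic numpy sketch assumes an explicit measurement matrix A and an illustrative fixed threshold:

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def projected_landweber(y, A, n_iter=50, step=1.0, thresh=0.05):
            """Minimize ||y - A x||^2 with a sparsity-promoting projection each step.
            For convergence, step should be below 2 / ||A||^2."""
            x = A.T @ y
            for _ in range(n_iter):
                x = x + step * (A.T @ (y - A @ x))   # Landweber (gradient) update
                x = soft(x, thresh)                  # projection: soft thresholding
            return x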

  8. Kmeans-ICA based automatic method for ocular artifacts removal in a motorimagery classification.

    PubMed

    Bou Assi, Elie; Rihana, Sandy; Sawan, Mohamad

    2014-01-01

    Electroencephalogram (EEG) recordings are used as inputs of a motor imagery based BCI system. Eye blinks contaminate the spectral content of the EEG signals. Independent component analysis (ICA) has already been proven capable of removing these artifacts, whose frequency band overlaps with the EEG of interest. However, previously developed ICA methods use a reference lead such as the electrooculogram (EOG) to identify the ocular artifact components. In this study, artifactual components were identified using adaptive thresholding by means of K-means clustering. The denoised EEG signals were fed into a feature extraction algorithm extracting the band power, the coherence and the phase locking value, and then into a linear discriminant analysis classifier for motor imagery classification.
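
    A schematic of such a reference-free pipeline: decompose the EEG with FastICA, cluster a simple per-component feature with k-means to flag the artifact cluster, zero those components, and back-project. The kurtosis feature is an illustrative assumption; the paper clusters its own component statistics:

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.cluster import KMeans
        from scipy.stats import kurtosis

        def remove_ocular_components(eeg, n_components=8):
            """eeg: array of shape (n_samples, n_channels)."""
            ica = FastICA(n_components=n_components, random_state=0)
            sources = ica.fit_transform(eeg)                 # (n_samples, n_components)
            # blink components tend to be heavy-tailed: cluster on kurtosis
            feats = kurtosis(sources, axis=0).reshape(-1, 1)
            labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
            artifact_cluster = labels[np.argmax(feats)]      # cluster of the most kurtotic source
            sources[:, labels == artifact_cluster] = 0.0
            return ica.inverse_transform(sources)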

  9. Hyperspectral image denoising and anomaly detection based on low-rank and sparse representations

    NASA Astrophysics Data System (ADS)

    Zhuang, Lina; Gao, Lianru; Zhang, Bing; Bioucas-Dias, José M.

    2017-10-01

    The very high spectral resolution of hyperspectral images (HSIs) enables the identification of materials with subtle differences and the extraction of subpixel information. However, an increase in spectral resolution often implies an increase in the noise linked with the image formation process. This degradation mechanism limits the quality of extracted information and its potential applications. Since HSIs represent natural scenes and their spectral channels are highly correlated, they are characterized by a high level of self-similarity and are well approximated by low-rank representations. These characteristics underlie the state-of-the-art in HSI denoising. However, in the presence of rare pixels, the denoising performance of those methods is not optimal and, in addition, may compromise the future detection of those pixels. To address these hurdles, we introduce RhyDe (Robust hyperspectral Denoising), a powerful HSI denoiser, which implements explicit low-rank representation, promotes self-similarity, and, by using a form of collaborative sparsity, preserves rare pixels. The denoising and detection effectiveness of the proposed robust HSI denoiser is illustrated using semi-real data.

  10. Image denoising based on noise detection

    NASA Astrophysics Data System (ADS)

    Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen

    2018-03-01

    Because of the noise points in an image, any denoising operation will also alter the original information of non-noise pixels. To avoid this, a noise detection algorithm based on fractional calculus is proposed for denoising in this paper. First, the image is convolved to obtain directional gradient masks. Then, the mean gray level is calculated to obtain the gradient detection maps. Next, a logical product is taken to acquire the noise position image. Comparisons of visual effect and evaluation parameters after processing show that the proposed detection-based denoising algorithm outperforms traditional methods in both subjective and objective terms.

  11. The use of wavelet filters for reducing noise in posterior fossa Computed Tomography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pita-Machado, Reinado; Perez-Diaz, Marlen, E-mail: mperez@uclv.edu.cu; Lorenzo-Ginori, Juan V., E-mail: mperez@uclv.edu.cu

    Wavelet-transform-based de-noising, like wavelet shrinkage, gives good results in CT and affects the spatial resolution very little. Some applications are reconstruction methods, while others are a posteriori de-noising methods. De-noising after reconstruction is very difficult because the noise is non-stationary and has an unknown distribution. Methods that work in the sinogram space do not have this problem, because they always work over a known noise distribution at that point. On the other hand, the posterior fossa in a head CT is a very complex region for physicians, because it is commonly affected by artifacts and noise which are not eliminated during the reconstruction procedure. This can lead to false positive evaluations. The purpose of the present work is to compare different wavelet shrinkage de-noising filters applied in the sinogram space, particularly for images of the posterior fossa within CT scans. This work describes an experimental search for the best wavelets to reduce Poisson noise in computed tomography (CT) scans. Results showed that de-noising with wavelet filters improved the quality of the posterior fossa region in terms of an increased CNR, without noticeable structural distortions.

  12. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  13. Combination of oriented partial differential equation and shearlet transform for denoising in electronic speckle pattern interferometry fringe patterns.

    PubMed

    Xu, Wenjun; Tang, Chen; Gu, Fan; Cheng, Jiajia

    2017-04-01

    Removing the massive speckle noise in electronic speckle pattern interferometry (ESPI) fringe patterns is a key step. Among spatial-domain filtering methods, oriented partial differential equations have been demonstrated to be a powerful tool; among transform-domain filtering methods, the shearlet transform is a state-of-the-art method. In this paper, we propose a filtering method for denoising ESPI fringe patterns that combines the second-order oriented partial differential equation (SOOPDE) and the shearlet transform, named SOOPDE-Shearlet. Here, the shearlet transform is introduced into ESPI fringe pattern denoising for the first time. This combination takes advantage of the fact that the spatial-domain filtering method SOOPDE and the transform-domain filtering method shearlet transform benefit from each other. We test the proposed SOOPDE-Shearlet on five experimentally obtained ESPI fringe patterns with poor quality and compare our method with SOOPDE, the shearlet transform, windowed Fourier filtering (WFF), and coherence-enhancing diffusion (CEDPDE). Among them, WFF and CEDPDE are the state-of-the-art methods for ESPI fringe pattern denoising in the transform domain and spatial domain, respectively. The experimental results demonstrate the good performance of the proposed SOOPDE-Shearlet.

  14. A novel partial volume effects correction technique integrating deconvolution associated with denoising within an iterative PET image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe

    2015-02-15

    Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step limited the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.

  15. A hybrid fault diagnosis method based on second generation wavelet de-noising and local mean decomposition for rotating machinery.

    PubMed

    Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun

    2016-03-01

    In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm of the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method performs better than the standard LMD method, achieving higher SNR and faster convergence. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  16. A Universal De-Noising Algorithm for Ground-Based Lidar Signal

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide the atmospheric vertical profile. However, the appearance of noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics and limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm based on signal segmentation and reconstruction is proposed to enhance the SNR of a ground-based lidar signal. The signal segmentation, the keystone of the algorithm, divides the lidar signal into three parts, which are processed by different de-noising methods according to their own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of simulated signal tests and a real dual field-of-view lidar signal show the feasibility of the universal de-noising algorithm.

  17. Hyperspectral Image Denoising Using a Nonlocal Spectral Spatial Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Li, D.; Xu, L.; Peng, J.; Ma, J.

    2018-04-01

    Hyperspectral image (HSI) denoising is a critical research area in image processing due to its importance in improving the quality of HSIs, since noise negatively impacts object detection, classification, and other downstream tasks. In this paper, we develop a noise reduction method for hyperspectral imagery based on principal component analysis (PCA), which relies on the assumption that noise can be removed by retaining only the leading principal components. The main contribution of this paper is to introduce the spectral-spatial structure and nonlocal similarity of HSIs into the PCA denoising model. PCA with spectral-spatial structure can exploit the spectral and spatial correlation of an HSI by using 3D blocks instead of 2D patches. Nonlocal similarity refers to the similarity between the referenced pixel and other pixels in a nonlocal area, where the Mahalanobis distance is used to estimate the spatial-spectral similarity by calculating distances between 3D blocks. The proposed method is tested on both simulated and real hyperspectral images; the results demonstrate that it is superior to several other popular HSI denoising methods.
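
    The baseline PCA step that the paper extends can be sketched in a few lines (flatten the cube to pixels x bands, keep the leading components, reconstruct); the paper's contribution of nonlocal 3D-block grouping is not reproduced here, and the number of retained components is an illustrative assumption:

        import numpy as np

        def pca_denoise_hsi(cube, n_keep=10):
            """cube: (rows, cols, bands). Keep the leading principal components."""
            rows, cols, bands = cube.shape
            X = cube.reshape(-1, bands).astype(float)
            mu = X.mean(axis=0)
            Xc = X - mu
            # eigen-decomposition of the band covariance matrix
            cov = Xc.T @ Xc / (Xc.shape[0] - 1)
            w, V = np.linalg.eigh(cov)                 # eigenvalues in ascending order
            P = V[:, -n_keep:]                         # leading components
            Xd = Xc @ P @ P.T + mu
            return Xd.reshape(rows, cols, bands)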

  18. Online Denoising Based on the Second-Order Adaptive Statistics Model.

    PubMed

    Yi, Sheng-Lun; Jin, Xue-Bo; Su, Ting-Li; Tang, Zhen-Yun; Wang, Fa-Fa; Xiang, Na; Kong, Jian-Lei

    2017-07-20

    Online denoising is motivated by real-time applications in industrial processes, where the data must be utilizable soon after it is collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method was proposed to process practical measurement data with colored noise, with the characteristics of the colored noise captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on the second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation was implemented via the Kalman filter in a recursive way, and online operation was thereby attained. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. Results show that the proposed method not only deals with signals with colored noise, but also achieves a tradeoff between efficiency and accuracy.
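
    The Yule-Walker update at the heart of the closed loop can be sketched with plain numpy for an AR(p) noise model; this is the generic estimator, not the paper's full second-order adaptive statistics filter:

        import numpy as np

        def yule_walker(x, order=2):
            """Estimate AR(order) coefficients from the sample autocorrelation."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
            R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])  # Toeplitz
            a = np.linalg.solve(R, r[1:])              # AR coefficients
            sigma2 = r[0] - np.dot(a, r[1:])           # innovation variance
            return a, sigma2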

  19. Non-local means denoising of dynamic PET images.

    PubMed

    Dutta, Joyita; Leahy, Richard M; Li, Quanzheng

    2013-01-01

    Dynamic positron emission tomography (PET), which reveals information about both the spatial distribution and temporal kinetics of a radiotracer, enables quantitative interpretation of PET data. Model-based interpretation of dynamic PET images by means of parametric fitting, however, is often a challenging task due to high levels of noise, thus necessitating a denoising step. The objective of this paper is to develop and characterize a denoising framework for dynamic PET based on non-local means (NLM). NLM denoising computes weighted averages of voxel intensities assigning larger weights to voxels that are similar to a given voxel in terms of their local neighborhoods or patches. We introduce three key modifications to tailor the original NLM framework to dynamic PET. Firstly, we derive similarities from less noisy later time points in a typical PET acquisition to denoise the entire time series. Secondly, we use spatiotemporal patches for robust similarity computation. Finally, we use a spatially varying smoothing parameter based on a local variance approximation over each spatiotemporal patch. To assess the performance of our denoising technique, we performed a realistic simulation on a dynamic digital phantom based on the Digimouse atlas. For experimental validation, we denoised dynamic PET images from a mouse study and a hepatocellular carcinoma patient study. We compared the performance of NLM denoising with four other denoising approaches - Gaussian filtering, PCA, HYPR, and conventional NLM based on spatial patches. The simulation study revealed significant improvement in bias-variance performance achieved using our NLM technique relative to all the other methods. The experimental data analysis revealed that our technique leads to clear improvement in contrast-to-noise ratio in Patlak parametric images generated from denoised preclinical and clinical dynamic images, indicating its ability to preserve image contrast and high intensity details while lowering the background noise variance.
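
    For a feel of the baseline being extended, scikit-image ships a patch-based NLM implementation; the paper's modifications (spatiotemporal patches, similarities from late frames, a spatially varying smoothing parameter) would sit on top of a call like this, where the patch sizes are illustrative defaults:

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def nlm_denoise_frame(frame):
            """Denoise a single 2D frame with non-local means."""
            sigma = float(np.mean(estimate_sigma(frame)))
            return denoise_nl_means(frame, patch_size=5, patch_distance=6,
                                    h=0.8 * sigma, fast_mode=True)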

  1. A Wiener-Wavelet-Based filter for de-noising satellite soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Massari, Christian; Brocca, Luca; Ciabatta, Luca; Moramarco, Tommaso; Su, Chun-Hsu; Ryu, Dongryeol; Wagner, Wolfgang

    2014-05-01

    The reduction of noise in microwave satellite soil moisture (SM) retrievals is of paramount importance for practical applications, especially those associated with the study of climate change, droughts, floods and other related hydrological processes. So far, Fourier-based methods have been used for de-noising satellite SM retrievals by filtering either the observed emissivity time series (Du, 2012) or the retrieved SM observations (Su et al., 2013). This contribution introduces an alternative approach based on a Wiener-Wavelet-Based filtering (WWB) technique, which uses the entropy-based wavelet de-noising method developed by Sang et al. (2009) to design both a causal and a non-causal version of the filter. WWB is used as a post-retrieval processing tool to enhance the quality of observations derived from i) the Advanced Microwave Scanning Radiometer for the Earth observing system (AMSR-E), ii) the Advanced SCATterometer (ASCAT), and iii) the Soil Moisture and Ocean Salinity (SMOS) satellite. The method is tested on three pilot sites located in Spain (Remedhus Network), Greece (Hydrological Observatory of Athens) and Australia (Oznet network). Different quantitative criteria are used to judge the goodness of the de-noising technique. Results show that WWB i) is able to improve both the correlation and the root mean squared differences between satellite retrievals and in situ soil moisture observations, and ii) effectively separates random noise from deterministic components of the retrieved signals. Moreover, the use of WWB de-noised data in place of raw observations within a hydrological application confirms the usefulness of the proposed filtering technique. References: Du, J. (2012), A method to improve satellite soil moisture retrievals based on Fourier analysis, Geophys. Res. Lett., 39, L15404, doi:10.1029/2012GL052435. Su, C.-H., D. Ryu, A. W. Western, and W. Wagner (2013), De-noising of passive and active microwave satellite soil moisture time series, Geophys. Res. Lett., 40, 3624-3630, doi:10.1002/grl.50695. Sang, Y.-F., D. Wang, J.-C. Wu, Q.-P. Zhu, and L. Wang (2009), Entropy-based wavelet de-noising method for time series analysis, Entropy, 11, 1123-1148, doi:10.3390/e11041123.

  2. E-Nose Vapor Identification Based on Dempster-Shafer Fusion of Multiple Classifiers

    NASA Technical Reports Server (NTRS)

    Li, Winston; Leung, Henry; Kwan, Chiman; Linnell, Bruce R.

    2005-01-01

    Electronic nose (e-nose) vapor identification is an efficient approach to monitor air contaminants in space stations and shuttles in order to ensure the health and safety of astronauts. Data preprocessing (measurement denoising and feature extraction) and pattern classification are important components of an e-nose system. In this paper, a wavelet-based denoising method is applied to filter the noisy sensor measurements. Transient-state features are then extracted from the denoised sensor measurements and are used to train multiple classifiers such as multi-layer perceptrons (MLP), support vector machines (SVM), k nearest neighbor (KNN), and Parzen classifiers. The Dempster-Shafer (DS) technique is used at the end to fuse the results of the multiple classifiers into the final classification. Experimental analysis based on real vapor data shows that the wavelet denoising method can remove both random noise and outliers successfully, and that the classification rate can be improved by using classifier fusion.

  3. Wavelet Denoising of Radio Observations of Rotating Radio Transients (RRATs): Improved Timing Parameters for Eight RRATs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, M.; Schmid, N. A.; Cao, Z.-C.

    Rotating radio transients (RRATs) are sporadically emitting pulsars detectable only through searches for single pulses. While over 100 RRATs have been detected, only a small fraction (roughly 20%) have phase-connected timing solutions, which are critical for determining how they relate to other neutron star populations. Detecting more pulses in order to achieve solutions is key to understanding their physical nature. Astronomical signals collected by radio telescopes contain noise from many sources, making the detection of weak pulses difficult. Applying a denoising method to raw time series prior to performing a single-pulse search typically leads to a more accurate estimation of their times of arrival (TOAs). Taking into account some features of RRAT pulses and noise, we present a denoising method based on wavelet data analysis, an image-processing technique. Assuming that the spin period of an RRAT is known, we estimate the frequency spectrum components contributing to the composition of RRAT pulses. This allows us to suppress the noise which contributes to other frequencies. We apply the wavelet denoising method, including selective wavelet reconstruction and wavelet shrinkage, to the de-dispersed time series of eight RRATs with existing timing solutions. The signal-to-noise ratio (S/N) of most pulses is improved after wavelet denoising. Compared to the conventional approach, we measure 12%–69% more TOAs for the eight RRATs. The new timing solutions for the eight RRATs show 16%–90% smaller estimation error for most parameters. Thus, we conclude that wavelet analysis is an effective tool for denoising RRAT signals.

  4. Video denoising using low rank tensor decomposition

    NASA Astrophysics Data System (ADS)

    Gui, Lihua; Cui, Gaochao; Zhao, Qibin; Wang, Dongsheng; Cichocki, Andrzej; Cao, Jianting

    2017-03-01

    Reducing noise in a video sequence is of vital importance in many real-world applications. One popular method is block-matching collaborative filtering. However, the main drawback of this method is that the noise standard deviation for the whole video sequence must be known in advance. In this paper, we present a tensor-based denoising framework that considers 3D patches instead of 2D patches. By collecting similar 3D patches non-locally, we employ low-rank tensor decomposition for collaborative filtering. Since we specify a non-informative prior over the noise precision parameter, the noise variance can be inferred automatically from the observed video data. Therefore, our method is more practical, as it does not require knowing the noise variance. The experiments on video denoising demonstrate the effectiveness of our proposed method.

  5. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.

  6. Denoising of polychromatic CT images based on their own noise properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Ji Hye; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr

    Purpose: Because of high diagnostic accuracy and fast scan time, computed tomography (CT) has been widely used in various clinical applications. Since the CT scan introduces radiation exposure to patients, however, dose reduction has recently been recognized as an important issue in CT imaging. Low-dose CT causes an increase of noise in the image and thereby deteriorates the accuracy of diagnosis. In this paper, the authors develop an efficient denoising algorithm for low-dose CT images obtained using a polychromatic x-ray source. The algorithm is based on two steps: (i) estimation of space-variant noise statistics, which are uniquely determined according to the system geometry and scanned object, and (ii) a subsequent novel conversion of the estimated noise to Gaussian noise so that an existing high-performance Gaussian noise filtering algorithm can be directly applied to CT images with non-Gaussian noise. Methods: For efficient polychromatic CT image denoising, the authors first reconstruct an image with the iterative maximum-likelihood polychromatic algorithm for CT to alleviate the beam-hardening problem. They then estimate the space-variant noise variance distribution on the image domain. Since there are many high-performance denoising algorithms available for Gaussian noise, image denoising can become much more efficient if they can be used. Hence, the authors propose a novel conversion scheme to transform the estimated space-variant noise to near-Gaussian noise. In the suggested scheme, the authors first convert the image so that its mean and variance have a linear relationship, and then produce a Gaussian image via a variance stabilizing transform. The authors then apply a block-matching 4D algorithm that is optimized for noise reduction of the Gaussian image, and reconvert the result to obtain the final denoised image. To examine the performance of the proposed method, an XCAT phantom simulation and a physical phantom experiment were conducted. Results: Both simulation and experimental results show that, unlike the existing denoising algorithms, the proposed algorithm can effectively reduce the noise over the whole region of CT images while preventing degradation of image resolution. Conclusions: To effectively denoise polychromatic low-dose CT images, a novel denoising algorithm is proposed. Because this algorithm is based on the noise statistics of a reconstructed polychromatic CT image, the spatially varying noise on the image is effectively reduced so that the denoised image has homogeneous quality over the image domain. Through a simulation and a real experiment, it is verified that the proposed algorithm can deliver considerably better performance than the existing denoising algorithms.

  7. Convolutional auto-encoder for image denoising of ultra-low-dose CT.

    PubMed

    Nishio, Mizuho; Nagashima, Chihiro; Hirabayashi, Saori; Ohnishi, Akinori; Sasaki, Kaori; Sagawa, Tomoyuki; Hamada, Masayuki; Yamashita, Tatsuo

    2017-08-01

    The purpose of this study was to validate a patch-based image denoising method for ultra-low-dose CT images. A neural network with a convolutional auto-encoder, trained on pairs of standard-dose and ultra-low-dose CT image patches, was used for image denoising. The performance of the proposed method was measured by using a chest phantom. Standard-dose and ultra-low-dose CT images of the chest phantom were acquired; the tube currents for standard-dose and ultra-low-dose CT were 300 and 10 mA, respectively. Ultra-low-dose CT images were denoised with our proposed neural network method, with large-scale nonlocal means, and with block-matching and 3D filtering. Five radiologists and three technologists assessed the denoised ultra-low-dose CT images visually and recorded their subjective impressions of streak artifacts, noise other than streak artifacts, visualization of pulmonary vessels, and overall image quality. For the streak artifacts, noise other than streak artifacts, and visualization of pulmonary vessels, the results of our proposed method were statistically better than those of block-matching and 3D filtering (p-values < 0.05). On the other hand, the difference in the overall image quality between our proposed method and block-matching and 3D filtering was not statistically significant (p-value = 0.07272). The p-values obtained between our proposed method and large-scale nonlocal means were all less than 0.05. The neural network with convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal means and block-matching and 3D filtering.
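
    A minimal convolutional auto-encoder of the kind described, as a PyTorch sketch; the layer sizes, patch size, and training step below are illustrative assumptions rather than the paper's exact architecture:

        import torch
        import torch.nn as nn

        class DenoisingCAE(nn.Module):
            def __init__(self):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(32, 32, 3, stride=2, padding=1, output_padding=1),
                    nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = DenoisingCAE()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        # one training step on a batch of (ultra-low-dose, standard-dose) patch pairs;
        # random tensors stand in for real CT patches here
        low_dose = torch.randn(8, 1, 64, 64)
        standard = torch.randn(8, 1, 64, 64)
        loss = loss_fn(model(low_dose), standard)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()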

  8. HARDI DATA DENOISING USING VECTORIAL TOTAL VARIATION AND LOGARITHMIC BARRIER

    PubMed Central

    Kim, Yunho; Thompson, Paul M.; Vese, Luminita A.

    2010-01-01

In this work, we wish to denoise HARDI (High Angular Resolution Diffusion Imaging) data arising in medical brain imaging. Diffusion imaging is a relatively new and powerful method to measure the three-dimensional profile of water diffusion at each point in the brain. These images can be used to reconstruct fiber directions and pathways in the living brain, providing detailed maps of fiber integrity and connectivity. HARDI data is a powerful new extension of diffusion imaging, which goes beyond the diffusion tensor imaging (DTI) model: mathematically, intensity data is given at every voxel and at any direction on the sphere. Unfortunately, HARDI data is usually highly contaminated with noise, depending on the b-value which is a tuning parameter pre-selected to collect the data. Larger b-values help to collect more accurate information in terms of measuring diffusivity, but more noise is generated by many factors as well. So large b-values are preferred, if we can satisfactorily reduce the noise without losing the data structure. Here we propose two variational methods to denoise HARDI data. The first one directly denoises the collected data S, while the second one denoises the so-called sADC (spherical Apparent Diffusion Coefficient), a field of radial functions derived from the data. These two quantities are related by an equation of the form S = S0 exp(−b · sADC) (in the noise-free case). By applying these two different models, we will be able to determine which quantity will most accurately preserve data structure after denoising. The theoretical analysis of the proposed models is presented, together with experimental results and comparisons for denoising synthetic and real HARDI data. PMID:20802839
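
    The forward model and its inversion can be illustrated directly; the following numpy snippet assumes the noise-free relation S = S0 exp(−b · sADC) with illustrative parameter values:

    ```python
    import numpy as np

    b = 3000.0           # b-value (s/mm^2), acquisition parameter
    S0 = 1.0             # non-diffusion-weighted baseline signal
    sADC_true = 0.7e-3   # spherical ADC along one direction (mm^2/s)

    # Forward model (noise-free): S = S0 * exp(-b * sADC)
    S = S0 * np.exp(-b * sADC_true)

    # Inverting the model recovers the sADC field from the measured signal,
    # which is the quantity the second variational model denoises.
    sADC_est = -np.log(S / S0) / b
    assert np.isclose(sADC_est, sADC_true)
    ```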

  9. A diffusion-matched principal component analysis (DM-PCA) based two-channel denoising procedure for high-resolution diffusion-weighted MRI

    PubMed Central

    Chang, Hing-Chiu; Bilgin, Ali; Bernstein, Adam; Trouard, Theodore P.

    2018-01-01

    Over the past several years, significant efforts have been made to improve the spatial resolution of diffusion-weighted imaging (DWI), aiming at better detecting subtle lesions and more reliably resolving white-matter fiber tracts. A major concern with high-resolution DWI is the limited signal-to-noise ratio (SNR), which may significantly offset the advantages of high spatial resolution. Although the SNR of DWI data can be improved by denoising in post-processing, existing denoising procedures may potentially reduce the anatomic resolvability of high-resolution imaging data. Additionally, non-Gaussian noise induced signal bias in low-SNR DWI data may not always be corrected with existing denoising approaches. Here we report an improved denoising procedure, termed diffusion-matched principal component analysis (DM-PCA), which comprises 1) identifying a group of (not necessarily neighboring) voxels that demonstrate very similar magnitude signal variation patterns along the diffusion dimension, 2) correcting low-frequency phase variations in complex-valued DWI data, 3) performing PCA along the diffusion dimension for real- and imaginary-components (in two separate channels) of phase-corrected DWI voxels with matched diffusion properties, 4) suppressing the noisy PCA components in real- and imaginary-components, separately, of phase-corrected DWI data, and 5) combining real- and imaginary-components of denoised DWI data. Our data show that the new two-channel (i.e., for real- and imaginary-components) DM-PCA denoising procedure performs reliably without noticeably compromising anatomic resolvability. Non-Gaussian noise induced signal bias could also be reduced with the new denoising method. The DM-PCA based denoising procedure should prove highly valuable for high-resolution DWI studies in research and clinical uses. PMID:29694400
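
    Steps 3 and 4, PCA along the diffusion dimension with suppression of noisy components, can be sketched as follows for a single group of diffusion-matched voxels; the grouping, phase correction, and two-channel split are omitted, and the component cutoff is an illustrative assumption:

    ```python
    import numpy as np

    def pca_denoise(group, n_keep):
        """Denoise a (n_voxels, n_diffusion_dirs) group of matched voxels.

        PCA is taken along the diffusion dimension; only the `n_keep`
        highest-variance components are kept (noisy components suppressed).
        """
        mean = group.mean(axis=0, keepdims=True)
        centered = group - mean
        # SVD of the centered data: rows are voxels, columns diffusion dirs.
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        s[n_keep:] = 0.0                    # zero out noisy PCA components
        return U @ np.diag(s) @ Vt + mean

    group = np.random.randn(200, 64)        # 200 matched voxels, 64 directions
    denoised = pca_denoise(group, n_keep=5)
    ```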

  10. MLESAC Based Localization of Needle Insertion Using 2D Ultrasound Images

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Gao, Dedong; Wang, Shan; Zhanwen, A.

    2018-04-01

In 2D ultrasound images of ultrasound-guided percutaneous needle insertion, it is difficult to determine the positions of the needle axis and tip because of artifacts and other noise. In this work the speckle is regarded as the noise of an ultrasound image, and a novel algorithm is presented to detect the needle in a 2D ultrasound image. Firstly, the wavelet soft thresholding technique based on the BayesShrink rule is used to denoise the speckle of the ultrasound image. Secondly, we add Otsu's thresholding method and morphologic operations to pre-process the ultrasound image. Finally, the needle is localized in the 2D ultrasound image based on the maximum likelihood estimation sample consensus (MLESAC) algorithm. The experimental results show that the proposed algorithm provides valid estimates of the position of the needle axis and tip in ultrasound images. This work shows promise for use in path planning and robot-assisted needle insertion procedures.
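
    A rough sketch of the denoising and pre-processing stages, using PyWavelets and scikit-image, is given below; the wavelet, decomposition level, and morphological footprint are illustrative choices, not those of the paper:

    ```python
    import numpy as np
    import pywt
    from skimage.filters import threshold_otsu
    from skimage.morphology import binary_closing, square

    def bayes_shrink_denoise(img):
        """Wavelet soft thresholding with a BayesShrink-style threshold."""
        coeffs = pywt.wavedec2(img, 'db4', level=2)
        # Noise std estimated from the finest diagonal subband (median rule).
        sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        out = [coeffs[0]]
        for detail in coeffs[1:]:
            shrunk = []
            for band in detail:
                sigma_x = np.sqrt(max(band.var() - sigma_n**2, 1e-12))
                T = sigma_n**2 / sigma_x        # BayesShrink threshold
                shrunk.append(pywt.threshold(band, T, mode='soft'))
            out.append(tuple(shrunk))
        rec = pywt.waverec2(out, 'db4')
        return rec[:img.shape[0], :img.shape[1]]

    img = np.random.rand(128, 128)             # stand-in ultrasound frame
    den = bayes_shrink_denoise(img)
    # Otsu thresholding plus a morphological operation, as in the pipeline.
    mask = binary_closing(den > threshold_otsu(den), square(3))
    ```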

  11. Adaptive regularization of the NL-means: application to image and video denoising.

    PubMed

    Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François

    2014-08-01

Image denoising is a central problem in image processing and it is often a necessary step prior to higher-level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces the noise significantly while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrast areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation minimization (the Rudin, Osher, and Fatemi model), which tends to restore regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
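
    The core NL-means weighted average can be written compactly; the following sketch estimates a single interior pixel (no boundary handling), and the patch size, search window, and filtering parameter h are illustrative:

    ```python
    import numpy as np

    def nl_means_pixel(img, i, j, patch=3, search=7, h=0.1):
        """NL-means estimate of one pixel: a weighted average of pixels whose
        surrounding patches resemble the patch around (i, j)."""
        r, s = patch // 2, search // 2
        ref = img[i - r:i + r + 1, j - r:j + r + 1]
        num, den = 0.0, 0.0
        for k in range(i - s, i + s + 1):
            for l in range(j - s, j + s + 1):
                cand = img[k - r:k + r + 1, l - r:l + r + 1]
                d2 = np.mean((ref - cand) ** 2)   # patch distance
                w = np.exp(-d2 / h**2)            # similarity weight
                num += w * img[k, l]
                den += w
        return num / den

    img = np.random.rand(64, 64)
    print(nl_means_pixel(img, 32, 32))
    ```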

  12. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines.

    PubMed

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-12-13

In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small pipeline leaks. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than support vector machine (SVM) and back propagation neural network (BP) methods.

  13. A Small Leak Detection Method Based on VMD Adaptive De-Noising and Ambiguity Correlation Classification Intended for Natural Gas Pipelines

    PubMed Central

    Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo

    2016-01-01

In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised component reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect small pipeline leaks. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that the proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performance than support vector machine (SVM) and back propagation neural network (BP) methods. PMID:27983577

  14. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

Diffusion Weighted Images (DWI) normally show a low Signal-to-Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. This new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889

  15. Simultaneous multi-component seismic denoising and reconstruction via K-SVD

    NASA Astrophysics Data System (ADS)

    Hou, Sian; Zhang, Feng; Li, Xiangyang; Zhao, Qiang; Dai, Hengchang

    2018-06-01

Data denoising and reconstruction play an increasingly significant role in seismic prospecting for their value in enhancing effective signals, dealing with surface obstacles and reducing acquisition costs. In this paper, we propose a novel method to denoise and reconstruct multicomponent seismic data simultaneously. This method lies within the framework of machine learning, and its key points are defining a suitable weight function and a modified inner product operator. The purpose of these two elements is to perform missing-data learning when the random noise deviation is unknown, and to build a mathematical relationship between the components so that all the information of the multi-component data is incorporated. Two examples, using synthetic and real multicomponent data, demonstrate that the new method is a feasible alternative for multi-component seismic data processing.

  16. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances under different input conditions were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original images without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss and retained noise characteristics, and did not deliver the best noise reduction performance. Conversely, the image fusion method applied under SRAD-original conditions preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance for the ultrasound images. The best denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.

  17. Deep RNNs for video denoising

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Song, Li; Yang, Xiaokang

    2016-09-01

Video denoising can be described as the problem of mapping from a specific length of noisy frames to a clean one. We propose a deep architecture based on the Recurrent Neural Network (RNN) for video denoising. The model learns a patch-based end-to-end mapping between the clean and noisy video sequences. It takes the corrupted video sequences as the input and outputs the clean ones. Our deep network, which we refer to as deep Recurrent Neural Networks (deep RNNs or DRNNs), stacks RNN layers where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information through the temporal domain and benefits video denoising, (ii) the deep architecture has large enough capacity to express the mapping relation between corrupted videos as input and clean videos as output, and (iii) the model generalizes to learn different mappings from videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.

  18. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    PubMed

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green, and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including those sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.

  19. Patch-based anisotropic diffusion scheme for fluorescence diffuse optical tomography--part 2: image reconstruction.

    PubMed

    Correia, Teresa; Koch, Maximilian; Ale, Angelique; Ntziachristos, Vasilis; Arridge, Simon

    2016-02-21

Fluorescence diffuse optical tomography (fDOT) provides 3D images of fluorescence distributions in biological tissue, which represent molecular and cellular processes. The image reconstruction problem is highly ill-posed and requires regularisation techniques to stabilise and find meaningful solutions. Quadratic regularisation tends to either oversmooth or generate very noisy reconstructions, depending on the regularisation strength. Edge preserving methods, such as anisotropic diffusion regularisation (AD), can preserve important features in the fluorescence image and smooth out noise. However, AD has limited ability to distinguish an edge from noise. We propose a patch-based anisotropic diffusion regularisation (PAD), where regularisation strength is determined by a weighted average according to the similarity between patches around voxels within a search window, instead of a simple local neighbourhood strategy. However, this method has higher computational complexity and, hence, we wavelet compress the patches (PAD-WT) to speed it up, while simultaneously taking advantage of the denoising properties of wavelet thresholding. Furthermore, structural information can be incorporated into the image reconstruction with PAD-WT to improve image quality and resolution. In this case, the weights used to average voxels in the image are calculated using the structural image, instead of the fluorescence image. The regularisation strength depends on both structural and fluorescence images, which guarantees that the method can preserve fluorescence information even when it is not structurally visible in the anatomical images. In part 1, we tested the method using a denoising problem. Here, we use simulated and in vivo mouse fDOT data to assess the algorithm performance. Our results show that the proposed PAD-WT method provides high-quality, noise-free images, superior to those obtained using AD.

  20. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of the space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  1. Denoising Algorithm for CFA Image Sensors Considering Inter-Channel Correlation.

    PubMed

    Lee, Min Seok; Park, Sang Wook; Kang, Moon Gi

    2017-05-28

In this paper, a spatio-spectral-temporal filter considering an inter-channel correlation is proposed for the denoising of a color filter array (CFA) sequence acquired by CCD/CMOS image sensors. Owing to the alternating under-sampled grid of the CFA pattern, the inter-channel correlation must be considered in the direct denoising process. The proposed filter is applied in the spatial, spectral, and temporal domains, considering the spatio-spectral-temporal correlation. First, nonlocal means (NLM) spatial filtering with patch-based difference (PBD) refinement is performed by considering both the intra-channel correlation and inter-channel correlation to overcome the spatial resolution degradation occurring with the alternating under-sampled pattern. Second, a motion-compensated temporal filter that employs inter-channel correlated motion estimation and compensation is proposed to remove the noise in the temporal domain. Then, a motion adaptive detection value controls the ratio of the spatial filter and the temporal filter. The denoised CFA sequence can thus be obtained without motion artifacts. Experimental results for both simulated and real CFA sequences are presented with visual and numerical comparisons to several state-of-the-art denoising methods combined with a demosaicing method. Experimental results confirmed that the proposed framework outperformed the other techniques in terms of the objective criteria and subjective visual perception in CFA sequences.

  2. Fractal properties and denoising of lidar signals from cirrus clouds

    NASA Astrophysics Data System (ADS)

    van den Heuvel, J. C.; Driesenaar, M. L.; Lerou, R. J. L.

    2000-02-01

Airborne lidar signals of cirrus clouds are analyzed to determine the cloud structure. Climate modeling and numerical weather prediction benefit from accurate modeling of cirrus clouds. Airborne lidar measurements of the European Lidar in Space Technology Experiment (ELITE) campaign were analyzed by combining shots to obtain the backscatter at constant altitude. The signal at high altitude was analyzed for the horizontal structure of cirrus clouds. The power spectrum and the structure function show straight lines on a double logarithmic plot. This behavior is characteristic of a Brownian fractal. Wavelet analysis using the Haar wavelet confirms the fractal aspects. It is shown that the horizontal structure of cirrus can be described by a fractal with a dimension of 1.8 over length scales spanning 4 orders of magnitude. We use the fractal properties in a new denoising method. Denoising is required for future lidar measurements from space that have a low signal-to-noise ratio. Our wavelet denoising is based on the Haar wavelet and uses the statistical fractal properties of cirrus clouds in a method based on the maximum a posteriori (MAP) probability. This denoising based on wavelets is tested on airborne lidar signals from ELITE using added Gaussian noise. Superior results compared with simple averaging are obtained.

  3. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering

    PubMed Central

    2018-01-01

Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance details and layered structures of a human retina image, we propose a collaborative shock filtering for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and then the denoised image is sharpened by a shock-type filtering for edge and detail enhancement. For dim OCT images, in order to improve image contrast for the detection of tiny lesions, a gamma transformation is first used to enhance the images within proper gray levels. The proposed method, integrating image smoothing and sharpening simultaneously, obtains better visual results in experiments. PMID:29599954

  4. Efficient OCT Image Enhancement Based on Collaborative Shock Filtering.

    PubMed

    Liu, Guohua; Wang, Ziyu; Mu, Guoying; Li, Peijin

    2018-01-01

Efficient enhancement of noisy optical coherence tomography (OCT) images is a key task for interpreting them correctly. In this paper, to better enhance details and layered structures of a human retina image, we propose a collaborative shock filtering for OCT image denoising and enhancement. The noisy OCT image is first denoised by a collaborative filtering method with a new similarity measure, and then the denoised image is sharpened by a shock-type filtering for edge and detail enhancement. For dim OCT images, in order to improve image contrast for the detection of tiny lesions, a gamma transformation is first used to enhance the images within proper gray levels. The proposed method, integrating image smoothing and sharpening simultaneously, obtains better visual results in experiments.

  5. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    NASA Astrophysics Data System (ADS)

    Floberg, J. M.; Holden, J. E.

    2013-02-01

We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three- and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
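
    A simplified 3D (x, y, time) stand-in for the STEM idea, Gaussian smoothing followed by EM (Richardson-Lucy) deconvolution with the same kernel as the PSF, might look as follows with scipy and scikit-image; the sigmas and iteration count are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.restoration import richardson_lucy

    # Noisy dynamic data: (x, y, time). The paper works in 4D (x, y, z, t);
    # a 3D stand-in keeps the sketch small.
    data = np.random.poisson(5.0, size=(32, 32, 20)).astype(float)

    # Step 1: broad spatio-temporal Gaussian smoothing, suppressing noise
    # across a wide range of spatial and temporal frequencies.
    sigma = (2.0, 2.0, 1.5)
    smoothed = gaussian_filter(data, sigma)

    # Step 2: EM (Richardson-Lucy) deconvolution with the same Gaussian as
    # the PSF, restoring the frequencies most important to the signal.
    psf = np.zeros((9, 9, 9))
    psf[4, 4, 4] = 1.0
    psf = gaussian_filter(psf, sigma)
    psf /= psf.sum()
    restored = richardson_lucy(smoothed, psf, num_iter=10, clip=False)
    ```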

  6. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSI) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all following tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, conserving relevant information and saliency, as well as speed, given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non Local Bayes (NLB), which we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its best trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operation and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.

  7. Poisson denoising on the sphere

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.

    2009-08-01

In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, ...), and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Then, hypothesis tests are made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
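
    The stabilization idea can be illustrated with the plain Anscombe transform; note this is applied here to raw counts rather than to multi-scale coefficients, so it is a sketch of the VST principle, not the MS-VSTS operator itself:

    ```python
    import numpy as np

    def anscombe(x):
        """Variance-stabilizing transform for Poisson data: the transformed
        samples have approximately unit variance and near-Gaussian statistics."""
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        """Simple algebraic inverse (an asymptotically unbiased inverse
        would add further correction terms)."""
        return (y / 2.0) ** 2 - 3.0 / 8.0

    counts = np.random.poisson(lam=10.0, size=100000)
    stab = anscombe(counts)
    print(stab.std())   # close to 1 for moderate-to-large lambda
    ```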

  8. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    NASA Astrophysics Data System (ADS)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

Electrocardiogram (ECG) signals are nonlinear and non-stationary weak signals which reflect whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising but not yet perfect method for processing nonlinear and non-stationary signals like the ECG. Combining EMD with other algorithms is a good way to improve the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.

  9. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    PubMed

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
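
    A minimal sketch of thin-plate spline smoothing of a noisy height field with scipy is shown below; here the smoothing parameter is fixed by hand, whereas the paper selects it by bootstrap test-error estimation:

    ```python
    import numpy as np
    from scipy.interpolate import Rbf

    # Noisy height-field point set z = f(x, y) + noise.
    rng = np.random.default_rng(0)
    x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
    z = np.sin(np.pi * x) * np.cos(np.pi * y) + 0.05 * rng.standard_normal(200)

    # Thin-plate spline RBF with a hand-picked smoothing parameter.
    tps = Rbf(x, y, z, function='thin_plate', smooth=0.5)

    # Denoising step: project each noisy point onto the fitted surface.
    z_denoised = tps(x, y)
    ```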

  10. A data-driven approach for denoising GNSS position time series

    NASA Astrophysics Data System (ADS)

    Li, Yanyan; Xu, Caijun; Yi, Lei; Fang, Rongxin

    2017-12-01

Global navigation satellite system (GNSS) datasets suffer from common mode error (CME) and other unmodeled errors. To decrease the noise level in GNSS positioning, we propose a new data-driven adaptive multiscale denoising method in this paper. Both synthetic and real-world long-term GNSS datasets were employed to assess the performance of the proposed method, and its results were compared with those of stacking filtering, principal component analysis (PCA) and the recently developed multiscale multiway PCA. It is found that the proposed method can significantly eliminate the high-frequency white noise and remove the low-frequency CME. Furthermore, the proposed method is more precise for denoising GNSS signals than the other denoising methods. For example, in the real-world example, our method reduces the mean standard deviation of the north, east and vertical components from 1.54 to 0.26, 1.64 to 0.21 and 4.80 to 0.72 mm, respectively. Noise analysis indicates that for the original signals, a combination of power-law plus white noise model can be identified as the best noise model. For the filtered time series using our method, the generalized Gauss-Markov model is the best noise model with the spectral indices close to −3, indicating that flicker walk noise can be identified. Moreover, the common mode error in the unfiltered time series is significantly reduced by the proposed method. After filtering with our method, a combination of power-law plus white noise model is the best noise model for the CMEs in the study region.

  11. Standardized processing of MALDI imaging raw data for enhancement of weak analyte signals in mouse models of gastric cancer and Alzheimer's disease.

    PubMed

    Schwartz, Matthias; Meyer, Björn; Wirnitzer, Bernhard; Hopf, Carsten

    2015-03-01

Conventional mass spectrometry image preprocessing methods used for denoising, such as Savitzky-Golay smoothing or the discrete wavelet transformation, typically remove not only noise but also weak signals. Recently, memory-efficient principal component analysis (PCA) in conjunction with random projections (RP) has been proposed for reversible compression and analysis of large mass spectrometry imaging datasets. It considers single-pixel spectra in their local context and consequently offers the prospect of using information from the spectra of adjacent pixels for denoising or signal enhancement. However, little systematic analysis of key RP-PCA parameters has been reported so far, and the utility and validity of this method for context-dependent enhancement of known medically or pharmacologically relevant weak analyte signals in linear-mode matrix-assisted laser desorption/ionization (MALDI) mass spectra have not yet been explored. Here, we investigate MALDI imaging datasets from mouse models of Alzheimer's disease and gastric cancer to systematically assess the importance of selecting the right number of random projections k and of principal components (PCs) L for reconstructing reproducibly denoised images after compression. We provide detailed quantitative data for comparison of RP-PCA denoising with Savitzky-Golay and wavelet-based denoising in these mouse models as a resource for the mass spectrometry imaging community. Most importantly, we demonstrate that RP-PCA preprocessing can enhance signals of low-intensity amyloid-β peptide isoforms such as Aβ1-26 even in sparsely distributed Alzheimer's β-amyloid plaques and that it enables enhanced imaging of multiply acetylated histone H4 isoforms in response to pharmacological histone deacetylase inhibition in vivo. We conclude that RP-PCA denoising may be a useful preprocessing step in biomarker discovery workflows.
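
    The compress-and-reconstruct idea behind RP-PCA can be approximated with scikit-learn's randomized (projection-based) PCA solver; this is a generic rank-L reconstruction on synthetic data, not the authors' implementation, and the number of random projections k is handled internally by the randomized solver:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Toy MALDI-imaging-like matrix: rows = pixel spectra, columns = m/z bins.
    rng = np.random.default_rng(1)
    X = rng.poisson(2.0, size=(1000, 500)).astype(float)

    # Memory-efficient PCA via randomized projections (randomized SVD):
    # keep L principal components, then reconstruct a denoised matrix.
    L = 20
    pca = PCA(n_components=L, svd_solver='randomized', random_state=0)
    scores = pca.fit_transform(X)
    X_denoised = pca.inverse_transform(scores)   # rank-L reconstruction
    ```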

  12. Multiresolution generalized N dimension PCA for ultrasound image denoising

    PubMed Central

    2014-01-01

Background: Ultrasound images are usually affected by speckle noise, which is a type of random multiplicative noise. Thus, reducing speckle and improving image visual quality are vital to obtaining better diagnosis. Method: In this paper, a novel noise reduction method for medical ultrasound images, called multiresolution generalized N dimension PCA (MR-GND-PCA), is presented. In this method, the Gaussian pyramid and multiscale image stacks on each level are built first. GND-PCA, as a multilinear subspace learning method, is used for denoising. Each level is combined to achieve the final denoised image based on Laplacian pyramids. Results: The proposed method is tested with synthetically speckled and real ultrasound images, and quality evaluation metrics, including MSE, SNR and PSNR, are used to evaluate its performance. Conclusion: Experimental results show that the proposed method achieved the lowest noise interference and improved image quality by reducing noise and preserving the structure. Our method is also robust for images with a much higher level of speckle noise. For clinical images, the results show that MR-GND-PCA can reduce speckle and preserve resolvable details. PMID:25096917

  13. An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.

    PubMed

    Khanian, Maryam; Feizi, Awat; Davari, Ali

    2014-01-01

Improving the quality of medical images before and after surgery is necessary for beginning and speeding up the recovery process. Partial differential equations-based models have become a powerful and well-known tool in different areas of image processing such as denoising, multiscale image analysis, edge detection and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the current paper introduces two strategies: utilizing the efficient explicit method, together with an effective software technique to solve the anisotropic diffusion filter, which is mathematically unstable; and proposing an automatic stopping criterion that, in contrast to other stopping criteria, takes into consideration only the input image, besides the quality of the denoised image, ease of use, and time. Various medical images are examined to confirm the claim.

  14. SU-D-12A-02: DeTECT, a Method to Enhance Soft Tissue Contrast From Mega Voltage CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, K; Gou, S; Qi, S

Purpose: MVCT images have been used on the TomoTherapy system to align patients based on bony anatomies, but their usefulness for soft tissue registration, delineation and adaptive radiation therapy is severely limited due to minimal photoelectric interaction and the prominent presence of noise resulting from the low detector quantum efficiency of megavoltage x-rays. We aim to utilize a non-local means denoising method and texture analysis to recover the soft tissue information in MVCT. Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. BM3D is an image denoising algorithm developed from non-local means methods. BM3D additionally creates 3D groups by stacking 2D patches in order of similarity. A 3D denoising operation is then performed, and the resultant 3D group is inversely transformed back to 2D images. In this study, BM3D was applied to MVCT images of a CT quality phantom, a head and neck patient and a prostate patient. Following denoising, imaging texture was enhanced to create the denoised and texture enhanced CT (DeTECT). Results: The original MVCT images show prevalent noise and poor soft tissue contrast. By applying BM3D denoising and texture enhancement, all MVCT images show remarkable improvements. For the phantom, the contrast-to-noise ratio for the low contrast plug was improved from 2.2 to 13.1 without compromising line pair conspicuity. For the head and neck patient, the lymph nodes and vein in the carotid space, inconspicuous in the original MVCT image, become highly visible in DeTECT. For the prostate patient, the boundary between the bladder and the prostate in the original MVCT is successfully recovered. Both results are visually validated by kVCT images of the corresponding patients. Conclusion: DeTECT shows promise to drastically improve the soft tissue contrast of MVCT for image guided radiotherapy and adaptive radiotherapy.

  15. Combined self-learning based single-image super-resolution and dual-tree complex wavelet transform denoising for medical images

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David

    2016-03-01

    In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.

  16. LCD denoise and the vector mutual information method in the application of the gear fault diagnosis under different working conditions

    NASA Astrophysics Data System (ADS)

    Xiangfeng, Zhang; Hong, Jiang

    2018-03-01

In this paper, the full vector LCD method is proposed to solve the misjudgment problem caused by changes in working conditions. First, the signals from different working conditions are decomposed by LCD to obtain Intrinsic Scale Components (ISCs) whose instantaneous frequencies have physical significance. Then, the cross-correlation coefficient between each ISC and the original signal is calculated, and the signal is denoised based on the principle of minimum mutual information. Finally, the sum of the absolute vector mutual information between samples under different working conditions and the denoised ISCs is calculated as the feature for classification using a support vector machine (SVM). Experiments on a wind turbine gearbox vibration platform prove that this method can identify fault characteristics under different working conditions. The advantage of this method is that it reduces dependence on subjective human experience and identifies faults directly from the original vibration signal data. It has high engineering value.

  17. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    PubMed

    Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad

    2018-01-01

The exploration of retinal vessel structure is critically important because numerous diseases, including stroke, Diabetic Retinopathy (DR) and coronary heart disease, can damage it. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and contrast variation in an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, adaptive histogram equalization enhances the dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula and optic disc, etc. To remove local noise, the difference image is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales to enhance vessels of diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.
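
    The vessel-enhancement core of such a pipeline, multi-scale Frangi filtering followed by Otsu thresholding, can be sketched with scikit-image; the synthetic stand-in image, scales, and plain (rather than improved) Otsu rule are illustrative assumptions:

    ```python
    import numpy as np
    from skimage.filters import frangi, threshold_otsu
    from skimage.exposure import equalize_adapthist

    # Stand-in for the green channel of a fundus image (vessels appear dark).
    rng = np.random.default_rng(0)
    img = rng.normal(0.6, 0.05, (256, 256))
    rr = np.arange(256)
    img[rr, rr] -= 0.3                           # a hypothetical dark "vessel"
    img = equalize_adapthist(np.clip(img, 0, 1))  # adaptive hist. equalization

    # Multi-scale Frangi vesselness enhances tubular structures of
    # varying width; black_ridges=True targets dark vessels.
    vesselness = frangi(img, sigmas=np.arange(1, 6), black_ridges=True)

    # The paper uses an improved Otsu thresholding; plain Otsu stands in here.
    mask = vesselness > threshold_otsu(vesselness)
    ```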

  18. New second order Mumford-Shah model based on Γ-convergence approximation for image processing

    NASA Astrophysics Data System (ADS)

    Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li

    2016-05-01

In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneous image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.

  19. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D, for natural images, and outperforms them significantly for piecewise smooth images.
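
    A generic discrete-domain graph-Laplacian-regularized denoiser, solving x = (I + λL)⁻¹ y on a small point set, is sketched below; it illustrates the prior being analyzed rather than the paper's continuous-domain construction, and the weights, λ, and eps are illustrative:

    ```python
    import numpy as np

    def graph_laplacian_denoise(y, coords, lam=0.5, eps=0.1):
        """Denoise a signal y on a graph built from point coordinates.

        Solves min_x ||x - y||^2 + lam * x^T L x, whose closed-form
        solution is x = (I + lam * L)^{-1} y.
        """
        n = len(y)
        d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / eps)               # Gaussian edge weights
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(1)) - W           # combinatorial graph Laplacian
        return np.linalg.solve(np.eye(n) + lam * L, y)

    coords = np.random.rand(100, 2)         # e.g. patch feature vectors
    clean = np.sin(4 * coords[:, 0])
    noisy = clean + 0.2 * np.random.randn(100)
    denoised = graph_laplacian_denoise(noisy, coords)
    ```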

  20. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable here. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, meanwhile bearing the properties of simple structure and high efficiency. Furthermore, our proposed model comes along with an additional advantage: the diffusion process is well suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.

  1. Denoised and texture enhanced MVCT to improve soft tissue conspicuity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Ke, E-mail: ksheng@mednet.ucla.edu; Qi, Sharon X.; Gou, Shuiping

Purpose: MVCT images have been used in TomoTherapy treatment to align patients based on bony anatomies, but their usefulness for soft tissue registration, delineation, and adaptive radiation therapy is limited due to insignificant photoelectric interaction components and the presence of noise resulting from the low detector quantum efficiency of megavoltage x-rays. Algebraic reconstruction with sparsity regularizers, as well as local denoising methods, has not significantly improved soft tissue conspicuity. The authors aim to utilize a nonlocal means denoising method and texture enhancement to recover the soft tissue information in MVCT (DeTECT). Methods: A block matching 3D (BM3D) algorithm was adapted to reduce the noise while keeping the texture information of the MVCT images. Following image denoising, a saliency map was created to further enhance the visual conspicuity of low contrast structures. In this study, BM3D and saliency maps were applied to MVCT images of a CT imaging quality phantom, a head and neck patient, and four prostate patients. Following these steps, the contrast-to-noise ratios (CNRs) were quantified. Results: By applying BM3D denoising and a saliency map, postprocessed MVCT images show remarkable improvements in imaging contrast without compromising resolution. For the head and neck patient, the difficult-to-see lymph nodes and vein in the carotid space in the original MVCT image became conspicuous in DeTECT. For the prostate patients, the ambiguous boundary between the bladder and the prostate in the original MVCT was clarified. The CNRs of phantom low contrast inserts were improved from 1.48 and 3.8 to 13.67 and 16.17, respectively. The CNRs of two regions-of-interest were improved from 1.5 and 3.17 to 3.14 and 15.76, respectively, for the head and neck patient. DeTECT also increased the CNR of the prostate from 0.13 to 1.46 for the four prostate patients. The results are substantially better than a local denoising method using anisotropic diffusion. Conclusions: The authors showed that it is feasible to extract more soft tissue contrast information from noisy MVCT images using a nonlocal means 3D block matching method in combination with saliency maps, revealing information that was originally unperceivable to human observers.

  2. Application of reversible denoising and lifting steps with step skipping to color space transforms for improved lossless compression

    NASA Astrophysics Data System (ADS)

    Starosolski, Roman

    2016-07-01

    Reversible denoising and lifting steps (RDLS) are lifting steps integrated with denoising filters in such a way that, despite the inherently irreversible nature of denoising, they are perfectly reversible. We investigated the application of RDLS to reversible color space transforms: RCT, YCoCg-R, RDgDb, and LDgEb. In order to improve RDLS effects, we propose a heuristic for image-adaptive denoising filter selection, a fast estimator of the compressed image bitrate, and a special filter that may result in skipping of the steps. We analyzed the properties of the presented methods, paying special attention to their usefulness from a practical standpoint. For a diverse image test-set and lossless JPEG-LS, JPEG 2000, and JPEG XR algorithms, RDLS improves the bitrates of all the examined transforms. The most interesting results were obtained for an estimation-based heuristic filter selection out of a set of seven filters; the cost of this variant was similar to or lower than the transform cost, and it improved the average lossless JPEG 2000 bitrates by 2.65% for RDgDb and by over 1% for other transforms; bitrates of certain images were improved to a significantly greater extent.

  3. Spectral information enhancement using wavelet-based iterative filtering for in vivo gamma spectrometry.

    PubMed

    Paul, Sabyasachi; Sarkar, P K

    2013-04-01

    Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.

  4. Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement.

    PubMed

    Wang, Junjie; He, Xiufeng; Ferreira, Vagner G

    2015-08-07

    Monitoring ocean waves plays a crucial role in, for example, coastal environmental and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical GPS-mounted carrier displacements is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average skill, high pass filter and wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method.

  5. Wavefront reconstruction method based on wavelet fractal interpolation for coherent free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng

    2018-03-01

Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, thus resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed after a self-similarity analysis of wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is applied for multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is performed to recover the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection: compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower operational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.

  6. Fully Convolutional Architecture for Low-Dose CT Image Noise Reduction

    NASA Astrophysics Data System (ADS)

    Badretale, S.; Shaker, F.; Babyn, P.; Alirezaie, J.

    2017-10-01

    One of the critical topics in medical low-dose Computed Tomography (CT) imaging is how best to maintain image quality. As the quality of images decreases with lowering of the X-ray radiation dose, improving image quality is extremely important and challenging. We have proposed a novel approach to denoise low-dose CT images. Our algorithm directly learns an end-to-end mapping from low-dose Computed Tomography images to denoised normal-dose CT images. Our method is based on a deep convolutional neural network with rectified linear units. By learning various low-level to high-level features from a low-dose image, the proposed algorithm is capable of creating a high-quality denoised image. We demonstrate the superiority of our technique by comparing the results with two other state-of-the-art methods in terms of the peak signal-to-noise ratio, root-mean-square error, and a structural similarity index.
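
    The architecture described (stacked convolutions with rectified linear units mapping a low-dose slice to its denoised counterpart) can be written down compactly. The sketch below is a minimal PyTorch stand-in; the depth, channel width, and MSE training objective are assumptions, not the authors' published configuration.

      # Minimal fully convolutional denoiser sketch (assumed architecture).
      import torch
      import torch.nn as nn

      class ConvDenoiser(nn.Module):
          def __init__(self, channels=64, depth=5):
              super().__init__()
              layers = [nn.Conv2d(1, channels, 3, padding=1),
                        nn.ReLU(inplace=True)]
              for _ in range(depth - 2):
                  layers += [nn.Conv2d(channels, channels, 3, padding=1),
                             nn.ReLU(inplace=True)]
              layers.append(nn.Conv2d(channels, 1, 3, padding=1))
              self.net = nn.Sequential(*layers)

          def forward(self, x):          # x: (batch, 1, H, W) low-dose slice
              return self.net(x)

      # Training would minimise e.g. MSE against the normal-dose slice:
      # loss = nn.functional.mse_loss(ConvDenoiser()(low_dose), normal_dose)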

  7. Singular Value Decomposition Method to Determine Distance Distributions in Pulsed Dipolar Electron Spin Resonance.

    PubMed

    Srivastava, Madhur; Freed, Jack H

    2017-11-16

    Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude improvement in SNR obviates the need for regularization, which achieves a compromise between canceling the effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulsed dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r), between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.

  8. A novel strategy for signal denoising using reweighted SVD and its applications to weak fault feature enhancement of rotating machinery

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Jia, Xiaodong

    2017-09-01

    Singular value decomposition (SVD), as an effective signal denoising tool, has attracted considerable attention in recent years. The basic idea behind SVD denoising is to preserve the singular components (SCs) with significant singular values. However, since the singular values mainly reflect the energy of the decomposed SCs, traditional SVD denoising approaches are essentially energy-based and tend to highlight the high-energy regular components in the measured signal while ignoring the weak features caused by early faults. To overcome this issue, a reweighted singular value decomposition (RSVD) strategy is proposed for signal denoising and weak feature enhancement. In this work, a novel information index called periodic modulation intensity is introduced to quantify the diagnostic information in a mechanical signal. With this index, the decomposed SCs can be evaluated and sorted according to their information levels rather than their energy. Based on that, a truncated linear weighting function is proposed to control the contribution of each SC to the reconstruction of the denoised signal. In this way, some weak but informative SCs can be highlighted effectively. The advantages of RSVD over traditional approaches are demonstrated on both simulated signals and real vibration/acoustic data from a two-stage gearbox as well as train bearings. The results demonstrate that the proposed method can successfully extract the weak fault feature even in the presence of heavy noise and ambient interference.
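
    The RSVD recipe (decompose, score each singular component, reweight, reconstruct) can be illustrated on a 1-D signal via a Hankel embedding. The sketch below is a simplification under stated assumptions: components are ranked by singular value rather than by the paper's periodic modulation intensity index, and the linear taper is an arbitrary illustrative weighting.

      # SVD denoising with per-component weighting (simplified illustration).
      import numpy as np

      def svd_weighted_denoise(signal, embed=64, keep=8):
          # Hankel (trajectory) matrix of the 1-D signal.
          n = len(signal) - embed + 1
          H = np.lib.stride_tricks.sliding_window_view(signal, embed)[:n].T
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          # Illustrative weights: keep `keep` leading SCs with a linear taper.
          w = np.zeros_like(s)
          w[:keep] = np.linspace(1.0, 0.1, keep)
          Hd = (U * (w * s)) @ Vt
          # Average anti-diagonals to map the matrix back to a 1-D signal.
          return np.array([Hd[::-1].diagonal(k).mean()
                           for k in range(-Hd.shape[0] + 1, Hd.shape[1])])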

  9. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal

    PubMed Central

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-01-01

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, but the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by adaptive moving average (AMA). The AMA-RWE-DFAKF is applied for denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE- based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal. PMID:26512665

  10. AMA- and RWE- Based Adaptive Kalman Filter for Denoising Fiber Optic Gyroscope Drift Signal.

    PubMed

    Yang, Gongliu; Liu, Yuanyuan; Li, Ming; Song, Shunguang

    2015-10-23

    An improved double-factor adaptive Kalman filter called AMA-RWE-DFAKF is proposed to denoise fiber optic gyroscope (FOG) drift signal in both static and dynamic conditions. The first factor is the Kalman gain, updated by random weighting estimation (RWE) of the covariance matrix of the innovation sequence at any time to ensure the lowest noise level of the output, but the inertia of the KF response increases in dynamic conditions. To decrease the inertia, the second factor is the covariance matrix of the predicted state vector, adjusted by RWE only when discontinuities are detected by adaptive moving average (AMA). The AMA-RWE-DFAKF is applied for denoising FOG static and dynamic signals, and its performance is compared with the conventional KF (CKF), the RWE-based adaptive KF with gain correction (RWE-AKFG), and the AMA- and RWE- based dual mode adaptive KF (AMA-RWE-DMAKF). Results of Allan variance on the static signal and root mean square error (RMSE) on the dynamic signal show that the proposed algorithm outperforms all the considered methods in denoising the FOG signal.
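
    For reference, the conventional KF baseline (CKF) that the adaptive variants above improve upon reduces, for a scalar random-walk drift model, to a few lines. The sketch below fixes the process and measurement noise variances q and r by hand; the whole point of the RWE/AMA machinery is to adapt these quantities, which is not reproduced here.

      # Minimal scalar Kalman filter for a random-walk drift in white noise.
      import numpy as np

      def kalman_denoise(z, q=1e-6, r=1e-2):
          x, p = z[0], 1.0                 # state estimate and its variance
          out = np.empty_like(z, dtype=float)
          for k, zk in enumerate(z):
              p += q                       # predict: random-walk drift model
              g = p / (p + r)              # Kalman gain
              x += g * (zk - x)            # update with the new FOG sample
              p *= (1.0 - g)
              out[k] = x
          return out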

  11. Low-illumination image denoising method for wide-area search of nighttime sea surface

    NASA Astrophysics Data System (ADS)

    Song, Ming-zhu; Qu, Hong-song; Zhang, Gui-xiang; Tao, Shu-ping; Jin, Guang

    2018-05-01

    In order to suppress complex mixed noise in low-illumination images for wide-area search of the nighttime sea surface, a model based on total variation (TV) and split Bregman is proposed in this paper. A fidelity term based on the L1 norm and a fidelity term based on the L2 norm are designed in view of the differences between noise types, and a regularization term mixing first-order and second-order TV is designed to balance the influence of detail information, such as texture and edges, in the sea-surface image. The final detection result is obtained by combining, through the wavelet transform, the high-frequency component solved from the L1 norm and the low-frequency component solved from the L2 norm. The experimental results show that the proposed denoising model performs well on both artificially degraded and low-illumination images, and that the image quality assessment indices for the denoised images are superior to those of the contrastive models.
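
    The paper's split Bregman TV model itself is not reproduced here, but scikit-image ships a Bregman-iteration TV denoiser that shows the basic smoothing step such models build on. The weight below is an illustrative assumption, and the random array merely stands in for a sea-surface frame.

      # Baseline TV denoising via Bregman iteration (illustrative settings).
      import numpy as np
      from skimage.restoration import denoise_tv_bregman

      noisy = np.random.rand(256, 256)        # placeholder sea-surface frame
      denoised = denoise_tv_bregman(noisy, weight=5.0)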

  12. Denoising embolic Doppler ultrasound signals using Dual Tree Complex Discrete Wavelet Transform.

    PubMed

    Serbes, Gorkem; Aydin, Nizamettin

    2010-01-01

    Early and accurate detection of asymptomatic emboli is important for monitoring of preventive therapy in stroke-prone patients. One of the problems in the detection of emboli is the identification of an embolic signal caused by very small emboli. The amplitude of the embolic signal may be so small that advanced processing methods are required to distinguish these signals from Doppler signals arising from red blood cells. In this study, instead of the conventional discrete wavelet transform, the Dual Tree Complex Discrete Wavelet Transform was used for denoising embolic signals, and the performances of both approaches were compared. Unlike the conventional discrete wavelet transform, the dual tree complex discrete wavelet transform is a shift-invariant transform with limited redundancy. Results demonstrate that Dual Tree Complex Discrete Wavelet Transform based denoising outperforms conventional discrete wavelet denoising: approximately 8 dB improvement is obtained, compared with less than 5 dB for the conventional Discrete Wavelet Transform.
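
    Shift invariance is the key property here. The dual-tree transform is available in dedicated packages (e.g., the dtcwt Python package); as a simpler shift-invariant stand-in, the sketch below soft-thresholds stationary wavelet transform (SWT) coefficients with PyWavelets. The wavelet, level, and threshold rule are assumptions.

      # Shift-invariant denoising via the SWT (a stand-in for the DT-CWT).
      import numpy as np
      import pywt

      def swt_denoise(signal, wavelet="db4", level=4):
          # Note: len(signal) must be divisible by 2**level for pywt.swt.
          coeffs = pywt.swt(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745   # finest details
          t = sigma * np.sqrt(2 * np.log(len(signal)))
          coeffs = [(cA, pywt.threshold(cD, t, "soft")) for cA, cD in coeffs]
          return pywt.iswt(coeffs, wavelet)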

  13. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    PubMed

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm, both against the native PT and against the PT combined with a denoising process.

  14. SNR enhancement for downhole microseismic data based on scale classification shearlet transform

    NASA Astrophysics Data System (ADS)

    Li, Juan; Ji, Shuo; Li, Yue; Qian, Zhihong; Lu, Weili

    2018-06-01

    Shearlet transform (ST) can be effective in 2D signal processing due to its parabolic scaling, high directional sensitivity, and optimal sparsity. ST combined with thresholding has been successfully applied to suppress random noise. However, because of the low magnitude and high frequency of a downhole microseismic signal, the coefficient values of valid signals and noise are similar in the shearlet domain, making thresholding-based denoising difficult. In this paper, we present a scale classification ST to solve this problem. The ST is used to decompose noisy microseismic data into several scales. By analyzing the spectrum and energy distribution of the shearlet coefficients of microseismic data, we divide the scales into two types: low-frequency scales, which contain less useful signal, and high-frequency scales, which contain more useful signal. After classification, we use two different methods to deal with the coefficients on the different scales. For the low-frequency scales, the noise is attenuated using a thresholding method. For the high-frequency scales, we propose a non-local means filter based on a generalized Gaussian distribution model, which takes advantage of the temporal and spatial similarity of microseismic data. The experimental results on both synthetic records and field data illustrate that our proposed method preserves the useful components and attenuates the noise well.

  15. Nonlocal Means Denoising of Self-Gated and k-Space Sorted 4-Dimensional Magnetic Resonance Imaging Using Block-Matching and 3-Dimensional Filtering: Implications for Pancreatic Tumor Registration and Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Jun; McKenzie, Elizabeth; Fan, Zhaoyang

    Purpose: To denoise self-gated k-space sorted 4-dimensional magnetic resonance imaging (SG-KS-4D-MRI) by applying a nonlocal means denoising filter, block-matching and 3-dimensional filtering (BM3D), to test its impact on the accuracy of 4D image deformable registration and automated tumor segmentation for pancreatic cancer patients. Methods and Materials: Nine patients with pancreatic cancer and abdominal SG-KS-4D-MRI were included in the study. Block-matching and 3D filtering was adapted to search in the axial slices/frames adjacent to the reference image patch in the spatial and temporal domains. The patches with high similarity to the reference patch were used to collectively denoise the 4D-MRI image. The pancreas tumor was manually contoured on the first end-of-exhalation phase for both the raw and the denoised 4D-MRI. B-spline deformable registration was applied to the subsequent phases for contour propagation. The consistency of tumor volume defined by the standard deviation of gross tumor volumes from 10 breathing phases (σ-GTV), tumor motion trajectories in 3 cardinal motion planes, 4D-MRI imaging noise, and image contrast-to-noise ratio were compared between the raw and denoised groups. Results: Block-matching and 3D filtering visually and quantitatively reduced image noise by 52% and improved image contrast-to-noise ratio by 56%, without compromising soft tissue edge definitions. Automatic tumor segmentation is statistically more consistent on the denoised 4D-MRI (σ-GTV = 0.6 cm³) than on the raw 4D-MRI (σ-GTV = 0.8 cm³). Tumor end-of-exhalation location is also more reproducible on the denoised 4D-MRI than on the raw 4D-MRI in all 3 cardinal motion planes. Conclusions: Block-matching and 3D filtering can significantly reduce random image noise while maintaining structural features in the SG-KS-4D-MRI datasets. In this study of pancreatic tumor segmentation, automatic segmentation of GTV in the registered image sets is shown to be more consistent on the denoised 4D-MRI than on the raw 4D-MRI.
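
    For a single 2-D slice, applying BM3D is a one-liner with the open-source bm3d package (assumed installed; the Tampere reference implementation). The paper's spatio-temporal patch search across adjacent 4D-MRI slices and phases goes beyond this per-slice call, and the noise level below is an arbitrary assumption.

      # Per-slice BM3D denoising using the `bm3d` PyPI package (assumed).
      import numpy as np
      import bm3d

      slice_2d = np.random.rand(128, 128)      # placeholder slice in [0, 1]
      denoised = bm3d.bm3d(slice_2d, sigma_psd=0.05)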

  16. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, with the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  17. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    PubMed

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

    Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated into the SMV model and approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M⁴ − 2M³ + 7M² − 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M⁴), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
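
    The identifiability bound quoted above is easy to evaluate. The helper below is a hypothetical convenience, not code from the paper; it returns the maximum number of uniquely identifiable sources for an M-sensor array.

      # K_max = (M^4 - 2M^3 + 7M^2 - 6M) / 8 for an M-sensor array.
      def max_identifiable_sources(M: int) -> int:
          return (M**4 - 2 * M**3 + 7 * M**2 - 6 * M) // 8

      # e.g. M = 6: (1296 - 432 + 252 - 36) / 8 = 135 sources
      assert max_identifiable_sources(6) == 135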

  18. Signal quality enhancement using higher order wavelets for ultrasonic TOFD signals from austenitic stainless steel welds.

    PubMed

    Praveen, Angam; Vijayarekha, K; Abraham, Saju T; Venkatraman, B

    2013-09-01

    Time of flight diffraction (TOFD) is a well-developed ultrasonic non-destructive testing (NDT) technique and has been applied successfully for accurate sizing of defects in metallic materials. Developed in the early 1970s as a means for accurate sizing and positioning of cracks in nuclear components, the technique became very popular in the late 1990s and is today widely used in various industries for weld inspection. One of the main advantages of TOFD is that, apart from being fast, it provides a higher probability of detection for linear defects. Since TOFD is based on diffraction of sound waves from the extremities of the defect, rather than reflection from planar faces as in pulse echo and phased array, the resultant signal is quite weak and the signal-to-noise ratio (SNR) low. In many cases the defect signal is submerged in this noise, making detection, positioning and sizing difficult. Several signal processing methods, such as digital filtering, Split Spectrum Processing (SSP), the Hilbert Transform and correlation techniques, have been developed to suppress unwanted noise and enhance the quality of the defect signal, which can then be used for characterization of defects and the material. Wavelet Transform based thresholding techniques have been widely applied for de-noising of ultrasonic signals. In this paper, however, higher order wavelets are used to analyze the de-noising performance for TOFD signals obtained from Austenitic Stainless Steel welds. It is observed that higher order wavelets give greater SNR improvement compared to the lower order wavelets. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. A new algorithm for ECG interference removal from single channel EMG recording.

    PubMed

    Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein

    2017-09-01

    This paper presents a new method to remove electrocardiogram (ECG) interference from electromyogram (EMG) recordings. This interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method used for denoising digital images corrupted with white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never before been used for EMG and ECG separation or for other 1D signal denoising applications. We use it based on the fact that the amplitude of the EMG signal can be modeled as white Gaussian noise passed through a filter with time-variant properties. The proposed algorithm has been compared with other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio and Pearson correlation.

  20. GPR random noise reduction using BPD and EMD

    NASA Astrophysics Data System (ADS)

    Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz

    2018-04-01

    Ground-penetrating radar (GPR) exploration is a high-frequency technology that images near-surface objects and structures accurately. The high-frequency antenna of the GPR system makes it a high-resolution method compared to other geophysical methods. The frequency range of recorded GPR data is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources, and its correlation with adjacent traces is nearly zero. These characteristics of random noise, together with the high resolution of the GPR system, make denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise using basis pursuit denoising (BPD) combined with empirical mode decomposition (EMD). Our results showed that empirical mode decomposition in combination with BPD provides satisfactory outputs, owing to the sifting process, compared to the time-domain implementation of the BPD method on both synthetic and real examples. Our results also demonstrate that, because of the high computational costs, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.
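
    In its Lagrangian form, basis pursuit denoising solves a lasso problem: find sparse coefficients in a dictionary that reproduce the noisy trace. The toy sketch below uses an inverse-DCT dictionary and scikit-learn's Lasso; the dictionary choice and alpha value are illustrative assumptions, and the paper applies BPD to EMD-derived components rather than to raw traces as done here.

      # Toy BPD (lasso form) on a 1-D trace with a DCT dictionary.
      import numpy as np
      from scipy.fft import idct
      from sklearn.linear_model import Lasso

      def bpd_denoise(trace, alpha=0.05):
          n = len(trace)
          D = idct(np.eye(n), axis=0, norm="ortho")   # columns: DCT atoms
          lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
          coef = lasso.fit(D, trace).coef_            # sparse coefficients
          return D @ coef                             # denoised trace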

  1. Noninvasive Fetal Electrocardiography Part II: Segmented-Beat Modulation Method for Signal Denoising

    PubMed Central

    Agostinelli, Angela; Sbrollini, Agnese; Burattini, Luca; Fioretti, Sandro; Di Nardo, Francesco; Burattini, Laura

    2017-01-01

    Background: Fetal well-being evaluation may be accomplished by monitoring cardiac activity through fetal electrocardiography. Direct fetal electrocardiography (acquired through scalp electrodes) is the gold standard, but its invasiveness limits its clinical applicability. Instead, clinical use of indirect fetal electrocardiography (acquired through abdominal electrodes) is limited by its poor signal quality. Objective: The aim of this study was to evaluate the suitability of the Segmented-Beat Modulation Method to denoise indirect fetal electrocardiograms in order to achieve a signal quality at least comparable to that of the direct ones. Method: Direct and indirect recordings, simultaneously acquired from 5 pregnant women during labor, were filtered with the Segmented-Beat Modulation Method and correlated in order to assess their morphological correspondence. The signal-to-noise ratio was used to quantify their quality. Results: Amplitude was higher in direct than indirect fetal electrocardiograms (median: 104 µV vs. 22 µV; P=7.66·10⁻⁴), whereas noise was comparable (median: 70 µV vs. 49 µV, P=0.45). Moreover, fetal electrocardiogram amplitude was significantly higher than the affecting noise in the direct recordings (P=3.17·10⁻²) and significantly lower in the indirect recordings (P=1.90·10⁻³). Consequently, the signal-to-noise ratio was initially higher for direct than indirect recordings (median: 3.3 dB vs. -2.3 dB; P=3.90·10⁻³), but became lower after denoising of the indirect ones (median: 9.6 dB; P=9.84·10⁻⁴). Eventually, direct and indirect recordings were highly correlated (median: ρ=0.78; P<10⁻²⁰⁸), indicating that the two electrocardiograms were morphologically equivalent. Conclusion: The Segmented-Beat Modulation Method is particularly useful for denoising of the indirect fetal electrocardiogram and may contribute to the spread of this noninvasive technique in clinical practice. PMID:28567129

  2. Implementation of dictionary pair learning algorithm for image quality improvement

    NASA Astrophysics Data System (ADS)

    Vimala, C.; Aruna Priya, P.

    2018-04-01

    This paper proposes an image denoising method based on a dictionary pair learning algorithm. Visual information transmitted in the form of digital images is becoming a major method of communication in the modern age, but the image obtained after transmission is often corrupted with noise. The received image needs processing before it can be used in applications. Image denoising involves the manipulation of the image data to produce a visually high-quality image.
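
    To make the idea concrete, here is a compact patch-based dictionary denoising sketch with scikit-learn, in the spirit of dictionary learning approaches (the paper's dictionary pair learning itself is not reproduced). Patch size, atom count, and the OMP sparsity level are illustrative assumptions.

      # Patch-based dictionary learning denoising sketch (generic, assumed).
      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.feature_extraction.image import (extract_patches_2d,
                                                    reconstruct_from_patches_2d)

      def dictionary_denoise(noisy, patch=8, n_atoms=128):
          patches = extract_patches_2d(noisy, (patch, patch))
          X = patches.reshape(len(patches), -1)
          mean = X.mean(axis=1, keepdims=True)       # remove per-patch DC
          dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                             transform_algorithm="omp",
                                             transform_n_nonzero_coefs=4)
          code = dico.fit(X - mean).transform(X - mean)
          recon = (code @ dico.components_ + mean).reshape(patches.shape)
          return reconstruct_from_patches_2d(recon, noisy.shape)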

  3. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics

    PubMed Central

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

    2010-01-01

    We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D de-noising of functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose a 3-D wavelet-based multi-directional de-noising scheme in which each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module, where we combine a new 3-D wavelet denoising approach with the better signal separation properties of ICA in the wavelet domain, to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of the shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner even for very high noise levels, in addition to a significant reduction in false positive voxels. PMID:21034833

  4. A photon recycling approach to the denoising of ultra-low dose X-ray sequences.

    PubMed

    Hariharan, Sai Gokul; Strobel, Norbert; Kaethner, Christian; Kowarschik, Markus; Demirci, Stefanie; Albarqouni, Shadi; Fahrig, Rebecca; Navab, Nassir

    2018-06-01

    Clinical procedures that make use of fluoroscopy may expose patients, as well as the clinical staff (throughout their careers), to non-negligible doses of radiation. The potential consequences of such exposures fall under two categories, namely stochastic risks (mostly cancer) and deterministic risks (skin injury). According to the "as low as reasonably achievable" principle, the radiation dose can be lowered only if the necessary image quality can be maintained. Our work improves upon existing patch-based denoising algorithms by utilizing a more sophisticated noise model to better exploit non-local self-similarity, which in turn improves the performance of low-rank approximation. The novelty of the proposed approach lies in its properly designed and parameterized noise model and the elimination of initial estimates, which reduces the computational cost significantly. The algorithm has been evaluated on 500 clinical images (7 patients, 20 sequences, 3 clinical sites), taken at ultra-low dose levels, i.e. 50% of the standard low dose level, during electrophysiology procedures. An average improvement in the contrast-to-noise ratio (CNR) by a factor of around 3.5 has been found, corresponding to an image quality achieved at around 12 (the square of 3.5) times the ultra-low dose level. Qualitative evaluation by X-ray image quality experts suggests that the method produces denoised images that comply with the required image quality criteria. The results are consistent with the number of patches used, and they demonstrate that it is possible to use motion estimation techniques and "recycle" photons from previous frames to improve the image quality of the current frame. Our results are comparable in terms of CNR to Video Block Matching 3D, a state-of-the-art denoising method, but qualitative analysis by experts confirms that the denoised ultra-low dose X-ray images obtained using our method are more realistic in appearance.

  5. Application of composite dictionary multi-atom matching in gear fault diagnosis.

    PubMed

    Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng

    2011-01-01

    The sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes a composite dictionary multi-atom matching decomposition and reconstruction algorithm and introduces threshold de-noising into the reconstruction step. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constructed, and a genetic algorithm was applied to search for the best matching atom. The analysis results on simulated gear fault signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be extracted separately. Meanwhile, the robustness of the composite dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the calculation efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, and the calculation efficiency of the decomposition algorithm was significantly enhanced. In addition, it is shown that the multi-atom matching algorithm was superior to the single-atom matching algorithm in both calculation efficiency and robustness. Finally, the above algorithm was applied to engineering gear fault signals and achieved good results.

  6. MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.

    PubMed

    Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K

    2015-04-01

    Fluorescence microscopy images are contaminated by noise, and improving image quality without blurring vascular structures by filtering is an important step in automatic image analysis. The application of interest here is to automatically extract the structural components of the microvascular system with accuracy from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherence-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods, yielding better microvasculature segmentation.

  7. Seismic data fusion anomaly detection

    NASA Astrophysics Data System (ADS)

    Harrity, Kyle; Blasch, Erik; Alford, Mark; Ezekiel, Soundararajan; Ferris, David

    2014-06-01

    Detecting anomalies in non-stationary signals has valuable applications in many fields, including medicine and meteorology, such as identifying possible heart conditions from electrocardiography (ECG) signals or predicting earthquakes via seismographic data. Given the many available anomaly detection algorithms, it is important to compare possible methods. In this paper, we examine and compare two approaches to anomaly detection and see how data fusion methods may improve performance. The first approach involves using an artificial neural network (ANN) to detect anomalies in a wavelet de-noised signal. The other method uses a perspective neural network (PNN) to analyze an arbitrary number of "perspectives" or transformations of the observed signal for anomalies. Possible perspectives may include wavelet de-noising, the Fourier transform, peak-filtering, etc. In order to evaluate these techniques via signal fusion metrics, we apply signal preprocessing techniques such as de-noising methods to the original signal and then use a neural network to find anomalies in the generated signal. From this secondary result it is possible to use data fusion techniques that can be evaluated via existing data fusion metrics for single and multiple perspectives. The result shows which anomaly detection method, according to the metrics, is better suited overall for anomaly detection applications. The method used in this study could be applied to compare other signal processing algorithms.

  8. ERP denoising in multichannel EEG data using contrasts between signal and noise subspaces.

    PubMed

    Ivannikov, Andriy; Kalyakin, Igor; Hämäläinen, Jarmo; Leppänen, Paavo H T; Ristaniemi, Tapani; Lyytinen, Heikki; Kärkkäinen, Tommi

    2009-06-15

    In this paper, a new method intended for ERP denoising in multichannel EEG data is discussed. The denoising is done by separating the ERP and noise subspaces in the multidimensional EEG data by a linear transformation, followed by dimension reduction achieved by ignoring the noise components during the inverse transformation. The separation matrix is found based on the assumption that ERP sources are deterministic for all repetitions of the same type of stimulus within the experiment, while the other noise sources do not obey this determinacy property. A detailed derivation of the technique is given, together with an analysis of the results of its application to a real high-density EEG data set. The interpretation of the results and the performance of the proposed method under conditions where the basic assumptions are violated, e.g. when the problem is underdetermined, are also discussed. Moreover, we study how the number of channels and trials used by the method influences the effectiveness of the ERP/noise subspace separation. In addition, we explore the impact of different data resampling strategies on the performance of the considered algorithm. The results can help in determining the optimal parameters of the equipment and methods used to elicit and reliably estimate ERPs.

  9. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a much more elegant and effective way of treating the problems of algorithm implementation, ill-posed inversion, regularization parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.

  10. Global Binary Optimization on Graphs for Classification of High Dimensional Data

    DTIC Science & Technology

    2014-09-01

    Buades et al. in [10] introduce a new non-local means algorithm for image denoising and compare it to some of the best methods. In [28], Grady describes a random walk algorithm for image segmentation using the solution to a Dirichlet problem. Elmoataz et al. present generalizations of the graph Laplacian [19] for image denoising and manifold smoothing. Couprie et al. in [16] propose a parameterized graph-based energy function that unifies

  11. Noise estimation for hyperspectral imagery using spectral unmixing and synthesis

    NASA Astrophysics Data System (ADS)

    Demirkesen, C.; Leloglu, Ugur M.

    2014-10-01

    Most hyperspectral image (HSI) processing algorithms assume a signal-to-noise ratio model in their formulation, which makes them dependent on accurate noise estimation. Many techniques have been proposed to estimate the noise; a very comprehensive comparative study on the subject is given by Gao et al. [1]. In a nutshell, most techniques are based on the idea of calculating the standard deviation from assumed-to-be homogeneous regions in the image. Some of these algorithms work on a regular grid parameterized with a window size w, while others make use of image segmentation in order to obtain homogeneous regions. This study focuses not only on the statistics of the noise but on the estimation of the noise itself. A noise estimation technique motivated by a recent HSI de-noising approach [2] is proposed in this study. The denoising algorithm is based on estimation of the end-members and their fractional abundances using the non-negative least squares method. The end-members are extracted using the well-known simplex volume optimization technique called NFINDR after manual selection of the number of end-members, and the image is reconstructed using the estimated end-members and abundances. Image de-noising and noise estimation are two sides of the same coin: once we denoise an image, we can estimate the noise by calculating the difference between the de-noised image and the original noisy image. In this study, the noise is estimated as described above. To assess the accuracy of this method, the methodology in [1] is followed, i.e., synthetic images are created by mixing end-member spectra and noise. Since the best-performing method for noise estimation was the spectral and spatial de-correlation (SSDC) method originally proposed in [3], the proposed method is compared to SSDC. The results of the experiments conducted with synthetic HSIs suggest that the proposed noise estimation strategy outperforms the existing techniques in terms of the mean and standard deviation of the absolute error of the estimated noise. Finally, it is shown that the proposed technique is robust to changes in its single parameter, namely the number of end-members.

  12. Image denoising by a direct variational minimization

    NASA Astrophysics Data System (ADS)

    Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan

    2011-12-01

    In this article we introduce a novel method for image de-noising which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach in the field of image processing. It is based on a direct minimization of an energy functional containing a minimal surface regularizer that uses the fractional gradient. The minimization is performed on every predefined patch of the image independently. By doing so, we avoid the use of an artificial-time PDE model, with its inherent problems of finding the optimal stopping time as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator while still obtaining minimal degradation, we combine a Ritz variational method for the actual minimization on a patch with a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, comparing the proposed method with several PDE-based methods, where we obtain significantly better denoising results, especially on oscillatory regions.

  13. Second-order oriented partial-differential equations for denoising in electronic-speckle-pattern interferometry fringes.

    PubMed

    Tang, Chen; Han, Lin; Ren, Hongwei; Zhou, Dongjian; Chang, Yiming; Wang, Xiaohang; Cui, Xiaolong

    2008-10-01

    We derive the second-order oriented partial-differential equations (PDEs) for denoising electronic-speckle-pattern interferometry fringe patterns from two points of view. The first is based on variational methods, and the second is based on controlling the diffusion direction. Our oriented PDE models perform diffusion only along the fringe orientation. The main advantage of our filtering method, based on oriented PDE models, is that it is very easy to implement compared with the published filtering methods along the fringe orientation. We demonstrate the performance of our oriented PDE models via application to two computer-simulated and experimentally obtained speckle fringe patterns, and compare with related PDE models.

  14. Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement

    PubMed Central

    Wang, Junjie; He, Xiufeng; Ferreira, Vagner G.

    2015-01-01

    Monitoring ocean waves plays a crucial role in, for example, coastal environmental and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical GPS-mounted carrier displacements is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average method, high-pass filter and wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved, with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method. PMID:26262620

  15. Evaluation of effectiveness of wavelet based denoising schemes using ANN and SVM for bearing condition classification.

    PubMed

    Vijay, G S; Kumar, H S; Srinivasa Pai, P; Sriram, N S; Rao, Raj B K N

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of the Artificial Neural Network (ANN) and the Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating the defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time- and frequency-domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.

  16. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed: incrementally-updated statistics enable the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With satisfactory denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
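
    The incrementally-updated statistics that make line-by-line operation possible can be sketched with a standard online (Welford-style) covariance update. The class below is a hypothetical illustration of that bookkeeping, not code from the linked repository; the MNF rotation itself would be recomputed from these statistics as each scan line arrives.

      # Online mean/covariance bookkeeping for line-by-line statistics.
      import numpy as np

      class RunningCovariance:
          def __init__(self, n_bands):
              self.n = 0
              self.mean = np.zeros(n_bands)
              self.scatter = np.zeros((n_bands, n_bands))

          def update(self, line):            # line: (n_pixels, n_bands)
              for x in line:
                  self.n += 1
                  d = x - self.mean
                  self.mean += d / self.n
                  self.scatter += np.outer(d, x - self.mean)

          @property
          def cov(self):
              return self.scatter / max(self.n - 1, 1)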

  17. Multiplicative noise removal via a learned dictionary.

    PubMed

    Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong

    2012-11-01

    Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper we propose to learn a dictionary from the logarithmically transformed image and then to use it in a variational model built for noise removal. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.

  18. Accurate prediction of subcellular location of apoptosis proteins combining Chou's PseAAC and PsePSSM based on wavelet denoising.

    PubMed

    Yu, Bin; Li, Shan; Qiu, Wen-Ying; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan

    2017-12-08

    Information on the subcellular localization of apoptosis proteins is very important for understanding the mechanism of programmed cell death and for drug development. Predicting the subcellular localization of an apoptosis protein is still a challenging task, and such predictions can help in understanding protein function and the role of metabolic processes. In this paper, we propose a novel method for protein subcellular localization prediction. Firstly, the features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and the pseudo-position specific scoring matrix (PsePSSM); then the extracted feature information is denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to an SVM classifier to predict the subcellular location of apoptosis proteins. Quite promising predictions are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The results indicate that the method proposed in this paper can remarkably improve the prediction accuracy of apoptosis protein subcellular localization, which will be a supplementary tool for future proteomics research.

  19. Medical image denoising via optimal implementation of non-local means on hybrid parallel architecture.

    PubMed

    Nguyen, Tuan-Anh; Nakib, Amir; Nguyen, Huy-Nam

    2016-06-01

    The non-local means denoising filter has been established as the gold standard for the image denoising problem in general, and particularly in medical imaging, due to its efficiency. However, its computation time has limited its use in real-world applications, especially in medical imaging. In this paper, a distributed version on a parallel hybrid architecture is proposed to solve the computation time problem, and a new method to compute the filter coefficients is also proposed, focusing on the implementation and the enhancement of the filter parameters by taking the neighborhood of the current voxel more accurately into account. In terms of implementation, our key contribution consists in reducing the number of shared memory accesses. The different tests of the proposed method were performed on the BrainWeb database for different levels of noise. Performance and sensitivity were quantified in terms of speedup, peak signal-to-noise ratio, execution time, and the number of floating-point operations. The obtained results demonstrate the efficiency of the proposed method. Moreover, the implementation is compared to that of other techniques recently published in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
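
    As a single-node reference point for the distributed implementation discussed above, scikit-image provides a ready-made non-local means filter. The patch sizes and filtering strength below are illustrative assumptions, and the random array merely stands in for a brain slice.

      # Baseline non-local means with scikit-image (illustrative settings).
      import numpy as np
      from skimage.restoration import denoise_nl_means, estimate_sigma

      noisy = np.random.rand(128, 128)          # placeholder brain slice
      sigma = np.mean(estimate_sigma(noisy))    # rough noise level estimate
      denoised = denoise_nl_means(noisy, h=1.15 * sigma, patch_size=5,
                                  patch_distance=6, fast_mode=True)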

  20. Accurate prediction of subcellular location of apoptosis proteins combining Chou’s PseAAC and PsePSSM based on wavelet denoising

    PubMed Central

    Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Wang, Ming-Hui; Zhang, Yan

    2017-01-01

    Apoptosis proteins subcellular localization information are very important for understanding the mechanism of programmed cell death and the development of drugs. The prediction of subcellular localization of an apoptosis protein is still a challenging task because the prediction of apoptosis proteins subcellular localization can help to understand their function and the role of metabolic processes. In this paper, we propose a novel method for protein subcellular localization prediction. Firstly, the features of the protein sequence are extracted by combining Chou's pseudo amino acid composition (PseAAC) and pseudo-position specific scoring matrix (PsePSSM), then the feature information of the extracted is denoised by two-dimensional (2-D) wavelet denoising. Finally, the optimal feature vectors are input to the SVM classifier to predict subcellular location of apoptosis proteins. Quite promising predictions are obtained using the jackknife test on three widely used datasets and compared with other state-of-the-art methods. The results indicate that the method proposed in this paper can remarkably improve the prediction accuracy of apoptosis protein subcellular localization, which will be a supplementary tool for future proteomics research. PMID:29296195

  1. Video Denoising via Dynamic Video Layering

    NASA Astrophysics Data System (ADS)

    Guo, Han; Vaswani, Namrata

    2018-07-01

    Video denoising refers to the problem of removing "noise" from a video sequence. Here the term "noise" is used in a broad sense to refer to any corruption, outlier, or interference that is not the quantity of interest. In this work, we develop a novel approach to video denoising based on the idea that many noisy or corrupted videos can be split into three parts: the "low-rank layer", the "sparse layer", and a small, bounded residual. We show, using extensive experiments, that our denoising approach outperforms state-of-the-art denoising algorithms.

  2. Automatic target recognition and detection in infrared imagery under cluttered background

    NASA Astrophysics Data System (ADS)

    Gundogdu, Erhan; Koç, Aykut; Alatan, A. Aydın.

    2017-10-01

    Visual object classification has long been studied in the visible spectrum by utilizing conventional cameras. Since labeled images have recently increased in number, it is possible to train deep Convolutional Neural Networks (CNN) with significant numbers of parameters. As infrared (IR) sensor technology has improved over the last two decades, labeled images extracted from IR sensors have started to be used for object detection and recognition tasks. We address the problem of infrared object recognition and detection by exploiting 15K real-field images from long-wave and mid-wave IR sensors. For feature learning, a stacked denoising autoencoder is trained on this IR dataset. To recognize the objects, the trained stacked denoising autoencoder is fine-tuned according to the binary classification loss of the target object. Once the training is completed, the test samples are propagated through the network, and the probability of the test sample belonging to a class is computed. Moreover, the trained classifier is utilized in a detect-by-classification method, where classification is performed on a set of candidate object boxes and the maximum confidence score in a particular location is accepted as the score of the detected object. To decrease the computational complexity, the detection step at every frame is avoided by running an efficient correlation filter based tracker; the detection part is performed only when the tracker confidence is below a pre-defined threshold. The experiments conducted on the real-field images demonstrate that the proposed detection and tracking framework presents satisfactory results for detecting tanks under cluttered backgrounds.

  3. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.

  4. A new method of SC image processing for confluence estimation.

    PubMed

    Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina

    2017-10-01

    Stem cell images are a strong instrument in the estimation of confluency during culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, affect the image quality and, subsequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images affected by an uneven background. The proposed algorithm for enhancing the image is based on coupling a novel image denoising method through a BM3D filter with an adaptive thresholding technique for correcting the uneven background. This algorithm provides a faster, easier, and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The present scheme proves to be valid for the prediction of the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method used in this paper is capable of processing images of cells which already contain various defects due to either personnel mishandling or microscope limitations, and therefore provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Comprehensive Detection of Gas Plumes from Multibeam Water Column Images with Minimisation of Noise Interferences

    PubMed Central

    Zhao, Jianhu; Zhang, Hongmei; Wang, Shiqi

    2017-01-01

    Multibeam echosounder systems (MBES) can record the backscatter strengths of gas plumes in water column (WC) images, which may indicate the possible occurrence of gas at certain depths. Manual or automatic detection is generally adopted for finding gas plumes, but frequently results in low efficiency and high false detection rates because the WC images are polluted by noise. To improve the efficiency and reliability of the detection, a comprehensive detection method is proposed in this paper. In the proposed method, the characteristics of the WC background noise are first analyzed and given. Then, the mean standard deviation threshold segmentations are used for the denoising of the time-angle and depth-angle images respectively, an intersection operation is performed on the two segmented images to further weaken noise in the WC data, and the gas plumes in the WC data are detected from the intersection image by a morphological constraint. The proposed method was tested in shallow-water and deepwater experiments. In these experiments, the detections were conducted automatically, and higher correct detection rates than the traditional methods were achieved. The performance of the proposed method is analyzed and discussed. PMID:29186014

  6. An efficient dictionary learning algorithm and its application to 3-D medical image denoising.

    PubMed

    Li, Shutao; Fang, Leyuan; Yin, Haitao

    2012-02-01

    In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. In the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies can greatly reduce the computational complexity of the MCP and help it obtain a better sparse solution. In the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed to exploit the learning capability of the presented algorithm to simultaneously capture the correlations within each slice and across nearby slices, thereby obtaining better denoising results. Experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods. © 2011 IEEE
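
    A rough 2D sketch of patch-based dictionary-learning denoising in this spirit, with scikit-learn's generic dictionary learner and OMP coder standing in for the paper's MCP and SVD-approximating update; noisy_slice and all parameters are illustrative:

        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import (extract_patches_2d,
                                                      reconstruct_from_patches_2d)

        patches = extract_patches_2d(noisy_slice, (8, 8))  # hypothetical 2D slice
        X = patches.reshape(len(patches), -1).astype(float)
        mean = X.mean(axis=1, keepdims=True)
        X -= mean                                   # learn on zero-mean patches
        dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=4,
                                           random_state=0).fit(X[::10])  # fit on a subsample
        code = dico.transform(X)                    # sparse-code every patch
        rec = (code @ dico.components_ + mean).reshape(patches.shape)
        denoised = reconstruct_from_patches_2d(rec, noisy_slice.shape)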

  7. Denoising of chaotic signal using independent component analysis and empirical mode decomposition with circulate translating

    NASA Astrophysics Data System (ADS)

    Wen-Bo, Wang; Xiao-Dong, Zhang; Yuchan, Chang; Xiang-Li, Wang; Zhao, Wang; Xi, Chen; Lei, Zheng

    2016-01-01

    In this paper, a new method to reduce noise within chaotic signals based on ICA (independent component analysis) and EMD (empirical mode decomposition) is proposed. The basic idea is to first decompose the chaotic signal and construct multidimensional input vectors on the basis of EMD and its translation invariance. Secondly, independent component analysis is performed on the input vectors, which amounts to a self-adapting denoising of the intrinsic mode functions (IMFs) of the chaotic signal. Finally, all IMFs are combined into the new denoised chaotic signal. Experiments were carried out on a Lorenz chaotic signal corrupted by Gaussian noise of different levels and on the monthly observed chaotic sunspot sequence. The results proved that the method proposed in this paper is effective in denoising chaotic signals. Moreover, it can correct the center point in the phase space effectively, making it approach the real track of the chaotic attractor. Project supported by the National Science and Technology, China (Grant No. 2012BAJ15B04), the National Natural Science Foundation of China (Grant Nos. 41071270 and 61473213), the Natural Science Foundation of Hubei Province, China (Grant No. 2015CFB424), the State Key Laboratory Foundation of Satellite Ocean Environment Dynamics, China (Grant No. SOED1405), the Hubei Provincial Key Laboratory Foundation of Metallurgical Industry Process System Science, China (Grant No. Z201303), and the Hubei Key Laboratory Foundation of Transportation Internet of Things, Wuhan University of Technology, China (Grant No. 2015III015-B02).
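
    A rough sketch of the EMD-plus-ICA idea (without the circulate-translating refinement), assuming the PyEMD and scikit-learn packages; the noise criterion used here, a low lag-1 autocorrelation, is an illustrative stand-in for the paper's selection rule:

        import numpy as np
        from PyEMD import EMD
        from sklearn.decomposition import FastICA

        imfs = EMD()(signal)                      # (n_imfs, n_samples), signal is hypothetical
        ica = FastICA(n_components=imfs.shape[0], random_state=0)
        sources = ica.fit_transform(imfs.T)       # (n_samples, n_imfs)

        for k in range(sources.shape[1]):
            s = sources[:, k]
            r1 = np.corrcoef(s[:-1], s[1:])[0, 1]  # lag-1 autocorrelation
            if r1 < 0.2:                           # white-noise-like component
                sources[:, k] = 0.0

        denoised_imfs = ica.inverse_transform(sources).T
        denoised = denoised_imfs.sum(axis=0)       # recombine the IMFs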

  8. A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

    PubMed Central

    Yang, Qiyao; Wang, Zhiguo; Zhang, Guoxu

    2017-01-01

    The PET and CT fusion image, combining anatomical and functional information, has important clinical meaning. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are creatively proposed to preprocess CT and PET images, respectively. Next, a new automated trunk slices extraction method is presented for extracting feature point clouds. Finally, the multithread Iterative Closest Point is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method with lower negative normalization correlation (NC = −0.933) on feature images and less Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one. PMID:28316979
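
    For orientation, a minimal single-threaded ICP iteration over point clouds; the paper drives an affine transform with a multithreaded variant, whereas this sketch estimates a rigid transform for brevity:

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, iters=50):
            # Align src (N,3) to dst (M,3); returns the transformed src.
            tree = cKDTree(dst)
            for _ in range(iters):
                nn = dst[tree.query(src)[1]]          # closest points in dst
                mu_s, mu_d = src.mean(0), nn.mean(0)
                H = (src - mu_s).T @ (nn - mu_d)      # cross-covariance
                U, _, Vt = np.linalg.svd(H)
                if np.linalg.det(Vt.T @ U.T) < 0:     # avoid reflections
                    Vt[-1] *= -1
                R = Vt.T @ U.T                        # best rigid rotation (Kabsch)
                src = (src - mu_s) @ R.T + mu_d
            return src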

  9. Preprocessing with image denoising and histogram equalization for endoscopy image analysis using texture analysis.

    PubMed

    Hiroyasu, Tomoyuki; Hayashinuma, Katsutoshi; Ichikawa, Hiroshi; Yagi, Nobuaki

    2015-08-01

    A preprocessing method for endoscopy image analysis using texture analysis is proposed. In a previous study, we proposed a feature value that combines a co-occurrence matrix and a run-length matrix to analyze the extent of early gastric cancer from images taken with narrow-band imaging endoscopy. However, the obtained feature value does not identify lesion zones correctly due to the influence of noise and halation. Therefore, we propose a new preprocessing method with a non-local means filter for de-noising and contrast limited adaptive histogram equalization. We have confirmed that the pattern of gastric mucosa in images can be improved by the proposed method. Furthermore, the lesion zone is shown more correctly by the obtained color map.
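
    A plausible OpenCV rendering of the proposed preprocessing, with illustrative filter strengths and tile sizes:

        # Non-local means de-noising followed by CLAHE on the lightness channel.
        import cv2

        img = cv2.imread("endoscopy.png")                      # hypothetical file
        den = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
        lab = cv2.cvtColor(den, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        out = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)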

  10. Performance comparison of denoising filters for source camera identification

    NASA Astrophysics Data System (ADS)

    Cortiana, A.; Conotter, V.; Boato, G.; De Natale, F. G. B.

    2011-02-01

    Source identification for digital content is one of the main branches of digital image forensics. It relies on the extraction of the photo-response non-uniformity (PRNU) noise as a unique intrinsic fingerprint that efficiently characterizes the digital device which generated the content. Such noise is estimated as the difference between the content and its de-noised version obtained via denoising filter processing. This paper proposes a performance comparison of different denoising filters for source identification purposes. In particular, results achieved with a sophisticated 3D filter are presented and discussed with respect to state-of-the-art denoising filters previously employed in such a context.
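
    The core residual-and-correlate recipe, sketched with scikit-image's wavelet filter standing in for the candidate denoisers compared in the paper; flatfield_images and query_img are hypothetical float images in [0, 1]:

        import numpy as np
        from skimage.restoration import denoise_wavelet

        def residual(img):
            # Noise residual = image minus its denoised version.
            return img - denoise_wavelet(img)

        # Camera fingerprint: average the residuals of many flat-field images.
        fingerprint = np.mean([residual(f) for f in flatfield_images], axis=0)

        # Identification: correlate a query image's residual with the fingerprint.
        score = np.corrcoef(residual(query_img).ravel(), fingerprint.ravel())[0, 1]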

  11. Fourth-order partial differential equation noise removal on welding images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halim, Suhaila Abd; Ibrahim, Arsmah; Sulong, Tuan Nurul Norazura Tuan

    2015-10-22

    Partial differential equations (PDEs) have become one of the important topics in mathematics and are widely used in various fields, including image denoising in image analysis. In this paper, a fourth-order PDE is discussed and implemented as a denoising method for digital images. The fourth-order PDE is solved computationally using a finite difference approach and then applied to a set of digital radiographic images with welding defects. The performance of the discretized model is evaluated using the Peak Signal to Noise Ratio (PSNR). Simulations are carried out on the discretized model at different levels of Gaussian noise in order to find the maximum PSNR value. The convergence criterion chosen to determine the number of iterations required is the highest PSNR value. Results obtained show that the fourth-order PDE model produces promising results as an image denoising tool compared with the median filter.
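
    One way such a fourth-order scheme can be discretized (a You-Kaveh-type model is assumed here; the paper's exact discretization may differ), with illustrative k, dt, and iteration values:

        import numpy as np
        from scipy.ndimage import laplace

        def fourth_order_pde(u, n_iter=100, dt=0.05, k=10.0):
            u = u.astype(float)
            for _ in range(n_iter):
                lap = laplace(u)
                c = 1.0 / (1.0 + (lap / k) ** 2)    # diffusivity of the Laplacian
                u -= dt * laplace(c * lap)           # u_t = -Laplacian(c(|Lu|) Lu)
            return u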

  12. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform

    PubMed Central

    Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud

    2012-01-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitations of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in roughly an order of magnitude less acquisition time than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained. PMID:23117804

  13. Retinal optical coherence tomography image enhancement via shrinkage denoising using double-density dual-tree complex wavelet transform.

    PubMed

    Chitchian, Shahab; Mayer, Markus A; Boretsky, Adam R; van Kuijk, Frederik J; Motamedi, Massoud

    2012-11-01

    Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the limitations of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in roughly an order of magnitude less acquisition time than the averaging method. In addition, improvements in image quality metrics and a 5 dB increase in signal-to-noise ratio are attained.

  14. Prediction of load threshold of fibre-reinforced laminated composite panels subjected to low velocity drop-weight impact using efficient data filtering techniques

    NASA Astrophysics Data System (ADS)

    Farooq, Umar; Myler, Peter

    This work is concerned with physical testing of carbon fibre laminated composite panels under low velocity drop-weight impacts from flat and round nose impactors. Eight, sixteen, and twenty-four ply panels were considered. Non-destructive damage inspections of the tested specimens were conducted to approximate the impact-induced damage. Recorded data were correlated to load-time, load-deflection, and energy-time history plots to interpret the impact-induced damage. Data filtering techniques were also applied to the noisy data that are unavoidably generated owing to the limitations of the testing and logging systems. Built-in, statistical, and numerical filters effectively predicted load thresholds for the eight and sixteen ply laminates. However, flat nose impact of the twenty-four ply laminates produced clipped data that can only be de-noised using oscillatory algorithms. Filtering and extrapolation of such data have received little attention in the literature and need investigation. The present work demonstrates filtering and extrapolation of the clipped data using a Fast Fourier Convolution algorithm to predict load thresholds. Selected results were compared to the damage zones identified by C-scan, and acceptable agreement was observed. Based on the results, it is proposed that applying advanced data filtering and analysis methods to data collected with the available resources effectively enhances data interpretation without resorting to additional resources. The methodology could be useful for efficient and reliable data analysis and impact-induced damage prediction using data from similar cases.

  15. Denoising the Speaking Brain: Toward a Robust Technique for Correcting Artifact-Contaminated fMRI Data under Severe Motion

    PubMed Central

    Xu, Yisheng; Tong, Yunxia; Liu, Siyuan; Chow, Ho Ming; AbdulSabur, Nuria Y.; Mattay, Govind S.; Braun, Allen R.

    2014-01-01

    A comprehensive set of methods based on spatial independent component analysis (sICA) is presented as a robust technique for artifact removal, applicable to a broad range of functional magnetic resonance imaging (fMRI) experiments that have been plagued by motion-related artifacts. Although the applications of sICA for fMRI denoising have been studied previously, three fundamental elements of this approach have not been established as follows: 1) a mechanistically-based ground truth for component classification; 2) a general framework for evaluating the performance and generalizability of automated classifiers; 3) a reliable method for validating the effectiveness of denoising. Here we perform a thorough investigation of these issues and demonstrate the power of our technique by resolving the problem of severe imaging artifacts associated with continuous overt speech production. As a key methodological feature, a dual-mask sICA method is proposed to isolate a variety of imaging artifacts by directly revealing their extracerebral spatial origins. It also plays an important role for understanding the mechanistic properties of noise components in conjunction with temporal measures of physical or physiological motion. The potentials of a spatially-based machine learning classifier and the general criteria for feature selection have both been examined, in order to maximize the performance and generalizability of automated component classification. The effectiveness of denoising is quantitatively validated by comparing the activation maps of fMRI with those of positron emission tomography acquired under the same task conditions. The general applicability of this technique is further demonstrated by the successful reduction of distance-dependent effect of head motion on resting-state functional connectivity. PMID:25225001

  16. Denoising the speaking brain: toward a robust technique for correcting artifact-contaminated fMRI data under severe motion.

    PubMed

    Xu, Yisheng; Tong, Yunxia; Liu, Siyuan; Chow, Ho Ming; AbdulSabur, Nuria Y; Mattay, Govind S; Braun, Allen R

    2014-12-01

    A comprehensive set of methods based on spatial independent component analysis (sICA) is presented as a robust technique for artifact removal, applicable to a broad range of functional magnetic resonance imaging (fMRI) experiments that have been plagued by motion-related artifacts. Although the applications of sICA for fMRI denoising have been studied previously, three fundamental elements of this approach have not been established as follows: 1) a mechanistically-based ground truth for component classification; 2) a general framework for evaluating the performance and generalizability of automated classifiers; and 3) a reliable method for validating the effectiveness of denoising. Here we perform a thorough investigation of these issues and demonstrate the power of our technique by resolving the problem of severe imaging artifacts associated with continuous overt speech production. As a key methodological feature, a dual-mask sICA method is proposed to isolate a variety of imaging artifacts by directly revealing their extracerebral spatial origins. It also plays an important role for understanding the mechanistic properties of noise components in conjunction with temporal measures of physical or physiological motion. The potentials of a spatially-based machine learning classifier and the general criteria for feature selection have both been examined, in order to maximize the performance and generalizability of automated component classification. The effectiveness of denoising is quantitatively validated by comparing the activation maps of fMRI with those of positron emission tomography acquired under the same task conditions. The general applicability of this technique is further demonstrated by the successful reduction of distance-dependent effect of head motion on resting-state functional connectivity. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. A neural network-based method for spectral distortion correction in photon counting x-ray CT

    NASA Astrophysics Data System (ADS)

    Touch, Mengheng; Clark, Darin P.; Barber, William; Badea, Cristian T.

    2016-08-01

    Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables both 4 energy bins acquisition, as well as full-spectrum mode in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and can be very noisy due to photon starvation in narrow energy bins. To address spectral distortions, we propose and demonstrate a novel artificial neural network (ANN)-based spectral distortion correction mechanism, which learns to undo the distortion in spectral CT, resulting in improved material decomposition accuracy. To address noise, post-reconstruction denoising based on bilateral filtration, which jointly enforces intensity gradient sparsity between spectral samples, is used to further improve the robustness of ANN training and material decomposition accuracy. Our ANN-based distortion correction method is calibrated using 3D-printed phantoms and a model of our spectral CT system. To enable realistic simulations and validation of our method, we first modeled the spectral distortions using experimental data acquired from 109Cd and 133Ba radioactive sources measured with our PCXD. Next, we trained an ANN to learn the relationship between the distorted spectral CT projections and the ideal, distortion-free projections in a calibration step. This required knowledge of the ground truth, distortion-free spectral CT projections, which were obtained by simulating a spectral CT scan of the digital version of a 3D-printed phantom. Once the training was completed, the trained ANN was used to perform distortion correction on any subsequent scans of the same system with the same parameters. We used joint bilateral filtration to perform noise reduction by jointly enforcing intensity gradient sparsity between the reconstructed images for each energy bin. Following reconstruction and denoising, the CT data was spectrally decomposed using the photoelectric effect, Compton scattering, and a K-edge material (i.e. iodine). The ANN-based distortion correction approach was tested using both simulations and experimental data acquired in phantoms and a mouse with our PCXD-based micro-CT system for 4 bins and full-spectrum acquisition modes. The iodine detectability and decomposition accuracy were assessed using the contrast-to-noise ratio and relative error in iodine concentration estimation metrics in images with and without distortion correction. In simulation, the material decomposition accuracy in the reconstructed data was vastly improved following distortion correction and denoising, with 50% and 20% reductions in material concentration measurement error in full-spectrum and 4 energy bins cases, respectively. Overall, experimental data confirms that full-spectrum mode provides superior results to 4-energy mode when the distortion corrections are applied. 
In the experimental data, the material decomposition accuracy was likewise markedly improved following distortion correction and denoising, with as much as a 41% reduction in material concentration measurement error for full-spectrum mode, while also bringing the iodine detectability to 4-6 mg/ml. Distortion correction also improved the 4 bins mode data, but to a lesser extent. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion correction and joint bilateral filtration-based denoising for accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.

  18. Comments on "Image denoising by sparse 3-D transform-domain collaborative filtering".

    PubMed

    Hou, Yingkun; Zhao, Chunxia; Yang, Deyun; Cheng, Yong

    2011-01-01

    In order to resolve the problem that the denoising performance drops sharply when the noise standard deviation reaches 40, it was proposed to replace the wavelet transform with the DCT. In this comment, we argue that this replacement is unnecessary and that the problem can be solved by adjusting some numerical parameters, and we present this parameter modification approach. Experimental results demonstrate that the proposed modification achieves better results, in terms of both peak signal-to-noise ratio and subjective visual quality, than the original method for strong noise.

  19. Multitaper Spectral Analysis and Wavelet Denoising Applied to Helioseismic Data

    NASA Technical Reports Server (NTRS)

    Komm, R. W.; Gu, Y.; Hill, F.; Stark, P. B.; Fodor, I. K.

    1999-01-01

    Estimates of solar normal mode frequencies from helioseismic observations can be improved by using Multitaper Spectral Analysis (MTSA) to estimate spectra from the time series, then using wavelet denoising of the log spectra. MTSA leads to a power spectrum estimate with reduced variance and better leakage properties than the conventional periodogram. Under the assumption of stationarity and mild regularity conditions, the log multitaper spectrum has a statistical distribution that is approximately Gaussian, so wavelet denoising is asymptotically an optimal method to reduce the noise in the estimated spectra. We find that a single m-ν spectrum benefits greatly from MTSA followed by wavelet denoising, and that wavelet denoising by itself can be used to improve m-averaged spectra. We compare estimates using two different 5-taper estimates (Slepian and sine tapers) and the periodogram estimate, for GONG time series at selected angular degrees l. We compare those three spectra with and without wavelet denoising, both visually and in terms of the mode parameters estimated from the pre-processed spectra using the GONG peak-fitting algorithm. The two multitaper estimates give equivalent results. The number of modes fitted well by the GONG algorithm is 20% to 60% larger (depending on l and the temporal frequency) when applied to the multitaper estimates than when applied to the periodogram. The estimated mode parameters (frequency, amplitude and width) are comparable for the three power spectrum estimates, except for modes with very small mode widths (a few frequency bins), where the multitaper spectra broadened the modes compared with the periodogram. We tested the influence of the number of tapers used and found that narrow modes at low n values are broadened to the extent that they can no longer be fit if the number of tapers is too large. For helioseismic time series of this length and temporal resolution, the optimal number of tapers is less than 10.
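
    A bare-bones multitaper estimate with Slepian (DPSS) tapers followed by wavelet denoising of the log spectrum, assuming SciPy and scikit-image; gong_series and the NW/Kmax choices are illustrative:

        import numpy as np
        from scipy.signal.windows import dpss
        from skimage.restoration import denoise_wavelet

        def mtsa_log_spectrum(x, nw=3, k=5):
            tapers = dpss(len(x), NW=nw, Kmax=k)             # (k, len(x)) Slepian tapers
            spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
            return np.log(spectra.mean(axis=0))              # average the taper spectra

        log_s = mtsa_log_spectrum(gong_series)               # hypothetical time series
        log_s_denoised = denoise_wavelet(log_s, mode="soft")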

  20. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    PubMed

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, causing errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimally selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent threshold, soft thresholding and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing on real cryo-ET experiment data, higher quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
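
    A sketch of the selected parameter set using PyWavelets, with a biorthogonal spline wavelet approximating the paper's spline-based DDWT and a median-based subband threshold as an illustrative choice:

        import numpy as np
        import pywt

        def modified_wavelet_shrink(img, wavelet="bior2.4"):
            coeffs = pywt.wavedec2(img, wavelet, level=3)   # three-level decomposition
            out = [coeffs[0]]
            for lvl, details in enumerate(coeffs[1:], start=1):
                if lvl == len(coeffs) - 1:                  # finest level (level 1): zero out
                    out.append(tuple(np.zeros_like(d) for d in details))
                else:
                    # Subband-dependent soft threshold from each band's noise scale.
                    out.append(tuple(
                        pywt.threshold(d, 3 * np.median(np.abs(d)) / 0.6745, mode="soft")
                        for d in details))
            return pywt.waverec2(out, wavelet)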

  1. Ultrasound speckle reduction based on fractional order differentiation.

    PubMed

    Shao, Dangguo; Zhou, Ting; Liu, Fan; Yi, Sanli; Xiang, Yan; Ma, Lei; Xiong, Xin; He, Jianfeng

    2017-07-01

    Ultrasound images show a granular pattern of noise known as speckle that diminishes their quality and causes difficulties in diagnosis. To preserve edges and features, this paper proposes a fractional differentiation-based image operator to reduce speckle in ultrasound. An image de-noising model based on fractional partial differential equations, with a balance relation between k (the gradient modulus threshold that controls the conduction) and v (the order of fractional differentiation), was constructed by an effective combination of fractional calculus theory and a partial differential equation, and its numerical algorithm was realized using a fractional differential mask operator. The proposed algorithm achieves better speckle reduction and structure preservation than three existing methods [the P-M model, the speckle reducing anisotropic diffusion (SRAD) technique, and the detail preserving anisotropic diffusion (DPAD) technique], and it is significantly faster than bilateral filtering (BF) while producing virtually the same experimental results. Ultrasound phantom testing and in vivo imaging show that the proposed method can improve the quality of an ultrasound image in terms of tissue SNR, CNR, and FOM values.

  2. An MRI denoising method using image data redundancy and local SNR estimation.

    PubMed

    Golshan, Hosein M; Hasanzadeh, Reza P R; Yousefzadeh, Shahrokh C

    2013-09-01

    This paper presents an LMMSE-based method for the three-dimensional (3D) denoising of MR images assuming a Rician noise model. Conventionally, the LMMSE method estimates the noise-less signal values using the observed MR data samples within local neighborhoods. This is not an efficient procedure, because 3D MR data intrinsically include many similar samples that can be used to improve the estimation results. To overcome this problem, we model MR data as random fields and establish a principled way of choosing samples not only from a local neighborhood but also from a large portion of the given data. To find similar samples within the MR data, an effective similarity measure based on the local statistical moments of images is presented. The parameters of the proposed filter are automatically chosen from the estimated local signal-to-noise ratio. To further enhance the denoising performance, a recursive version of the introduced approach is also presented. The proposed filter is compared with related state-of-the-art filters using both synthetic and real MR datasets. The experimental results demonstrate the superior performance of our proposal in removing the noise and preserving the anatomical structures of MR images. Copyright © 2013 Elsevier Inc. All rights reserved.
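
    For reference, a local (neighborhood-only, non-recursive) LMMSE estimator for Rician magnitude data of the kind this paper starts from, assuming a known sigma; the closed form follows the common Aja-Fernandez formulation, and the window size is illustrative:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lmmse_rician(m, sigma, size=7):
            m = m.astype(float)
            m2, m4 = m ** 2, m ** 4
            e_m2 = uniform_filter(m2, size)               # local E[M^2]
            e_m4 = uniform_filter(m4, size)               # local E[M^4]
            var_m2 = np.maximum(e_m4 - e_m2 ** 2, 1e-12)  # local Var(M^2)
            K = np.clip(1 - 4 * sigma**2 * (e_m2 - sigma**2) / var_m2, 0, 1)
            a2 = e_m2 - 2 * sigma**2 + K * (m2 - e_m2)    # LMMSE estimate of A^2
            return np.sqrt(np.maximum(a2, 0))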

  3. Minimum risk wavelet shrinkage operator for Poisson image denoising.

    PubMed

    Cheng, Wu; Hirakawa, Keigo

    2015-05-01

    The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients; the modeling of the coefficients is enabled by Skellam distribution analysis. We extend these results by solving for shrinkage operators for the Skellam distribution that minimize the risk functional in the multiscale Poisson image denoising setting. The minimum risk shrinkage operator of this kind effectively produces denoised wavelet coefficients with minimum attainable L2 error.

  4. Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter

    NASA Astrophysics Data System (ADS)

    Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.

    2013-02-01

    Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.

  5. De-noising of 3D multiple-coil MR images using modified LMMSE estimator.

    PubMed

    Yaghoobi, Nima; Hasanzadeh, Reza P R

    2018-06-20

    De-noising is a crucial topic in Magnetic Resonance Imaging (MRI) which focuses on preserving Magnetic Resonance (MR) image information and details during noise suppression. Nowadays the multiple-coil MRI system is preferred to the single-coil one due to its acceleration of the imaging process. Because the noise models in single-coil and multiple-coil MRI systems are different, de-noising methods adapted to single-coil systems do not work appropriately with multiple-coil ones. The noise in single-coil MRI systems is Rician, while in multiple-coil systems (if no subsampling occurs in k-space, or if the GRAPPA reconstruction process is performed in the coils) it obeys a noncentral chi (nc-χ) distribution. In this paper, a new filtering method based on the Linear Minimum Mean Square Error (LMMSE) estimator is proposed for multiple-coil MR images corrupted by nc-χ noise. In the presented method, to obtain an optimum similarity selection of voxels, the Bayesian Mean Square Error (BMSE) criterion is used and proved for the nc-χ noise model, and a nonlocal voxel selection methodology is proposed for the nc-χ distribution. The results illustrate robust and accurate performance compared to the related state-of-the-art methods, on either ideal nc-χ images or GRAPPA reconstructed ones. Copyright © 2018. Published by Elsevier Inc.

  6. Analysis of de-noising methods to improve the precision of the ILSF BPM electronic readout system

    NASA Astrophysics Data System (ADS)

    Shafiee, M.; Feghhi, S. A. H.; Rahighi, J.

    2016-12-01

    In order to achieve optimal operation and a precise control system at particle accelerators, the beam position must be measured with sub-μm precision. We developed a BPM electronic readout system at the Iranian Light Source Facility and tested it experimentally at the ALBA accelerator facility. The results show a precision of 0.54 μm in beam position measurements. To improve the precision of this beam position monitoring system to the sub-μm level, we have studied different de-noising methods such as principal component analysis, wavelet transforms, FIR filtering, and direct averaging. The noise reduction was evaluated to test the ability of these methods. The results show that noise reduction based on the Daubechies wavelet transform performs better than the other algorithms, and the method is suitable for signal noise reduction in beam position monitoring systems.

  7. [Extraction of evoked related potentials by using the combination of independent component analysis and wavelet analysis].

    PubMed

    Zou, Ling; Chen, Shuyue; Sun, Yuqiang; Ma, Zhenghua

    2010-08-01

    In this paper we present a new method combining Independent Component Analysis (ICA) and a wavelet de-noising algorithm to extract Evoked Related Potentials (ERPs). First, the extended Infomax-ICA algorithm is used to analyze EEG signals and obtain the independent components (ICs). Then, the Wave Shrink (WS) method is applied to the demixed ICs as an intermediate step, the EEG data are rebuilt using inverse ICA on the new ICs, and the ERPs are extracted from the de-noised EEG data after averaging over several trials. The experimental results showed that both the combined method and the ICA method could remove eye artifacts and muscle artifacts mixed into the ERPs, while the combined method could retain the brain neural activity mixed in the noise ICs and could extract weak ERPs efficiently from strong background artifacts.

  8. An anisotropic diffusion method for denoising dynamic susceptibility contrast-enhanced magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Murase, Kenya; Yamazaki, Youichi; Shinohara, Masaaki; Kawakami, Kazunori; Kikuchi, Keiichi; Miki, Hitoshi; Mochizuki, Teruhito; Ikezoe, Junpei

    2001-10-01

    The purpose of this study was to present an application of a novel denoising technique for improving the accuracy of cerebral blood flow (CBF) images generated from dynamic susceptibility contrast-enhanced magnetic resonance imaging (DSC-MRI). The method presented in this study was based on anisotropic diffusion (AD). The usefulness of this method was firstly investigated using computer simulations. We applied this method to patient data acquired using a 1.5 T MR system. After a bolus injection of Gd-DTPA, we obtained 40-50 dynamic images with a 1.32-2.08 s time resolution in 4-6 slices. The dynamic images were processed using the AD method, and then the CBF images were generated using pixel-by-pixel deconvolution analysis. For comparison, the CBF images were also generated with or without processing the dynamic images using a median or Gaussian filter. In simulation studies, the standard deviation of the CBF values obtained after processing by the AD method was smaller than that of the CBF values obtained without any processing, while the mean value agreed well with the true CBF value. Although the median and Gaussian filters also reduced image noise, the mean CBF values were considerably underestimated compared with the true values. Clinical studies also suggested that the AD method was capable of reducing the image noise while preserving the quantitative accuracy of CBF images. In conclusion, the AD method appears useful for denoising DSC-MRI, which will make the CBF images generated from DSC-MRI more reliable.
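
    For reference, a minimal Perona-Malik diffusion loop of the kind underlying the AD method, with illustrative kappa, dt, and iteration counts:

        import numpy as np

        def perona_malik(u, n_iter=20, kappa=15.0, dt=0.2):
            u = u.astype(float)
            for _ in range(n_iter):
                dn = np.roll(u, -1, 0) - u   # differences to the 4 neighbours
                ds = np.roll(u, 1, 0) - u
                de = np.roll(u, -1, 1) - u
                dw = np.roll(u, 1, 1) - u
                g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
                u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u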

  9. Effect of denoising on supervised lung parenchymal clusters

    NASA Astrophysics Data System (ADS)

    Jayamani, Padmapriya; Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises for more consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed to the esoteric nature of the parameter-sensitive algorithms, lack of quantitative measures of their efficacy in enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes of Interest (VOIs) were selected across multiple high resolution CT scans to represent samples of different patterns (normal, emphysema, ground glass, honey combing and reticular). The VOIs were labeled through consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures were used to assess the quality of supervised clusters in the original and filtered space. The resultant rank orders were analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster quality. Our exhaustive analysis reveals (a) for a number of similarity measures, the cluster quality is inferior in the filtered space; and (b) for measures that benefit from denoising, a simple median filtering outperforms non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising technique that does not deteriorate the integrity of supervised clusters.

  10. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D-confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing one to 'cool down' the image with respect to the signal, while suppressing much of the fine-scale artifacts that appear during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
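
    A loose reading of this idea in recent scikit-image, interleaving Richardson-Lucy steps with wavelet denoising as the regularizer; the PSF, stack, and alternation schedule are assumptions, not the authors' exact recipe:

        # Alternate a few deconvolution iterations with a wavelet denoising
        # pass that suppresses fine-scale deconvolution artifacts.
        from skimage.restoration import richardson_lucy, denoise_wavelet

        img = confocal_stack          # hypothetical 3D array, nonnegative, in [0, 1]
        psf = measured_psf            # hypothetical measured point spread function
        for _ in range(5):
            img = richardson_lucy(img, psf, num_iter=10, clip=False)
            img = denoise_wavelet(img, mode="soft")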

  11. Image denoising and deblurring using multispectral data

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.

    2017-05-01

    Decision-making systems are currently becoming widespread. These systems are based on the analysis of video sequences and of additional data such as volume, size change, the behavior of one object or a group of objects, temperature gradient, and the presence of local areas with strong differences. Security and control systems are the main areas of application. Noise in the images strongly influences subsequent processing and decision making. This paper considers the problem of primary signal processing for the tasks of image denoising and deblurring of multispectral data. The additional information from multispectral channels can improve the efficiency of object classification. In this paper we use a method of combining information about the objects obtained by cameras in different frequency bands. We apply a method based on simultaneous minimization of the L2 norm and the first-order squared difference of the sequence of estimates to denoise the images and restore blur at the edges. In case of information loss, an approach is applied based on interpolation of data taken from the analysis of objects located in other areas and on information obtained from the multispectral camera. The effectiveness of the proposed approach is shown on a set of test images.

  12. Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan

    2017-09-01

    It is a challenging problem to design excellent dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem, which is solved by alternately applying a hard thresholding operation and a singular value decomposition, as sketched below. Secondly, the noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitive indexes, latent fault feature subspaces can be adaptively recognized and multiple faults are diagnosed simultaneously. Extensive numerical experiments are subsequently implemented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified by performing multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework incorporates the physical prior with the data-driven strategy, and multiple fault features with similar oscillation morphology can be naturally and adaptively decoupled. Secondly, the tight frame dictionary directly learned from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a satisfactory complete signal space description property is guaranteed, and thus the weak feature leakage problem is avoided compared to typical learning methods.
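
    A schematic of the alternating update, written in the style of data-driven tight frame methods (which this approach resembles): hard-threshold the analysis coefficients, then recover the nearest tight frame via an SVD; the sizes and threshold are illustrative:

        import numpy as np

        def tight_frame_learning(X, n_atoms, thresh, n_iter=20):
            # X: (dim, n_samples) signal patch matrix; returns frame W with W W^T = I.
            rng = np.random.default_rng(0)
            U, _, Vt = np.linalg.svd(rng.standard_normal((X.shape[0], n_atoms)),
                                     full_matrices=False)
            W = U @ Vt                                # random initial tight frame
            for _ in range(n_iter):
                C = W.T @ X                           # analysis coefficients
                C[np.abs(C) < thresh] = 0.0           # hard thresholding operation
                U, _, Vt = np.linalg.svd(X @ C.T, full_matrices=False)
                W = U @ Vt                            # SVD step: nearest tight frame
            return W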

  13. A New Pixels Flipping Method for Huge Watermarking Capacity of the Invoice Font Image

    PubMed Central

    Li, Li; Hou, Qingzheng; Lu, Jianfeng; Dai, Junping; Mao, Xiaoyang; Chang, Chin-Chen

    2014-01-01

    Invoice printing uses only two colors, so the invoice font image can be treated as a binary image. To embed watermarks into an invoice image, pixels need to be flipped; the larger the watermark, the more pixels need to be flipped. We propose a new pixel flipping method for invoice images with large watermarking capacity. The pixel flipping method includes a novel interpolation method for binary images, a flippable pixels evaluation mechanism, and a denoising method based on gravity center and chaos degree. The proposed interpolation method ensures that the invoice image keeps its features well after scaling. The flippable pixels evaluation mechanism ensures that the pixels keep better connectivity and smoothness and that the pattern has the highest structural similarity after flipping. The proposed denoising method makes the invoice font image smoother and better suited to human vision. Experiments show that the proposed flipping method not only preserves the invoice font structure well but also improves watermarking capacity. PMID:25489606

  14. Mesh Denoising based on Normal Voting Tensor and Binary Optimization.

    PubMed

    Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad

    2017-08-17

    This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.

  15. A hybrid algorithm for speckle noise reduction of ultrasound images.

    PubMed

    Singh, Karamjeet; Ranade, Sukhjeet Kaur; Singh, Chandan

    2017-09-01

    Medical images are contaminated by multiplicative speckle noise, which significantly reduces the contrast of ultrasound images and has a negative effect on various image interpretation tasks. In this paper, we propose a hybrid denoising approach that combines both local and nonlocal information in an efficient manner. The proposed hybrid algorithm consists of three stages: in the first stage, local statistics in the form of a guided filter are used for an initial reduction of the speckle noise. Then, an improved speckle reducing bilateral filter (SRBF) is developed to further reduce the speckle noise in the medical images. Finally, to reconstruct the diffused edges we use an efficient post-processing technique that jointly considers the advantages of both the bilateral and nonlocal means (NLM) filters for the attenuation of speckle noise. The performance of the proposed hybrid algorithm is evaluated on synthetic, simulated and real ultrasound images. The experiments conducted on various test images demonstrate that our proposed hybrid approach outperforms traditional speckle reduction approaches, including the recently proposed NLM and optimized Bayesian-based NLM. The results of various quantitative and qualitative measures, and visual inspection of denoised synthetic and real ultrasound images, demonstrate that the proposed hybrid algorithm has strong denoising capability and is able to preserve fine image details, such as the edge of a lesion, better than previously developed methods for speckle noise reduction. The denoising and edge preserving capability of the hybrid algorithm is far better than that of existing traditional and recently proposed speckle reduction (SR) filters. The success of the proposed algorithm would help lay the foundation for hybrid algorithms for denoising of ultrasound images. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. A Laplacian based image filtering using switching noise detector.

    PubMed

    Ranjbaran, Ali; Hassan, Anwar Hasni Abu; Jafarpour, Mahboobe; Ranjbaran, Bahar

    2015-01-01

    This paper presents a Laplacian-based image filtering method. Using a local noise estimator function in an energy functional minimizing scheme, we show that the Laplacian, well known as an edge detection operator, can also be used for noise removal. The algorithm can be implemented on a 3x3 window and is easily tuned by the number of iterations. Image denoising reduces to adjusting each pixel value by its Laplacian weighted by the local noise estimator; the only parameter controlling smoothness is the number of iterations. The noise reduction quality of the introduced method is evaluated and compared with classic algorithms such as Wiener and Total Variation based filters for Gaussian noise, and the method is also compared with the state-of-the-art BM3D method on some images. The algorithm is easy, fast and comparable with many classic denoising algorithms for Gaussian noise.
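
    A direct transcription of the described update, with the local noise estimator (the 3x3 standard deviation of the Laplacian, normalized) chosen for illustration; the sign convention follows the smoothing (heat-equation) direction:

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def laplacian_denoise(u, n_iter=10, lam=0.2):
            u = u.astype(float)
            for _ in range(n_iter):
                lap = laplace(u)
                # Local noise estimate: std of the Laplacian in a 3x3 window,
                # normalized to [0, 1] (an illustrative choice).
                var = uniform_filter(lap ** 2, 3) - uniform_filter(lap, 3) ** 2
                w = np.sqrt(np.maximum(var, 0.0))
                w /= w.max() + 1e-12
                u += lam * w * lap        # diffusion step, gated by the noise estimate
            return u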

  17. Fast Acquisition and Reconstruction of Optical Coherence Tomography Images via Sparse Representation

    PubMed Central

    Li, Shutao; McNabb, Ryan P.; Nie, Qing; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.; Farsiu, Sina

    2014-01-01

    In this paper, we present a novel technique, based on compressive sensing principles, for reconstruction and enhancement of multi-dimensional image data. Our method is a major improvement and generalization of the multi-scale sparsity based tomographic denoising (MSBTD) algorithm we recently introduced for reducing speckle noise. Our new technique exhibits several advantages over MSBTD, including its capability to simultaneously reduce noise and interpolate missing data. Unlike MSBTD, our new method does not require an a priori high-quality image from the target imaging subject and thus offers the potential to shorten clinical imaging sessions. This novel image restoration method, which we termed sparsity based simultaneous denoising and interpolation (SBSDI), utilizes sparse representation dictionaries constructed from previously collected datasets. We tested the SBSDI algorithm on retinal spectral domain optical coherence tomography images captured in the clinic. Experiments showed that the SBSDI algorithm qualitatively and quantitatively outperforms other state-of-the-art methods. PMID:23846467

  18. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method built on these distributional models. The method performs shrinkage of wavelet coefficients based on the conditioned probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  19. The EM Method in a Probabilistic Wavelet-Based MRI Denoising.

    PubMed

    Martin-Fernandez, Marcos; Villullas, Sergio

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method built on these distributional models. The method performs shrinkage of wavelet coefficients based on the conditioned probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images.
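
    A simplified sketch of the probabilistic shrinkage, with a two-component Gaussian mixture fitted by EM standing in for the paper's Laplacian-plus-Gaussian model; mr_image, the wavelet, and the level count are illustrative:

        import numpy as np
        import pywt
        from sklearn.mixture import GaussianMixture

        coeffs = pywt.wavedec2(mr_image, "db4", level=3)   # hypothetical 2D slice
        out = [coeffs[0]]
        for details in coeffs[1:]:
            shrunk = []
            for d in details:
                gmm = GaussianMixture(n_components=2, random_state=0).fit(d.reshape(-1, 1))
                detail_cls = np.argmax(gmm.covariances_.ravel())   # wider class = detail
                p_detail = gmm.predict_proba(d.reshape(-1, 1))[:, detail_cls]
                shrunk.append(d * p_detail.reshape(d.shape))       # shrink by posterior
            out.append(tuple(shrunk))
        denoised = pywt.waverec2(out, "db4")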

  20. Conjunction of radial basis function interpolator and artificial intelligence models for time-space modeling of contaminant transport in porous media

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram; Dabrowska, Dominika; Sadikoglu, Fahreddin

    2017-05-01

    As an innovation, both black box and physics-based models were incorporated into simulating groundwater flow and contaminant transport. Time series of groundwater level (GL) and chloride concentration (CC) observed at different piezometers of the study plain were first de-noised by the wavelet-based de-noising approach. The effect of de-noised data on the performance of the artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) was evaluated. Wavelet transform coherence was employed for spatial clustering of piezometers. Then, for each cluster, ANN and ANFIS models were trained to predict GL and CC values. Finally, considering the predicted water heads of piezometers as interior conditions, the radial basis function, as a meshless method that solves the partial differential equations of groundwater flow and contaminant transport (GFCT), was used to estimate GL and CC values at any point within the plain where there is no piezometer. Results indicated that the efficiency of the ANFIS-based spatiotemporal model was up to 13% higher than that of the ANN-based model.

  1. Sparsity-based Poisson denoising with dictionary learning.

    PubMed

    Giryes, Raja; Elad, Michael

    2014-12-01

    The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist that convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results in cases of low SNR.
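
    For contrast, the variance-stabilizing route that the paper argues breaks down at low SNR: Anscombe-transform the counts, apply any Gaussian denoiser, and invert (a simple algebraic inverse is used here, which is biased at low counts); clean_image is a hypothetical nonnegative intensity map:

        import numpy as np
        from skimage.restoration import denoise_wavelet

        def anscombe(x):
            return 2.0 * np.sqrt(x + 3.0 / 8.0)     # stabilizes variance to ~1

        def inverse_anscombe(y):
            return (y / 2.0) ** 2 - 3.0 / 8.0       # simple (biased) inverse

        counts = np.random.poisson(lam=clean_image)  # Poisson-corrupted observation
        den = inverse_anscombe(denoise_wavelet(anscombe(counts), sigma=1.0))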

  2. Automated protein NMR structure determination using wavelet de-noised NOESY spectra.

    PubMed

    Dancea, Felician; Günther, Ulrich

    2005-11-01

    A major time-consuming step of protein NMR structure determination is the generation of reliable NOESY cross peak lists which usually requires a significant amount of manual interaction. Here we present a new algorithm for automated peak picking involving wavelet de-noised NOESY spectra in a process where the identification of peaks is coupled to automated structure determination. The core of this method is the generation of incremental peak lists by applying different wavelet de-noising procedures which yield peak lists of a different noise content. In combination with additional filters which probe the consistency of the peak lists, good convergence of the NOESY-based automated structure determination could be achieved. These algorithms were implemented in the context of the ARIA software for automated NOE assignment and structure determination and were validated for a polysulfide-sulfur transferase protein of known structure. The procedures presented here should be commonly applicable for efficient protein NMR structure determination and automated NMR peak picking.

  3. Enhancing speech recognition using improved particle swarm optimization based hidden Markov model.

    PubMed

    Selvaraj, Lokesh; Ganesan, Balakrishnan

    2014-01-01

    Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic algorithm based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks for the genetic algorithm process, and IP-HMM helps in doing the recognition. The novelty here lies in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.

  4. Looking for the Signal: A guide to iterative noise and artefact removal in X-ray tomographic reconstructions of porous geomaterials

    NASA Astrophysics Data System (ADS)

    Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.

    2017-07-01

    X-ray micro- and nanotomography has evolved into a quantitative analysis tool rather than a mere qualitative visualization technique for the study of porous natural materials. Tomographic reconstructions are subject to noise that has to be handled by image filters prior to quantitative analysis. Typically, denoising filters are designed to handle random noise, such as Gaussian or Poisson noise. In tomographic reconstructions, noise has been projected from Radon space to Euclidean space, i.e. post-reconstruction noise cannot be expected to be random but to be correlated. Reconstruction artefacts, such as streak or ring artefacts, aggravate the filtering process, so algorithms performing well with random noise are not guaranteed to provide satisfactory results for X-ray tomography reconstructions. With sufficient image resolution, the crystalline origin of most geomaterials results in tomography images of objects that are untextured. We developed a denoising framework for these kinds of samples that combines a noise level estimate with iterative nonlocal means denoising. This allows splitting the denoising task into several weak denoising subtasks where the later filtering steps provide a controlled level of texture removal. We give a hands-on explanation of this iterative denoising approach and evaluate the validity and quality of the image enhancement filter in a benchmarking experiment using noise footprints, extracted from real tomography reconstructions, with varying levels of correlation and residual artefacts. We found that our denoising solutions were superior to other denoising algorithms over a broad range of contrast-to-noise ratios on artificial piecewise constant signals.
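
    The split-into-weak-subtasks idea can be illustrated with scikit-image's nonlocal means: each pass removes only a fraction of the currently estimated noise level. The schedule (four passes, h = 0.6 sigma) and the test image are illustrative assumptions, not the authors' framework.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_nl_means, estimate_sigma

        image = img_as_float(data.camera())
        rng = np.random.default_rng(2)
        noisy = image + 0.08 * rng.standard_normal(image.shape)

        # Several weak denoising subtasks instead of one strong one: each pass
        # re-estimates the residual noise and removes only part of it.
        current = noisy
        for _ in range(4):
            sigma_est = estimate_sigma(current)
            current = denoise_nl_means(current, h=0.6 * sigma_est, sigma=sigma_est,
                                       patch_size=5, patch_distance=6, fast_mode=True)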

  5. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    The comparison of different mother wavelets used for de-noising model and experimental data, represented by profiles of absorption spectra of exhaled air, is presented. The impact of wavelet de-noising on the classification quality achieved by principal component analysis is also discussed.

  6. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    PubMed

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects the quality of these images and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel, and fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels present in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of the image. In stage 2, the resultant image is further improved by a fractional order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, geometric filter, bilateral, non-local means, wavelet, Perona et al., total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results on artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Design, development and testing of a low-cost sEMG system and its use in recording muscle activity in human gait.

    PubMed

    Supuk, Tamara Grujic; Skelin, Ana Kuzmanic; Cic, Maja

    2014-05-07

    Surface electromyography (sEMG) is an important measurement technique used in biomechanical, rehabilitation and sport environments. In this article the design, development and testing of a low-cost wearable sEMG system are described. The hardware architecture consists of a two-cascade small-sized bioamplifier with a total gain of 2,000 and band-pass of 3 to 500 Hz. The sampling frequency of the system is 1,000 Hz. Since real measured EMG signals are usually corrupted by various types of noises (motion artifacts, white noise and electromagnetic noise present at 50 Hz and higher harmonics), we have tested several denoising techniques, both on artificial and measured EMG signals. Results showed that a wavelet-based technique implementing Daubechies5 wavelet and soft sqtwolog thresholding is the most appropriate for EMG signals denoising. To test the system performance, EMG activities of six dominant muscles of ten healthy subjects during gait were measured (gluteus maximus, biceps femoris, sartorius, rectus femoris, tibialis anterior and medial gastrocnemius). The obtained EMG envelopes presented against the duration of gait cycle were compared favourably with the EMG data available in the literature, suggesting that the proposed system is suitable for a wide range of applications in biomechanics.
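
    The wavelet scheme the authors found best (Daubechies-5 with soft 'sqtwolog' thresholding) can be sketched with PyWavelets as below; the decomposition level and the toy burst signal are illustrative assumptions.

        import numpy as np
        import pywt

        def wavelet_denoise_emg(signal, wavelet="db5", level=4):
            # Daubechies-5 decomposition with soft universal ('sqtwolog') thresholding
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # Robust noise estimate from the finest detail coefficients (MAD rule)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))   # sqtwolog threshold
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)[:len(signal)]

        rng = np.random.default_rng(3)
        burst = np.hanning(1000) * rng.standard_normal(1000)   # toy sEMG-like burst
        denoised = wavelet_denoise_emg(burst + 0.2 * rng.standard_normal(1000))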

  8. Design, Development and Testing of a Low-Cost sEMG System and Its Use in Recording Muscle Activity in Human Gait

    PubMed Central

    Supuk, Tamara Grujic; Skelin, Ana Kuzmanic; Cic, Maja

    2014-01-01

    Surface electromyography (sEMG) is an important measurement technique used in biomechanical, rehabilitation and sport environments. In this article the design, development and testing of a low-cost wearable sEMG system are described. The hardware architecture consists of a two-cascade small-sized bioamplifier with a total gain of 2,000 and band-pass of 3 to 500 Hz. The sampling frequency of the system is 1,000 Hz. Since real measured EMG signals are usually corrupted by various types of noises (motion artifacts, white noise and electromagnetic noise present at 50 Hz and higher harmonics), we have tested several denoising techniques, both on artificial and measured EMG signals. Results showed that a wavelet-based technique implementing Daubechies5 wavelet and soft sqtwolog thresholding is the most appropriate for EMG signals denoising. To test the system performance, EMG activities of six dominant muscles of ten healthy subjects during gait were measured (gluteus maximus, biceps femoris, sartorius, rectus femoris, tibialis anterior and medial gastrocnemius). The obtained EMG envelopes presented against the duration of gait cycle were compared favourably with the EMG data available in the literature, suggesting that the proposed system is suitable for a wide range of applications in biomechanics. PMID:24811078

  9. Evaluation of Denoising Strategies to Address Motion-Correlated Artifacts in Resting-State Functional Magnetic Resonance Imaging Data from the Human Connectome Project

    PubMed Central

    Kandala, Sridhar; Nolan, Dan; Laumann, Timothy O.; Power, Jonathan D.; Adeyemo, Babatunde; Harms, Michael P.; Petersen, Steven E.; Barch, Deanna M.

    2016-01-01

    Like all resting-state functional connectivity data, the data from the Human Connectome Project (HCP) are adversely affected by structured noise artifacts arising from head motion and physiological processes. Functional connectivity estimates (Pearson's correlation coefficients) were inflated for high-motion time points and for high-motion participants. This inflation occurred across the brain, suggesting the presence of globally distributed artifacts. The degree of inflation was further increased for connections between nearby regions compared with distant regions, suggesting the presence of distance-dependent spatially specific artifacts. We evaluated several denoising methods: censoring high-motion time points, motion regression, the FMRIB independent component analysis-based X-noiseifier (FIX), and mean grayordinate time series regression (MGTR; as a proxy for global signal regression). The results suggest that FIX denoising reduced both types of artifacts, but left substantial global artifacts behind. MGTR significantly reduced global artifacts, but left substantial spatially specific artifacts behind. Censoring high-motion time points resulted in a small reduction of distance-dependent and global artifacts, eliminating neither type. All denoising strategies left differences between high- and low-motion participants, but only MGTR substantially reduced those differences. Ultimately, functional connectivity estimates from HCP data showed spatially specific and globally distributed artifacts, and the most effective approach to address both types of motion-correlated artifacts was a combination of FIX and MGTR. PMID:27571276

  10. Intelligent identification of remnant ridge edges in region west of Yongxing Island, South China Sea

    NASA Astrophysics Data System (ADS)

    Wang, Weiwei; Guo, Jing; Cai, Guanqiang; Wang, Dawei

    2018-02-01

    Edge detection enables identification of geomorphologic unit boundaries and thus assists with geomorphological mapping. In this paper, an intelligent edge identification method is proposed and image processing techniques are applied to multi-beam bathymetry data. To accomplish this, a color image is generated from the bathymetry, and a weighted method is used to convert the color image to a gray image. As the quality of the image has a significant influence on edge detection, different filter methods are applied to the gray image for de-noising. The peak signal-to-noise ratio and mean square error are calculated to evaluate which filter method is most appropriate for depth image filtering, and the edge is subsequently detected using an image binarization method. Traditional image binarization methods cannot manage the complicated uneven seafloor, and therefore a binarization method is proposed that is based on the difference between image pixel values; the appropriate threshold for image binarization is estimated according to the probability distribution of pixel value differences between two adjacent pixels in the horizontal and vertical directions, respectively. Finally, an eight-neighborhood frame is adopted to thin the binary image, connect the intermittent edge, and implement contour extraction. Experimental results show that the method described here can recognize the main boundaries of geomorphologic units. In addition, the proposed automatic edge identification method avoids the use of subjective judgment, and reduces time and labor costs.
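
    The two filter-selection metrics mentioned above are straightforward to compute; a minimal sketch follows (the 8-bit peak value is an assumption about the depth-image encoding).

        import numpy as np

        def mse(reference, filtered):
            # Mean square error between reference and filtered depth images
            return np.mean((reference.astype(float) - filtered.astype(float)) ** 2)

        def psnr(reference, filtered, peak=255.0):
            # Peak signal-to-noise ratio in dB; higher means better noise removal
            return 10.0 * np.log10(peak ** 2 / mse(reference, filtered))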

  11. Single-trial extraction of cognitive evoked potentials by combination of third-order correlation and wavelet denoising.

    PubMed

    Zhang, Z; Tian, X

    2005-01-01

    The application of a recently proposed denoising implementation for obtaining cognitive evoked potentials (CEPs) at the single-trial level is shown. The aim of this investigation is to develop a technique for extracting CEPs by combining the third-order correlation and wavelet denoising methods. First, the noisy CEPs were passed through a finite impulse response filter whose impulse response is matched to the shape of the noise-free signal. It was shown that the filter impulse response can be estimated on the basis of a selected third-order correlation slice (TOCS) of the input noisy CEPs. Second, the output from the third-order correlation filter is decomposed with bi-orthogonal splines at 5 levels, and the CEPs are reconstructed from the final wavelet approximation a5. We study the performance of the method on simulated data as well as on cognitive evoked potentials of normal rats and Alzheimer's disease (AD) model rats. For the simulated data, the method gives a significantly better reconstruction of the single-trial cognitive evoked potential responses than the raw noisy trials. Moreover, with this approach we obtain a significantly better estimation of the amplitudes and latencies of the simulated CEPs. For the real data, the method clearly improves the visualization of single-trial CEPs. This allows the calculation of better averages as well as the study of systematic or unsystematic variations between trials.

  12. MSSA de-noising of horizon time structure to improve the curvature attribute analysis

    NASA Astrophysics Data System (ADS)

    Tiwari, R. K.; Rekapalli, R.; Vedanti, N.

    2017-12-01

    Although seismic attributes are useful for identifying sub-surface structural features such as faults, fractures, lineaments and sharp stratigraphy, the different kinds of noise arising from unknown physical sources during data acquisition and processing create acute problems in the physical interpretation of complex crustal structures. Hence, we propose to study the effect of noise on curvature attribute analysis of seismic time structure data. We propose the Multichannel Singular Spectrum Analysis (MSSA) de-noising algorithm as a pre-filtering scheme to reduce the effect of noise. To demonstrate the procedure, we first compute the most positive and negative curvatures on a synthetic time structure with surface features resembling anticlines, synclines and faults, and then add a known percentage of noise. We noticed that the curvatures estimated from the noisy data reveal considerable deviations from the curvature of the pure synthetic data. This suggests that there is a strong impact of noise on the curvature estimates. Further, we employed the 2D median filter and MSSA methods to filter the noisy time structure and then computed the curvatures. The comparison of curvatures estimated from de-noised data suggests that the results obtained from MSSA de-noised data match well with the curvatures of the pure synthetic data. Finally, we present an example of real data analysis from the Utsira Top (UT) horizon of the Southern Viking Graben, Norway, to identify time-lapse changes in the UT horizon after CO2 injection. We applied the MSSA de-noising algorithm to the UT horizon time structure and amplitude data before and after CO2 injection. Our analyses suggest modest but clearly visible structural changes in the UT horizon after CO2 injection at a few locations, which seem to be associated with the locations of change in seismic amplitudes. Thus, the results from both the synthetic and real field data suggest that the MSSA-based de-noising algorithm is robust for filtering horizon time structures for accurate curvature attribute analysis and better interpretation of structural changes in geological features. Key Words: Curvature attributes, MSSA, Seismic Horizon, 2D-median filter, Utsira Horizon.
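
    The core MSSA operation, rank truncation of a trajectory-matrix SVD followed by diagonal averaging, can be sketched in the single-channel (SSA) case as below; the window length and rank are illustrative choices, not the authors' settings.

        import numpy as np

        def ssa_denoise(series, window, rank):
            # Embed the series into a trajectory (Hankel) matrix of lagged windows
            n = len(series)
            k = n - window + 1
            X = np.column_stack([series[i:i + window] for i in range(k)])
            # Keep only the leading singular components (the coherent signal)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # Diagonal (anti-diagonal) averaging back to a 1-D series
            out = np.zeros(n)
            counts = np.zeros(n)
            for j in range(k):
                out[j:j + window] += X_low[:, j]
                counts[j:j + window] += 1.0
            return out / counts

        t = np.linspace(0, 1, 500)
        noisy = np.sin(2 * np.pi * 5 * t) \
            + 0.5 * np.random.default_rng(4).standard_normal(500)
        smooth = ssa_denoise(noisy, window=60, rank=2)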

  13. Denoised Wigner distribution deconvolution via low-rank matrix completion

    DOE PAGES

    Lee, Justin; Barbastathis, George

    2016-08-23

    Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.

  14. Denoised Wigner distribution deconvolution via low-rank matrix completion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Justin; Barbastathis, George

    Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.

  15. Non local means denoising in photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Siregar, Syahril; Nagaoka, Ryo; Haq, Israr Ul; Saijo, Yoshifumi

    2018-07-01

    Photoacoustic (PA) imaging has the ability to visualize human organs with high spatial resolution and high contrast. Like digital images, PA images are contaminated by random noise arising in the acquisition process. A band-pass filter does not effectively remove this noise because the noise is randomly distributed across the signal bandwidth. We present a noise removal method for PA images using the non local means denoising (NLMD) method. NLMD can be used when there are similarities or redundancies in the image; PA images contain blood vessels whose structure repeats across small patches. The method was tested on PA images of carbon nanotubes in a micropipe, in vivo mouse brain, and in vivo mouse ear. We estimated suitable input parameters for NLMD so that it can be applied automatically after scanning the image in a PA imaging system. Our results show that NLMD enhanced the image quality of the PA images.

  16. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising

    PubMed Central

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is always a key topic and difficult to achieve in an inertial navigation system (INS). In this paper a novel self-initial alignment algorithm is proposed using gravitational apparent motion vectors at three different moments and vector-operation. Simulation and analysis showed that this method easily suffers from the random noise contained in accelerometer measurements which are used to construct apparent motion directly. Aiming to resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed and a novel reconstruction method for apparent motion is designed to avoid the collinearity among vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932

  17. Efficient bias correction for magnetic resonance image denoising.

    PubMed

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique that is used for visualizing detailed internal structure of the body. Observed MRI images are generated by the inverse Fourier transformation from received frequency signals of a magnetic resonance scanner system. Previous research has demonstrated that random noise involved in the observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of such a complicated noise structure in the observed MRI images, denoised images by conventional denoising methods are usually biased, and the bias could reduce image contrast and negatively affect subsequent image analysis. Therefore, it is important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new and more effective bias-correction formula based on the regression analysis and Monte Carlo simulation. Numerical studies show that our proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
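
    As a concrete illustration of the bias in question, the classical moment-based correction below (not the paper's regression-based formula) subtracts the Rician bias in the squared-magnitude domain, using the identity E[M²] = A² + 2σ²; the noise level σ is assumed known.

        import numpy as np

        def rician_bias_correct(magnitude, sigma):
            # Classical correction: E[M^2] = A^2 + 2*sigma^2 under the Rician model,
            # so remove the 2*sigma^2 bias in the squared domain and clip at zero
            corrected_sq = np.maximum(magnitude.astype(float) ** 2 - 2.0 * sigma ** 2, 0.0)
            return np.sqrt(corrected_sq)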

  18. A new method for fusion, denoising and enhancement of x-ray images retrieved from Talbot-Lau grating interferometry.

    PubMed

    Scholkmann, Felix; Revol, Vincent; Kaufmann, Rolf; Baronowski, Heidrun; Kottler, Christian

    2014-03-21

    This paper introduces a new image denoising, fusion and enhancement framework for the combination and optimal visualization of x-ray attenuation contrast (AC), differential phase contrast (DPC) and dark-field contrast (DFC) images retrieved from x-ray Talbot-Lau grating interferometry. The new image fusion framework comprises three steps: (i) denoising each input image (AC, DPC and DFC) through adaptive Wiener filtering, (ii) performing a two-step image fusion process based on the shift-invariant wavelet transform, i.e. first fusing the AC with the DPC image and then fusing the resulting image with the DFC image, and finally (iii) enhancing the fused image using adaptive histogram equalization, adaptive sharpening and contrast optimization. Application examples are presented for two biological objects (a human tooth and a cherry) and the proposed method is compared to two recently published AC/DPC/DFC image processing techniques. In conclusion, the new framework for the processing of AC, DPC and DFC images allows the most relevant features of all three images to be combined in one image while reducing the noise and adaptively enhancing the relevant image features. The newly developed framework may be used in technical and medical applications.

  19. A Combined Methodology to Eliminate Artifacts in Multichannel Electrogastrogram Based on Independent Component Analysis and Ensemble Empirical Mode Decomposition.

    PubMed

    Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K

    2018-06-01

    Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion, and electrode drifts, whose effective elimination remains an open problem. A combined methodology is proposed, coupling independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD), to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure the EGG signals under three gastric conditions, namely preprandial, immediately postprandial, and 2 h postprandial, for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of the intrinsic mode functions obtained by applying the EEMD technique are analyzed to individually identify and remove each of the artifacts. A critical investigation of the proposed ICA-EEMD method reveals its ability to provide higher attenuation of artifacts and lower distortion than those obtained by the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals in all cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals in clinical practice.
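
    A minimal sketch of the EEMD stage follows, assuming the third-party PyEMD package; the toy signal, ensemble settings, and slow-wave frequency band are illustrative assumptions, and a zero-crossing rate serves as a crude proxy for the instantaneous-frequency analysis described above.

        import numpy as np
        from PyEMD import EEMD   # third-party package (pip install EMD-signal)

        rng = np.random.default_rng(5)
        t = np.linspace(0, 60, 3000)                    # 60 s record at 50 Hz
        slow_wave = np.sin(2 * np.pi * 0.05 * t)        # ~3 cycles/min gastric slow wave
        cardiac = 0.4 * np.sin(2 * np.pi * 1.2 * t)     # cardiac interference
        signal = slow_wave + cardiac + 0.1 * rng.standard_normal(t.size)

        eemd = EEMD(trials=50, noise_width=0.2)
        imfs = eemd.eemd(signal, t)

        def zero_cross_rate(x):
            # Crude instantaneous-frequency proxy: zero crossings per second / 2
            crossings = np.count_nonzero(np.diff(np.signbit(x).astype(int)))
            return crossings / (t[-1] - t[0]) / 2.0

        # Keep only IMFs whose dominant frequency lies in the slow-wave band
        kept = [imf for imf in imfs if 0.01 < zero_cross_rate(imf) < 0.12]
        denoised = np.sum(kept, axis=0) if kept else np.zeros_like(signal)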

  20. A Morphological Hessian Based Approach for Retinal Blood Vessels Segmentation and Denoising Using Region Based Otsu Thresholding

    PubMed Central

    BahadarKhan, Khan; A Khaliq, Amir; Shahid, Muhammad

    2016-01-01

    Diabetic Retinopathy (DR) harms retinal blood vessels in the eye, causing visual deficiency. The appearance and structure of blood vessels in retinal images play an essential part in the diagnosis of eye diseases. We propose a computationally light unsupervised automated technique, with promising results, for detection of the retinal vasculature using a morphological Hessian-based approach and region-based Otsu thresholding. Contrast Limited Adaptive Histogram Equalization (CLAHE) and morphological filters have been used for enhancement and to remove low-frequency noise or geometrical objects, respectively. The Hessian matrix and eigenvalue approach has been used in a modified form at two different scales to extract wide and thin vessel enhanced images separately. Otsu thresholding has been further applied in a novel way to classify vessel and non-vessel pixels from both enhanced images. Finally, postprocessing steps have been used to eliminate unwanted regions/segments, non-vessel pixels, disease abnormalities and noise, to obtain the final segmented image. The proposed technique has been analyzed on the openly accessible DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (STructured Analysis of the REtina) databases along with the ground truth data that has been precisely marked by experts. PMID:27441646
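
    The two-scale Hessian vesselness plus Otsu classification can be sketched with scikit-image's related Frangi filter as below; the scale ranges and the stand-in test image are illustrative assumptions rather than the authors' modified formulation.

        import numpy as np
        from skimage import data, exposure, img_as_float
        from skimage.filters import frangi, threshold_otsu

        image = img_as_float(data.camera())          # stand-in for a fundus green channel

        # CLAHE enhancement, as in the preprocessing step described above
        enhanced = exposure.equalize_adapthist(image)

        # Hessian eigenvalue vesselness at two scale ranges: thin vs. wide vessels
        thin = frangi(enhanced, sigmas=(1, 2), black_ridges=True)
        wide = frangi(enhanced, sigmas=(3, 6), black_ridges=True)

        # Otsu thresholding classifies vessel vs. non-vessel pixels in each response
        vessels = (thin > threshold_otsu(thin)) | (wide > threshold_otsu(wide))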

  1. Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Luan, X.

    2017-12-01

    Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm that performs wave-field separation based on the scale differences between the effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose the seismic data and obtain a series of intrinsic mode functions (IMFs) with different scales. Based on the difference in Hausdorff dimension between effective signal and random noise, we identify the IMF components mixed with random noise. Then we use a threshold correlation filtering process to separate the valid signal and random noise effectively. Compared with the traditional EMD method, the results show that the new seismic random noise attenuation method has a better suppression effect. The implementation process: The EMD algorithm is used to decompose seismic signals into IMF sets and analyze their spectra. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: the first category is the effective wave composition at larger scales; the second category is the noise part at smaller scales; the third category is the IMF components containing random noise. Then, the third kind of IMF component is processed by the Hausdorff dimension algorithm, and an appropriate time window size, initial step and increment are selected to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise is between 1.0 and 1.05, while the dimension of the effective wave is between 1.05 and 2.0. On this basis, according to the dimension difference between random noise and effective signal, we extract the sample points whose fractal dimension is less than or equal to 1.05 for each IMF component, to separate the residual noise. Using the IMF components after dimension filtering together with the effective-wave IMF components from the first selection for reconstruction, we obtain the de-noised result.

  2. The experimental research on response characteristics of coal samples under the uniaxial loading process

    NASA Astrophysics Data System (ADS)

    Jia, Bing; Wei, Jian-Ping; Wen, Zhi-Hui; Wang, Yun-Gang; Jia, Lin-Xing

    2017-11-01

    In order to study the response characteristics of infrasound in coal samples under the uniaxial loading process, coal samples were collected from the GengCun mine. A coal rock stress loading device, an acoustic emission test system and an infrasound test system were used to record the infrasonic and acoustic emission signals during uniaxial loading. The measured signals were analyzed by wavelet filtering, threshold denoising, time-frequency analysis and related methods. The results showed that in the loading process the infrasonic activity evolved in distinct stages and could be divided into three: an initial stage with a certain number of infrasound events, a middle stage with few infrasound events, and a late stage with a gradual decrease. This was highly consistent with the changing characteristics of the acoustic emission. At the same time, the frequency of the infrasound was very low, so it can propagate over a very long distance with little attenuation, and the infrasound characteristics before the destruction of the coal samples were obvious. A method of using the infrasound characteristics to predict the destruction of coal samples was proposed. This is of great significance for guiding the prediction of geological hazards in coal mines.

  3. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    PubMed

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from an LC-MS/MS experiment are assigned to peptides by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide-spectrum matches are then used to infer a list of identified proteins in the original sample. However, search engines often fail to distinguish between correct and incorrect peptide assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through an SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate that De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.

  4. Absolute phase estimation: adaptive local denoising and global unwrapping.

    PubMed

    Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen

    2008-10-10

    The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2π noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2π phase obtained in the first step. The adaptive local modulo-2π phase denoising is a new algorithm based on local polynomial approximations. The zero-order and the first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process. 16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
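
    Why local denoising must respect the wrap-around can be made concrete with the standard complex-domain trick below: filtering exp(iφ) rather than φ averages correctly across the 2π discontinuity. The fixed uniform window is a simplification of the paper's adaptive, variable-window polynomial scheme.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def denoise_wrapped_phase(phase, size=5):
            # Smooth the complex phasor, not the phase itself, so that values
            # near -pi and +pi average correctly across the wrap-around
            z = np.exp(1j * phase)
            z_smooth = uniform_filter(z.real, size) + 1j * uniform_filter(z.imag, size)
            return np.angle(z_smooth)   # still wrapped; unwrapping is step two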

  5. An unbiased risk estimator for image denoising in the presence of mixed poisson-gaussian noise.

    PubMed

    Le Montagner, Yoann; Angelini, Elsa D; Olivo-Marin, Jean-Christophe

    2014-03-01

    The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally designed to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator in the case when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
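
    The mixed noise model unified by the PG-URE estimator can be written as y = ζ·Poisson(x/ζ) + N(0, σ²); a minimal simulation sketch follows, with illustrative parameter values.

        import numpy as np

        def add_poisson_gaussian(image, zeta=0.1, sigma=0.02, rng=None):
            # y = zeta * Poisson(x / zeta) + N(0, sigma^2): the signal-dependent
            # Poisson part models photon shot noise, the Gaussian part sensor noise
            rng = rng or np.random.default_rng()
            shot = zeta * rng.poisson(image / zeta)
            return shot + sigma * rng.standard_normal(image.shape)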

  6. Local denoising of digital speckle pattern interferometry fringes by multiplicative correlation and weighted smoothing splines.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2005-05-10

    We evaluate the use of smoothing splines with a weighted roughness measure for local denoising of the correlation fringes produced in digital speckle pattern interferometry. In particular, we also evaluate the performance of the multiplicative correlation operation between two speckle patterns that is proposed as an alternative procedure to generate the correlation fringes. It is shown that the application of a normalization algorithm to the smoothed correlation fringes reduces the excessive bias generated in the previous filtering stage. The evaluation is carried out by use of computer-simulated fringes that are generated for different average speckle sizes and intensities of the reference beam, including decorrelation effects. A comparison with filtering methods based on the continuous wavelet transform is also presented. Finally, the performance of the smoothing method in processing experimental data is illustrated.

  7. (Non-) homomorphic approaches to denoise intensity SAR images with non-local means and stochastic distances

    NASA Astrophysics Data System (ADS)

    Penna, Pedro A. A.; Mascarenhas, Nelson D. A.

    2018-02-01

    The development of new methods to denoise images still attracts researchers, who seek to combat the noise with minimal loss of resolution and details, like edges and fine structures. Many algorithms aim to remove additive white Gaussian noise (AWGN). However, it is not the only type of noise that interferes in the analysis and interpretation of images. Therefore, it is extremely important to extend the filters' capacity to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper develops two approaches using non local means (NLM), originally designed for AWGN, extending its capacity to speckle in intensity SAR images. The first approach is grounded on the use of stochastic distances based on the G0 distribution, without transforming the data to the logarithmic domain via a homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances in NLM. The second method applies a homomorphic transformation before NLM denoising and uses the inverse Gamma distribution to estimate the parameters that are used in NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters for the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods against some recent filters from the literature.

  8. Dereverberation and denoising based on generalized spectral subtraction by multi-channel LMS algorithm using a small-scale microphone array

    NASA Astrophysics Data System (ADS)

    Wang, Longbiao; Odani, Kyohei; Kai, Atsuhiko

    2012-12-01

    A blind dereverberation method based on power spectral subtraction (SS) using a multi-channel least mean squares algorithm was previously proposed to suppress reverberant speech without additive noise. The results of isolated word speech recognition experiments showed that this method achieved significant improvements over conventional cepstral mean normalization (CMN) in a reverberant environment. In this paper, we propose a blind dereverberation method based on generalized spectral subtraction (GSS), which has been shown to be effective for noise reduction, instead of power SS. Furthermore, we extend the missing feature theory (MFT), which was initially proposed to enhance robustness against additive noise, to dereverberation. A one-stage dereverberation and denoising method based on GSS is presented to simultaneously suppress both the additive noise and nonstationary multiplicative noise (reverberation). The proposed dereverberation method based on GSS with MFT is evaluated on a large vocabulary continuous speech recognition task. When the additive noise is absent, the dereverberation method based on GSS with MFT using only 2 microphones achieves relative word error reduction rates of 11.4% and 32.6% compared to the dereverberation method based on power SS and the conventional CMN, respectively. For reverberant and noisy speech, the dereverberation and denoising method based on GSS achieves a relative word error reduction rate of 12.8% compared to the conventional CMN with a GSS-based additive noise reduction method. We also analyze the factors affecting compensation parameter estimation for the SS-based dereverberation method, such as the number of channels (the number of microphones), the length of reverberation to be suppressed, and the length of the utterance used for parameter estimation. The experimental results showed that the SS-based method is robust in a variety of reverberant environments for both isolated and continuous speech recognition and under various parameter estimation conditions.
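
    A single-channel sketch of spectral subtraction in its generalized form is given below (the exponent γ generalizes power SS, which corresponds to γ = 2); the STFT settings and over/under-subtraction factors are illustrative, and the multi-channel LMS dereverberation stage is not reproduced.

        import numpy as np
        from scipy.signal import stft, istft

        def generalized_spectral_subtraction(x, noise, fs, gamma=0.1, alpha=1.0, beta=0.01):
            # Subtract a noise magnitude estimate in the |X|^gamma domain,
            # flooring at beta times the noise level to avoid negative magnitudes
            f, t, X = stft(x, fs=fs, nperseg=512)
            _, _, N = stft(noise, fs=fs, nperseg=512)
            noise_mag = np.mean(np.abs(N) ** gamma, axis=1, keepdims=True)
            mag = np.maximum(np.abs(X) ** gamma - alpha * noise_mag, beta * noise_mag)
            S = mag ** (1.0 / gamma) * np.exp(1j * np.angle(X))  # keep the noisy phase
            _, y = istft(S, fs=fs, nperseg=512)
            return y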

  9. Lifting wavelet method of target detection

    NASA Astrophysics Data System (ADS)

    Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin

    2009-11-01

    Image target recognition plays a very important role in scientific exploration, aeronautics, space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference in complex environments have always affected the stability of recognition algorithms. In this paper, to address the real-time performance, accuracy and anti-interference problems of target detection, a lifting-wavelet-based image target detection method is used. First, histogram equalization and frame differencing are used to obtain the target region, and adaptive thresholding together with mathematical morphology operations eliminates background errors. Second, a multi-channel wavelet filter performs wavelet-transform de-noising and enhancement of the original image, overcoming the noise sensitivity of general algorithms and reducing the false-alarm rate. The multi-resolution characteristics of the lifting wavelet framework can be exploited directly in the spatio-temporal domain, benefiting target detection and target feature extraction. The experimental results show that the designed lifting wavelet resolves the difficulties of detecting moving targets in complex backgrounds; it can effectively suppress noise and improve the efficiency and speed of detection.

  10. Cloud-scale genomic signals processing classification analysis for gene expression microarray data.

    PubMed

    Harvey, Benjamin; Soo-Yeon Ji

    2014-01-01

    As the microarray data available to scientists continue to increase in size and complexity, it has become overwhelmingly important to find multiple ways to draw inference, through analysis of DNA/mRNA sequence data, that is useful to scientists. Though there have been many attempts to address the problem of drawing biological inference by means of wavelet preprocessing and classification, there has not been a research effort that focuses on cloud-scale classification analysis of microarray data using wavelet thresholding in a cloud environment to identify significantly expressed features. This paper proposes a novel methodology that uses wavelet-based denoising to initialize a threshold for the determination of significantly expressed genes for classification. Additionally, this research was implemented within a cloud-based distributed processing environment. Cloud computing and wavelet thresholding were used for the classification of 14 tumor classes from the Global Cancer Map (GCM). The results proved to be more accurate than using a predefined p-value for differential expression classification. This novel methodology analyzed wavelet-based threshold features of gene expression in a cloud environment, furthermore classifying the expression of samples by analyzing gene patterns, which inform us of biological processes. Moreover, it enables researchers to face the present and forthcoming challenges that may arise in the analysis of large microarray datasets in functional genomics.

  11. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    PubMed

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.

  12. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  13. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    PubMed

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. This algorithm does not share the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations, because of its nonlinear framework. In addition, it has less computational complexity compared with the particle filter. This filter improves ECG denoising performance by implementing a marginalized particle filter framework while reducing its computational complexity using the EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework on the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from the MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises as well as nonstationary real muscle artifact (MA) noise over a range of low SNRs from 10 to -5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms, which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From an SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms at lower input SNRs, where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as the presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with the EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" or MSEWPRD. The results revealed that our proposed algorithm had the lowest MSEWPRD for all noise types at low input SNRs. Therefore, the morphology and diagnostic information of ECG signals were much better conserved compared with the EKF/EKS frameworks, especially in non-Gaussian nonstationary situations.

  14. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint.

    PubMed

    Gao, Zhi; Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Ramesh, Bharath; Zhai, Ruifang

    2018-05-06

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency.
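
    The fixed-dictionary idea can be sketched with scikit-learn's orthogonal matching pursuit; the hand-built ramp/step atoms below merely stand in for the paper's pre-learned ridge dictionary, and all parameter values are illustrative.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        # Hand-built stand-in for a pre-learned ridge dictionary:
        # shifted linear ramps and steps, normalized to unit-norm atoms
        dim = 64
        ramps = [np.clip(np.arange(dim, dtype=float) - s, 0, None) for s in range(0, dim, 4)]
        steps = [(np.arange(dim) >= s).astype(float) for s in range(0, dim, 4)]
        D = np.column_stack(ramps + steps)
        D /= np.linalg.norm(D, axis=0)

        rng = np.random.default_rng(6)
        profile = np.clip(np.arange(dim, dtype=float) - 20.0, 0, None) / 10.0
        noisy = profile + 0.05 * rng.standard_normal(dim)   # piecewise-linear range scan

        # Sparse coding against the FIXED dictionary: no costly dictionary learning
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4, fit_intercept=False)
        omp.fit(D, noisy)
        denoised = D @ omp.coef_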

  15. A Real-Time De-Noising Algorithm for E-Noses in a Wireless Sensor Network

    PubMed Central

    Qu, Jianfeng; Chai, Yi; Yang, Simon X.

    2009-01-01

    A wireless e-nose network system is developed for the special purpose of monitoring odorant gases and accurately estimating odor strength in and around livestock farms. This system simultaneously acquires accurate odor strength values remotely at various locations, where each node is an e-nose that includes four metal-oxide semiconductor (MOS) gas sensors. A modified Kalman filtering technique is proposed for collecting raw data and de-noising based on the output noise characteristics of those gas sensors. The measurement noise variance is obtained in real time by data analysis using the proposed sliding-window average method. The optimal system noise variance of the filter is obtained from experimental data. The application of Kalman filter theory to the acquisition of MOS gas sensor data is discussed. Simulation results demonstrate that the proposed method can adjust the Kalman filter parameters and significantly reduce the noise from the gas sensors. PMID:22399946
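
    A minimal scalar sketch of the idea follows: a random-walk Kalman filter whose measurement-noise variance R is re-estimated online from a sliding window of recent residuals. The process-noise value and window length are illustrative assumptions, not the paper's tuned parameters.

        import numpy as np

        def kalman_denoise(measurements, q=1e-4, window=20):
            # Random-walk state model: x_k = x_{k-1} + w, w ~ N(0, q)
            x, p, r = float(measurements[0]), 1.0, 1.0
            out = np.empty(len(measurements))
            residuals = []
            for i, z in enumerate(measurements):
                p = p + q                      # predict: variance grows by q
                k = p / (p + r)                # Kalman gain
                x = x + k * (z - x)            # update with measurement z
                p = (1.0 - k) * p
                out[i] = x
                residuals.append(z - x)
                if len(residuals) >= window:   # sliding-window estimate of R
                    r = max(np.var(residuals[-window:]), 1e-8)
            return out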

  16. Retinal blood vessel extraction using tunable bandpass filter and fuzzy conditional entropy.

    PubMed

    Sil Kar, Sudeshna; Maity, Santi P

    2016-09-01

    Extraction of blood vessels on retinal images plays a significant role in the screening of different ophthalmologic diseases. However, accurate extraction of the entire and individual types of vessel silhouette from noisy images with a poorly illuminated background is a complicated task. To this aim, an integrated system design platform is suggested in this work for vessel extraction using a sequential bandpass filter followed by fuzzy conditional entropy maximization on the matched filter response. At first, noise is eliminated from the image under consideration through curvelet-based denoising. To include the fine details and the relatively less thick vessel structures, the image is passed through a bank of sequential bandpass filter structures optimized for contrast enhancement. Fuzzy conditional entropy on the matched filter response is then maximized to find the set of multiple optimal thresholds to extract the different types of vessel silhouettes from the background. The Differential Evolution algorithm is used to determine the optimal gain in the bandpass filter and the combination of the fuzzy parameters. Using the multiple thresholds, the retinal image is classified into the thick, the medium and the thin vessels, including neovascularization. Performance evaluated on different publicly available retinal image databases shows that the proposed method is very efficient in identifying the diverse types of vessels. The proposed method is also efficient in extracting the abnormal and the thin blood vessels in pathological retinal images. The average values of true positive rate, false positive rate and accuracy offered by the method are 76.32%, 1.99% and 96.28%, respectively, for the DRIVE database and 72.82%, 2.6% and 96.16%, respectively, for the STARE database. Simulation results demonstrate that the proposed method outperforms the existing methods in detecting the various types of vessels and the neovascularization structures. The combination of curvelet transform and tunable bandpass filter is found to be very effective in edge enhancement, whereas fuzzy conditional entropy efficiently distinguishes vessels of different widths. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Stochastic resonance in an underdamped system with FitzHugh-Nagumo potential for weak signal detection

    NASA Astrophysics Data System (ADS)

    López, Cristian; Zhong, Wei; Lu, Siliang; Cong, Feiyun; Cortese, Ignacio

    2017-12-01

    Vibration signals are widely used for bearing fault detection and diagnosis. When signals are acquired in the field, the faulty periodic signal is usually weak and concealed by noise. Various de-noising methods have been developed to extract the target signal from the raw signal. Stochastic resonance (SR) is a technique that changed the traditional denoising process: the weak periodic fault signal can be identified by adding an expression, the potential, to the raw signal and solving a differential equation problem. However, current SR methods have some deficiencies, such as limited filtering performance, the requirement of a low-frequency input signal, and sequential search for the optimal parameters. Consequently, in this study, we explore the application of SR based on the FitzHugh-Nagumo (FHN) potential to rolling bearing vibration signals. In addition, we improve the search for the optimal SR parameters by the use of particle swarm optimization (PSO). The effectiveness of the proposed method is verified using both simulated and real bearing data sets.
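
    The SR mechanism can be illustrated with the classic overdamped quartic bistable benchmark (simpler than the paper's underdamped FHN potential) integrated by the Euler-Maruyama scheme; all parameter values below are illustrative.

        import numpy as np

        # dx/dt = a*x - b*x^3 + A*sin(2*pi*f0*t) + sqrt(2D)*xi(t)
        a, b = 1.0, 1.0
        A, f0 = 0.3, 0.01        # weak periodic "fault" component
        D = 0.5                  # noise intensity
        dt, n = 0.01, 200_000

        rng = np.random.default_rng(7)
        x = np.zeros(n)
        for i in range(1, n):
            drift = a * x[i - 1] - b * x[i - 1] ** 3 + A * np.sin(2 * np.pi * f0 * i * dt)
            x[i] = x[i - 1] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

        # Near the optimal D, inter-well hopping synchronizes with f0, so the
        # output spectrum shows a pronounced peak at f0: the resonance effect
        spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
        freqs = np.fft.rfftfreq(n, dt)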

  18. Denoising and segmentation of retinal layers in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Dash, Puspita; Sigappi, A. N.

    2018-04-01

    Optical Coherence Tomography (OCT) is an imaging technique used to localize the intra-retinal boundaries for the diagnosis of macular diseases. Due to speckle noise and low image contrast, accurate segmentation of individual retinal layers is difficult. To address this, a method for retinal layer segmentation from OCT images is presented. This paper proposes a pre-processing filtering approach for denoising and a graph-based technique for segmenting the retinal layers in OCT images. These techniques are used for segmentation of retinal layers in normal subjects as well as in patients with Diabetic Macular Edema. An algorithm based on gradient information and shortest-path search is applied to optimize the edge selection. In this paper the four main layers of the retina are segmented, namely the internal limiting membrane (ILM), retinal pigment epithelium (RPE), inner nuclear layer (INL) and outer nuclear layer (ONL). The proposed method is applied to a database of OCT images of ten normal subjects and twenty DME-affected patients, and the results are found to be promising.

  19. [Study on Differential Optical Absorption Spectroscopy Data Processing Based on Chirp-Z Transformation].

    PubMed

    Zheng, Hai-ming; Li, Guang-jie; Wu, Hao

    2015-06-01

    Differential optical absorption spectroscopy (DOAS) is a commonly used atmospheric pollution monitoring method. Denoising the monitored spectral data improves the inversion accuracy. The Fourier transform filtering method is capable of filtering out the noise in the spectral data, but the algorithm itself introduces errors. In this paper, a chirp-z transform method is put forward. By locally refining the Fourier transform spectrum, it retains the denoising effect of the Fourier transform while compensating for the error of the algorithm, which further improves the inversion accuracy. The paper studies the retrieval of SO2 and NO2 concentrations. The results show that simple division causes larger errors and is not very stable, and the chirp-z transform proves more accurate than the Fourier transform. Frequency spectrum analysis shows that the Fourier transform cannot resolve the distortion and weakening of the characteristic absorption spectrum, whereas the chirp-z transform is able to finely reconstruct specific parts of the frequency spectrum.
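
    The band-zooming step can be illustrated with scipy.signal.czt (available in SciPy 1.8 and later); the test signal, band edges and point count below are illustrative.

        import numpy as np
        from scipy.signal import czt

        fs = 1000.0                      # sampling rate, Hz (illustrative)
        t = np.arange(2048) / fs
        x = np.sin(2 * np.pi * 101.3 * t) + 0.1 * np.random.randn(t.size)

        # Zoom into the 95-110 Hz band with M points, much finer than the FFT grid.
        f1, f2, M = 95.0, 110.0, 512
        w = np.exp(-2j * np.pi * (f2 - f1) / (M * fs))  # ratio between contour points
        a = np.exp(2j * np.pi * f1 / fs)                # start point on the unit circle
        X = czt(x, m=M, w=w, a=a)
        freqs = f1 + np.arange(M) * (f2 - f1) / M
        print(f"refined peak estimate: {freqs[np.argmax(np.abs(X))]:.2f} Hz")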

  20. Numerical method based on transfer function for eliminating water vapor noise from terahertz spectra.

    PubMed

    Huang, Y; Sun, P; Zhang, Z; Jin, C

    2017-07-10

    Water vapor noise in the air affects the accuracy of optical parameters extracted from terahertz (THz) time-domain spectroscopy. In this paper, a numerical method is proposed to eliminate water vapor noise from THz spectra. According to the Van Vleck-Weisskopf function and the linear absorption spectrum of water molecules in the HITRAN database, we simulated the water vapor absorption spectrum and the real refractive index spectrum with a particular line width. The continuum effect of water vapor molecules was also considered. A theoretical transfer function for different humidity levels was constructed from the theoretical calculation of the water vapor absorption coefficient and the real refractive index. The THz signal of a Lacidipine sample containing water vapor background noise in the continuous frequency range of 0.5-1.8 THz was denoised using this method. The results show that the optical parameters extracted from the denoised signal are closer to those obtained in a dry nitrogen environment.

  1. Sliding window denoising K-Singular Value Decomposition and its application on rolling bearing impact fault diagnosis

    NASA Astrophysics Data System (ADS)

    Yang, Honggang; Lin, Huibin; Ding, Kang

    2018-05-01

    The performance of sparse feature extraction by the commonly used K-Singular Value Decomposition (K-SVD) method depends largely on the signal segment selected in rolling bearing diagnosis; furthermore, the computation is relatively slow and the dictionary becomes highly redundant when the fault signal is long. A new sliding window denoising K-SVD (SWD-KSVD) method is proposed, which uses only one small segment of the time-domain signal containing impacts to perform sliding window dictionary learning, and selects an optimal pattern carrying the oscillating information of the rolling bearing fault according to a maximum variance principle. An inner product operation between the optimal pattern and the whole fault signal is performed to enhance the signature of the moments at which the impacts occur. Lastly, the signal is reconstructed at the peak points of the inner product to realize the extraction of the rolling bearing fault features. Both simulation and experiments verify that the method can extract the fault features effectively.
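
    A simplified stand-in for the pattern-selection and enhancement steps, assuming the raw maximum-variance segment is used directly as the pattern; the actual method learns a dictionary from the segment before matching.

        import numpy as np

        def enhance_impacts(x, win=128):
            """Pick the max-variance window as a pattern and correlate it with x."""
            # variance of every sliding window, vectorized via cumulative sums
            c1 = np.cumsum(np.insert(x, 0, 0.0))
            c2 = np.cumsum(np.insert(x ** 2, 0, 0.0))
            mean = (c1[win:] - c1[:-win]) / win
            var = (c2[win:] - c2[:-win]) / win - mean ** 2
            start = int(np.argmax(var))
            pattern = x[start:start + win]
            # inner products of the pattern with the whole signal (matched-filter style)
            score = np.correlate(x, pattern, mode="same")
            return pattern, score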

  2. Data-driven mono-component feature identification via modified nonlocal means and MEWT for mechanical drivetrain fault diagnosis

    NASA Astrophysics Data System (ADS)

    Pan, Jun; Chen, Jinglong; Zi, Yanyang; Yuan, Jing; Chen, Binqiang; He, Zhengjia

    2016-12-01

    It is important to perform condition monitoring and fault diagnosis on rolling mills in steel-making plants to ensure economic benefit. However, timely fault identification of key parts in a complicated industrial system under operating conditions is still a challenging task, since the acquired condition signals are usually multi-modulated and inevitably mixed with strong noise. Therefore, a new data-driven mono-component identification method is proposed in this paper for diagnostic purposes. First, a modified nonlocal means algorithm (NLmeans) is proposed to reduce noise in vibration signals without destroying their original Fourier spectrum structure; two modifications are investigated and performed to improve the denoising effect. Then, the modified empirical wavelet transform (MEWT) is applied to the de-noised signal to adaptively extract empirical mono-component modes. Finally, the modes are analyzed for mechanical fault identification based on the Hilbert transform. The results show that the proposed data-driven method offers superior performance during system operation compared with the MEWT method.

  3. Adaptive nonlocal means filtering based on local noise level for CT denoising

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhoubo; Trzasko, Joshua D.; Lake, David S.

    2014-01-15

    Purpose: To develop and evaluate an image-domain noise reduction method based on a modified nonlocal means (NLM) algorithm that is adaptive to local noise level of CT images and to implement this method in a time frame consistent with clinical workflow. Methods: A computationally efficient technique for local noise estimation directly from CT images was developed. A forward projection, based on a 2D fan-beam approximation, was used to generate the projection data, with a noise model incorporating the effects of the bowtie filter and automatic exposure control. The noise propagation from projection data to images was analytically derived. The analytical noise map was validated using repeated scans of a phantom. A 3D NLM denoising algorithm was modified to adapt its denoising strength locally based on this noise map. The performance of this adaptive NLM filter was evaluated in phantom studies in terms of in-plane and cross-plane high-contrast spatial resolution, noise power spectrum (NPS), subjective low-contrast spatial resolution using the American College of Radiology (ACR) accreditation phantom, and objective low-contrast spatial resolution using a channelized Hotelling model observer (CHO). A graphics processing unit (GPU) implementation of the noise map calculation and the adaptive NLM filtering was developed to meet the demands of clinical workflow. Adaptive NLM was piloted on lower dose scans in clinical practice. Results: The local noise level estimation matches the noise distribution determined from multiple repetitive scans of a phantom, demonstrated by small variations in the ratio map between the analytical noise map and the one calculated from repeated scans. The phantom studies demonstrated that the adaptive NLM filter can reduce noise substantially without degrading the high-contrast spatial resolution, as illustrated by modulation transfer function and slice sensitivity profile results. The NPS results show that adaptive NLM denoising preserves the shape and peak frequency of the noise power spectrum better than commercial smoothing kernels, and indicate that the spatial resolution at low contrast levels is not significantly degraded. Both the subjective evaluation using the ACR phantom and the objective evaluation on a low-contrast detection task using a CHO model observer demonstrate an improvement on low-contrast performance. The GPU implementation can process and transfer 300 slice images within 5 min. On patient data, the adaptive NLM algorithm provides more effective denoising of CT data throughout a volume than standard NLM, and may allow significant lowering of radiation dose. After a two-week pilot study of lower dose CT urography and CT enterography exams, both GI and GU radiology groups elected to proceed with permanent implementation of adaptive NLM in their GI and GU CT practices. Conclusions: This work describes and validates a computationally efficient technique for noise map estimation directly from CT images, and an adaptive NLM filtering based on this noise map, on phantom and patient data. Both the noise map calculation and the adaptive NLM filtering can be performed in times that allow integration with clinical workflow. The adaptive NLM algorithm provides effective denoising of CT data throughout a volume, and may allow significant lowering of radiation dose.

  4. A joint resonance frequency estimation and in-band noise reduction method for enhancing the detectability of bearing fault signals

    NASA Astrophysics Data System (ADS)

    Bozchalooi, I. Soltani; Liang, Ming

    2008-05-01

    The vibration signal measured from a bearing contains vital information for prognostics and health assessment purposes. However, when bearings are installed as part of a complex mechanical system, the measured signal is often heavily clouded by various noises due to the compounded effect of interferences from other machine elements and background noise present in the measuring device. As such, reliable condition monitoring would not be possible without proper de-noising. This is particularly true for incipient bearing faults with very weak signature signals. A new de-noising scheme is proposed in this paper to enhance the vibration signals acquired from faulty bearings. This de-noising scheme features a spectral subtraction to trim down the in-band noise prior to wavelet filtering. The Gabor wavelet is used in the wavelet transform, and its parameters, i.e., the scale and shape factor, are selected in separate steps. The proper scale is found based on a novel resonance estimation algorithm. This algorithm makes use of the information derived from variable shaft rotational speed, even though such variation is highly undesirable in fault detection since it complicates the process substantially. The shape factor value is then selected by minimizing a smoothness index. This index is defined as the ratio of the geometric mean to the arithmetic mean of the wavelet coefficient moduli. De-noising results are presented for simulated signals and experimental data acquired from both normal and faulty bearings with defective outer race, inner race, and rolling elements.
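
    The smoothness index follows directly from its definition as the ratio of the geometric mean to the arithmetic mean of the wavelet coefficient moduli; a direct sketch:

        import numpy as np

        def smoothness_index(coeffs, eps=1e-12):
            """Ratio of geometric to arithmetic mean of coefficient moduli.

            Values near 1 indicate a smooth, noise-like coefficient envelope;
            small values indicate sparse, impact-dominated coefficients.
            """
            m = np.abs(coeffs) + eps
            geometric = np.exp(np.mean(np.log(m)))
            arithmetic = np.mean(m)
            return geometric / arithmetic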

  5. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large-scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e., one based on the SFB decomposition of a 3D field, and to accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPix angular pixelisation scheme. We test the 3D wavelet transform and, as a toy application, apply a denoising algorithm in wavelet space to the Virgo large-box cosmological simulations, finding that we can successfully remove noise without much loss to the large-scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html

  6. A shape-preserving oriented partial differential equation based on a new fidelity term for electronic speckle pattern interferometry fringe patterns denoising

    NASA Astrophysics Data System (ADS)

    Xu, Wenjun; Tang, Chen; Zheng, Tingyue; Qiu, Yue

    2018-07-01

    Oriented partial differential equations (OPDEs) have been demonstrated to be a powerful tool for preserving the integrity of fringes while filtering electronic speckle pattern interferometry (ESPI) fringe patterns. However, the main drawback of OPDE-based methods is that many iterations are often needed, which causes a change in the shape of the fringes. A change in the shape of fringes affects the accuracy of subsequent fringe analysis. In this paper, we focus on preserving the shape of fringes while filtering, suggested here for the first time. We propose a shape-preserving OPDE for ESPI fringe pattern denoising by introducing a new fidelity term into the previous second-order single oriented PDE (SOOPDE). In the proposed fidelity term, the evolution image is subtracted from the shearlet-shrinkage result of the original noisy image. The proposed shape-preserving OPDE is capable of eliminating noise effectively, keeping the integrity of fringes and, more importantly, preserving the shape of fringes. We test the proposed shape-preserving OPDE on three computer-simulated and three experimentally obtained ESPI fringe patterns with poor quality. Furthermore, we compare our model with three representative filtering methods, including the widely used SOOPDE, the shearlet transform and coherence-enhancing diffusion (CED), and we compare the proposed fidelity term with the traditional fidelity term. Experimental results show that the proposed shape-preserving OPDE not only yields filtered images with visual quality on par with those produced by CED, the state-of-the-art method for ESPI fringe pattern denoising, but also keeps the shape of the ESPI fringe patterns.

  7. A fast non-local means algorithm based on integral image and reconstructed similar kernel

    NASA Astrophysics Data System (ADS)

    Lin, Zheng; Song, Enmin

    2018-03-01

    Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) approach is a remarkable denoising technique, but its computational time complexity is high. In this paper, we design a fast NLM algorithm based on an integral image and a reconstructed similar kernel. First, the integral image is introduced into the traditional NLM algorithm. In doing so, it removes a great deal of repetitive operations in the parallel processing, which greatly improves the running speed of the algorithm. Secondly, in order to amend the error of the integral image, we construct a similar window resembling the Gaussian kernel in a pyramidal stacking pattern. Finally, in order to eliminate the influence of replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we propose a scheme to construct a similar kernel with a size of 3 x 3 in a neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of Peak Signal-to-Noise Ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
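
    The integral-image idea can be sketched as follows: for every patch offset, the squared pixel differences are integrated once, so the patch distance at every pixel is then obtained from four table lookups. Parameters are illustrative, and the paper's reconstructed similar kernel is omitted here.

        import numpy as np

        def nlm_integral(img, f=1, t=5, h=10.0):
            """Non-local means sped up with one integral image per patch offset.

            f: patch radius, t: search-window radius, h: filter strength.
            """
            img = img.astype(np.float64)
            H, W = img.shape
            pad = f + t
            P = 2 * f + 1                                   # patch side length
            u2 = np.pad(img, pad + t, mode="reflect")
            u = u2[t:u2.shape[0] - t, t:u2.shape[1] - t]    # (H+2*pad, W+2*pad)
            Hp, Wp = u.shape
            acc = np.zeros((H, W))
            wsum = np.zeros((H, W))
            for dy in range(-t, t + 1):
                for dx in range(-t, t + 1):
                    v = u2[t + dy:t + dy + Hp, t + dx:t + dx + Wp]  # shifted copy
                    d = (u - v) ** 2
                    S = np.zeros((Hp + 1, Wp + 1))
                    S[1:, 1:] = d.cumsum(axis=0).cumsum(axis=1)     # integral image
                    # patch SSD around every pixel from four lookups
                    ssd = (S[t + P:t + P + H, t + P:t + P + W]
                           - S[t:t + H, t + P:t + P + W]
                           - S[t + P:t + P + H, t:t + W]
                           + S[t:t + H, t:t + W])
                    w = np.exp(-ssd / (h * h * P * P))
                    acc += w * v[pad:pad + H, pad:pad + W]
                    wsum += w
            return acc / wsum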

  8. 3D seismic data de-noising and reconstruction using Multichannel Time Slice Singular Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Rekapalli, Rajesh; Tiwari, R. K.; Sen, Mrinal K.; Vedanti, Nimisha

    2017-05-01

    Noise and data gaps complicate seismic data processing and subsequently cause difficulties in geological interpretation. We discuss a recent development and application of Multi-channel Time Slice Singular Spectrum Analysis (MTSSSA) for 3D seismic data de-noising in the time domain. In addition, L1-norm-based simultaneous data gap filling of 3D seismic data using MTSSSA is also discussed. We discriminate the noise from individual time slices of 3D volumes by analyzing the eigen triplets of the trajectory matrix. We first tested the efficacy of the method on 3D synthetic seismic data contaminated with noise, and then applied it to post-stack seismic reflection data acquired from the Sleipner CO2 storage site (pre and post CO2 injection) in Norway. Our analysis suggests that the MTSSSA algorithm is effective in enhancing the S/N for better identification of amplitude anomalies, along with simultaneous data gap filling. The bright spots identified in the de-noised data indicate upward migration of CO2 towards the top of the Utsira formation. The reflections identified by applying MTSSSA to pre- and post-injection data correlate well with the geology of the Southern Viking Graben (SVG).

  9. Mixed Gaussian-Impulse Noise Image Restoration Via Total Variation

    DTIC Science & Technology

    2012-05-01

    Several Total Variation (TV) regularization methods have recently been proposed to address denoising under mixed Gaussian and impulse noise.

  10. A universal denoising and peak picking algorithm for LC-MS based on matched filtration in the chromatographic time domain.

    PubMed

    Andreev, Victor P; Rejtar, Tomas; Chen, Hsuan-Shen; Moskovets, Eugene V; Ivanov, Alexander R; Karger, Barry L

    2003-11-15

    A new denoising and peak picking algorithm (MEND, matched filtration with experimental noise determination) for analysis of LC-MS data is described. The algorithm minimizes both random and chemical noise in order to determine MS peaks corresponding to sample components. Noise characteristics in the data set are experimentally determined and used for efficient denoising. MEND is shown to enable low-intensity peaks to be detected, thus providing additional useful information for sample analysis. The process of denoising, performed in the chromatographic time domain, does not distort peak shapes in the m/z domain, allowing accurate determination of MS peak centroids, including low-intensity peaks. MEND has been applied to denoising of LC-MALDI-TOF-MS and LC-ESI-TOF-MS data for tryptic digests of protein mixtures. MEND is shown to suppress chemical and random noise and baseline fluctuations, as well as filter out false peaks originating from the matrix (MALDI) or mobile phase (ESI). In addition, MEND is shown to be effective for protein expression analysis by allowing selection of a large number of differentially expressed ICAT pairs, due to increased signal-to-noise ratio and mass accuracy.
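
    A minimal sketch of matched filtration in the chromatographic time domain for one extracted-ion chromatogram, with a zero-mean Gaussian filter and a robust noise estimate standing in for MEND's experimental noise determination; the peak width and threshold are illustrative.

        import numpy as np

        def matched_filter_peaks(eic, sigma_pts=5.0, k=4.0):
            """Matched filtration of one extracted-ion chromatogram + peak picking."""
            # zero-mean Gaussian matched filter spanning +/- 4 sigma
            half = int(4 * sigma_pts)
            s = np.arange(-half, half + 1)
            g = np.exp(-s ** 2 / (2 * sigma_pts ** 2))
            g -= g.mean()                        # suppress baseline response
            y = np.convolve(eic, g[::-1], mode="same")
            # robust noise estimate from the median absolute deviation
            noise = 1.4826 * np.median(np.abs(y - np.median(y)))
            peaks = [i for i in range(1, y.size - 1)
                     if y[i] > k * noise and y[i] >= y[i - 1] and y[i] >= y[i + 1]]
            return y, peaks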

  11. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis.

    PubMed

    Lian, Yanyun; Song, Zhijian

    2014-01-01

    Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning and monitoring of therapy. However, the manual tumor segmentation commonly used in the clinic is time-consuming and challenging, and none of the existing automated methods are sufficiently robust, reliable and efficient for clinical application. An accurate and automated tumor segmentation method has been developed that provides reproducible and objective results close to manual segmentation. Based on the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor position. First, the image to be segmented was normalized, rotated, denoised and bisected. Subsequently, vertical and horizontal sliding windows were applied in turn: two windows in the left and right parts of the brain image move simultaneously, pixel by pixel, while the correlation coefficient between them is computed; the window pair with the minimal correlation coefficient is retained, the window with the larger average gray value indicates the location of the tumor, and the pixel with the largest gray value within it serves as the tumor locating point. Finally, the segmentation threshold was set to the average gray value of the pixels in a square centered at the locating point with a side length of 10 pixels, and threshold segmentation and morphological operations were used to obtain the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. As a result, the average ratio of correct location was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 per scan, and the average time spent on one scan was 40 seconds. A fully automated, simple and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient is a new and effective feature for tumor location.
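
    The symmetry test at the heart of the locator can be sketched as follows: mirror the right half of a slice, slide windows over both halves in lock-step, and keep the pair with the minimal correlation coefficient; the window size and step are illustrative.

        import numpy as np

        def locate_asymmetry(slice2d, win=32, step=4):
            """Find the mirrored window pair with minimal correlation coefficient."""
            H, W = slice2d.shape
            left = slice2d[:, :W // 2]
            right = slice2d[:, W // 2:][:, ::-1]       # mirrored right half
            best = (np.inf, None)
            for r in range(0, H - win + 1, step):
                for c in range(0, W // 2 - win + 1, step):
                    a = left[r:r + win, c:c + win].ravel()
                    b = right[r:r + win, c:c + win].ravel()
                    if a.std() > 0 and b.std() > 0:    # skip constant windows
                        cc = np.corrcoef(a, b)[0, 1]
                        if cc < best[0]:
                            best = (cc, (r, c))
            cc, pos = best
            r, c = pos
            # the window with the larger mean gray value marks the tumor side
            lm = left[r:r + win, c:c + win].mean()
            rm = right[r:r + win, c:c + win].mean()
            side = "left" if lm > rm else "right"
            return pos, cc, side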

  12. HARDI denoising using nonlocal means on S2

    NASA Astrophysics Data System (ADS)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in the detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the problem of denoising HARDI data of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters that is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant both to spatial rotations and to the particular sampling scheme in use. We provide a detailed description of the proposed filtering procedure and its efficient implementation, as well as experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity compared to a number of alternative methods.

  13. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration, not only for visual inspection but also for computerized processing in brain magnetic resonance (MR) image analysis, such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automating these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics; 2) gray-level co-occurrence matrix (GLCM); 3) gray-level run-length matrix (GLRLM); and 4) Tamura texture features. To obtain the ranking of discrimination of these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automated system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  14. Unmixing-Based Denoising as a Pre-Processing Step for Coral Reef Analysis

    NASA Astrophysics Data System (ADS)

    Cerra, D.; Traganos, D.; Gege, P.; Reinartz, P.

    2017-05-01

    Coral reefs, among the world's most biodiverse and productive submerged habitats, have faced several mass bleaching events due to climate change during the past 35 years. In the course of this century, global warming and ocean acidification are expected to cause corals to become increasingly rare on reef systems. This will result in a sharp decrease in the biodiversity of reef communities and carbonate reef structures. Coral reefs may be mapped, characterized and monitored through remote sensing. Hyperspectral images in particular are well suited to coral monitoring, as they are characterized by very rich spectral information, which results in a strong discrimination power to characterize a target of interest and to separate healthy corals from bleached ones. Being submerged habitats, coral reef systems are difficult to analyse in airborne or satellite images, as the relevant information is conveyed in bands in the blue range, which exhibit a lower signal-to-noise ratio (SNR) with respect to other spectral ranges; furthermore, water absorbs most of the incident solar radiation, further decreasing the SNR. Derivative features, which are important in coral analysis, are greatly affected by the noise present in the relevant spectral bands, justifying the need for new denoising techniques able to preserve local spatial and spectral features. In this paper, Unmixing-based Denoising (UBD) is used to enable analysis of a hyperspectral image acquired over a coral reef system in the Red Sea based on derivative features. UBD reconstructs the dataset pixelwise with reduced noise effects, by forcing each spectrum to be a linear combination of other reference spectra, exploiting the high dimensionality of hyperspectral datasets. Results show clear enhancements with respect to traditional denoising methods based on spatial and spectral smoothing, facilitating the coral detection task.

  15. Removal of intensity bias in magnitude spin-echo MRI images by nonlinear diffusion filtering

    NASA Astrophysics Data System (ADS)

    Samsonov, Alexei A.; Johnson, Chris R.

    2004-05-01

    MRI data analysis is routinely done on the magnitude part of complex images. While both the real and imaginary image channels contain Gaussian noise, magnitude MRI data follow a Rice distribution. However, conventional filtering methods often assume image noise to be zero-mean and Gaussian distributed, and estimating the underlying image from magnitude data therefore produces biased results. The bias may lead to significant image errors, especially in areas of low signal-to-noise ratio (SNR). Incorporating the Rice PDF into a noise filtering procedure can significantly complicate the method both algorithmically and computationally. In this paper, we demonstrate that the inherent phase smoothness of spin-echo MRI images can be utilized for separate filtering of the real and imaginary complex image channels to achieve unbiased image denoising. The concept is demonstrated with a novel nonlinear diffusion filtering scheme developed for complex image filtering. In the proposed method, the separate diffusion processes are coupled through combined diffusion coefficients determined from the image magnitude. The new method has been validated with simulated and real MRI data, and has provided efficient denoising and bias removal in conventional and black-blood angiography MRI images obtained using fast spin-echo acquisition protocols.

  16. [A mobile sensor for remote detection of natural gas leakage].

    PubMed

    Zhang, Shuai; Liu, Wen-qing; Zhang, Yu-jun; Kan, Rui-feng; Ruan, Jun; Wang, Li-ming; Yu, Dian-qiang; Dong, Jin-ting; Han, Xiao-lei; Cui, Yi-ben; Liu, Jian-guo

    2012-02-01

    The detection of natural gas pipeline leaks has become a significant issue for personal safety, environmental protection and the security of state property. However, leak detection is difficult because pipelines cover many areas and operate in complicated environments under varying conditions. A mobile sensor for remote detection of natural gas leakage based on scanning wavelength differential absorption spectroscopy (SWDAS) is introduced. An improved soft-threshold wavelet denoising is proposed based on an analysis of the characteristics of the reflection spectrum, and the results show that the signal-to-noise ratio (SNR) is increased threefold. When the light intensity is 530 nA, the minimum remote sensitivity is 80 ppm x m. A widely deployed SWDAS can perform quantitative remote sensing of natural gas leaks and locate the leak source precisely in a faster, safer and more intelligent way.
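
    The abstract does not give the improved soft-threshold function itself; the sketch below uses one common soft/hard compromise form together with the universal threshold, as an assumption-laden illustration of the general procedure with PyWavelets.

        import numpy as np
        import pywt

        def improved_threshold(c, thr, alpha=0.5):
            """Compromise between hard (alpha=0) and soft (alpha=1) thresholding."""
            out = np.zeros_like(c)
            keep = np.abs(c) > thr
            out[keep] = np.sign(c[keep]) * (np.abs(c[keep]) - alpha * thr)
            return out

        def wavelet_denoise(x, wavelet="db4", level=4, alpha=0.5):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            # universal threshold, with noise estimated from the finest detail scale
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thr = sigma * np.sqrt(2 * np.log(len(x)))
            coeffs[1:] = [improved_threshold(c, thr, alpha) for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)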

  17. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors of the optical system and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, the image noise is weakened by a Contourlet denoising algorithm. Secondly, a reliable estimate of the object support region is used to accelerate the convergence of the algorithm; we introduce an optimal threshold segmentation technique to improve the object support region. Finally, an object construction limit and a logarithm function are added to enhance the stability of the algorithm. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.

  18. Study on Underwater Image Denoising Algorithm Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Jian, Sun; Wen, Wang

    2017-02-01

    This paper analyzes the application of MATLAB to underwater image processing. The transmission characteristics of the underwater laser signal and the kinds of underwater noise are described, and common noise suppression algorithms, namely the Wiener filter, the median filter and the average filter, are presented. The advantages and disadvantages of each algorithm with regard to image sharpness and edge protection are then compared. A hybrid filter algorithm based on the wavelet transform is proposed, which can be used for color image denoising. Finally, the PSNR and NMSE of each algorithm are given, comparing their denoising ability.

  19. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise level restricts them from obtaining accurate results. Thus, we propose a method for denoising registered multiview point clouds with high noise to solve this problem. The proposed method aims to fully use redundant information to eliminate the interferences among point clouds of different views based on an iterative procedure. In each iteration, noisy points are either deleted or moved to their weighted average targets in accordance with two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested in experiments, both qualitatively and quantitatively. Results showed that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared to the truncated signed distance function and moving least squares (MLS). Moreover, the resulting low-noise point clouds can be further smoothed by MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.

  20. Wavelets in medical imaging

    NASA Astrophysics Data System (ADS)

    Zahra, Noor e.; Sevindir, Hulya Kodal; Aslan, Zafer; Siddiqi, A. H.

    2012-07-01

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, by applying the Fourier transform and the wavelet transform, analysis and denoising of one of the important biomedical signals, the EEG, is carried out. The presence of rhythm, template matching and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  1. Color Image Restoration Using Nonlocal Mumford-Shah Regularizers

    NASA Astrophysics Data System (ADS)

    Jung, Miyoun; Bresson, Xavier; Chan, Tony F.; Vese, Luminita A.

    We introduce several color image restoration algorithms based on the Mumford-Shah model and nonlocal image information. The standard Ambrosio-Tortorelli and Shah models are defined to work in a small local neighborhood, which is sufficient to denoise smooth regions with sharp boundaries. However, textures are not local in nature and require semi-local/non-local information to be denoised efficiently. Inspired by recent work (the NL-means of Buades, Coll and Morel, and the NL-TV of Gilboa and Osher), we extend the standard Ambrosio-Tortorelli and Shah approximations of the Mumford-Shah functional to work with nonlocal information, for better restoration of fine structures and textures. We present several applications of the proposed nonlocal MS regularizers in image processing, such as color image denoising, color image deblurring in the presence of Gaussian or impulse noise, color image inpainting, and color image super-resolution. In the formulation of nonlocal variational models for image deblurring with impulse noise, we propose an efficient preprocessing step for the computation of the weight function w. In all the applications, the proposed nonlocal regularizers produce superior results over the local ones, especially in image inpainting with large missing regions. Experimental results and comparisons between the proposed nonlocal methods and the local ones are shown.

  2. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising

    PubMed Central

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-01-01

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach. PMID:28509886

  3. An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising.

    PubMed

    Guo, Muran; Chen, Tao; Wang, Ben

    2017-05-16

    Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limitation of the number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method is based on the capability of MC to fill in holes in the virtual sensors and that of MC denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach.

  4. Methodological improvements in voxel-based analysis of diffusion tensor images: applications to study the impact of apolipoprotein E on white matter integrity.

    PubMed

    Newlander, Shawn M; Chu, Alan; Sinha, Usha S; Lu, Po H; Bartzokis, George

    2014-02-01

    To identify regional differences in apparent diffusion coefficient (ADC) and fractional anisotropy (FA) using customized preprocessing before voxel-based analysis (VBA) in 14 normal subjects with the specific genes that decrease (apolipoprotein [APO] E ε2) and that increase (APOE ε4) the risk of Alzheimer's disease. Diffusion tensor images (DTI) acquired at 1.5 Tesla were denoised with a total variation tensor regularization algorithm before affine and nonlinear registration to generate a common reference frame for the image volumes of all subjects. Anisotropic and isotropic smoothing with varying kernel sizes was applied to the aligned data before VBA to determine regional differences between cohorts segregated by allele status. VBA on the denoised tensor data identified regions of reduced FA in APOE ε4 compared with the APOE ε2 healthy older carriers. The most consistent results were obtained using the denoised tensor and anisotropic smoothing before statistical testing. In contrast, isotropic smoothing identified regional differences for small filter sizes alone, emphasizing that this method introduces bias in FA values for higher kernel sizes. Voxel-based DTI analysis can be performed on low signal to noise ratio images to detect subtle regional differences in cohorts using the proposed preprocessing techniques. Copyright © 2013 Wiley Periodicals, Inc.

  5. The application of wavelet denoising in material discrimination system

    NASA Astrophysics Data System (ADS)

    Fu, Kenneth; Ranta, Dale; Guest, Clark; Das, Pankaj

    2010-01-01

    Recently, it has become desirable for cargo inspection imaging systems to provide a material discrimination function. This is done by scanning the cargo container with x-rays at two different energy levels. The ratio of attenuations of the two energy scans can provide information on the composition of the material. However, with the statistical error from noise, the accuracy of such systems can be low. Because the moving source emits the two energies of x-rays alternately, the images from the two scans are not identical, and edges of objects in the two images are not perfectly aligned. Moreover, digitization creates blurry-edge artifacts, and different energy x-rays produce different edge spread functions. These combined effects contribute to a source of false classification, namely the "edge effect." Other types of false classification are caused by noise, mainly the Poisson noise associated with photons. The Poisson noise in x-ray images can be dealt with using either a Wiener filter or a wavelet shrinkage denoising approach. In this paper, we propose a method that uses the wavelet shrinkage denoising approach to enhance the performance of the material identification system. Test results show that this wavelet-based approach improves performance in object detection and in eliminating false positives due to the edge effect.

  6. Research on vibration signal analysis and extraction method of gear local fault

    NASA Astrophysics Data System (ADS)

    Yang, X. F.; Wang, D.; Ma, J. F.; Shao, W.

    2018-02-01

    Gears are the main connecting and power transmission parts in mechanical equipment. If a fault occurs, it directly affects the running state of the whole machine and may even endanger personal safety. It is therefore of theoretical significance and practical value to study the extraction of gear fault signals and the diagnosis of gear faults. In this paper, taking the local gear fault as the research object, a vibration model of the gear fault mechanism is set up, the vibration mechanism of the local gear fault is derived, and the similarities and differences between the vibration signals of healthy gears and gears with local faults are analyzed. In the MATLAB environment, a wavelet transform algorithm is used to denoise the fault signal, and the Hilbert transform is used to demodulate the fault vibration signal. The results show that the method can denoise a mechanical vibration signal with strong noise and extract the local fault feature information from the fault vibration signal.
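
    The demodulation step reduces to taking the magnitude of the analytic signal and inspecting the spectrum of the envelope; a self-contained sketch on a toy amplitude-modulated gear signal, with illustrative sampling rate and frequencies.

        import numpy as np
        from scipy.signal import hilbert

        fs = 12000.0                             # sampling rate, Hz (illustrative)
        t = np.arange(2 ** 14) / fs
        # toy faulty-gear signal: carrier at the mesh frequency, modulated at 30 Hz
        x = (1 + 0.5 * np.cos(2 * np.pi * 30 * t)) * np.sin(2 * np.pi * 1200 * t)
        x += 0.3 * np.random.randn(t.size)

        envelope = np.abs(hilbert(x))            # Hilbert envelope demodulation
        env_spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        print("dominant modulation:", freqs[np.argmax(env_spec)], "Hz")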

  7. Fast Sparse Coding for Range Data Denoising with Sparse Ridges Constraint

    PubMed Central

    Lao, Mingjie; Sang, Yongsheng; Wen, Fei; Zhai, Ruifang

    2018-01-01

    Light detection and ranging (LiDAR) sensors have been widely deployed on intelligent systems such as unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) to perform localization, obstacle detection, and navigation tasks. Thus, research into range data processing with competitive performance in terms of both accuracy and efficiency has attracted increasing attention. Sparse coding has revolutionized signal processing and led to state-of-the-art performance in a variety of applications. However, dictionary learning, which plays the central role in sparse coding techniques, is computationally demanding, resulting in its limited applicability in real-time systems. In this study, we propose sparse coding algorithms with a fixed pre-learned ridge dictionary to realize range data denoising via leveraging the regularity of laser range measurements in man-made environments. Experiments on both synthesized data and real data demonstrate that our method obtains accuracy comparable to that of sophisticated sparse coding methods, but with much higher computational efficiency. PMID:29734793

  8. Rolling Bearing Fault Diagnosis Based on an Improved HTT Transform

    PubMed Central

    Tang, Guiji; Tian, Tian; Zhou, Chong

    2018-01-01

    When rolling bearing failure occurs, vibration signals generally contain different signal components, such as impulsive fault feature signals, background noise and harmonic interference signals. One of the most challenging aspects of rolling bearing fault diagnosis is how to inhibit noise and harmonic interference signals, while enhancing impulsive fault feature signals. This paper presents a novel bearing fault diagnosis method, namely an improved Hilbert time–time (IHTT) transform, by combining a Hilbert time–time (HTT) transform with principal component analysis (PCA). Firstly, the HTT transform was performed on vibration signals to derive a HTT transform matrix. Then, PCA was employed to de-noise the HTT transform matrix in order to improve the robustness of the HTT transform. Finally, the diagonal time series of the de-noised HTT transform matrix was extracted as the enhanced impulsive fault feature signal and the contained fault characteristic information was identified through further analyses of amplitude and envelope spectrums. Both simulated and experimental analyses validated the superiority of the presented method for detecting bearing failures. PMID:29662013

  9. CUDA-based acceleration of collateral filtering in brain MR images

    NASA Astrophysics Data System (ADS)

    Li, Cheng-Yuan; Chang, Herng-Hua

    2017-02-01

    Image denoising is one of the fundamental and essential tasks within image processing. In medical imaging, finding an effective algorithm that can remove random noise in MR images is important. This paper proposes an effective noise reduction method for brain magnetic resonance (MR) images. Our approach is based on the collateral filter which is a more powerful method than the bilateral filter in many cases. However, the computation of the collateral filter algorithm is quite time-consuming. To solve this problem, we improved the collateral filter algorithm with parallel computing using GPU. We adopted CUDA, an application programming interface for GPU by NVIDIA, to accelerate the computation. Our experimental evaluation on an Intel Xeon CPU E5-2620 v3 2.40GHz with a NVIDIA Tesla K40c GPU indicated that the proposed implementation runs dramatically faster than the traditional collateral filter. We believe that the proposed framework has established a general blueprint for achieving fast and robust filtering in a wide variety of medical image denoising applications.

  10. An approach to analyze the breast tissues in infrared images using nonlinear adaptive level sets and Riesz transform features.

    PubMed

    Prabha, S; Suganthi, S S; Sujatha, C M

    2015-01-01

    Breast thermography is a potential imaging method for the early detection of breast cancer. Pathological conditions can be determined by measuring temperature variations in the abnormal breast regions. Accurate delineation of breast tissues is reported to be a challenging task due to inherent limitations of infrared images such as low contrast, low signal-to-noise ratio and the absence of clear edges. A segmentation technique is attempted to delineate the breast tissues by detecting proper lower breast boundaries and inframammary folds. Characteristic features are extracted to analyze the asymmetrical thermal variations in normal and abnormal breast tissues. An automated analysis of thermal variations of breast tissues is attempted using nonlinear adaptive level sets and the Riesz transform. Breast thermal images are initially subjected to Stein's unbiased risk estimate based orthonormal wavelet denoising. These denoised images are enhanced using the contrast-limited adaptive histogram equalization method. The breast tissues are then segmented using a non-linear adaptive level set method. The phase map of the enhanced image is integrated into the level set framework for final boundary estimation. The segmented results are validated against the corresponding ground truth images using overlap and regional similarity metrics. The segmented images are further processed with the Riesz transform, and structural texture features are derived from the transformed coefficients to analyze pathological conditions of breast tissues. Results show that the estimated average signal-to-noise ratio of denoised images and the average sharpness of enhanced images are improved by 38% and 6%, respectively. The interscale consideration adopted in the denoising algorithm is able to improve the signal-to-noise ratio while preserving edges. The proposed segmentation framework could delineate the breast tissues with a high degree of correlation (97%) between the segmented and ground truth areas. Also, the average segmentation accuracy and sensitivity are found to be 98%. Similarly, the maximum regional overlap between segmented and ground truth images obtained using the volume similarity measure is observed to be 99%. Directionality as a feature showed a considerable difference, of about 11%, between normal and abnormal tissues. The proposed framework for breast thermal image analysis, aided with the necessary preprocessing, is found to be useful in assisting the early diagnosis of breast abnormalities.

  11. Locally Based Kernel PLS Regression De-noising with Application to Event-Related Potentials

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Tino, Peter

    2002-01-01

    Our approach exploits the close relation between signal de-noising and regression problems that deal with the estimation of functions reflecting the dependency between a set of inputs and dependent outputs corrupted by some level of noise.

  12. Spectral filtering of gradient for l2-norm frequency-domain elastic waveform inversion

    NASA Astrophysics Data System (ADS)

    Oh, Ju-Won; Min, Dong-Joo

    2013-05-01

    To enhance the robustness of the l2-norm elastic full-waveform inversion (FWI), we propose a denoise function that is incorporated into the single-frequency gradients. Because field data are noisy and modelled data are noise-free, the denoise function is designed based on the ratio of modelled data to field data summed over shots and receivers. We first take the sums of the modelled data and field data over shots, and then take the sums of the absolute values of the resulting modelled and field data over the receivers. Due to the monochromatic property of wavefields at each frequency, signals in both the modelled and field data tend to be either cancelled out or maintained, whereas certain types of noise, particularly random noise, can be amplified in the field data. As a result, the spectral distribution of the denoise function is inversely proportional to the ratio of noise to signal at each frequency, which helps prevent the noise-dominated gradients from contributing to the model parameter updates. Numerical examples show that the spectral distribution of the denoise function resembles a frequency filter determined by the spectrum of the signal-to-noise (S/N) ratio during the inversion process, with little human intervention. The denoise function is applied to the elastic FWI of synthetic data from a modified version of the Marmousi-2 model, contaminated with three types of random noise: white, low-frequency and high-frequency. Based on the spectrum of S/N ratios at each frequency, the denoise function mainly suppresses the noise-dominated single-frequency gradients, which improves the inversion results at the cost of spatial resolution.
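
    The denoise function can be written down directly from its verbal definition; a sketch assuming, for each frequency, complex data arrays indexed as [shot, receiver].

        import numpy as np

        def denoise_weight(d_mod, d_obs):
            """Frequency weight from modelled (noise-free) and field (noisy) data.

            d_mod, d_obs: complex arrays of shape (n_shots, n_receivers) for one
            frequency. The ratio becomes small where noise inflates the field data.
            """
            num = np.sum(np.abs(np.sum(d_mod, axis=0)))  # sum over shots, |.|,
            den = np.sum(np.abs(np.sum(d_obs, axis=0)))  # then sum over receivers
            return num / (den + 1e-30)

        # per-frequency gradients would then be scaled by denoise_weight(...)
        # before being stacked into the model update.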

  13. iSAP: Interactive Sparse Astronomical Data Analysis Packages

    NASA Astrophysics Data System (ADS)

    Fourt, O.; Starck, J.-L.; Sureau, F.; Bobin, J.; Moudden, Y.; Abrial, P.; Schmitt, J.

    2013-03-01

    iSAP consists of three programs, written in IDL, which together are useful for spherical data analysis. MR/S (MultiResolution on the Sphere) contains routines for wavelet, ridgelet and curvelet transform on the sphere, and applications such denoising on the sphere using wavelets and/or curvelets, Gaussianity tests and Independent Component Analysis on the Sphere. MR/S has been designed for the PLANCK project, but can be used for many other applications. SparsePol (Polarized Spherical Wavelets and Curvelets) has routines for polarized wavelet, polarized ridgelet and polarized curvelet transform on the sphere, and applications such denoising on the sphere using wavelets and/or curvelets, Gaussianity tests and blind source separation on the Sphere. SparsePol has been designed for the PLANCK project. MS-VSTS (Multi-Scale Variance Stabilizing Transform on the Sphere), designed initially for the FERMI project, is useful for spherical mono-channel and multi-channel data analysis when the data are contaminated by a Poisson noise. It contains routines for wavelet/curvelet denoising, wavelet deconvolution, multichannel wavelet denoising and deconvolution.

  14. Application of adaptive filters in denoising magnetocardiogram signals

    NASA Astrophysics Data System (ADS)

    Khan, Pathan Fayaz; Patel, Rajesh; Sengottuvel, S.; Saipriya, S.; Swain, Pragyna Parimita; Gireesan, K.

    2017-05-01

    Magnetocardiography (MCG) is the measurement of the weak magnetic fields of the heart using Superconducting QUantum Interference Devices (SQUIDs). Though the measurements are performed inside magnetically shielded rooms (MSR) to reduce external electromagnetic disturbances, interferences caused by sources inside the shielded room cannot be attenuated this way. The work presented here reports the application of adaptive filters to denoise MCG signals. Two adaptive noise cancellation approaches, namely the least mean squares (LMS) algorithm and the recursive least squares (RLS) algorithm, are applied to denoise MCG signals and the results are compared. It is found that both algorithms effectively remove noisy wiggles from MCG traces, significantly improving the quality of the cardiac features in the traces. The calculated signal-to-noise ratio (SNR) of the denoised MCG traces is found to be slightly higher for the LMS algorithm than for the RLS algorithm. The results encourage the use of adaptive techniques to suppress noise due to the power-line frequency and its harmonics, which occur frequently in biomedical measurements.
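
    A textbook LMS noise canceller of the kind compared in the paper, assuming a reference channel that picks up the interference but not the cardiac signal; the filter length and step size are illustrative.

        import numpy as np

        def lms_cancel(primary, reference, n_taps=32, mu=0.01):
            """LMS adaptive noise cancellation.

            primary:   measured MCG trace = signal + interference
            reference: input correlated with the interference only
            Returns the error signal e, which approximates the cleaned trace.
            """
            w = np.zeros(n_taps)
            e = np.zeros(primary.size)
            for n in range(n_taps, primary.size):
                x = reference[n - n_taps:n][::-1]  # most recent sample first
                y = w @ x                          # interference estimate
                e[n] = primary[n] - y              # error = denoised output
                w += 2 * mu * e[n] * x             # LMS weight update
            return e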

  15. SVD and Hankel matrix based de-noising approach for ball bearing fault detection and its assessment using artificial faults

    NASA Astrophysics Data System (ADS)

    Golafshan, Reza; Yuce Sanliturk, Kenan

    2016-03-01

    Ball bearings remain among the most crucial components in industrial machines and, due to their critical role, it is of great importance to monitor their condition during operation. However, due to the background noise in the acquired signals, it is not always possible to identify probable faults. This incapability makes the de-noising process one of the most essential steps in the field of Condition Monitoring (CM) and fault detection. In the present study, a Singular Value Decomposition (SVD) and Hankel matrix based de-noising process is successfully applied to ball bearing time-domain vibration signals, as well as to their spectra, to eliminate the background noise and improve the reliability of the fault detection process. Test cases conducted using experimental as well as simulated vibration signals demonstrate the effectiveness of the proposed de-noising approach for ball bearing fault detection.
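
    The de-noising process can be sketched in three steps: embed the signal in a Hankel (trajectory) matrix, truncate the SVD, and average the anti-diagonals back into a time series. The rank below is illustrative; in practice it would be chosen from the singular-value spectrum.

        import numpy as np
        from scipy.linalg import hankel

        def svd_hankel_denoise(x, L=None, rank=4):
            """Rank-truncated SVD of the Hankel (trajectory) matrix of x."""
            N = x.size
            L = L or N // 2                    # window length
            H = hankel(x[:L], x[L - 1:])       # L x (N - L + 1), H[i, j] = x[i + j]
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # average anti-diagonals to return to a 1-D signal
            y = np.zeros(N)
            cnt = np.zeros(N)
            for i in range(Hr.shape[0]):
                for j in range(Hr.shape[1]):
                    y[i + j] += Hr[i, j]
                    cnt[i + j] += 1
            return y / cnt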

  16. Bearing faults identification and resonant band demodulation based on wavelet de-noising methods and envelope analysis

    NASA Astrophysics Data System (ADS)

    Abdelrhman, Ahmed M.; Sei Kien, Yong; Salman Leong, M.; Meng Hee, Lim; Al-Obaidi, Salah M. Ali

    2017-07-01

    The vibration signals produced by rotating machinery contain useful information for condition monitoring and fault diagnosis, but assessing fault severity is a challenging task. The Wavelet Transform (WT), as a multiresolution analysis tool, is able to compromise between the time and frequency information in the signals and serves as a de-noising method. The CWT scaling function gives different resolutions to the discretized signals, such as very fine resolution at lower scales but coarser resolution at higher scales. However, its computational cost increases as it needs to produce different signal resolutions. The DWT has a lower computational cost, as its dilation function allows the signal to be decomposed through a tree of low- and high-pass filters without further analysis of the high-frequency components. In this paper, a method for bearing fault identification is presented that combines the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT) with envelope analysis for bearing fault diagnosis. The experimental data were provided by Case Western Reserve University. The analysis results show that the proposed method is effective in detecting bearing faults, identifying the exact fault location and assessing fault severity, especially for inner race and outer race faults.

  17. Identification of Anisomerous Motor Imagery EEG Signals Based on Complex Algorithms

    PubMed Central

    Zhang, Zhiwen; Duan, Feng; Zhou, Xin; Meng, Zixuan

    2017-01-01

    Motor imagery (MI) electroencephalograph (EEG) signals are widely applied in brain-computer interfaces (BCI). However, the MI states that can be classified are limited, and classification accuracy rates are low because of the signals' nonlinearity and nonstationarity. This study proposes a novel MI pattern recognition system based on complex algorithms for classifying MI EEG signals. For electrooculogram (EOG) artifact preprocessing, band-pass filtering is performed to obtain the frequency band of MI-related signals, and then canonical correlation analysis (CCA) combined with wavelet threshold denoising (WTD) is used to remove the EOG artifacts. We propose a regularized common spatial pattern (R-CSP) algorithm for EEG feature extraction by incorporating the principle of generic learning. A new classifier combining the K-nearest neighbor (KNN) and support vector machine (SVM) approaches is used to classify four anisomerous states, namely, imaginary movements of the left hand, right foot, and right shoulder, and the resting state. The highest classification accuracy rate is 92.5%, and the average classification accuracy rate is 87%. The proposed complex algorithm identification method can significantly improve the identification rate of the minority samples and the overall classification performance. PMID:28874909

  18. Generative adversarial network based telecom fraud detection at the receiving bank.

    PubMed

    Zheng, Yu-Jun; Zhou, Xiao-Han; Sheng, Wei-Guo; Xue, Yu; Chen, Sheng-Yong

    2018-06-01

    Recently, telecom fraud has become a serious problem, especially in developing countries such as China. At present, it can be very difficult to coordinate different agencies to prevent fraud completely. In this paper we study how to detect, at the receiving bank, large transfers sent by victims deceived by fraudsters. We propose a new generative adversarial network (GAN) based model to calculate for each large transfer a probability that it is fraudulent, such that the bank can take appropriate measures to prevent potential fraudsters from taking the money if the probability exceeds a threshold. The inference model uses a deep denoising autoencoder to effectively learn the complex probabilistic relationship among the input features, and employs adversarial training that establishes a minimax game between a discriminator and a generator to accurately discriminate between positive and negative samples in the data distribution. We show that the model outperforms a set of well-known classification methods in experiments, and its application in two commercial banks has reduced losses by about 10 million RMB in twelve weeks and significantly improved their business reputation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Curvature correction of retinal OCTs using graph-based geometry detection

    NASA Astrophysics Data System (ADS)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps: the first is a denoising algorithm using wavelet diffusion based on a circular symmetric Laplacian model, and the second is graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm improved the contrast-to-noise ratio from 0.89 to 1.49 and increased the signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve using a fully automatic method, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Errors of 2.19 ± 1.25 and 8.53 ± 3.76 µm were measured for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. An important aspect of this algorithm is its ability to detect curvature in strongly pathological images, surpassing previously introduced methods; the method is also fast compared with the relatively low speed of similar methods.

  20. Wavelet denoising during optical coherence tomography of the prostate nerves using the complex wavelet transform.

    PubMed

    Chitchian, Shahab; Fiddy, Michael; Fried, Nathaniel M

    2008-01-01

    Preservation of the cavernous nerves during prostate cancer surgery is critical in preserving sexual function after surgery. Optical coherence tomography (OCT) of the prostate nerves has recently been studied for potential use in nerve-sparing prostate surgery. In this study, the discrete wavelet transform and complex dual-tree wavelet transform are implemented for wavelet shrinkage denoising in OCT images of the rat prostate. Applying the complex dual-tree wavelet transform provides improved results for speckle noise reduction in the OCT prostate image. Image quality metrics of the cavernous nerves and signal-to-noise ratio (SNR) were improved significantly using this complex wavelet denoising technique.

  1. Image denoising by sparse 3-D transform-domain collaborative filtering.

    PubMed

    Dabov, Kostadin; Foi, Alessandro; Katkovnik, Vladimir; Egiazarian, Karen

    2007-08-01

    We propose a novel image denoising strategy based on an enhanced sparse representation in transform domain. The enhancement of the sparsity is achieved by grouping similar 2-D image fragments (e.g., blocks) into 3-D data arrays which we call "groups." Collaborative filtering is a special procedure developed to deal with these 3-D groups. We realize it using the three successive steps: 3-D transformation of a group, shrinkage of the transform spectrum, and inverse 3-D transformation. The result is a 3-D estimate that consists of the jointly filtered grouped image blocks. By attenuating the noise, the collaborative filtering reveals even the finest details shared by grouped blocks and, at the same time, it preserves the essential unique features of each individual block. The filtered blocks are then returned to their original positions. Because these blocks are overlapping, for each pixel, we obtain many different estimates which need to be combined. Aggregation is a particular averaging procedure which is exploited to take advantage of this redundancy. A significant improvement is obtained by a specially developed collaborative Wiener filtering. An algorithm based on this novel denoising strategy and its efficient implementation are presented in full detail; an extension to color-image denoising is also developed. The experimental results demonstrate that this computationally scalable algorithm achieves state-of-the-art denoising performance in terms of both peak signal-to-noise ratio and subjective visual quality.
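
    The grouping/collaborative-filtering paradigm can be conveyed by a deliberately simplified toy version: gather the K patches most similar to each reference patch, apply a 3-D DCT to the stack, hard-threshold, invert, and average the overlapping estimates back into the image. Everything here (global patch search, a single pass, a fixed threshold, no Wiener stage) is a simplification of the actual BM3D algorithm, intended only to show the mechanics.

      import numpy as np
      from scipy.fft import dctn, idctn

      def toy_collaborative_filter(img, patch=8, step=4, K=8, thr=60.0):
          """BM3D-style toy pass: group, 3-D transform, shrink, aggregate."""
          H, W = img.shape
          num = np.zeros_like(img, dtype=float)
          den = np.zeros_like(img, dtype=float)
          pos = [(y, x) for y in range(0, H - patch + 1, step)
                         for x in range(0, W - patch + 1, step)]
          patches = np.stack([img[y:y+patch, x:x+patch] for y, x in pos])
          for i in range(len(pos)):
              d = np.sum((patches - patches[i])**2, axis=(1, 2))
              idx = np.argsort(d)[:K]                   # K most similar patches
              group = dctn(patches[idx], norm="ortho")  # 3-D transform of stack
              group[np.abs(group) < thr] = 0.0          # hard-threshold shrinkage
              est = idctn(group, norm="ortho")
              for j, k in enumerate(idx):               # return estimates, aggregate
                  yy, xx = pos[k]
                  num[yy:yy+patch, xx:xx+patch] += est[j]
                  den[yy:yy+patch, xx:xx+patch] += 1.0
          return num / np.maximum(den, 1e-12)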

  2. Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree

    NASA Astrophysics Data System (ADS)

    Zheng, Amin; Cheung, Gene; Florencio, Dinei

    2018-07-01

    With the advent of depth sensing technologies, the extraction of object contours in images---a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition---has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called total suffix tree (TST) that can reduce complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme outperformed a competing separate scheme in rate-distortion performance noticeably.

  3. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    PubMed

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.

  4. Speckle reduction during all-fiber common-path optical coherence tomography of the cavernous nerves

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Fiddy, Michael; Fried, Nathaniel M.

    2009-02-01

    Improvements in the identification, imaging, and visualization of the cavernous nerves, which are responsible for erectile function, during prostate cancer surgery may improve nerve preservation and postoperative sexual potency. In this study, we use a rat prostate, ex vivo, to evaluate the feasibility of optical coherence tomography (OCT) as a diagnostic tool for real-time imaging and identification of the cavernous nerves. A novel OCT system based on an all single-mode-fiber common-path interferometer-based scanning system is used for this purpose. A wavelet shrinkage denoising technique using Stein's unbiased risk estimator (SURE) to calculate a data-adaptive threshold is implemented for speckle noise reduction in the OCT image. The signal-to-noise ratio (SNR) was improved by 9 dB, and the image quality metrics of the cavernous nerves also improved significantly.
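
    The data-adaptive SURE threshold at the heart of such wavelet shrinkage can be computed directly from Stein's unbiased risk estimate for soft thresholding. A minimal NumPy version is sketched below; it would be applied per wavelet subband, with the noise level sigma estimated, for example, from the median absolute deviation of the finest subband.

      import numpy as np

      def sure_threshold(coeffs, sigma):
          """Threshold minimizing Stein's unbiased risk estimate (SURE)
          for soft thresholding, as in SureShrink."""
          y = np.abs(coeffs.ravel()) / sigma
          n = y.size
          cand = np.sort(y)                 # candidate thresholds
          risks = [n - 2*np.sum(y <= t) + np.sum(np.minimum(y, t)**2)
                   for t in cand]
          return sigma * cand[int(np.argmin(risks))]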

  5. A novel coupling of noise reduction algorithms for particle flow simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimoń, M.J., E-mail: malgorzata.zimon@stfc.ac.uk; James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; Reese, J.M.

    2016-09-15

    Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain and referred to as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
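
    A rough sketch of the hybrid idea, assuming the simplest reading of it: compute the POD of a space-by-time snapshot matrix via the SVD, then wavelet-threshold the retained temporal modes before reassembling the field. The mode count, wavelet, and threshold rule below are illustrative stand-ins for the paper's choices.

      import numpy as np
      import pywt

      def pod_wavelet_filter(X, n_modes=10, wavelet="db6"):
          """POD via SVD of snapshot matrix X (space x time), then wavelet
          soft-thresholding of each retained temporal mode."""
          U, s, Vt = np.linalg.svd(X, full_matrices=False)
          U, s, Vt = U[:, :n_modes], s[:n_modes], Vt[:n_modes]
          V_f = np.empty_like(Vt)
          for i, v in enumerate(Vt):               # filter temporal coefficients
              c = pywt.wavedec(v, wavelet)
              sigma = np.median(np.abs(c[-1])) / 0.6745
              thr = sigma * np.sqrt(2*np.log(v.size))
              c = [c[0]] + [pywt.threshold(d, thr, mode="soft") for d in c[1:]]
              V_f[i] = pywt.waverec(c, wavelet)[:v.size]
          return (U * s) @ V_f                     # smoothed ensemble field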

  6. Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms.

    PubMed

    Maggioni, Matteo; Boracchi, Giacomo; Foi, Alessandro; Egiazarian, Karen

    2012-09-01

    We propose a powerful video filtering algorithm that exploits temporal and spatial redundancy characterizing natural video sequences. The algorithm implements the paradigm of nonlocal grouping and collaborative filtering, where a higher dimensional transform-domain representation of the observations is leveraged to enforce sparsity, and thus regularize the data: 3-D spatiotemporal volumes are constructed by tracking blocks along trajectories defined by the motion vectors. Mutually similar volumes are then grouped together by stacking them along an additional fourth dimension, thus producing a 4-D structure, termed group, where different types of data correlation exist along the different dimensions: local correlation along the two dimensions of the blocks, temporal correlation along the motion trajectories, and nonlocal spatial correlation (i.e., self-similarity) along the fourth dimension of the group. Collaborative filtering is then realized by transforming each group through a decorrelating 4-D separable transform and then by shrinkage and inverse transformation. In this way, the collaborative filtering provides estimates for each volume stacked in the group, which are then returned and adaptively aggregated to their original positions in the video. The proposed filtering procedure addresses several video processing applications, such as denoising, deblocking, and enhancement of both grayscale and color data. Experimental results prove the effectiveness of our method in terms of both subjective and objective visual quality, and show that it outperforms the state of the art in video denoising.

  7. Fault Detection of a Roller-Bearing System through the EMD of a Wavelet Denoised Signal

    PubMed Central

    Ahn, Jong-Hyo; Kwak, Dae-Ho; Koh, Bong-Hwan

    2014-01-01

    This paper investigates fault detection of a roller bearing system using a wavelet denoising scheme and proper orthogonal value (POV) of an intrinsic mode function (IMF) covariance matrix. The IMF of the bearing vibration signal is obtained through empirical mode decomposition (EMD). The signal screening process in the wavelet domain eliminates noise-corrupted portions that may lead to inaccurate prognosis of bearing conditions. We segmented the denoised bearing signal into several intervals, and decomposed each of them into IMFs. The first IMF of each segment is collected to become a covariance matrix for calculating the POV. We show that covariance matrices from healthy and damaged bearings exhibit different POV profiles, which can be a damage-sensitive feature. We also illustrate the conventional approach of feature extraction, of observing the kurtosis value of the measured signal, to compare the functionality of the proposed technique. The study demonstrates the feasibility of wavelet-based de-noising, and shows through laboratory experiments that tracking the proper orthogonal values of the covariance matrix of the IMF can be an effective and reliable measure for monitoring bearing fault. PMID:25196008

  8. Sub-band denoising and spline curve fitting method for hemodynamic measurement in perfusion MRI

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Huang, Hsiao-Ling; Hsu, Yuan-Yu; Chen, Chi-Chen; Chen, Ing-Yi; Wu, Liang-Chi; Liu, Ren-Shyan; Lin, Kang-Ping

    2003-05-01

    In clinical research, non-invasive MR perfusion imaging is capable of investigating brain perfusion phenomena via various hemodynamic measurements, such as cerebral blood volume (CBV), cerebral blood flow (CBF), and mean transit time (MTT). These hemodynamic parameters are useful in diagnosing brain disorders such as stroke, infarction, and peri-infarct ischemia by further semi-quantitative analysis. However, the accuracy of quantitative analysis is usually degraded by poor signal-to-noise image quality. In this paper, we propose a hemodynamic measurement method based upon sub-band denoising and spline curve fitting to improve image quality and thereby obtain better hemodynamic quantitative analysis results. Ten sets of perfusion MRI data and corresponding PET images were used to validate the performance. For quantitative comparison, we evaluated the gray/white matter CBF ratio. The resulting semi-quantitative mean gray-to-white matter CBF ratio is 2.10 +/- 0.34, comparable to that of the PET technique with less than 1% difference on average. Furthermore, the method features excellent noise reduction and boundary preservation in image processing, and a short hemodynamic measurement time.
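
    The spline-fitting step can be illustrated with SciPy's smoothing spline applied to a toy concentration-time curve. The gamma-variate-like test curve, noise level, and smoothing factor below are invented for the demo, and the area and first-moment quantities are the standard summary measures rather than the paper's exact pipeline.

      import numpy as np
      from scipy.interpolate import UnivariateSpline
      from scipy.integrate import trapezoid

      t = np.linspace(0, 60, 120)                      # seconds
      ctc = 5*(t/10)**2 * np.exp(-t/10)                # gamma-variate-like curve
      noisy = ctc + np.random.normal(0, 0.3, t.size)   # noisy measurement

      spl = UnivariateSpline(t, noisy, s=t.size*0.09)  # smoothing-spline fit
      fitted = spl(t)
      cbv = trapezoid(fitted, t)                       # area under curve ~ rCBV
      mtt = trapezoid(t*fitted, t) / cbv               # first moment, a common MTT surrogate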

  9. Dynamic Denoising of Tracking Sequences

    PubMed Central

    Michailovich, Oleg; Tannenbaum, Allen

    2009-01-01

    In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest represented by the latter. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper which is the proposal of the dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of the information within successive image frames is Bayesian estimation, while transferring the useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement “collaborate” in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over “static” approaches, in which the tracking images are enhanced independently of each other. PMID:18482881
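
    The Kalman machinery that transfers information between frames reduces to a few lines for a one-dimensional track. The constant-velocity model and noise levels below are illustrative; in the paper the filter both predicts and estimates the dynamics of the tracked objects, while this sketch simply smooths a noisy position sequence.

      import numpy as np

      def kalman_track(zs, q=1e-3, r=1e-1):
          """Constant-velocity Kalman filter for a 1-D track: predict carries
          information forward between frames, update fuses each new measurement."""
          F = np.array([[1., 1.], [0., 1.]])       # state transition
          Hm = np.array([[1., 0.]])                # we observe position only
          Q = q * np.eye(2)                        # process noise
          R = np.array([[r]])                      # measurement noise
          x = np.array([zs[0], 0.])                # initial [position, velocity]
          P = np.eye(2)
          est = []
          for z in zs:
              x = F @ x                            # predict
              P = F @ P @ F.T + Q
              S = Hm @ P @ Hm.T + R                # update
              K = P @ Hm.T @ np.linalg.inv(S)
              x = x + K @ (np.array([z]) - Hm @ x)
              P = (np.eye(2) - K @ Hm) @ P
              est.append(x[0])
          return np.array(est)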

  10. Analysis of the Biceps Brachii Muscle by Varying the Arm Movement Level and Load Resistance Band

    PubMed Central

    Abdullah, Shahrum Shah; Jali, Mohd Hafiz

    2017-01-01

    Biceps brachii muscle illness is one of the common physical disabilities that requires rehabilitation exercises in order to build up the strength of the muscle after surgery. It is also important to monitor the condition of the muscle during the rehabilitation exercise through electromyography (EMG) signals. The purpose of this study was to analyse and investigate the selection of the best mother wavelet (MWT) function and depth of the decomposition level in the wavelet denoising EMG signals through the discrete wavelet transform (DWT) method at each decomposition level. In this experimental work, six healthy subjects comprised of males and females (26 ± 3.0 years and BMI of 22 ± 2.0) were selected as a reference for persons with the illness. The experiment was conducted for three sets of resistance band loads, namely, 5 kg, 9 kg, and 16 kg, as a force during the biceps brachii muscle contraction. Each subject was required to perform three levels of the arm angle positions (30°, 90°, and 150°) for each set of resistance band load. The experimental results showed that the Daubechies5 (db5) was the most appropriate DWT method together with a 6-level decomposition with a soft heursure threshold for the biceps brachii EMG signal analysis. PMID:29138687

  11. Analysis of the Biceps Brachii Muscle by Varying the Arm Movement Level and Load Resistance Band.

    PubMed

    Burhan, Nuradebah; Kasno, Mohammad 'Afif; Ghazali, Rozaimi; Said, Md Radzai; Abdullah, Shahrum Shah; Jali, Mohd Hafiz

    2017-01-01

    Biceps brachii muscle illness is one of the common physical disabilities that requires rehabilitation exercises in order to build up the strength of the muscle after surgery. It is also important to monitor the condition of the muscle during the rehabilitation exercise through electromyography (EMG) signals. The purpose of this study was to analyse and investigate the selection of the best mother wavelet (MWT) function and depth of the decomposition level in the wavelet denoising EMG signals through the discrete wavelet transform (DWT) method at each decomposition level. In this experimental work, six healthy subjects comprised of males and females (26 ± 3.0 years and BMI of 22 ± 2.0) were selected as a reference for persons with the illness. The experiment was conducted for three sets of resistance band loads, namely, 5 kg, 9 kg, and 16 kg, as a force during the biceps brachii muscle contraction. Each subject was required to perform three levels of the arm angle positions (30°, 90°, and 150°) for each set of resistance band load. The experimental results showed that the Daubechies5 (db5) was the most appropriate DWT method together with a 6-level decomposition with a soft heursure threshold for the biceps brachii EMG signal analysis.
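
    The configuration selected in the two records above, db5 with a 6-level decomposition and soft thresholding, is easy to reproduce with PyWavelets. MATLAB's 'heursure' rule is not built into PyWavelets, so the sketch below substitutes the universal threshold computed from a per-level noise estimate; treat it as an approximation of the records' setup.

      import numpy as np
      import pywt

      def denoise_emg(emg, wavelet="db5", level=6):
          """db5, 6-level DWT with per-level soft thresholds."""
          coeffs = pywt.wavedec(emg, wavelet, level=level)
          out = [coeffs[0]]                          # keep approximation band
          for d in coeffs[1:]:
              sigma = np.median(np.abs(d)) / 0.6745  # per-level noise estimate
              thr = sigma * np.sqrt(2*np.log(len(emg)))
              out.append(pywt.threshold(d, thr, mode="soft"))
          return pywt.waverec(out, wavelet)[:len(emg)]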

  12. Simultaneous Denoising, Deconvolution, and Demixing of Calcium Imaging Data

    PubMed Central

    Pnevmatikakis, Eftychios A.; Soudry, Daniel; Gao, Yuanjun; Machado, Timothy A.; Merel, Josh; Pfau, David; Reardon, Thomas; Mu, Yu; Lacefield, Clay; Yang, Weijian; Ahrens, Misha; Bruno, Randy; Jessell, Thomas M.; Peterka, Darcy S.; Yuste, Rafael; Paninski, Liam

    2016-01-01

    SUMMARY We present a modular approach for analyzing calcium imaging recordings of large neuronal ensembles. Our goal is to simultaneously identify the locations of the neurons, demix spatially overlapping components, and denoise and deconvolve the spiking activity from the slow dynamics of the calcium indicator. Our approach relies on a constrained nonnegative matrix factorization that expresses the spatiotemporal fluorescence activity as the product of a spatial matrix that encodes the spatial footprint of each neuron in the optical field and a temporal matrix that characterizes the calcium concentration of each neuron over time. This framework is combined with a novel constrained deconvolution approach that extracts estimates of neural activity from fluorescence traces, to create a spatiotemporal processing algorithm that requires minimal parameter tuning. We demonstrate the general applicability of our method by applying it to in vitro and in vivo multineuronal imaging data, whole-brain light-sheet imaging data, and dendritic imaging data. PMID:26774160

  13. An Interactive Procedure to Preserve the Desired Edges during the Image Processing of Noise Reduction

    NASA Astrophysics Data System (ADS)

    Hsu, Chih-Yu; Huang, Hsuan-Yu; Lee, Lin-Tsang

    2010-12-01

    This paper proposes a new four-stage procedure to preserve the desired edges during noise-reduction image processing. At the first stage, a denoised image is obtained from the noisy image. At the second stage, an edge map is obtained by the Canny edge detector to find the edges of the object contours. Manual modification of the edge map at the third stage is optional, to capture all the desired edges of the object contours. At the final stage, a new method called the Edge Preserved Inhomogeneous Diffusion Equation (EPIDE) is used to smooth the noisy image, or the image denoised at the first stage, while preserving the marked edges. The Optical Character Recognition (OCR) results in the experiments show that the proposed procedure achieves the best recognition results because of its edge-preservation capability.
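
    A sketch of the underlying edge-preserving diffusion idea, using the classic Perona-Malik scheme with one addition suggested by the record: diffusion is simply switched off at pixels marked in the (possibly manually edited) edge map. The EPIDE formulation itself is the paper's own; this is only the generic version with illustrative parameters.

      import numpy as np

      def edge_preserving_diffusion(img, edge_map, n_iter=20, kappa=15.0, lam=0.2):
          """Perona-Malik-style diffusion, suppressed on marked edge pixels.
          Boundary handling is a toy wrap-around via np.roll."""
          u = img.astype(float).copy()
          g = lambda d: np.exp(-(d/kappa)**2)       # edge-stopping function
          for _ in range(n_iter):
              dn = np.roll(u, -1, 0) - u            # neighbour differences
              ds = np.roll(u,  1, 0) - u
              de = np.roll(u, -1, 1) - u
              dw = np.roll(u,  1, 1) - u
              flux = g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw
              flux[edge_map > 0] = 0.0              # freeze the kept edges
              u += lam * flux
          return u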

  14. Application of improved wavelet total variation denoising for rolling bearing incipient fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Jia, M. P.

    2018-06-01

    When an incipient fault appears in a rolling bearing, the fault feature is small and easily submerged in strong background noise. In this paper, wavelet total variation denoising based on kurtosis (Kurt-WATV) is studied, which can extract the incipient fault features of a rolling bearing more effectively. The proposed algorithm contains these main steps: a) establish a sparse diagnosis model; b) represent periodic impulses based on the redundant wavelet dictionary; c) solve the joint optimization problem by the alternating direction method of multipliers (ADMM); d) obtain the reconstructed signal using the kurtosis value as the criterion and select the optimal wavelet subbands. This paper uses the overcomplete rational-dilation wavelet transform (ORDWT) as a dictionary, and adjusts the control parameters to achieve concentration in the time-frequency plane. An incipient rolling bearing fault is used as an example, and the results show the effectiveness and superiority of the proposed Kurt-WATV bearing fault diagnosis algorithm.
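
    Step d), kurtosis-guided subband selection, can be sketched as follows, with an ordinary DWT standing in for the overcomplete rational-dilation transform (a plain substitution: it changes the dictionary, not the selection logic). Subbands with the highest kurtosis are kept because repetitive fault impulses make a subband heavy-tailed.

      import numpy as np
      import pywt
      from scipy.stats import kurtosis

      def kurtosis_select_subbands(x, wavelet="db8", level=5, keep=2):
          """Reconstruct from the detail subbands with the highest kurtosis."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          scores = [kurtosis(d) for d in coeffs[1:]]
          keep_idx = set(np.argsort(scores)[-keep:])
          out = [np.zeros_like(coeffs[0])]   # drop the low-frequency trend
          for i, d in enumerate(coeffs[1:]):
              out.append(d if i in keep_idx else np.zeros_like(d))
          return pywt.waverec(out, wavelet)[:len(x)]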

  15. Automated retinal layer segmentation and characterization

    NASA Astrophysics Data System (ADS)

    Luisi, Jonathan; Briley, David; Boretsky, Adam; Motamedi, Massoud

    2014-05-01

    Spectral Domain Optical Coherence Tomography (SD-OCT) is a valuable diagnostic tool in both clinical and research settings. The depth-resolved intensity profiles generated by light backscattered from discrete layers of the retina provide a non-invasive method of investigating progressive diseases and injury within the eye. This study demonstrates the application of steerable convolution filters capable of automatically separating gradient orientations to identify edges and delineate tissue boundaries. The edge maps were recombined to measure the thickness of individual retinal layers. This technique was successfully applied to longitudinally monitor changes in retinal morphology in a mouse model of laser-induced choroidal neovascularization (CNV) and in human data from age-related macular degeneration patients. The steerable filters allow direct segmentation of noisy images, while novel recombination of weaker segmentations allows denoising post-segmentation. The segmentation-before-denoising strategy allows rapid detection of thin retinal layers even under suboptimal imaging conditions.

  16. Multichannel Poisson denoising and deconvolution on the sphere: application to the Fermi Gamma-ray Space Telescope

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2012-10-01

    A multiscale representation-based denoising method for spherical data contaminated with Poisson noise, the multiscale variance stabilizing transform on the sphere (MS-VSTS), has been previously proposed. This paper first extends the MS-VSTS to spherical two-plus-one-dimensional (2D-1D) data, where the first two dimensions are longitude and latitude, and the third dimension is a meaningful physical index such as energy or time. We then introduce a novel multichannel deconvolution built upon the 2D-1D MS-VSTS, which allows us to remove both the noise and the blur introduced by the point spread function (PSF) in each energy (or time) band. The method is applied to simulated data from the Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, which detects high-energy gamma rays over a very wide energy range (from 20 MeV to more than 300 GeV) and whose PSF is strongly energy-dependent (from about 3.5° at 100 MeV to less than 0.1° at 10 GeV).
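
    At the core of any VST-based Poisson denoiser is the variance-stabilizing transform itself; the scalar Anscombe transform below is the simplest instance. MS-VSTS couples such a stabilization with a multiscale transform on the sphere, and the literature uses more refined unbiased inverses than the algebraic one shown here.

      import numpy as np

      def anscombe(x):
          """Poisson counts -> approximately unit-variance Gaussian data,
          so that Gaussian denoisers can be applied afterwards."""
          return 2.0 * np.sqrt(x + 3.0/8.0)

      def inverse_anscombe(y):
          # simple algebraic inverse; biased at low counts
          return (y / 2.0)**2 - 3.0/8.0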

  17. Estimation of signal-dependent noise level function in transform domain via a sparse recovery model.

    PubMed

    Yang, Jingyu; Gan, Ziqiao; Wu, Zhaoyang; Hou, Chunping

    2015-05-01

    This paper proposes a novel algorithm to estimate the noise level function (NLF) of signal-dependent noise (SDN) from a single image based on the sparse representation of NLFs. Noise level samples are estimated from the high-frequency discrete cosine transform (DCT) coefficients of nonlocal-grouped low-variation image patches. Then, an NLF recovery model based on the sparse representation of NLFs under a trained basis is constructed to recover NLF from the incomplete noise level samples. Confidence levels of the NLF samples are incorporated into the proposed model to promote reliable samples and weaken unreliable ones. We investigate the behavior of the estimation performance with respect to the block size, sampling rate, and confidence weighting. Simulation results on synthetic noisy images show that our method outperforms existing state-of-the-art schemes. The proposed method is evaluated on real noisy images captured by three types of commodity imaging devices, and shows consistently excellent SDN estimation performance. The estimated NLFs are incorporated into two well-known denoising schemes, nonlocal means and BM3D, and show significant improvements in denoising SDN-polluted images.

  18. A Template-Based Protein Structure Reconstruction Method Using Deep Autoencoder Learning.

    PubMed

    Li, Haiou; Lyu, Qiang; Cheng, Jianlin

    2016-12-01

    Protein structure prediction is an important problem in computational biology, and is widely applied to various biomedical problems such as protein function study, protein design, and drug design. In this work, we developed a novel deep learning approach based on a deeply stacked denoising autoencoder for protein structure reconstruction. We applied our approach to a template-based protein structure prediction using only the 3D structural coordinates of homologous template proteins as input. The templates were identified for a target protein by a PSI-BLAST search. 3DRobot (a program that automatically generates diverse and well-packed protein structure decoys) was used to generate initial decoy models for the target from the templates. A stacked denoising autoencoder was trained on the decoys to obtain a deep learning model for the target protein. The trained deep model was then used to reconstruct the final structural model for the target sequence. With target proteins that have highly similar template proteins as benchmarks, the GDT-TS score of the predicted structures is greater than 0.7, suggesting that the deep autoencoder is a promising method for protein structure reconstruction.
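
    The denoising autoencoder at the heart of such stacked models is easy to state in plain NumPy: corrupt the input, reconstruct the clean version, and descend the reconstruction-error gradient. The one-layer, tied-weight version below is a minimal sketch, far smaller than the deeply stacked networks the paper trains, and all sizes and rates are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def train_dae(X, hidden=32, noise=0.2, lr=0.1, epochs=200):
          """One-layer denoising autoencoder with tied weights, trained by
          full-batch gradient descent on the reconstruction error."""
          n, d = X.shape
          W = rng.normal(0, 0.1, (d, hidden))
          b = np.zeros(hidden)                        # encoder bias
          c = np.zeros(d)                             # decoder bias
          for _ in range(epochs):
              Xn = X + noise * rng.normal(size=X.shape)   # corrupt input
              H = sigmoid(Xn @ W + b)                     # encode
              Y = H @ W.T + c                             # decode (tied weights)
              E = Y - X                                   # reconstruction error
              dH = (E @ W) * H * (1 - H)                  # backprop through encoder
              W -= lr * (Xn.T @ dH + E.T @ H) / n         # tied-weight gradient
              b -= lr * dH.sum(0) / n
              c -= lr * E.sum(0) / n
          return W, b, c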

  19. A Novel Passive Wireless Sensing Method for Concrete Chloride Ion Concentration Monitoring.

    PubMed

    Zhou, Shuangxi; Sheng, Wei; Deng, Fangming; Wu, Xiang; Fu, Zhihui

    2017-12-11

    In this paper, a novel approach for measuring concrete chloride ion concentration based on a passive, wireless sensor tag is proposed. The chloride ion sensor, based on the RFID communication protocol, consists of an energy harvesting and management circuit, a low-dropout voltage regulator, an MCU, an RFID tag chip, and a pair of electrodes. The proposed sensor harvests energy radiated by the RFID reader to power its circuitry. To improve the stability of the power supply, a three-stage boost rectifier is customized to rectify the harvested power into DC power and step up the voltage. Since the measured data are transmitted wirelessly, they contain miscellaneous noises that would decrease measurement accuracy; thus, the wavelet denoising method is adopted to denoise the raw data. In addition, monitoring software is developed to display the measurement results in real time. The measurement results indicate that the proposed passive sensor tag can achieve a reliable communication distance of 16.3 m and can reliably measure the chloride ion concentration in concrete.

  20. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    PubMed

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved auto-regressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the FOG measured signal is employed instead of a zero-mean signal. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering of the FOG signals. Finally, static and dynamic experiments are performed to verify the effectiveness, and the filtering results are analyzed with the Allan variance. The analysis shows that the improved AR model has high fitting accuracy and strong adaptability, with a minimum single-noise fitting accuracy of 93.2%. Based on the improved AR(3) model, SHAKF denoising is more effective than traditional methods, improving the denoising effect by more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
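
    Fitting the AR coefficients themselves is a small least-squares problem; the generic formulation is sketched below in NumPy (the paper's improved model additionally works on the measured signal at each gyro restart, which is not reproduced here). The residual variance the function returns is the kind of quantity a Sage-Husa filter would adapt to online.

      import numpy as np

      def fit_ar(x, order=3):
          """Least-squares fit of x[k] = a1*x[k-1] + ... + ap*x[k-p] + e[k];
          returns the coefficients and the driving-noise variance."""
          p = order
          A = np.column_stack([x[p-i-1:len(x)-i-1] for i in range(p)])
          b = x[p:]
          coef, *_ = np.linalg.lstsq(A, b, rcond=None)
          resid = b - A @ coef
          return coef, resid.var()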

  1. A fast method to emulate an iterative POCS image reconstruction algorithm.

    PubMed

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projection onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-enhancing denoising, with each segment performing nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing, noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.

  2. Sequential Total Variation Denoising for the Extraction of Fetal ECG from Single-Channel Maternal Abdominal ECG

    PubMed Central

    Lee, Kwang Jin; Lee, Boreom

    2016-01-01

    Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence Doppler ultrasound signals. Moreover, CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). Moreover, we used real data from PhysioNet fetal ECG databases for the evaluation of the algorithm performance. The R-peak detection rate is calculated to evaluate the performance of our algorithm. Our approach could not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR. PMID:27376296

  3. Sequential Total Variation Denoising for the Extraction of Fetal ECG from Single-Channel Maternal Abdominal ECG.

    PubMed

    Lee, Kwang Jin; Lee, Boreom

    2016-07-01

    Fetal heart rate (FHR) is an important determinant of fetal health. Cardiotocography (CTG) is widely used for measuring the FHR in the clinical field. However, fetal movement and blood flow through the maternal blood vessels can critically influence Doppler ultrasound signals. Moreover, CTG is not suitable for long-term monitoring. Therefore, researchers have been developing algorithms to estimate the FHR using electrocardiograms (ECGs) from the abdomen of pregnant women. However, separating the weak fetal ECG signal from the abdominal ECG signal is a challenging problem. In this paper, we propose a method for estimating the FHR using sequential total variation denoising and compare its performance with that of other single-channel fetal ECG extraction methods via simulation using the Fetal ECG Synthetic Database (FECGSYNDB). Moreover, we used real data from PhysioNet fetal ECG databases for the evaluation of the algorithm performance. The R-peak detection rate is calculated to evaluate the performance of our algorithm. Our approach could not only separate the fetal ECG signals from the abdominal ECG signals but also accurately estimate the FHR.
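
    Total variation denoising itself can be implemented in a few lines via projected gradient on the dual problem (a Chambolle-style scheme). The sketch below handles one window of samples; the "sequential" aspect of the records above would amount to applying it over successive windows of the abdominal ECG. Parameters are illustrative.

      import numpy as np

      def tv_denoise_1d(y, lam=0.5, n_iter=300, tau=0.25):
          """Solve min_u 0.5*||u - y||^2 + lam * sum_i |u[i+1] - u[i]|
          by projected gradient on the dual variable p (one per edge)."""
          n = len(y)
          p = np.zeros(n - 1)
          def DTp(p):                               # D^T p (negative divergence)
              out = np.zeros(n)
              out[0] = -p[0]
              out[1:-1] = p[:-1] - p[1:]
              out[-1] = p[-1]
              return out
          for _ in range(n_iter):
              u = y - lam * DTp(p)                  # current primal estimate
              p = np.clip(p + (tau / lam) * np.diff(u), -1.0, 1.0)
          return y - lam * DTp(p)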

  4. XQ-NLM: Denoising Diffusion MRI Data via x-q Space Non-Local Patch Matching.

    PubMed

    Chen, Geng; Wu, Yafeng; Shen, Dinggang; Yap, Pew-Thian

    2016-10-01

    Noise is a major issue influencing quantitative analysis in diffusion MRI. The effects of noise can be reduced by repeated acquisitions, but this leads to long acquisition times that can be unrealistic in clinical settings. For this reason, post-acquisition denoising methods have been widely used to improve SNR. Among existing methods, non-local means (NLM) has been shown to produce good image quality with edge preservation. However, currently the application of NLM to diffusion MRI has been mostly focused on the spatial space (i.e., the x -space), despite the fact that diffusion data live in a combined space consisting of the x -space and the q -space (i.e., the space of wavevectors). In this paper, we propose to extend NLM to both x -space and q -space. We show how patch-matching, as required in NLM, can be performed concurrently in x-q space with the help of azimuthal equidistant projection and rotation invariant features. Extensive experiments on both synthetic and real data confirm that the proposed x-q space NLM (XQ-NLM) outperforms the classic NLM.
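
    For reference, plain x-space non-local means on a 1-D signal looks as follows; the paper's contribution is to extend the patch search from this spatial neighbourhood to the joint x-q space, which the sketch does not attempt. Patch size, search radius, and the smoothing parameter h are illustrative.

      import numpy as np

      def nlm_1d(x, patch=5, search=21, h=0.5):
          """Non-local means: each sample becomes a weighted average of
          samples whose surrounding patches look alike."""
          half_p, half_s = patch // 2, search // 2
          xp = np.pad(x, half_p, mode="reflect")
          out = np.empty_like(x, dtype=float)
          for i in range(len(x)):
              pi = xp[i:i + patch]                       # patch around sample i
              lo, hi = max(0, i - half_s), min(len(x), i + half_s + 1)
              w = np.empty(hi - lo)
              for k, j in enumerate(range(lo, hi)):
                  pj = xp[j:j + patch]
                  w[k] = np.exp(-np.sum((pi - pj)**2) / (h**2 * patch))
              out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
          return out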

  5. High-quality compressive ghost imaging

    NASA Astrophysics Data System (ADS)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces the undersampling noise and improves the resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem directly. The simulation and experimental results show that our method can obtain high ghost imaging quality in terms of PSNR and visual observation.

  6. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Onto Convex Sets (POCS) method has been used to solve the formulated regularized SPIRiT problem; however, the quality of the reconstructed image still needs to be improved. Although methods such as NonLinear Conjugate Gradients (NLCG) can achieve higher spatial resolution, they demand very complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with the JTV and JL1 regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split-Bregman-based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels; for the dataset with 32 channels, it is 2 times faster than ADMM. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Nonlinear Image Denoising Methodologies

    DTIC Science & Technology

    2002-05-01

    In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis, with a stochastic treatment or interpretation of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution...

  8. Identifying and quantifying main components of physiological noise in functional near infrared spectroscopy on the prefrontal cortex.

    PubMed

    Kirilina, Evgeniya; Yu, Na; Jelzow, Alexander; Wabnitz, Heidrun; Jacobs, Arthur M; Tachtsidis, Ilias

    2013-01-01

    Functional Near-Infrared Spectroscopy (fNIRS) is a promising method to study the functional organization of the prefrontal cortex. However, in order to realize the high potential of fNIRS, effective discrimination between physiological noise originating from forehead skin haemodynamics and cerebral signals is required. The main sources of physiological noise are global and local blood flow regulation processes on multiple time scales. The goal of the present study was to identify the main physiological noise contributions in fNIRS forehead signals and to develop a method for physiological de-noising of fNIRS data. To achieve this goal, we combined concurrent time-domain fNIRS and peripheral physiology recordings with wavelet coherence analysis (WCA). Depth selectivity was achieved by analyzing moments of photon time-of-flight distributions provided by time-domain fNIRS. Simultaneously, mean arterial blood pressure (MAP), heart rate (HR), and skin blood flow (SBF) on the forehead were recorded. WCA was employed to quantify the impact of physiological processes on fNIRS signals separately for different time scales. We identified three main processes contributing to physiological noise in fNIRS signals on the forehead. The first process, with a period of about 3 s, is induced by respiration. The second process is highly correlated with time-lagged MAP and HR fluctuations with a period of about 10 s, often referred to as Mayer waves. The third process is local regulation of the facial SBF, time-locked to the task-evoked fNIRS signals. All processes affect the oxygenated haemoglobin concentration more strongly than that of deoxygenated haemoglobin. Based on these results, we developed a set of physiological regressors, which were used for physiological de-noising of fNIRS signals. Our results demonstrate that the proposed de-noising method can significantly improve the sensitivity of fNIRS to cerebral signals.

  9. A novel method for 3D measurement of RFID multi-tag network based on matching vision and wavelet

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiao; Yu, Xiaolei; Zhao, Zhimin; Wang, Donghua; Zhang, Wenjie; Liu, Zhenlu; Lu, Dongsheng; Dong, Dingbang

    2018-07-01

    In the field of radio frequency identification (RFID), the three-dimensional (3D) distribution of an RFID multi-tag network has a significant impact on its reading performance, and to achieve anti-collision of RFID multi-tag networks in practical engineering applications, this 3D distribution must be measured. In this paper, a novel method for the 3D measurement of RFID multi-tag networks is proposed. A dual-CCD system (vertical and horizontal cameras) is used to obtain images of the RFID multi-tag network from different angles. The wavelet threshold denoising method is then used to remove noise in the obtained images, and the template matching method is used to determine the two-dimensional and vertical coordinates of each tag, yielding the 3D coordinates of each tag. Finally, a model of the nonlinear relation between the 3D coordinate distribution of the RFID multi-tag network and the corresponding reading distance is established using a wavelet neural network. The experimental results show that the average prediction relative error is 0.71% and the time cost is 2.17 s; both values are smaller than those of the particle swarm optimization neural network and the genetic algorithm-back propagation neural network, and the wavelet neural network requires only about 1% of the time of the other two methods. The proposed method thus achieves a smaller relative error and can improve the real-time performance of RFID multi-tag networks and the overall dynamic performance of multi-tag networks.

  10. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition; these edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with a similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom: the proposed method achieves a lower electron density measurement error than direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of the decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing edge predetection, the proposed algorithm shows superior performance in noise suppression with high image spatial resolution and low-contrast detectability.

  11. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise in interferometric data images can help improve the accuracy and resolution of results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied to the different stages of the processing procedures utilized for generating quantitative speckle correlation interferometry data in fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that wavelet algorithms are effective for speckle noise reduction while preserving image features that other algorithms tend to fade.

  12. RBF neural network prediction on weak electrical signals in Aloe vera var. chinensis

    NASA Astrophysics Data System (ADS)

    Wang, Lanzhou; Zhao, Jiayin; Wang, Miao

    2008-10-01

    A Gaussian radial basis function (RBF) neural network is set up to forecast the weak electrical signals in Aloe vera var. chinensis, using the wavelet soft-threshold denoised signals as the time series and a delayed input window of length 50. The signals in Aloe vera var. chinensis had a maximum amplitude of 310.45 μV, a minimum of -75.15 μV, an average value of -2.69 μV, and frequencies below 1.5 Hz; the electrical signal in Aloe vera var. chinensis is thus a weak, unstable, low-frequency signal. The results showed that it is feasible to forecast plant electrical signals with the RBF network. The forecast data can be used as preferences for an intelligent auto-control system, based on the adaptive characteristics of plants, to achieve energy savings in agricultural production in plastic tunnels or greenhouses.

  13. Improved biliary detection and diagnosis through intelligent machine analysis.

    PubMed

    Logeswaran, Rajasvaran

    2012-09-01

    This paper reports on work undertaken to improve automated detection of bile ducts in magnetic resonance cholangiopancreatography (MRCP) images, with the objective of conducting preliminary classification of the images for diagnosis. The proposed I-BDeDIMA (Improved Biliary Detection and Diagnosis through Intelligent Machine Analysis) scheme is a multi-stage framework consisting of successive phases of image normalization, denoising, structure identification, object labeling, feature selection and disease classification. A combination of multiresolution wavelet, dynamic intensity thresholding, segment-based region growing, region elimination, statistical analysis and neural networks, is used in this framework to achieve good structure detection and preliminary diagnosis. Tests conducted on over 200 clinical images with known diagnosis have shown promising results of over 90% accuracy. The scheme outperforms related work in the literature, making it a viable framework for computer-aided diagnosis of biliary diseases. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Fusion of visible and near-infrared images based on luminance estimation by weighted luminance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao

    2018-01-01

    In a low-light scene, capturing color images requires either a high-gain setting or a long-exposure setting to avoid a visible flash; however, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared flash image. In one notable method, the luminance and chroma components of the improved color image are estimated from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component this way is challenging; it requires generating learning data pairs, and the processes and algorithm are complex, making practical application difficult. In order to reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image with coefficients based on the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion quality in terms of color fidelity and texture as the earlier method, while the algorithm is simpler and more practical.
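
    A minimal sketch of such a weighted-luminance fusion, assuming a plausible rule built from the statistics the record names (mean and standard deviation of both images); the abstract does not give the authors' exact weighting formula, so the one below is only illustrative.

      import numpy as np

      def fuse_luminance(nir, y_denoised):
          """Blend NIR and denoised-colour luminance; the weight favours
          the source with the larger standard deviation (more texture)."""
          s_nir, s_y = nir.std(), y_denoised.std()
          w = s_nir / (s_nir + s_y + 1e-12)
          # match NIR brightness/contrast to the denoised luminance first
          nir_adj = (nir - nir.mean()) * (s_y / (s_nir + 1e-12)) + y_denoised.mean()
          return w * nir_adj + (1.0 - w) * y_denoised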

  15. Improving label-free detection of circulating melanoma cells by photoacoustic flow cytometry

    NASA Astrophysics Data System (ADS)

    Zhou, Huan; Wang, Qiyan; Pang, Kai; Zhou, Quanyu; Yang, Ping; He, Hao; Wei, Xunbin

    2018-02-01

    Melanoma is a malignant tumor of melanocytes with high mortality and a high metastasis rate. Circulating melanoma cells, with their high melanin content, can be detected by light absorption to diagnose and treat cancer at an early stage. Compared with conventional detection methods such as fluorescence-based in vivo flow cytometry (IVFC), in vivo photoacoustic flow cytometry (PAFC) uses melanin-bearing cells as biomarkers to collect photoacoustic (PA) signals non-invasively, without toxic fluorescent dye labeling. The information on target tumor cells is helpful for data analysis and cell counting. However, the raw signals in a PAFC system contain numerous noises, such as environmental noise, device noise, and in vivo motion noise. Conventional denoising algorithms such as the wavelet denoising (WD) method and the means filter (MF) method rely on local information to extract the data of clinical interest, which removes subtle features and leaves residual noise. To address these issues, the nonlocal means (NLM) method, based on nonlocal data, is proposed to suppress the noise in PA signals. Extensive experiments on in vivo PA signals from mice injected with B16F10 cells in the caudal vein have been conducted. All the results indicate that the NLM method has superior noise reduction performance and preserves subtle information.

  16. A Novel Method for Vertical Acceleration Noise Suppression of a Thrust-Vectored VTOL UAV.

    PubMed

    Li, Huanyu; Wu, Linfeng; Li, Yingjie; Li, Chunwen; Li, Hangyu

    2016-12-02

    Acceleration is of great importance in motion control for unmanned aerial vehicles (UAVs), especially during the takeoff and landing stages. However, the measured acceleration is inevitably polluted by severe noise. Therefore, a proper noise suppression procedure is required. This paper presents a novel method to reduce the noise in the measured vertical acceleration for a thrust-vectored tail-sitter vertical takeoff and landing (VTOL) UAV. In the new procedure, a Kalman filter is first applied to estimate the UAV mass by using the information in the vertical thrust and measured acceleration. The UAV mass is then used to compute an estimate of UAV vertical acceleration. The estimated acceleration is finally fused with the measured acceleration to obtain the minimum variance estimate of vertical acceleration. By doing this, the new approach incorporates the thrust information into the acceleration estimate. The method is applied to the data measured in a VTOL UAV takeoff experiment. Two other denoising approaches developed by previous researchers are also tested for comparison. The results demonstrate that the new method is able to suppress the acceleration noise substantially. It also maintains real-time performance in the final estimated acceleration, which the other denoising approaches do not achieve. The acceleration treated with the new method can be readily used in motion control applications for UAVs to achieve improved accuracy.
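
    A schematic Python version of the procedure just described: a scalar Kalman filter tracks the slowly varying mass m through the relation a ≈ T/m − g, and the thrust-based acceleration is then fused with the measurement by inverse-variance weighting. The noise parameters, initial values and the linearized observation model are placeholder assumptions, not the authors' settings:

```python
import numpy as np

def fuse_vertical_acceleration(thrust, acc_meas, g=9.81,
                               q=1e-6, r=1.0, m0=2.0, p0=1.0):
    """Thrust-aided vertical acceleration estimate (sketch).

    q, r: process/measurement noise variances (placeholders).
    m0, p0: initial mass estimate and its variance (placeholders).
    """
    m, p = m0, p0
    fused = np.empty(len(acc_meas), dtype=float)
    for k, (T, a) in enumerate(zip(thrust, acc_meas)):
        p += q                       # predict: mass is nearly constant
        H = -T / m**2                # d(T/m - g)/dm, linearized observation
        K = p * H / (H * p * H + r)
        m += K * (a - (T / m - g))   # update mass with the innovation
        p *= 1.0 - K * H
        a_model = T / m - g          # thrust-based acceleration
        var_model = (T / m**2) ** 2 * p
        # Minimum-variance fusion of model-based and measured values.
        w = r / (r + var_model)
        fused[k] = w * a_model + (1.0 - w) * a
    return fused
```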

  18. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  19. A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes.

    PubMed

    van Gennip, Yves; Athavale, Prashant; Gilles, Jérôme; Choksi, Rustum

    2015-09-01

    QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.

  20. Post-Processing of Low Dose Mammography Images

    DTIC Science & Technology

    2002-05-01

    method of restoring images in the presence of blur as well as noise” (12:276). The deblurring and denoising characteristics make Wiener filtering... independent noise. The signal-dependent scatter noise can be modeled as blur in the mammography image. A Wiener filter with deblurring characteristics can... centered on. This method is used to eradicate noise impulses with high pixel values (2:7). For the research at hand, the median filter would

  1. 2D biological representations with reduced speckle obtained from two perpendicular ultrasonic arrays.

    PubMed

    Rodriguez-Hernandez, Miguel A; Gomez-Sacristan, Angel; Sempere-Payá, Víctor M

    2016-04-29

    Ultrasound diagnosis is a widely used medical tool. Among the various ultrasound techniques, ultrasonic imaging is particularly relevant. This paper presents an improvement to a two-dimensional (2D) ultrasonic system based on measurements taken from perpendicular planes, in which digital signal processing techniques are used to combine one-dimensional (1D) A-scans acquired by individual transducers in arrays located in perpendicular planes. The algorithm used to combine the measurements is improved by means of the wavelet transform, adding a denoising step to the 2D representation generation process. The inclusion of this new denoising stage yields higher quality 2D representations with a reduced level of speckle. The paper includes different 2D representations obtained from noisy A-scans and compares the improvements obtained by including the denoising stage.

  2. Advanced prior modeling for 3D bright field electron tomography

    NASA Astrophysics Data System (ADS)

    Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.

    2015-03-01

    Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear as to how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
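
    The plug-and-play decoupling admits a compact generic sketch: an ADMM loop in which the data-fidelity update uses the forward model and the prior update is simply a call to a denoiser. The Python toy below, with Gaussian blur as a self-adjoint forward operator and Gaussian smoothing standing in for 3D NLM, illustrates the structure only; it is not the authors' bright-field MBIR code, and the step sizes and iteration counts are arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(y, forward, adjoint, denoise, rho=1.0, n_iter=50):
    """Plug-and-play ADMM: any denoiser serves as the prior step."""
    x = adjoint(y)
    v, u = x.copy(), np.zeros_like(x)
    for _ in range(n_iter):
        # Data-fidelity step: a few gradient steps on the augmented term.
        for _ in range(5):
            grad = adjoint(forward(x) - y) + rho * (x - v + u)
            x = x - 0.1 * grad
        v = denoise(x + u)     # prior step: plug in any denoiser here
        u = u + x - v          # dual variable update
    return v

# Toy usage: Gaussian blur is self-adjoint, so adjoint == forward.
blur = lambda z: gaussian_filter(z, 1.5)
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0
y = blur(clean) + 0.05 * rng.standard_normal(clean.shape)
x_hat = pnp_admm(y, blur, blur, lambda z: gaussian_filter(z, 0.8))
```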

  3. Denoising of gravitational wave signals via dictionary learning algorithms

    NASA Astrophysics Data System (ADS)

    Torres-Forné, Alejandro; Marquina, Antonio; Font, José A.; Ibáñez, José M.

    2016-12-01

    Gravitational wave astronomy has become a reality after the historical detections accomplished during the first observing run of the two advanced LIGO detectors. In the following years, the number of detections is expected to increase significantly with the full commissioning of the advanced LIGO, advanced Virgo and KAGRA detectors. The development of sophisticated data analysis techniques to improve the opportunities of detection for low signal-to-noise-ratio events is hence a crucial effort. In this paper, we present one such technique, dictionary-learning algorithms, which have been extensively developed in the last few years and successfully applied mostly in the context of image processing. However, to the best of our knowledge, such algorithms have not yet been employed to denoise gravitational wave signals. By building dictionaries from numerical relativity templates of both binary black hole mergers and bursts of rotational core collapse, we show how machine-learning algorithms based on dictionaries can also be successfully applied for gravitational wave denoising. We use a subset of signals from both catalogs, embedded in nonwhite Gaussian noise, to assess our techniques with a large sample of tests and to find the best model parameters. The application of our method to the actual signal GW150914 shows promising results. Dictionary-learning algorithms could be a complementary addition to the gravitational wave data analysis toolkit. They may be used to extract signals from noise and to infer physical parameters if the data are in good enough agreement with the morphology of the dictionary atoms.

  4. Recent advances in quantitative analysis of fluid interfaces in multiphase fluid flow measured by synchrotron-based x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Schlueter, S.; Sheppard, A.; Wildenschild, D.

    2013-12-01

    Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions and, with current technological progress, direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, like merging different scans of the same sample obtained at different beam energies into a single image or the generation of isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. In a synthetic test image, some local segmentation methods like Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error-prone.
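
    Point (i) is easy to demonstrate: on noisy data a histogram threshold drifts, while a denoising step first makes it stable. The short Python example below (NumPy, SciPy, scikit-image) uses a toy two-phase image and a median filter; both are illustrative choices, not the study's actual pipeline:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu

rng = np.random.default_rng(1)
# Toy two-phase "tomogram": a bright disk on a dark background plus noise.
yy, xx = np.mgrid[:128, :128]
phantom = ((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(float)
noisy = phantom + 0.4 * rng.standard_normal(phantom.shape)

# Denoising before thresholding makes histogram-based detection stable.
t_raw = threshold_otsu(noisy)
t_den = threshold_otsu(median_filter(noisy, size=5))
print(f"Otsu threshold on raw: {t_raw:.3f}, after denoising: {t_den:.3f}")
```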

  5. 3-D discrete analytical ridgelet transform.

    PubMed

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform with the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin defined from their orthogonal projections and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potentiality of this new discrete transform, we apply the 3-D DART and its extension to the Local-DART (with smooth windowing) to the denoising of 3-D images and color video. These experimental results show that the simple thresholding of the 3-D DART coefficients is efficient.

  6. Wavelet-Based Adaptive Denoising of Phonocardiographic Records

    DTIC Science & Technology

    2001-10-25

    phonocardiography, including the recording of fetal heart sounds on the maternal abdominal surface. Keywords - phonocardiography, wavelets, denoising, signal... fetal heart rate monitoring [2], [7], [8]. Unfortunately, heart sound records are very often disturbed by various factors, which can prohibit their... recorded the acoustic signals. The first microphone was inserted into the focus of a stethoscope and it recorded the acoustic signals of the heart (heart

  7. Quantitative Damage Detection and Sparse Sensor Array Optimization of Carbon Fiber Reinforced Resin Composite Laminates for Wind Turbine Blade Structural Health Monitoring

    PubMed Central

    Li, Xiang; Yang, Zhibo; Chen, Xuefeng

    2014-01-01

    The active structural health monitoring (SHM) approach for the complex composite laminate structures of wind turbine blades (WTBs) addresses the important and complicated problem of signal noise. After illustrating the wind energy industry's development perspectives and its crucial requirement for SHM, an improved redundant second generation wavelet transform (IRSGWT) pre-processing algorithm based on neighboring coefficients is introduced for denoising feeble signals. The method avoids the drawbacks of conventional wavelet methods, which lose information in transforms, and the shortcomings of redundant second generation wavelet (RSGWT) denoising, which can lead to error propagation. For large-scale WTB composites, minimizing the number of sensors while ensuring accuracy is also a key issue. A sparse sensor array optimization of composites for WTB applications is proposed that can reduce the number of transducers that must be used. Compared to a full sixteen-transducer array, the optimized eight-transducer configuration displays better accuracy in identifying the correct position of simulated damage (mass of load) on composite laminates with anisotropic characteristics than a non-optimized array. It can help guarantee more flexible and qualified monitoring of the areas that suffer damage most frequently. The proposed methods are verified experimentally on specimens of carbon fiber reinforced resin composite laminates. PMID:24763210

  8. Hybrid Wavelet De-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series

    NASA Astrophysics Data System (ADS)

    WANG, D.; Wang, Y.; Zeng, X.

    2017-12-01

    Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, Wavelet De-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influences at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate the model performance. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when the extreme events are included within a time series.
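
    The WD stage is standard wavelet threshold denoising: decompose, shrink the detail coefficients, reconstruct. A minimal Python sketch with PyWavelets follows; the db4 wavelet, decomposition level and universal threshold are common defaults assumed here, since the abstract does not state the authors' settings:

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Wavelet de-noising (WD): decompose, soft-threshold, reconstruct."""
    x = np.asarray(x, dtype=float)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Robust noise estimate from the finest detail scale.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(x)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]
```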

  9. Adaptive fault feature extraction from wayside acoustic signals from train bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie

    2018-07-01

    Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, the bearing fault information is normally masked by strong background noises and harmonic interferences generated by other components (e.g. axles and gears). In order to extract the bearing fault feature information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter singular vectors (SVs) in the ISVD method as an extension of the singular value decomposition (SVD) theorem. Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimize the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high and low resonance components, the latter of which contains bearing fault information. However, the high level of noise usually results in poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal must first be de-noised using the ISVD component of the ISVD-RSSD method. Next, the de-noised signal is decomposed by using the RSSD method. The obtained low resonance component is then demodulated with a Hilbert transform such that the bearing fault can be detected by observing Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory- and field-based experiments, as described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.

  10. Bayesian demosaicing using Gaussian scale mixture priors with local adaptivity in the dual tree complex wavelet packet transform domain

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried

    2013-02-01

    In digital cameras and mobile phones, there is an ongoing trend to increase the image resolution, decrease the sensor size and use lower exposure times. Because smaller sensors inherently lead to more noise and a worse spatial resolution, digital post-processing techniques are required to resolve many of the artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular for price and power consumption reasons. However, color filter arrays require the use of a post-processing technique such as demosaicing to recover full resolution RGB images. Recently, there has been some interest in techniques that jointly perform the demosaicing and denoising. This has the advantage that the demosaicing and denoising can be performed optimally (e.g. in the MSE sense) for the considered noise model, while avoiding artifacts introduced when using demosaicing and denoising sequentially. In this paper, we will continue the research line of wavelet-based demosaicing techniques. These approaches are computationally simple and very suited for combination with denoising. Therefore, we will derive Bayesian minimum mean squared error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we will use Gaussian Scale Mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image, while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing etc) of a 12 megapixel RAW image takes 3.5 sec on a recent mid-range GPU.

  11. Toward automated denoising of single molecular Förster resonance energy transfer data

    NASA Astrophysics Data System (ADS)

    Lee, Hao-Chih; Lin, Bo-Lin; Chang, Wei-Hau; Tu, I.-Ping

    2012-01-01

    A wide-field two-channel fluorescence microscope is a powerful tool as it allows for the study of conformation dynamics of hundreds to thousands of immobilized single molecules by Förster resonance energy transfer (FRET) signals. To date, the data reduction from a movie to a final set containing meaningful single-molecule FRET (smFRET) traces involves human inspection and intervention at several critical steps, greatly hampering the efficiency at the post-imaging stage. To facilitate the data reduction from smFRET movies to smFRET traces and to address the noise-limited issues, we developed a statistical denoising system toward fully automated processing. This data reduction system has embedded several novel approaches. First, as to background subtraction, high-order singular value decomposition (HOSVD) method is employed to extract spatial and temporal features. Second, to register and map the two color channels, the spots representing bleeding through the donor channel to the acceptor channel are used. Finally, correlation analysis and likelihood ratio statistic for the change point detection (CPD) are developed to study the two channels simultaneously, resolve FRET states, and report the dwelling time of each state. The performance of our method has been checked using both simulation and real data.

  12. Sparsity-based image monitoring of crystal size distribution during crystallization

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Huo, Yan; Ma, Cai Y.; Wang, Xue Z.

    2017-07-01

    To facilitate monitoring crystal size distribution (CSD) during a crystallization process by using an in-situ imaging system, a sparsity-based image analysis method is proposed for real-time implementation. To cope with image degradation arising from in-situ measurement subject to particle motion, solution turbulence, and uneven illumination background in the crystallizer, sparse representation of a real-time captured crystal image is developed based on using an in-situ image dictionary established in advance, such that the noise components in the captured image can be efficiently removed. Subsequently, the edges of a crystal shape in a captured image are determined in terms of the salience information defined from the denoised crystal images. These edges are used to derive a blur kernel for reconstruction of a denoised image. A non-blind deconvolution algorithm is given for the real-time reconstruction. Consequently, image segmentation can be easily performed for evaluation of CSD. The crystal image dictionary and blur kernels are timely updated in terms of the imaging conditions to improve the restoration efficiency. An experimental study on the cooling crystallization of α-type L-glutamic acid (LGA) is shown to demonstrate the effectiveness and merit of the proposed method.

  13. Graph cuts for curvature based image denoising.

    PubMed

    Bae, Egil; Shi, Juan; Tai, Xue-Cheng

    2011-05-01

    Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
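
    For reference, the classical TV baseline that these higher-order models improve on is available as a one-liner in scikit-image; on piecewise-smooth images it denoises well but introduces the staircasing that Euler's elastica targets. The toy image and weight below are arbitrary:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(2)
img = np.zeros((96, 96))
img[20:70, 30:80] = 1.0                      # piecewise-constant test image
noisy = img + 0.3 * rng.standard_normal(img.shape)

# Classical ROF-style TV denoising (Chambolle's algorithm).
tv = denoise_tv_chambolle(noisy, weight=0.15)
```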

  14. Improving the signal analysis for in vivo photoacoustic flow cytometry

    NASA Astrophysics Data System (ADS)

    Niu, Zhenyu; Yang, Ping; Wei, Dan; Tang, Shuo; Wei, Xunbin

    2015-03-01

    At an early stage of cancer, a small number of circulating tumor cells (CTCs) appear in the blood circulation. Thus, early detection of malignant circulating tumor cells has great significance for timely treatment to reduce the cancer death rate. We have developed an in vivo photoacoustic flow cytometry (PAFC) system to monitor the metastatic process of CTCs and record the signals from target cells. Information about the target cells, which is helpful for early therapy, is obtained by analyzing and processing these signals. The raw signal detected from target cells often contains noise caused by electronic devices, such as background noise and thermal noise. We choose the wavelet denoising method to effectively distinguish the target signal from background noise. Processing in the time domain and the frequency domain is combined to analyze the signal after denoising. This algorithm contains a time-domain filter and a frequency transformation. The frequency spectrum of the signal contains distinctive features that can be used to analyze the properties of target cells or particles. The PAFC technique can detect signals from circulating tumor cells or other particles. The processing methods have great potential for analyzing signals accurately and rapidly.

  15. A New Method for Suppressing Periodic Narrowband Interference Based on the Chaotic van der Pol Oscillator

    NASA Astrophysics Data System (ADS)

    Lu, Jia; Zhang, Xiaoxing; Xiong, Hao

    The chaotic van der Pol oscillator is a powerful tool for detecting defects in electric systems by using online partial discharge (PD) monitoring. This paper focuses on detecting weak PD signals buried in strong periodic narrowband interference, exploiting the chaotic system's high sensitivity to periodic narrowband interference signals and its immunity to white noise and PD signals. A new approach to removing the periodic narrowband interference by using a van der Pol chaotic oscillator is described by analyzing the motion characteristics of the chaotic oscillator on the basis of the van der Pol equation. Furthermore, the Floquet index for measuring the amplitude of periodic narrowband signals is redefined. The denoised signal processed by the chaotic van der Pol oscillators is further processed by wavelet analysis. Finally, the denoising results verify that the periodic narrowband and white noise interference can be removed efficiently by combining the theory of the chaotic van der Pol oscillator with wavelet analysis.

  16. Wavelets, ridgelets, and curvelets for Poisson noise removal.

    PubMed

    Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc

    2008-07-01

    In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance. This new transform, which can be deemed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme reconstructs properly the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
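
    The building block of any such scheme is the variance stabilizing transform itself; for raw (unfiltered) Poisson counts it is the Anscombe transform. A minimal Python illustration, using the simple algebraic inverse rather than the unbiased inverse preferred in careful implementations:

```python
import numpy as np

def anscombe(x):
    """Anscombe VST: Poisson counts -> approximately unit-variance Gaussian."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at very low counts)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(3)
counts = rng.poisson(lam=5.0, size=100_000)
print(anscombe(counts).std())   # close to 1 unless the intensity is very low
```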

  17. An enhanced approach for biomedical image restoration using image fusion techniques

    NASA Astrophysics Data System (ADS)

    Karam, Ghada Sabah; Abbas, Fatma Ismail; Abood, Ziad M.; Kadhim, Kadhim K.; Karam, Nada S.

    2018-05-01

    Biomedical images are generally noisy and slightly blurred due to the physical mechanisms of the acquisition process, so common degradations in biomedical images are noise and poor contrast. The idea of biomedical image enhancement is to improve the quality of the image for early diagnosis. In this paper we use the wavelet transform to remove Gaussian noise from biomedical images, a positron emission tomography (PET) image and a radiography (Radio) image, in different color spaces (RGB, HSV, YCbCr), and we fuse the denoised images resulting from the above denoising techniques using an image-addition method. Then quantitative performance metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean square error (MSE) are computed, since these statistical measurements help in the assessment of fidelity and image quality. The results show that our approach can be applied to different color spaces for biomedical images.

  18. Detecting intrinsic dynamics of traffic flow with recurrence analysis and empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Xiong, Hui; Shang, Pengjian; Bian, Songhan

    2017-05-01

    In this paper, we apply the empirical mode decomposition (EMD) method to the recurrence plot (RP) and recurrence quantification analysis (RQA), to evaluate the frequency- and time-evolving dynamics of the traffic flow. Based on the cumulative intrinsic mode functions extracted by the EMD, the frequency-evolving RP regarding different oscillation modes suggests that the apparent dynamics of the data considered are mainly dominated by components of medium and low frequencies, while severely affected by fast-oscillating noises contained in the signal. Noises are then eliminated to analyze the intrinsic dynamics; consequently, the denoised time-evolving RQA characterizes the properties of the signal in diverse ways and marks crucial points more accurately where white bands in the RP occur, whereas a strong qualitative agreement exists between all the non-denoised RQA measures. Generally, the EMD combined with recurrence analysis sheds more reliable and abundant light on the intrinsic dynamics of traffic flow, which is meaningful for the empirical analysis of complex systems.

  19. Enhancing seismic P phase arrival picking based on wavelet denoising and kurtosis picker

    NASA Astrophysics Data System (ADS)

    Shang, Xueyi; Li, Xibing; Weng, Lei

    2018-01-01

    P phase arrival picking of weak signals is still challenging in seismology. A wavelet denoising is proposed to enhance seismic P phase arrival picking, and the kurtosis picker is applied on the wavelet-denoised signal to identify the P phase arrival. It has been called the WD-K picker. The WD-K picker, which is different from traditional wavelet-based pickers built on a single wavelet component or certain main wavelet components, takes full advantage of the reconstruction of the main detail wavelet components and the approximate wavelet component. The proposed WD-K picker considers more wavelet components and presents a better P phase arrival feature. The WD-K picker has been evaluated on 500 micro-seismic signals recorded in the Chinese Yongshaba mine. The comparison between the WD-K pickings and manual pickings shows the good picking accuracy of the WD-K picker. Furthermore, the WD-K picking performance has been compared with the main detail wavelet component combining-based kurtosis (WDC-K) picker, the single wavelet component-based kurtosis (SW-K) picker, and the certain main wavelet component-based maximum kurtosis (MMW-K) picker. The comparison has demonstrated that the WD-K picker has better picking accuracy than the other three wavelet- and kurtosis-based pickers, thus showing the enhanced ability of wavelet denoising.
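
    The kurtosis picker itself is simple to sketch: slide a window along the (denoised) trace, compute its kurtosis, and place the pick where the characteristic function rises most steeply as the impulsive P onset enters the window. This is a generic illustration in Python/SciPy; the window length is a guess, not the paper's value:

```python
import numpy as np
from scipy.stats import kurtosis

def kurtosis_pick(x, win=200):
    """Sliding-window kurtosis characteristic function for P-phase picking.

    Returns the sample index of the steepest kurtosis increase and the
    characteristic function itself.
    """
    x = np.asarray(x, dtype=float)
    cf = np.array([kurtosis(x[i - win:i]) for i in range(win, len(x))])
    onset = int(np.argmax(np.diff(cf))) + win
    return onset, cf
```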

  20. Wavelet-based de-noising algorithm for images acquired with parallel magnetic resonance imaging (MRI).

    PubMed

    Delakis, Ioannis; Hammad, Omer; Kitney, Richard I

    2007-07-07

    Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.

  1. Multiscale properties of weighted total variation flow with applications to denoising and registration.

    PubMed

    Athavale, Prashant; Xu, Robert; Radau, Perry; Nachman, Adrian; Wright, Graham A

    2015-07-01

    Images consist of structures of varying scales: large scale structures such as flat regions, and small scale structures such as noise, textures, and rapidly oscillatory patterns. In the hierarchical (BV, L(2)) image decomposition, Tadmor, et al. (2004) start with extracting coarse scale structures from a given image, and successively extract finer structures from the residuals in each step of the iterative decomposition. We propose to begin instead by extracting the finest structures from the given image and then proceed to extract increasingly coarser structures. In most images, noise could be considered as a fine scale structure. Thus, starting the image decomposition with finer scales, rather than large scales, leads to fast denoising. We note that our approach turns out to be equivalent to the nonstationary regularization in Scherzer and Weickert (2000). The continuous limit of this procedure leads to a time-scaled version of total variation flow. Motivated by specific clinical applications, we introduce an image depending weight in the regularization functional, and study the corresponding weighted TV flow. We show that the edge-preserving property of the multiscale representation of an input image obtained with the weighted TV flow can be enhanced and localized by appropriate choice of the weight. We use this in developing an efficient and edge-preserving denoising algorithm with control on speed and localization properties. We examine analytical properties of the weighted TV flow that give precise information about the denoising speed and the rate of change of energy of the images. An additional contribution of the paper is to use the images obtained at different scales for robust multiscale registration. We show that the inherently multiscale nature of the weighted TV flow improved performance for registration of noisy cardiac MRI images, compared to other methods such as bilateral or Gaussian filtering. A clinical application of the multiscale registration algorithm is also demonstrated for aligning viability assessment magnetic resonance (MR) images from 8 patients with previous myocardial infarctions. Copyright © 2015. Published by Elsevier B.V.

  2. Removal of muscle artifact from EEG data: comparison between stochastic (ICA and CCA) and deterministic (EMD and wavelet-based) approaches

    NASA Astrophysics Data System (ADS)

    Safieddine, Doha; Kachenoura, Amar; Albera, Laurent; Birot, Gwénaël; Karfoul, Ahmad; Pasnicu, Anca; Biraben, Arnaud; Wendling, Fabrice; Senhadji, Lotfi; Merlet, Isabelle

    2012-12-01

    Electroencephalographic (EEG) recordings are often contaminated with muscle artifacts. This disturbing myogenic activity not only strongly affects the visual analysis of EEG, but also most surely impairs the results of EEG signal processing tools such as source localization. This article focuses on the particular context of the contamination of epileptic signals (interictal spikes) by muscle artifact, as EEG is a key diagnosis tool for this pathology. In this context, our aim was to compare the ability of two stochastic approaches of blind source separation, namely independent component analysis (ICA) and canonical correlation analysis (CCA), and of two deterministic approaches, namely empirical mode decomposition (EMD) and wavelet transform (WT), to remove muscle artifacts from EEG signals. To quantitatively compare the performance of these four algorithms, epileptic spike-like EEG signals were simulated from two different source configurations and artificially contaminated with different levels of real EEG-recorded myogenic activity. The efficiency of CCA, ICA, EMD, and WT to correct the muscular artifact was evaluated both by calculating the normalized mean-squared error between denoised and original signals and by comparing the results of source localization obtained from artifact-free as well as noisy signals, before and after artifact correction. Tests on real data recorded in an epileptic patient are also presented. The results obtained in the context of simulations and real data show that EMD outperformed the three other algorithms for the denoising of data highly contaminated by muscular activity. For less noisy data, and when spikes arose from a single cortical source, the myogenic artifact was best corrected with CCA and ICA. Otherwise, when spikes originated from two distinct sources, either EMD or ICA offered the most reliable denoising result for highly noisy data, while WT offered the better denoising result for less noisy data. These results suggest that the performance of muscle artifact correction methods strongly depends on the level of data contamination and on the source configuration underlying EEG signals. Eventually, some insights into the numerical complexity of these four algorithms are given.

  3. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms have been used, filtered back-projection (FBP) is still the classical and commonly used algorithm in clinical MI. In the FBP algorithm, filtering of the original projection data is a key step for overcoming artifacts in the reconstructed image. Simple use of classical filters, such as the Shepp-Logan (SL) and Ram-Lak (RL) filters, has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noises. Therefore, an improved wavelet denoising combined with a parallel-beam FBP algorithm is used to enhance the quality of the reconstructed image in this paper. In the experiments, the reconstruction results of the improved wavelet denoising were compared with those of other methods (direct FBP, mean-filter-combined FBP, and median-filter-combined FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were each tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms on two evaluation standards, mean-square error (MSE) and peak signal-to-noise ratio (PSNR), the reconstruction of the improved FBP based on db2 and the Hanning filter at decomposition scale 2 was best: its MSE was lower and its PSNR higher than the others. Therefore, this improved FBP algorithm has potential value in medical imaging.

  4. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth images were generated by applying the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained so that, given an unprocessed input image, the quality of the output image stayed close to that of the ground-truth image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  5. Learning implicit brain MRI manifolds with deep learning

    NASA Astrophysics Data System (ADS)

    Bermudez, Camilo; Plassard, Andrew J.; Davis, Larry T.; Newton, Allen T.; Resnick, Susan M.; Landman, Bennett A.

    2018-03-01

    An important task in image processing and neuroimaging is to extract quantitative information from the acquired images in order to make observations about the presence of disease or markers of development in populations. Having a low-dimensional manifold of an image allows for easier statistical comparisons between groups and the synthesis of group representatives. Previous studies have sought to identify the best mapping of brain MRI to a low-dimensional manifold, but have been limited by assumptions of explicit similarity measures. In this work, we use deep learning techniques to investigate implicit manifolds of normal brains and generate new, high-quality images. We explore implicit manifolds by addressing the problems of image synthesis and image denoising as important tools in manifold learning. First, we propose the unsupervised synthesis of T1-weighted brain MRI using a Generative Adversarial Network (GAN) by learning from 528 examples of 2D axial slices of brain MRI. Synthesized images were first shown to be unique by performing a cross-correlation with the training set. Real and synthesized images were then assessed in a blinded manner by two imaging experts providing an image quality score of 1-5. The quality score of the synthetic images showed substantial overlap with that of the real images. Moreover, we use an autoencoder with skip connections for image denoising, showing that the proposed method results in a higher PSNR than FSL SUSAN after denoising. This work shows the power of artificial networks to synthesize realistic imaging data, which can be used to improve image processing techniques and provide a quantitative framework for assessing structural changes in the brain.

  6. Parallel transformation of K-SVD solar image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Liang, Youwen; Tian, Yu; Li, Mei

    2017-02-01

    The images obtained by observing the sun through a large telescope always suffer from noise due to the low SNR. The K-SVD denoising algorithm can effectively remove Gaussian white noise, but training dictionaries for sparse representations is a time-consuming task, due to the large size of the data involved and the complexity of the training algorithms. In this paper, OpenMP parallel programming is used to transform the serial algorithm into a parallel version. A data-parallelism model is used to transform the algorithm; the biggest change is that multiple atoms, rather than a single atom, are updated simultaneously. The denoising effect and acceleration performance were tested after completion of the parallel algorithm. The speedup of the program is 13.563 when using 16 cores. This parallel version can fully utilize multi-core CPU hardware resources, greatly reduces running time, and is easy to port to other multi-core platforms.

  7. 3D Data Denoising via Nonlocal Means Filter by Using Parallel GPU Strategies

    PubMed Central

    Cuomo, Salvatore; De Michele, Pasquale; Piccialli, Francesco

    2014-01-01

    The Nonlocal Means (NLM) algorithm is widely considered a state-of-the-art denoising filter in many research fields. Its high computational complexity has led researchers to develop parallel programming approaches and to use massively parallel architectures such as GPUs. In recent years, GPU devices have made it possible to achieve reasonable running times by filtering 3D datasets slice-by-slice with a 2D NLM algorithm. In our approach we design and implement a fully 3D Nonlocal Means parallel approach, adopting different algorithm mapping strategies on a GPU architecture and a multi-GPU framework, in order to demonstrate its high applicability and scalability. The experimental results we obtained encourage the usability of our approach in a large spectrum of applicative scenarios such as magnetic resonance imaging (MRI) or video sequence denoising. PMID:25045397

  8. The Research on Denoising of SAR Image Based on Improved K-SVD Algorithm

    NASA Astrophysics Data System (ADS)

    Tan, Linglong; Li, Changkai; Wang, Yueqin

    2018-04-01

    SAR images often suffer noise interference in the process of acquisition and transmission, which can greatly reduce the quality of the images and cause great difficulties for image processing. The existing complete DCT dictionary algorithm is fast in processing speed, but its denoising effect is poor. To address this problem of poor denoising, this paper applies the K-SVD (K-means and singular value decomposition) algorithm to image noise suppression. Firstly, the sparse dictionary structure is introduced in detail. The dictionary has a compact representation and can effectively train the image signal. Then, the sparse dictionary is trained by the K-SVD algorithm according to the sparse representation of the dictionary. The algorithm has advantages in high-dimensional data processing. Experimental results show that the proposed algorithm can remove speckle noise more effectively than the complete DCT dictionary and retains edge details better.
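
    A patch-based sketch of the same pipeline in Python, with scikit-learn's MiniBatchDictionaryLearning standing in for K-SVD (scikit-learn does not ship K-SVD itself): learn a dictionary on noisy patches, sparse-code every patch with OMP, and average the overlapping reconstructions. Patch size, atom count and sparsity level are arbitrary illustrative choices:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

def dictionary_denoise(noisy, patch=(8, 8), n_atoms=64):
    """Patch-based dictionary denoising in the spirit of K-SVD."""
    train = extract_patches_2d(noisy, patch, max_patches=2000, random_state=0)
    data = train.reshape(len(train), -1)
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms, transform_algorithm="omp",
        transform_n_nonzero_coefs=4, random_state=0,
    ).fit(data - data.mean(axis=1, keepdims=True))
    patches = extract_patches_2d(noisy, patch)
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=1, keepdims=True)
    code = dico.transform(flat - mean)          # sparse codes via OMP
    recon = (code @ dico.components_ + mean).reshape(patches.shape)
    # Overlapping patch estimates are averaged into the final image.
    return reconstruct_from_patches_2d(recon, noisy.shape)
```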

  9. A primitive study of voxel feature generation by multiple stacked denoising autoencoders for detecting cerebral aneurysms on MRA

    NASA Astrophysics Data System (ADS)

    Nemoto, Mitsutaka; Hayashi, Naoto; Hanaoka, Shouhei; Nomura, Yukihiro; Miki, Soichiro; Yoshikawa, Takeharu; Ohtomo, Kuni

    2016-03-01

    The purpose of this study is to evaluate the feasibility of a novel feature generation method, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). It is hard and time-consuming to optimize the hyperparameters of DNNs such as the stacked denoising autoencoder (SdA). The proposed method allows using SdA-based features without the burden of hyperparameter setting. The proposed method was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). A baseline CADe process included four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature generation method was applied to extract the optimal features for candidate classification, and only required setting the range of the hyperparameters for the SdA. The optimal feature set was selected from a large quantity of SdA-based features by multiple SdAs, each of which was trained using a different hyperparameter set. The feature selection was performed through the AdaBoost ensemble learning method. Training of the baseline CADe process and the proposed feature generation were performed with 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA-based features with only the ranges of some SdA hyperparameters having to be set. The CADe process using both the previous voxel features and the SdA-based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. The results show that the proposed method was effective in the application for detecting cerebral aneurysms on MRA.

  10. Multiclass Data Segmentation using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    37] that performs interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph... continuous setting carry over to the discrete graph representation. For general data segmentation, Bresson et al. in [8], present rigorous convergence

  11. Identification Method of Mud Shale Fractures Base on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Xia, Weixu; Lai, Fuqiang; Luo, Han

    2018-01-01

    In recent years, inspired by seismic analysis technology, a new method has emerged for analysing fractures in mud shale oil and gas reservoirs from logging attributes. By extracting the high-frequency attribute of the wavelet transform from the logging attribute, the formation information hidden in the logging signal is extracted, and fractures that are not recognized by conventional logging are identified; in the identified fracture segments, responses such as “cycle jump”, “high value” and “spike” become more obvious. Finally, a complete wavelet denoising method and a wavelet high-frequency fracture identification method are formed.

  12. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    PubMed

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging employs four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
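
    A toy Python illustration of GP-based interpolation for one mosaic channel, using scikit-learn's GaussianProcessRegressor with an explicit white-noise kernel so that sensor noise is modeled rather than interpolated. Note this is the naive O(N³) formulation, not the paper's fast grid-exploiting O(N^(3/2)) algorithm; the image, sampling mask and kernel parameters are invented for the example:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
yy, xx = np.mgrid[0:32, 0:32]
truth = np.sin(xx / 5.0) * np.cos(yy / 7.0)       # smooth toy scene

# One channel of a 2x2 mosaic observes every other pixel in each direction.
mask = (yy % 2 == 0) & (xx % 2 == 0)
X_obs = np.column_stack([yy[mask], xx[mask]])
y_obs = truth[mask] + 0.05 * rng.standard_normal(mask.sum())

# RBF models the scene; WhiteKernel absorbs the sensor noise.
gp = GaussianProcessRegressor(kernel=RBF(5.0) + WhiteKernel(0.05**2))
gp.fit(X_obs, y_obs)
full = gp.predict(np.column_stack([yy.ravel(), xx.ravel()])).reshape(32, 32)
```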

  13. Denoising of Raman spectroscopy for biological samples based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    León-Bejarano, Fabiola; Ramírez-Elías, Miguel; Mendez, Martin O.; Dorantes-Méndez, Guadalupe; Rodríguez-Aranda, Ma. Del Carmen; Alba, Alfonso

    Raman spectroscopy of biological samples presents undesirable noise and fluorescence generated by biomolecular excitation. Reducing these types of noise is a fundamental task for recovering the valuable information in the sample under analysis. This paper proposes the application of empirical mode decomposition (EMD) for noise elimination. EMD is a parameter-free, adaptive signal processing method useful for the analysis of nonstationary signals. EMD performance was compared with the commonly used Vancouver algorithm (VRA) on artificial (Teflon), synthetic (vitamin E and paracetamol) and biological (mouse brain and human nails) Raman spectra. The correlation coefficient (ρ) was used as the performance measure. Results on synthetic data showed a better performance of EMD (ρ=0.52) at high noise levels compared with VRA (ρ=0.19). With simulated fluorescence added to the artificial material, both methods recovered a similar fluorescence shape (ρ=0.95 for VRA and ρ=0.93 for EMD). For synthetic data, Raman spectra of vitamin E were used and the results showed good performance for both methods (ρ=0.95 for EMD and ρ=0.99 for VRA). Finally, on biological data, EMD and VRA displayed similar behavior (ρ=0.85 for EMD and ρ=0.96 for VRA), but with the advantage that EMD maintains small-amplitude Raman peaks. The results suggest that EMD could be an effective method for denoising biological Raman spectra: EMD is able to retain information and correctly eliminate the fluorescence without parameter tuning.

  14. A second order derivative scheme based on Bregman algorithm class

    NASA Astrophysics Data System (ADS)

    Campagna, Rosanna; Crisci, Serena; Cuomo, Salvatore; Galletti, Ardelio; Marcellino, Livia

    2016-10-01

    The algorithms based on Bregman iterative regularization are known for efficiently solving convex constrained optimization problems. In this paper, we introduce a second order derivative scheme for the class of Bregman algorithms. Its convergence and stability properties are investigated by means of numerical evidence. Moreover, we apply the proposed scheme to an isotropic Total Variation (TV) problem arising in Magnetic Resonance Image (MRI) denoising. Experimental results confirm that our algorithm has good performance in terms of denoising quality, effectiveness and robustness.

  15. Multiclass Data Segmentation Using Diffuse Interface Methods on Graphs

    DTIC Science & Technology

    2014-01-01

    interactive image segmentation using the solution to a combinatorial Dirichlet problem. Elmoataz et al. have developed generalizations of the graph...Laplacian [25] for image denoising and manifold smoothing. Couprie et al. in [18] define a conveniently parameterized graph-based energy function that...over to the discrete graph representation. For general data segmentation, Bresson et al. in [8], present rigorous convergence results for two algorithms

  16. Poisson noise removal with pyramidal multi-scale transforms

    NASA Astrophysics Data System (ADS)

    Woiselle, Arnaud; Starck, Jean-Luc; Fadili, Jalal M.

    2013-09-01

    In this paper, we introduce a method to stabilize the variance of decimated transforms using one or two variance stabilizing transforms (VST). These VSTs are applied to the 3-D Meyer wavelet pyramidal transform, which is the core of the first-generation 3D curvelets. This allows us to extend these 3-D curvelets to handle Poisson noise, which we apply to the denoising of a simulated cosmological volume.
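
    For intuition, the classical Anscombe transform is the simplest example of a VST: it maps Poisson counts to approximately unit-variance Gaussian data, so a Gaussian denoiser can be applied in between. The paper designs VSTs tailored to decimated multi-scale coefficients; the scalar version below is only the textbook case.

    ```python
    import numpy as np

    def anscombe(x):
        # Var[anscombe(X)] is approximately 1 for X ~ Poisson at moderate intensity
        return 2.0 * np.sqrt(x + 3.0 / 8.0)

    def inverse_anscombe(y):
        # Simple algebraic inverse; unbiased closed-form inverses also exist
        return (y / 2.0) ** 2 - 3.0 / 8.0
    ```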

  17. Comparison of pre-processing techniques for fluorescence microscopy images of cells labeled for actin.

    PubMed

    Muralidhar, Gautam S; Channappayya, Sumohana S; Slater, John H; Blinka, Ellen M; Bovik, Alan C; Frey, Wolfgang; Markey, Mia K

    2008-11-06

    Automated analysis of fluorescence microscopy images of endothelial cells labeled for actin is important for quantifying changes in the actin cytoskeleton. The current manual approach is laborious and inefficient. The goal of our work is to develop automated image analysis methods, thereby increasing cell analysis throughput. In this study, we present preliminary results on comparing different algorithms for cell segmentation and image denoising.

  18. Boundary layer noise subtraction in hydrodynamic tunnel using robust principal component analysis.

    PubMed

    Amailland, Sylvain; Thomas, Jean-Hugh; Pézerat, Charles; Boucheron, Romuald

    2018-04-01

    The acoustic study of propellers in a hydrodynamic tunnel is of paramount importance during the design process, but can involve significant difficulties due to the boundary layer noise (BLN). Indeed, advanced denoising methods are needed to recover the acoustic signal in cases of poor signal-to-noise ratio. The technique proposed in this paper is based on the decomposition of the wall-pressure cross-spectral matrix (CSM) by taking advantage of both the low-rank property of the acoustic CSM and the sparse property of the BLN CSM. Thus, the algorithm belongs to the class of robust principal component analysis (RPCA), which derives from the widely used principal component analysis. If the BLN is spatially decorrelated, the proposed RPCA algorithm can blindly recover the acoustic signals even for negative signal-to-noise ratios. Unfortunately, in a realistic case, acoustic signals recorded in a hydrodynamic tunnel show that the noise may be partially correlated. A prewhitening strategy is then considered in order to take into account the spatially coherent background noise. Numerical simulations and experimental results show an improvement in terms of BLN reduction in the large hydrodynamic tunnel. The effectiveness of the denoising method is also investigated in the context of acoustic source localization.
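
    The low-rank-plus-sparse split at the heart of RPCA can be sketched with a basic ALM/ADMM loop (principal component pursuit). The code below uses the common default lambda = 1/sqrt(max dimension) and a heuristic step size, assumes a real-valued matrix for simplicity, and omits the paper's prewhitening step.

    ```python
    # Principal component pursuit: M ~ L (low-rank, acoustic) + S (sparse, BLN).
    import numpy as np

    def soft_threshold(x, tau):
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def rpca(M, n_iter=200, tol=1e-7):
        lam = 1.0 / np.sqrt(max(M.shape))
        mu = 0.25 * M.size / np.abs(M).sum()  # heuristic step size
        S = np.zeros_like(M)
        Y = np.zeros_like(M)
        for _ in range(n_iter):
            # Low-rank update by singular value thresholding
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = (U * soft_threshold(s, 1.0 / mu)) @ Vt
            # Sparse update by entrywise soft thresholding
            S = soft_threshold(M - L + Y / mu, lam / mu)
            R = M - L - S
            Y += mu * R
            if np.linalg.norm(R) <= tol * np.linalg.norm(M):
                break
        return L, S  # low-rank (acoustic) and sparse (BLN) estimates
    ```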

  19. Three-Dimensional Velocity Field De-Noising using Modal Projection

    NASA Astrophysics Data System (ADS)

    Frank, Sarah; Ameli, Siavash; Szeri, Andrew; Shadden, Shawn

    2017-11-01

    PCMRI and Doppler ultrasound are common modalities for imaging velocity fields inside the body (e.g., blood, air), and PCMRI is increasingly being used for other fluid mechanics applications where optical imaging is difficult. This type of imaging is typically applied to internal flows, which are strongly influenced by domain geometry. While these technologies are evolving, the measured data remain noisy and boundary layers are poorly resolved. We have developed a boundary modal analysis method to de-noise 3D velocity fields such that the resulting field is divergence-free and satisfies no-slip/no-penetration boundary conditions. First, two sets of divergence-free modes are computed based on domain geometry. The first set accounts for flow through ``truncation boundaries'', and the second set of modes has no-slip/no-penetration conditions imposed on all boundaries. The modes are calculated by minimizing the velocity gradient throughout the domain while enforcing a divergence-free condition. The measured velocity field is then projected onto these modes using a least squares algorithm. This method is demonstrated on CFD simulations with artificial noise. Different degrees of noise and different numbers of modes are tested to reveal the capabilities of the approach. American Heart Association Award 17PRE33660202.

  20. Extended Kalman smoother with differential evolution technique for denoising of ECG signal.

    PubMed

    Panigrahy, D; Sahu, P K

    2016-09-01

    The electrocardiogram (ECG) signal provides a great deal of information on the physiology of the heart. In reality, noise from various sources interferes with the ECG signal. To extract the correct information on the physiology of the heart, noise cancellation of the ECG signal is required. In this paper, the effectiveness of an extended Kalman smoother (EKS) with the differential evolution (DE) technique for noise cancellation of the ECG signal is investigated. DE is used as an automatic parameter selection method to select ten optimized components of the ECG signal, which are used to model the ECG signal according to the real ECG signal. These parameters are used by the EKS for the development of the state equation and also for initialization of its parameters. The EKS framework is used for denoising the single-channel ECG signal. The effectiveness of the proposed noise cancellation technique has been evaluated by adding white noise, colored Gaussian noise and real muscle artifact noise at different SNRs to some visually clean ECG signals from the MIT-BIH arrhythmia database. The proposed noise cancellation technique shows better signal-to-noise ratio (SNR) improvement, and lower mean square error (MSE) and percentage root-mean-square difference (PRD), compared to other well-known methods.

  1. Brain-machine interfaces for controlling lower-limb powered robotic systems.

    PubMed

    He, Yongtian; Eguren, David; Azorín, José M; Grossman, Robert G; Luu, Trieu Phat; Contreras-Vidal, Jose L

    2018-04-01

    Lower-limb, powered robotics systems such as exoskeletons and orthoses have emerged as novel robotic interventions to assist or rehabilitate people with walking disabilities. These devices are generally controlled by certain physical maneuvers, for example pressing buttons or shifting body weight. Although effective, these control schemes are not what humans naturally use. The usability and clinical relevance of these robotics systems could be further enhanced by brain-machine interfaces (BMIs). A number of preliminary studies have been published on this topic, but a systematic understanding of the experimental design, tasks, and performance of BMI-exoskeleton systems for restoration of gait is lacking. To address this gap, we applied standard systematic review methodology for a literature search in PubMed and EMBASE databases and identified 11 studies involving BMI-robotics systems. The devices, user population, input and output of the BMIs and robot systems respectively, neural features, decoders, denoising techniques, and system performance were reviewed and compared. Results showed BMIs classifying walk versus stand tasks are the most common. The results also indicate that electroencephalography (EEG) is the only recording method for humans. Performance was not clearly presented in most of the studies. Several challenges were summarized, including EEG denoising, safety, responsiveness and others. We conclude that lower-body powered exoskeletons with automated gait intention detection based on BMIs open new possibilities in the assistance and rehabilitation fields, although the current performance, clinical benefits and several key challenging issues indicate that additional research and development is required to deploy these systems in the clinic and at home. Moreover, rigorous EEG denoising techniques, suitable performance metrics, consistent trial reporting, and more clinical trials are needed to advance the field.

  2. Brain-machine interfaces for controlling lower-limb powered robotic systems

    NASA Astrophysics Data System (ADS)

    He, Yongtian; Eguren, David; Azorín, José M.; Grossman, Robert G.; Phat Luu, Trieu; Contreras-Vidal, Jose L.

    2018-04-01

    Objective. Lower-limb, powered robotics systems such as exoskeletons and orthoses have emerged as novel robotic interventions to assist or rehabilitate people with walking disabilities. These devices are generally controlled by certain physical maneuvers, for example pressing buttons or shifting body weight. Although effective, these control schemes are not what humans naturally use. The usability and clinical relevance of these robotics systems could be further enhanced by brain-machine interfaces (BMIs). A number of preliminary studies have been published on this topic, but a systematic understanding of the experimental design, tasks, and performance of BMI-exoskeleton systems for restoration of gait is lacking. Approach. To address this gap, we applied standard systematic review methodology for a literature search in PubMed and EMBASE databases and identified 11 studies involving BMI-robotics systems. The devices, user population, input and output of the BMIs and robot systems respectively, neural features, decoders, denoising techniques, and system performance were reviewed and compared. Main results. Results showed BMIs classifying walk versus stand tasks are the most common. The results also indicate that electroencephalography (EEG) is the only recording method for humans. Performance was not clearly presented in most of the studies. Several challenges were summarized, including EEG denoising, safety, responsiveness and others. Significance. We conclude that lower-body powered exoskeletons with automated gait intention detection based on BMIs open new possibilities in the assistance and rehabilitation fields, although the current performance, clinical benefits and several key challenging issues indicate that additional research and development is required to deploy these systems in the clinic and at home. Moreover, rigorous EEG denoising techniques, suitable performance metrics, consistent trial reporting, and more clinical trials are needed to advance the field.

  3. Sparse-coding denoising applied to reversible conformational switching of a porphyrin self-assembled monolayer induced by scanning tunnelling microscopy.

    PubMed

    Oliveira, J; Bragança, A M; Alcácer, L; Morgado, J; Figueiredo, M; Bioucas-Dias, J; Ferreira, Q

    2018-04-14

    Scanning tunnelling microscopy (STM) was used to induce conformational molecular switching in a self-assembled monolayer of zinc-octaethylporphyrin at a graphite/tetradecane interface at room temperature. A reversible conformational change controlled by the applied tip voltage was observed. Consecutive STM images acquired at alternating tip voltages showed that at 0.4 V the porphyrin monolayer presents a molecular arrangement formed by alternating rows with two different types of structural conformation, and when the potential is increased to 0.7 V the monolayer presents only one type of conformation. In this paper, we characterize these porphyrin conformational dynamics by analyzing the STM images, which were improved for better quality and interpretation by means of a denoising algorithm adapted from state-of-the-art image processing and analysis methods. STM remains the best technique to 'see' and to manipulate matter at the atomic scale. A very sharp tip a few angstroms from the surface can provide images of molecules and atoms with remarkable resolution. However, these images are strongly affected by noise, which must be corrected and eliminated. This paper presents new computational tools specifically developed to denoise images acquired with STM. The new algorithms were tested on STM images, obtained at room temperature, of a porphyrin monolayer that presents a reversible conformational change as a function of the tip bias voltage. Images with high resolution, acquired in real time, show that the porphyrins have different molecular arrangements depending on whether the tip voltage is 0.4 V or 0.7 V. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  4. Denoising, deconvolving, and decomposing photon observations. Derivation of the D3PO algorithm

    NASA Astrophysics Data System (ADS)

    Selig, Marco; Enßlin, Torsten A.

    2015-02-01

    The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution is not dependent on the underlying position space, the implementation of the D3PO algorithm uses the nifty package to ensure applicability to various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 × 32 arcmin² observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components. A copy of the code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/574/A74

  5. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Man Gyun; Oh, Seungrohk

    A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor the relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space to the neuro-fuzzy system without losing a significant amount of information, the PCA was used to reduce the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and ease the selection of the input signals to the neuro-fuzzy system. By using the residual signals between the estimated signals and the measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors.

  7. Wavelet transform processing applied to partial discharge evaluation

    NASA Astrophysics Data System (ADS)

    Macedo, E. C. T.; Araújo, D. B.; da Costa, E. G.; Freire, R. C. S.; Lopes, W. T. A.; Torres, I. S. M.; de Souza Neto, J. M. R.; Bhatti, S. A.; Glover, I. A.

    2012-05-01

    Partial Discharge (PD) is characterized by high frequency current pulses that occur in high voltage (HV) electrical equipment, originating from the gas ionization process when damaged insulation is subjected to high electric fields [1]. PD monitoring is a useful method of assessing the degree of insulation aging, manufacturing defects, or chemical/mechanical damage. Many sources of noise (e.g. radio transmissions, commutator noise from rotating machines, power electronics switching circuits, corona discharge, etc.) can directly affect PD estimation. Among the many mathematical techniques that can be applied to de-noise PD signals, the wavelet transform is one of the most powerful. It can simultaneously supply information about pulse occurrence time and pulse spectrum, and also de-noise PD signals measured in the field. This paper describes the application of the wavelet transform to the suppression of the main types of noise that can affect the observation and analysis of PD signals in high voltage apparatus. In addition, a study is presented that indicates the appropriate mother wavelet for this application based on the cross-correlation factor.
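
    A minimal wavelet shrinkage recipe of the kind the abstract describes, using PyWavelets (an assumed dependency): soft-threshold the detail coefficients with the universal threshold and a MAD noise estimate. The mother wavelet 'db4' is an illustrative placeholder; the paper selects the mother wavelet via the cross-correlation factor.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=5):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        # Robust noise-scale estimate from the finest detail band
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]
    ```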

  8. Segmentation and detection of breast cancer in mammograms combining wavelet analysis and genetic algorithm.

    PubMed

    Pereira, Danilo Cesar; Ramos, Rodrigo Pereira; do Nascimento, Marcelo Zanchetta

    2014-04-01

    In Brazil, the National Cancer Institute (INCA) reports more than 50,000 new cases of breast cancer, with a risk of 51 cases per 100,000 women. Radiographic images obtained from mammography equipment are one of the most frequently used techniques for aiding early diagnosis. Due to factors related to cost and professional experience, in the last two decades computer systems to support detection (Computer-Aided Detection - CADe) and diagnosis (Computer-Aided Diagnosis - CADx) have been developed to assist experts in detecting abnormalities in their initial stages. Despite the large body of research on CADe and CADx systems, there is still a need for improved computerized methods. Nowadays, there is a growing concern with the sensitivity and reliability of abnormality diagnosis in both views of breast mammographic images, namely cranio-caudal (CC) and medio-lateral oblique (MLO). This paper presents a set of computational tools to aid the segmentation and detection of mammograms that contain a mass or masses in the CC and MLO views. An artifact removal algorithm is first implemented, followed by an image denoising and gray-level enhancement method based on the wavelet transform and Wiener filter. Finally, a method for detection and segmentation of masses using multiple thresholding, wavelet transform and genetic algorithm is employed on mammograms randomly selected from the Digital Database for Screening Mammography (DDSM). The developed method was quantitatively evaluated using the area overlap metric (AOM). The mean ± standard deviation value of AOM for the proposed method was 79.2 ± 8%. The experiments demonstrate that the proposed method has strong potential to be used as the basis for mammogram mass segmentation in CC and MLO views. Another important aspect is that the method overcomes the limitation of analyzing only CC and MLO views. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  9. Small-scale Scheimpflug lidar for aerosol extinction coefficient and vertical atmospheric transmittance detection.

    PubMed

    Sun, Guodong; Qin, Laian; Hou, Zaihong; Jing, Xu; He, Feng; Tan, Fengfu; Zhang, Silong

    2018-03-19

    In this paper, a new prototypical Scheimpflug lidar capable of detecting the aerosol extinction coefficient and vertical atmospheric transmittance at 1 km above the ground is described. The lidar system operates at 532 nm and can be used to detect aerosol extinction coefficients throughout an entire day. The vertical atmospheric transmittance can then be determined from the extinction coefficients by numerical integration over this range. CCD flat fielding of the image data is used to mitigate the effects of pixel sensitivity variation. An efficient two-dimensional wavelet transform method with a local threshold value is proposed to reduce the Gaussian white noise in the lidar signal. Furthermore, a new iterative backscattering-ratio method based on a genetic algorithm is presented to calculate the aerosol extinction coefficient and vertical atmospheric transmittance. Simulations with different levels of noise added to the simulated signal are performed in order to test the precision of the de-noising method and inversion algorithm. The simulation results show that the root-mean-square errors of the extinction coefficients are all less than 0.02 km⁻¹, and that the relative errors of the atmospheric transmittance between the model and inversion data are below 0.56% for all cases. The feasibility of the instrument and the inversion algorithm has also been verified by an optical experiment. The average relative errors of the aerosol extinction coefficients between the Scheimpflug lidar and a conventional backscattering elastic lidar are 3.54% and 2.79% in the full overlap heights at two time points, respectively. This work opens up new possibilities for using a small-scale Scheimpflug lidar system for the remote sensing of atmospheric aerosols.

  10. Double Density Dual Tree Discrete Wavelet Transform implementation for Degraded Image Enhancement

    NASA Astrophysics Data System (ADS)

    Vimala, C.; Aruna Priya, P.

    2018-04-01

    The wavelet transform is a central tool in modern image processing applications. Here, a Double Density Dual Tree Discrete Wavelet Transform is used and investigated for image denoising. Test images are considered for the analysis, and the performance is compared with the discrete wavelet transform and the Double Density DWT. Peak Signal to Noise Ratio and Root Mean Square Error values are calculated for the denoised images under all three wavelet techniques, and the performance is evaluated. The proposed technique gives better performance than the other two wavelet techniques.

  11. An l1-TV algorithm for deconvolution with salt and pepper noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohlberg, Brendt; Rodriguez, Paul

    2008-01-01

    There has recently been considerable interest in applying Total Variation with an ℓ¹ data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because most efficient algorithms for ℓ¹-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.

  12. A novel structured dictionary for fast processing of 3D medical images, with application to computed tomography restoration and denoising

    NASA Astrophysics Data System (ADS)

    Karimi, Davood; Ward, Rabab K.

    2016-03-01

    Sparse representation of signals in learned overcomplete dictionaries has proven to be a powerful tool with applications in denoising, restoration, compression, reconstruction, and more. Recent research has shown that learned overcomplete dictionaries can lead to better results than analytical dictionaries such as wavelets in almost all image processing applications. However, a major disadvantage of these dictionaries is that their learning and usage is very computationally intensive. In particular, finding the sparse representation of a signal in these dictionaries requires solving an optimization problem that leads to very long computational times, especially in 3D image processing. Moreover, the sparse representation found by greedy algorithms is usually sub-optimal. In this paper, we propose a novel two-level dictionary structure that improves the performance and the speed of standard greedy sparse coding methods. The first (i.e., the top) level in our dictionary is a fixed orthonormal basis, whereas the second level includes the atoms that are learned from the training data. We explain how such a dictionary can be learned from the training data and how the sparse representation of a new signal in this dictionary can be computed. As an application, we use the proposed dictionary structure for removing the noise and artifacts in 3D computed tomography (CT) images. Our experiments with real CT images show that the proposed method achieves results that are comparable with standard dictionary-based methods while substantially reducing the computational time.
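
    The sparse coding step the abstract refers to can be illustrated with a generic orthogonal matching pursuit solver, here scikit-learn's (an assumed dependency). The paper's speed-up comes from the two-level dictionary structure, which this generic solver does not exploit.

    ```python
    # Greedy sparse coding of a patch in a dictionary with unit-norm columns.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    def sparse_code(patch, dictionary, k=8):
        """patch: (d,) vector; dictionary: (d, n_atoms)."""
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        omp.fit(dictionary, patch)
        coefs = omp.coef_                   # at most k non-zero entries
        return coefs, dictionary @ coefs    # coefficients and reconstruction
    ```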

  13. A disassembly-free method for evaluation of spiral bevel gear assembly

    NASA Astrophysics Data System (ADS)

    Jedliński, Łukasz; Jonak, Józef

    2017-05-01

    The paper presents a novel method for evaluation of assembly of spiral bevel gears. The examination of the approaches to the problem of gear control diagnostics without disassembly has revealed that residual processes in the form of vibrations (or noise) are currently the most suitable to this end. According to the literature, contact pattern is a complex parameter for describing gear position. Therefore, the task is to determine the correlation between contact pattern and gear vibrations. Although the vibration signal contains a great deal of information, it also has a complex spectral structure and contains interferences. For this reason, the proposed method has three variants which determine the effect of preliminary processing of the signal on the results. In Variant 2, stage 1, the vibration signal is subjected to multichannel denoising using a wavelet transform (WT), and in Variant 3 - to a combination of WT and principal component analysis (PCA). This denoising procedure does not occur in Variant 1. Next, we determine the features of the vibration signal in order to focus on information which is crucial regarding the objective of the study. Given the lack of unequivocal premises enabling selection of optimum features, we calculate twenty features, rank them and finally select the appropriate ones using an algorithm. Diagnostic rules were created using artificial neural networks. We investigated the suitability of three network types: multilayer perceptron (MLP), radial basis function (RBF) and support vector machine (SVM).

  14. Rolling element bearing defect diagnosis under variable speed operation through angle synchronous averaging of wavelet de-noised estimate

    NASA Astrophysics Data System (ADS)

    Mishra, C.; Samantaray, A. K.; Chakraborty, G.

    2016-05-01

    Rolling element bearings are widely used in rotating machines and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low speed shaft rotation, the available fault diagnosis methods cannot be applied for bearing fault diagnosis with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow speed operation, variation in speed due to speed controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing, and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data.

  15. Automatic Fabric Defect Detection with a Multi-Scale Convolutional Denoising Autoencoder Network Model.

    PubMed

    Mei, Shuang; Wang, Yudan; Wen, Guojun

    2018-04-02

    Fabric defect detection is a necessary and essential step of quality control in the textile manufacturing industry. Traditional fabric inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. In this paper, we propose an unsupervised learning-based automated approach to detect and localize fabric defects without any manual intervention. This approach is used to reconstruct image patches with a convolutional denoising autoencoder network at multiple Gaussian pyramid levels and to synthesize detection results from the corresponding resolution channels. The reconstruction residual of each image patch is used as the indicator for direct pixel-wise prediction. By segmenting and synthesizing the reconstruction residual map at each resolution level, the final inspection result can be generated. This newly developed method has several prominent advantages for fabric defect detection. First, it can be trained with only a small amount of defect-free samples. This is especially important for situations in which collecting large amounts of defective samples is difficult and impracticable. Second, owing to the multi-modal integration strategy, it is relatively more robust and accurate compared to general inspection methods (the results at each resolution level can be viewed as a modality). Third, according to our results, it can address multiple types of textile fabrics, from simple to more complex. Experimental results demonstrate that the proposed model is robust and yields good overall performance with high precision and acceptable recall rates.
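
    A compact convolutional denoising autoencoder of the kind applied patch-wise at each pyramid level can be sketched in PyTorch (an assumed framework); the layer sizes below are illustrative, not the paper's architecture. Training uses defect-free patches only, and at test time the per-pixel residual |x - model(x)| serves as the defect indicator.

    ```python
    import torch.nn as nn

    class ConvDAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
                nn.Sigmoid(),  # patches assumed normalized to [0, 1]
            )

        def forward(self, x):  # x: (batch, 1, H, W) with H, W divisible by 4
            return self.decoder(self.encoder(x))
    ```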

  16. A hybrid wavelet de-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series.

    PubMed

    Wang, Dong; Borthwick, Alistair G; He, Handan; Wang, Yuankun; Zhu, Jieyu; Lu, Yuan; Xu, Pengcheng; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin

    2018-01-01

    Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, wavelet de-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influence at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate the model performance. Compared to the three other generic methods, the results generated by the WD-RSPA model invariably presented smaller error measures, which means the forecasting capability of the WD-RSPA model is better than that of the other models. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when extreme events are included within a time series. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Enhancing micro-seismic P-phase arrival picking: EMD-cosine function-based denoising with an application to the AIC picker

    NASA Astrophysics Data System (ADS)

    Shang, Xueyi; Li, Xibing; Morales-Esteban, A.; Dong, Longjun

    2018-03-01

    Micro-seismic P-phase arrival picking is an elementary step in seismic event location, source mechanism analysis, and seismic tomography. However, a micro-seismic signal is often mixed with high frequency noises and power frequency noises (50 Hz), which can considerably reduce P-phase picking accuracy. To solve this problem, an Empirical Mode Decomposition (EMD)-cosine function denoising-based Akaike Information Criterion (AIC) picker (ECD-AIC picker) is proposed for picking the P-phase arrival time. Unlike traditional low pass filters, which are ineffective when seismic data and noise bandwidths overlap, the EMD adaptively separates the seismic data and the noise into different Intrinsic Mode Functions (IMFs). Furthermore, the EMD-cosine function-based denoising retains the P-phase arrival amplitude and phase spectrum more reliably than any traditional low pass filter. The ECD-AIC picker was tested on 1938 sets of micro-seismic waveforms randomly selected from the Institute of Mine Seismology (IMS) database of the Chinese Yongshaba mine. The results show that the EMD-cosine function denoising can effectively estimate high frequency and power frequency noises and can be easily adapted to signals with different shapes and forms. Qualitative and quantitative comparisons show that the combined ECD-AIC picker provides better picking results than both the ED-AIC picker and the AIC picker, and the comparisons also show more reliable source localization results when the ECD-AIC picker is applied, demonstrating the potential of this combined P-phase picking technique.
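
    The AIC picking stage can be written in a few lines: in its common Maeda form, the samples before and after a candidate index k are treated as two stationary segments, and the P arrival is picked at the global minimum of the criterion. The EMD-cosine denoising that precedes this step is not reproduced here.

    ```python
    import numpy as np

    def aic_pick(x):
        """Return the sample index minimizing AIC(k) over a window x."""
        N = len(x)
        ks = np.arange(1, N - 1)
        aic = np.array([
            k * np.log(np.var(x[:k]) + 1e-12)
            + (N - k - 1) * np.log(np.var(x[k:]) + 1e-12)
            for k in ks
        ])
        return ks[np.argmin(aic)]  # estimated P-phase onset
    ```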

  18. Cubesat-Derived Detection of Seagrasses Using Planet Imagery Following Unmixing-Based Denoising: is Small the Next Big?

    NASA Astrophysics Data System (ADS)

    Traganos, D.; Cerra, D.; Reinartz, P.

    2017-05-01

    Seagrasses are among the most productive and widespread yet threatened coastal ecosystems on Earth. Despite their importance, they are declining due to various, mainly anthropogenic, threats. Lack of data on their distribution hinders any effort to rectify this decline through effective detection, mapping and monitoring. Remote sensing can mitigate this data gap by allowing retrospective quantitative assessment of seagrass beds over large and remote areas. In this paper, we evaluate the quantitative application of Planet high resolution imagery for the detection of seagrasses in the Thermaikos Gulf, NW Aegean Sea, Greece. The low Signal-to-Noise Ratio (SNR) that characterizes spectral bands at shorter wavelengths prompts the application of Unmixing-based denoising (UBD) as a pre-processing step for seagrass detection. A total of 15 spectral-temporal patterns is extracted from a Planet image time series to restore the corrupted blue and green bands in the processed Planet image. Subsequently, we implement Lyzenga's empirical water column correction and Support Vector Machines (SVM) to evaluate the quantitative benefits of denoising. Denoising aids detection of the Posidonia oceanica seagrass species by increasing its producer and user accuracy by 31.7% and 10.4%, respectively, with a corresponding increase in its Kappa value from 0.3 to 0.48. In the near future, our objective is to improve accuracies in seagrass detection by applying more sophisticated, analytical water column correction algorithms to Planet imagery, developing time- and cost-effective monitoring of seagrass distribution that will in turn enable the effective management and conservation of these highly valuable and productive ecosystems.

  19. The pre-image problem in kernel methods.

    PubMed

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
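
    For context, scikit-learn's KernelPCA exposes a pre-image step through inverse_transform, although it learns an approximate pre-image map by ridge regression rather than using this paper's distance-constraint construction, so the snippet below is only a baseline illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA

    X_noisy = np.random.rand(100, 64)       # stand-in for noisy digit images
    kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-2,
                     fit_inverse_transform=True, alpha=0.1)
    Z = kpca.fit_transform(X_noisy)         # projection in kernel feature space
    X_denoised = kpca.inverse_transform(Z)  # the pre-image (denoising) step
    ```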

  20. Improving the quality of the ECG signal by filtering in wavelet transform domain

    NASA Astrophysics Data System (ADS)

    Dzierżak, Róża; Surtel, Wojciech; Dzida, Grzegorz; Maciejewski, Marcin

    2016-09-01

    The article concerns methods of reducing the noise occurring in ECG signals. The method is based on filtration in the wavelet transform domain. The study was conducted on two types of signal - one recorded while the patient was at rest and one obtained during physical activity. For each signal, three types of filtration were used. The study was designed to determine the effectiveness of various wavelets for de-noising signals obtained in both cases. The results confirm the suitability of the method for improving the quality of the electrocardiogram for both types of signals.

  1. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and on nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real 8-gray-scale range image of coherent ladar are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
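
    For comparison, the NLM baseline the abstract starts from is available in scikit-image (an assumed dependency). NLM averages the matched pixels, i.e., takes a (conditional) mean; NLPS would instead take the gray value of maximum probability within each matched group, an estimator not found in standard libraries.

    ```python
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def nlm_baseline(range_img):
        # Noise level estimated from the image itself; h controls smoothing
        sigma = float(np.mean(estimate_sigma(range_img)))
        return denoise_nl_means(range_img, patch_size=5, patch_distance=6,
                                h=0.8 * sigma, sigma=sigma, fast_mode=True)
    ```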

  2. MRS3D: 3D Spherical Wavelet Transform on the Sphere

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2011-12-01

    Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D Spherical Fourier-Bessel (SFB) analysis is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. We present a new fast Discrete Spherical Fourier-Bessel Transform (DSFBT) based on both a discrete Bessel Transform and the HEALPix angular pixelisation scheme. We tested the 3D wavelet transform and, as a toy application, applied a denoising algorithm in wavelet space to the Virgo large-box cosmological simulations, finding that we can successfully remove noise without much loss to the large scale structure. The new spherical 3D isotropic wavelet transform, called MRS3D, is ideally suited to analysing and denoising future 3D spherical cosmological surveys; it uses a novel Discrete Spherical Fourier-Bessel Transform. MRS3D is based on two packages, IDL and HEALPix, and can be used only if both packages have been installed.

  3. Image denoising for real-time MRI.

    PubMed

    Klosowski, Jakob; Frahm, Jens

    2017-03-01

    To develop an image noise filter suitable for MRI in real time (acquisition and display), which preserves small isolated details and efficiently removes background noise without introducing blur, smearing, or patch artifacts. The proposed method extends the nonlocal means algorithm to adapt the influence of the original pixel value according to a simple measure of patch regularity. Detail preservation is improved by a compactly supported weighting kernel that closely approximates the commonly used exponential weight, while an oracle step ensures efficient background noise removal. Denoising experiments were conducted on real-time images of healthy subjects reconstructed by regularized nonlinear inversion from radial acquisitions with pronounced undersampling. The filter leads to a signal-to-noise ratio (SNR) improvement of at least 60% without noticeable artifacts or loss of detail. The method visually compares to more complex state-of-the-art filters such as the block-matching three-dimensional filter and in certain cases better matches the underlying noise model. Acceleration of the computation to more than 100 complex frames per second using graphics processing units is straightforward. The sensitivity of nonlocal means to small details can be significantly increased by the simple strategies presented here, which allows partial restoration of SNR in iteratively reconstructed images without introducing a noticeable time delay or image artifacts. Magn Reson Med 77:1340-1352, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Assessment of Restoration Methods of X-Ray Images with Emphasis on Medical Photogrammetric Usage

    NASA Astrophysics Data System (ADS)

    Hosseinian, S.; Arefi, H.

    2016-06-01

    Nowadays, various medical X-ray imaging methods such as digital radiography, computed tomography and fluoroscopy are used as important tools in diagnostic and operative processes, especially in computer- and robot-assisted surgeries. Extracting information from these images requires appropriate deblurring and denoising of the pre- and intra-operative images in order to obtain more accurate information. This issue becomes more significant when the X-ray images are to be employed in photogrammetric processes for 3D reconstruction from multi-view X-ray images, since accurate data should be extracted from the images for 3D modelling and the quality of the X-ray images directly affects the results of the algorithms. For restoration of X-ray images, it is essential to consider the nature and characteristics of these kinds of images. X-ray images exhibit severe quantum noise due to the limited number of X-ray photons involved. The assumptions of Gaussian modelling are not appropriate for photon-limited images such as X-ray images, because of the nature of signal-dependent quantum noise. These images are generally modelled by a Poisson distribution, which is the most common model for low-intensity imaging. In this paper, existing methods are evaluated. For this purpose, after demonstrating the properties of medical X-ray images, the more efficient and recommended methods for restoration of X-ray images are described and assessed. After explaining these approaches, they are implemented on samples from different kinds of X-ray images. Considering the results, it is concluded that PURE-LET provides more effective and efficient denoising than the other methods examined in this research.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT for beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing the spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces the noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.

  6. Diagnostic analysis of liver B ultrasonic texture features based on LM neural network

    NASA Astrophysics Data System (ADS)

    Chi, Qingyun; Hua, Hu; Liu, Menglin; Jiang, Xiuying

    2017-03-01

    In this study, B ultrasound images of 124 benign and malignant patients were randomly selected as study objects. The B ultrasound images of the liver were pre-processed with enhanced de-noising. By constructing gray level co-occurrence matrices that reflect the information at each angle, 22 texture features were extracted, reduced by Principal Component Analysis, and combined with an LM neural network for diagnosis and classification. Experimental results show that this is a rapid and effective diagnostic method for liver imaging, which provides a quantitative basis for clinical diagnosis of liver diseases.

  7. Bayesian denoising in digital radiography: a comparison in the dental field.

    PubMed

    Frosio, I; Olivieri, C; Lucchese, M; Borghese, N A; Boccacci, P

    2013-01-01

    We compared two Bayesian denoising algorithms for digital radiographs, based on Total Variation regularization and wavelet decomposition. The comparison was performed on simulated radiographs with different photon counts and frequency content and on real dental radiographs. Four different quality indices were considered to quantify the quality of the filtered radiographs. The experimental results suggested that Total Variation is more suited to preserve fine anatomical details, whereas wavelets produce images of higher quality at global scale; they also highlighted the need for more reliable image quality indices. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: A clinical study.

    PubMed

    Dolz, Jose; Betrouni, Nacim; Quidet, Mathilde; Kharroubi, Dris; Leroy, Henri A; Reyns, Nicolas; Massoptier, Laurent; Vermandel, Maximilien

    2016-09-01

    Delineation of organs at risk (OARs) is a crucial step in surgical and treatment planning in brain cancer, where precise OAR volume delineation is required. However, this task is still often performed manually, which is time-consuming and prone to observer variability. To tackle these issues, a deep learning approach based on stacked denoising auto-encoders has been proposed to segment the brainstem on magnetic resonance images in the brain cancer context. In addition to the classical features used in machine learning to segment brain structures, two new features are suggested. Four experts participated in this study by segmenting the brainstem on 9 patients who underwent radiosurgery. Analysis of variance on shape and volume similarity metrics indicated that there were significant differences (p<0.05) between the groups of manual annotations and automatic segmentations. Experimental evaluation also showed an overlap higher than 90% with respect to the ground truth. These results are comparable to, and often better than, those of state-of-the-art segmentation methods, but with a considerable reduction in segmentation time. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A Deep Denoising Autoencoder Approach to Improving the Intelligibility of Vocoded Speech in Cochlear Implant Simulation.

    PubMed

    Lai, Ying-Hui; Chen, Fei; Wang, Syu-Siang; Lu, Xugang; Tsao, Yu; Lee, Chin-Hui

    2017-07-01

    In a cochlear implant (CI) speech processor, noise reduction (NR) is a critical component for enabling CI users to attain improved speech perception under noisy conditions. Identifying an effective NR approach has long been a key topic in CI research. Recently, a deep denoising autoencoder (DDAE) based NR approach was proposed and shown to be effective in restoring clean speech from noisy observations. It was also shown that DDAE could provide better performance than several existing NR methods in standardized objective evaluations. Following this success with normal speech, this paper further investigated the performance of DDAE-based NR to improve the intelligibility of envelope-based vocoded speech, which simulates speech signal processing in existing CI devices. We compared the performance of speech intelligibility between DDAE-based NR and conventional single-microphone NR approaches using the noise vocoder simulation. The results of both objective evaluations and listening test showed that, under the conditions of nonstationary noise distortion, DDAE-based NR yielded higher intelligibility scores than conventional NR approaches. This study confirmed that DDAE-based NR could potentially be integrated into a CI processor to provide more benefits to CI users under noisy conditions.

  10. A complete passive blind image copy-move forensics scheme based on compound statistics features.

    PubMed

    Peng, Fei; Nie, Yun-ying; Long, Min

    2011-10-10

    Since most sensor pattern noise based image copy-move forensics methods require a known reference sensor pattern noise, they generally result in non-blind passive forensics, which significantly restricts the application scenarios. In view of this, a novel passive-blind image copy-move forensics scheme is proposed in this paper. Firstly, a color image is transformed into a grayscale one, and a wavelet transform based de-noising filter is used to extract the sensor pattern noise. The variance of the pattern noise, the signal-to-noise ratio between the de-noised image and the pattern noise, the information entropy, and the average energy gradient of the original grayscale image are then chosen as features, and non-overlapping sliding window operations are applied to the images to divide them into different sub-blocks. Finally, the tampered areas are detected by analyzing the correlation of the features between the sub-blocks and the whole image. Experimental results and analysis show that the proposed scheme is completely passive-blind, has a good detection rate, and is robust against JPEG compression, noise, rotation, scaling and blurring. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  11. Trackside acoustic diagnosis of axle box bearing based on kurtosis-optimization wavelet denoising

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2018-04-01

    As one of the key components of railway vehicles, the operating condition of the axle box bearing has a significant effect on traffic safety. Acoustic diagnosis is more suitable than vibration diagnosis for trackside monitoring. The acoustic signal generated by the train axle box bearing is an amplitude- and frequency-modulated signal mixed with complex train running noise. Although empirical mode decomposition (EMD) and some improved time-frequency algorithms have proved useful in bearing vibration signal processing, it is hard to extract the bearing fault signal from severe trackside acoustic background noise using those algorithms. Therefore, a kurtosis-optimization-based wavelet packet (KWP) denoising algorithm is proposed, as kurtosis is the key indicator of the bearing fault signal in the time domain. Firstly, a geometry-based Doppler correction is applied to the signals of each sensor, and through the superposition of signals from multiple sensors, random noise and impulse noise, which interfere with the kurtosis indicator, are suppressed. Then, the KWP is conducted. Finally, EMD and the Hilbert transform are applied to extract the fault feature. Experimental results indicate that the proposed method, consisting of KWP and EMD, is superior to EMD alone.

  12. Color TV: total variation methods for restoration of vector-valued images.

    PubMed

    Blomgren, P; Chan, T F

    1998-01-01

    We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
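
    Written out from the properties listed above, the vector-valued TV norm couples the per-channel TV norms so that the scalar case (m = 1) is recovered:

    ```latex
    \[
      \mathrm{TV}_{n,m}[u] = \sqrt{\sum_{i=1}^{m} \bigl(\mathrm{TV}[u_i]\bigr)^{2}},
      \qquad
      \mathrm{TV}[u_i] = \int_{\Omega} \lvert \nabla u_i \rvert \, dx .
    \]
    ```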

  13. Restored low-dose digital breast tomosynthesis: a perception study

    NASA Astrophysics Data System (ADS)

    Borges, Lucas R.; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.

    2018-03-01

    This work investigates the perception of noise in restored low-dose digital breast tomosynthesis (DBT) images. First, low-dose DBT projections were generated using a dose reduction simulation algorithm. A dataset of clinical images from the Hospital of the University of Pennsylvania was used for this purpose. Low-dose projections were then denoised with a denoising pipeline developed specifically for DBT images. Denoised and noisy projections were combined to generate images with a signal-to-noise ratio comparable to the full-dose images. The quality of restored low-dose and full-dose projections was first compared in terms of an objective no-reference image quality metric previously validated for mammography. In the second analysis, regions of interest (ROIs) were selected from reconstructed full-dose and restored low-dose slices and displayed side-by-side on a high-resolution medical display. Five medical physics specialists were asked to choose the image containing less noise and less blur in a two-alternative forced choice (2-AFC) experiment. The objective metric shows that, after the proposed image restoration framework was applied, images with as little as 60% of the AEC dose yielded quality indices similar to images acquired with the full dose. The 2-AFC experiments showed that, when the denoising framework was used, a 30% reduction in dose was possible without any perceived difference in noise or blur. Note that this study evaluated the observers' perception of noise and blur and does not claim that the dose of DBT examinations can be reduced with no harm to the detection of cancer. Future work is necessary to make any claims regarding detection, localization and characterization of lesions.

  14. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising

    PubMed Central

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework’s simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications. PMID:28350887

  15. Auditory steady state responses and cochlear implants: Modeling the artifact-response mixture in the perspective of denoising.

    PubMed

    Mina, Faten; Attina, Virginie; Duroc, Yvan; Veuillet, Evelyne; Truy, Eric; Thai-Van, Hung

    2017-01-01

    Auditory steady state responses (ASSRs) in cochlear implant (CI) patients are contaminated by the spread of a continuous CI electrical stimulation artifact. The aim of this work was to model the electrophysiological mixture of the CI artifact and the corresponding evoked potentials on scalp electrodes in order to evaluate the performance of denoising algorithms in eliminating the CI artifact in a controlled environment. The basis of the proposed computational framework is a neural mass model representing the nodes of the auditory pathways. Six main contributors to auditory evoked potentials from the cochlear level and up to the auditory cortex were taken into consideration. The simulated dynamics were then projected into a 3-layer realistic head model. 32-channel scalp recordings of the CI artifact-response were then generated by solving the electromagnetic forward problem. As an application, the framework's simulated 32-channel datasets were used to compare the performance of 4 commonly used Independent Component Analysis (ICA) algorithms: infomax, extended infomax, jade and fastICA in eliminating the CI artifact. As expected, two major components were detectable in the simulated datasets, a low frequency component at the modulation frequency and a pulsatile high frequency component related to the stimulation frequency. The first can be attributed to the phase-locked ASSR and the second to the stimulation artifact. Among the ICA algorithms tested, simulations showed that infomax was the most efficient and reliable in denoising the CI artifact-response mixture. Denoising algorithms can induce undesirable deformation of the signal of interest in real CI patient recordings. The proposed framework is a valuable tool for evaluating these algorithms in a controllable environment ahead of experimental or clinical applications.
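
    As a simplified stand-in for the ICA comparison (scikit-learn exposes FastICA; infomax variants live in packages such as MNE-Python), artifact rejection reduces to decomposing the channels, zeroing the artifact sources, and mixing back. Which components carry the CI artifact must be identified first, e.g. by their energy at the stimulation or modulation frequency; here the indices are simply passed in.

        import numpy as np
        from sklearn.decomposition import FastICA

        def remove_components(recording, artifact_idx, n_components=32):
            # recording: (n_samples, n_channels) scalp data; assumes at least
            # n_components channels are available.
            ica = FastICA(n_components=n_components, random_state=0)
            sources = ica.fit_transform(recording)   # (n_samples, n_components)
            sources[:, artifact_idx] = 0.0           # zero the CI-artifact sources
            return ica.inverse_transform(sources)    # back to channel space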

  16. Estimating small-scale roughness of a rock joint using TLS data

    NASA Astrophysics Data System (ADS)

    Bitenc, Maja; Kieffer, D. Scott; Khoshelham, Kourosh

    2016-04-01

    Roughness of a rock joint is an important parameter influencing rock mass stability. In addition to the surface amplitude, the direction- and scale-dependency of roughness should also be considered (i.e. 3D roughness). Up to now, most roughness measurements and parameters have relied on point or profile data obtained from small samples, mostly in a laboratory. State-of-the-art remote sensing technologies supply 3D measurements of an in-situ rock surface and therefore enable a 3D roughness parameterization. The detailed morphology of a remote large-scale vertical structure is best observed by Terrestrial Laser Scanning (TLS). In a short time and from distances of a few hundred meters, TLS provides a relatively dense and precise point cloud. Sturzenegger and Stead [2009] showed that TLS technology and careful fieldwork allow the extraction of first-order roughness profiles, i.e. surface irregularities with a wavelength greater than about 10 cm. Our goal is to find the lower limit, that is, to define the smallest discernible detail and the appropriate measuring and processing steps to extract this detail from TLS data. The smallest observable roughness amplitude depends on the TLS data precision, which is limited mostly by an inherent range error (noise). The influence of TLS noise on rock joint roughness was analyzed using highly precise reference data acquired by the Advanced TOpometric Sensor (ATOS) on a 20x30 cm rock joint sample. ATOS data were interpolated onto a 1 mm grid, to which five levels (0.5, 1, 1.5, 2, 2.5 mm) of normally distributed noise were added. The 3D surfaces entered the direction-dependent roughness parameter computation after Grasselli [2001]. The average roughness of the noisy surfaces increases logarithmically with the noise level and is already doubled at 1 mm of noise. The noise sensitivity of the roughness parameter was investigated by Monte Carlo simulation. The distribution of roughness differences (roughness of noisy surfaces minus roughness of the reference ATOS surface) is approximately normal. The standard deviation of the differences increases slightly, on average, with the noise level, but depends strongly on the analysis direction. As shown by various studies in signal, image and TLS data processing, noise can, to a certain extent, be removed by a post-processing step called denoising. In this research, four denoising methods, namely the discrete wavelet transform (DWT), the stationary wavelet transform (SWT), classic non-local means (NLM) and probabilistic NLM (PNLM), were applied to the noisy ATOS data. Results were compared based on (i) the height and (ii) the roughness differences between the denoised surfaces and the reference ATOS surface, (iii) the peak signal-to-noise ratio (PSNR) and (iv) a visual check of the denoised surface. Increased PSNRs and reduced roughness differences prove the importance of the TLS data denoising procedure. With SWT, NLM and PNLM the surface is mostly over-smoothed, whereas with DWT some noise remains. References: - Grasselli, G. (2001). Shear strength of rock joints based on quantified surface description. École Polytechnique Fédérale de Lausanne. Lausanne, EPFL. - Sturzenegger, M. and D. Stead (2009). "Close-range terrestrial digital photogrammetry and terrestrial laser scanning for discontinuity characterization on rock cuts." Engineering Geology 106(3-4): 163-182.
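
    The noise-and-denoise experiment can be mimicked on a synthetic surface in a few lines, assuming scikit-image; the grid size, noise level, and the smooth test surface below are placeholders for the ATOS data.

        import numpy as np
        from skimage.restoration import denoise_wavelet
        from skimage.metrics import peak_signal_noise_ratio

        # Smooth synthetic surface standing in for the 1 mm-gridded ATOS sample.
        x, y = np.meshgrid(np.linspace(0, 3, 300), np.linspace(0, 2, 200))
        ref = 0.5 * np.sin(2 * x) * np.cos(3 * y)
        noisy = ref + np.random.normal(scale=0.05, size=ref.shape)

        denoised = denoise_wavelet(noisy)      # DWT-based denoising
        rng = ref.max() - ref.min()
        print(peak_signal_noise_ratio(ref, noisy, data_range=rng))
        print(peak_signal_noise_ratio(ref, denoised, data_range=rng))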

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, T; Dong, X; Petrongolo, M

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated as a least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied to the decomposed images, and an existing algorithm with a similar formulation but an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of the decomposed images by over 98%. The other methods either degrade spatial resolution or achieve lower low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of the decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance in noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
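
    In our notation (not necessarily the authors'), a least-square objective of the kind described would plausibly read

        \hat{x} = \arg\min_{x} \; (x - x_0)^{T} \Sigma^{-1} (x - x_0) + \beta \, R(x)

    where x_0 is the direct matrix-inversion decomposition, \Sigma its estimated variance-covariance matrix, R(x) a smoothness penalty, and \beta the regularization weight.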

  18. Method for exploratory cluster analysis and visualisation of single-trial ERP ensembles.

    PubMed

    Williams, N J; Nasuto, S J; Saddy, J D

    2015-07-30

    The validity of ensemble averaging on event-related potential (ERP) data has been questioned, due to its assumption that the ERP is identical across trials. Thus, there is a need for preliminary testing for cluster structure in the data. We propose a complete pipeline for the cluster analysis of ERP data. To increase the signal-to-noise ratio (SNR) of the raw single trials, we used a denoising method based on Empirical Mode Decomposition (EMD). Next, we used a bootstrap-based method to determine the number of clusters, through a measure called the Stability Index (SI). We then used a clustering algorithm based on a Genetic Algorithm (GA) to define initial cluster centroids for subsequent k-means clustering. Finally, we visualised the clustering results through a scheme based on Principal Component Analysis (PCA). After validating the pipeline on simulated data, we tested it on data from two experiments - a P300 speller paradigm on a single subject and a language processing study on 25 subjects. Results revealed evidence for the existence of 6 clusters in one experimental condition from the language processing study. Further, a two-way chi-square test revealed an influence of subject on cluster membership. Our analysis operates on denoised single trials, the number of clusters is determined in a principled manner, and the results are presented through an intuitive visualisation. Given the cluster structure in some experimental conditions, we suggest applying cluster analysis as a preliminary step before ensemble averaging. Copyright © 2015 Elsevier B.V. All rights reserved.
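
    A bootstrap cluster-stability check in the spirit of this pipeline can be sketched as follows, assuming scikit-learn; k-means++ initialization stands in for the paper's GA-derived centroids, and mean adjusted Rand agreement stands in for the Stability Index.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        def stability_index(trials, k, n_boot=50, seed=0):
            # trials: (n_trials, n_features) matrix of denoised single-trial ERPs.
            rng = np.random.default_rng(seed)
            ref = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(trials)
            scores = []
            for _ in range(n_boot):
                idx = rng.choice(len(trials), size=len(trials), replace=True)
                boot = KMeans(n_clusters=k, n_init=10).fit_predict(trials[idx])
                scores.append(adjusted_rand_score(ref[idx], boot))
            # High mean agreement across bootstrap resamples suggests k is stable.
            return float(np.mean(scores))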

  19. Signal processing techniques for damage detection with piezoelectric wafer active sensors and embedded ultrasonic structural radar

    NASA Astrophysics Data System (ADS)

    Yu, Lingyu; Bao, Jingjing; Giurgiutiu, Victor

    2004-07-01

    The embedded ultrasonic structural radar (EUSR) algorithm is developed to use a piezoelectric wafer active sensor (PWAS) array to detect defects within a large area of a thin-plate specimen. Signal processing techniques are used to extract the time of flight of the wave packets and thereby to determine the location of the defects with the EUSR algorithm. In our research, transient tone-burst wave propagation signals are generated and collected by the embedded PWAS. Then, with signal processing, the frequency contents of the signals and the time of flight of individual frequencies are determined. This paper starts with an introduction of the embedded ultrasonic structural radar algorithm. Then we describe the signal processing methods used to extract the time of flight of the wave packets. The signal processing methods used include wavelet denoising, cross correlation, and the Hilbert transform. Though the hardware can provide an averaging function to eliminate noise arising in the signal collection process, wavelet denoising is included to ensure better signal quality for applications in severe real-world environments. For better recognition of the time of flight, the cross-correlation method is used. The Hilbert transform is applied to the signals after cross correlation in order to extract their envelope. Signal processing and EUSR are both implemented in a user-friendly graphical interface program developed in LabVIEW. We conclude with a description of our vision for applying EUSR signal analysis to structural health monitoring and embedded nondestructive evaluation. To this end, we envisage an automatic damage detection application utilizing embedded PWAS, EUSR, and advanced signal processing.
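
    The cross-correlation plus Hilbert-envelope step translates almost directly into SciPy. The sketch below returns the time of flight of the strongest echo, under the simplifying assumption of a single dominant wave packet.

        import numpy as np
        from scipy.signal import correlate, hilbert

        def time_of_flight(excitation, received, fs):
            # Cross-correlate the received signal with the tone-burst excitation.
            xc = correlate(received, excitation, mode='full')
            # The analytic signal's magnitude gives the correlation envelope.
            env = np.abs(hilbert(xc))
            # Lag of the envelope peak, converted from samples to seconds.
            lag = np.argmax(env) - (len(excitation) - 1)
            return lag / fs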

  20. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutter, such as ocean waves, clouds or sea fog, usually has high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the intersubband correlation between the horizontal and vertical wavelet subbands of the original IMI at the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that, compared with traditional algorithms, this algorithm can suppress background clutter much better and achieve better single-frame detection of infrared maritime targets. Besides, in order to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly backed by experimental data acquired under different environmental conditions.

  1. The Xanadu Annex on Titan Denoised

    NASA Image and Video Library

    2016-09-07

    This synthetic-aperture radar (SAR) image was obtained by NASA's Cassini spacecraft on July 25, 2016, during its 'T-121' pass over Titan's southern latitudes. The improved contrast provided by the denoising algorithm helps river channels (at bottom and upper left) stand out, as well as the crater-like feature at left. The image shows an area nicknamed the "Xanadu annex" by members of the Cassini radar team earlier in the mission. This area had not been imaged by Cassini's radar until now, but measurements of its brightness temperature from Cassini's microwave radiometer were quite similar to those of the large region on Titan named Xanadu. Cassini's radiometer is essentially a very sensitive thermometer, and brightness temperature is a measure of the intensity of microwave radiation received from a feature by the instrument. Radar team members predicted at the time that, if this area were ever imaged, it would be similar in appearance to Xanadu, which lies just to the north. That earlier hunch appears to have been borne out, as features in this scene bear a strong similarity to the mountainous terrains Cassini's radar has imaged in Xanadu. Xanadu -- and now perhaps its annex -- remains something of a mystery. First imaged in 1994 by the Hubble Space Telescope (just three years before Cassini's launch from Earth), Xanadu was the first surface feature to be recognized on Titan. Once thought to be a raised plateau, the region is now understood to be slightly tilted relative to, but not higher than, the darker surrounding regions. It blocks the formation of sand dunes, which otherwise extend all the way around Titan at its equator. The image was taken by the Cassini synthetic-aperture radar on July 25, 2016, during the mission's 122nd targeted Titan encounter. The image has been modified by the denoising method described in A. Lucas, JGR:Planets (2014). http://photojournal.jpl.nasa.gov/catalog/PIA20714

  2. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection utilizes noise, by enhancing, adding or estimating it, so as to improve the signal-to-noise ratio (SNR) and extract the fault signatures. Among such approaches, ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise utilization method that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for superior performance in detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and a powerless capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs together with a noise estimation strategy. Independent of any user-defined setup, noise estimation by minimax thresholding is improved for the low-SNR case, and is especially effective for signature enhancement. To approximate weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, and is particularly powerful for reducing mode mixing. Here, the sliding window for projecting the phase space is optimally designed by correlation minimization, and a reasonable singular order for the local reconfiguration to estimate the noise is determined by the inflection point of the increasing trend of normalized singular entropy. Furthermore, the noise estimation strategy, i.e. how to select between the two estimation techniques and handle the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm the capability of noise estimation. Finally, the method is applied to detect a local wear fault in a dual-axis stabilized platform and a gear crack in an operating electric locomotive to verify its effectiveness and feasibility.
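
    The local-reconfiguration idea, estimating the noise from the trailing singular values of a trajectory (Hankel) matrix, can be sketched with NumPy. The window length and retained order are fixed by hand here, whereas the paper selects them via correlation minimization and a singular-entropy inflection criterion.

        import numpy as np

        def svd_noise_estimate(x, window=64, order=8):
            # Embed the signal in phase space as a Hankel (trajectory) matrix.
            H = np.lib.stride_tricks.sliding_window_view(x, window)
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            # Attribute the leading `order` singular values to signal;
            # reconstructing from the remainder approximates the noise.
            s_noise = s.copy()
            s_noise[:order] = 0.0
            Hn = (U * s_noise) @ Vt
            # Average anti-diagonals to map the matrix back to a 1-D series.
            rows = Hn.shape[0]
            flipped = Hn[::-1, :]
            return np.array([flipped.diagonal(k).mean()
                             for k in range(-rows + 1, Hn.shape[1])])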

  3. Query-by-example surgical activity detection.

    PubMed

    Gao, Yixin; Vedula, S Swaroop; Lee, Gyusung I; Lee, Mija R; Khudanpur, Sanjeev; Hager, Gregory D

    2016-06-01

    Easy acquisition of surgical data opens many opportunities to automate skill evaluation and teaching. Current technology to search tool motion data for surgical activity segments of interest is limited by the need for manual pre-processing, which can be prohibitive at scale. We developed a content-based information retrieval method, query-by-example (QBE), to automatically detect activity segments within long surgical data recordings that match a query. The example segment of interest (query) and the surgical data recording (target trial) are time series of kinematics. Our approach includes an unsupervised feature learning module using a stacked denoising autoencoder (SDAE), two scoring modules based on asymmetric subsequence dynamic time warping (AS-DTW) and template matching, respectively, and a detection module. A distance matrix of the query against the trial is computed using the SDAE features, followed by AS-DTW combined with template scoring, to generate a ranked list of candidate subsequences (substrings). To evaluate the quality of the ranked list against the ground truth, thresholding of conventional DTW distances and bipartite matching are applied. We computed the recall, precision, F1-score, and a Jaccard index-based score on three experimental setups. We evaluated our QBE method using a suture throw maneuver as the query, on two tool motion datasets (JIGSAWS and MISTIC-SL) captured in a training laboratory. We observed a recall of 93, 90 and 87% and a precision of 93, 91, and 88% with same surgeon same trial (SSST), same surgeon different trial (SSDT) and different surgeon (DS) experiment setups on JIGSAWS, and a recall of 87, 81 and 75% and a precision of 72, 61, and 53% with SSST, SSDT and DS experiment setups on MISTIC-SL, respectively. We developed a novel, content-based information retrieval method to automatically detect multiple instances of an activity within long surgical recordings. Our method demonstrated adequate recall across datasets of differing complexity and experimental conditions.

  4. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    NASA Astrophysics Data System (ADS)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection performance of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important class of current environmental pollution sources, are highly oncogenic. PAH pollutants can be detected by fluorescence spectroscopy. However, the instrument introduces noise into the experiment, and weak fluorescent signals can be affected by it, so we propose a way to denoise the spectra and improve the detection performance. Firstly, we use a fluorescence spectrometer to measure PAHs and obtain fluorescence spectra. Subsequently, noise is reduced by the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.
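
    A minimal EEMD denoising pass might look as follows, assuming the third-party PyEMD package (installed as EMD-signal); which IMFs to discard is signal-dependent, and dropping only the first one is an illustrative choice.

        import numpy as np
        from PyEMD import EEMD  # pip install EMD-signal

        t = np.linspace(0.0, 1.0, 1000)
        signal = np.sin(2 * np.pi * 5 * t) + 0.2 * np.random.randn(t.size)

        eemd = EEMD(trials=100)      # size of the noise-assisted ensemble
        imfs = eemd.eemd(signal)     # (n_imfs, n_samples)
        # The first IMF carries most of the high-frequency noise; drop it
        # and rebuild the signal from the remaining modes.
        denoised = imfs[1:].sum(axis=0)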

  5. Noise distribution and denoising of current density images

    PubMed Central

    Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan

    2015-01-01

    Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that can be used to study current pathways inside tissue. The current distribution is measured indirectly as phase changes. The inherent noise in the MR imaging technique degrades the accuracy of phase measurements, leading to imprecise current variations. The outcome can be affected significantly, especially at a low signal-to-noise ratio (SNR). We have shown that the residual noise distribution of the phase is Gaussian-like and that the noise in CDI images can be approximated as Gaussian. This finding matches experimental results. We further investigated this finding by performing a comparative analysis of denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms other techniques when applied to the current density (J). The minimum gain in noise power by BM3D applied to J, compared with the next best technique in the analysis, was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights on the performance of different denoising techniques when applied at two different stages of current density reconstruction. PMID:26158100

  6. Semisupervised learning using denoising autoencoders for brain lesion detection and segmentation.

    PubMed

    Alex, Varghese; Vaidhya, Kiran; Thirunavukkarasu, Subramaniam; Kesavadas, Chandrasekharan; Krishnamurthi, Ganapathy

    2017-10-01

    The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients ([Formula: see text], 40, 65). The results show negligible loss in performance even when SDAE was fine-tuned using 20 labeled patients. Low grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also includes the use of a single layer DAE, referred to as novelty detector (ND). ND was trained to accurately reconstruct nonlesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the nonlesion brain accurately as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.

  7. Edge Preserved Speckle Noise Reduction Using Integrated Fuzzy Filters

    PubMed Central

    Dewal, M. L.; Rohit, Manoj Kumar

    2014-01-01

    Echocardiographic images are inherently affected by speckle noise, which makes visual reading and analysis quite difficult. The multiplicative speckle noise masks finer details necessary for the diagnosis of abnormalities. A novel speckle reduction technique based on the integration of geometric, Wiener, and fuzzy filters is proposed and analyzed in this paper. The denoising applications of fuzzy filters are studied and analyzed alongside 26 denoising techniques. It is observed that the geometric filter retains noise and, to address this issue, a Wiener filter is embedded into the geometric filter during the iteration process. The performance of the geometric-Wiener filter is further enhanced using fuzzy filters, and the proposed despeckling techniques are called integrated fuzzy filters. Fuzzy filters based on the moving average and the median value are employed in the integrated fuzzy filters. The performance of the integrated fuzzy filters is tested on echocardiographic and synthetic images in terms of image quality metrics. It is observed that the performance parameters are highest for the integrated fuzzy filters in comparison to fuzzy and geometric-fuzzy filters. Clinical validation reveals that the output images obtained using geometric-Wiener, integrated fuzzy, non-local means, and detail-preserving anisotropic diffusion filters are acceptable. The necessary finer details are retained in the denoised echocardiographic images. PMID:27437499

  8. Example-based human motion denoising.

    PubMed

    Lou, Hui; Chai, Jinxiang

    2010-01-01

    With the proliferation of motion capture data, interest in removing noise and outliers from such data has increased. In this paper, we introduce an efficient human motion denoising technique for the simultaneous removal of noise and outliers from input human motion data. The key idea of our approach is to learn a series of filter bases from precaptured motion data and use them along with robust statistics techniques to filter noisy motion data. Mathematically, we formulate the motion denoising process in a nonlinear optimization framework. The objective function measures the distance between the noisy input and the filtered motion, in addition to how well the filtered motion preserves the spatial-temporal patterns embedded in captured human motion data. Optimizing the objective function produces an optimal filtered motion that keeps the spatial-temporal patterns of captured motion data. We also extend the algorithm to fill in missing values in input motion data. We demonstrate the effectiveness of our system by experimenting with both real and simulated motion data. We also show the superior performance of our algorithm by comparing it with three baseline algorithms and with state-of-the-art motion capture data processing software such as Vicon Blade.

  9. Image denoising via fundamental anisotropic diffusion and wavelet shrinkage: a comparative study

    NASA Astrophysics Data System (ADS)

    Bayraktar, Bulent; Analoui, Mostafa

    2004-05-01

    Noise removal faces a challenge: keeping the image details. Resolving the dilemma of two goals (smoothing and keeping image features intact) working against each other was an almost impossible task until anisotropic diffusion (AD) was formally introduced by Perona and Malik (PM). AD favors intra-region smoothing over inter-region smoothing in piecewise smooth images. Many authors have regularized the original PM algorithm to overcome its drawbacks. We compared the denoising performance of such 'fundamental' AD algorithms with one of the most powerful multiresolution tools available today, namely wavelet shrinkage. The AD algorithms here are called 'fundamental' in the sense that the regularized versions center around the original PM algorithm with minor changes to the logic. The algorithms are tested with different noise types and levels. In addition to visual inspection, two mathematical metrics are used for performance comparison: the signal-to-noise ratio (SNR) and the universal image quality index (UIQI). We conclude that some of the regularized versions of the PM algorithm perform comparably to wavelet shrinkage denoising, which saves a lot of computational power. With this conclusion, we applied the better-performing fundamental AD algorithms to a new imaging modality: Optical Coherence Tomography (OCT).
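
    The original PM scheme that the 'fundamental' AD variants build on fits in a few lines of NumPy; `kappa` and `lam` below are illustrative values, and the periodic boundary handling via np.roll is a simplification.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
            # Classic PM diffusion: smooth within regions, preserve edges.
            u = img.astype(float).copy()
            for _ in range(n_iter):
                # Finite differences toward the four nearest neighbors.
                dn = np.roll(u, 1, axis=0) - u
                ds = np.roll(u, -1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # Conduction coefficient g = exp(-(|grad|/kappa)^2) gates
                # diffusion near edges, favoring intra-region smoothing.
                u += lam * sum(np.exp(-(d / kappa) ** 2) * d
                               for d in (dn, ds, de, dw))
            return u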

  10. Assessing denoising strategies to increase signal to noise ratio in spinal cord and in brain cortical and subcortical regions

    NASA Astrophysics Data System (ADS)

    Maugeri, L.; Moraschi, M.; Summers, P.; Favilla, S.; Mascali, D.; Cedola, A.; Porro, C. A.; Giove, F.; Fratini, M.

    2018-02-01

    Functional Magnetic Resonance Imaging (fMRI) based on Blood Oxygenation Level Dependent (BOLD) contrast has become one of the most powerful tools in neuroscience research. On the other hand, fMRI approaches have seen limited use in the study of the spinal cord and subcortical brain regions (such as the brainstem and portions of the diencephalon). Indeed, obtaining a good BOLD signal in these areas still represents a technical and scientific challenge, due to poor control of physiological noise and to the limited overall quality of the functional series. A solution can be found in the combination of optimized experimental procedures at the acquisition stage and well-adapted artifact mitigation procedures in the data processing. In this framework, we studied two data processing strategies to reduce physiological noise in cortical and subcortical brain regions and in the spinal cord, based on the aCompCor and RETROICOR denoising tools, respectively. The study, performed in healthy subjects, was carried out using an ad hoc isometric motor task. We observed an increased signal-to-noise ratio in the denoised functional time series in the spinal cord and in the subcortical brain regions.

  11. Estimating weak ratiometric signals in imaging data. II. Meta-analysis with multiple, dual-channel datasets.

    PubMed

    Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D

    2008-09-01

    Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.

  12. SparseCT: interrupted-beam acquisition and sparse reconstruction for radiation dose reduction

    NASA Astrophysics Data System (ADS)

    Koesters, Thomas; Knoll, Florian; Sodickson, Aaron; Sodickson, Daniel K.; Otazo, Ricardo

    2017-03-01

    State-of-the-art low-dose CT methods reduce the x-ray tube current and use iterative reconstruction methods to denoise the resulting images. However, due to compromises between denoising and image quality, only moderate dose reductions up to 30-40% are accepted in clinical practice. An alternative approach is to reduce the number of x-ray projections and use compressed sensing to reconstruct the full-tube-current undersampled data. This idea was recognized in the early days of compressed sensing and proposals for CT dose reduction appeared soon afterwards. However, no practical means of undersampling has yet been demonstrated in the challenging environment of a rapidly rotating CT gantry. In this work, we propose a moving multislit collimator as a practical incoherent undersampling scheme for compressed sensing CT and evaluate its application for radiation dose reduction. The proposed collimator is composed of narrow slits and moves linearly along the slice dimension (z), to interrupt the incident beam in different slices for each x-ray tube angle (θ). The reduced projection dataset is then reconstructed using a sparse approach, where 3D image gradients are employed to enforce sparsity. The effects of the collimator slits on the beam profile were measured and represented as a continuous slice profile. SparseCT was tested using retrospective undersampling and compared against commercial current-reduction techniques on phantoms and in vivo studies. Initial results suggest that SparseCT may enable higher performance than current-reduction, particularly for high dose reduction factors.

  13. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capturing devices. Achieving further resolution increases poses numerous challenges. Due to the reduced pixel size, less light is collected, leading to increased noise levels. Moreover, the reduced pixel size makes lens imperfections more pronounced, which especially applies to chromatic aberrations. Even when high-quality lenses are used, some chromatic aberration artefacts remain. Noise levels additionally increase due to higher frame rates. To reduce the complexity and the price of the camera, a single sensor captures all three colors by relying on a color filter array. In order to obtain a full-resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions, due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. In order to reduce possible flicker, we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.

  14. Dictionary learning based noisy image super-resolution via distance penalty weight model

    PubMed Central

    Han, Yulan; Zhao, Yongping; Wang, Qisong

    2017-01-01

    In this study, we address the problem of noisy image super-resolution. In applications, the low resolution (LR) image obtained is usually noisy, yet most existing algorithms assume that the LR image is noise-free. For this situation, we present an algorithm for noisy image super-resolution that achieves image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free. For different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed through a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of the learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which simultaneously performs a second selection among similar atoms. Moreover, mean-removed LR example patches, rather than just their gradient features, are also used to learn the dictionary. Based on this, we reconstruct an initial estimated HR image and a denoised LR image. Combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method exhibits better noise robustness. PMID:28759633

  15. Erratum: Denoising Phase Unwrapping Algorithm for Precise Phase Shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Phuc, Phan Huy; Rhee, Hyug-Gyo; Ghim, Young-Sik

    2018-06-01

    This is a revision of the reference list reported in the original article. In order to clarify the contribution of previous work on the incremental breadth-first search (IBFS) method applied to the PUMA algorithm, we add one more reference to the existing reference list, as in this erratum. Page 83: In this paper, we propose an algorithm that modifies the Boykov-Kolmogorov (BK) algorithm using the incremental breadth-first search (IBFS) method [27, 28] to find paths from the source to the sink of a graph. [28] S. Ali, H. Khan, I. Shaik and F. Ali, Int. J. Eng. and Technol. 7, 254 (2015).

  16. Restoration of low-dose digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Borges, Lucas R.; Azzari, Lucio; Bakic, Predrag R.; Maidment, Andrew D. A.; Vieira, Marcelo A. C.; Foi, Alessandro

    2018-06-01

    In breast cancer screening, the radiation dose must be kept to the minimum necessary to achieve the desired diagnostic objective, thus minimizing risks associated with cancer induction. However, decreasing the radiation dose also degrades image quality. In this work, we restore digital breast tomosynthesis (DBT) projections acquired at low radiation doses with the goal of achieving a quality comparable to that obtained from current standard full-dose imaging protocols. A multiframe denoising algorithm was applied to the low-dose projections, which are filtered jointly. Furthermore, a weighted average was used to inject a varying portion of the noisy signal back into the denoised one, in order to attain a signal-to-noise ratio comparable to that of standard full-dose projections. The entire restoration framework leverages a signal-dependent noise model with a quantum gain that varies with both the projection angle and the pixel position. A clinical DBT system and a 3D anthropomorphic breast phantom were used to validate the proposed method, both on DBT projections and on slices from the 3D reconstructed volume. The framework is shown to attain standard full-dose image quality from data acquired at 50% lower radiation dose, whereas progressive loss of relevant details compromises the image quality if the dose is decreased further.
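
    The noise-injection step can be sketched under a simplified white-noise assumption (the actual framework uses a signal-dependent model that varies with projection angle and pixel position); a hypothetical target noise level controls how much of the residual is blended back.

        import numpy as np

        def restore(noisy, denoised, target_sigma):
            # The residual between noisy and denoised projections
            # approximates the noise that was removed.
            residual = noisy - denoised
            sigma = residual.std()
            # Blend back just enough residual that the remaining noise
            # level matches the (hypothetical) full-dose target.
            w = np.clip(target_sigma / (sigma + 1e-12), 0.0, 1.0)
            return denoised + w * residual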

  17. Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.

    PubMed

    Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani

    2011-09-30

    The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In fact, when ERP recordings are filtered by an OF, the topography of the ERP should not be changed by the filter, and the output should still be expressible as a linear transformation. Moreover, an OF designed for a specific ERP source or component may remove noise, as well as reduce the overlap of sources and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish both denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone did, and that the OF removed some non-targeted sources and made the underdetermined model of EEG recordings approach a determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Dynamic characteristics of laser Doppler flowmetry signals obtained in response to a local and progressive pressure applied on diabetic and healthy subjects

    NASA Astrophysics Data System (ADS)

    Humeau, Anne; Koitka, Audrey; Abraham, Pierre; Saumet, Jean-Louis; L'Huillier, Jean-Pierre

    2004-09-01

    In the biomedical field, the laser Doppler flowmetry (LDF) technique is a non-invasive method for monitoring skin perfusion. On the skin of healthy humans, LDF signals present a significant transient increase in response to a local and progressive pressure application. This vasodilatory reflex response may have important implications for cutaneous pathologies involved in various neurological diseases and in the pathophysiology of decubitus ulcers. The present work analyses the dynamic characteristics of these signals in young type 1 diabetic patients and in healthy age-matched subjects. To obtain accurate dynamic characteristic values, a wavelet-based de-noising algorithm is first applied to the LDF signals. All the de-noised signals are then normalised to the same value. The blood flow peak and the time to reach this peak are then calculated for each processed signal. The results show that a large vasodilation is present in the signals of healthy subjects. The mean peak occurs at a pressure of approximately 3.2 kPa. However, only a vasodilation of limited amplitude appears in type 1 diabetic patients. The maximum value is observed, on average, at a pressure of 1.1 kPa. The inability of diabetic patients to substantially increase their cutaneous blood flow may help explain foot ulcers.

  19. Linear friction weld process monitoring of fixture cassette deformations using empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Bakker, O. J.; Gibson, C.; Wilson, P.; Lohse, N.; Popov, A. A.

    2015-10-01

    Due to its inherent advantages, linear friction welding is a solid-state joining process of increasing importance to the aerospace, automotive, medical and power generation equipment industries. Tangential oscillations and forge stroke during the burn-off phase of the joining process introduce essential dynamic forces, which can also be detrimental to the welding process. Since burn-off is a critical phase in the manufacturing stage, process monitoring is fundamental for quality and stability control purposes. This study aims to improve workholding stability through the analysis of fixture cassette deformations. Methods and procedures for process monitoring are developed and implemented in a fail-or-pass assessment system for fixture cassette deformations during the burn-off phase. Additionally, the de-noised signals are compared to results from previous production runs. The observed deformations as a consequence of the forces acting on the fixture cassette are measured directly during the welding process. Data on the linear friction-welding machine are acquired and de-noised using empirical mode decomposition, before the burn-off phase is extracted. This approach enables a direct, objective comparison of the signal features with trends from previous successful welds. The capacity of the whole process monitoring system is validated and demonstrated through the analysis of a large number of signals obtained from welding experiments.

  20. Image registration for a UV-Visible dual-band imaging system

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Yuan, Shuang; Li, Jianping; Xing, Sheng; Zhang, Honglong; Dong, Yuming; Chen, Liangpei; Liu, Peng; Jiao, Guohua

    2018-06-01

    The detection of corona discharge is an effective way of achieving early fault diagnosis of power equipment. UV-Visible dual-band imaging can detect and locate corona discharge spots under all-weather conditions. In this study, we introduce an image registration protocol for this dual-band imaging system. The protocol consists of UV image denoising and affine transformation model establishment. We report the algorithmic details of the UV image preprocessing and the affine transformation model establishment, along with the relevant experiments verifying their feasibility. The denoising algorithm was based on a correlation operation between raw UV images and a continuous mask, and the transformation model was established using corner features and a statistical method. Finally, an image fusion test was carried out to verify the accuracy of the affine transformation model. It proved that the average position displacement errors between the corona discharge and the equipment fault, at distances in the 2.5 m-20 m range, are 1.34 mm and 1.92 mm in the horizontal and vertical directions, respectively, which is precise enough for most industrial applications. The resulting protocol is not only expected to improve the efficiency and accuracy of such imaging systems for locating corona discharge spots, but also to provide a more generalized reference for the calibration of various dual-band imaging systems in practice.

  1. Iterated oversampled filter banks and wavelet frames

    NASA Astrophysics Data System (ADS)

    Selesnick, Ivan W.; Sendur, Levent

    2000-12-01

    This paper takes up the design of wavelet tight frames that are analogous to Daubechies orthonormal wavelets - that is, the design of minimal length wavelet filters satisfying certain polynomial properties, but now in the oversampled case. The oversampled dyadic DWT considered in this paper is based on a single scaling function and two distinct wavelets. Having more wavelets than necessary gives a closer spacing between adjacent wavelets within the same scale. As a result, the transform is nearly shift-invariant and can be used to improve denoising. Because the associated time-frequency lattice preserves the dyadic structure of the critically sampled DWT, it can be used with tree-based denoising algorithms that exploit parent-child correlation.

  2. Laser Spot Center Detection and Comparison Test

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Xu, Zhengjie; Fu, Deli; Hu, Cong

    2018-04-01

    High efficiency and precision in spot center detection are the foundation of avionics instrument navigation and optical measurement in many applications, and have a noticeable impact on overall system performance. Laser spot detection is particularly important in optical measurement technology. In order to improve the accuracy of the spot center position, the algorithm is improved on the basis of circle fitting. Preprocessing is applied before circle fitting, and the improved adaptive denoising filter based on TV repair technology effectively improves the accuracy of the spot center position. At the same time, the preprocessing and de-noising effectively reduce the influence of Gaussian white noise, which enhances the anti-jamming capability.

  3. Simultaneous Retrieval of Temperature, Water Vapor and Ozone Atmospheric Profiles from IASI: Compression, De-noising, First Guess Retrieval and Inversion Algorithms

    NASA Technical Reports Server (NTRS)

    Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A fast temperature, water vapor and ozone atmospheric profile retrieval algorithm is developed for the high-spectral-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using the first guess information is developed to retrieve temperature, water vapor and ozone atmospheric profiles simultaneously. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres, including rare events.
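
    Compression and de-noising by PCA truncation reduces to a fit/transform/inverse-transform round trip, sketched below with scikit-learn on random data standing in for IASI radiance spectra; the matrix shape and component count are arbitrary here.

        import numpy as np
        from sklearn.decomposition import PCA

        # Rows stand in for observed spectra (IASI has 8461 channels; fewer here).
        spectra = np.random.randn(500, 300)

        pca = PCA(n_components=40).fit(spectra)
        compressed = pca.transform(spectra)            # compact representation
        denoised = pca.inverse_transform(compressed)   # truncation discards noise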

  4. Model-based Bayesian filtering of cardiac contaminants from biomedical recordings.

    PubMed

    Sameni, R; Shamsollahi, M B; Jutten, C

    2008-05-01

    Electrocardiogram (ECG) and magnetocardiogram (MCG) signals are among the most considerable sources of noise for other biomedical signals. In some recent works, a Bayesian filtering framework has been proposed for denoising the ECG signals. In this paper, it is shown that this framework may be effectively used for removing cardiac contaminants such as the ECG, MCG and ballistocardiographic artifacts from different biomedical recordings such as the electroencephalogram, electromyogram and also for canceling maternal cardiac signals from fetal ECG/MCG. The proposed method is evaluated on simulated and real signals.

  5. Label-free detection of circulating melanoma cells by in vivo photoacoustic flow cytometry

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoling; Yang, Ping; Liu, Rongrong; Niu, Zhenyu; Suo, Yuanzhen; He, Hao; Gao, Wenyuan; Tang, Shuo; Wei, Xunbin

    2016-03-01

    Melanoma is a malignant tumor of melanocytes. Melanoma cells have high light absorption due to their high melanin content. This property is exploited for the detection of circulating melanoma cells by in vivo photoacoustic flow cytometry (PAFC), which is based on the photoacoustic effect. Compared to in vivo flow cytometry based on fluorescence, PAFC can employ the high melanin content of melanoma cells as an endogenous biomarker to detect circulating melanoma cells in vivo. We have carried out in vitro experiments to demonstrate the ability of the PAFC system to detect photoacoustic signals from melanoma cells. For the in vivo experiments, we constructed a melanoma-tumor-bearing mouse model by subcutaneous inoculation of highly metastatic murine melanoma cells (B16F10). PA signals are detected in the blood vessels of mouse ears in vivo. The raw signal detected from target cells often contains noise caused by electronic devices, such as background noise and thermal noise. We chose a wavelet denoising method to effectively distinguish the target signal from the background noise. Time-domain and frequency-domain processing are combined to analyze the signal after denoising; the algorithm comprises time-domain filtering and a frequency transformation. The frequency spectrum of the signal contains distinctive features that can be used to analyze the properties of target cells or particles. These processing methods have great potential for analyzing signals accurately and rapidly. By counting circulating melanoma cells at regular intervals, we track how their number varies as the melanoma metastasizes. These results show that PAFC is a noninvasive and label-free method for detecting melanoma metastases in blood or lymph circulation.

  6. Nonlocal variational model and filter algorithm to remove multiplicative noise

    NASA Astrophysics Data System (ADS)

    Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi

    2010-07-01

    The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundant information in images, has been shown to be very efficient for denoising images corrupted by additive Gaussian noise. On the basis of the NL method, and striving to minimize the conditional mean-square error, we design an NL means filter to remove multiplicative noise; combining the NL filter with a regularization method, we propose an NL total variation (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm outperforms the TV method; it is superior in preserving small structures and textures and obtains an improvement in peak signal-to-noise ratio.
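
    For the additive-Gaussian setting that NL means was originally designed for, scikit-image's implementation suffices, as sketched below. Handling multiplicative noise, as the paper does, requires modified weights (or, commonly, a log-domain transform), which this snippet does not include.

        import numpy as np
        from skimage import data, img_as_float
        from skimage.restoration import denoise_nl_means, estimate_sigma

        img = img_as_float(data.camera())
        noisy = np.clip(img + 0.08 * np.random.randn(*img.shape), 0.0, 1.0)

        # Estimate the Gaussian noise level, then filter with patch comparisons.
        sigma = estimate_sigma(noisy)
        denoised = denoise_nl_means(noisy, h=1.15 * sigma, patch_size=5,
                                    patch_distance=6, fast_mode=True)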

  7. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving image quality in low-light shooting is one of users' key needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera, consisting of two horizontally separated image sensors that simultaneously capture a color and a mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided-filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer are helpful in improving image quality during low-light shooting.

  8. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing, and relaying image information, as well as for extracting useful features from the scene surrounding the patient. The capabilities and the range of image processing algorithms that the device can perform in real time play a major part in the final quality of the prosthetic vision. It is therefore desirable to use powerful hardware while avoiding bulky, cumbersome solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  9. Adaptive bilateral filter for image denoising and its application to in-vitro Time-of-Flight data

    NASA Astrophysics Data System (ADS)

    Seitel, Alexander; dos Santos, Thiago R.; Mersmann, Sven; Penne, Jochen; Groch, Anja; Yung, Kwong; Tetzlaff, Ralf; Meinzer, Hans-Peter; Maier-Hein, Lena

    2011-03-01

    Image-guided therapy systems generally require registration of pre-operative planning data with the patient's anatomy. One common approach is to acquire intra-operative surface data and match it to surfaces extracted from the planning image. Although increasingly popular for surface generation in general, the novel Time-of-Flight (ToF) technology has not yet been applied in this context. This may be attributed to the fact that ToF range images are subject to considerable noise. The contribution of this study is two-fold. Firstly, we present an adaptation of the well-known bilateral filter for denoising ToF range images based on the noise characteristics of the camera. Secondly, we assess the quality of organ surfaces generated from ToF range data with and without bilateral smoothing, using corresponding high-resolution CT data as ground truth. According to an evaluation on five porcine organs, the root mean squared (RMS) distance between the denoised ToF data points and the reference computed tomography (CT) surfaces ranged from 3.0 mm (lung) to 9.0 mm (kidney). This corresponds to an error reduction of up to 36% compared to the error of the original ToF surfaces.
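
    A plain bilateral-filter sketch for a range image; the range sigma estimated from local depth differences is a crude stand-in for the camera noise model the paper uses.

```python
import numpy as np

def bilateral(depth, radius=3, sigma_s=2.0, sigma_r=None):
    if sigma_r is None:
        # crude noise estimate from horizontal depth differences (assumption)
        sigma_r = 1.4826 * np.median(np.abs(np.diff(depth, axis=1)))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    pad = np.pad(depth, radius, mode="edge")
    out = np.empty_like(depth)
    for i in range(depth.shape[0]):
        for j in range(depth.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - depth[i, j]) ** 2 / (2.0 * sigma_r ** 2))
            w = spatial * rng
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```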

  10. On detection of median filtering in digital images

    NASA Astrophysics Data System (ADS)

    Kirchner, Matthias; Fridrich, Jessica

    2010-01-01

    In digital image forensics, it is generally accepted that intentional manipulations of the image content are most critical, and hence numerous forensic methods focus on the detection of such 'malicious' post-processing. However, it is also beneficial to know as much as possible about the general processing history of an image, including content-preserving operations, since they can affect the reliability of forensic methods in various ways. In this paper, we present a simple yet effective technique to detect median filtering in digital images, a widely used denoising and smoothing operator. Since a great variety of forensic methods relies on some kind of linearity assumption, detecting the non-linear median filter is of particular interest. The effectiveness of our method is backed by experimental evidence on a large image database.
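
    A toy feature in the spirit of streaking-artifact analysis: median filtering leaves unusually many zero first-order differences along scan lines. The ratio and any decision threshold are illustrative assumptions, not the paper's exact statistic.

```python
import numpy as np

def zero_diff_ratio(img):
    d = np.diff(img.astype(np.int32), axis=1)  # horizontal first differences
    h0 = np.mean(d == 0)                       # fraction of zero differences
    h1 = np.mean(np.abs(d) == 1)               # fraction of +/-1 differences
    return h0 / (h1 + 1e-9)                    # large values hint at median filtering

# usage: flag an image when the ratio exceeds a threshold tuned on labeled data
```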

  11. Poisson denoising on the sphere: application to the Fermi gamma ray space telescope

    NASA Astrophysics Data System (ADS)

    Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.

    2010-07-01

    The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high-energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the scarcity of photons. This is why a powerful Poisson noise removal method on the sphere, efficient on low-count Poisson data, is needed. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data onto a sparse multi-scale dictionary (such as wavelets or curvelets) and then applying a VST to the coefficients in order to obtain nearly Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is therefore proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask; the resulting gaps have to be interpolated, so an extension to inpainting is also proposed. The method, applied to simulated Fermi LAT data, proves to be adaptive, fast, and easy to implement.
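
    A flat 2-D analogue of the variance-stabilizing idea, assuming the classical Anscombe transform in place of the spherical MS-VSTS: stabilize the Poisson counts, denoise as if the noise were Gaussian, then invert. Wavelet choice and threshold are illustrative.

```python
import numpy as np
import pywt

def anscombe(x):
    return 2.0 * np.sqrt(x + 3.0 / 8.0)   # Poisson -> approx. unit variance

def inv_anscombe(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0     # simple algebraic inverse

def poisson_denoise(counts, wavelet="sym8", level=3):
    y = anscombe(counts.astype(float))
    coeffs = pywt.wavedec2(y, wavelet, level=level)
    thr = np.sqrt(2.0 * np.log(counts.size))  # universal threshold, unit variance
    den = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in lvl)
                         for lvl in coeffs[1:]]
    rec = pywt.waverec2(den, wavelet)
    return inv_anscombe(rec[:counts.shape[0], :counts.shape[1]])
```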

  12. Mass detection with digitized screening mammograms by using Gabor features

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2007-03-01

    Breast cancer is the leading cancer among American women. The current lifetime risk of developing breast cancer is 13.4% (one in seven). Mammography is the most effective technology presently available for breast cancer screening. With digital mammograms, computer-aided detection (CAD) has proven to be a useful tool for radiologists. In this paper, we focus on mass detection, a common category of breast cancer relative to calcification and architectural distortion. We propose a new mass detection algorithm utilizing Gabor filters, termed 'Gabor Mass Detection' (GMD). There are three steps in the GMD algorithm: (1) preprocessing, (2) generating alarms, and (3) classification (reducing false alarms). Down-sampling, quantization, denoising, and enhancement are performed in the preprocessing step. Then a total of 30 Gabor-filtered images (6 bands by 5 orientations) are produced. Alarm segments are generated by thresholding four Gabor images of full orientations (Stage-I classification) with image-dependent thresholds computed via histogram analysis. Next, a set of edge histogram descriptors (EHD) is extracted from 24 Gabor images (6 by 4) for use in Stage-II classification. After clustering the EHD features with the fuzzy C-means clustering method, a k-nearest neighbor classifier is used to reduce the number of false alarms. We initially analyzed 431 digitized mammograms (159 normal images vs. 272 cancerous images, from the DDSM project, University of South Florida) with the proposed GMD algorithm, and ten-fold cross validation was used for testing the GMD algorithm on the available data. The GMD performance is as follows: sensitivity (true positive rate) = 0.88 at 1.25 false positives per image (FPI), and area under the ROC curve = 0.83. The overall performance of the GMD algorithm is satisfactory, and the accuracy of locating masses (highlighting the boundaries of suspicious areas) is relatively high. Furthermore, the GMD algorithm can successfully detect early-stage malignant masses (with small Assessment values and low Subtlety). In addition, Gabor-filtered images are used in both stages of classification, which greatly simplifies the GMD algorithm.
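
    A minimal Gabor filter-bank sketch (6 frequencies by 5 orientations) with a histogram-based alarm threshold; the frequencies and percentile cut are illustrative, not the tuned GMD settings.

```python
import numpy as np
from skimage.filters import gabor

def gabor_alarms(img, percentile=99.0):
    freqs = [0.05, 0.08, 0.12, 0.18, 0.27, 0.40]   # 6 bands (assumed values)
    thetas = [k * np.pi / 5.0 for k in range(5)]   # 5 orientations
    responses = []
    for f in freqs:
        for th in thetas:
            real, imag = gabor(img, frequency=f, theta=th)
            responses.append(np.hypot(real, imag)) # magnitude response
    energy = np.max(responses, axis=0)             # strongest response per pixel
    thr = np.percentile(energy, percentile)        # image-dependent threshold
    return energy > thr                            # candidate alarm mask
```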

  13. Adaptive target binarization method based on a dual-camera system

    NASA Astrophysics Data System (ADS)

    Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing

    2018-01-01

    An adaptive target binarization method based on a dual-camera system containing two dynamic vision sensors is proposed. First, a denoising preprocessing procedure is introduced to remove the noise events generated by the sensors. Second, the complete edge of the target is retrieved and represented by events using an event mosaicking method. Third, the region of the target is confirmed by an event-to-event matching method. Finally, a postprocessing procedure using the morphological opening and closing operations is adopted to remove the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots, and compared with other well-known binarization methods. The experimental results, based on visual and misclassification error criteria, show that the proposed method performs well and is more robust in the binarization of degraded images.
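
    A sketch of the morphological clean-up step only: opening removes small false-positive specks, closing fills small holes in the confirmed target region. Structuring-element sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def postprocess(mask):
    mask = binary_opening(mask, structure=np.ones((3, 3)))   # drop specks
    return binary_closing(mask, structure=np.ones((5, 5)))   # fill holes
```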

  14. ECG feature extraction and disease diagnosis.

    PubMed

    Bhyri, Channappa; Hamde, S T; Waghmare, L M

    2011-01-01

    An important factor to consider when using electrocardiogram findings for clinical decision making is that the waveforms are influenced by normal physiological and technical factors as well as by pathophysiological factors. In this paper, we propose a method for feature extraction and heart disease diagnosis using the wavelet transform (WT) technique and LabVIEW (Laboratory Virtual Instrument Engineering Workbench). LabVIEW signal processing tools are used to denoise the signal before applying the developed feature extraction algorithm. First, we developed an algorithm for R-peak detection using the Haar wavelet: after 4th-level decomposition of the ECG signal, the detail coefficients are squared, and the standard deviation of the squared detail coefficients is used as the threshold for detecting R-peaks. Second, we used the Daubechies (db6) wavelet for the low-resolution signals: after cross-checking the R-peak locations in the 4th-level low-resolution db6 signal, P waves and T waves are detected. Other features of diagnostic importance, mainly heart rate, R-wave width, Q-wave width, T-wave amplitude and duration, ST segment, and frontal plane axis, are also extracted, and a scoring pattern is applied for heart disease diagnosis. In this study, detection of tachycardia, bradycardia, left ventricular hypertrophy, right ventricular hypertrophy, and myocardial infarction has been considered. The CSE ECG database, which contains 5000 samples recorded at a sampling frequency of 500 Hz, and the ECG database created by the S.G.G.S. Institute of Engineering and Technology, Nanded (Maharashtra), have been used.
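
    A minimal sketch of the described R-peak rule, assuming a 500 Hz ECG: the 4th-level Haar detail coefficients are squared and thresholded by their standard deviation, and indices are mapped back through the 2^4 downsampling. The refractory distance is an added assumption.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs=500):
    coeffs = pywt.wavedec(ecg, "haar", level=4)
    d4 = coeffs[1]                  # level-4 detail coefficients
    sq = d4 ** 2
    thr = sq.std()                  # threshold per the abstract
    # ~0.2 s refractory period between beats (assumption), in coefficient units
    locs, _ = find_peaks(sq, height=thr, distance=max(1, int(0.2 * fs / 16)))
    return locs * 16                # approximate sample positions in the ECG
```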

  15. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    NASA Astrophysics Data System (ADS)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising that preserves edges. Tensors are used to encode the CIELAB color channels, the uniformity, and the edginess of image pixels. A specific voting process is proposed to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (using an optimized version of CIEDE2000), a uniformity measurement, and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak signal-to-noise ratios and visual inspection show that the proposed methodology performs better than state-of-the-art techniques.

  16. [Biometric identification method for ECG based on the piecewise linear representation (PLR) and dynamic time warping (DTW)].

    PubMed

    Yang, Licai; Shen, Jun; Bao, Shudi; Wei, Shoushui

    2013-10-01

    To address the trade-off between identification performance and algorithmic complexity, we propose a piecewise linear representation and dynamic time warping (PLR-DTW) method for ECG biometric identification. First, after denoising preprocessing, we detected R-peaks to segment the heartbeats. Then we used the PLR method to retain the important information of an ECG signal segment while reducing the data dimension at the same time. An improved DTW method was used for similarity measurements between the test data and the templates. The performance evaluation was carried out on two ECG databases: PTB and MIT-BIH. The analysis showed that, compared to the discrete wavelet transform method, the proposed PLR-DTW method achieved an accuracy rate nearly 8% higher and saved about 30% of the operation time, demonstrating that the proposed method provides better performance.
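
    A textbook DTW distance in numpy, paired with a crude piecewise-linear reduction by uniform resampling; the paper's PLR selects salient points adaptively, which this sketch does not attempt.

```python
import numpy as np

def plr_downsample(x, n_points=64):
    idx = np.linspace(0, len(x) - 1, n_points).astype(int)
    return x[idx]                       # crude dimensionality reduction

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# identification: assign a test beat to the template with the smallest distance
```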

  17. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the distribution of wavelet local maxima at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those of SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A new wavelet-based EFCM clustering model was introduced toward noise reduction and detail preservation. The proposed method improves overall US image quality, which in turn could affect the decision on whether additional imaging and/or intervention is needed.
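
    A plain fuzzy c-means sketch in numpy for scalar features such as wavelet local-maxima magnitudes; the 'enhanced' spatial and interscale constraints of EFCM are not reproduced. m is the usual fuzzifier.

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, x.size))
    U /= U.sum(axis=0)                             # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ x) / Um.sum(axis=1)        # weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        U = d ** (-2.0 / (m - 1))                  # membership update
        U /= U.sum(axis=0)
    return centers, U
```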

  18. Optimization of magnetic flux density measurement using multiple RF receiver coils and multi-echo in MREIT.

    PubMed

    Jeong, Woo Chul; Chauhan, Munish; Sajib, Saurav Z K; Kim, Hyung Joong; Serša, Igor; Kwon, Oh In; Woo, Eung Je

    2014-09-07

    Magnetic Resonance Electrical Impedance Tomography (MREIT) is an MRI method that enables mapping of internal conductivity and/or current density via measurements of magnetic flux density signals. MREIT measures only the z-component, Bz, of the magnetic flux density B = (Bx, By, Bz) induced by external current injection. Noise in the measured Bz complicates the recovery of magnetic flux density maps, resulting in lower-quality conductivity and current-density maps. We present a new method for more accurate measurement of the spatial gradient of the magnetic flux density, ∇Bz. The method relies on the use of multiple radio-frequency receiver coils and an interleaved multi-echo pulse sequence that acquires multiple sampling points within each repetition time. The noise level of the measured magnetic flux density Bz depends on the decay rate of the signal magnitude, the injection current duration, and the coil sensitivity map. The proposed method uses three key steps. The first step is to determine a representative magnetic flux density gradient from the multiple receiver coils by using a weighted combination and by denoising the measured noisy data. The second step is to optimize the magnetic flux density gradient by using multi-echo magnetic flux densities at each pixel in order to reduce the noise level of ∇Bz. The third step is to remove the remaining random noise component from the recovered ∇Bz by solving an elliptic partial differential equation in a region of interest. Numerical simulation experiments using a cylindrical phantom model that included regions of low MRI signal-to-noise ratio ('defects') verified the proposed method. Experimental results using a real phantom, which included three different kinds of anomalies, demonstrated that the proposed method reduced the noise level of the measured magnetic flux density. The quality of the conductivity maps recovered using the denoised ∇Bz data showed that the proposed method reduced the conductivity noise level by a factor of 3-4 in each anomaly region in comparison to the conventional method.
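
    A hedged sketch of the first step only: combining per-coil Bz maps with weights proportional to local signal quality. Using the squared magnitude image as the weight is an assumption standing in for the paper's coil-sensitivity weighting.

```python
import numpy as np

def combine_coils(bz_maps, magnitudes):
    # bz_maps, magnitudes: arrays of shape (n_coils, H, W)
    w = np.asarray(magnitudes, dtype=float) ** 2   # per-coil, per-pixel weights
    w /= w.sum(axis=0, keepdims=True) + 1e-12      # normalize across coils
    return np.sum(w * np.asarray(bz_maps), axis=0) # combined Bz map
```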

  19. Automatic arrival time detection for earthquakes based on Modified Laplacian of Gaussian filter

    NASA Astrophysics Data System (ADS)

    Saad, Omar M.; Shalaby, Ahmed; Samy, Lotfy; Sayed, Mohammed S.

    2018-04-01

    Precise identification of the onset time of an earthquake is imperative for correctly determining the earthquake's location and the other parameters used to build seismic catalogues. The P-wave arrival of weak events or micro-earthquakes cannot be precisely determined in the presence of background noise. In this paper, we propose a novel approach based on a Modified Laplacian of Gaussian (MLoG) filter to detect the onset time even at very low signal-to-noise ratios (SNRs). The proposed algorithm utilizes a denoising filter to smooth the background noise: the MLoG mask is employed to filter the seismic data, and a dual-threshold comparator is then applied to detect the onset time of the event. The results show that the proposed algorithm can accurately detect the onset time of micro-earthquakes at SNRs as low as -12 dB. The proposed algorithm achieves an onset-time picking accuracy of 93% with a standard deviation error of 0.10 s on 407 field seismic waveforms. We also compare the results with the short-term/long-term average algorithm (STA/LTA) and the Akaike Information Criterion (AIC); the proposed algorithm outperforms both.
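
    A sketch of the classic STA/LTA baseline that the abstract compares against, not the proposed MLoG picker; window lengths and the trigger level are typical illustrative values.

```python
import numpy as np

def sta_lta_onset(trace, fs, sta_win=0.5, lta_win=10.0, trigger=4.0):
    e = trace.astype(float) ** 2                              # signal energy
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(e, np.ones(sta_n) / sta_n, mode="same") # short-term avg
    lta = np.convolve(e, np.ones(lta_n) / lta_n, mode="same") + 1e-12
    above = np.flatnonzero(sta / lta > trigger)
    return above[0] if above.size else None                   # first trigger sample
```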

  20. Halftoning processing on a JPEG-compressed image

    NASA Astrophysics Data System (ADS)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time, and memory usage. In the wide-format printing industry, this becomes an important issue: e.g., a 1 m2 color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain by applying an equivalent operation to the compressed format. This paper presents an innovative application of the halftoning-by-screening operation applied directly to a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated with examples for different halftone masks. A pre-sharpening operation applied to a low-quality JPEG-compressed image is also described; it denoises the image and enhances its contours.
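
    For reference, the plain spatial screening operation that the paper migrates into the DCT domain: each pixel is compared against a tiled threshold mask. The 4x4 Bayer matrix is a standard illustrative mask, not one of the paper's halftone masks.

```python
import numpy as np

BAYER4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) / 16.0   # thresholds in (0, 1)

def screen(gray):
    # gray: 2-D array with values in [0, 1]
    h, w = gray.shape
    mask = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > mask).astype(np.uint8)              # binary halftone
```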
