Sample records for wavelet soft threshold

  1. Wavelet median denoising of ultrasound images

    NASA Astrophysics Data System (ADS)

    Macey, Katherine E.; Page, Wyatt H.

    2002-05-01

    Ultrasound images are contaminated with both additive and multiplicative noise, modeled by Gaussian and speckle noise respectively. Distinguishing small features, such as fallopian tubes in the female genital tract, in this noisy environment is problematic. A new method for noise reduction, Wavelet Median Denoising, is presented. Wavelet Median Denoising consists of performing a standard noise reduction technique, median filtering, in the wavelet domain. The new method is tested on 126 images, comprising 9 original images each with 14 levels of Gaussian or speckle noise. Results for both separable and non-separable wavelets are evaluated, relative to soft-thresholding in the wavelet domain, using the signal-to-noise ratio and subjective assessment. The performance of Wavelet Median Denoising is comparable to that of soft-thresholding. Both methods are more successful in removing Gaussian noise than speckle noise. Wavelet Median Denoising outperforms soft-thresholding in a larger number of cases for speckle noise reduction than for Gaussian noise reduction. Noise reduction is more successful using non-separable wavelets than separable wavelets. When both methods are applied to ultrasound images obtained from a phantom of the female genital tract a small improvement is seen; however, a substantial improvement is required prior to clinical use.
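    The two approaches compared in this record differ only in the rule applied to each wavelet detail band: a median filter versus a soft-threshold shrinkage. A minimal sketch of both rules in plain Python (the function names and the 3-sample window are illustrative, not taken from the paper):

```python
import statistics

def soft_threshold(coeffs, t):
    """Donoho-style soft thresholding: shrink every coefficient toward zero by t."""
    return [max(abs(c) - t, 0.0) * (1.0 if c > 0 else -1.0) for c in coeffs]

def median_filter(coeffs, width=3):
    """The core operation of Wavelet Median Denoising: replace each coefficient
    by the median of its local neighbourhood within the subband."""
    half = width // 2
    return [statistics.median(coeffs[max(0, i - half): i + half + 1])
            for i in range(len(coeffs))]
```

Either function would be applied band by band to the detail coefficients before inverse transforming; the paper's finding is that the two give comparable SNR, with median filtering slightly ahead on speckle.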

  2. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. Using the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively with a soft-thresholding operation. Choosing the soft-thresholding parameter …
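    The soft-thresholding operation referred to here is the proximal map of the ℓ1 penalty. A hedged sketch of one iteration in the simplest (ISTA-style) form, with an explicit matrix standing in for the tomographic forward operator; this is a simplification of the primal-dual scheme, not the paper's exact algorithm:

```python
def soft(x, t):
    """Proximal operator of t * ||x||_1: componentwise soft thresholding."""
    return [max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0) for v in x]

def ista_step(x, A, y, lam, L):
    """One iterate x <- S_{lam/L}(x - (1/L) A^T (A x - y)) for the functional
    ||A x - y||^2 / 2 + lam * ||x||_1, with step size 1/L."""
    m, n = len(y), len(x)
    # residual r = A x - y
    r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
    # gradient g = A^T r
    g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
    # gradient step followed by shrinkage
    return soft([x[j] - g[j] / L for j in range(n)], lam / L)
```

With A the identity, a single step reduces to plain soft thresholding of the data, which is the degenerate case of wavelet-domain denoising.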

  3. Wavelet methodology to improve single unit isolation in primary motor cortex cells

    PubMed Central

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A.

    2016-01-01

    The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein’s unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root-mean-square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single-unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. PMID:25794461
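    The "fixed form threshold" named above is commonly the universal threshold of Donoho and Johnstone, with the noise level estimated robustly from the finest detail band. A short sketch under that assumption (the paper's exact estimator is not given in the abstract):

```python
import math
import statistics

def noise_sigma(detail):
    """Robust noise estimate from the finest-scale detail coefficients:
    sigma = median(|d|) / 0.6745 (consistent for Gaussian noise)."""
    return statistics.median(abs(d) for d in detail) / 0.6745

def universal_threshold(detail):
    """'Fixed form' (universal) threshold: sigma * sqrt(2 ln N)."""
    return noise_sigma(detail) * math.sqrt(2.0 * math.log(len(detail)))
```

SURE and minimax thresholds replace the sqrt(2 ln N) factor with data-driven or minimax-optimal constants; the shrinkage rule (soft or hard) is then applied with whichever threshold is chosen.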

  4. Photoacoustic signals denoising of the glucose aqueous solutions using an improved wavelet threshold method

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Xiong, Zhihua

    2016-10-01

    The photoacoustic signal denoising of glucose is one of the most important steps in the quality identification of fruit, because the real-time photoacoustic signals of glucose are easily interfered with by all kinds of noises. To remove the noises and some useless information, an improved wavelet threshold function was proposed. Compared with the traditional wavelet hard and soft threshold functions, the improved wavelet threshold function can overcome the pseudo-oscillation effect in the denoised photoacoustic signals thanks to its continuity, and the error between the denoised signals and the original signals can be decreased. To validate the feasibility of the improved wavelet threshold function, denoising simulation experiments based on MATLAB programming were performed. In the simulation experiments, a standard test signal was used, and three different denoising methods were compared with the improved wavelet threshold function. The signal-to-noise ratio (SNR) and root-mean-square error (RMSE) values were used to evaluate the denoising performance. The experimental results demonstrate that the SNR value of the improved wavelet threshold function is the largest and the RMSE value is the smallest, which fully verifies that denoising with the improved wavelet threshold function is feasible. Finally, the improved wavelet threshold function was used to remove the noises of the photoacoustic signals of the glucose solutions, with a very good denoising effect. Therefore, the improved wavelet threshold function denoising proposed in this paper has potential value in the field of photoacoustic signal denoising.
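    The continuity property this record emphasizes can be made concrete with standard rules. The paper's exact improved function is not given in the abstract; the non-negative garrote below is a well-known continuous compromise between hard and soft thresholding, shown only to illustrate the trade-off:

```python
def hard(c, t):
    """Hard rule: keep or kill -- discontinuous at |c| = t, causing ringing."""
    return c if abs(c) > t else 0.0

def soft(c, t):
    """Soft rule: continuous at |c| = t, but every kept coefficient is
    shifted by t (a fixed bias)."""
    return (abs(c) - t) * (1.0 if c >= 0 else -1.0) if abs(c) > t else 0.0

def garrote(c, t):
    """Non-negative garrote: continuous like soft thresholding, yet it
    approaches the identity for large |c|, reducing the fixed bias."""
    return c - t * t / c if abs(c) > t else 0.0
```

For a coefficient of 2 with threshold 1, hard keeps 2, soft returns 1, and the garrote returns 1.5, sitting between the two while staying continuous.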

  5. Wavelet methodology to improve single unit isolation in primary motor cortex cells.

    PubMed

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A

    2015-05-15

    The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root-mean-square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single-unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. Copyright © 2015. Published by Elsevier B.V.

  6. Electrocardiogram signal denoising based on a new improved wavelet thresholding

    NASA Astrophysics Data System (ADS)

    Han, Guoqiang; Xu, Zhijun

    2016-08-01

    Good-quality electrocardiogram (ECG) signals are utilized by physicians for the interpretation and identification of physiological and pathological phenomena. In general, ECG signals may be mixed with various noises such as baseline wander, power line interference, and electromagnetic interference during gathering and recording. As ECG signals are non-stationary physiological signals, the wavelet transform has proven to be an effective tool for discarding noise from corrupted signals. A new compromising threshold function, a sigmoid function-based thresholding scheme, is adopted in processing ECG signals. Compared with other methods such as hard/soft thresholding or other existing thresholding functions, the new algorithm has many advantages in the noise reduction of ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising proves more efficient than existing algorithms in ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative tools to verify the denoising performance. The experimental results reveal that the P, Q, R, and S waves of the ECG signals after denoising coincide with those of the original ECG signals when the new proposed method is employed.
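    One plausible form of a sigmoid-based compromise rule is sketched below (an assumption for illustration; the paper's exact formula is not given in the abstract). Coefficients well below the threshold are driven toward zero, coefficients well above it are kept almost unchanged, and the transition at |c| = T is smooth rather than a jump:

```python
import math

def sigmoid_threshold(c, t, k=10.0):
    """Illustrative sigmoid-based shrinkage: c scaled by a logistic weight
    centred at |c| = t, with steepness k controlling how sharply the rule
    switches from 'suppress' to 'keep'."""
    return c / (1.0 + math.exp(-k * (abs(c) - t)))
```

As k grows the rule approaches hard thresholding; small k gives a gentler, soft-like shrinkage, which is the sense in which such functions interpolate between the two classical rules.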

  7. A de-noising method using the improved wavelet threshold function based on noise variance estimation

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao

    2018-01-01

    Precise and efficient noise variance estimation is very important for the processing of all kinds of signals when using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using a two-state Gaussian mixture model to classify the high-frequency wavelet coefficients at the minimum scale, which takes both efficiency and accuracy into account. Based on the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of the hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure, and it maintains the dynamic characteristics of the signals favorably.

  8. Near-Infrared Spectrum Detection of Wheat Gluten Protein Content Based on a Combined Filtering Method.

    PubMed

    Cai, Jian-Hua

    2017-09-01

    To eliminate the random error of the derivative near-IR (NIR) spectrum and to improve model stability and the prediction accuracy of the gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum based on both empirical mode decomposition and the wavelet soft-threshold method. The principle and the steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated based on the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method. Experimental results show that the proposed combined method is effective in completing pretreatment of the NIR spectrum, and the proposed method improves the accuracy of detection of wheat gluten protein content from the NIR spectrum.

  9. Twofold processing for denoising ultrasound medical images.

    PubMed

    Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y

    2015-01-01

    Ultrasound (US) medical imaging non-invasively pictures the inside of a human body for disease diagnostics. Speckle noise attacks ultrasound images, degrading their visual quality. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold uses block-based thresholding, both hard (BHT) and soft (BST), on pixels in the wavelet domain with non-overlapping block sizes of 8, 16, 32 and 64. This first-fold process reduces speckle well but also blurs the object of interest. The second-fold process then restores object boundaries and texture with adaptive wavelet fusion. The degraded object in the block-thresholded US image is restored through wavelet coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate visual quality improvement to an interesting level with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variational filtering (TVF) and empirical mode decomposition (EMD) for enhancement of US images. The US images are provided by AMMA hospital radiology labs at Vijayawada, India.

  10. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm formulates the image reconstruction as a standard optimization problem comprising an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in the proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. A wavelet and least square filter based spatial-spectral denoising approach of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Li, Ting; Chen, Xiao-Mei; Chen, Gang; Xue, Bo; Ni, Guo-Qiang

    2009-11-01

    Noise reduction is a crucial step in hyperspectral imagery pre-processing. Owing to sensor characteristics, the noise of hyperspectral imagery appears in both the spatial and spectral domains. However, most prevailing denoising techniques process the imagery in only one specific domain and do not exploit the multi-domain nature of hyperspectral imagery. In this paper, a new spatial-spectral noise reduction algorithm is proposed, based on wavelet analysis and least squares filtering techniques. First, in the spatial domain, a new stationary wavelet shrinking algorithm with an improved threshold function is utilized to adjust the noise level band by band. This algorithm uses BayesShrink for threshold estimation and amends the traditional soft-threshold function by adding shape tuning parameters. Compared with the soft or hard threshold function, the improved one, which is first-order differentiable and has a smooth transitional region between noise and signal, preserves more image edge detail and weakens pseudo-Gibbs oscillations. Then, in the spectral domain, a cubic Savitzky-Golay filter based on the least squares method is used to remove spectral noise and artificial noise that may have been introduced during the spatial denoising. With the filter window width appropriately selected according to prior knowledge, this algorithm smooths the spectral curve effectively. The performance of the new algorithm is evaluated on a set of Hyperion images acquired in 2007. The results show that the new spatial-spectral denoising algorithm provides a more significant signal-to-noise-ratio improvement than traditional spatial or spectral methods, while preserving the local spectral absorption features better.
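    BayesShrink, used here for threshold estimation, chooses a per-band threshold T = sigma^2 / sigma_x, where sigma is the noise level and sigma_x estimates the spread of the noise-free signal in that subband. A minimal sketch, assuming sigma is already known (in practice it is estimated from the finest detail band):

```python
import math

def bayes_shrink_threshold(detail, sigma):
    """BayesShrink threshold for one subband: T = sigma^2 / sigma_x."""
    var_y = sum(d * d for d in detail) / len(detail)      # observed band variance
    sigma_x = math.sqrt(max(var_y - sigma * sigma, 0.0))  # signal spread estimate
    if sigma_x == 0.0:
        # band indistinguishable from pure noise: threshold everything away
        return max(abs(d) for d in detail)
    return sigma * sigma / sigma_x
```

Bands dominated by signal (large sigma_x) get a small threshold and are barely touched; near-pure-noise bands get a large threshold, which is why the rule adapts well band by band.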

  12. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    PubMed

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which result in high-resolution information being obscured by noise and image quality being degraded, causing errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimum selected wavelet parameters (three-level decomposition, level-1 zeroed out, subband-dependent threshold, soft thresholding and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing with real cryo-ET experiment data, higher-quality images and more accurate measures of a biological structure can be obtained with the modified wavelet shrinkage filter compared with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can extend the current state of the art in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.

  13. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides a more accurate roughness estimate considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which are corrupted by different levels of noise for the second analysis. Such controlled noise-level experiments make it possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of the denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised low hard thresholding.

  14. Noise reduction algorithm with the soft thresholding based on the Shannon entropy and bone-conduction speech cross- correlation bands.

    PubMed

    Na, Sung Dae; Wei, Qun; Seong, Ki Woong; Cho, Jin Ho; Kim, Myoung Nam

    2018-01-01

    The conventional methods of speech enhancement, noise reduction, and voice activity detection are based on the suppression of noise or non-speech components of the target air-conduction signals. However, air-conducted speech is hard to differentiate from babble or white noise signals. To overcome this problem, the proposed algorithm uses bone-conduction speech signals and soft thresholding based on the Shannon entropy principle and the cross-correlation of air- and bone-conduction signals. A new algorithm for speech detection and noise reduction is proposed, which makes use of the Shannon entropy principle and cross-correlation with the bone-conduction speech signals to threshold the wavelet packet coefficients of the noisy speech. Each threshold is generated by the entropy and cross-correlation approaches in the decomposed bands using the wavelet packet decomposition, and the method's efficiency is assessed by objective quality measures, namely PESQ, RMSE, correlation, and SNR. The noise reduction achieved by the proposed method is demonstrated in MATLAB simulation. To verify the method's feasibility, we compared the air- and bone-conduction speech signals and their spectra after processing by the proposed method. As a result, the high performance of the proposed method is confirmed, which makes it quite instrumental for future applications in communication devices, noisy environments, construction, and military operations.
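    The entropy side of the threshold generation can be read as the Shannon entropy of a band's normalized coefficient energy: a concentrated (speech-like) band has low entropy, a flat (noise-like) band has high entropy. An illustrative sketch of that measure (the paper's exact formulation may differ):

```python
import math

def shannon_entropy(coeffs):
    """Shannon entropy of the normalized energy distribution of one
    wavelet packet band; higher values indicate a flatter, noisier band."""
    energy = [c * c for c in coeffs]
    total = sum(energy)
    p = [e / total for e in energy]
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)
```

A band with all its energy in one coefficient scores 0, while perfectly uniform energy scores ln N; a per-band threshold can then be scaled up for high-entropy bands so they are shrunk more aggressively.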

  15. Noise reduction in Lidar signal using correlation-based EMD combined with soft thresholding and roughness penalty

    NASA Astrophysics Data System (ADS)

    Chang, Jianhua; Zhu, Lingyan; Li, Hongxu; Xu, Fan; Liu, Binggang; Yang, Zhenbo

    2018-01-01

    Empirical mode decomposition (EMD) is widely used to analyze non-linear and non-stationary signals for noise reduction. In this study, a novel EMD-based denoising method, referred to as EMD with soft thresholding and roughness penalty (EMD-STRP), is proposed for Lidar signal denoising. With the proposed method, the relevant and irrelevant intrinsic mode functions are first distinguished via a correlation coefficient. Then, the soft thresholding technique is applied to the irrelevant modes, and the roughness penalty technique is applied to the relevant modes to extract as much information as possible. The effectiveness of the proposed method was evaluated using three typical signals contaminated by white Gaussian noise. The denoising performance was then compared to the denoising capabilities of other techniques, such as correlation-based EMD partial reconstruction, correlation-based EMD hard thresholding, and the wavelet transform. The use of EMD-STRP on the measured Lidar signal resulted in the noise being efficiently suppressed, with an improved signal-to-noise ratio of 22.25 dB and an extended detection range of 11 km.

  16. Application of time-resolved glucose concentration photoacoustic signals based on an improved wavelet denoising

    NASA Astrophysics Data System (ADS)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2014-10-01

    Real-time monitoring of blood glucose concentration (BGC) is an important procedure in controlling diabetes mellitus and preventing complications for diabetic patients. Noninvasive measurement of BGC has become a research hotspot because it avoids physical and psychological harm. Photoacoustic spectroscopy is a well-established, hybrid and alternative technique used to determine the BGC. According to the theory of the photoacoustic technique, the blood is irradiated by a pulsed laser with nanosecond pulse duration and microjoule pulse energy; photoacoustic signals containing the BGC information are generated by the thermo-elastic mechanism, and the BGC level can then be interpreted from the photoacoustic signal via data analysis. In practice, however, the time-resolved photoacoustic signals of BGC are polluted by a variety of noises, e.g., the interference of background sounds and the multi-component nature of blood. The quality of the photoacoustic signal of BGC directly impacts the precision of BGC measurement. Therefore, an improved wavelet denoising method was proposed to eliminate the noises contained in BGC photoacoustic signals. To overcome the shortcomings of traditional wavelet threshold denoising, an improved dual-threshold wavelet function was proposed in this paper. Simulation experiments illustrated that the denoising result of this improved wavelet method was better than that of the traditional soft and hard threshold functions. To verify the feasibility of this improved function, actual photoacoustic BGC signals were tested; the results demonstrated that the signal-to-noise ratio (SNR) with the improved function increases by about 40-80%, and the root-mean-square error (RMSE) decreases by about 38.7-52.8%.

  17. Spectral information enhancement using wavelet-based iterative filtering for in vivo gamma spectrometry.

    PubMed

    Paul, Sabyasachi; Sarkar, P K

    2013-04-01

    Use of wavelet transformation in stationary signal processing has been demonstrated for denoising the measured spectra and characterisation of radionuclides in the in vivo monitoring analysis, where difficulties arise due to very low activity level to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft 'thresholding' over the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude while achieving a 3-fold improvement in the signal-to-noise ratio, compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.

  18. Research and Implementation of Heart Sound Denoising

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Wang, Yutai; Wang, Yanxiang

    The heart sound is one of the most important physiological signals. However, the process of acquiring the heart sound signal can be interfered with by many external factors. The heart sound is a weak signal, and even weak external noise may lead to the misjudgment of the pathological and physiological information it carries, thus causing misdiagnosis. As a result, removing the noise mixed with the heart sound is key. In this paper, a systematic analysis of heart sound denoising based on MATLAB is presented. The study first uses the powerful processing capabilities of MATLAB to transform noisy heart sound signals into the wavelet domain through the wavelet transform and decompose these signals at multiple levels. Then, for the detail coefficients, soft thresholding is applied using wavelet thresholding to eliminate noise, significantly improving the denoising of the signal. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power frequency and 35 Hz electromechanical interference signals are eliminated using a notch filter.
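    The decompose / soft-threshold the details / reconstruct pipeline described here can be sketched with a single-level Haar transform standing in for the paper's MATLAB multilevel decomposition (illustrative only; a real heart sound pipeline would use several levels and a smoother wavelet):

```python
import math

R2 = math.sqrt(2.0)

def haar_step(x):
    """One level of the Haar DWT: approximation (a) and detail (d) bands."""
    a = [(x[2*i] + x[2*i+1]) / R2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / R2 for i in range(len(x) // 2)]
    return a, d

def haar_inverse(a, d):
    """Perfect-reconstruction inverse of haar_step."""
    out = []
    for ai, di in zip(a, d):
        out.extend([(ai + di) / R2, (ai - di) / R2])
    return out

def soft(c, t):
    return (abs(c) - t) * (1.0 if c >= 0 else -1.0) if abs(c) > t else 0.0

def denoise(x, t):
    """Decompose, soft-threshold only the detail band, then reconstruct."""
    a, d = haar_step(x)
    return haar_inverse(a, [soft(c, t) for c in d])
```

With a zero threshold the signal is reproduced exactly (the transform is orthogonal); with a large threshold the detail band is zeroed and the output is the smoothed approximation, which is the behaviour the thresholding step exploits to remove high-frequency noise.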

  19. Entropy-aware projected Landweber reconstruction for quantized block compressive sensing of aerial imagery

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Li, Kangda; Wang, Bing; Tang, Hainie; Gong, Xiaohui

    2017-01-01

    A quantized block compressive sensing (QBCS) framework, which incorporates the universal measurement, quantization/inverse quantization, entropy coder/decoder, and iterative projected Landweber reconstruction, is summarized. Under the QBCS framework, this paper presents an improved reconstruction algorithm for aerial imagery, QBCS with entropy-aware projected Landweber (QBCS-EPL), which leverages the full-image sparse transform without a Wiener filter and an entropy-aware thresholding model for wavelet-domain image denoising. By analyzing the functional relation between the soft-thresholding factors and entropy-based bitrates for different quantization methods, the proposed model can effectively remove wavelet-domain noise via bivariate shrinkage and achieve better image reconstruction quality. For the overall performance of QBCS reconstruction, experimental results demonstrate that the proposed QBCS-EPL algorithm significantly outperforms several existing algorithms. With the experiment-driven methodology, the QBCS-EPL algorithm obtains better reconstruction quality at a relatively moderate computational cost, which makes it more desirable for aerial imagery applications.

  20. Wavelet tree structure based speckle noise removal for optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Yuan, Xin; Liu, Xuan; Liu, Yang

    2018-02-01

    We report a new speckle noise removal algorithm in optical coherence tomography (OCT). Though wavelet domain thresholding algorithms have demonstrated superior advantages in suppressing noise magnitude and preserving image sharpness in OCT, the wavelet tree structure has not been investigated in previous applications. In this work, we propose an adaptive wavelet thresholding algorithm via exploiting the tree structure in wavelet coefficients to remove the speckle noise in OCT images. The threshold for each wavelet band is adaptively selected following a special rule to retain the structure of the image across different wavelet layers. Our results demonstrate that the proposed algorithm outperforms conventional wavelet thresholding, with significant advantages in preserving image features.
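    One way to read the cross-layer rule (an assumption for illustration; the abstract does not spell out the exact rule) is a parent-child significance test on the dyadic coefficient tree: a fine-scale coefficient is kept only if it passes its own band's threshold and its coarser-scale parent is also significant, so isolated speckle responses without support across layers are discarded:

```python
def tree_threshold(parent, child, t_parent, t_child):
    """Illustrative parent-child rule on a 1-D dyadic tree: each parent
    coefficient covers two children at the next finer scale."""
    out = []
    for i, c in enumerate(child):
        p = parent[i // 2]  # coarser-scale parent of this coefficient
        keep = abs(c) > t_child and abs(p) > t_parent
        out.append(c if keep else 0.0)
    return out
```

Coefficients belonging to real image structure tend to be large across several scales, while speckle is large at one scale only, which is the statistical asymmetry such tree-structured thresholding exploits.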

  1. Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography.

    PubMed

    Zaki, Farzana; Wang, Yahui; Su, Hao; Yuan, Xin; Liu, Xuan

    2017-05-01

    Optical coherence tomography (OCT) is based on coherence detection of interferometric signals and hence inevitably suffers from speckle noise. To remove speckle noise in OCT images, wavelet domain thresholding has demonstrated significant advantages in suppressing noise magnitude while preserving image sharpness. However, speckle noise in OCT images has different characteristics in different spatial scales, which has not been considered in previous applications of wavelet domain thresholding. In this study, we demonstrate a noise adaptive wavelet thresholding (NAWT) algorithm that exploits the difference of noise characteristics in different wavelet sub-bands. The algorithm is simple, fast, effective and is closely related to the physical origin of speckle noise in OCT image. Our results demonstrate that NAWT outperforms conventional wavelet thresholding.

  2. A new time-adaptive discrete bionic wavelet transform for enhancing speech from adverse noise environment

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui

    2012-04-01

    Automatic speech processing systems are widely used in everyday life, such as in mobile communication, speech and speaker recognition, and assisting the hearing impaired. In speech communication systems, the quality and intelligibility of speech is of utmost importance for ease and accuracy of information exchange. To obtain an intelligible speech signal that is more pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech from additive non-stationary noises which occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement and may lead to a new path in speech processing. In the proposed technique, discrete BWT is first applied to noisy speech to derive TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach has shown better performance than existing algorithms at lower input SNR due to modified soft level-dependent thresholding on time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique is also evaluated for a speaker recognition task in noisy environments. The recognition results show that the TADBWT technique yields better performance when compared to alternate methods, specifically at lower input SNR.

  3. Detecting trace components in liquid chromatography/mass spectrometry data sets with two-dimensional wavelets

    NASA Astrophysics Data System (ADS)

    Compton, Duane C.; Snapp, Robert R.

    2007-09-01

    TWiGS (two-dimensional wavelet transform with generalized cross validation and soft thresholding) is a novel algorithm for denoising liquid chromatography-mass spectrometry (LC-MS) data for use in "shot-gun" proteomics. Proteomics, the study of all proteins in an organism, is an emerging field that has already proven successful for drug and disease discovery in humans. There are a number of constraints that limit the effectiveness of LC-MS for shot-gun proteomics, where the chemical signals are typically weak and data sets are computationally large. Most algorithms suffer greatly from researcher-driven bias, making the results irreproducible and unusable by other laboratories. We thus introduce a new algorithm, TWiGS, that removes electrical (additive white) and chemical noise from LC-MS data sets. TWiGS is developed as a true two-dimensional algorithm, which operates in the time-frequency domain and minimizes the amount of researcher bias. It is based on the traditional discrete wavelet transform (DWT), which allows for fast and reproducible analysis. The separable two-dimensional DWT decomposition is paired with generalized cross validation and soft thresholding. The choice among the Haar, Coiflet-6, and Daubechies-4 wavelets, and the number of decomposition levels, are determined from observed experimental results. Using a synthetic LC-MS data model, TWiGS accurately retains key characteristics of the peaks in both the time and m/z domains, and can detect peaks from noise of the same intensity. TWiGS is applied to angiotensin I and II samples run on an LC-ESI-TOF-MS (liquid chromatography-electrospray ionization time-of-flight mass spectrometer) to demonstrate its utility for the detection of low-lying peaks obscured by noise.

  4. Wavefront reconstruction method based on wavelet fractal interpolation for coherent free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng

    2018-03-01

    Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in weak homodyne detection efficiency for free space optical (FSO) communication. To solve this problem, we first validate the feasibility of a liquid crystal spatial light modulator (LC SLM) for use in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed after self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is used to perform multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction recovers the wavefront phase. Simulation results demonstrate the superiority of our method in homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower computational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.

  5. Study on De-noising Technology of Radar Life Signal

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Fang; Wang, Lian-Huan; Ma, Jiang-Fei; Wang, Pei-Pei

    2016-05-01

    Radar detection is a novel life-detection technology that can be applied to medical monitoring, anti-terrorism, disaster relief, street fighting, etc. As the radar life signal is very weak, it is often submerged in noise. Because of the non-stationarity and randomness of these clutter signals, it is necessary to denoise efficiently before extracting and separating the useful signal. This paper improves the theoretical continuous-wave model of the radar life signal, performs de-noising by introducing the lifting wavelet transform, and determines the best threshold function by comparing the de-noising effects of different threshold functions. The results indicate that both the SNR and MSE of the signal are better than those of traditional methods when the lifting wavelet transform is combined with a new improved soft-threshold de-noising function.
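The lifting wavelet transform mentioned above follows a split/predict/update pattern. A minimal lifting-scheme Haar step (my own illustration; the paper's filters are not specified) looks like:

```python
import numpy as np

def lifting_haar_forward(x):
    # Split / predict / update: the basic pattern of the lifting scheme.
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    d = odd - even            # predict odd samples from their even neighbours
    a = even + d / 2.0        # update evens so `a` holds the pairwise means
    return a, d

def lifting_haar_inverse(a, d):
    # Undo the lifting steps in reverse order, then interleave.
    even = a - d / 2.0
    odd = d + even
    out = np.empty(2 * a.size)
    out[0::2], out[1::2] = even, odd
    return out
```

Each lifting step is trivially invertible (just subtract what was added), which is why the scheme is fast and maps integers to integers in its rounded variants.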

  6. Wavelet-based adaptive thresholding method for image segmentation

    NASA Astrophysics Data System (ADS)

    Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl

    2001-05-01

    A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is a local thresholding method that adapts to its local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
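As a rough 1-D illustration of the synthesis-with-attenuated-details idea: reconstruct the signal with its detail coefficients scaled down, and use the resulting high-frequency-reduced copy as a per-sample threshold surface. The Haar filters and the attenuation/margin values here are assumptions for the sketch, not the paper's choices:

```python
import numpy as np

def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_idwt(a, d):
    out = np.empty(2 * a.size)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def local_threshold(x, levels=3, atten=0.2):
    # Attenuate the detail coefficients before synthesis: the
    # reconstruction is a high-frequency-reduced copy of x that tracks
    # the slowly varying background, usable as a per-sample threshold.
    a, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(atten * d)
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a
```

A sample is then segmented as object when it exceeds the threshold surface by a margin, e.g. `x > local_threshold(x) + margin`; because the surface follows the background, a sloped background no longer defeats the segmentation as it would with one global threshold.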

  7. Wavelet-domain de-noising of OCT images of human brain malignant glioma

    NASA Astrophysics Data System (ADS)

    Dolganova, I. N.; Aleksandrova, P. V.; Beshplav, S.-I. T.; Chernomyrdin, N. V.; Dubyanskaya, E. N.; Goryaynov, S. A.; Kurlov, V. N.; Reshetov, I. V.; Potapov, A. A.; Tuchin, V. V.; Zaytsev, K. I.

    2018-04-01

    We propose a wavelet-domain de-noising technique for imaging of human brain malignant glioma by optical coherence tomography (OCT). It involves OCT image decomposition using the direct fast wavelet transform, thresholding of the obtained wavelet spectrum, and an inverse fast wavelet transform for image reconstruction. By selecting both the wavelet basis and the thresholding procedure, we have found an optimal wavelet filter whose application improves differentiation of the considered brain tissue classes, i.e. malignant glioma and normal/intact tissue. Namely, it reduces the scattering noise in the OCT images while retaining the signal decrement for each tissue class. Therefore, the observed results reveal wavelet-domain de-noising as a promising tool for improved characterization of biological tissues using OCT.

  8. Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection

    NASA Astrophysics Data System (ADS)

    Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai

    2017-02-01

    Axles are an important part of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR); therefore, the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in axle ultrasonic defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum search process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods on real data. Statistical results on the amplitude and peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.

  9. ECG signal performance de-noising assessment based on threshold tuning of dual-tree wavelet transform.

    PubMed

    El B'charri, Oussama; Latif, Rachid; Elmansouri, Khalifa; Abenaou, Abdenbi; Jenkal, Wissam

    2017-02-07

    Since the electrocardiogram (ECG) signal has a low frequency and a weak amplitude, it is sensitive to miscellaneous mixed noises, which may reduce diagnostic accuracy and hinder the physician's correct decisions on patients. The dual-tree wavelet transform (DT-WT) is one of the most recent enhanced versions of the discrete wavelet transform. However, threshold tuning of this method for noise removal from the ECG signal had not been investigated before. In this work, we provide a comprehensive study of the impact of the choice of threshold algorithm, threshold value, and wavelet decomposition level on ECG signal de-noising performance. A set of simulations is performed on both synthetic and real ECG signals. First, the synthetic ECG signal is used to observe the algorithm's response. The evaluation on synthetic ECG signals corrupted by various types of noise showed that the modified unified threshold and the wavelet hyperbolic threshold de-noising methods perform better under realistic and colored noises. The tuned threshold is then used on real ECG signals from the MIT-BIH database. The results show that the proposed method achieves higher performance than the ordinary dual-tree wavelet transform for all kinds of noise removal from the ECG signal. The simulation results indicate that the algorithm is robust to all kinds of noise at varying degrees of input noise, providing a high-quality clean signal. Moreover, the algorithm is quite simple and can be used in real-time ECG monitoring.
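The dual-tree transform itself is beyond a short sketch, but the threshold-tuning loop such a study performs can be illustrated with a plain Haar DWT standing in for the DT-WT: sweep candidate threshold values on a synthetic signal whose clean reference is known, and keep the value that maximizes output SNR. All names and values below are illustrative:

```python
import numpy as np

def haar_denoise(x, t, levels=3):
    # Multi-level Haar DWT, soft-threshold every detail band with the
    # same candidate threshold t, then reconstruct.
    a, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2.0), (a[0::2] - a[1::2]) / np.sqrt(2.0)
        details.append(np.sign(d) * np.maximum(np.abs(d) - t, 0.0))
    for d in reversed(details):
        out = np.empty(2 * a.size)
        out[0::2], out[1::2] = (a + d) / np.sqrt(2.0), (a - d) / np.sqrt(2.0)
        a = out
    return a

def snr_db(ref, est):
    # Output SNR of an estimate against the known clean reference.
    ref, est = np.asarray(ref, dtype=float), np.asarray(est, dtype=float)
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum((ref - est) ** 2))

# Sweep candidate thresholds on a synthetic signal with known ground truth.
rng = np.random.default_rng(1)
n = 512
clean = np.sin(2.0 * np.pi * np.arange(n) / 64.0)   # stand-in for a clean ECG
noisy = clean + 0.2 * rng.normal(size=n)
best_t = max(np.linspace(0.0, 1.5, 16),
             key=lambda t: snr_db(clean, haar_denoise(noisy, t)))
```

The same loop extends naturally to tuning the decomposition level and the threshold rule jointly, which is essentially what the study does for the DT-WT.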

  10. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random-amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet at level lambda is r 2^(-lambda), where r is the display visual resolution in pixels/degree. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
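The level-to-frequency relation quoted above is direct to compute; for example, at r = 32 pixels/degree a level-2 wavelet sits at 8 cycles/degree:

```python
def wavelet_spatial_frequency(r, level):
    # f = r * 2**(-level): r is the display visual resolution in
    # pixels/degree, level is the wavelet level lambda; f is in
    # cycles/degree.
    return r * 2.0 ** (-level)
```

Each additional decomposition level halves the spatial frequency, which is why detection thresholds change so systematically across levels.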

  11. EEG Artifact Removal Using a Wavelet Neural Network

    NASA Technical Reports Server (NTRS)

    Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    In this paper we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks and the time/frequency properties of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that the WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.

  12. Wavelet analysis techniques applied to removing varying spectroscopic background in calibration model for pear sugar content

    NASA Astrophysics Data System (ADS)

    Liu, Yande; Ying, Yibin; Lu, Huishan; Fu, Xiaping

    2005-11-01

    A new method is proposed to eliminate varying background and noise simultaneously for multivariate calibration of Fourier transform near infrared (FT-NIR) spectral signals. An ideal spectrum signal prototype was constructed based on the FT-NIR spectrum of fruit sugar content measurement. The performances of wavelet-based threshold de-noising approaches with different combinations of wavelet base functions were compared. Three families of wavelet base functions (Daubechies, Symlets and Coiflets) were applied to evaluate the performance of the wavelet bases and threshold selection rules in a series of experiments. The experimental results show that the best de-noising performance is reached with the Daubechies 4 or Symlet 4 wavelet base function. Based on the optimized parameters, wavelet regression models for the sugar content of pear were also developed, resulting in a smaller prediction error than a traditional Partial Least Squares Regression (PLSR) model.

  13. Value-at-risk estimation with wavelet-based extreme value theory: Evidence from emerging markets

    NASA Astrophysics Data System (ADS)

    Cifter, Atilla

    2011-06-01

    This paper introduces wavelet-based extreme value theory (EVT) for univariate value-at-risk estimation. Wavelets and EVT are combined for volatility forecasting to estimate a hybrid model. In the first stage, wavelets are used as a threshold in the generalized Pareto distribution, and in the second stage, EVT is applied with the wavelet-based threshold. This new model is applied to two major emerging stock markets: the Istanbul Stock Exchange (ISE) and the Budapest Stock Exchange (BUX). The relative performance of wavelet-based EVT is benchmarked against the Riskmetrics-EWMA, ARMA-GARCH, generalized Pareto distribution, and conditional generalized Pareto distribution models. The empirical results show that wavelet-based extreme value theory increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. The superior forecasting performance of the wavelet-based EVT model is also consistent with Basel II requirements, and this new model can be used by financial institutions as well.

  14. Multi-threshold de-noising of electrical imaging logging data based on the wavelet packet transform

    NASA Astrophysics Data System (ADS)

    Xie, Fang; Xiao, Chengwen; Liu, Ruilin; Zhang, Lili

    2017-08-01

    A key problem in effectiveness evaluation for fractured-vuggy carbonatite reservoirs is how to accurately extract fracture and vug information from electrical imaging logging data. Drill-bit vibration during drilling results in rugged borehole-wall surfaces and thus conductivity fluctuations in electrical imaging logging data. These conductivity fluctuations (formation background noise) directly affect fracture/vug information extraction and reservoir effectiveness evaluation. We present a multi-threshold de-noising method based on the wavelet packet transform to eliminate the influence of rugged borehole walls. The noise is present as fluctuations in button-electrode conductivity curves and as pockmarked responses in electrical imaging logging static images. The noise has responses at various scales and frequency ranges and has low conductivity compared with fractures or vugs. Our de-noising method decomposes the data into coefficients with a wavelet packet transform on a quadratic spline basis, then shrinks high-frequency wavelet packet coefficients at different resolutions with a minimax threshold and hard-threshold function, and finally reconstructs the thresholded coefficients. We use electrical imaging logging data collected from a fractured-vuggy Ordovician carbonatite reservoir in the Tarim Basin to verify the validity of the multi-threshold de-noising method. Segmentation results and extracted parameters are shown as well to prove the effectiveness of the de-noising procedure.

  15. Passive microrheology of soft materials with atomic force microscopy: A wavelet-based spectral analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Torres, C.; Streppa, L.; Arneodo, A.

    2016-01-18

    Compared to active microrheology, where a known force or modulation is periodically imposed on a soft material, passive microrheology relies on the spectral analysis of the spontaneous motion of tracers inherent or external to the material. Passive microrheology studies of soft or living materials with atomic force microscopy (AFM) cantilever tips are rather rare because, in the spectral densities, the rheological response of the materials is hardly distinguishable from other sources of random or periodic perturbations. To circumvent this difficulty, we propose here a wavelet-based decomposition of AFM cantilever tip fluctuations, and we show that when applying this multi-scale method to soft polymer layers and to living myoblasts, the structural damping exponents of these soft materials can be retrieved.

  16. Cloud-scale genomic signals processing classification analysis for gene expression microarray data.

    PubMed

    Harvey, Benjamin; Soo-Yeon Ji

    2014-01-01

    As microarray data available to scientists continues to increase in size and complexity, it has become overwhelmingly important to find multiple ways to bring inference through analysis of DNA/mRNA sequence data that is useful to scientists. Though there have been many attempts to elucidate the issue of bringing forth biological inference by means of wavelet preprocessing and classification, there has not been a research effort that focuses on cloud-scale classification analysis of microarray data using wavelet thresholding in a cloud environment to identify significantly expressed features. This paper proposes a novel methodology that uses wavelet-based denoising to initialize a threshold for the determination of significantly expressed genes for classification. Additionally, this research was implemented within a cloud-based distributed processing environment. Cloud computing and wavelet thresholding were used for the classification of 14 tumor classes from the Global Cancer Map (GCM). The results proved to be more accurate than using a predefined p-value for differential expression classification. This novel methodology analyzed wavelet-based threshold features of gene expression in a cloud environment, furthermore classifying the expression of samples by analyzing gene patterns, which inform us of biological processes. Moreover, it enables researchers to face the present and forthcoming challenges that may arise in the analysis of large microarray datasets in functional genomics.

  17. Cloud-Scale Genomic Signals Processing for Robust Large-Scale Cancer Genomic Microarray Data Analysis.

    PubMed

    Harvey, Benjamin Simeon; Ji, Soo-Yeon

    2017-01-01

    As microarray data available to scientists continues to increase in size and complexity, it has become overwhelmingly important to find multiple ways to bring forth oncological inference to the bioinformatics community through the analysis of large-scale cancer genomic (LSCG) DNA and mRNA microarray data that is useful to scientists. Though there have been many attempts to elucidate the issue of bringing forth biological interpretation by means of wavelet preprocessing and classification, there has not been a research effort that focuses on a cloud-scale distributed parallel (CSDP) separable 1-D wavelet decomposition technique for denoising through differential expression thresholding and classification of LSCG microarray data. This research presents a novel methodology that utilizes a CSDP separable 1-D method for wavelet-based transformation in order to initialize a threshold which will retain significantly expressed genes through the denoising process for robust classification of cancer patients. Additionally, the overall study was implemented and encompassed within CSDP environment. The utilization of cloud computing and wavelet-based thresholding for denoising was used for the classification of samples within the Global Cancer Map, Cancer Cell Line Encyclopedia, and The Cancer Genome Atlas. The results proved that separable 1-D parallel distributed wavelet denoising in the cloud and differential expression thresholding increased the computational performance and enabled the generation of higher quality LSCG microarray datasets, which led to more accurate classification results.

  18. Denoising solar radiation data using coiflet wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karim, Samsul Ariffin Abdul, E-mail: samsul-ariffin@petronas.com.my; Janier, Josefina B., E-mail: josefinajanier@petronas.com.my; Muthuvalu, Mohana Sundaram, E-mail: mohana.muthuvalu@petronas.com.my

    Signal denoising and smoothing play an important role in processing a given signal, whether from experiment or from data collection through observation. Collected data are usually a mixture of true data and some error or noise. This noise might come from the apparatus used to measure or collect the data, or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because the received solar radiation data fluctuate over time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used for developing a mathematical model. In order to apply denoising using the wavelet transform (WT), threshold values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. The numerical results show clearly that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.
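The abstract benchmarks against the "global thresholding value" without giving formulas; for reference, the standard global (universal) threshold with a robust MAD noise estimate can be sketched as:

```python
import numpy as np

def universal_threshold(d):
    # Donoho-Johnstone universal threshold t = sigma * sqrt(2 ln N),
    # with sigma estimated robustly from the detail coefficients d via
    # the median absolute deviation (MAD / 0.6745 for Gaussian noise).
    d = np.asarray(d, dtype=float)
    sigma = np.median(np.abs(d)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(d.size))
```

Any proposed alternative, such as the paper's coiflet-based approach, is then judged by whether it preserves more of the true signal than this baseline at comparable noise suppression.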

  19. Analysis of the Biceps Brachii Muscle by Varying the Arm Movement Level and Load Resistance Band

    PubMed Central

    Abdullah, Shahrum Shah; Jali, Mohd Hafiz

    2017-01-01

    Biceps brachii muscle illness is one of the common physical disabilities that requires rehabilitation exercises in order to build up the strength of the muscle after surgery. It is also important to monitor the condition of the muscle during the rehabilitation exercise through electromyography (EMG) signals. The purpose of this study was to analyse and investigate the selection of the best mother wavelet (MWT) function and depth of the decomposition level in the wavelet denoising EMG signals through the discrete wavelet transform (DWT) method at each decomposition level. In this experimental work, six healthy subjects comprised of males and females (26 ± 3.0 years and BMI of 22 ± 2.0) were selected as a reference for persons with the illness. The experiment was conducted for three sets of resistance band loads, namely, 5 kg, 9 kg, and 16 kg, as a force during the biceps brachii muscle contraction. Each subject was required to perform three levels of the arm angle positions (30°, 90°, and 150°) for each set of resistance band load. The experimental results showed that the Daubechies5 (db5) was the most appropriate DWT method together with a 6-level decomposition with a soft heursure threshold for the biceps brachii EMG signal analysis. PMID:29138687

  20. Analysis of the Biceps Brachii Muscle by Varying the Arm Movement Level and Load Resistance Band.

    PubMed

    Burhan, Nuradebah; Kasno, Mohammad 'Afif; Ghazali, Rozaimi; Said, Md Radzai; Abdullah, Shahrum Shah; Jali, Mohd Hafiz

    2017-01-01

    Biceps brachii muscle illness is one of the common physical disabilities that requires rehabilitation exercises in order to build up the strength of the muscle after surgery. It is also important to monitor the condition of the muscle during the rehabilitation exercise through electromyography (EMG) signals. The purpose of this study was to analyse and investigate the selection of the best mother wavelet (MWT) function and depth of the decomposition level in the wavelet denoising EMG signals through the discrete wavelet transform (DWT) method at each decomposition level. In this experimental work, six healthy subjects comprised of males and females (26 ± 3.0 years and BMI of 22 ± 2.0) were selected as a reference for persons with the illness. The experiment was conducted for three sets of resistance band loads, namely, 5 kg, 9 kg, and 16 kg, as a force during the biceps brachii muscle contraction. Each subject was required to perform three levels of the arm angle positions (30°, 90°, and 150°) for each set of resistance band load. The experimental results showed that the Daubechies5 (db5) was the most appropriate DWT method together with a 6-level decomposition with a soft heursure threshold for the biceps brachii EMG signal analysis.

  1. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.

  2. MLESAC Based Localization of Needle Insertion Using 2D Ultrasound Images

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Gao, Dedong; Wang, Shan; Zhanwen, A.

    2018-04-01

    In 2D ultrasound images of ultrasound-guided percutaneous needle insertion, it is difficult to determine the positions of the needle axis and tip because of artifacts and other noise. In this work the speckle is regarded as the noise of the ultrasound image, and a novel algorithm is presented to detect the needle in a 2D ultrasound image. First, the wavelet soft-thresholding technique based on the BayesShrink rule is used to denoise the speckle in the ultrasound image. Second, Otsu's thresholding method and morphological operations are added to pre-process the ultrasound image. Finally, the needle is localized in the 2D ultrasound image with the maximum likelihood estimation sample consensus (MLESAC) algorithm. The experimental results show that the proposed algorithm provides valid estimates of the positions of the needle axis and tip in ultrasound images. This work is expected to be useful in path planning and robot-assisted needle insertion procedures.
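BayesShrink picks a per-sub-band threshold t = sigma_n^2 / sigma_x from the estimated noise and signal standard deviations. A minimal sketch of the generic rule (not the authors' exact implementation) is:

```python
import numpy as np

def soft(x, t):
    # Soft thresholding: shrink coefficients toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bayes_shrink_threshold(d, sigma_n):
    # Generic BayesShrink: t = sigma_n**2 / sigma_x, where sigma_n is the
    # noise std (typically MAD of the finest sub-band / 0.6745) and
    # sigma_x**2 = max(E[d**2] - sigma_n**2, 0) estimates signal power.
    d = np.asarray(d, dtype=float)
    sigma_x2 = max(float(np.mean(d * d)) - sigma_n ** 2, 0.0)
    if sigma_x2 == 0.0:
        return float(np.max(np.abs(d)))   # band looks like pure noise: zero it
    return sigma_n ** 2 / np.sqrt(sigma_x2)
```

The rule is self-balancing: a sub-band that is mostly noise gets a large threshold and is nearly wiped out, while a sub-band with strong signal content gets a small threshold and passes almost unchanged.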

  3. Chaotic Signal Denoising Based on Hierarchical Threshold Synchrosqueezed Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wang, Wen-Bo; Jing, Yun-yu; Zhao, Yan-chao; Zhang, Lian-Hua; Wang, Xiang-Li

    2017-12-01

    To overcome the shortcomings of the single-threshold synchrosqueezed wavelet transform (SWT) denoising method, an adaptive hierarchical-threshold SWT chaotic signal denoising method is proposed. First, a new SWT threshold function is constructed based on Stein's unbiased risk estimate; it is twice continuously differentiable. Then, using the new threshold function, a thresholding process based on the minimum mean square error is implemented, and the optimal estimate of each layer's threshold in SWT chaotic denoising is obtained. Experimental results on a simulated chaotic signal and on measured sunspot signals show that the proposed method filters the noise of the chaotic signal well and recovers the intrinsic chaotic characteristics of the original signal. Compared with the EEMD denoising method and the single-threshold SWT denoising method, the proposed method obtains better denoising results for chaotic signals.
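The paper's threshold function is not reproduced here; as an illustration of what a smooth shrinkage rule looks like, the function below behaves like soft thresholding for large coefficients yet has continuous derivatives of all orders (stronger than the twice-continuous differentiability the paper requires):

```python
import numpy as np

def smooth_soft(x, t):
    # Smooth shrinkage rule: behaves like x**3 / (3 t**2) near zero
    # (strong shrinkage) and like x - t*sign(x) for |x| >> t (the
    # soft-threshold asymptote), with C-infinity smoothness throughout.
    x = np.asarray(x, dtype=float)
    return x - t * np.tanh(x / t)
```

Smoothness matters because the threshold value is optimized by gradient-style criteria (minimum MSE via SURE), which a hard or plain soft threshold frustrates with its kinks.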

  4. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    PubMed Central

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of a series with a background energy distribution established from a Monte-Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influences of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or by WTD; but such a series would show purely random rather than autocorrelated character, so de-noising is no longer needed. PMID:25360533
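The Monte-Carlo background idea can be sketched as follows: compute per-level detail energies for many white-noise series, take a percentile as the background, and treat levels of the observed series whose energy exceeds that background as deterministic. The Haar filters, trial count, and percentile are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def haar_detail_energy(x, levels):
    # Energy of the Haar detail coefficients at each decomposition level.
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2.0), (a[0::2] - a[1::2]) / np.sqrt(2.0)
        energies.append(float(np.sum(d * d)))
    return np.array(energies)

def background_energy(n, levels, trials=200, q=95.0, rng=None):
    # Monte-Carlo background: the q-th percentile of the per-level detail
    # energies of `trials` standard-white-noise series of length n.
    if rng is None:
        rng = np.random.default_rng(0)
    e = np.array([haar_detail_energy(rng.normal(size=n), levels)
                  for _ in range(trials)])
    return np.percentile(e, q, axis=0)
```

Because the background is a distribution rather than a point value, the percentile q doubles as a confidence level, which is how the method attaches uncertainty to its de-noising result.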

  5. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edge of the sub-aperture system is more complex than that in a traditional optical imaging system. With the steep slopes present on large-aperture optical components, interference fringes may be quite dense in interference imaging, and deep phase gradients may cause a loss of phase information. Therefore, an efficient edge detection method is urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Based on its multi-scale transform properties, edge regions are detected with high precision at small scales, while with increasing scale noise is progressively reduced, so the transform has a certain noise-suppression effect. Furthermore, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, we compute the local modulus maxima along gradient directions. Since these maxima still contain noise, the adaptive threshold method is used to select among them: a point whose modulus exceeds the threshold value is a boundary point. Finally, erosion and dilation are applied to the resulting image to obtain consecutive image boundaries.

  6. Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.

    PubMed

    Zhou, Weidong; Gotman, Jean

    2004-01-01

    In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on higher-order statistics and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics explicitly, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that the electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.

  7. Automated Classification and Removal of EEG Artifacts With SVM and Wavelet-ICA.

    PubMed

    Sai, Chong Yeh; Mokhtar, Norrima; Arof, Hamzah; Cumming, Paul; Iwahashi, Masahiro

    2018-05-01

    Brain electrical activity recordings by electroencephalography (EEG) are often contaminated with signal artifacts. Procedures for automated removal of EEG artifacts are frequently sought for clinical diagnostics and brain-computer interface applications. In recent years, a combination of independent component analysis (ICA) and the discrete wavelet transform has been introduced as a standard technique for EEG artifact removal. However, in performing the wavelet-ICA procedure, visual inspection or arbitrary thresholding may be required for identifying artifactual components in the EEG signal. We now propose a novel approach for identifying artifactual components separated by wavelet-ICA using a pretrained support vector machine (SVM). Our method presents a robust and extendable system that enables fully automated identification and removal of artifacts from EEG signals, without applying any arbitrary thresholding. Using test data contaminated by eye blink artifacts, we show that our method performed better in identifying artifactual components than did existing thresholding methods. Furthermore, wavelet-ICA in conjunction with SVM successfully removed target artifacts, while largely retaining the EEG source signals of interest. We propose a set of features, including kurtosis, variance, Shannon's entropy, and range of amplitude, as training and test data for the SVM to identify eye blink artifacts in EEG signals. This combinatorial method is also extendable to accommodate multiple types of artifacts present in multichannel EEG. We envision future research to explore other descriptive features corresponding to other types of artifactual components.
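The four features named above can be computed directly from a component's time series. A minimal pure-Python sketch follows; the 10-bin histogram used for the entropy estimate is an illustrative assumption, as the abstract does not specify the binning.

```python
import math

def artifact_features(x):
    """Feature vector for flagging artifactual ICs: excess kurtosis,
    variance, Shannon entropy (histogram-based), amplitude range.
    The feature set is from the abstract; the binning is assumed."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    sd = math.sqrt(var)
    if sd == 0:  # guard against a degenerate constant signal
        sd = 1.0
    kurt = sum(((v - mu) / sd) ** 4 for v in x) / n - 3.0
    lo, hi = min(x), max(x)
    width = (hi - lo) / 10 or 1.0
    counts = [0] * 10
    for v in x:
        counts[min(int((v - lo) / width), 9)] += 1
    ent = -sum(c / n * math.log2(c / n) for c in counts if c)
    return [kurt, var, ent, hi - lo]
```

An eye-blink component is spiky, so its excess kurtosis is strongly positive, which is why kurtosis is a useful discriminator for the SVM.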

  8. Defect Detection in Textures through the Use of Entropy as a Means for Automatically Selecting the Wavelet Decomposition Level.

    PubMed

    Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan

    2016-07-27

    This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme and applicable to a wide variety of structural and statistical textures. Two main features are presented. The first is an original use of the normalized absolute function value (NABS), calculated from the wavelet coefficients at different decomposition levels, to identify textures in which the defect can be isolated by eliminating the texture pattern at the first decomposition level. The second is the use of Shannon's entropy, calculated over the detail subimages, for automatic selection of the band for image reconstruction; unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, this yields a lower decomposition level, avoiding excessive degradation of the image and allowing more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting an appropriate thresholding method is important for optimum defect-detection performance; consequently, different thresholding algorithms are proposed depending on the type of texture.
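The entropy criterion over detail subimages can be sketched as follows. Note the selection direction (minimum vs. maximum entropy) is not stated in this abstract; the sketch below assumes the band with minimum entropy, i.e. the one whose energy is most concentrated, is chosen.

```python
import math

def shannon_entropy(coeffs):
    """Shannon entropy of the normalized energy distribution of a
    (flattened) detail subimage's wavelet coefficients."""
    energies = [c * c for c in coeffs]
    total = sum(energies) or 1.0
    p = [e / total for e in energies]
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def select_level(detail_bands):
    """Pick the decomposition level whose detail band has minimum
    entropy (assumed criterion: energy most concentrated)."""
    return min(range(len(detail_bands)),
               key=lambda i: shannon_entropy(detail_bands[i]))
```

A uniform band (texture everywhere) has maximal entropy, while a band where a defect dominates a few coefficients has low entropy, which motivates this kind of criterion.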

  9. [A mobile sensor for remote detection of natural gas leakage].

    PubMed

    Zhang, Shuai; Liu, Wen-qing; Zhang, Yu-jun; Kan, Rui-feng; Ruan, Jun; Wang, Li-ming; Yu, Dian-qiang; Dong, Jin-ting; Han, Xiao-lei; Cui, Yi-ben; Liu, Jian-guo

    2012-02-01

    The detection of natural gas pipeline leaks is a significant issue for personal safety, environmental protection and the security of state property. Leak detection is difficult, however, because pipelines cover many areas and operate under varied and complicated environmental conditions. A mobile sensor for remote detection of natural gas leakage based on scanning wavelength differential absorption spectroscopy (SWDAS) is introduced. An improved soft-threshold wavelet denoising method is proposed based on an analysis of the characteristics of the reflection spectrum; the results showed that the signal-to-noise ratio (SNR) was increased threefold. At a detected photocurrent of 530 nA, the minimum remote sensitivity is 80 ppm x m. A widely deployed SWDAS can perform quantitative remote sensing of natural gas leaks and locate the leak source precisely in a faster, safer and more intelligent way.

  10. Image-adaptive and robust digital wavelet-domain watermarking for images

    NASA Astrophysics Data System (ADS)

    Zhao, Yi; Zhang, Liping

    2018-03-01

    We propose a new frequency-domain, wavelet-based watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image, and odd-even quantization for embedding and extracting the watermark. Because many complementary watermarks need to be hidden, the designed watermark image is image-adaptive. The meaningful and complementary watermark images are embedded into the original (host) image by odd-even quantization, modifying coefficients selected from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. Tests show good robustness against the best-known attacks, such as noise addition, image compression, median filtering and clipping, as well as geometric transforms. Further research may improve performance by refining the JND thresholds.

  11. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

    A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) using a variable-length code based on run-length encoding to compress the significance map, and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals was investigated by compressing and decompressing test signals; the proposed algorithm was compared with direct and other wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117, with a percent root-mean-square difference as low as 1.08%.
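The significance-map pipeline described above can be sketched in a few lines. The energy-packing-efficiency threshold below is one straightforward realization (keep the largest coefficients until a target energy fraction is retained); the paper's exact construction may differ.

```python
def epe_threshold(coeffs, target=0.99):
    """Smallest magnitude retained when keeping coefficients, largest
    first, until `target` of the total energy is packed (assumed
    realization of the energy-packing-efficiency threshold)."""
    total = sum(c * c for c in coeffs)
    acc = 0.0
    for c in sorted(coeffs, key=abs, reverse=True):
        acc += c * c
        if acc >= target * total:
            return abs(c)
    return 0.0

def significance_map(coeffs, threshold):
    """Binary map: 1 where a coefficient is significant, else 0."""
    return [1 if abs(c) >= threshold else 0 for c in coeffs]

def run_length_encode(bits):
    """Run-length encode the significance map as (bit, count) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(b, n) for b, n in runs]
```

The long zero-runs of insignificant coefficients are what make run-length coding of the map effective.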

  12. Optimal wavelet denoising for smart biomonitor systems

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.

  13. Denoising in digital speckle pattern interferometry using wave atoms.

    PubMed

    Federico, Alejandro; Kaufmann, Guillermo H

    2007-05-15

    We present an effective method for speckle noise removal in digital speckle pattern interferometry, which is based on a wave-atom thresholding technique. Wave atoms are a variant of 2D wavelet packets with a parabolic scaling relation and improve the sparse representation of fringe patterns when compared with traditional expansions. The performance of the denoising method is analyzed by using computer-simulated fringes, and the results are compared with those produced by wavelet and curvelet thresholding techniques. An application of the proposed method to reduce speckle noise in experimental data is also presented.

  14. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    PubMed

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis; in particular, the method can segment blood cells. To do so, we combine the wavelet transform with morphological operations, and the wavelet thresholding technique is used to eliminate noise and prepare the image for suitable segmentation. In the wavelet denoising step we determine the wavelet that yields a segmentation with the largest area inside the cell. We study different wavelet families and conclude that the db1 wavelet is the best, and that it can serve for future work on blood pathologies. The proposed method generates good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on selected blood cell images.

  15. Speckle noise reduction in quantitative optical metrology techniques by application of the discrete wavelet transformation

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2002-06-01

    Effective suppression of speckle noise content in interferometric data images can help in improving the accuracy and resolution of the results obtained with interferometric optical metrology techniques. In this paper, novel speckle noise reduction algorithms based on the discrete wavelet transformation are presented. The algorithms proceed by: (a) estimating the noise level contained in the interferograms of interest, (b) selecting wavelet families, (c) applying the wavelet transformation using the selected families, (d) wavelet thresholding, and (e) applying the inverse wavelet transformation, producing denoised interferograms. The algorithms are applied at the different stages of the processing procedures used to generate quantitative speckle correlation interferometry data in fiber-optic based opto-electronic holography (FOBOEH) techniques, allowing identification of optimal processing conditions. It is shown that the wavelet algorithms are effective for speckle noise reduction while preserving image features that other algorithms tend to degrade.

  16. Design, development and testing of a low-cost sEMG system and its use in recording muscle activity in human gait.

    PubMed

    Supuk, Tamara Grujic; Skelin, Ana Kuzmanic; Cic, Maja

    2014-05-07

    Surface electromyography (sEMG) is an important measurement technique used in biomechanical, rehabilitation and sport environments. In this article the design, development and testing of a low-cost wearable sEMG system are described. The hardware architecture consists of a two-cascade small-sized bioamplifier with a total gain of 2,000 and band-pass of 3 to 500 Hz. The sampling frequency of the system is 1,000 Hz. Since real measured EMG signals are usually corrupted by various types of noises (motion artifacts, white noise and electromagnetic noise present at 50 Hz and higher harmonics), we have tested several denoising techniques, both on artificial and measured EMG signals. Results showed that a wavelet-based technique implementing Daubechies5 wavelet and soft sqtwolog thresholding is the most appropriate for EMG signals denoising. To test the system performance, EMG activities of six dominant muscles of ten healthy subjects during gait were measured (gluteus maximus, biceps femoris, sartorius, rectus femoris, tibialis anterior and medial gastrocnemius). The obtained EMG envelopes presented against the duration of gait cycle were compared favourably with the EMG data available in the literature, suggesting that the proposed system is suitable for a wide range of applications in biomechanics.
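The "soft sqtwolog" rule named above is the universal threshold sigma * sqrt(2 ln N). A minimal sketch follows; the MAD-based noise estimate is the standard Donoho-Johnstone recipe and is an assumption here, since the abstract does not state how sigma was estimated.

```python
import math
import statistics

def sqtwolog_threshold(detail_coeffs):
    """Universal ('sqtwolog') threshold sigma * sqrt(2 ln N), with the
    noise level sigma estimated from the median absolute value of the
    finest-scale detail coefficients (MAD / 0.6745)."""
    n = len(detail_coeffs)
    sigma = statistics.median([abs(c) for c in detail_coeffs]) / 0.6745
    return sigma * math.sqrt(2 * math.log(n))
```

The resulting value is then applied with the soft rule (shrink every detail coefficient toward zero by the threshold), as in the denoising comparison the authors ran.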

  17. A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient electromagnetic signal

    NASA Astrophysics Data System (ADS)

    Ji, Yanju; Li, Dongsheng; Yu, Mingmei; Wang, Yuan; Wu, Qiong; Lin, Jun

    2016-05-01

    The ground electrical source airborne transient electromagnetic system (GREATEM), mounted on an unmanned aircraft, offers considerable prospecting depth, lateral resolution and detection efficiency, and has in recent years become an important technique for rapid resource exploration. However, GREATEM data are extremely vulnerable to stationary white noise and non-stationary electromagnetic noise (sferics, aircraft engine noise and other man-made electromagnetic noise). These noises degrade the imaging quality for data interpretation. Based on the characteristics of the GREATEM data and the major noise sources, we propose a de-noising algorithm combining the wavelet threshold method with exponential adaptive window-width fitting. First, the white noise in the measured data is filtered using the wavelet threshold method. The data are then segmented using windows whose step lengths follow even logarithmic intervals. Within each window, data polluted by electromagnetic noise are identified by an energy-detection criterion, and the attenuation characteristics of the data slope are extracted. Finally, an exponential fitting algorithm fits the attenuation curve of each window, and the data polluted by non-stationary electromagnetic noise are replaced with the fitted results, effectively removing the non-stationary electromagnetic noise. The proposed algorithm is verified on synthetic and real GREATEM signals. The results show that both stationary white noise and non-stationary electromagnetic noise in the GREATEM signal can be effectively filtered using the wavelet threshold-exponential adaptive window-width-fitting algorithm, which enhances the imaging quality.
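The per-window exponential fit can be sketched as a least-squares line fit in log space, which is one standard way to fit a decay y = A * exp(b * t); the paper's exact fitting procedure may differ, and positive decaying samples are assumed.

```python
import math

def fit_exponential(t, y):
    """Fit y = A * exp(b * t) by linear regression on ln(y).
    Assumes strictly positive samples, as in late-time TEM decays."""
    n = len(t)
    ly = [math.log(v) for v in y]
    st, sl = sum(t), sum(ly)
    stt = sum(ti * ti for ti in t)
    stl = sum(ti * li for ti, li in zip(t, ly))
    b = (n * stl - st * sl) / (n * stt - st * st)
    ln_a = (sl - b * st) / n
    return math.exp(ln_a), b
```

Windows flagged as noise-polluted by the energy criterion would then be replaced by A * exp(b * t) evaluated over the window.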

  18. Rejection of the maternal electrocardiogram in the electrohysterogram signal.

    PubMed

    Leman, H; Marque, C

    2000-08-01

    The electrohysterogram (EHG) signal is mainly corrupted by the mother's electrocardiogram (ECG), which remains present despite analog filtering during acquisition. Wavelets are a powerful denoising tool and have already proved their efficiency on the EHG. In this paper, we propose a new method that employs the redundant wavelet packet transform. We first study wavelet packet coefficient histograms and propose an algorithm to automatically detect the histogram mode number. Using a new criterion, we compute a best basis adapted to the denoising. After EHG wavelet packet coefficient thresholding in the selected basis, the inverse transform is applied. The ECG seems to be very efficiently removed.

  19. RBF neural network prediction on weak electrical signals in Aloe vera var. chinensis

    NASA Astrophysics Data System (ADS)

    Wang, Lanzhou; Zhao, Jiayin; Wang, Miao

    2008-10-01

    A Gaussian radial basis function (RBF) neural network is set up to forecast weak electrical signals in Aloe vera var. chinensis, using the wavelet soft-threshold denoised signal as the time series and a delayed input window of length 50. The signal had a maximum amplitude of 310.45 μV, a minimum of -75.15 μV, a mean of -2.69 μV and a frequency below 1.5 Hz. The electrical signal in Aloe vera var. chinensis is thus a weak, unstable, low-frequency signal. The results show that it is feasible to forecast the timing of plant electrical signals with the RBF network. The forecast data can be used as references for an intelligent auto-control system based on the adaptive characteristics of plants, to save energy in agricultural production in plastic tunnels or greenhouses.

  20. Brain tumour classification and abnormality detection using neuro-fuzzy technique and Otsu thresholding.

    PubMed

    Renjith, Arokia; Manjula, P; Mohan Kumar, P

    2015-01-01

    Brain tumours are one of the main causes of increased mortality among children and adults. This paper proposes an improved method based on Magnetic Resonance Imaging (MRI) brain image classification and image segmentation. Automated classification is motivated by the need for high accuracy when dealing with a human life. Detection of brain tumours is a challenging problem, due to the high diversity in tumour appearance and ambiguous tumour boundaries. MRI images are chosen for detecting brain tumours because they are well suited to soft tissue assessment. First, image pre-processing is used to enhance image quality. Second, dual-tree complex wavelet transform multi-scale decomposition is used to analyse the texture of the image. Feature extraction then derives features from the image using the gray-level co-occurrence matrix (GLCM). A neuro-fuzzy technique classifies the stage of the brain tumour as benign, malignant or normal based on the texture features. Finally, the tumour location is detected using Otsu thresholding. Classifier performance is evaluated by classification accuracy, and the simulation results show that the proposed classifier provides better accuracy than the previous method.
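Otsu thresholding, used here for the final tumour localization, picks the gray level that maximizes the between-class variance of the image histogram. A minimal sketch for 8-bit gray levels:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: return the gray level maximizing between-class
    variance of the histogram of `pixels` (integers in [0, levels))."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]                  # weight of the background class
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0                # weight of the foreground class
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels above the returned level would be labelled as the (brighter) tumour region; in a bimodal histogram the threshold falls between the two modes.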

  1. A wavelet-based adaptive fusion algorithm of infrared polarization imaging

    NASA Astrophysics Data System (ADS)

    Yang, Wei; Gu, Guohua; Chen, Qian; Zeng, Haifang

    2011-08-01

    The purpose of infrared polarization imaging is to highlight man-made targets against a complex natural background. Because infrared polarization images can clearly distinguish target from background through their different features, this paper presents a wavelet-based infrared polarization image fusion algorithm. The method mainly processes the high-frequency part of the signal; for the low-frequency part, the conventional weighted-average method is applied. The high-frequency part is processed as follows: first, the high-frequency information of the source images is extracted by wavelet transform; then the signal strength over a 3x3 window is calculated, and the ratio of regional signal intensities between the source images is used as a matching measure. The extraction method and decision mode for the details are determined by a decision-making module, and the fusion quality is closely related to the threshold setting of this module. In place of the commonly used experimental approach, a quadratic interpolation optimization algorithm is proposed to obtain the threshold: the endpoints and midpoint of the threshold search interval are taken as initial interpolation nodes, the minimum of the quadratic interpolation function is computed, and comparing these minima yields the best threshold. A series of image quality evaluations shows that the method improves the fusion result, not only for individual images but also across a large set of images.
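The quadratic interpolation step fits a parabola through three sampled (threshold, quality) points and jumps to its vertex. A minimal sketch of that vertex formula (the objective being minimized and the node update policy are the paper's; only this one step is shown):

```python
def quadratic_min(x0, x1, x2, f0, f1, f2):
    """Vertex of the parabola through (x0,f0), (x1,f1), (x2,f2):
    the candidate minimizer in successive quadratic interpolation."""
    num = (x1 - x0) ** 2 * (f1 - f2) - (x1 - x2) ** 2 * (f1 - f0)
    den = (x1 - x0) * (f1 - f2) - (x1 - x2) * (f1 - f0)
    return x1 - 0.5 * num / den
```

Starting from the interval endpoints and midpoint as nodes, each iteration replaces the worst node with the vertex, converging on the best fusion threshold without an exhaustive search.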

  2. Detection and classification of Breast Cancer in Wavelet Sub-bands of Fractal Segmented Cancerous Zones.

    PubMed

    Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri

    2015-01-01

    Recent studies applying the wavelet transform and fractal modeling to mammograms for the detection of cancerous tissues indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It has been shown that fractal modeling applied to a given image can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, and the fractal dimension of each windowed section is identified using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is possible to arrive at enhanced features for the detection of cancerous zones. In this paper, we attempt to combine the advantages of fractals and wavelets in a new algorithm named F1W2. The original image is first segmented into appropriate fractal boxes and the fractal dimension of each windowed section is extracted; a maximum-level threshold on the fractal-dimension matrix then selects the best-segmented boxes. Next, the candidate cancerous zones are decomposed using the standard orthogonal wavelet transform with the db2 wavelet at three resolution levels, and after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is used to detect breast cancer regions by applying an appropriate threshold. Our simulations indicate detection accuracies of 90.9% for masses and 88.99% for microcalcifications with the F1W2 method. For classification of the detected microcalcifications into benign and malignant cases, eight features are identified and used in a radial basis function neural network; our simulation results indicate a classification accuracy of 92% with the F1W2 method.
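The 2-D box-counting estimate of fractal dimension can be sketched as counting occupied boxes at several box sizes and regressing log N(s) against log(1/s); the box sizes below are illustrative, and the paper's windowing details are omitted.

```python
import math

def box_count(image, box):
    """Number of box x box cells containing at least one foreground pixel."""
    h, w = len(image), len(image[0])
    count = 0
    for i in range(0, h, box):
        for j in range(0, w, box):
            if any(image[x][y]
                   for x in range(i, min(i + box, h))
                   for y in range(j, min(j + box, w))):
                count += 1
    return count

def fractal_dimension(image, sizes=(1, 2, 4)):
    """Slope of log N(s) vs log(1/s): the box-counting dimension."""
    pts = [(math.log(1.0 / s), math.log(box_count(image, s))) for s in sizes]
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

A completely filled patch gives dimension 2, while sparse, irregular structures score lower; thresholding the resulting fractal-dimension matrix is what selects the candidate boxes.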

  3. Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets.

    PubMed

    Cheremkhin, Pavel A; Kurbatova, Ekaterina A

    2018-01-01

    Compression of digital holograms can significantly help with the storage of objects and data in 2D and 3D form, their transmission, and their reconstruction. Wavelet-based methods compress standard images by factors of up to 20-50 with minimal loss of quality. Applied directly to digital holograms, however, wavelets do not achieve high compression; additional preprocessing and postprocessing can nevertheless afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms to the compression of off-axis digital holograms is considered. A combined technique is studied, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstructing images from the compressed holograms are performed, together with a comparative analysis of the applicability of various wavelets and of methods for additional compression of wavelet coefficients, from which optimum compression parameters can be estimated. The volume of holographic information was decreased by factors of up to 190.
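The "thresholding and quantization" stage applied to the wavelet coefficients can be sketched as a hard threshold followed by uniform quantization; the hard rule and the uniform step are illustrative assumptions, as the abstract does not fix either choice.

```python
def compress_coeffs(coeffs, threshold, step):
    """Hard-threshold, then uniformly quantize, wavelet coefficients:
    small coefficients go to zero, the rest to integer multiples of step."""
    return [round(c / step) if abs(c) >= threshold else 0 for c in coeffs]

def decompress_coeffs(q, step):
    """Dequantize back to approximate coefficient values."""
    return [v * step for v in q]
```

The zeroed coefficients and small integer codes are what an entropy coder then exploits to reach the high compression ratios reported.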

  4. SU-F-J-27: Segmentation of Prostate CBCT Images with Implanted Calypso Transponders Using Double Haar Wavelet Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Saleh, Z; Tang, X

    Purpose: Segmentation of prostate CBCT images is an essential step towards real-time adaptive radiotherapy. It is challenging for Calypso patients, as additional artifacts are generated by the beacon transponders. We herein propose a novel wavelet-based segmentation algorithm for the rectum, bladder, and prostate in CBCT images with implanted Calypso transponders. Methods: Five hypofractionated prostate patients with daily CBCT were studied. Each patient had three Calypso beacon transponders implanted and was set up and treated with the Calypso tracking system. Two sets of CBCT images from each patient were studied. The structures (rectum, bladder, and prostate) were contoured by a trained expert and served as ground truth. For a given CBCT, a moving-window Double Haar transformation is first applied to obtain the wavelet coefficients. Based on a user-defined point in the object of interest, cluster-based adaptive thresholding is applied to the low-frequency components of the wavelet coefficients, and Lee-filter-based adaptive thresholding is applied to the high-frequency components. Wavelet reconstruction is then applied to the thresholded coefficients, yielding a binary (segmented) image of the object of interest. DICE, sensitivity, inclusiveness and ΔV were used to evaluate the segmentation result. Results: Across all patients, the bladder had DICE, sensitivity, inclusiveness, and ΔV ranges of [0.81-0.95], [0.76-0.99], [0.83-0.94], and [0.02-0.21]. For the prostate, the ranges were [0.77-0.93], [0.84-0.97], [0.68-0.92], and [0.1-0.46]; for the rectum, [0.72-0.93], [0.57-0.99], [0.73-0.98], and [0.03-0.42]. Conclusion: The proposed algorithm was effective in segmenting prostate CBCT images in the presence of Calypso artifacts. However, it is not robust in two scenarios: 1) a rectum with a significant amount of gas; 2) a prostate with very low contrast. A model-based algorithm might improve the segmentation in these two scenarios.

  5. [Correlation analysis of hearing level and soft palate movement after palatoplasty].

    PubMed

    Lou, Qun; Ma, Xiaoran; Ma, Lian; Luo, Yi; Zhu, Hongping; Zhou, Zhibo

    2015-10-01

    To explore the relationship between hearing level and soft palate movement after palatoplasty, and to verify the importance of recovering soft palate movement function for improving middle ear function and reducing hearing loss, a total of 64 non-syndromic cleft palate patients were selected and lateral cephalometric radiographs were taken. The patients' hearing levels were evaluated by pure-tone hearing threshold examination. The study analyzed the correlation of the post-palatoplasty hearing threshold with the soft palate elevation angle and with the velopharyngeal closure rate. Kendall correlation analysis revealed a correlation coefficient of -0.339 between hearing threshold and soft palate elevation angle after palatoplasty (r = -0.339, P < 0.01), a negative correlation: the hearing threshold decreased as the soft palate elevation angle increased. The correlation coefficient between the hearing threshold and the velopharyngeal closure rate was -0.277 (r = -0.277, P < 0.01), also a negative correlation: the hearing threshold decreased as the velopharyngeal closure rate increased. The hearing threshold was thus correlated with both the soft palate elevation angle and the velopharyngeal closure rate. Soft palate movement and velopharyngeal closure function after palatoplasty both affect the patient's hearing level, with soft palate movement having the greater impact.
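The Kendall correlation used in this analysis counts concordant versus discordant pairs. A minimal sketch of tau-a (no tie correction; the paper's exact variant is not stated in the abstract):

```python
def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) pairs over all pairs.
    Ties contribute to neither count; no tie correction is applied."""
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

A value near -0.34, as reported for hearing threshold versus elevation angle, means moderately more discordant than concordant pairs: thresholds tend to fall as the angle rises.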

  6. Threshold factorization redux

    NASA Astrophysics Data System (ADS)

    Chay, Junegone; Kim, Chul

    2018-05-01

    We reanalyze the factorization theorems for the Drell-Yan process and for deep inelastic scattering near threshold, as constructed in the framework of the soft-collinear effective theory (SCET), from a new, consistent perspective. In order to formulate the factorization near threshold in SCET, we should include an additional degree of freedom with small energy, collinear to the beam direction. The corresponding collinear-soft mode is included to describe the parton distribution function (PDF) near threshold. The soft function is modified by subtracting the contribution of the collinear-soft modes in order to avoid double counting on the overlap region. As a result, the proper soft function becomes infrared finite, and all the factorized parts are free of rapidity divergence. Furthermore, the separation of the relevant scales in each factorized part becomes manifest. We apply the same idea to the dihadron production in e+e- annihilation near threshold, and show that the resultant soft function is also free of infrared and rapidity divergences.

  7. Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.

    PubMed

    Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana

    2017-07-01

    Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using the GW. Both the green-channel image and the Gabor feature image undergo a vessel-enhancement step to highlight blood vessels. Next, the two vessel-enhanced images are converted to binary images using automatic thresholding and then combined to produce the final vessel output. Combining the images significantly improves blood vessel extraction performance compared with using either image individually. The effectiveness of the proposed method was demonstrated by comparative analysis against existing methods, validated on the publicly available DRIVE database.

  8. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on wavelet transform and region of interest (ROI) coding. The algorithm has realized near-lossless coding in the ROI and quality-controllable lossy coding outside the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are important coefficients and are kept. Otherwise, the energy loss in the transform domain is calculated according to the target PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is then determined according to this loss of energy. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside the ROI, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database were tested, and satisfactory results in terms of clinical information preservation, quality, and compression ratio were obtained.
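
    The mapping from a target PRDBE to a transform-domain energy budget is specific to the paper; the sketch below shows only the generic step of picking the largest threshold whose discarded (non-ROI) coefficients stay within a given energy-loss fraction. All names and the toy coefficients are hypothetical:

```python
import numpy as np

def threshold_for_energy_loss(coeffs, loss_fraction):
    """Largest threshold such that dropping every coefficient with magnitude
    at or below it discards at most `loss_fraction` of the total energy."""
    mags = np.sort(np.abs(coeffs))        # ascending magnitudes
    energy = np.cumsum(mags ** 2)         # energy lost if the k smallest go
    k = np.searchsorted(energy, loss_fraction * energy[-1])
    return mags[k - 1] if k > 0 else 0.0

coeffs = np.array([0.1, -0.2, 5.0, 0.05, -4.0, 0.15])
t = threshold_for_energy_loss(coeffs, 0.01)   # allow 1% energy loss
kept = coeffs[np.abs(coeffs) > t]             # "important" coefficients
```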

  9. The efficacy of support vector machines (SVM) in robust determination of earthquake early warning magnitudes in central Japan

    NASA Astrophysics Data System (ADS)

    Reddy, Ramakrushna; Nair, Rajesh R.

    2013-10-01

    This work deals with a methodology for seismic early warning systems, which are designed to provide real-time estimation of the magnitude of an event. We reappraise the work of Simons et al. (2006), who on the basis of a wavelet approach predicted a magnitude error of ±1. We verify and improve upon the methodology of Simons et al. (2006) by applying an SVM statistical learning machine to the time-scale wavelet decomposition. We used data from 108 events in central Japan with magnitudes ranging from 3 to 7.4, recorded at KiK-net network stations at source-receiver distances of up to 150 km during the period 1998-2011. We applied a wavelet transform to the seismogram data and calculated scale-dependent threshold wavelet coefficients. These coefficients were then classified into low-magnitude and high-magnitude events by constructing a maximum-margin hyperplane between the two classes, which forms the essence of SVMs. Further, the classified events from both classes were picked and linear regressions were fitted to determine the relationship between wavelet coefficient magnitude and earthquake magnitude, which in turn helped us estimate the earthquake magnitude of an event given its threshold wavelet coefficient. At wavelet scale number 7, we predicted the earthquake magnitude of an event within 2.7 seconds. This means that a magnitude determination is available within 2.7 s after the initial onset of the P-wave. These results shed light on the application of SVMs as a way to choose the optimal regression function to estimate the magnitude from a few seconds of an incoming seismogram. This improves on the approach of Simons et al. (2006), which uses an average of two regression functions to estimate the magnitude.

  10. [Investigation of fast filter of ECG signals with lifting wavelet and smooth filter].

    PubMed

    Li, Xuefei; Mao, Yuxing; He, Wei; Yang, Fan; Zhou, Liang

    2008-02-01

    The lifting wavelet is used to decompose the original ECG signals into approximation signals with low frequency and detail signals with high frequency, based on their frequency characteristics. Parts of the detail signals are discarded according to the frequency characteristics. To avoid distortion of the QRS complexes, the approximation signals are filtered by an adaptive smoothing filter with a proper threshold value. Through the inverse lifting wavelet transform, the reserved approximation signals are reconstructed, and the three primary kinds of noise are limited effectively. In addition, the method is fast and there is no time delay between input and output.
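
    The paper does not identify which lifting wavelet it uses; the Haar case is the simplest illustration of the split/predict/update structure that makes lifting transforms fast and exactly invertible. A minimal sketch:

```python
import numpy as np

def haar_lift(x):
    """One Haar lifting step: split into even/odd, predict, update.
    Returns (approximation, detail); x must have even length."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict: detail = prediction error
    s = even + d / 2.0        # update: approximation = pairwise mean
    return s, d

def haar_unlift(s, d):
    """Invert the lifting step exactly by undoing update, then predict."""
    even = s - d / 2.0
    odd = d + even
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 1.0, 1.0])
s, d = haar_lift(x)           # s = pairwise means, d = pairwise differences
```

In the paper's scheme, `s` would be smoothed adaptively and parts of `d` discarded before calling the inverse.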

  11. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

    In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
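
    The proximal step behind (E)PIHT is the hard-thresholding operator: for the scalar problem min over x of ½(x − y)² + λ·1{x ≠ 0}, keeping x = y costs λ while zeroing costs ½y², so an entry survives iff y² > 2λ. A sketch of just this operator (the frame-based balanced model, extrapolation, and line search are omitted):

```python
import numpy as np

def hard_threshold(y, lam):
    """Prox of lam*||x||_0 under 0.5*||x - y||^2: an entry stays nonzero
    (cost lam) only if zeroing it (cost 0.5*y_i**2) would cost more,
    i.e. iff y_i**2 > 2*lam."""
    x = y.copy()
    x[y ** 2 <= 2.0 * lam] = 0.0
    return x

y = np.array([0.1, -2.0, 0.3, 1.5, -0.05])
x = hard_threshold(y, lam=0.5)     # keeps entries with |y_i| > 1
```

EPIHT alternates a gradient step on the data-fidelity term with this prox, applied at an extrapolated (momentum) point.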

  12. Multiadaptive Bionic Wavelet Transform: Application to ECG Denoising and Baseline Wandering Reduction

    NASA Astrophysics Data System (ADS)

    Sayadi, Omid; Shamsollahi, Mohammad B.

    2007-12-01

    We present a new modified wavelet transform, called the multiadaptive bionic wavelet transform (MABWT), that can be applied to ECG signals in order to remove noise from them under a wide range of noise variations. By using the definition of the bionic wavelet transform and adaptively determining both the center frequency of each scale and the[InlineEquation not available: see fulltext.]-function, the problem of desired signal decomposition is solved. Applying a newly proposed thresholding rule works successfully in denoising the ECG. Moreover, by using the multiadaptation scheme, lowpass noisy interference effects on the baseline of the ECG are removed as a direct task. The method was extensively tested with real and simulated ECG signals and showed high noise-reduction performance, comparable to that of the wavelet transform (WT). Quantitative evaluation of the proposed algorithm shows that the average SNR improvement of the MABWT is 1.82 dB more than the WT-based results in the best case. The procedure has also proved largely advantageous over wavelet-based methods for baseline wander cancellation, including both DC components and baseline drifts.

  13. Wavelet based detection of manatee vocalizations

    NASA Astrophysics Data System (ADS)

    Gur, Berke M.; Niezrecki, Christopher

    2005-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.

  14. Wavelet-domain de-noising technique for THz pulsed spectroscopy

    NASA Astrophysics Data System (ADS)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Gavdush, Arsenii A.; Fokina, Irina N.; Karasik, Valeriy E.; Reshetov, Igor V.; Kudrin, Konstantin G.; Nosov, Pavel A.; Yurchenko, Stanislav O.

    2014-09-01

    De-noising of terahertz (THz) pulsed spectroscopy (TPS) data is an essential problem, since noise in the TPS system data prevents correct reconstruction of the sample's spectral dielectric properties and study of its internal structure. There are certain regions in the Fourier spectrum of a TPS signal where the Fourier-domain signal-to-noise ratio is relatively small. Effective de-noising might potentially expand the range of the spectrometer's spectral sensitivity and reduce the time of waveform registration, which is an essential problem for biomedical applications of TPS. In this work, it is shown how recent progress in wavelet-domain signal processing can be used for de-noising TPS waveforms. We demonstrate the ability to perform effective de-noising of TPS data using the Fast Wavelet Transform (FWT) algorithm. The results of selecting the optimal wavelet basis and the wavelet-domain thresholding technique are reported. The developed technique is applied to reconstruct the spectral characteristics of in vivo healthy and diseased skin samples in the THz frequency range.

  15. Multi-level basis selection of wavelet packet decomposition tree for heart sound classification.

    PubMed

    Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Abdullah Ramaiah, Asri Ranga

    2013-10-01

    Wavelet packet transform decomposes a signal into a set of orthonormal bases (nodes) and provides the opportunity to select an appropriate set of these bases for feature extraction. In this paper, multi-level basis selection (MLBS) is proposed to preserve the most informative bases of a wavelet packet decomposition tree by removing less informative bases according to three exclusion criteria: frequency range, noise frequency, and energy threshold. MLBS achieved an accuracy of 97.56% for classifying normal heart sound, aortic stenosis, mitral regurgitation, and aortic regurgitation. MLBS is a promising basis-selection method for signals with a small range of frequencies. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
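
    Of the three MLBS exclusion criteria, the energy threshold is the easiest to sketch. Below, a hand-rolled two-level Haar packet tree is pruned of nodes holding under 5% of the total energy; the wavelet, depth, fraction, and test signal are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def haar_split(x):
    """Single Haar analysis step: (low-pass, high-pass), each half length."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def packet_nodes(x, levels=2):
    """Full wavelet packet tree as {path: coefficients}; the path records
    the low ('a') / high ('d') branch taken at each level."""
    nodes = {"": np.asarray(x, dtype=float)}
    for _ in range(levels):
        nodes = {path + tag: band
                 for path, data in nodes.items()
                 for tag, band in zip("ad", haar_split(data))}
    return nodes

def prune_by_energy(nodes, frac=0.05):
    """Energy-threshold exclusion: drop nodes with < frac of total energy."""
    total = sum(np.sum(c ** 2) for c in nodes.values())
    return {p: c for p, c in nodes.items() if np.sum(c ** 2) >= frac * total}

tone = np.sin(2 * np.pi * np.arange(64) / 16.0)   # low-band test signal
kept = prune_by_energy(packet_nodes(tone, levels=2))
```

For this low-frequency tone the energy concentrates in the low-low node, so the high-band nodes are excluded.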

  16. Continuous Wavelet Transform Analysis of Acceleration Signals Measured from a Wave Buoy

    PubMed Central

    Chuang, Laurence Zsu-Hsin; Wu, Li-Chung; Wang, Jong-Hao

    2013-01-01

    Accelerometers, which can be installed inside a floating platform on the sea, are among the most commonly used sensors for operational ocean wave measurements. To examine the non-stationary features of ocean waves, this study was conducted to derive a wavelet spectrum of ocean waves and to synthesize sea surface elevations from the vertical acceleration signals of a wave buoy through continuous wavelet transform theory. The short-time wave features can be revealed by simultaneously examining the wavelet spectrum and the synthetic sea surface elevations. In situ wave signals were applied to verify the practicality of the wavelet-based algorithm. We confirm that spectral leakage and the noise at very-low-frequency bins influenced the accuracies of the estimated wavelet spectrum and the synthetic sea surface elevations, and the appropriate thresholds for these two factors were explored. To study the short-time wave features from the wave records, the acceleration signals recorded by an accelerometer inside a discus wave buoy are analysed. The results from the wavelet spectrum show evidence of short-time nonlinear wave events. Our study also reveals that more surface profiles with higher vertical asymmetry can be found in short-time nonlinear waves with a stronger harmonic spectral peak. Finally, we conclude that the algorithms of the continuous wavelet transform are practical for revealing the short-time wave features of buoy acceleration signals. PMID:23966188
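
    The paper synthesizes elevations with a continuous wavelet algorithm; a cruder frequency-domain sketch conveys the same double integration, â(f) = −(2πf)²·η̂(f), and the need to suppress the very-low-frequency bins the authors flag as noise-sensitive. The sampling rate, cutoff, and swell frequency below are hypothetical:

```python
import numpy as np

fs = 4.0                             # hypothetical buoy sampling rate (Hz)
t = np.arange(512) / fs
f0 = 0.125                           # an 8-second swell component
eta_true = np.sin(2 * np.pi * f0 * t)         # surface elevation (m)
accel = -(2 * np.pi * f0) ** 2 * eta_true     # vertical acceleration d2(eta)/dt2

A = np.fft.rfft(accel)
f = np.fft.rfftfreq(len(accel), d=1.0 / fs)
eta_hat = np.zeros_like(A)
band = f >= 0.03                     # drop noise-dominated very-low-freq bins
eta_hat[band] = A[band] / (-(2 * np.pi * f[band]) ** 2)   # divide by (i*w)**2
eta = np.fft.irfft(eta_hat, n=len(accel))     # synthetic surface elevation
```

Without the cutoff, any low-frequency accelerometer noise is amplified by 1/f², which is exactly the failure mode the abstract describes.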

  17. Wavelet threshold method of resolving noise interference in periodic short-impulse signals chaotic detection

    NASA Astrophysics Data System (ADS)

    Deng, Ke; Zhang, Lu; Luo, Mao-Kang

    2010-03-01

    The chaotic oscillator has been considered a powerful method for detecting weak signals, even weak signals accompanied by noise. However, many examples, analyses and simulations indicate that a chaotic oscillator detection system cannot guarantee immunity to noise (even white noise). In fact, the randomness of noise has a serious or even destructive effect on the detection results in many cases. To solve this problem, we present a new detection method based on wavelet threshold processing that can detect a chaotic weak signal accompanied by noise. All theoretical analyses and simulation experiments indicate that the new method significantly reduces the noise interference with detection, thereby making the corresponding chaotic oscillator that detects weak signals accompanied by noise more stable and reliable.
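
    The paper's exact thresholding rule is not given; the standard Donoho-Johnstone recipe, a MAD noise estimate plus the universal threshold σ√(2 ln N) with soft shrinkage, is sketched here on a one-level Haar transform as a stand-in pre-processing step:

```python
import numpy as np

def soft(x, t):
    """Soft thresholding: shrink each coefficient toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_universal_denoise(y):
    """One-level Haar transform + universal-threshold soft shrinkage."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)      # approximation
    d = (y[0::2] - y[1::2]) / np.sqrt(2)      # detail (mostly noise)
    sigma = np.median(np.abs(d)) / 0.6745     # robust MAD noise estimate
    d = soft(d, sigma * np.sqrt(2 * np.log(len(y))))
    out = np.empty_like(y)
    out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(2)
clean = np.repeat(np.array([0.0, 4.0, -1.0, 2.0]), 64)   # piecewise-flat signal
noisy = clean + rng.normal(0, 0.5, clean.size)
den = haar_universal_denoise(noisy)
```

The denoised signal, rather than the raw noisy one, would then drive the chaotic oscillator detector.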

  18. An improved method based on wavelet coefficient correlation to filter noise in Doppler ultrasound blood flow signals

    NASA Astrophysics Data System (ADS)

    Wan, Renzhi; Zu, Yunxiao; Shao, Lin

    2018-04-01

    The blood echo signal acquired by medical ultrasound Doppler devices always includes a vascular wall pulsation signal. The traditional way to remove the wall signal is a high-pass filter, which also removes the low-frequency part of the blood flow signal. Some scholars have put forward a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. Although this method nominally uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, it is in fact a kind of wavelet threshold de-noising, whose effect is not ideal. To achieve a better result, this paper proposes an improved method based on wavelet coefficient correlation to separate the blood signal and the wall signal, and simulates the algorithm by computer to verify its validity.

  19. Wavelet Fusion for Concealed Object Detection Using Passive Millimeter Wave Sequence Images

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Pang, L.; Liu, H.; Xu, X.

    2018-04-01

    PMMW imaging systems can create interpretable imagery of objects concealed under clothing, which gives them a great advantage in security check systems. This paper addresses wavelet fusion to detect concealed objects using passive millimeter wave (PMMW) sequence images. First, according to the image characteristics and storage method of the real-time PMMW imager, the sum of squared differences (SSD) is used as an image-correlation parameter to screen the sequence images. Second, the selected images are optimized using a wavelet fusion algorithm. Finally, the concealed objects are detected by mean filtering, threshold segmentation and edge detection. The experimental results show that this method improves the detection of concealed objects by selecting the most relevant images from the PMMW sequence and using wavelet fusion to enhance the information of the concealed objects. The method can be effectively applied to the detection of objects concealed on the human body in millimeter wave video.

  20. ECG denoising with adaptive bionic wavelet transform.

    PubMed

    Sayadi, Omid; Shamsollahi, Mohammad Bagher

    2006-01-01

    In this paper a new ECG denoising scheme is proposed using a novel adaptive wavelet transform, named the bionic wavelet transform (BWT), which was first developed based on a model of the active auditory system. The BWT has some outstanding features, such as nonlinearity, high sensitivity and frequency selectivity, concentrated energy distribution and the ability to reconstruct the signal via the inverse transform, but its most distinguishing characteristic is that its resolution in the time-frequency domain can be adaptively adjusted not only by the signal frequency but also by the signal's instantaneous amplitude and its first-order differential. Moreover, by optimizing the BWT parameters in parallel with a modified threshold value, one can handle ECG denoising with results comparable to those of the wavelet transform (WT). Preliminary tests of the BWT applied to ECG denoising were conducted on signals from the MIT-BIH database and showed high noise-reduction performance.

  1. [A new method of distinguishing weak and overlapping signals of proton magnetic resonance spectroscopy].

    PubMed

    Jiang, Gang; Quan, Hong; Wang, Cheng; Gong, Qiyong

    2012-12-01

    In this paper, a new method combining the translation-invariant (TI) and wavelet-threshold (WT) algorithms to distinguish weak and overlapping signals of proton magnetic resonance spectroscopy (1H-MRS) is presented. First, the 1H-MRS spectrum is transformed into the wavelet domain and its wavelet coefficients are obtained. Then, the TI and WT methods are applied to detect the weak signals overlapped by the strong ones. Analysis of simulated data shows that both the frequency and amplitude information of weak signals can be obtained accurately by the algorithm, and, in combination with signal fitting, quantitative calculation of the area under weak signal peaks can be realized.
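
    Translation invariance is usually obtained by cycle spinning: threshold every circular shift of the signal, undo the shift, and average. A minimal sketch with a one-level Haar soft threshold (the 1H-MRS-specific processing is not reproduced; the signal and threshold are illustrative):

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_soft(y, t):
    """One-level Haar transform, soft-threshold the details, invert."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2)
    d = soft((y[0::2] - y[1::2]) / np.sqrt(2), t)
    out = np.empty_like(y)
    out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return out

def ti_denoise(y, t, shifts=8):
    """Translation-invariant thresholding via cycle spinning: denoise each
    circular shift, undo the shift, and average the results."""
    acc = np.zeros_like(y)
    for s in range(shifts):
        acc += np.roll(haar_soft(np.roll(y, s), t), -s)
    return acc / shifts

rng = np.random.default_rng(3)
clean = np.repeat(np.array([0.0, 3.0]), [65, 63])   # jump off the dyadic grid
noisy = clean + rng.normal(0, 0.3, clean.size)
den = ti_denoise(noisy, t=0.9)
```

Averaging over shifts suppresses the pseudo-Gibbs artifacts that plain thresholding produces near sharp features, which is why TI helps near weak, overlapping peaks.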

  2. Hardware Design and Implementation of a Wavelet De-Noising Procedure for Medical Signal Preprocessing

    PubMed Central

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-01-01

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also propose a novel adaptive thresholding scheme and incorporate it into the wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz. PMID:26501290

  3. Hardware design and implementation of a wavelet de-noising procedure for medical signal preprocessing.

    PubMed

    Chen, Szi-Wen; Chen, Yuan-Ho

    2015-10-16

    In this paper, a discrete wavelet transform (DWT) based de-noising method and its application to noise reduction for medical signal preprocessing are introduced. This work focuses on the hardware realization of a real-time wavelet de-noising procedure. The proposed de-noising circuit mainly consists of three modules: a DWT, a thresholding, and an inverse DWT (IDWT) modular circuit. We also propose a novel adaptive thresholding scheme and incorporate it into the wavelet de-noising procedure. Performance was then evaluated on the architectural designs of both the software and the hardware. In addition, the de-noising circuit was also implemented by downloading the Verilog codes to a field programmable gate array (FPGA) based platform so that its ability in noise reduction could be further validated in actual practice. Simulation results produced by applying a set of simulated noise-contaminated electrocardiogram (ECG) signals to the de-noising circuit showed that the circuit could not only desirably meet the requirement of real-time processing, but also achieve satisfactory noise reduction, while the sharp features of the ECG signals are well preserved. The proposed de-noising circuit was further synthesized using the Synopsys Design Compiler with an Artisan Taiwan Semiconductor Manufacturing Company (TSMC, Hsinchu, Taiwan) 40 nm standard cell library. The integrated circuit (IC) synthesis simulation results showed that the proposed design can achieve a clock frequency of 200 MHz with a power consumption of only 17.4 mW when operated at 200 MHz.

  4. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
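
    The Stokes parameters named in the abstract follow directly from the six intensity measurements. A sketch (function names are ours; the check uses fully linearly polarized light at 0°, for which S = (1, 1, 0, 0)):

```python
import numpy as np

def stokes(I0, I45, I90, I135, IRC, ILC):
    """Stokes vector from the six polarimetric intensity measurements."""
    return np.array([I0 + I90,      # S0: total intensity
                     I0 - I90,      # S1: 0 deg vs 90 deg linear
                     I45 - I135,    # S2: +45 deg vs 135 deg linear
                     IRC - ILC])    # S3: right vs left circular

def dolp(S):
    """Degree of linear polarization, useful for manmade-object contrast."""
    return np.hypot(S[1], S[2]) / S[0]

# Fully polarized light, linear at 0 degrees:
S = stokes(I0=1.0, I45=0.5, I90=0.0, I135=0.5, IRC=0.5, ILC=0.5)
```

Maps of S1, S2, or DoLP computed per pixel are the quantities that would then feed the wavelet-based thresholding and segmentation.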

  5. A wavelet-based estimator of the degrees of freedom in denoised fMRI time series for probabilistic testing of functional connectivity and brain graphs.

    PubMed

    Patel, Ameera X; Bullmore, Edward T

    2016-11-15

    Connectome mapping using techniques such as functional magnetic resonance imaging (fMRI) has become a focus of systems neuroscience. There remain many statistical challenges in analysis of functional connectivity and network architecture from BOLD fMRI multivariate time series. One key statistic for any time series is its (effective) degrees of freedom, df, which will generally be less than the number of time points (or nominal degrees of freedom, N). If we know the df, then probabilistic inference on other fMRI statistics, such as the correlation between two voxel or regional time series, is feasible. However, we currently lack good estimators of df in fMRI time series, especially after the degrees of freedom of the "raw" data have been modified substantially by denoising algorithms for head movement. Here, we used a wavelet-based method both to denoise fMRI data and to estimate the (effective) df of the denoised process. We show that seed voxel correlations corrected for locally variable df could be tested for false positive connectivity with better control over Type I error and greater specificity of anatomical mapping than probabilistic connectivity maps using the nominal degrees of freedom. We also show that wavelet despiked statistics can be used to estimate all pairwise correlations between a set of regional nodes, assign a P value to each edge, and then iteratively add edges to the graph in order of increasing P. These probabilistically thresholded graphs are likely more robust to regional variation in head movement effects than comparable graphs constructed by thresholding correlations. Finally, we show that time-windowed estimates of df can be used for probabilistic connectivity testing or dynamic network analysis so that apparent changes in the functional connectome are appropriately corrected for the effects of transient noise bursts. 
Wavelet despiking is both an algorithm for fMRI time series denoising and an estimator of the (effective) df of denoised fMRI time series. Accurate estimation of df offers many potential advantages for probabilistically thresholding functional connectivity and network statistics tested in the context of spatially variant and non-stationary noise. Code for wavelet despiking, seed correlational testing and probabilistic graph construction is freely available to download as part of the BrainWavelet Toolbox at www.brainwavelet.org. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Lifting wavelet method of target detection

    NASA Astrophysics Data System (ADS)

    Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin

    2009-11-01

    Image target recognition plays a very important role in scientific exploration, aeronautics, space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference from complex environments have always affected the stability of recognition algorithms. To address the real-time, accuracy and anti-interference problems of target detection, this paper uses a lifting wavelet method for image target detection. First, histogram equalization and frame differencing are used to obtain the target region, and adaptive thresholding and mathematical morphology operations eliminate background errors. Second, a multi-channel wavelet filter performs wavelet-transform de-noising and enhancement of the original image, overcoming the noise sensitivity of general algorithms and reducing the false-detection rate; the multi-resolution character of the wavelet and the lifting framework, which can be designed directly in the spatio-temporal domain, are exploited for target detection and target feature extraction. The experimental results show that the designed lifting wavelet overcomes the difficulties that complex backgrounds cause for moving-target detection: it effectively suppresses noise and improves the efficiency and speed of detection.

  7. Wavelet-based segmentation of renal compartments in DCE-MRI of human kidney: initial results in patients and healthy volunteers.

    PubMed

    Li, Sheng; Zöllner, Frank G; Merrem, Andreas D; Peng, Yinghong; Roervik, Jarle; Lundervold, Arvid; Schad, Lothar R

    2012-03-01

    Renal diseases can lead to kidney failure that requires life-long dialysis or renal transplantation. Early detection and treatment can prevent progression towards end-stage renal disease. MRI has evolved into a standard examination for the assessment of renal morphology and function. We propose a wavelet-based clustering to group the voxel time courses and thereby segment the renal compartments. This approach comprises (1) a nonparametric, discrete wavelet transform of the voxel time course, (2) thresholding of the wavelet coefficients using Stein's Unbiased Risk Estimator, and (3) k-means clustering of the wavelet coefficients to segment the kidneys. Our method was applied to 3D dynamic contrast enhanced (DCE-) MRI data sets of human kidney in four healthy volunteers and three patients. On average, the renal cortex in the healthy volunteers could be segmented at 88%, the medulla at 91%, and the pelvis at 98% accuracy. In the patient data, with aberrant voxel time courses, the segmentation was also feasible, with good results for the kidney compartments. In conclusion, wavelet-based clustering of DCE-MRI of the kidney is feasible and a valuable tool towards automated perfusion and glomerular filtration rate quantification. Copyright © 2011 Elsevier Ltd. All rights reserved.
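
    Step (2) above can be sketched concretely: for soft thresholding with unit noise variance, SURE(t) = N − 2·#{|x_i| ≤ t} + Σ min(x_i², t²), minimized over the candidate thresholds t = |x_i| (rescale by the noise σ in practice; the wavelet transform and clustering steps are omitted, and the toy coefficients are ours):

```python
import numpy as np

def sure_threshold(x):
    """Pick the soft threshold minimizing Stein's Unbiased Risk Estimate,
    SURE(t) = N - 2*#{|x_i| <= t} + sum_i min(x_i**2, t**2),
    assuming unit noise variance (divide by sigma first otherwise)."""
    x2 = np.sort(np.abs(x)) ** 2          # candidate thresholds, squared
    n = len(x)
    ks = np.arange(1, n + 1)              # #{|x_i| <= t} at t = |x|_(k)
    risk = n - 2.0 * ks + np.cumsum(x2) + (n - ks) * x2
    return np.sqrt(x2[np.argmin(risk)])

rng = np.random.default_rng(4)
coeffs = np.concatenate([rng.normal(0, 1, 200), [8.0, -7.0, 9.0]])
t = sure_threshold(coeffs)                # lands between noise and signal level
```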

  8. A generalized time-frequency subtraction method for robust speech enhancement based on wavelet filter banks modeling of human auditory system.

    PubMed

    Shao, Yu; Chang, Chip-Hong

    2007-08-01

    We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.

  9. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.

  10. Sensor system for heart sound biomonitor

    NASA Astrophysics Data System (ADS)

    Maple, Jarrad L.; Hall, Leonard T.; Agzarian, John; Abbott, Derek

    1999-09-01

    Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually, rather than heard through a conventional stethoscope. A system is presented whereby a digital stethoscope interfaces directly to a PC, along with the signal processing algorithms adopted. The sensor is based on a noise-cancellation microphone with a 450 Hz bandwidth and is sampled at 2250 samples/sec with 12-bit resolution. For comparison, we also discuss a piezo-based sensor with a 1 kHz bandwidth. A major problem is that the recording of the heart sound with these devices is subject to unwanted background noise, which can override the heart sound and result in a poor visual representation. This noise originates from various sources such as skin contact with the stethoscope diaphragm, lung sounds, and other surrounding sounds such as speech. We demonstrate a solution using 'wavelet denoising'. The wavelet transform is used because of the similarity between the shape of wavelets and the time-domain shape of a heartbeat sound. Thus coding of the waveform into the wavelet domain is achieved with relatively few wavelet coefficients, in contrast to the many Fourier components that would result from conventional decomposition. We show that the background noise can be dramatically reduced by a thresholding operation in the wavelet domain. The principle is that the background noise codes into many small broadband wavelet coefficients that can be removed without significant degradation of the signal of interest.
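
    The thresholding principle described in this record can be sketched with a single-level Haar transform. The following is a hypothetical NumPy illustration, not the authors' implementation: the heartbeat-like trace, noise level, and the universal-threshold rule are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)

# Hypothetical heartbeat-like trace: two smooth bumps standing in for S1 and S2.
clean = np.exp(-0.5 * ((t - 300) / 25.0) ** 2) + 0.6 * np.exp(-0.5 * ((t - 600) / 20.0) ** 2)
noisy = clean + rng.normal(0.0, 0.2, n)

# One level of the orthonormal Haar transform.
approx = (noisy[0::2] + noisy[1::2]) / np.sqrt(2.0)
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2.0)

# Noise codes into many small detail coefficients; shrink them with a
# robust (MAD-based) estimate of sigma and the universal threshold.
sigma = np.median(np.abs(detail)) / 0.6745
thr = sigma * np.sqrt(2.0 * np.log(n))
detail = np.sign(detail) * np.maximum(np.abs(detail) - thr, 0.0)

# Inverse Haar transform.
denoised = np.empty(n)
denoised[0::2] = (approx + detail) / np.sqrt(2.0)
denoised[1::2] = (approx - detail) / np.sqrt(2.0)

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

    A real system would use several decomposition levels and a wavelet shaped like the heart sound, but even this single-level sketch reduces the mean squared error, because the smooth signal contributes almost nothing to the detail band that gets shrunk.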

  11. The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm

    NASA Astrophysics Data System (ADS)

    Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero

    1999-10-01

    We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.

  12. Improving wavelet denoising based on an in-depth analysis of the camera color processing

    NASA Astrophysics Data System (ADS)

    Seybold, Tamara; Plichta, Mathias; Stechele, Walter

    2015-02-01

    While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set, and the noise model is usually additive white Gaussian noise (AWGN). This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the camera color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods, improving both wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.
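
    For reference, the two pointwise shrinkage rules this record tunes (hard thresholding, and soft thresholding as used elsewhere in these records) can be written in a few lines. This is a generic sketch of the standard operators, not the paper's noise-adaptive look-up-table method, and the coefficient values are invented for illustration.

```python
import numpy as np

def hard_threshold(c, t):
    """Keep coefficients whose magnitude exceeds t, zero the rest."""
    return np.where(np.abs(c) > t, c, 0.0)

def soft_threshold(c, t):
    """Shrink every coefficient toward zero by t (soft rule)."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
hard = hard_threshold(c, 1.0)   # [-3.0, 0.0, 0.0, 1.5, 4.0]
soft = soft_threshold(c, 1.0)   # [-2.0, 0.0, 0.0, 0.5, 3.0]
```

    Hard thresholding preserves the magnitude of surviving coefficients; soft thresholding biases them toward zero but avoids the discontinuity at the threshold, which usually gives visually smoother results.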

  13. Identification of structural damage using wavelet-based data classification

    NASA Astrophysics Data System (ADS)

    Koh, Bong-Hwan; Jeong, Min-Joong; Jung, Uk

    2008-03-01

    Predicted time-history responses from a finite-element (FE) model provide a baseline map where damage locations are clustered and classified by extracted damage-sensitive wavelet coefficients, such as vertical energy threshold (VET) positions having large silhouette statistics. Likewise, the measured data from the damaged structure are decomposed and rearranged according to the most dominant positions of wavelet coefficients. Having projected the coefficients onto the baseline map, the true localization of damage can be identified by investigating the level of closeness between the measurement and the predictions. The statistical confidence of the baseline map improves as the number of prediction cases increases. The simulation results of damage detection in a truss structure show that the approach proposed in this study can be successfully applied for locating structural damage even in the presence of a considerable amount of process and measurement noise.

  14. Improved wavelet de-noising method of rail vibration signal for wheel tread detection

    NASA Astrophysics Data System (ADS)

    Zhao, Quan-ke; Zhao, Quanke; Gao, Xiao-rong; Luo, Lin

    2011-12-01

    The irregularities of wheel tread can be detected by processing the acceleration vibration signal of the rail. Various kinds of noise from different sources, such as wheel-rail resonance, bad weather and human factors, are the key influences on detection accuracy. A method using wavelet threshold de-noising is investigated to reduce noise in the detection signal, and an improved signal processing algorithm based on it has been established. The results of simulations and field experiments show that the proposed method can effectively increase the signal-to-noise ratio (SNR) of the rail vibration signal and improve the detection accuracy.

  15. Noise characteristics in DORIS station positions time series derived from IGN-JPL, INASAN and CNES-CLS analysis centres

    NASA Astrophysics Data System (ADS)

    Khelifa, S.

    2014-12-01

    Using the wavelet transform and the Allan variance, we have analysed the solutions of weekly position residuals of nine high-latitude DORIS stations in STCD (STation Coordinate Difference) format provided by three analysis centres: IGN-JPL (solution ign11wd01), INASAN (solution ina10wd01) and CNES-CLS (solution lca11wd02), in order to compare the spectral characteristics of their residual noise. The temporal correlations between the three solutions, two by two and station by station, for each component (North, East and Vertical) reveal a high correlation in the horizontal components (North and East). For the North component, the average correlation is about 0.88, 0.81 and 0.79 between the IGN-INA, IGN-LCA and INA-LCA solutions, respectively; for the East component it is about 0.84, 0.82 and 0.76, respectively. The correlations for the Vertical component are moderate, with averages of 0.64, 0.57 and 0.58 for the IGN-INA, IGN-LCA and INA-LCA solutions, respectively. After removing the trends and seasonal components from the analysed time series, the Allan variance analysis shows that all three solutions are dominated by white noise in all three components (North, East and Vertical). The wavelet transform analysis, using the VisuShrink method with soft thresholding, reveals that the noise level in the LCA solution is lower than in the IGN and INA solutions. Indeed, the standard deviation of the noise for the three components is in the range of 5-11, 5-12 and 4-9 mm for the IGN, INA and LCA solutions, respectively.
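
    The VisuShrink rule mentioned here combines a robust noise estimate with the universal threshold sigma * sqrt(2 ln n). A minimal sketch on a synthetic detrended residual series follows; the series length, seed and 6 mm noise level are assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048
sigma_true = 6.0                      # mm, illustrative noise level only
y = rng.normal(0.0, sigma_true, n)    # detrended residual series (pure noise here)

# Finest-scale Haar detail coefficients carry (almost) only noise.
d = (y[0::2] - y[1::2]) / np.sqrt(2.0)

# Robust noise estimate via the median absolute deviation, then the
# VisuShrink universal threshold sigma_hat * sqrt(2 ln n).
sigma_hat = np.median(np.abs(d)) / 0.6745
t_universal = sigma_hat * np.sqrt(2.0 * np.log(n))

# Soft thresholding: for a pure-noise series, nearly every coefficient is zeroed.
d_shrunk = np.sign(d) * np.maximum(np.abs(d) - t_universal, 0.0)
```

    The universal threshold is deliberately conservative: with high probability it exceeds every pure-noise coefficient, so what survives thresholding can be attributed to signal.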

  16. A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

    NASA Astrophysics Data System (ADS)

    Kitayama, Masashi

    Recently, customers who need electricity of higher quality have been installing co-generation facilities; they can avoid voltage sags and other distribution-system disturbances by supplying electricity to important loads from their own generators. As another example, FRIENDS, a highly reliable distribution system using semiconductor switches and storage devices based on power electronics technology, has been proposed. These examples illustrate that the demand for high reliability in distribution systems is increasing. In order to realize such systems, fast relaying algorithms are indispensable. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT), which provides a means of detecting discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes in the DWT components arising from the discontinuity of the current waveform at both the beginning and the end of the inrush current. Wavelet thresholding, a wavelet-based statistical modeling technique, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
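
    The core idea, that a waveform discontinuity produces a localized spike in the fine-scale DWT coefficients, is easy to demonstrate. The toy waveform below (a slow sinusoid with an abrupt offset standing in for saturation onset) is an assumption of this sketch, not the paper's measured transformer current.

```python
import numpy as np

n = 1000
t = np.arange(n)

# Slowly varying current waveform plus a sudden offset when the core "saturates".
x = np.sin(2 * np.pi * t / n)
x[301:] += 1.0  # hypothetical saturation onset at sample 301

# Level-1 Haar detail coefficients: small for the smooth waveform,
# large only at the pair straddling the discontinuity.
d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
spike = int(np.argmax(np.abs(d)))  # pair index 150 covers samples 300 and 301
```

    Note that a decimated transform can miss a jump that falls exactly between coefficient pairs; practical detectors typically use a shift-invariant (undecimated) transform or examine more than one level.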

  17. Rapid limit tests for metal impurities in pharmaceutical materials by X-ray fluorescence spectroscopy using wavelet transform filtering.

    PubMed

    Arzhantsev, Sergey; Li, Xiang; Kauffman, John F

    2011-02-01

    We introduce a new method for analysis of X-ray fluorescence (XRF) spectra based on continuous wavelet transform filters, and the method is applied to the determination of toxic metals in pharmaceutical materials using hand-held XRF spectrometers. The method uses the continuous wavelet transform to filter the signal and noise components of the spectrum. We present a limit test that compares the wavelet domain signal-to-noise ratios at the energies of the elements of interest to an empirically determined signal-to-noise decision threshold. The limit test is advantageous because it does not require the user to measure calibration samples prior to measurement, though system suitability tests are still recommended. The limit test was evaluated in a collaborative study that involved five different hand-held XRF spectrometers used by multiple analysts in six separate laboratories across the United States. In total, more than 1200 measurements were performed. The detection limits estimated for arsenic, lead, mercury, and chromium were 8, 14, 20, and 150 μg/g, respectively.

  18. Does hearing in response to soft-tissue stimulation involve skull vibrations? A within-subject comparison between skull vibration magnitudes and hearing thresholds.

    PubMed

    Chordekar, Shai; Perez, Ronen; Adelman, Cahtia; Sohmer, Haim; Kishon-Rabin, Liat

    2018-04-03

    Hearing can be elicited in response to bone as well as soft-tissue stimulation. However, the underlying mechanism of soft-tissue stimulation is under debate. It has been hypothesized that if skull vibrations were the underlying mechanism of hearing in response to soft-tissue stimulation, then skull vibrations would be associated with hearing thresholds; if they were not, an alternative mechanism would be involved. In the present study, both skull vibrations and hearing thresholds were assessed in the same participants in response to bone (mastoid) and soft-tissue (neck) stimulation. The experimental group included five hearing-impaired adults in whom a bone-anchored hearing aid was implanted due to conductive or mixed hearing loss. Because the implant is exposed above the skin and has become an integral part of the temporal bone, vibration of the implant represented skull vibrations. To ensure that middle-ear pathologies of the experimental group did not affect overall results, hearing thresholds were also obtained in 10 participants with normal hearing in response to stimulation at the same sites. We found that the magnitude of the bone vibrations initiated by the stimulation at the two sites (neck and mastoid), detected by a laser Doppler vibrometer on the bone-anchored implant, was linearly related to stimulus intensity. It was therefore possible to extrapolate the vibration magnitudes at low-intensity stimulation, where poor signal-to-noise ratio limited actual recordings. It was found that the vibration magnitude differences (between soft-tissue and bone stimulation) did not differ from the hearing threshold differences at the tested frequencies. Results of the present study suggest that bone vibration magnitude differences can adequately explain hearing threshold differences and are likely to be responsible for the hearing sensation.
Thus, the present results support the idea that bone and soft-tissue conduction could share the same underlying mechanism, namely the induction of bone vibrations. Studies with the present methodology should be continued in future work in order to obtain further insight into the underlying mechanism of activation of the hearing system.

  19. Reconstruction of color images via Haar wavelet based on digital micromirror device

    NASA Astrophysics Data System (ADS)

    Liu, Xingjiong; He, Weiji; Gu, Guohua

    2015-10-01

    A digital micromirror device (DMD) is introduced to form a Haar wavelet basis, projecting structured illumination, including red, green and blue light, onto the color target image. The light intensity signals reflected from the target image are received synchronously by a bucket detector which has no spatial resolution, converted into voltage signals and then transferred to a PC [1]. To achieve synchronization, several synchronization processes are added during data acquisition. In the data collection process, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. Monochrome grayscale images are obtained under red, green and blue structured illumination, respectively, using the inverse Haar wavelet transform. A color fusion algorithm is applied to the three monochrome grayscale images to obtain the final color image. According to the imaging principle, an experimental demonstration device was assembled. The letter "K" and the X-rite Color Checker Passport were projected and reconstructed as target images, and the final reconstructed color images have good quality. The Haar wavelet reconstruction method reduces the sampling rate considerably, and provides color information without compromising the resolution of the final image.
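
    The sampling-rate reduction rests on the fact that structured scenes have few significant Haar coefficients. The sketch below is a purely computational stand-in (no DMD, no bucket detector): a full Haar decomposition of a hypothetical piecewise-constant "image row" showing that only a handful of coefficients survive thresholding, yet reconstruction is exact.

```python
import numpy as np

def haar_analysis(x):
    """Full orthonormal Haar decomposition (length must be a power of two)."""
    a = np.asarray(x, dtype=float)
    details = []  # fine to coarse
    while len(a) > 1:
        details.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a, details

def haar_synthesis(a, details):
    """Invert haar_analysis, coarsest level first."""
    for d in reversed(details):
        nxt = np.empty(2 * len(a))
        nxt[0::2] = (a + d) / np.sqrt(2.0)
        nxt[1::2] = (a - d) / np.sqrt(2.0)
        a = nxt
    return a

# Piecewise-constant row of 64 pixels: only a few Haar coefficients are nonzero.
row = np.repeat([0.0, 2.0, 2.0, -1.0], 16)
approx, details = haar_analysis(row)
coeffs = np.concatenate([approx] + details)
nonzero = int(np.sum(np.abs(coeffs) > 1e-9))

# Keep only the significant coefficients and invert: reconstruction is exact here.
details_kept = [np.where(np.abs(d) > 1e-9, d, 0.0) for d in details]
rec = haar_synthesis(approx, details_kept)
```

    Of the 64 coefficients only four are nonzero, which is the sparsity that the wavelet-tree prediction step exploits to cut the number of projected patterns.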

  20. A factorization approach to next-to-leading-power threshold logarithms

    NASA Astrophysics Data System (ADS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Melville, S.; Vernazza, L.; White, C. D.

    2015-06-01

    Threshold logarithms become dominant in partonic cross sections when the selected final state forces gluon radiation to be soft or collinear. Such radiation factorizes at the level of scattering amplitudes, and this leads to the resummation of threshold logarithms which appear at leading power in the threshold variable. In this paper, we consider the extension of this factorization to include effects suppressed by a single power of the threshold variable. Building upon the Low-Burnett-Kroll-Del Duca (LBKD) theorem, we propose a decomposition of radiative amplitudes into universal building blocks, which contain all effects ultimately responsible for next-to-leading-power (NLP) threshold logarithms in hadronic cross sections for electroweak annihilation processes. In particular, we provide a NLO evaluation of the radiative jet function, responsible for the interference of next-to-soft and collinear effects in these cross sections. As a test, using our expression for the amplitude, we reproduce all abelian-like NLP threshold logarithms in the NNLO Drell-Yan cross section, including the interplay of real and virtual emissions. Our results are a significant step towards developing a generally applicable resummation formalism for NLP threshold effects, and illustrate the breakdown of next-to-soft theorems for gauge theory amplitudes at loop level.

  1. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    In order to solve the problem of medical image segmentation, a wavelet neural network segmentation algorithm based on a combined maximum entropy criterion is proposed. First, a bee colony algorithm is used to optimize the network parameters of the wavelet neural network (network structure, initial weights, threshold values and so on), so that training converges quickly to high precision and avoids falling into local extrema. Then the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, so as to achieve automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm effectively reduces sample training time and improves convergence precision, and its segmentation is more accurate and effective than that of a traditional BP neural network (back-propagation neural network: a multilayer feed-forward neural network trained according to the error back-propagation algorithm).

  2. Forecasting East Asian Indices Futures via a Novel Hybrid of Wavelet-PCA Denoising and Artificial Neural Network Models

    PubMed Central

    2016-01-01

    The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today’s increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to deduct the same noise from the open-high-low-close signal of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong’s Hang Seng futures, Japan’s NIKKEI 225 futures, Singapore’s MSCI futures, South Korea’s KOSPI 200 futures, and Taiwan’s TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN are more than the threshold buy-and-hold for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which insists that mechanical rules cannot outperform the threshold buy-and-hold. The findings, however, are consistent with literature that advocates technical analysis. PMID:27248692

  3. Forecasting East Asian Indices Futures via a Novel Hybrid of Wavelet-PCA Denoising and Artificial Neural Network Models.

    PubMed

    Chan Phooi M'ng, Jacinta; Mehralizadeh, Mohammadali

    2016-01-01

    The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today's increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to deduct the same noise from the open-high-low-close signal of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong's Hang Seng futures, Japan's NIKKEI 225 futures, Singapore's MSCI futures, South Korea's KOSPI 200 futures, and Taiwan's TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN are more than the threshold buy-and-hold for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which insists that mechanical rules cannot outperform the threshold buy-and-hold. The findings, however, are consistent with literature that advocates technical analysis.

  4. Taste acuity of the human palate. III. Studies with taste solutions on subjects in different age groups.

    PubMed

    Nilsson, B

    1979-01-01

    The taste acuity at the midline of the hard and soft palate near their junction and, for comparison, on representative areas of the tongue was determined in 80 subjects aged 11-79 years by applying test solutions of the four basic tastes. Twenty-one subjects (26%) could identify at least one taste on the hard palate but none could recognize all four tastes. Seventy subjects (87%) could identify at least one taste on the soft palate and 37 subjects (46%) could recognize all four tastes. Taste thresholds were much higher on the hard palate than on the tongue and were in most cases higher on the soft palate than on the tongue. The ability to recognize all four tastes was less frequent in older than in younger subjects and the difference was greatest on the soft palate and least at the foliate papillae. The differences were greatest for citric acid and least for sucrose. There was a tendency to lower thresholds for women compared to men for all four tastes on all areas examined which was most pronounced on the soft palate. No differences in taste thresholds were found between denture wearers and subjects with natural dentition. Smokers had higher thresholds than non-smokers only for salt on the soft palate and the base of the tongue.

  5. Finger wear detection for production line battery tester

    DOEpatents

    Depiante, Eduardo V.

    1997-01-01

    A method for detecting wear in a battery tester probe. The method includes providing a battery tester unit having at least one tester finger; generating a tester signal using the tester fingers and battery tester unit, the signal being characteristic of the electrochemical condition of the battery and of the tester finger; applying a wavelet transformation to the tester signal, including computing a mother wavelet, to produce finger wear indicator signals; analyzing the signals to create a finger wear index; comparing the wear index for the tester finger with the index for a new tester finger; and generating a tester finger change signal to indicate that a threshold wear change has been reached.

  6. Local cooling reduces skin ischemia under surface pressure in rats: an assessment by wavelet analysis of laser Doppler blood flow oscillations.

    PubMed

    Jan, Yih-Kuen; Lee, Bernard; Liao, Fuyuan; Foreman, Robert D

    2012-10-01

    The objectives of this study were to investigate the effects of local cooling on skin blood flow response to prolonged surface pressure and to identify associated physiological controls mediating these responses using the wavelet analysis of blood flow oscillations in rats. Twelve Sprague-Dawley rats were randomly assigned to three protocols, including pressure with local cooling (Δt = -10 °C), pressure with local heating (Δt = 10 °C) and pressure without temperature changes. Pressure of 700 mmHg was applied to the right trochanter area of rats for 3 h. Skin blood flow was measured using laser Doppler flowmetry. The 3 h loading period was divided into non-overlapping 30 min epochs for the analysis of the changes of skin blood flow oscillations using wavelet spectral analysis. The wavelet amplitudes and powers of three frequencies (metabolic, neurogenic and myogenic) of skin blood flow oscillations were calculated. The results showed that after an initial loading period of 30 min, skin blood flow continually decreased under the conditions of pressure with heating and of pressure without temperature changes, but maintained stable under the condition of pressure with cooling. Wavelet analysis revealed that stable skin blood flow under pressure with cooling was attributed to changes in the metabolic and myogenic frequencies. This study demonstrates that local cooling may be useful for reducing ischemia of weight-bearing soft tissues that prevents pressure ulcers.
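
    The wavelet spectral analysis used here amounts to measuring the amplitude of blood flow oscillations in narrow frequency bands. A hypothetical NumPy sketch with a complex Morlet wavelet follows; the 0.1 Hz test oscillation, sampling rate and w0 = 6 are illustrative assumptions, not the study's parameters or band definitions.

```python
import numpy as np

def morlet_amplitude(x, fs, f, w0=6.0):
    """Amplitude of a complex-Morlet wavelet transform at one centre frequency f (Hz)."""
    s = w0 * fs / (2.0 * np.pi * f)                 # scale in samples
    k = np.arange(-int(4 * s), int(4 * s) + 1)      # wavelet support, +/- 4 std
    psi = np.exp(1j * w0 * k / s) * np.exp(-k**2 / (2.0 * s**2)) / np.sqrt(s)
    return np.abs(np.convolve(x, psi, mode='same'))

fs = 10.0                          # Hz, illustrative sampling rate
t = np.arange(0.0, 300.0, 1.0 / fs)
flow = np.sin(2 * np.pi * 0.1 * t)  # a 0.1 Hz oscillation standing in for one band

# Mean amplitude away from the edges, at a matching and a mismatched frequency.
amp_match = morlet_amplitude(flow, fs, 0.1)[500:-500].mean()
amp_off = morlet_amplitude(flow, fs, 1.0)[500:-500].mean()
```

    Averaging such amplitudes within the metabolic, neurogenic and myogenic bands over each 30 min epoch gives exactly the kind of band-power time course the study compares across cooling conditions.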

  7. Local cooling reduces skin ischemia under surface pressure in rats: an assessment by wavelet analysis of laser Doppler blood flow oscillations

    PubMed Central

    Jan, Yih-Kuen; Lee, Bernard; Liao, Fuyuan; Foreman, Robert D.

    2012-01-01

    The objectives of this study were to investigate the effects of local cooling on skin blood flow response to prolonged surface pressure and to identify associated physiological controls mediating these responses using wavelet analysis of blood flow oscillations in rats. Twelve Sprague Dawley rats were randomly assigned into three protocols, including pressure with local cooling (Δt= −10°C), pressure with local heating (Δt= 10°C), and pressure without temperature changes. Pressure of 700 mmHg was applied to the right trochanter area of rats for 3 hours. Skin blood flow was measured using laser Doppler flowmetry. The 3-hour loading period was divided into non-overlapping 30 min epochs for analysis of the changes of skin blood flow oscillations using wavelet spectral analysis. The wavelet amplitudes and powers of three frequencies (metabolic, neurogenic and myogenic) of skin blood flow oscillations were calculated. The results showed that after an initial loading period of 30 min, skin blood flow continually decreased in the conditions of pressure with heating and of pressure without temperature changes, but maintained stable in the condition of pressure with cooling. Wavelet analysis revealed that stable skin blood flow under pressure with cooling was attributed to changes in the metabolic and myogenic frequencies. This study demonstrates that local cooling may be useful for reducing ischemia of weight-bearing soft tissues that prevents pressure ulcers. PMID:23010955

  8. ECG compression using Slantlet and lifting wavelet transform with and without normalisation

    NASA Astrophysics Data System (ADS)

    Aggarwal, Vibha; Singh Patterh, Manjeet

    2013-05-01

    This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within a tolerance. Then a binary look-up table is made to store the position map for zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding, and the look-up table is encoded by Huffman coding. The results show that the LWT gives the best result among the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the case of normalisation, the TC are normalised by dividing them by ? (where ? is the number of samples) to reduce their range. The normalised coefficients (NC) are then thresholded, after which the procedure is the same as for coefficients without normalisation. The results show that the compression ratio (CR) with normalisation is improved compared to that without normalisation.
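
    The bisection step works because the PRD is a monotone (step) function of the threshold: raising the threshold zeroes more coefficients and can only increase the distortion. A generic sketch follows, using random stand-in coefficients rather than actual transformed ECG data; the 5% target and tolerance are illustrative.

```python
import numpy as np

def prd_at(coeffs, thr):
    """PRD (%) caused by zeroing coefficients below thr (orthonormal transform assumed,
    so the reconstruction error energy equals the discarded coefficient energy)."""
    removed = np.where(np.abs(coeffs) < thr, coeffs, 0.0)
    return 100.0 * np.sqrt(np.sum(removed**2) / np.sum(coeffs**2))

def bisect_threshold(coeffs, target_prd, tol=0.1, max_iter=80):
    """Bisection search for a threshold whose PRD matches target_prd within tol."""
    lo, hi = 0.0, float(np.max(np.abs(coeffs)))
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        p = prd_at(coeffs, mid)
        if abs(p - target_prd) <= tol:
            break
        if p > target_prd:
            hi = mid
        else:
            lo = mid
    return mid

rng = np.random.default_rng(2)
c = rng.normal(size=4096)           # stand-in for transformed ECG coefficients
thr = bisect_threshold(c, 5.0)      # aim for a 5% PRD
```

    Because the PRD only changes in small jumps when there are many coefficients, the tolerance is reachable; with very short signals a looser tolerance may be needed.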

  9. Multiscale computations with a wavelet-adaptive algorithm

    NASA Astrophysics Data System (ADS)

    Rastigejev, Yevgenii Anatolyevich

    A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on the mathematically well established wavelet theory. This allows us to provide error estimates of the solution which are used in conjunction with an appropriate threshold criteria to adapt the collocation grid. The efficient data structures for grid representation as well as related computational algorithms to support grid rearrangement procedure are developed. The algorithm is applied to the simulation of phenomena described by Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of comparable accuracy as the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.

  10. Seismic instantaneous frequency extraction based on the SST-MAW

    NASA Astrophysics Data System (ADS)

    Liu, Naihao; Gao, Jinghuai; Jiang, Xiudi; Zhang, Zhuosheng; Wang, Ping

    2018-06-01

    The instantaneous frequency (IF) extraction of seismic data has been widely applied in seismic exploration for decades, for example in detecting seismic absorption and characterizing depositional thicknesses. Based on complex-trace analysis, the Hilbert transform (HT) can extract the IF directly; this is a traditional method but one that is susceptible to noise. In this paper, a robust approach based on the synchrosqueezing transform (SST) is proposed to extract the IF from seismic data. In this process, a novel analytical wavelet is developed and chosen as the basic wavelet, called the modified analytical wavelet (MAW), which is derived from the three-parameter wavelet. After transforming the seismic signal into a sparse time-frequency domain via the SST with the MAW (SST-MAW), an adaptive threshold is introduced to improve the noise immunity and accuracy of the IF extraction in a noisy environment. Note that the SST-MAW reconstructs a complex trace to extract the seismic IF. To demonstrate the effectiveness of the proposed method, we apply the SST-MAW to synthetic data and field seismic data. Numerical experiments suggest that the proposed procedure yields higher resolution and better anti-noise performance than the conventional IF extraction methods based on the HT and the continuous wavelet transform. Moreover, geological features (such as channels) are well characterized, which is insightful for further oil/gas reservoir identification.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maiolo, M., E-mail: massimo.maiolo@zhaw.ch; ZHAW, Institut für Angewandte Simulation, Grüental, CH-8820 Wädenswil; Vancheri, A., E-mail: alberto.vancheri@supsi.ch

    In this paper, we apply Multiresolution Analysis (MRA) to develop sparse but accurate representations for the Multiscale Coarse-Graining (MSCG) approximation to the many-body potential of mean force. We rigorously frame the MSCG method within MRA so that all the instruments of this theory become available, together with a multitude of new basis functions, namely the wavelets. The coarse-grained (CG) force field is hierarchically decomposed at different resolution levels, enabling the most appropriate wavelet family to be chosen for each physical interaction without requiring a priori knowledge of where the details are localized. The representation of the CG potential in this new efficient orthonormal basis leads to a compression of the signal information into a few large expansion coefficients. The multiresolution property of the wavelet transform allows the noise to be isolated and removed from the CG force-field reconstruction by thresholding the basis function coefficients in each frequency band independently. We discuss the implementation of our wavelet-based MSCG approach and demonstrate its accuracy using two different condensed-phase systems, i.e. liquid water and methanol. Simulations of liquid argon have also been performed using a one-to-one mapping between atomistic and CG sites. The latter model allows the accuracy of the method to be verified and different choices of wavelet families to be tested. Furthermore, the results of the computer simulations show that the efficiency and sparsity of the representation of the CG force field can be traced back to the mathematical properties of the chosen family of wavelets. This result is in agreement with what is known from the theory of multiresolution analysis of signals.

  12. Spatiotemporal groundwater level modeling using hybrid artificial intelligence-meshless method

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Mousavi, Shahram

    2016-05-01

    Uncertainties in the field parameters, noise in the observed data and unknown boundary conditions are the main factors that limit the modeling and simulation of groundwater level (GL) time series. This paper presents a hybrid artificial intelligence-meshless model for spatiotemporal GL modeling. First, the GL time series observed at different piezometers were de-noised using a threshold-based wavelet method, and the impact of de-noised versus noisy data on temporal GL modeling was compared using an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). In the second step, both ANN and ANFIS models were calibrated and verified using the GL data of each piezometer, rainfall and runoff under various input scenarios to predict the GL one month ahead. In the final step, the GLs simulated in the second step were imposed as interior conditions for a multiquadric radial basis function (RBF) based solution of the governing partial differential equation of groundwater flow, to estimate the GL at any desired point within the plain where there is no observation. In order to evaluate and compare the GL pattern at different time scales, cross-wavelet coherence was also applied to the GL time series of the piezometers. The results showed that the threshold-based wavelet de-noising approach can enhance the performance of the modeling by up to 13.4%. It was also found that the ANFIS-RBF model is more accurate than the ANN-RBF model in both the calibration and validation steps.
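    The threshold-based wavelet de-noising used in the first step above can be sketched in a few lines. This is a generic illustration, not the paper's exact procedure: it assumes a single-level Haar transform and the Donoho-Johnstone universal threshold with soft shrinkage, whereas the paper does not specify its wavelet or threshold rule.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar level."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x):
    approx, detail = haar_dwt(x)
    # noise level estimated from the detail band (median absolute deviation),
    # then the universal threshold sigma * sqrt(2 log N)
    sigma = np.median(np.abs(detail)) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(x)))
    return haar_idwt(approx, soft_threshold(detail, t))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))   # stand-in for a GL series
noisy = clean + 0.3 * rng.standard_normal(256)
denoised = denoise(noisy)
```

With the fixed seed above, the de-noised series is measurably closer to the clean signal than the noisy input, which is the kind of gain (up to 13.4% here) the paper reports for its ANN/ANFIS inputs.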

  13. Finger wear detection for production line battery tester

    DOEpatents

    Depiante, E.V.

    1997-11-18

    A method is described for detecting wear in a battery tester probe. The method includes providing a battery tester unit having at least one tester finger, generating a tester signal using the tester fingers and battery tester unit with the signal characteristic of the electrochemical condition of the battery and the tester finger, applying wavelet transformation to the tester signal including computing a mother wavelet to produce finger wear indicator signals, analyzing the signals to create a finger wear index, comparing the wear index for the tester finger with the index for a new tester finger and generating a tester finger signal change signal to indicate achieving a threshold wear change. 9 figs.

  14. Decomposition of Fuzzy Soft Sets with Finite Value Spaces

    PubMed Central

    Jun, Young Bae

    2014-01-01

    Fuzzy soft sets are a hybrid soft computing model that integrates gradualness and parameterization in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore the decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342

  15. Decomposition of fuzzy soft sets with finite value spaces.

    PubMed

    Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad

    2014-01-01

    Fuzzy soft sets are a hybrid soft computing model that integrates gradualness and parameterization in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore the decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.

  16. Wavelet compression of noisy tomographic images

    NASA Astrophysics Data System (ADS)

    Kappeler, Christian; Mueller, Stefan P.

    1995-09-01

    3D data acquisition is increasingly used in positron emission tomography (PET) to collect a larger fraction of the emitted radiation. A major practical difficulty with data storage and transmission in 3D-PET is the large size of the data sets; a typical dynamic study contains about 200 Mbyte of data. PET images inherently have a high level of photon noise and therefore are usually evaluated after being processed by a smoothing filter. In this work we examined lossy compression schemes under the postulate that they not induce image modifications exceeding those resulting from low-pass filtering; the standard we refer to is the Hanning filter. Resolution and inhomogeneity serve as figures of merit for quantifying image quality. The images to be compressed are transformed to a wavelet representation using Daubechies12 wavelets and, after filtering, compressed by thresholding. We do not include further compression by quantization and coding here. Achievable compression factors at this level of processing are thirty to fifty.
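    The compression-by-thresholding idea can be sketched as follows: transform, zero all but the largest-magnitude coefficients, and report the ratio of total to retained coefficients. This is an illustrative sketch only; it uses a multi-level Haar transform as a stand-in for the paper's Daubechies12 wavelets, and the 2% retention fraction is an assumption chosen to land in the thirty-to-fifty range the abstract quotes.

```python
import numpy as np

def haar_dwt_multi(x, levels):
    """Multi-level Haar DWT; returns detail bands plus the final approximation."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

def compress(coeffs, keep_fraction):
    """Zero all but the largest-magnitude fraction of wavelet coefficients."""
    flat = np.abs(np.concatenate(coeffs))
    cutoff = np.quantile(flat, 1.0 - keep_fraction)
    return [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]

x = np.sin(np.linspace(0, 8 * np.pi, 1024))      # smooth test signal
coeffs = haar_dwt_multi(x, 5)
kept = compress(coeffs, 0.02)                    # retain roughly 2% of coefficients
nonzero = sum(int(np.count_nonzero(c)) for c in kept)
total = sum(len(c) for c in coeffs)
ratio = total / nonzero                          # achieved compression factor
```

For a smooth signal the surviving coefficients cluster in the coarse bands, and `ratio` lands around the factor-of-fifty regime; for noisy tomographic data the same mechanism is what the thresholding step exploits.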

  17. Built-Up Area Detection from High-Resolution Satellite Images Using Multi-Scale Wavelet Transform and Local Spatial Statistics

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Zhang, Y.; Gao, J.; Yuan, Y.; Lv, Z.

    2018-04-01

    Recently, built-up area detection from high-resolution satellite images (HRSI) has attracted increasing attention because HRSI can provide more detailed object information. In this paper, multi-resolution wavelet transform and a local spatial autocorrelation statistic are introduced to model the spatial patterns of built-up areas. First, the input image is decomposed into high- and low-frequency subbands by a three-level wavelet transform. Then the high-frequency detail information in three directions (horizontal, vertical and diagonal) is extracted, followed by a maximization operation that integrates the information from all directions. Afterward, a cross-scale operation is implemented to fuse the different levels of information. Finally, the local spatial autocorrelation statistic is introduced to enhance the saliency of built-up features, and an adaptive threshold algorithm is used to detect the built-up areas. Experiments conducted on ZY-3 and Quickbird panchromatic satellite images show that the proposed method is very effective for built-up area detection.

  18. Damage threshold of platinum coating used for optics for self-seeding of soft x-ray free electron laser

    DOE PAGES

    Krzywinski, Jacek; Cocco, Daniele; Moeller, Stefan; ...

    2015-02-23

    We investigated the experimental damage threshold of a platinum coating on a silicon substrate illuminated by soft x-ray radiation at a grazing incidence angle of 2.1 deg. The coating was the same as that of the blazed grating used for the soft x-ray self-seeding optics of the Linac Coherent Light Source free electron laser. The irradiation condition was chosen such that the absorbed dose was similar to the maximum dose expected for the grating. The expected dose was simulated by solving the Helmholtz equation in non-homogeneous media. The experiment was performed at 900 eV photon energy for both single-pulse and multi-shot conditions. We did not observe single-shot damage, which corresponds to a single-shot damage threshold higher than 3 J/cm². The multiple-shot damage thresholds measured for 10 shots and for about 600 shots were 0.95 J/cm² and 0.75 J/cm², respectively. The damage threshold occurred at an instantaneous dose higher than the melt dose of platinum.

  19. Speckle reduction in optical coherence tomography images based on wave atoms

    PubMed Central

    Du, Yongzhao; Liu, Gangjun; Feng, Guoying; Chen, Zhongping

    2014-01-01

    Optical coherence tomography (OCT) is an emerging noninvasive imaging technique based on low-coherence interferometry. OCT images suffer from speckle noise, which reduces image contrast. A shrinkage filter based on the wave atoms transform is proposed for speckle reduction in OCT images. The wave atoms transform is a new multiscale geometric analysis tool that offers sparser expansion and better representation for images containing oscillatory patterns and textures than traditional transforms such as the wavelet and curvelet transforms. A cycle-spinning technique is introduced to avoid visual artifacts, such as the Gibbs-like phenomenon, and to develop a translation-invariant wave atoms denoising scheme. The degree of speckle suppression in the denoised images is controlled by an adjustable parameter that determines the threshold in the wave atoms domain. The experimental results show that the proposed method can effectively remove the speckle noise and improve OCT image quality. The signal-to-noise ratio, contrast-to-noise ratio, average equivalent number of looks, and cross-correlation (XCOR) values are obtained, and the results are also compared with wavelet and curvelet thresholding techniques. PMID:24825507

  20. Mass Detection in Mammographic Images Using Wavelet Processing and Adaptive Threshold Technique.

    PubMed

    Vikhe, P S; Thool, V R

    2016-04-01

    Detection of masses in mammograms for early diagnosis of breast cancer is a significant task in the reduction of the mortality rate. However, in some cases, screening for masses is difficult for the radiologist, due to variation in contrast, fuzzy edges and noisy mammograms. Masses and micro-calcifications are the distinctive signs for diagnosis of breast cancer. This paper presents a method for mass enhancement using a piecewise linear operator in combination with wavelet processing of mammographic images. The method includes artifact suppression and pectoral muscle removal based on morphological operations. Finally, mass segmentation using an adaptive threshold technique is carried out to separate the mass from the background. The proposed method has been tested on 130 (45 + 85) images, achieving 90.9 and 91% True Positive Fraction (TPF) at 2.35 and 2.1 average False Positives Per Image (FP/I) on two different databases, namely the Mammographic Image Analysis Society (MIAS) and the Digital Database for Screening Mammography (DDSM). The obtained results show that the proposed technique improves diagnosis in early breast cancer detection.
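    The adaptive-threshold segmentation step can be illustrated with a local-mean rule: a pixel is foreground when it exceeds the mean of its neighbourhood by an offset. This is a generic sketch on a synthetic patch, not the paper's method; the block size, offset, and integral-image implementation are all assumptions.

```python
import numpy as np

def adaptive_threshold(img, block=31, offset=0.0):
    """Local-mean adaptive threshold: a pixel is foreground when it exceeds
    the mean of its block x block neighbourhood by `offset`. Local means are
    computed from an integral image, so cost is independent of block size."""
    img = np.asarray(img, dtype=float)
    pad = block // 2
    padded = np.pad(img, pad, mode='edge')
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))          # zero row/col for window sums
    h, w = img.shape
    window_sum = (ii[block:block + h, block:block + w]
                  - ii[:h, block:block + w]
                  - ii[block:block + h, :w]
                  + ii[:h, :w])
    local_mean = window_sum / (block * block)
    return img > local_mean + offset

# synthetic mammogram-like patch: slowly varying background plus a bright "mass"
yy, xx = np.mgrid[0:64, 0:64]
img = 0.2 + 0.001 * xx
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 36] += 0.5
mask = adaptive_threshold(img, block=31, offset=0.1)
```

Because the threshold tracks the local mean, the slowly varying background is rejected even though its absolute intensity changes across the image, which is why adaptive thresholds cope with the contrast variation the abstract mentions.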

  1. A detection method for X-ray images based on wavelet transforms: the case of the ROSAT PSPC.

    NASA Astrophysics Data System (ADS)

    Damiani, F.; Maggio, A.; Micela, G.; Sciortino, S.

    1996-02-01

    The authors have developed a method based on wavelet transforms (WT) to detect efficiently sources in PSPC X-ray images. The multiscale approach typical of WT can be used to detect sources with a large range of sizes, and to estimate their size and count rate. Significance thresholds for candidate detections (found as local WT maxima) have been derived from a detailed study of the probability distribution of the WT of a locally uniform background. The use of the exposure map allows good detection efficiency to be retained even near PSPC ribs and edges. The algorithm may also be used to get upper limits to the count rate of undetected objects. Simulations of realistic PSPC images containing either pure background or background+sources were used to test the overall algorithm performances, and to assess the frequency of spurious detections (vs. detection threshold) and the algorithm sensitivity. Actual PSPC images of galaxies and star clusters show the algorithm to have good performance even in cases of extended sources and crowded fields.

  2. Multiscale peak detection in wavelet space.

    PubMed

    Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng

    2015-12-07

    Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, which can increase the accuracy and reliability by identifying peaks across scales in wavelet space and implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate in the case of peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD) by taking full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It can achieve a high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset as well as the Romanian database of Raman spectra, which is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than MassSpecWavelet and MALDIquant methods. Superior results in Raman spectra suggest that MSPD seems to be a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at .
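    The core CWT idea behind methods like MSPD, requiring a location to be a local maximum of the wavelet response at several scales before accepting it as a peak, can be sketched in numpy. This is a crude stand-in for MSPD's ridge/valley/zero-crossing analysis, not its implementation; the Ricker wavelet widths, the 5% amplitude floor, and the `min_scales` vote are assumptions.

```python
import numpy as np

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet sampled at `points` positions, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_rows(x, widths):
    """One convolution row per scale (odd-length kernels keep peaks centred)."""
    return np.array([np.convolve(x, ricker(10 * a + 1, a), mode='same')
                     for a in widths])

def detect_peaks(x, widths, min_scales=3):
    """Accept locations that are positive local maxima of the CWT at
    min_scales or more scales (a crude stand-in for ridge tracking)."""
    mat = cwt_rows(x, widths)
    votes = np.zeros(len(x), dtype=int)
    for row in mat:
        local_max = ((row[1:-1] > row[:-2]) & (row[1:-1] > row[2:])
                     & (row[1:-1] > 0.05 * row.max()))
        votes[1:-1] += local_max
    return np.flatnonzero(votes >= min_scales)

t = np.arange(300)
signal = (np.exp(-0.5 * ((t - 80) / 5.0) ** 2)
          + 0.8 * np.exp(-0.5 * ((t - 200) / 5.0) ** 2))
peaks = detect_peaks(signal, widths=[2, 4, 6, 8], min_scales=3)
```

Requiring agreement across scales is what makes CWT-based detection robust to baseline drift and narrow noise spikes: a spike is a maximum only at the smallest scales, so it never collects enough votes.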

  3. Gradual multifractal reconstruction of time-series: Formulation of the method and an application to the coupling between stock market indices and their Hölder exponents

    NASA Astrophysics Data System (ADS)

    Keylock, Christopher J.

    2018-04-01

    A technique termed gradual multifractal reconstruction (GMR) is formulated. A continuum is defined from a signal that preserves the pointwise Hölder exponent (multifractal) structure of the original but randomises the locations of the original data values with respect to this structure (φ = 0), to the original signal itself (φ = 1). We demonstrate that this continuum may be populated with synthetic time series by undertaking selective randomisation of wavelet phases using a dual-tree complex wavelet transform. That is, the φ = 0 end of the continuum is realised using the recently proposed iterated, amplitude-adjusted wavelet transform algorithm (Keylock, 2017) that fully randomises the wavelet phases. This is extended to the GMR formulation by selective phase randomisation depending on whether or not the wavelet coefficient amplitudes exceed a threshold criterion. An econophysics application of the technique is presented. The relations between the normalised log-returns and their Hölder exponents for the daily returns of eight financial indices are compared. One particularly noticeable result is the change for the two American indices (NASDAQ 100 and S&P 500) from a non-significant to a strongly significant (as determined using GMR) cross-correlation between the returns and their Hölder exponents from before the 2008 crash to afterwards. This is also reflected in the skewness of the phase difference distributions, which exhibit a geographical structure, with Asian markets not exhibiting significant skewness in contrast to those from elsewhere globally.
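    The selective phase randomisation underlying GMR can be sketched with a plain Fourier transform standing in for the paper's dual-tree complex wavelet transform: keep the phases of the largest-amplitude coefficients and randomise the rest, so φ = 1 reproduces the signal and φ = 0 fully scrambles it. The function name and the quantile rule are assumptions of this sketch.

```python
import numpy as np

def gmr_surrogate(x, phi, rng):
    """Surrogate along a GMR-like continuum: keep the Fourier phases of the
    largest-amplitude coefficients (fraction phi), randomise the rest.
    Amplitudes are always preserved, so the power spectrum is unchanged."""
    X = np.fft.rfft(x)
    amp = np.abs(X)
    keep = amp >= np.quantile(amp, 1.0 - phi) if phi > 0 else np.zeros(len(X), bool)
    phases = np.where(keep, np.angle(X), rng.uniform(-np.pi, np.pi, len(X)))
    phases[0] = np.angle(X[0])        # DC must stay real
    phases[-1] = np.angle(X[-1])      # Nyquist must stay real (even length)
    return np.fft.irfft(amp * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(3)
x = np.cumsum(rng.standard_normal(512))    # random-walk "price" series
same = gmr_surrogate(x, 1.0, rng)          # phi = 1 reproduces the signal
scrambled = gmr_surrogate(x, 0.0, rng)     # phi = 0 randomises all phases
```

Intermediate φ values interpolate between the two endpoints, which is how the paper tests whether an observed statistic (here, the returns/Hölder-exponent cross-correlation) survives progressive destruction of phase structure.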

  4. Texture analysis of intermediate-advanced hepatocellular carcinoma: prognosis and patients' selection of transcatheter arterial chemoembolization and sorafenib

    PubMed Central

    Fu, Sirui; Chen, Shuting; Liang, Changhong; Liu, Zaiyi; Zhu, Yanjie; Li, Yong; Lu, Ligong

    2017-01-01

    Transcatheter arterial chemoembolization (TACE) combined with sorafenib for unselected hepatocellular carcinoma (HCC) is controversial. We explored the potential of texture analysis for appropriate patient selection. A total of 261 HCCs were included (TACE group: n = 197; TACE plus sorafenib (TACE+Sorafenib) group: n = 64). We applied a Gabor filter and wavelet transform with 3 band-width responses (filter 0, 1.0, and 1.5) to portal-phase computed tomography (CT) images of the TACE group. Twenty-one textural parameters per filter were extracted from regions of interest delineated around the tumor outline. After testing survival correlations, the TACE group was subdivided according to parameter thresholds in receiver operating characteristic curves and compared to TACE+Sorafenib group survival. Gabor-1-90 (filter 0) was most significantly correlated with time to progression (TTP). The TACE group was accordingly divided into the TACE-1 (Gabor-1-90 ≤ 3.6190) and TACE-2 (Gabor-1-90 > 3.6190) subgroups; TTP was similar in the TACE-1 subgroup and the TACE+Sorafenib group, but shorter in the TACE-2 subgroup. Only wavelet-3-D (filter 1.0) correlated with overall survival (OS), and was used for subgrouping. The TACE-5 (wavelet-3-D ≤ 12.2620) subgroup and the TACE+Sorafenib group showed similar OS, while the TACE-6 (wavelet-3-D > 12.2620) subgroup had shorter OS. Gabor-1-90 and wavelet-3-D were consistent. Independent of tumor number or size, CT textural parameters are correlated with TTP and OS. Patients with lower Gabor-1-90 (filter 0) and wavelet-3-D (filter 1.0) should be treated with TACE and sorafenib. Texture analysis holds promise for appropriate selection of HCCs for this combination therapy. PMID:27911268

  5. Differentiating epileptic from non-epileptic high frequency intracerebral EEG signals with measures of wavelet entropy.

    PubMed

    Mooij, Anne H; Frauscher, Birgit; Amiri, Mina; Otte, Willem M; Gotman, Jean

    2016-12-01

    To assess whether there is a difference in background activity in the ripple band (80-200 Hz) between epileptic and non-epileptic channels, and whether this difference is sufficient for their reliable separation, we calculated the mean and standard deviation of wavelet entropy in 303 non-epileptic and 334 epileptic channels from 50 patients with intracerebral depth electrodes and used these measures as predictors in a multivariable logistic regression model. We assessed sensitivity, positive predictive value (PPV) and negative predictive value (NPV) based on a probability threshold corresponding to 90% specificity. The probability of a channel being epileptic increased with higher mean (p=0.004) and particularly with higher standard deviation (p<0.0001). The performance of the model was, however, not sufficient for fully classifying the channels: with a threshold corresponding to 90% specificity, sensitivity was 37%, PPV was 80%, and NPV was 56%. A channel with a high standard deviation of entropy is likely to be epileptic; with a threshold corresponding to 90% specificity our model can reliably select a subset of epileptic channels. Most studies have concentrated on brief ripple events; we showed that background activity in the ripple band also has some ability to discriminate epileptic channels. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
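    Wavelet entropy, the predictor used above, is typically defined as the Shannon entropy of the relative signal energy across wavelet bands: low when energy is concentrated in a few bands, high when it is spread out. A minimal numpy sketch, assuming a Haar decomposition (the paper does not specify its wavelet):

```python
import numpy as np

def wavelet_entropy(x, levels=5):
    """Shannon entropy (nats) of the relative energy across Haar wavelet
    bands: detail bands d1..d_levels plus the final approximation."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]                     # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(1)
n = np.arange(256)
tone = np.sin(2 * np.pi * 2 * n / 256)    # narrow-band: energy in few bands
noise = rng.standard_normal(256)          # broadband: energy spread out
```

A narrow-band oscillation scores lower than broadband activity, which is why the mean and variability of this quantity over time can separate channel types.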

  6. BOOK REVIEW: The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    NASA Astrophysics Data System (ADS)

    Ng, J.; Kingsbury, N. G.

    2004-02-01

    This book provides an overview of the theory and practice of continuous and discrete wavelet transforms. Divided into seven chapters, the first three chapters of the book are introductory, describing the various forms of the wavelet transform and their computation, while the remaining chapters are devoted to applications in fluids, engineering, medicine and miscellaneous areas. Each chapter is well introduced, with suitable examples to demonstrate key concepts. Illustrations are included where appropriate, thus adding a visual dimension to the text. A noteworthy feature is the inclusion, at the end of each chapter, of a list of further resources from the academic literature which the interested reader can consult. The first chapter is purely an introduction to the text. The treatment of wavelet transforms begins in the second chapter, with the definition of what a wavelet is. The chapter continues by defining the continuous wavelet transform and its inverse and a description of how it may be used to interrogate signals. The continuous wavelet transform is then compared to the short-time Fourier transform. Energy and power spectra with respect to scale are also discussed and linked to their frequency counterparts. Towards the end of the chapter, the two-dimensional continuous wavelet transform is introduced. Examples of how the continuous wavelet transform is computed using the Mexican hat and Morlet wavelets are provided throughout. The third chapter introduces the discrete wavelet transform, with its distinction from the discretized continuous wavelet transform having been made clear at the end of the second chapter. In the first half of the chapter, the logarithmic discretization of the wavelet function is described, leading to a discussion of dyadic grid scaling, frames, orthogonal and orthonormal bases, scaling functions and multiresolution representation. 
The fast wavelet transform is introduced and its computation is illustrated with an example using the Haar wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this (and subsequent) chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. 
The treatment on wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. The treatment on finance touches on the use of wavelets by other authors in studying stock prices, commodity behaviour, market dynamics and foreign exchange rates. The treatment on geophysics covers what was omitted from the fourth chapter, namely, seismology, well logging, topographic feature analysis and the analysis of climatic data. The text concludes with an assortment of other application areas which could only be mentioned in passing. Unlike most other publications in the subject, this book does not treat wavelet transforms in a mathematically rigorous manner but rather aims to explain the mechanics of the wavelet transform in a way that is easy to understand. Consequently, it serves as an excellent overview of the subject rather than as a reference text. Keeping the mathematics to a minimum and omitting cumbersome and detailed proofs from the text, the book is best-suited to those who are new to wavelets or who want an intuitive understanding of the subject. Such an audience may include graduate students in engineering and professionals and researchers in engineering and the applied sciences.

  7. Automatic brain tumor detection in MRI: methodology and statistical validation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Islam, Mohammad A.; Shaik, Jahangheer; Parra, Carlos; Ogg, Robert

    2005-04-01

    Automated brain tumor segmentation and detection are immensely important in medical diagnostics because they provide information about anatomical structures as well as potentially abnormal tissue, which is necessary for appropriate surgical planning. In this work, we propose a novel automated brain tumor segmentation technique based on multiresolution texture information that combines fractional Brownian motion (fBm) and wavelet multiresolution analysis. Our wavelet-fractal technique combines the excellent multiresolution localization property of wavelets with the texture-extraction capability of fractals. We prove the efficacy of our technique by successfully segmenting pediatric brain MR images (MRIs) from St. Jude Children's Research Hospital. We use a self-organizing map (SOM) as our clustering tool, exploiting both pixel intensity and multiresolution texture features to obtain the segmented tumor. Our test results show that our technique successfully segments abnormal brain tissues in a set of T1 images. In the next step, we design a classifier using a feed-forward (FF) neural network to statistically validate the presence of tumor in MRI using both the multiresolution texture and the pixel intensity features. We estimate the corresponding receiver operating characteristic (ROC) curve based on the true positive and false positive fractions estimated from our classifier at different threshold values. The ROC, which can be considered a gold standard for proving the competence of a classifier, is obtained to ascertain the sensitivity and specificity of our classifier. We observe that at a threshold of 0.4 we achieve a true positive value of 1.0 (100%) while sacrificing only a 0.16 (16%) false positive value for the set of 50 T1 MRIs analyzed in this experiment.

  8. Detection algorithm for glass bottle mouth defect by continuous wavelet transform based on machine vision

    NASA Astrophysics Data System (ADS)

    Qian, Jinfang; Zhang, Changjiang

    2014-11-01

    An efficient algorithm based on the continuous wavelet transform combined with prior knowledge, which can be used to detect defects of the glass bottle mouth, is proposed. First, under illumination from an integrating-sphere light source, an image of a perfect glass bottle mouth is obtained by a Japanese Computar camera through an IEEE-1394b interface. A single-threshold method based on the gray-level histogram is used to obtain a binary image of the glass bottle mouth. To efficiently suppress noise, a moving-average filter is employed to smooth the histogram of the original glass bottle mouth image, and the continuous wavelet transform is then applied to accurately determine the segmentation threshold. Mathematical morphology operations are used to obtain a binary mask of a normal bottle mouth. A glass bottle to be inspected is moved to the detection zone by a conveyor belt, and both its mouth image and binary image are obtained by the above method. The binary image is multiplied by the normal bottle mask to obtain a region of interest. Four parameters (number of connected regions, coordinates of the centroid position, diameter of the inner circle, and area of the annular region) are computed from the region of interest. Glass bottle mouth detection rules are designed from these four parameters so as to accurately detect and identify defect conditions of the glass bottle. Finally, glass bottles from the Coca-Cola Company are used to verify the proposed algorithm. The experimental results show that the proposed algorithm can accurately detect the defect conditions of the glass bottles with 98% detection accuracy.
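    The histogram-based single-threshold step (smooth the gray-level histogram with a moving average, then place the threshold between the two dominant modes) can be sketched as below. This is an illustrative valley-seeking rule on synthetic data, not the paper's CWT-based threshold determination; the bin count, window size, and mode-separation heuristic are assumptions.

```python
import numpy as np

def histogram_threshold(values, bins=256, win=9):
    """Single threshold from a gray-level histogram: smooth the histogram
    with a moving-average filter, find the two dominant modes, and place
    the threshold at the valley between them."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    smooth = np.convolve(hist, np.ones(win) / win, mode='same')
    order = np.argsort(smooth)[::-1]          # bins by descending count
    p1 = order[0]
    # second mode: the tallest bin far enough from the first
    p2 = next(p for p in order if abs(int(p) - int(p1)) > bins // 8)
    lo, hi = sorted((int(p1), int(p2)))
    valley = lo + int(np.argmin(smooth[lo:hi + 1]))
    return edges[valley]

rng = np.random.default_rng(2)
pixels = np.concatenate([
    rng.normal(0.2, 0.05, 5000),   # dark background mode
    rng.normal(0.8, 0.05, 3000),   # bright bottle-mouth mode
]).clip(0.0, 1.0)
t = histogram_threshold(pixels)
binary = pixels > t
```

Smoothing the histogram before valley-seeking is the point of the moving-average step in the abstract: without it, noise in sparsely populated bins produces spurious local minima between the modes.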

  9. Cochlear implant characteristics and speech perception skills of adolescents with long-term device use.

    PubMed

    Davidson, Lisa S; Geers, Ann E; Brenner, Christine

    2010-10-01

    Updated cochlear implant technology and optimized fitting can have a substantial impact on speech perception. The effects of upgrades in processor technology and aided thresholds on word recognition at soft input levels and sentence recognition in noise were examined. We hypothesized that updated speech processors and lower aided thresholds would allow improved recognition of soft speech without compromising performance in noise. 109 teenagers who had used a Nucleus 22 cochlear implant since preschool were tested with their current speech processor(s) (101 unilateral and 8 bilateral): 13 used the Spectra, 22 the ESPrit 22, 61 the ESPrit 3G, and 13 the Freedom. The Lexical Neighborhood Test (LNT) was administered at 70 and 50 dB SPL and the Bamford-Kowal-Bench (BKB) sentences were administered in quiet and in noise. Aided thresholds were obtained for frequency-modulated tones from 250 to 4,000 Hz. Results were analyzed using repeated-measures analysis of variance. Aided thresholds for the Freedom/3G group were significantly lower (better) than for the Spectra/Sprint group. LNT scores at 50 dB were significantly higher for the Freedom/3G group. No significant differences between the two groups were found for the LNT at 70 dB or for sentences in quiet or noise. Adolescents using updated processors that allowed aided detection thresholds of 30 dB HL or better performed best at soft levels. The BKB-in-noise results suggest that greater access to soft speech does not compromise listening in noise.

  10. Indirect adaptive soft computing based wavelet-embedded control paradigms for WT/PV/SOFC in a grid/charging station connected hybrid power system.

    PubMed

    Mumtaz, Sidra; Khan, Laiq; Ahmed, Saghir; Bader, Rabiah

    2017-01-01

    This paper focuses on the indirect adaptive tracking control of renewable energy sources in a grid-connected hybrid power system. Renewable energy systems have low efficiency and an intermittent nature due to unpredictable meteorological conditions. The domestic load and conventional charging stations behave in an uncertain manner. To operate the renewable energy sources efficiently and harvest maximum power, the instantaneous nonlinear dynamics should be captured online. A Chebyshev-wavelet embedded NeuroFuzzy indirect adaptive MPPT (maximum power point tracking) control paradigm is proposed for a variable speed wind turbine-permanent magnet synchronous generator (VSWT-PMSG). A Hermite-wavelet incorporated NeuroFuzzy indirect adaptive MPPT control strategy for the photovoltaic (PV) system and an indirect adaptive tracking control scheme for the Solid Oxide Fuel Cell (SOFC) are developed. A comprehensive simulation test-bed for a grid-connected hybrid power system is developed in Matlab/Simulink. The robustness of the suggested indirect adaptive control paradigms is evaluated through simulation results in the grid-connected hybrid power system test-bed by comparison with conventional and intelligent control techniques. The simulation results validate the effectiveness of the proposed control paradigms.

  11. Indirect adaptive soft computing based wavelet-embedded control paradigms for WT/PV/SOFC in a grid/charging station connected hybrid power system

    PubMed Central

    Khan, Laiq; Ahmed, Saghir; Bader, Rabiah

    2017-01-01

    This paper focuses on the indirect adaptive tracking control of renewable energy sources in a grid-connected hybrid power system. Renewable energy systems have low efficiency and an intermittent nature due to unpredictable meteorological conditions. The domestic load and conventional charging stations behave in an uncertain manner. To operate the renewable energy sources efficiently and harvest maximum power, the instantaneous nonlinear dynamics should be captured online. A Chebyshev-wavelet embedded NeuroFuzzy indirect adaptive MPPT (maximum power point tracking) control paradigm is proposed for a variable speed wind turbine-permanent magnet synchronous generator (VSWT-PMSG). A Hermite-wavelet incorporated NeuroFuzzy indirect adaptive MPPT control strategy for the photovoltaic (PV) system and an indirect adaptive tracking control scheme for the Solid Oxide Fuel Cell (SOFC) are developed. A comprehensive simulation test-bed for a grid-connected hybrid power system is developed in Matlab/Simulink. The robustness of the suggested indirect adaptive control paradigms is evaluated through simulation results in the grid-connected hybrid power system test-bed by comparison with conventional and intelligent control techniques. The simulation results validate the effectiveness of the proposed control paradigms. PMID:28877191

  12. A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.

    2002-01-01

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm ``WAVDETECT,'' part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or ``Mexican Hat'' wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. 
These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.

  13. Design, Development and Testing of a Low-Cost sEMG System and Its Use in Recording Muscle Activity in Human Gait

    PubMed Central

    Supuk, Tamara Grujic; Skelin, Ana Kuzmanic; Cic, Maja

    2014-01-01

    Surface electromyography (sEMG) is an important measurement technique used in biomechanical, rehabilitation and sport environments. In this article the design, development and testing of a low-cost wearable sEMG system are described. The hardware architecture consists of a two-cascade small-sized bioamplifier with a total gain of 2,000 and a band-pass of 3 to 500 Hz. The sampling frequency of the system is 1,000 Hz. Since real measured EMG signals are usually corrupted by various types of noise (motion artifacts, white noise and electromagnetic noise present at 50 Hz and higher harmonics), we have tested several denoising techniques, both on artificial and measured EMG signals. Results showed that a wavelet-based technique implementing the Daubechies5 wavelet and soft sqtwolog thresholding is the most appropriate for EMG signal denoising. To test the system performance, EMG activities of six dominant muscles of ten healthy subjects during gait were measured (gluteus maximus, biceps femoris, sartorius, rectus femoris, tibialis anterior and medial gastrocnemius). The obtained EMG envelopes, presented against the duration of the gait cycle, compared favourably with the EMG data available in the literature, suggesting that the proposed system is suitable for a wide range of applications in biomechanics. PMID:24811078
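The denoising the authors found best combines wavelet soft thresholding with the universal ("sqtwolog") threshold sigma * sqrt(2 ln N). A minimal single-level sketch, using a Haar transform in place of Daubechies5 to stay self-contained; the function names are illustrative, not from the paper:

```python
import numpy as np

def haar_level(x):
    """One level of the orthonormal Haar wavelet transform (even-length x)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def inverse_haar_level(a, d):
    """Invert one Haar level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    """Soft thresholding: zero coefficients below t, shrink the rest toward 0."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x):
    """Single-level wavelet soft-threshold denoising with the universal
    ('sqtwolog') threshold."""
    a, d = haar_level(x)
    sigma = np.median(np.abs(d)) / 0.6745        # robust noise-level estimate
    t = sigma * np.sqrt(2.0 * np.log(len(x)))    # universal threshold
    return inverse_haar_level(a, soft_threshold(d, t))
```

A full implementation would recurse over several decomposition levels and use a higher-order wavelet, but the shrinkage rule is exactly the soft-threshold operator above.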

  14. Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement.

    PubMed

    Wang, Junjie; He, Xiufeng; Ferreira, Vagner G

    2015-08-07

    Monitoring ocean waves plays a crucial role in, for example, coastal environmental and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical GPS-mounted carrier displacements is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average skill, high pass filter and wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method.

  15. Coherent vorticity extraction in resistive drift-wave turbulence: Comparison of orthogonal wavelets versus proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B

    2011-01-01

    We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation, and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part which is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.

  16. Autocorrelation Analysis Combined with a Wavelet Transform Method to Detect and Remove Cosmic Rays in a Single Raman Spectrum.

    PubMed

    Maury, Augusto; Revilla, Reynier I

    2015-08-01

    Cosmic rays (CRs) occasionally affect charge-coupled device (CCD) detectors, introducing large spikes with very narrow bandwidth into the spectrum. These CR features can distort the chemical information expressed by the spectra. Consequently, we propose an algorithm to identify and remove significant spikes in a single Raman spectrum. An autocorrelation analysis is first carried out to accentuate the CR features as outliers. Subsequently, with an adequate selection of the threshold, a discrete wavelet transform filter is used to identify the CR spikes. Identified data points are then replaced by interpolated values using a weighted-average interpolation technique. This approach only modifies the data in a close vicinity of the CRs. Additionally, robust wavelet transform parameters are proposed (a desirable property for automation) after optimizing them through application of the method to a large number of spectra. However, this algorithm, like all single-spectrum analysis procedures, is limited to cases in which the CRs have much narrower bandwidth than the Raman bands. This might not be the case when low-resolution Raman instruments are used.
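The detect-and-replace idea (flag narrow spikes, then interpolate only in their vicinity) can be sketched with a simple robust-difference detector standing in for the paper's autocorrelation-plus-wavelet detector; `remove_spikes` and its z-score threshold are assumptions of this sketch, and plain linear interpolation replaces the weighted-average scheme:

```python
import numpy as np

def remove_spikes(y, z_thresh=6.0):
    """Flag narrow spikes via a robust z-score on first differences, then
    replace flagged points by linear interpolation from unflagged neighbours."""
    d = np.diff(y, prepend=y[0])                       # emphasise narrow features
    med = np.median(d)
    mad = max(np.median(np.abs(d - med)), 1e-12)       # robust scale estimate
    spikes = np.abs(d - med) / (1.4826 * mad) > z_thresh
    good = ~spikes
    y_clean = y.copy()
    y_clean[spikes] = np.interp(np.flatnonzero(spikes),
                                np.flatnonzero(good), y[good])
    return y_clean, spikes
```

Only the flagged points are modified, mirroring the paper's property that the correction stays local to the CRs.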

  17. STFT or CWT for the detection of Doppler ultrasound embolic signals.

    PubMed

    Gonçalves, Ivo B; Leiria, Ana; Moura, M M M

    2013-09-01

    Aiming at reliable detection and localization of cerebral blood flow and emboli, embolic signals were added to simulated middle cerebral artery Doppler signals and analysed. The short-time Fourier transform (STFT) and continuous wavelet transform (CWT) were used in the evaluation. The following parameters were used in this study: the powers of the added embolic signals were 5, 6, 6.5, 7, 7.5, 8 and 9 dB; the mother wavelets for CWT analysis were Morlet, Mexican hat, Meyer, Gaussian (order 4) and Daubechies (orders 4 and 8); and the thresholds for detection (evaluated in terms of false positives, false negatives and sensitivity) were 2 and 3.5 dB for the CWT and STFT, respectively. The results indicate that although the STFT allows accurate detection of emboli, better time localization can be achieved with the CWT. Among the CWT mother wavelets, the best overall results were obtained with the Mexican hat, with optimal sensitivity (100% detection rate) for nearly all emboli power values studied. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Improved CEEMDAN-wavelet transform de-noising method and its application in well logging noise reduction

    NASA Astrophysics Data System (ADS)

    Zhang, Jingxia; Guo, Yinghai; Shen, Yulin; Zhao, Difei; Li, Mi

    2018-06-01

    The use of geophysical logging data to identify lithology is important groundwork in logging interpretation. Inevitably, noise is mixed in during data collection due to the equipment and other external factors, and this affects further lithological identification and other logging interpretation. Therefore, to obtain a more accurate lithological identification it is necessary to adopt de-noising methods. In this study, a new de-noising method, namely improved complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN)-wavelet transform, is proposed, which integrates the strengths of improved CEEMDAN and the wavelet transform. Improved CEEMDAN, an effective self-adaptive multi-scale analysis method, is used to decompose non-stationary signals such as logging data into intrinsic mode functions (IMFs) of N different scales and one residual. Moreover, a self-adaptive scale selection method is used to determine the reconstruction scale k. Simultaneously, given the possible frequency aliasing between adjacent IMFs, a wavelet transform threshold de-noising method is used to reduce the noise of the (k-1)th IMF. Subsequently, the de-noised logging data are reconstructed from the de-noised (k-1)th IMF, the remaining low-frequency IMFs and the residual. Finally, empirical mode decomposition, improved CEEMDAN, the wavelet transform and the proposed method are applied to analyse the simulated and the actual data. Results show diverse performance of these de-noising methods with regard to accuracy of lithological identification. Compared with the other methods, the proposed method has the best self-adaptability and accuracy in lithological identification.

  19. Scalable Wavelet-Based Active Network Stepping Stone Detection

    DTIC Science & Technology

    2012-03-22

    (Indexed excerpt; sections 4.2.2 "Synchronization Frame" and 4.2.3 "Frame Size") Pilot experiments result in the final algorithm shown in Figure 3.4 and the detector in Figure 3.5. The detection statistic is the number of synchronization frames divided by the total number of frames; comparing this statistic to the detection threshold γ determines whether a watermark is present.

  20. Secret shared multiple-image encryption based on row scanning compressive ghost imaging and phase retrieval in the Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2017-09-01

    A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
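The (t, n) threshold secret sharing used above to split the measurement key is classically realized by Shamir's scheme: the secret is the constant term of a random degree-(t-1) polynomial over a prime field, and any t shares recover it by Lagrange interpolation at zero. A standard sketch, not the paper's exact construction (the prime and function names are illustrative):

```python
import random

P = 2**61 - 1  # a Mersenne prime; the field for share arithmetic (illustrative)

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: the secret is the constant term of a random
    degree-(t-1) polynomial; share i is the polynomial evaluated at x = i."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0; needs at least t distinct shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P          # numerator:   prod (0 - xj)
                den = den * (xi - xj) % P      # denominator: prod (xi - xj)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret
```

Fewer than t shares reveal nothing about the secret, which is what lets the measurement key be distributed safely across the two POM sub-keys.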

  1. Threshold resummation for top-pair hadroproduction to next-to-next-to-leading log

    NASA Astrophysics Data System (ADS)

    Czakon, Michal; Mitov, Alexander; Sterman, George

    2009-10-01

    We derive the threshold-resummed total cross section for heavy quark production in hadronic collisions accurate to next-to-next-to-leading logarithm, employing recent advances on soft anomalous dimension matrices for massive pair production in the relevant kinematic limit. We also derive the relation between heavy quark threshold resummations for fixed pair kinematics and the inclusive cross section. As a check of our results, we have verified that they reproduce all poles of the color-averaged qq̄ → tt̄ amplitudes at two loops, noting that the latter are insensitive to the color-antisymmetric terms of the soft anomalous dimension.

  2. The application of machine vision in fire protection system

    NASA Astrophysics Data System (ADS)

    Rong, Jiang

    2018-04-01

    Based on previous research, this paper introduces wavelet theory, collects scene information through a video system, and computes the key information needed by a fire protection system. That is, the algorithm extracts flame color characteristics and smoke characteristics from the collected information and processes them as the corresponding features. The alarm system sets a corresponding alarm threshold; when this threshold is exceeded, the system raises an alarm. This method of combining flame color characteristics and smoke characteristics improves both the accuracy and the efficiency of fire judgments. Experiments show that the scheme is feasible.

  3. Threshold resummation of soft gluons in hadronic reactions - an introduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berger, E. L.

    The authors discuss the motivation for resummation of the effects of initial-state soft gluon radiation, to all orders in the strong coupling strength, for processes in which the near-threshold region in the partonic subenergy is important. They summarize the method of perturbative resummation and its application to the calculation of the total cross section for top quark production at hadron colliders. Comments are included on the differences between the treatment of subleading logarithmic terms in this method and in other approaches.

  4. Non-abelian factorisation for next-to-leading-power threshold logarithms

    NASA Astrophysics Data System (ADS)

    Bonocore, D.; Laenen, E.; Magnea, L.; Vernazza, L.; White, C. D.

    2016-12-01

    Soft and collinear radiation is responsible for large corrections to many hadronic cross sections, near thresholds for the production of heavy final states. There is much interest in extending our understanding of this radiation to next-to-leading power (NLP) in the threshold expansion. In this paper, we generalise a previously proposed all-order NLP factorisation formula to include non-abelian corrections. We define a non-abelian radiative jet function, organising collinear enhancements at NLP, and compute it for quark jets at one loop. We discuss in detail the issue of double counting between soft and collinear regions. Finally, we verify our prescription by reproducing all NLP logarithms in Drell-Yan production up to NNLO, including those associated with double real emission. Our results constitute an important step in the development of a fully general resummation formalism for NLP threshold effects.

  5. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used as a method to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and dual-tree complex wavelet transform are used as the sparsifying basis for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with a fusion scheme that omits the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.

  6. Signal quality enhancement using higher order wavelets for ultrasonic TOFD signals from austenitic stainless steel welds.

    PubMed

    Praveen, Angam; Vijayarekha, K; Abraham, Saju T; Venkatraman, B

    2013-09-01

    The time of flight diffraction (TOFD) technique is a well-developed ultrasonic non-destructive testing (NDT) method and has been applied successfully for accurate sizing of defects in metallic materials. The technique, developed in the early 1970s as a means for accurate sizing and positioning of cracks in nuclear components, became very popular in the late 1990s and is today widely used in various industries for weld inspection. One of the main advantages of TOFD is that, apart from being a fast technique, it provides a higher probability of detection for linear defects. Since TOFD is based on diffraction of sound waves from the extremities of the defect, as opposed to reflection from planar faces as in pulse echo and phased array, the resultant signal is quite weak and the signal-to-noise ratio (SNR) low. In many cases the defect signal is submerged in this noise, making detection, positioning and sizing difficult. Several signal processing methods such as digital filtering, Split Spectrum Processing (SSP), Hilbert Transform and correlation techniques have been developed in order to suppress unwanted noise and enhance the quality of the defect signal, which can then be used for characterization of defects and the material. Wavelet transform based thresholding techniques have been applied largely for de-noising of ultrasonic signals. In this paper, however, higher order wavelets are used to analyse the de-noising performance for TOFD signals obtained from austenitic stainless steel welds. It is observed that higher order wavelets give greater SNR improvement than the lower order wavelets. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Wavelet Monte Carlo dynamics: A new algorithm for simulating the hydrodynamics of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Dyer, Oliver T.; Ball, Robin C.

    2017-03-01

    We develop a new algorithm for the Brownian dynamics of soft matter systems that evolves time by spatially correlated Monte Carlo moves. The algorithm uses vector wavelets as its basic moves and produces hydrodynamics in the low Reynolds number regime propagated according to the Oseen tensor. When small moves are removed, the correlations closely approximate the Rotne-Prager tensor, itself widely used to correct for deficiencies in Oseen. We also include plane wave moves to provide the longest range correlations, which we detail for both infinite and periodic systems. The computational cost of the algorithm scales competitively with the number of particles simulated, N, scaling as N ln N in homogeneous systems and as N in dilute systems. In comparisons to established lattice Boltzmann and Brownian dynamics algorithms, the wavelet method was found to be only a factor of order one more expensive than the cheaper lattice Boltzmann algorithm in marginally semi-dilute simulations, while it is significantly faster than both algorithms at large N in dilute simulations. We also validate the algorithm by checking that it reproduces the correct dynamics and equilibrium properties of simple single polymer systems, as well as verifying the effect of periodicity on the mobility tensor.

  8. A Method for Extracting Suspected Parotid Lesions in CT Images using Feature-based Segmentation and Active Contours based on Stationary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Wu, T. Y.; Lin, S. F.

    2013-10-01

    Automatic suspected lesion extraction is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. The suspected lesion tissues in low contrast tissue regions can be localized with feature-based segmentation (FBS) based on local texture features, and can be delineated with accuracy by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is introduced. The derived wavelet coefficients are applied to derive the local features for FBS, and to generate enhanced energy maps for ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also applied as the initial condition for the fine delineation computation. Consequently, the suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT data sets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results can also exceed 93%.

  9. Ocean Wave Separation Using CEEMD-Wavelet in GPS Wave Measurement

    PubMed Central

    Wang, Junjie; He, Xiufeng; Ferreira, Vagner G.

    2015-01-01

    Monitoring ocean waves plays a crucial role in, for example, coastal environmental and protection studies. Traditional methods for measuring ocean waves are based on ultrasonic sensors and accelerometers. However, the Global Positioning System (GPS) has been introduced recently and has the advantage of being smaller, less expensive, and not requiring calibration in comparison with the traditional methods. Therefore, for accurately measuring ocean waves using GPS, further research on the separation of the wave signals from the vertical GPS-mounted carrier displacements is still necessary. In order to contribute to this topic, we present a novel method that combines complementary ensemble empirical mode decomposition (CEEMD) with a wavelet threshold denoising model (i.e., CEEMD-Wavelet). This method seeks to extract wave signals with less residual noise and without losing useful information. Compared with the wave parameters derived from the moving average skill, high pass filter and wave gauge, the results show that the accuracy of the wave parameters for the proposed method was improved with errors of about 2 cm and 0.2 s for mean wave height and mean period, respectively, verifying the validity of the proposed method. PMID:26262620

  10. Wavelet-Based Blind Superresolution from Video Sequence and in MRI

    DTIC Science & Technology

    2005-12-31

    (Indexed excerpt) … shown in Fig. 4(e) and (f), respectively. The PSNR-based optimal threshold gives better noise filtering but poor deblurring [see Fig. 4(c) and (e)], while … ultimately produces the deblurred, noise-filtered, superresolved image. Finite-support linear shift-invariant blurs are reasonable to assume … (Figure 1: Multichannel Blind Superresolution Model; cameras with different PSFs producing a deblurred and noise-filtered HR image.)

  11. Automatic detection of muscle activity from mechanomyogram signals: a comparison of amplitude and wavelet-based methods.

    PubMed

    Alves, Natasha; Chau, Tom

    2010-04-01

    Knowledge of muscle activity timing is critical to many clinical applications, such as the assessment of muscle coordination and the prescription of muscle-activated switches for individuals with disabilities. In this study, we introduce a continuous wavelet transform (CWT) algorithm for the detection of muscle activity via mechanomyogram (MMG) signals. CWT coefficients of the MMG signal were compared to scale-specific thresholds derived from the baseline signal to estimate the timing of muscle activity. Test signals were recorded from the flexor carpi radialis muscles of 15 able-bodied participants as they squeezed and released a hand dynamometer. Using the dynamometer signal as a reference, the proposed CWT detection algorithm was compared against a global-threshold CWT detector as well as amplitude-based event detection for sensitivity and specificity to voluntary contractions. The scale-specific CWT-based algorithm exhibited superior detection performance over the other detectors. CWT detection also showed good muscle selectivity during hand movement, particularly when a given muscle was the primary facilitator of the contraction. This may suggest that, during contraction, the compound MMG signal has a recurring morphological pattern that is not prevalent in the baseline signal. The ability of CWT analysis to be implemented in real time makes it a candidate for muscle-activity detection in clinical applications.

  12. Exploring the impact of wavelet-based denoising in the classification of remote sensing hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco

    2016-10-01

    The classification of remote sensing hyperspectral images for land cover applications is a very active topic. In the case of supervised classification, Support Vector Machines (SVMs) play a dominant role. Recently, the Extreme Learning Machine (ELM) algorithm has also been extensively used. The classification scheme previously published by the authors, called WT-EMP, introduces spatial information into the classification process by means of an Extended Morphological Profile (EMP) that is created from features extracted by wavelets. In addition, the hyperspectral image is denoised in the 2-D spatial domain, also using wavelets, and is joined to the EMP via a stacked vector. In this paper, the scheme is improved to achieve two goals. The first is to reduce the classification time while preserving the accuracy of the classification by using ELM instead of SVM. The second is to improve the accuracy results by performing not only a 2-D denoising for every spectral band, but also a prior 1-D spectral-signature denoising applied to each pixel vector of the image. For each denoising step, the image is transformed by applying a 1-D or 2-D wavelet transform, and then NeighShrink thresholding is applied. Improvements in terms of classification accuracy are obtained, especially for images with close regions in the classification reference map, because in these cases the accuracy of the classification at the edges between classes is more relevant.
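NeighShrink, unlike plain soft thresholding, shrinks each wavelet coefficient by a factor that depends on the energy of its whole neighbourhood rather than on the coefficient alone. A minimal 1-D sketch (the window size and the threshold λ are left to the caller; λ is typically the universal threshold sqrt(2 σ² ln N)):

```python
import numpy as np

def neighshrink(c, lam, win=3):
    """NeighShrink: scale each coefficient by max(0, 1 - lam^2 / S^2), where
    S^2 is the sum of squared coefficients in a window centred on it."""
    s2 = np.convolve(c * c, np.ones(win), mode='same')   # neighbourhood energy
    beta = np.maximum(0.0, 1.0 - lam * lam / np.maximum(s2, 1e-12))
    return c * beta
```

Coefficients whose entire neighbourhood is weak are zeroed, while strong coefficients (and their immediate neighbours) survive almost untouched, which preserves edges better than coefficient-wise shrinkage.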

  13. EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.

    PubMed

    Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah

    2017-12-01

    Developing subject-specific classifiers that recognize mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters, selected by a grid-search method, were first used to determine the evidence accumulation curve for the sequential classifier. We then propose a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the accumulation curve and a desired expected stopping time. As a result, it balances the decision time of each class; we term it balanced-threshold SPRT (BTSPRT). The properties of the method are illustrated on 14 subjects' recordings from offline and online tests. Results showed an average maximum accuracy of 83.4% and an average decision time of 2.77 s for the proposed method, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
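
    The underlying sequential test can be sketched with the classical Wald SPRT between two Gaussian hypotheses; the paper's BTSPRT additionally balances the two thresholds using the evidence accumulation curve and a desired expected stopping time, which is not reproduced here.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Classical Wald SPRT between two Gaussian means (a generic sketch).

    Returns (decision, n_samples_used); decision is None if the log-likelihood
    ratio never crosses either threshold."""
    upper = math.log((1 - beta) / alpha)     # accept H1 when LLR >= upper
    lower = math.log(beta / (1 - alpha))     # accept H0 when LLR <= lower
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, len(samples)
```

With strong evidence for one class, the test stops early; weaker evidence lets it accumulate more samples before deciding, which is the speed-accuracy tradeoff the abstract refers to.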

  14. Fast l₁-SPIRiT compressed sensing parallel imaging MRI: scalable parallel implementation and clinically feasible runtime.

    PubMed

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-06-01

    We present l₁-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative self-consistent parallel imaging (SPIRiT). Like many iterative magnetic resonance imaging reconstructions, l₁-SPIRiT's image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing l₁-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of l₁-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT spoiled gradient echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions.
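
    The core of such a reconstruction is the soft-thresholding (l1 proximal) step inside an iterative scheme. A single-channel ISTA sketch for min_x 0.5*||Ax - y||^2 + lam*||x||_1 follows; l1-SPIRiT applies the same prox to joint cross-channel wavelet coefficients and interleaves the SPIRiT consistency projection, which is omitted here.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    A plain single-channel sketch of the generic algorithm."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        x = soft(x - grad / L, lam / L)      # gradient step, then shrink
    return x
```

For A equal to the identity the iteration reduces to a single shrinkage of y, which makes the prox step easy to verify.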

  15. Fast ℓ1-SPIRiT Compressed Sensing Parallel Imaging MRI: Scalable Parallel Implementation and Clinically Feasible Runtime

    PubMed Central

    Murphy, Mark; Alley, Marcus; Demmel, James; Keutzer, Kurt; Vasanawala, Shreyas; Lustig, Michael

    2012-01-01

    We present ℓ1-SPIRiT, a simple algorithm for auto calibrating parallel imaging (acPI) and compressed sensing (CS) that permits an efficient implementation with clinically-feasible runtimes. We propose a CS objective function that minimizes cross-channel joint sparsity in the wavelet domain. Our reconstruction minimizes this objective via iterative soft-thresholding, and integrates naturally with iterative Self-Consistent Parallel Imaging (SPIRiT). Like many iterative MRI reconstructions, ℓ1-SPIRiT’s image quality comes at a high computational cost. Excessively long runtimes are a barrier to the clinical use of any reconstruction approach, and thus we discuss our approach to efficiently parallelizing ℓ1-SPIRiT and to achieving clinically-feasible runtimes. We present parallelizations of ℓ1-SPIRiT for both multi-GPU systems and multi-core CPUs, and discuss the software optimization and parallelization decisions made in our implementation. The performance of these alternatives depends on the processor architecture, the size of the image matrix, and the number of parallel imaging channels. Fundamentally, achieving fast runtime requires the correct trade-off between cache usage and parallelization overheads. We demonstrate image quality via a case from our clinical experimentation, using a custom 3DFT Spoiled Gradient Echo (SPGR) sequence with up to 8× acceleration via Poisson-disc undersampling in the two phase-encoded directions. PMID:22345529

  16. Nondestructive characterization of thermal barrier coating by noncontact laser ultrasonic technique

    NASA Astrophysics Data System (ADS)

    Zhao, Yang; Chen, Jianwei; Zhang, Zhenzhen

    2015-09-01

    We present the application of a laser ultrasonic technique in nondestructive characterization of the bonding layer (BL) in a thermal barrier coating (TBC). A physical model of a multilayered medium is established to describe the propagation of a longitudinal wave generated by a laser in a TBC system. Furthermore, a theoretical analysis of the ultrasonic transmission in TBC is carried out in order to derive the expression of the BL transmission coefficient spectrum (TCS), which is used to determine the velocity of the longitudinal wave in the BL. We employ an inversion method combined with the TCS to ascertain the attenuation coefficient of the BL. Experimental validations are performed with TBC specimens produced by an electron-beam physical vapor deposition method. In those experiments, a pulsed laser with a width of 10 ns is used to generate an ultrasonic signal, while a two-wave mixing interferometer is used to receive the ultrasonic signals. By introducing the wavelet soft-threshold method, which improves the signal-to-noise ratio, the laser ultrasonic testing results of TBC oxidized for 1, 10, and 100 cycles show that the attenuation coefficients of the BL become larger with an increase in the oxidation time. This is consistent with the scanning electron microscopy observations, in which the thickness of the thermally grown oxide increases with oxidation time.
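
    Wavelet soft-threshold denoising of this kind can be sketched with an orthonormal Haar transform: decompose, soft-threshold the detail coefficients with the universal threshold sigma*sqrt(2*ln n), and reconstruct. The Haar basis, the level count, and the median-based noise estimate are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation and detail."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold_denoise(x, levels=3):
    """Wavelet soft-threshold denoising: decompose, shrink details with the
    universal threshold, reconstruct. Signal length must be divisible by
    2**levels. Sigma is estimated from the finest details (median rule)."""
    approx, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745        # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    for d in reversed(details):
        approx = haar_idwt(approx, d)
    return approx
```

On a smooth signal plus white noise this shrinks the noisy detail coefficients toward zero and lowers the mean-squared error.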

  17. Study of wavelet packet energy entropy for emotion classification in speech and glottal signals

    NASA Astrophysics Data System (ADS)

    He, Ling; Lech, Margaret; Zhang, Jing; Ren, Xiaomei; Deng, Lihua

    2013-07-01

    Automatic speech emotion recognition has important applications in human-machine communication. The majority of current research in this area focuses on finding optimal feature parameters. In recent studies, several glottal features were examined as potential cues for emotion differentiation. In this study, a new type of feature parameter is proposed, which calculates energy entropy over values within selected wavelet packet frequency bands. The modeling and classification tasks are conducted using the classical GMM algorithm. The experiments use two data sets: the Speech Under Simulated Emotion (SUSE) data set, annotated with three different emotions (angry, neutral and soft), and the Berlin Emotional Speech (BES) database, annotated with seven different emotions (angry, bored, disgust, fear, happy, sad and neutral). The average classification accuracy achieved for the SUSE data (74%-76%) is significantly higher than the accuracy achieved for the BES data (51%-54%). In both cases, the accuracy was significantly higher than the respective random guessing levels (33% for SUSE and 14.3% for BES).
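
    The proposed feature can be illustrated as the Shannon entropy of the normalised band energies from a full wavelet-packet tree. A Haar packet stands in here for the paper's wavelet, and the two-level tree depth is an assumption.

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step: low-pass and high-pass halves."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def packet_bands(x, levels=2):
    """Full wavelet-packet tree: split every band at every level,
    yielding 2**levels frequency bands."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            lo, hi = haar_split(b)
            nxt += [lo, hi]
        bands = nxt
    return bands

def energy_entropy(x, levels=2):
    """Shannon entropy (bits) of the normalised band energies: low for energy
    concentrated in one band, high for energy spread across bands."""
    e = np.array([np.sum(b ** 2) for b in packet_bands(x, levels)])
    p = e / e.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

A constant (purely low-frequency) signal concentrates all energy in one band and scores zero entropy, while white noise spreads energy evenly and scores near the maximum log2(2**levels).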

  18. Signal-to-noise ratios in coherent soft limiters

    NASA Technical Reports Server (NTRS)

    Lesh, J. R.

    1973-01-01

    Expressions for the output signal-to-noise power ratio of a bandpass soft limiter followed by a coherent detection device are presented and discussed. It is found that a significant improvement in the output signal-to-noise ratio at low input SNRs can be achieved by such soft limiters as compared to hard limiters. This indicates that the soft limiter may be of some use in the area of threshold extension. Approximation methods for determining output signal-to-noise spectral densities are also presented.
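
    The contrast between the two limiter characteristics can be sketched as follows. The erf-shaped curve is one common model of a smooth (soft) limiter and is an assumption here, not the paper's exact characteristic.

```python
import math

def hard_limiter(x, level=1.0):
    """Hard limiter: the output depends only on the sign of the input."""
    return level if x >= 0 else -level

def soft_limiter(x, level=1.0, gain=2.0):
    """Error-function soft limiter: approximately linear with slope `gain`
    for small inputs, saturating smoothly at +/-level. The sqrt(pi)/2 factor
    makes the small-signal slope exactly `gain`."""
    return level * math.erf(math.sqrt(math.pi) / 2 * gain * x / level)
```

Small inputs pass through the soft limiter nearly linearly (preserving amplitude information that a hard limiter destroys), which is the intuition behind its better low-SNR behaviour.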

  19. Prognostics of Lithium-Ion Batteries Based on Wavelet Denoising and DE-RVM

    PubMed Central

    Zhang, Chaolong; He, Yigang; Yuan, Lifeng; Xiang, Sheng; Wang, Jinping

    2015-01-01

    Lithium-ion batteries are widely used in many electronic systems. It is therefore important, yet very difficult, to estimate a lithium-ion battery's remaining useful life (RUL). One important reason is that measured battery capacity data are often subject to different levels of noise pollution. In this paper, a novel battery capacity prognostics approach is presented to estimate the RUL of lithium-ion batteries. Wavelet denoising is performed with different thresholds in order to weaken the strong noise and remove the weak noise. A relevance vector machine (RVM) improved by the differential evolution (DE) algorithm is utilized to estimate the battery RUL based on the denoised data. Experiments on two cases, battery 5 and battery 18 capacity prognostics, validate that the proposed approach can closely predict the trend of the battery capacity trajectory and accurately estimate the battery RUL. PMID:26413090

  20. Improving liquid chromatography-tandem mass spectrometry determinations by modifying noise frequency spectrum between two consecutive wavelet-based low-pass filtering procedures.

    PubMed

    Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien

    2010-04-23

    This paper employs a chemometric technique that modifies the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filtering procedures to improve peak signal-to-noise (S/N) ratio enhancement. Although similar techniques using other sets of low-pass procedures, such as matched filters, have been published, the procedures developed in this work avoid the peak-broadening disadvantages inherent in matched filters. In addition, unlike Fourier transform-based low-pass filters, wavelet-based filters efficiently reject noise in the chromatograms directly in the time domain without distorting the original signals. In this work, the low-pass filtering procedures sequentially convolve the original chromatograms against each set of low-pass filters to obtain the approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels. The tedious trials of setting threshold values to properly shrink each wavelet are therefore no longer required. The noise-modification technique multiplies one wavelet-based low-pass filtered LC-MS/MS chromatogram with an artificial chromatogram containing added thermal noise before the second wavelet-based low-pass filter is applied. Because a low-pass filter cannot eliminate frequency components below its cut-off frequency, more efficient peak S/N ratio improvement cannot be accomplished by simply applying consecutive low-pass filter procedures to LC-MS/MS chromatograms. In contrast, when the low-pass filtered LC-MS/MS chromatogram is conditioned with the multiplication alteration prior to the second low-pass filter, much better ratio improvement is achieved. The noise frequency spectrum of the low-pass filtered chromatogram, which originally contains frequency components below the filter cut-off frequency, is altered by the multiplication operation to span a broader range. When this modified noise spectrum shifts toward the high-frequency regime, the second low-pass filter provides better filtering efficiency and higher peak S/N ratios. For real LC-MS/MS chromatograms, two consecutive wavelet-based low-pass filters typically achieve less than a 6-fold peak S/N ratio improvement, no better than a single wavelet-based low-pass filtering step; when the noise frequency spectrum is modified between the two low-pass filters, much better enhancements, typically 25-fold to 40-fold, are accomplished. Linear standard curves using the filtered LC-MS/MS signals are validated, and the filtered signals are reproducible. More accurate determinations of very low concentration samples (S/N ratio about 7-9) are obtained using the filtered signals than using the original signals. Copyright 2010 Elsevier B.V. All rights reserved.

  1. An Application of Reassigned Time-Frequency Representations for Seismic Noise/Signal Decomposition

    NASA Astrophysics Data System (ADS)

    Mousavi, S. M.; Langston, C. A.

    2016-12-01

    Seismic data recorded by surface arrays are often strongly contaminated by unwanted noise. This background noise makes the detection of small-magnitude events difficult. An automatic method for seismic noise/signal decomposition is presented based upon an enhanced time-frequency representation. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening the time-frequency picture. Noise can be distinguished from the signal and suppressed more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge of the noise level. The efficiency of thresholding has been improved by adding a pre-processing step based on higher-order statistics and a post-processing step based on adaptive hard thresholding. In doing so, both the accuracy and the speed of the denoising have been improved compared to our previous algorithms (Mousavi and Langston, 2016a, 2016b; Mousavi et al., 2016). The proposed algorithm can either suppress the noise (white or colored) and keep the signal, or suppress the signal and keep the noise. Hence, it can be used in normal denoising applications or in ambient noise studies. Application of the proposed method to synthetic and real seismic data shows its effectiveness for denoising/designaling of local microseismic and ocean-bottom seismic data. References: Mousavi, S.M., C. A. Langston, and S. P. Horton (2016), Automatic Microseismic Denoising and Onset Detection Using the Synchrosqueezed Continuous Wavelet Transform, Geophysics, 81, V341-V355, doi: 10.1190/GEO2015-0598.1. Mousavi, S.M., and C. A. Langston (2016a), Hybrid Seismic Denoising Using Higher-Order Statistics and Improved Wavelet Block Thresholding, Bull. Seismol. Soc. Am., 106, doi: 10.1785/0120150345. Mousavi, S.M., and C. A. Langston (2016b), Adaptive noise estimation and suppression for improving microseismic event detection, Journal of Applied Geophysics, doi: http://dx.doi.org/10.1016/j.jappgeo.2016.06.008.

  2. Rapid Ecological Change in Two Contrasting Lake Ecosystems: Evidence of Threshold Responses, Altered Species Dynamics, and Perturbed Patterns of Variability

    NASA Astrophysics Data System (ADS)

    Simpson, G. L.

    2015-12-01

    Studying threshold responses to environmental change is often made difficult by the paucity of monitoring data prior to and during change. Progress has been made via theoretical models of regime shifts or experimental manipulation, but natural, real-world examples of threshold change are limited and in many cases inconclusive. Lake sediments provide the potential to examine abrupt ecological change by directly observing how species, communities, and biogeochemical proxies responded to environmental perturbation or recorded ecosystem change. These records are not problem-free; age uncertainties, uneven and variable temporal resolution, and time-consuming taxonomic work all act to limit the scope and scale of the data or complicate its analysis. Here I use two annually laminated records, (1) Kassjön, a seasonally anoxic mesotrophic lake in northern Sweden, and (2) Baldeggersee, a nutrient-rich, hardwater lake on the central Swiss Plateau, to investigate lake ecosystem responses to abrupt environmental change using ideal paleoecological time series. Rapid cooling 2.2 kyr ago in northern Sweden significantly perturbed the diatom community of Kassjön. Wavelet analysis showed that this climatic perturbation also fundamentally altered patterns of variance in diatom abundances, suppressing cyclicity in species composition that required several hundred years to reestablish. Multivariate wavelet analysis of the record showed marked switching between synchronous and asynchronous species dynamics in response to rapid climatic cooling and subsequent warming. Baldeggersee has experienced a long history of eutrophication, and its diatom record has been used as a classic illustration of a regime shift in response to nutrient loading. Time series analysis of the record identified some evidence of a threshold-like response in the diatoms. A stochastic volatility model identified increasing variance in composition prior to the threshold, as predicted from theory, and a switch from compensatory to synchronous species dynamics, concomitant with eutrophication, was observed. These results document in high resolution how two aquatic systems reacted to abrupt change and demonstrate that, under ideal conditions, sediments can preserve valuable evidence of rapid ecological change.

  3. Detection of Heart Sounds in Children with and without Pulmonary Arterial Hypertension―Daubechies Wavelets Approach

    PubMed Central

    Elgendi, Mohamed; Kumar, Shine; Guo, Long; Rutledge, Jennifer; Coe, James Y.; Zemp, Roger; Schuurmans, Dale; Adatia, Ian

    2015-01-01

    Background Automatic detection of the 1st (S1) and 2nd (S2) heart sounds is difficult, and existing algorithms are imprecise. We sought to develop a wavelet-based algorithm for the detection of S1 and S2 in children with and without pulmonary arterial hypertension (PAH). Method Heart sounds were recorded at the second left intercostal space and the cardiac apex with a digital stethoscope simultaneously with pulmonary arterial pressure (PAP). We developed a Daubechies wavelet algorithm for the automatic detection of S1 and S2 using the wavelet coefficient ‘D6’, selected on the basis of power spectral analysis. We compared our algorithm with four other Daubechies wavelet-based algorithms published by Liang, Kumar, Wang, and Zhong. We annotated S1 and S2 from an audiovisual examination of the phonocardiographic tracing by two trained cardiologists and the observation that in all subjects systole was shorter than diastole. Results We studied 22 subjects (9 males and 13 females, median age 6 years, range 0.25–19). Eleven subjects had a mean PAP < 25 mmHg. Eleven subjects had PAH with a mean PAP ≥ 25 mmHg. All subjects had a pulmonary artery wedge pressure ≤ 15 mmHg. The sensitivity (SE) and positive predictivity (+P) of our algorithm were 70% and 68%, respectively. In comparison, the SE and +P of Liang were 59% and 42%, Kumar 19% and 12%, Wang 50% and 45%, and Zhong 43% and 53%, respectively. Our algorithm demonstrated robustness and outperformed the other methods up to a signal-to-noise ratio (SNR) of 10 dB. For all algorithms, detection errors arose from low-amplitude peaks, fast heart rates, low signal-to-noise ratio, and fixed thresholds. Conclusion Our algorithm for the detection of S1 and S2 improves the performance of existing Daubechies-based algorithms and justifies the use of the wavelet coefficient ‘D6’ through power spectral analysis. Also, its robustness despite ambient noise may improve real-world clinical performance. PMID:26629704

  4. Comparison of wavelet based denoising schemes for gear condition monitoring: An Artificial Neural Network based Approach

    NASA Astrophysics Data System (ADS)

    Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva

    2018-02-01

    Vibration analysis has been used extensively in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals aids fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent the loss of signal information along with the noise. This work demonstrates the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD), and NeighCoeff (NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the four schemes. The fault-identification capability, as well as the SNR, kurtosis and RMSE, of the four denoising schemes have been compared. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models. The performances of the four denoising schemes were evaluated based on the performance of the ANN models, and the best denoising scheme was identified from the classification accuracy results. PCA proved to be the best denoising scheme in all these regards.

  5. A novel coupling of noise reduction algorithms for particle flow simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zimoń, M.J. (James Weir Fluids Lab, Mechanical and Aerospace Engineering Department, The University of Strathclyde, Glasgow G1 1XJ; E-mail: malgorzata.zimon@stfc.ac.uk); Reese, J.M.

    2016-09-15

    Proper orthogonal decomposition (POD) and its extension based on time-windows have been shown to greatly improve the effectiveness of recovering smooth ensemble solutions from noisy particle data. However, to successfully de-noise any molecular system, a large number of measurements still need to be provided. In order to achieve better efficiency in processing time-dependent fields, we have combined POD with a well-established signal processing technique, wavelet-based thresholding. In this novel hybrid procedure, the wavelet filtering is applied within the POD domain; we refer to it as WAVinPOD. The algorithm exhibits promising results when applied to both synthetically generated signals and particle data. In this work, the simulations compare the performance of our new approach with standard POD or wavelet analysis in extracting smooth profiles from noisy velocity and density fields. Numerical examples include molecular dynamics and dissipative particle dynamics simulations of unsteady force- and shear-driven liquid flows, as well as a phase separation phenomenon. Simulation results confirm that WAVinPOD preserves the dimensionality reduction obtained using POD, while improving its filtering properties through the sparse representation of the data in a wavelet basis. This paper shows that WAVinPOD outperforms the other estimators for both synthetically generated signals and particle-based measurements, achieving a higher signal-to-noise ratio from a smaller number of samples. The new filtering methodology offers significant computational savings, particularly for multi-scale applications seeking to couple continuum information with atomistic models. It is the first time that a rigorous analysis has compared de-noising techniques for particle-based fluid simulations.
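
    The idea of wavelet filtering inside the POD domain can be sketched as: project the noisy snapshots onto the leading POD modes, wavelet-threshold each retained mode's temporal coefficients, and reconstruct. One Haar level and a universal threshold stand in here for the paper's full wavelet machinery; the rank and thresholding rule are illustrative assumptions.

```python
import numpy as np

def haar(x):
    """One orthonormal Haar level: approximation and detail coefficients."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ihaar(a, d):
    """Inverse of haar (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def wav_in_pod(snapshots, rank):
    """WAVinPOD-style sketch: POD truncation via SVD, then wavelet
    soft-thresholding of the temporal coefficients of each retained mode.
    `snapshots` is (space, time) with an even number of time samples."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    coeffs = s[:, None] * Vt                         # temporal coefficients
    for i in range(rank):
        a, d = haar(coeffs[i])
        # universal threshold with a median-based noise estimate
        thr = np.median(np.abs(d)) / 0.6745 * np.sqrt(2 * np.log(d.size))
        d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
        coeffs[i] = ihaar(a, d)
    return U @ coeffs
```

On a low-rank smooth field plus white noise, the POD truncation removes most of the noise and the wavelet step further smooths the retained temporal modes.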

  6. ECG feature extraction and disease diagnosis.

    PubMed

    Bhyri, Channappa; Hamde, S T; Waghmare, L M

    2011-01-01

    An important factor to consider when using electrocardiogram findings for clinical decision making is that the waveforms are influenced by normal physiological and technical factors as well as by pathophysiological factors. In this paper, we propose a method for feature extraction and heart disease diagnosis using the wavelet transform (WT) technique and LabVIEW (Laboratory Virtual Instrument Engineering Workbench). LabVIEW signal processing tools are used to denoise the signal before applying the developed algorithm for feature extraction. First, we developed an algorithm for R-peak detection using the Haar wavelet. After 4th-level decomposition of the ECG signal, the detail coefficients are squared, and the standard deviation of the squared detail coefficients is used as the threshold for detection of R-peaks. Second, we used the Daubechies (db6) wavelet for the low-resolution signals. After cross-checking the R-peak locations in the 4th-level low-resolution signal of the Daubechies wavelet, P waves and T waves are detected. Other features of diagnostic importance, mainly heart rate, R-wave width, Q-wave width, T-wave amplitude and duration, ST segment and frontal plane axis, are also extracted, and a scoring pattern is applied for the purpose of heart disease diagnosis. In this study, detection of tachycardia, bradycardia, left ventricular hypertrophy, right ventricular hypertrophy and myocardial infarction has been considered. The CSE ECG database, which contains 5000 samples recorded at a sampling frequency of 500 Hz, and the ECG database created by the S.G.G.S. Institute of Engineering and Technology, Nanded (Maharashtra), have been used.
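
    The R-peak rule described above (square the 4th-level Haar detail coefficients and threshold with their standard deviation) can be sketched in numpy. The index mapping back to sample positions and the 200 ms refractory grouping are simplifying assumptions, not the paper's exact post-processing.

```python
import numpy as np

def haar_detail(x, level=4):
    """Detail coefficients at the given level of an orthonormal Haar cascade.
    Signal length must be divisible by 2**level."""
    a = np.asarray(x, dtype=float)
    for _ in range(level - 1):
        a = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation cascade
    return (a[0::2] - a[1::2]) / np.sqrt(2)    # detail at the final level

def detect_r_peaks(ecg, fs=500, level=4):
    """Square the level-4 Haar details and threshold with their standard
    deviation, then map detail indices back to approximate sample positions
    and merge candidates closer than a 200 ms refractory period."""
    d2 = haar_detail(ecg, level) ** 2
    candidates = np.flatnonzero(d2 > d2.std())
    peaks, step = [], 2 ** level
    for idx in candidates * step:              # approximate sample position
        if not peaks or idx - peaks[-1] > 0.2 * fs:
            peaks.append(idx)
    return peaks
```

On a synthetic trace with sharp spikes over low-amplitude noise, the squared level-4 details isolate the spikes cleanly.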

  7. Efficacy and predictability of soft tissue ablation using a prototype Raman-shifted alexandrite laser

    NASA Astrophysics Data System (ADS)

    Kozub, John A.; Shen, Jin-H.; Joos, Karen M.; Prasad, Ratna; Shane Hutson, M.

    2015-10-01

    Previous research showed that mid-infrared free-electron lasers could reproducibly ablate soft tissue with little collateral damage. The potential for surgical applications motivated searches for alternative tabletop lasers providing thermally confined pulses in the 6- to 7-μm wavelength range with sufficient pulse energy, stability, and reliability. Here, we evaluate a prototype Raman-shifted alexandrite laser. We measure ablation thresholds, etch rates, and collateral damage in gelatin and cornea as a function of laser wavelength (6.09, 6.27, or 6.43 μm), pulse energy (up to 3 mJ/pulse), and spot diameter (100 to 600 μm). We find modest wavelength dependence for ablation thresholds and collateral damage, with the lowest thresholds and least damage for 6.09 μm. We find a strong spot-size dependence for all metrics. When the beam is tightly focused (˜100-μm diameter), ablation requires more energy, is highly variable and less efficient, and can yield large zones of mechanical damage (for pulse energies >1 mJ). When the beam is softly focused (˜300-μm diameter), ablation proceeds at surgically relevant etch rates, with reasonable reproducibility (5% to 12% within a single sample), and little collateral damage. With improvements in pulse-energy stability, this prototype laser may have significant potential for soft-tissue surgical applications.

  8. Efficacy and predictability of soft tissue ablation using a prototype Raman-shifted alexandrite laser

    PubMed Central

    Kozub, John A.; Shen, Jin-H.; Joos, Karen M.; Prasad, Ratna; Shane Hutson, M.

    2015-01-01

    Abstract. Previous research showed that mid-infrared free-electron lasers could reproducibly ablate soft tissue with little collateral damage. The potential for surgical applications motivated searches for alternative tabletop lasers providing thermally confined pulses in the 6- to 7-μm wavelength range with sufficient pulse energy, stability, and reliability. Here, we evaluate a prototype Raman-shifted alexandrite laser. We measure ablation thresholds, etch rates, and collateral damage in gelatin and cornea as a function of laser wavelength (6.09, 6.27, or 6.43 μm), pulse energy (up to 3 mJ/pulse), and spot diameter (100 to 600 μm). We find modest wavelength dependence for ablation thresholds and collateral damage, with the lowest thresholds and least damage for 6.09 μm. We find a strong spot-size dependence for all metrics. When the beam is tightly focused (∼100-μm diameter), ablation requires more energy, is highly variable and less efficient, and can yield large zones of mechanical damage (for pulse energies >1 mJ). When the beam is softly focused (∼300-μm diameter), ablation proceeds at surgically relevant etch rates, with reasonable reproducibility (5% to 12% within a single sample), and little collateral damage. With improvements in pulse-energy stability, this prototype laser may have significant potential for soft-tissue surgical applications. PMID:26456553

  9. Improving surface EMG burst detection in infrahyoid muscles during swallowing using digital filters and discrete wavelet analysis.

    PubMed

    Restrepo-Agudelo, Sebastian; Roldan-Vasco, Sebastian; Ramirez-Arbelaez, Lina; Cadavid-Arboleda, Santiago; Perez-Giraldo, Estefania; Orozco-Duque, Andres

    2017-08-01

    Visual inspection is a widely used method for evaluating the surface electromyographic (sEMG) signal during deglutition, a process highly dependent on the examiner's expertise. It is desirable to have a less subjective, automated technique to improve onset detection in swallowing-related muscles, which have a low signal-to-noise ratio. In this work, we acquired sEMG with high baseline noise from the infrahyoid muscles of ten healthy adults during water swallowing tasks. Two methods were applied to find the combination of cutoff frequencies that achieves the most accurate onset detection: a method based on discrete wavelet decomposition, and fixed-step variation of the low and high cutoff frequencies of a digital bandpass filter. The Teager-Kaiser energy operator, root mean square and a simple threshold method were applied for both techniques. Results show a narrowing of the effective bandwidth relative to the parameters recommended in the literature for sEMG acquisition. Both level-3 decomposition with mother wavelet db4 and a bandpass filter with cutoff frequencies between 130 and 180 Hz were optimal for onset detection in infrahyoid muscles. The proposed methodologies recognized the onset time with predictive power above 0.95, similar to previous findings that were, however, obtained in larger and more superficial limb muscles. Copyright © 2017 Elsevier Ltd. All rights reserved.
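
    The Teager-Kaiser energy (TKE) onset detector used as one of the methods can be sketched as: compute psi[n] = x[n]^2 - x[n-1]*x[n+1], smooth it, and threshold against statistics of a rest-period baseline. The mean + j*std rule, the gain j, and the window length are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def tke(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def onset_detection(emg, baseline, j=8.0, win=32):
    """Single-threshold onset detection on the smoothed TKE profile, with the
    threshold taken as mean + j*std over a baseline (rest) recording.
    Returns (onset indices, boolean activity mask)."""
    kernel = np.ones(win) / win
    e = np.convolve(np.abs(tke(emg)), kernel, mode="same")
    b = np.convolve(np.abs(tke(baseline)), kernel, mode="same")
    thr = b.mean() + j * b.std()
    active = e > thr
    onsets = np.flatnonzero(active[1:] & ~active[:-1]) + 1   # rising edges
    return onsets, active
```

Because the TKE operator emphasises instantaneous amplitude-frequency energy, a contraction burst crosses the rest-derived threshold within roughly one smoothing window of its true start.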

  10. Shape Adaptive, Robust Iris Feature Extraction from Noisy Iris Images

    PubMed Central

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-01-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region, and features extracted from those parts are excluded in the matching step; however, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect of the shape-adaptive wavelet transform and the shape-adaptive Gabor wavelet for feature extraction on iris recognition performance. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, decreasing the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask-code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape-adaptive Gabor-wavelet technique improves the recognition rate. PMID:24696801

  11. Shape adaptive, robust iris feature extraction from noisy iris images.

    PubMed

    Ghodrati, Hamed; Dehghani, Mohammad Javad; Danyali, Habibolah

    2013-10-01

    In current iris recognition systems, the noise-removal step is used only to detect noisy parts of the iris region so that features extracted from them can be excluded in the matching step. Yet, depending on the filter structure used in feature extraction, the noisy parts may still influence relevant features. To the best of our knowledge, the effect of noise factors on feature extraction has not been considered in previous works. This paper investigates the effect on iris recognition performance of using the shape-adaptive wavelet transform and the shape-adaptive Gabor-wavelet transform for feature extraction. In addition, an effective noise-removal approach is proposed. The contribution is to detect eyelashes and reflections by calculating appropriate thresholds through a procedure called statistical decision making. The eyelids are segmented by a parabolic Hough transform in the normalized iris image, which reduces the computational burden by omitting the rotation term. The iris is localized by an accurate and fast algorithm based on a coarse-to-fine strategy. The principle of mask-code generation, which flags the noisy bits in an iris code so that they can be excluded in the matching step, is presented in detail. Experimental results show that the shape-adaptive Gabor-wavelet technique improves the recognition rate.

  12. Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.

    2017-10-01

    Supervised classification supports a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels in the image has proven beneficial for interpreting the image content, thus increasing classification accuracy. Denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multi-component denoising approach to increase the accuracy obtained when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced using a threshold. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free component. The computational cost of the classifiers, as well as that of the whole classification chain, is high, but it is reduced, achieving real-time behavior for some applications, through computation on NVIDIA multi-GPU platforms.
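The coefficient-reduction step — threshold the wavelet details, then reconstruct — can be sketched with a single-level 1D Haar transform (the paper uses a recursive separable 2D DWT; this reduced version only illustrates the principle):

```python
import math

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation and detail halves."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt (exact reconstruction when nothing is thresholded)."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) / s, (a - d) / s]
    return x

def denoise(x, thr):
    """Zero all detail coefficients with magnitude below thr, then reconstruct."""
    a, d = haar_dwt(x)
    d = [c if abs(c) >= thr else 0.0 for c in d]
    return haar_idwt(a, d)
```

With `thr=0` the transform pair reconstructs the input exactly (orthonormality); with a positive threshold, small detail coefficients — which mostly carry noise — are discarded.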

  13. Identification of a self-paced hitting task in freely moving rats based on adaptive spike detection from multi-unit M1 cortical signals

    PubMed Central

    Hammad, Sofyan H. H.; Farina, Dario; Kamavuako, Ernest N.; Jensen, Winnie

    2013-01-01

    Invasive brain–computer interfaces (BCIs) may prove to be a useful rehabilitation tool for severely disabled patients. Although some systems have been shown to work well in restricted laboratory settings, their usefulness must be tested in less controlled environments. Our objective was to investigate if a specific motor task could reliably be detected from multi-unit intra-cortical signals from freely moving animals. Four rats were trained to hit a retractable paddle (defined as a “hit”). Intra-cortical signals were obtained from electrodes placed in the primary motor cortex. First, the signal-to-noise ratio was increased by wavelet denoising. Action potentials were then detected using an adaptive threshold, counted in three consecutive time intervals, and used as features to classify either a “hit” or a “no-hit” (defined as an interval between two “hits”). We found that a “hit” could be detected with an accuracy of 75 ± 6% when wavelet denoising was applied, whereas the accuracy dropped to 62 ± 5% without prior denoising. We compared our approach with the common daily practice in BCI that consists of using a fixed, manually selected threshold for spike detection without denoising. The results showed the feasibility of detecting a motor task in a less restricted environment than commonly applied within invasive BCI research. PMID:24298254
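A common adaptive spike-detection rule estimates the background noise robustly from the median absolute signal value (σ ≈ median(|x|)/0.6745 under a Gaussian assumption) and sets the threshold to a multiple of it. The abstract does not specify the authors' exact rule, so the sketch below is an assumption using that standard estimator:

```python
def adaptive_threshold(x, k=4.0):
    """Robust noise estimate sigma = median(|x|)/0.6745; threshold k*sigma."""
    mags = sorted(abs(v) for v in x)
    n = len(mags)
    med = mags[n // 2] if n % 2 else 0.5 * (mags[n // 2 - 1] + mags[n // 2])
    return k * med / 0.6745

def detect_spikes(x, k=4.0):
    """Indices of local magnitude peaks exceeding the adaptive threshold."""
    thr = adaptive_threshold(x, k)
    return [i for i in range(1, len(x) - 1)
            if abs(x[i]) > thr
            and abs(x[i]) >= abs(x[i - 1]) and abs(x[i]) >= abs(x[i + 1])]
```

Because the median is insensitive to the (rare) large spike samples, the threshold tracks the noise floor rather than the spikes themselves — the usual motivation for this estimator over a plain standard deviation.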

  14. Communication in a noisy environment: Perception of one's own voice and speech enhancement

    NASA Astrophysics Data System (ADS)

    Le Cocq, Cecile

    Workers in noisy industrial environments are often confronted with communication problems. Many workers complain about not being able to communicate easily with their coworkers when they wear hearing protectors. As a consequence, they tend to remove their protectors, which exposes them to the risk of hearing loss. This communication problem is in fact twofold: first, hearing protectors modify the perception of one's own voice; second, they interfere with understanding speech from others. This double problem is examined in this thesis. When hearing protectors are worn, the modification of one's own voice perception is partly due to the occlusion effect produced when an earplug is inserted in the ear canal. This occlusion effect has two main consequences: first, physiological noises at low frequencies are better perceived; second, the perception of one's own voice is modified. In order to better understand this phenomenon, results from the literature are analyzed systematically, and a new method to quantify the occlusion effect is developed. Instead of stimulating the skull with a bone vibrator or asking the subject to speak, as is usually done in the literature, it was decided to excite the buccal cavity with an acoustic wave. The experiment was designed in such a way that the acoustic wave which excites the buccal cavity does not directly excite the external ear or the rest of the body. The measurement of the hearing threshold with open and occluded ear was used to quantify the subjective occlusion effect for an acoustic wave in the buccal cavity. These experimental results, as well as those reported in the literature, have led to a better understanding of the occlusion effect and an evaluation of the role of each internal path from the acoustic source to the inner ear. 
The intelligibility of speech from others is degraded both by the high sound levels of noisy industrial environments and by the attenuation of the speech signal due to hearing protectors. A possible solution to this problem is to denoise the speech signal and transmit it under the hearing protector. Many denoising techniques are available and are often used for denoising speech in telecommunications. In the framework of this thesis, denoising by wavelet thresholding is considered. A first study of "classical" wavelet denoising techniques was conducted in order to evaluate their performance in noisy industrial environments. The tested speech signals were corrupted by industrial noises over a wide range of signal-to-noise ratios. The denoised speech signals were evaluated with four criteria. A large database was obtained and analyzed with a selection algorithm designed for this purpose. This first study identified how the different parameters of the wavelet denoising method influence its quality, and identified the "classical" method giving the best performance in terms of denoising quality. It also generated ideas for designing a new thresholding rule suitable for wavelet denoising of speech in a noisy industrial environment. In a second study, this new thresholding rule is presented and evaluated. Its performance is better than that of the best "classical" method from the first study when the signal-to-noise ratio of the speech signal is between -10 dB and 15 dB.

  15. The Hilbert-Huang Transform-Based Denoising Method for the TEM Response of a PRBS Source Signal

    NASA Astrophysics Data System (ADS)

    Hai, Li; Guo-qiang, Xue; Pan, Zhao; Hua-sen, Zhong; Khan, Muhammad Younis

    2016-08-01

    The denoising process is critical in processing transient electromagnetic (TEM) sounding data. For the full-waveform pseudo-random binary sequence (PRBS) response, an inadequate noise estimate may result in an erroneous interpretation. We consider the Hilbert-Huang transform (HHT) and its application to suppress the noise in the PRBS response. The focus is on the thresholding scheme used to suppress the noise and on the analysis of the signal based on its Hilbert time-frequency representation. The method first decomposes the signal into intrinsic mode functions; then, inspired by the thresholding scheme used in wavelet analysis, an adaptive interval thresholding is applied that sets to zero all components of the intrinsic mode functions that are lower than a threshold related to the noise level. The algorithm is based on the characteristics of the PRBS response. The HHT-based denoising scheme is tested on synthetic and field data with different noise levels. The results show that the proposed method performs well in both denoising and detail preservation.
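The thresholding step can be sketched as follows, assuming the intrinsic mode functions (IMFs) have already been computed by empirical mode decomposition and that a per-IMF noise level is available — the EMD itself and the noise estimation are outside this sketch, and the k·σ rule is an illustrative assumption:

```python
def denoise_imfs(imfs, noise_sigmas, k=3.0):
    """Zero every IMF sample whose magnitude falls below k*sigma_i
    (a hard, interval-style threshold per IMF), then sum the survivors."""
    out = [0.0] * len(imfs[0])
    for imf, sigma in zip(imfs, noise_sigmas):
        thr = k * sigma
        for n, c in enumerate(imf):
            if abs(c) >= thr:
                out[n] += c
    return out
```

IMFs dominated by noise are suppressed almost entirely, while signal-carrying IMFs pass through and are recombined into the denoised response.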

  16. Patterns of motor recruitment can be determined using surface EMG.

    PubMed

    Wakeling, James M

    2009-04-01

    Previous studies have reported how different populations of motor units (MUs) can be recruited during dynamic and locomotor tasks. It was hypothesised that the higher-threshold units would contribute higher-frequency components to the sEMG spectra due to their faster conduction velocities, and thus that recruitment patterns increasing the proportion of active high-threshold units would lead to higher-frequency elements in the sEMG spectra. This idea was tested by using a model of varying recruitment coupled to a three-layer volume conductor model to generate a series of sEMG signals. The recruitment varied from (A) orderly recruitment, where the lowest-threshold MUs were initially activated and higher-threshold MUs were sequentially recruited as the contraction progressed; to (B) a recurrent inhibition model that started with orderly recruitment but, as the higher-threshold units were activated, inhibited the lower-threshold MUs; and (C) nine models with intermediate properties graded between these two extremes. The sEMG was processed using wavelet analysis and the spectral properties quantified by their mean frequency and an angle theta determined from the principal components of the spectra. Recruitment strategies that resulted in a greater proportion of faster MUs being active had a significantly lower theta and higher mean frequency.

  17. An iterative shrinkage approach to total-variation image restoration.

    PubMed

    Michailovich, Oleg V

    2011-05-01

    The problem of restoring digital images from degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs when the degradation is modeled by an ill-conditioned operator. In such a situation, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information, commonly referred to simply as priors, is essential for image restoration, rendering it stable and robust to noise. Moreover, using priors makes the recovered images exhibit plausible features of their original counterparts. In particular, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is defined by the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed, based upon the method of iterative shrinkage (also known as iterated thresholding). In the proposed method, TV-based image restoration is performed through a recursive application of two simple procedures, namely linear filtering and soft thresholding. The method can therefore be identified as belonging to the group of first-order algorithms, which are efficient in dealing with images of relatively large sizes. Another valuable feature of the proposed method is that it works directly with the TV functional rather than with its smoothed versions. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae. Finally, a number of standard image-deblurring examples are demonstrated, in which the proposed method provides restoration results of superior quality compared with sparse-wavelet deconvolution.
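The two elementary operations mentioned above — a linear (gradient) step and soft thresholding — appear in their plainest form in the generic ISTA iteration for the sparse least-squares problem min ||Ax − y||²/2 + λ||x||₁. This is the textbook iteration, not the paper's TV-specific filtering; `step` must be at most 1/L, where L is the largest eigenvalue of AᵀA:

```python
import math

def soft(v, t):
    """Elementwise soft thresholding: shrink toward zero by t."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in v]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def ista(A, y, lam, step, iters=100):
    """Iterative shrinkage: gradient step on ||Ax - y||^2/2, then soft threshold."""
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - ri for yi, ri in zip(y, matvec(A, x))]   # residual y - Ax
        g = matvec(At, r)                                  # gradient direction A^T r
        x = soft([xi + step * gi for xi, gi in zip(x, g)], lam * step)
    return x
```

For A = I the iteration converges in one step to soft(y, λ), which makes the role of the shrinkage operator easy to see.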

  18. Computation of the soft anomalous dimension matrix in coordinate space

    NASA Astrophysics Data System (ADS)

    Mitov, Alexander; Sterman, George; Sung, Ilmo

    2010-08-01

    We complete the coordinate space calculation of the three-parton correlation in the two-loop massive soft anomalous dimension matrix. The full answer agrees with the result found previously by a different approach. The coordinate space treatment of renormalized two-loop gluon exchange diagrams exhibits their color symmetries in a transparent fashion. We compare coordinate space calculations of the soft anomalous dimension matrix with massive and massless eikonal lines and examine its nonuniform limit at absolute threshold.

  19. Estimating phonation threshold pressure.

    PubMed

    Fisher, K V; Swank, P R

    1997-10-01

    Phonation threshold pressure (PTP) is the minimum subglottal pressure required to initiate vocal fold oscillation. Although potentially useful clinically, PTP is difficult to estimate noninvasively because of limitations to vocal motor control near the threshold of soft phonation. Previous investigators observed, for example, that trained subjects were unable to produce flat, consistent oral pressure peaks during /pae/ syllable strings when they attempted to phonate as softly as possible (Verdolini-Marston, Titze, & Druker, 1990). The present study aimed to determine whether nasal airflow or vowel context affected phonation threshold pressure as estimated from oral pressure (Smitheran & Hixon, 1981) in 5 untrained female speakers with normal velopharyngeal and voice function. Nasal airflow during /p/ occlusion was observed for 3 of 5 participants when they attempted to phonate near threshold pressure. When the nose was occluded, nasal airflow was reduced or eliminated during /p/; however, individuals then evidenced compensatory changes in glottal adduction and/or respiratory effort that may be expected to alter PTP estimates. Results demonstrate the importance of monitoring nasal flow (or the flow zero point in undivided masks) when obtaining PTP measurements noninvasively. Results also highlight the need to pursue improved methods for noninvasive estimation of PTP.

  20. A novel method for 3D measurement of RFID multi-tag network based on matching vision and wavelet

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiao; Yu, Xiaolei; Zhao, Zhimin; Wang, Donghua; Zhang, Wenjie; Liu, Zhenlu; Lu, Dongsheng; Dong, Dingbang

    2018-07-01

    In the field of radio frequency identification (RFID), the three-dimensional (3D) distribution of RFID multi-tag networks has a significant impact on their reading performance. At the same time, in order to achieve anti-collision behavior of RFID multi-tag networks in practical engineering applications, the 3D distribution of the network must be measured. In this paper, a novel method for the 3D measurement of RFID multi-tag networks is proposed. A dual-CCD system (vertical and horizontal cameras) is used to obtain images of the RFID multi-tag network from different angles. The wavelet threshold denoising method is then used to remove noise from the obtained images. Template matching is used to determine the two-dimensional coordinates and the vertical coordinate of each tag, from which the 3D coordinates of each tag are obtained. Finally, a model of the nonlinear relation between the 3D coordinate distribution of the RFID multi-tag network and the corresponding reading distance is established using a wavelet neural network. The experimental results show that the average prediction relative error is 0.71% and the time cost is 2.17 s. Both values are smaller than those of the particle swarm optimization neural network and the genetic algorithm-back propagation neural network, and the time cost of the wavelet neural network is about 1% of that of the other two methods. The proposed method can improve the real-time performance of RFID multi-tag networks and the overall dynamic performance of multi-tag networks.

  1. Automated wavelet denoising of photoacoustic signals for circulating melanoma cell detection and burn image reconstruction.

    PubMed

    Holan, Scott H; Viator, John A

    2008-06-21

    Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves, induced by irradiating a biological sample with laser light, are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6 mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signal lengths, we used the maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in a 22% improvement in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin, and the detection of single cancer cells in blood samples.
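Level-independent universal thresholding follows the Donoho-Johnstone rule λ = σ√(2 ln N), with σ commonly estimated from the median absolute value of the finest-scale detail coefficients. A minimal sketch of that rule (the 0.6745 constant assumes Gaussian noise; the exact estimator used by the authors is not stated in the abstract):

```python
import math

def universal_threshold(detail):
    """Donoho-Johnstone universal threshold lambda = sigma*sqrt(2 ln N),
    with sigma estimated from the median absolute detail coefficient."""
    mags = sorted(abs(c) for c in detail)
    n = len(mags)
    mad = mags[n // 2] if n % 2 else 0.5 * (mags[n // 2 - 1] + mags[n // 2])
    sigma = mad / 0.6745            # Gaussian-noise scaling of the median
    return sigma * math.sqrt(2.0 * math.log(n))
```

The threshold grows only as √(ln N), so it remains usable across the varying record lengths that the MODWT accommodates.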

  2. Multiplexed wavelet transform technique for detection of microcalcification in digitized mammograms.

    PubMed

    Mini, M G; Devassia, V P; Thomas, Tessamma

    2004-12-01

    Wavelet transform (WT) is a potential tool for the detection of microcalcifications, an early sign of breast cancer. This article describes the implementation, and evaluates the performance, of two novel WT-based schemes for the automatic detection of clustered microcalcifications in digitized mammograms. Employing a one-dimensional WT technique that utilizes the pseudo-periodicity property of image sequences, the proposed algorithms achieve high detection efficiency and low processing memory requirements. The detection is achieved from the parent-child relationship between the zero-crossings (Marr-Hildreth, or M-H, detector) or local extrema (Canny detector) of the WT coefficients at different levels of decomposition. The detected pixels are weighted before the inverse transform is computed, and they are segmented by simple global gray-level thresholding. Both detectors produce 95% detection sensitivity, although the M-H detector yields more false positives. The M-H detector preserves shape information and provides better detection sensitivity for mammograms containing widely distributed calcifications.

  3. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach

    PubMed Central

    Tan, Robin; Perkowski, Marek

    2017-01-01

    Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total of 99.52% subject verification accuracy, better than the 98.33% accuracy from random forest alone and the 96.31% accuracy from the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems. PMID:28230745

  4. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach.

    PubMed

    Tan, Robin; Perkowski, Marek

    2017-02-20

    Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total of 99.52% subject verification accuracy, better than the 98.33% accuracy from random forest alone and the 96.31% accuracy from the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems.

  5. The Application of Time-Frequency Methods to HUMS

    NASA Technical Reports Server (NTRS)

    Pryor, Anna H.; Mosher, Marianne; Lewicki, David G.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper reports the study of four time-frequency transforms applied to vibration signals and presents a new metric for comparing them for fault detection. The four methods to be described and compared are the Short Time Frequency Transform (STFT), the Choi-Williams Distribution (WV-CW), the Continuous Wavelet Transform (CWT) and the Discrete Wavelet Transform (DWT). Vibration data of bevel gear tooth fatigue cracks, under a variety of operating load levels, are analyzed using these methods. The new metric for automatic fault detection is developed and can be produced from any systematic numerical representation of the vibration signals. This new metric reveals indications of gear damage with all of the methods on this data set. Analysis with the CWT detects mechanical problems with the test rig not found with the other transforms. The WV-CW and CWT use considerably more resources than the STFT and the DWT. More testing of the new metric is needed to determine its value for automatic fault detection and to develop methods of setting the threshold for the metric.

  6. Skin surface removal on breast microwave imagery using wavelet multiscale products

    NASA Astrophysics Data System (ADS)

    Flores-Tapia, Daniel; Thomas, Gabriel; Pistorius, Stephen

    2006-03-01

    In many parts of the world, breast cancer is a leading cause of mortality among women and the major cause of cancer death, second only to lung cancer. In recent years, microwave imaging has shown its potential as an alternative approach for breast cancer detection. Although advances have improved the likelihood of developing an early detection system based on this technology, there are still limitations. One of these limitations is that target responses are often obscured by surface reflections. Contrary to ground penetrating radar applications, a simple reference subtraction cannot easily be applied to alleviate this problem, because the breast skin composition differs between patients. A novel technique for the removal of these high-intensity surface reflections is proposed in this paper. The algorithm is based on the multiplication of adjacent wavelet subbands in order to enhance target echoes while reducing skin reflections. In these multiscale products, target signatures can be effectively distinguished from surface reflections. A simple threshold is applied to the signal in the wavelet domain in order to eliminate the skin responses. The resulting signal is reconstructed in the spatial domain to obtain a focused image. The proposed algorithm yielded promising results when applied to real data obtained from a phantom that mimics the dielectric properties of breast, cancer and skin tissues.
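The multiscale-product idea — edge-like responses reinforce across adjacent scales while noise decorrelates — can be illustrated in 1D with derivative-like details at two smoothing scales. This is a sketch of the principle only, not the paper's subband construction:

```python
def smooth(x, w):
    """Moving-average smoothing with half-window w (edges truncated)."""
    return [sum(x[max(0, i - w): i + w + 1]) / len(x[max(0, i - w): i + w + 1])
            for i in range(len(x))]

def multiscale_product(x):
    """Pointwise product of first differences at two smoothing scales;
    a real edge produces a large same-sign response at both scales."""
    d1 = [b - a for a, b in zip(x, x[1:])]       # fine-scale detail
    xs = smooth(x, 2)
    d2 = [b - a for a, b in zip(xs, xs[1:])]     # coarse-scale detail
    return [u * v for u, v in zip(d1, d2)]
```

A simple threshold on the product then separates genuine discontinuities (the target/skin interface in the paper's setting) from noise, whose products are small and of random sign.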

  7. [Laser Raman spectrum analysis of carbendazim pesticide].

    PubMed

    Wang, Xiao-bin; Wu, Rui-mei; Liu, Mu-hua; Zhang, Lu-ling; Lin, Lei; Yan, Lin-yuan

    2014-06-01

    Raman signals of solid and liquid carbendazim pesticide were collected with a laser Raman spectrometer. The acquired Raman spectrum of solid carbendazim was preprocessed by wavelet analysis, and the optimal combination of wavelet denoising parameters was selected through a mixed orthogonal test. The results showed that the best effect, with a signal-to-noise ratio (SNR) of 62.483, was obtained with the db2 wavelet function, decomposition level 2, the 'rigrsure' threshold selection rule, and the 'sln' rescaling mode. According to the vibration modes of the different functional groups, the de-noised Raman bands could be divided into three regions: 1 400-2 000, 700-1 400 and 200-700 cm(-1). The de-noised Raman bands were then assigned and analyzed, and the characteristic vibrational modes were obtained in the different wavenumber ranges. Strong Raman signals were observed at 619, 725, 964, 1 022, 1 265, 1 274 and 1 478 cm(-1); these are characteristic Raman peaks of solid carbendazim pesticide. Characteristic Raman peaks were found at 629, 727, 1 001, 1 219, 1 258 and 1 365 cm(-1) in the spectrum of liquid carbendazim; these basically tally with those of the solid. The results can provide a basis for the rapid screening of pesticide residues in food and agricultural products based on Raman spectra.

  8. Finding the multipath propagation of multivariable crude oil prices using a wavelet-based network approach

    NASA Astrophysics Data System (ADS)

    Jia, Xiaoliang; An, Haizhong; Sun, Xiaoqi; Huang, Xuan; Gao, Xiangyun

    2016-04-01

    The globalization and regionalization of crude oil trade inevitably give rise to differences in crude oil prices. Understanding the pattern of the mutual propagation of crude oil prices is essential for analyzing the development of global oil trade. Previous research has focused mainly on the fuzzy long- or short-term one-to-one propagation of bivariate oil prices, generally ignoring the various patterns of periodical multivariate propagation. This study presents a wavelet-based network approach to help uncover the multipath propagation of multivariable crude oil prices in a joint time-frequency period. The weekly oil spot prices of the OPEC member states from June 1999 to March 2011 are adopted as the sample data. First, wavelet analysis was used to extract subseries, based on an optimal decomposition scale, that describe the periodical features of the original oil price time series. Second, a complex network model was constructed, based on an optimal threshold selection, to describe the structural features of the multivariable oil prices. Third, Bayesian network analysis (BNA) was conducted to find probabilistic causal relationships based on the periodical structural features, thereby describing the various patterns of periodical multivariable propagation. Finally, the significance of the leading and intermediary oil prices is discussed. These findings are beneficial for the implementation of periodical target-oriented pricing policies and investment strategies.
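The network-construction step — linking price series whose pairwise association exceeds a threshold — can be sketched with plain Pearson correlation. The paper applies this to wavelet subseries and optimizes the threshold; both refinements are omitted here:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def build_network(series, thr):
    """Edges (i, j) between series whose |correlation| meets the threshold."""
    return [(i, j)
            for i in range(len(series))
            for j in range(i + 1, len(series))
            if abs(pearson(series[i], series[j])) >= thr]
```

The threshold controls network density: too low and the graph is near-complete, too high and it fragments, which is why the paper treats its selection as an optimization problem.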

  9. Compressively sampled MR image reconstruction using generalized thresholding iterative algorithm

    NASA Astrophysics Data System (ADS)

    Elahi, Sana; kaleem, Muhammad; Omer, Hammad

    2018-01-01

    Compressed sensing (CS) is an emerging area of interest in Magnetic Resonance Imaging (MRI). CS is used for the reconstruction of the images from a very limited number of samples in k-space. This significantly reduces the MRI data acquisition time. One important requirement for signal recovery in CS is the use of an appropriate non-linear reconstruction algorithm. It is a challenging task to choose a reconstruction algorithm that would accurately reconstruct the MR images from the under-sampled k-space data. Various algorithms have been used to solve the system of non-linear equations for better image quality and reconstruction speed in CS. In the recent past, the iterative soft thresholding algorithm (ISTA) has been introduced in CS-MRI. This algorithm directly cancels the incoherent artifacts produced because of the undersampling in k-space. This paper introduces an improved iterative algorithm based on the p-thresholding technique for CS-MRI image reconstruction. The use of the p-thresholding function promotes sparsity in the image, which is a key factor for CS-based image reconstruction. The p-thresholding based iterative algorithm is a modification of ISTA, and minimizes non-convex functions. It has been shown that the proposed p-thresholding iterative algorithm can be used effectively to recover the fully sampled image from the under-sampled data in MRI. The performance of the proposed method is verified using simulated and actual MRI data taken at St. Mary's Hospital, London. The quality of the reconstructed images is measured in terms of peak signal-to-noise ratio (PSNR), artifact power (AP), and structural similarity index measure (SSIM). The proposed approach shows improved performance when compared to other iterative algorithms based on log thresholding, soft thresholding and hard thresholding techniques at different reduction factors.
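One commonly used generalized shrinkage family — an assumption for illustration; the paper's exact p-thresholding function may differ — reduces to soft thresholding at p = 1 and shrinks large coefficients less as p decreases below 1, approximating the non-convex lp penalty:

```python
import math

def p_threshold(c, lam, p):
    """Generalized shrinkage: p = 1 recovers the soft threshold
    max(|c| - lam, 0); p < 1 shrinks large coefficients less."""
    if c == 0.0:
        return 0.0
    mag = abs(c) - lam ** (2.0 - p) * abs(c) ** (p - 1.0)
    return math.copysign(max(mag, 0.0), c)
```

Replacing the `soft` step of ISTA with this operator gives the kind of p-thresholding iteration the abstract describes; the milder shrinkage of large coefficients is what reduces the bias of plain soft thresholding.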

  10. Percutaneous Soft Tissue Release for Treating Chronic Recurrent Myofascial Pain Associated with Lateral Epicondylitis: 6 Case Studies

    PubMed Central

    Lin, Ming-Ta; Chou, Li-Wei; Chen, Hsin-Shui; Kao, Mu-Jung

    2012-01-01

    Objective. The purpose of this pilot study is to investigate the effectiveness of percutaneous soft tissue release for the treatment of recurrent myofascial pain in the forearm due to recurrent lateral epicondylitis. Methods. Six patients with chronic recurrent pain in the forearm with myofascial trigger points (MTrPs) due to chronic lateral epicondylitis were treated with percutaneous soft tissue release using Lin's technique. Pain intensity (measured with a numerical pain rating scale), pressure pain threshold (measured with a pressure algometer), and grasping strength (measured with a hand dynamometer) were assessed before, immediately after, and 3 months and 12 months after the treatment. Results. For every individual case, the pain intensity was significantly reduced (P < 0.01) and the pressure pain threshold and the grasping strength were significantly increased (P < 0.01) immediately after the treatment. This significant improvement lasted for at least one year. Conclusions. It is suggested that percutaneous soft tissue release can be used for treating chronic recurrent lateral epicondylitis to avoid recurrence if other treatments, such as oral anti-inflammatory medicine, physical therapy, or local steroid injection, cannot control the recurrent pain. PMID:23243428

  11. SU-C-304-05: Use of Local Noise Power Spectrum and Wavelets in Comprehensive EPID Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Gopal, A; Yan, G

    2015-06-15

    Purpose: As EPIDs are increasingly used for IMRT QA and real-time treatment verification, comprehensive quality assurance (QA) of EPIDs becomes critical. Current QA with phantoms such as the Las Vegas and PIPSpro™ can fail in the early detection of EPID artifacts. Beyond image quality assessment, we propose a quantitative methodology using the local noise power spectrum (NPS) to characterize image noise and the wavelet transform to identify bad pixels and inter-subpanel flat-fielding artifacts. Methods: A total of 93 image sets including bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Quantitative metrics such as the modulation transfer function (MTF), NPS and detective quantum efficiency (DQE) were computed for each image set. Local 2D NPS was calculated for each subpanel. A 1D NPS was obtained by radially averaging the 2D NPS and fitted to a power-law function. The R-square and slope of the linear regression analysis were used for panel performance assessment. The Haar wavelet transform was employed to identify pixel defects and non-uniform gain correction across subpanels. Results: Overall image quality was assessed with DQE based on empirically derived area under curve (AUC) thresholds. Using linear regression analysis of the 1D NPS, panels with acceptable flat fielding were indicated by an r-square between 0.8 and 1, and slopes of −0.4 to −0.7. However, for panels requiring flat-fielding recalibration, r-square values less than 0.8 and slopes from +0.2 to −0.4 were observed. The wavelet transform successfully identified pixel defects and inter-subpanel flat-fielding artifacts. Standard QA with the Las Vegas and PIPSpro phantoms failed to detect these artifacts. Conclusion: The proposed QA methodology is promising for the early detection of imaging and dosimetric artifacts of EPIDs. Local NPS can accurately characterize the noise level within each subpanel, while the wavelet transform can detect bad pixels and inter-subpanel flat-fielding artifacts.
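
    The 1D-NPS check described above can be sketched as follows: radially average the 2D power spectrum of a detrended patch, then fit a power law in log-log coordinates and inspect the slope and r-square. This is a minimal sketch with illustrative function names, not the authors' code; a white-noise patch should give a slope near zero:

```python
import numpy as np

def radial_nps(patch, pixel_pitch=1.0):
    """Radially averaged 1D noise power spectrum of a detrended 2D patch."""
    patch = patch - patch.mean()
    n = patch.shape[0]
    nps2d = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2 / (n * n)
    fy, fx = np.indices(nps2d.shape) - n // 2
    r = np.hypot(fx, fy).astype(int)
    # Average all 2D NPS samples falling at the same integer radius.
    radial = np.bincount(r.ravel(), weights=nps2d.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(radial.size) / (n * pixel_pitch)
    return freqs, radial

def powerlaw_slope(freqs, radial):
    """Slope and r^2 of a linear fit to the 1D NPS in log-log coordinates."""
    mask = (freqs > 0) & (radial > 0)
    lx, ly = np.log(freqs[mask]), np.log(radial[mask])
    slope, intercept = np.polyfit(lx, ly, 1)
    fit = slope * lx + intercept
    r2 = 1.0 - np.sum((ly - fit) ** 2) / np.sum((ly - ly.mean()) ** 2)
    return slope, r2

# Demo: a flat (white-noise) patch has an approximately flat 1D NPS.
rng = np.random.default_rng(1)
white = rng.standard_normal((128, 128))
freqs, radial = radial_nps(white)
slope, r2 = powerlaw_slope(freqs, radial)
```

    A panel assessment in the spirit of the abstract would compute this per subpanel and compare the fitted slope and r-square against the reported acceptance ranges.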

  12. Higgs boson gluon-fusion production beyond threshold in N3LO QCD

    DOE PAGES

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...

    2015-03-18

    In this study, we compute the gluon fusion Higgs boson cross-section at N3LO through the second term in the threshold expansion. This calculation constitutes a major milestone towards the full N3LO cross section. Our result has the best formal accuracy in the threshold expansion currently available, and includes contributions from collinear regions besides subleading corrections from soft and hard regions, as well as certain logarithmically enhanced contributions for general kinematics. We use our results to perform a critical appraisal of the validity of the threshold approximation at N3LO in perturbative QCD.

  13. Beauty and charm production in fixed target experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidonakis, Nikolaos; Vogt, Ramona

    We present calculations of NNLO threshold corrections for beauty and charm production in π⁻p and pp interactions at fixed-target experiments. Recent calculations for heavy quark hadroproduction have included next-to-next-to-leading-order (NNLO) soft-gluon corrections [1] to the double differential cross section from threshold resummation techniques [2]. These corrections are important for near-threshold beauty and charm production at fixed-target experiments, including HERA-B and some of the current and future heavy ion experiments.

  14. Soft-solution route to ZnO nanowall array with low threshold power density

    NASA Astrophysics Data System (ADS)

    Jang, Eue-Soon; Chen, Xiaoyuan; Won, Jung-Hee; Chung, Jae-Hun; Jang, Du-Jeon; Kim, Young-Woon; Choy, Jin-Ho

    2010-07-01

    ZnO nanowall array (ZNWA) has been directionally grown on a buffer layer of ZnO nanoparticles dip-coated on a Si wafer by a soft solution process. Nanowalls on the substrate have the most suitable shape and orientation not only as an optical trap but also as an optical waveguide due to their unique growth habit, V[011¯0]≫V[0001]≈V[0001¯]. Consequently, stimulated emission at 384 nm through the nanowalls is generated at a threshold power density of only 25 kW/cm2. Such UV lasing properties are superior to those of previously reported ZnO nanorod arrays. Moreover, there is no green (defect) emission, owing to the mild procedure used to synthesize ZNWA.

  15. A Fast and On-Machine Measuring System Using the Laser Displacement Sensor for the Contour Parameters of the Drill Pipe Thread.

    PubMed

    Dong, Zhixu; Sun, Xingwei; Chen, Changzheng; Sun, Mengnan

    2018-04-13

    The inconvenient loading and unloading of a long and heavy drill pipe makes it difficult to measure the contour parameters of its threads at both ends. To solve this problem, in this paper we take the SCK230 drill pipe thread-repairing machine tool as a carrier to design and implement a fast, on-machine measuring system based on a laser probe. This system drives a laser displacement sensor to acquire the contour data of a given axial section of the thread by using the servo function of a CNC machine tool. To correct the sensor's measurement errors caused by the measuring point inclination angle, an inclination error model is built to compensate the data in real time. To better suppress random error interference while preserving the real contour information, a new wavelet threshold function is proposed for wavelet threshold denoising of the data. The discrete data after denoising are segmented according to the geometrical characteristics of the drill pipe thread, and the regression model of the contour data in each section is fitted using the method of weighted total least squares (WTLS). The thread parameters are then calculated in real time to judge the processing quality. Inclination error experiments show that the proposed compensation model is accurate and effective, and that it can improve the data acquisition accuracy of the sensor. Simulation results indicate that the improved threshold function has better continuity and self-adaptability, which guarantees the denoising effect while avoiding the complete elimination of real data distorted by random errors. Additionally, NC50 thread-testing experiments show that the proposed on-machine measuring system can complete the measurement of a 25 mm thread in 7.8 s, with a measurement accuracy of ±8 μm and a repeatability limit ≤ 4 μm (high repeatability); hence, the accuracy and efficiency of measurement are both improved.
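
    The paper's new threshold function is not given in the abstract. As a hedged illustration of the general idea — a thresholding rule intermediate between hard and soft applied to wavelet detail coefficients — here is a single-level Haar denoiser using the classical non-negative garrote rule; all names and the demo signal are illustrative:

```python
import numpy as np

def haar_dwt(x):
    # One-level orthonormal Haar transform: approximation and detail bands.
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def garrote(d, lam):
    # Non-negative garrote: continuous like soft thresholding,
    # but with less bias on large coefficients (closer to hard).
    safe = np.where(d == 0, 1.0, d)  # avoid division by zero
    return np.where(np.abs(d) > lam, d - lam ** 2 / safe, 0.0)

def denoise(x, lam):
    a, d = haar_dwt(x)
    return haar_idwt(a, garrote(d, lam))

# Demo: a smooth profile with additive noise; thresholding the detail band
# removes much of the noise while barely touching the underlying shape.
rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 2 * np.pi, 256))
noisy = clean + 0.1 * rng.standard_normal(256)
den = denoise(noisy, lam=0.2)
```

    An "improved" threshold function in the paper's sense would replace the garrote with a rule tuned for continuity and self-adaptability; a multi-level transform applies the same rule at each scale.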

  16. A Fast and On-Machine Measuring System Using the Laser Displacement Sensor for the Contour Parameters of the Drill Pipe Thread

    PubMed Central

    Sun, Xingwei; Chen, Changzheng; Sun, Mengnan

    2018-01-01

    The inconvenient loading and unloading of a long and heavy drill pipe makes it difficult to measure the contour parameters of its threads at both ends. To solve this problem, in this paper we take the SCK230 drill pipe thread-repairing machine tool as a carrier to design and implement a fast, on-machine measuring system based on a laser probe. This system drives a laser displacement sensor to acquire the contour data of a given axial section of the thread by using the servo function of a CNC machine tool. To correct the sensor's measurement errors caused by the measuring point inclination angle, an inclination error model is built to compensate the data in real time. To better suppress random error interference while preserving the real contour information, a new wavelet threshold function is proposed for wavelet threshold denoising of the data. The discrete data after denoising are segmented according to the geometrical characteristics of the drill pipe thread, and the regression model of the contour data in each section is fitted using the method of weighted total least squares (WTLS). The thread parameters are then calculated in real time to judge the processing quality. Inclination error experiments show that the proposed compensation model is accurate and effective, and that it can improve the data acquisition accuracy of the sensor. Simulation results indicate that the improved threshold function has better continuity and self-adaptability, which guarantees the denoising effect while avoiding the complete elimination of real data distorted by random errors. Additionally, NC50 thread-testing experiments show that the proposed on-machine measuring system can complete the measurement of a 25 mm thread in 7.8 s, with a measurement accuracy of ±8 μm and a repeatability limit ≤ 4 μm (high repeatability); hence, the accuracy and efficiency of measurement are both improved. PMID:29652836

  17. The development and application of an injury prediction model for noncontact, soft-tissue injuries in elite collision sport athletes.

    PubMed

    Gabbett, Tim J

    2010-10-01

    Limited information exists on the training dose-response relationship in elite collision sport athletes. In addition, no study has developed an injury prediction model for collision sport athletes. The purpose of this study was to develop an injury prediction model for noncontact, soft-tissue injuries in elite collision sport athletes. Ninety-one professional rugby league players participated in this 4-year prospective study. This study was conducted in 2 phases. Firstly, training load and injury data were prospectively recorded over 2 competitive seasons in elite collision sport athletes. Training load and injury data were modeled using a logistic regression model with a binomial distribution (injury vs. no injury) and logit link function. Secondly, training load and injury data were prospectively recorded over a further 2 competitive seasons in the same cohort of elite collision sport athletes. An injury prediction model based on planned and actual training loads was developed and implemented to determine if noncontact, soft-tissue injuries could be predicted and therefore prevented in elite collision sport athletes. Players were 50-80% likely to sustain a preseason injury within the training load range of 3,000-5,000 units. These training load 'thresholds' were considerably reduced (1,700-3,000 units) in the late-competition phase of the season. A total of 159 noncontact, soft-tissue injuries were sustained over the latter 2 seasons. The percentage of true positive predictions was 62.3% (n = 121), whereas the total number of false positive and false negative predictions was 20 and 18, respectively. Players that exceeded the training load threshold were 70 times more likely to test positive for noncontact, soft-tissue injury, whereas players that did not exceed the training load threshold were injured 1/10 as often. 
These findings provide information on the training dose-response relationship and a scientific method of monitoring and regulating training load in elite collision sport athletes.
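
    The dose-response relationship described above is a logistic model of injury likelihood as a function of training load; a minimal sketch with illustrative coefficients (not the study's fitted values):

```python
import math

def injury_probability(load, intercept=-2.1, slope=0.0007):
    """Logistic dose-response: P(injury) = 1 / (1 + exp(-(b0 + b1 * load))).
    The coefficients here are hypothetical, chosen only for illustration."""
    z = intercept + slope * load
    return 1.0 / (1.0 + math.exp(-z))

def exceeds_threshold(planned_load, threshold=3000):
    # Flag planned sessions that cross a phase-specific training-load threshold,
    # e.g. the preseason range reported above (3,000-5,000 units).
    return planned_load > threshold

p_low = injury_probability(1000)   # well below the preseason range
p_high = injury_probability(5000)  # at the top of the preseason range
```

    In the study's two-phase design, a model like this would be fitted on the first two seasons and then applied to planned loads in later seasons to flag at-risk sessions before they occur.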

  18. Just Noticeable Distortion Model and Its Application in Color Image Watermarking

    NASA Astrophysics Data System (ADS)

    Liu, Kuo-Cheng

    In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator is an extension of the perceptual model used in image coding for grayscale images. In addition to the visual masking effects computed coefficient by coefficient from the luminance content and texture of grayscale images, the cross-masking effect produced by the interaction between luminance and chrominance components, and the effect of the variance within the local region of the target coefficient, are investigated so that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Robustness and transparency of the watermarking scheme are obtained by means of the proposed approach of embedding the maximum-strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, which inserts watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining watermark transparency.
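
    The embed-up-to-the-JND-limit idea can be sketched with a generic additive scheme: each ±1 watermark bit perturbs a wavelet coefficient by at most that coefficient's JND value. The names, the random "JND profile" and the non-blind detector below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def embed(coeffs, jnd, bits, strength=0.9):
    """Add a +/-1 watermark to wavelet coefficients; the change to each
    coefficient is capped by its just-noticeable-distortion value."""
    w = np.where(bits > 0, 1.0, -1.0)
    return coeffs + strength * jnd * w

def detect(watermarked, original, jnd):
    # Non-blind detector: the sign of the residual recovers each bit.
    return (watermarked - original) / jnd > 0

rng = np.random.default_rng(3)
coeffs = rng.standard_normal(64) * 10   # stand-in wavelet coefficients
jnd = 0.5 + rng.random(64)              # hypothetical per-coefficient JND profile
bits = rng.integers(0, 2, 64)
marked = embed(coeffs, jnd, bits)
recovered = detect(marked, coeffs, jnd)
```

    The design point is that stronger (more robust) embedding is obtained exactly where the HVS model says distortion is least visible, keeping the marked image perceptually lossless.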

  19. The Soft Rock Socketed Monopile with Creep Effects - A Reliability Approach based on Wavelet Neural Networks

    NASA Astrophysics Data System (ADS)

    Kozubal, Janusz; Tomanovic, Zvonko; Zivaljevic, Slobodan

    2016-09-01

    In the present study, a numerical model of a pile embedded in marl, described by a time-dependent constitutive law based on laboratory tests, is proposed. The solution extends the state of knowledge of the monopile loaded by a horizontal force at its head, with force values varying randomly in time. The investigated reliability problem is defined by the union of failure events, each given by excessive maximal horizontal displacement of the pile head during a loading period. Abaqus has been used for modeling the presented task, with a two-layered viscoplastic model for the marl. The mechanical parameters of both parts of the model, plastic and rheological, were calibrated against the creep laboratory test results. An important aspect of the problem is the reliability analysis of a monopile in a complex environment under random sequences of loads, which helps in understanding the role of viscosity in the behavior of rock foundations. Due to the lack of analytical solutions, the computations were performed using the response surface method in conjunction with a wavelet neural network, an approach well suited to time sequences and the description of nonlinear phenomena.

  20. Wavelet Scale Analysis of Mesoscale Convective Systems for Detecting Deep Convection From Infrared Imagery

    NASA Astrophysics Data System (ADS)

    Klein, Cornelia; Belušić, Danijel; Taylor, Christopher M.

    2018-03-01

    Mesoscale convective systems (MCSs) are frequently associated with rainfall extremes and are expected to further intensify under global warming. However, despite the significant impact of such extreme events, the dominant processes favoring their occurrence are still under debate. Meteosat geostationary satellites provide unique long-term subhourly records of cloud top temperatures, making it possible to track changes in MCS structures that could be linked to rainfall intensification. Focusing on West Africa, we show that Meteosat cloud top temperatures are a useful proxy for rainfall intensities, as derived from snapshots from the Tropical Rainfall Measuring Mission 2A25 product: MCSs larger than 15,000 km2 at a temperature threshold of -40°C are found to produce 91% of all extreme rainfall occurrences in the study region, with 80% of the storms producing extreme rain when their minimum temperature drops below -80°C. Furthermore, we present a new method based on the 2-D continuous wavelet transform to explore the relationship between cloud top temperature and rainfall intensity for subcloud features at different length scales. The method shows great potential for separating convective and stratiform cloud parts when combining information on temperature and scale, improving on the common approach of using a temperature threshold only. We find that below -80°C, every fifth pixel is associated with deep convection. This frequency is doubled when looking at subcloud features smaller than 35 km. Scale analysis of subcloud features can thus help to better exploit cloud top temperature data sets, which provide much more spatiotemporal detail of MCS characteristics than available rainfall data sets alone.

  1. Texture orientation-based algorithm for detecting infrared maritime targets.

    PubMed

    Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai

    2015-05-20

    Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters such as ocean waves, clouds or sea fog usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the intersubband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that compared with traditional algorithms, this algorithm suppresses background clutter much better and achieves better single-frame detection of infrared maritime targets. In addition, to guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are strongly supported by experimental data acquired under different environmental conditions.

  2. Implementation and performance evaluation of acoustic denoising algorithms for UAV

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ahmed Sony Kamal

    Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and classifying the target audio signal effectively remain major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and the Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated on average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered superior performance compared to the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance is more robust than DWT's across various noise types when classifying target audio signals.
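
    The Adaptive LMS approach evaluated above can be sketched as a classic adaptive noise canceller: a reference input correlated with the noise is filtered to predict, and subtract, the noise in the primary input. This is a minimal sketch under assumed signal and channel models, not the thesis implementation:

```python
import numpy as np

def lms_denoise(primary, reference, mu=0.005, order=8):
    """Adaptive LMS noise canceller: filter the noise-correlated reference
    to predict the noise in the primary input, then subtract it."""
    w = np.zeros(order)
    out = np.zeros(primary.size)
    for n in range(order - 1, primary.size):
        x = reference[n - order + 1:n + 1][::-1]  # current and past reference samples
        e = primary[n] - w @ x                    # error = estimate of the clean sample
        w = w + 2 * mu * e * x                    # steepest-descent weight update
        out[n] = e
    return out

# Demo under assumed models: a tone buried in noise that reaches the
# primary sensor through a short (2-tap) channel.
rng = np.random.default_rng(4)
t = np.arange(4000)
signal = np.sin(2 * np.pi * t / 50)
noise = rng.standard_normal(t.size)
primary = signal + np.convolve(noise, [0.6, 0.3], mode="same")
cleaned = lms_denoise(primary, noise)
```

    Because the reference is uncorrelated with the wanted tone, the filter converges to the noise channel and the tone passes through in the error signal.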

  3. A hybrid spatial-spectral denoising method for infrared hyperspectral images using 2DPCA

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Ma, Yong; Mei, Xiaoguang; Fan, Fan

    2016-11-01

    The traditional noise reduction methods for 3-D infrared hyperspectral images typically operate independently in either the spatial or spectral domain, and such methods overlook the relationship between the two domains. To address this issue, we propose a hybrid spatial-spectral method in this paper to link both domains. First, principal component analysis and bivariate wavelet shrinkage are performed in the 2-D spatial domain. Second, a 2-D principal component analysis transformation is conducted in the 1-D spectral domain to separate the basic components from the detail ones. The energy distribution of noise is unaffected by orthogonal transformation; therefore, the signal-to-noise ratio of each component is used as a criterion to determine whether a component should be protected from over-denoising or denoised with certain 1-D denoising methods. This study implements the 1-D wavelet shrinkage threshold method based on Stein's unbiased risk estimator, and the quantitative results on publicly available datasets demonstrate that our method improves denoising performance more effectively than other state-of-the-art methods.
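
    The protect-or-denoise decision for each spectral component can be sketched with a simple SNR criterion; the dB cutoff and the function names below are assumptions for illustration only:

```python
import numpy as np

def component_snr(component, noise_sigma):
    """SNR in dB of one component given an estimated noise level; used to
    decide whether to protect it or pass it to a 1-D denoiser."""
    power = np.mean(component ** 2)
    return 10.0 * np.log10(power / noise_sigma ** 2)

def select_for_denoising(components, noise_sigma, snr_db_cutoff=10.0):
    # Components above the cutoff carry mostly signal: protect them from
    # over-denoising. The rest get 1-D (e.g. SURE-based) shrinkage.
    return [c for c in components if component_snr(c, noise_sigma) < snr_db_cutoff]

strong = np.full(100, 5.0)   # high-energy basic component
weak = np.full(100, 0.2)     # low-energy detail component
to_denoise = select_for_denoising([strong, weak], noise_sigma=0.5)
```

    The orthogonality argument in the abstract is what justifies using a single noise level across components: an orthogonal transform leaves the noise energy distribution unchanged.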

  4. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.

    PubMed

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-10-20

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.

  5. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  6. SU-C-213-01: 3D Printed Patient Specific Phantom Composed of Bone and Soft Tissue Substitute Plastics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ehler, E; Sterling, D; Higgins, P

    Purpose: 3D printed phantoms constructed of multiple tissue approximating materials could be useful in both clinical and research aspects of radiotherapy. This work describes a 3D printed phantom constructed with tissue substitute plastics for both bone and soft tissue; air cavities were included as well. Methods: 3D models of an anonymized nasopharynx patient were generated for air cavities, soft tissues, and bone, which were segmented by Hounsfield Unit (HU) thresholds. HU thresholds were chosen to define air-to-soft tissue boundaries of 0.65 g/cc and soft tissue-to-bone boundaries of 1.18 g/cc based on clinical HU to density tables. After evaluation of several composite plastics, a bone tissue substitute was identified as an acceptable material for typical radiotherapy x-ray energies, composed of iron and PLA plastic. PET plastic was determined to be an acceptable soft tissue substitute. 3D printing was performed on a consumer grade dual extrusion fused deposition model 3D printer. Results: MVCT scans of the 3D printed heterogeneous phantom were acquired. Rigid image registration of the patient and the 3D printed phantom scans was performed. The average physical density of the soft tissue and bone regions was 1.02 ± 0.08 g/cc and 1.39 ± 0.14 g/cc, respectively, for the patient kVCT scan. In the 3D printed phantom MVCT scan, the average density of the soft tissue and bone was 1.01 ± 0.09 g/cc and 1.44 ± 0.12 g/cc, respectively. Conclusion: A patient specific phantom, constructed of heterogeneous tissue substitute materials, was constructed by 3D printing. MVCT of the 3D printed phantom showed realistic tissue densities were recreated by the 3D printing materials. Funding provided by intra-department grant by University of Minnesota Department of Radiation Oncology.

  7. An enriched stable-isotope approach to determine the gill-zinc binding properties of juvenile rainbow trout (Oncorhynchus mykiss) during acute zinc exposures in hard and soft waters

    USGS Publications Warehouse

    Todd, A.S.; Brinkman, S.; Wolf, R.E.; Lamothe, P.J.; Smith, K.S.; Ranville, J.F.

    2009-01-01

    The objective of the present study was to employ an enriched stable-isotope approach to characterize Zn uptake in the gills of rainbow trout (Oncorhynchus mykiss) during acute Zn exposures in hard water (~140 mg/L as CaCO3) and soft water (~30 mg/L as CaCO3). Juvenile rainbow trout were acclimated to the test hardnesses and then exposed for up to 72 h in static exposures to a range of Zn concentrations in hard water (0-1,000 μg/L) and soft water (0-250 μg/L). To facilitate detection of new gill Zn from endogenous gill Zn, the exposure media was significantly enriched with the 67Zn stable isotope (89.60% vs 4.1% natural abundance). Additionally, acute Zn toxicity thresholds (96-h median lethal concentration [LC50]) were determined experimentally through traditional, flow-through toxicity tests in hard water (580 μg/L) and soft water (110 μg/L). Following short-term (~3 h) exposures, significant differences in gill accumulation of Zn between hard and soft water treatments were observed at the three common concentrations (75, 150, and 250 μg/L), with soft water gills accumulating more Zn than hard water gills. Short-term gill Zn accumulation at hard and soft water LC50s (45-min median lethal accumulation) was similar (0.27 and 0.20 μg/g wet wt, respectively). Finally, comparison of experimental gill Zn accumulation with accumulation predicted by the biotic ligand model demonstrated that model output reflected short-term (<1 h) experimental gill Zn accumulation and predicted observed differences in accumulation between hard and soft water rainbow trout gills. Our results indicate that measurable differences exist in short-term gill Zn accumulation following acclimation and exposure in different water hardnesses and that short-term Zn accumulation appears to be predictive of Zn acute toxicity thresholds (96-h LC50s). © 2009 SETAC.

  8. Normal and abnormal tissue identification system and method for medical images such as digital mammograms

    NASA Technical Reports Server (NTRS)

    Heine, John J. (Inventor); Clarke, Laurence P. (Inventor); Deans, Stanley R. (Inventor); Stauduhar, Richard Paul (Inventor); Cullers, David Kent (Inventor)

    2001-01-01

    A system and method for analyzing a medical image to determine whether an abnormality is present, for example, in digital mammograms, includes the application of a wavelet expansion to a raw image to obtain subspace images of varying resolution. At least one subspace image is selected that has a resolution commensurate with a desired predetermined detection resolution range. A functional form of a probability distribution function is determined for each selected subspace image, and an optimal statistical normal image region test is determined for each selected subspace image. A threshold level for the probability distribution function is established from the optimal statistical normal image region test for each selected subspace image. A region size comprising at least one sector is defined, and an output image is created that includes a combination of all regions for each selected subspace image. Each region has a first value when the region intensity level is above the threshold and a second value when the region intensity level is below the threshold. This permits the localization of a potential abnormality within the image.

  9. Exploring three faint source detections methods for aperture synthesis radio images

    NASA Astrophysics Data System (ADS)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Due to their low intensity/noise ratio, these objects can be easily missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms to increase the detection rate of faint objects. The first technique consists of combining wavelet decomposition with local thresholding. The second technique is based on the structural behaviour of the neighbourhood of each pixel. Finally, the third algorithm uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performances are evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods perform better than well-known state-of-the-art methods such as SExtractor, SAD and Duchamp at detecting faint sources in radio interferometric images.
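
    The classical baseline the paper compares against — thresholding after local noise estimation — can be sketched as follows. This is a minimal tile-based sketch with illustrative parameters, not any of the three proposed algorithms:

```python
import numpy as np

def local_threshold_detect(image, box=16, k=4.0):
    """Flag pixels brighter than (local median + k * robust sigma), with the
    noise estimated independently inside each box x box tile."""
    det = np.zeros_like(image, dtype=bool)
    for i in range(0, image.shape[0], box):
        for j in range(0, image.shape[1], box):
            tile = image[i:i + box, j:j + box]
            med = np.median(tile)
            sigma = 1.4826 * np.median(np.abs(tile - med))  # MAD-based sigma
            det[i:i + box, j:j + box] = tile > med + k * sigma
    return det

# Demo: one bright compact "source" injected into Gaussian background noise.
rng = np.random.default_rng(5)
img = rng.standard_normal((64, 64))
img[10, 10] += 20.0
mask = local_threshold_detect(img)
```

    Faint sources near the k-sigma limit are exactly the ones such a detector misses, which motivates the wavelet, structural and learned-classifier alternatives the paper evaluates.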

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko

    In this study, we compute the gluon fusion Higgs boson cross-section at N3LO through the second term in the threshold expansion. This calculation constitutes a major milestone towards the full N3LO cross-section. Our result has the best formal accuracy in the threshold expansion currently available, and includes contributions from collinear regions besides subleading corrections from soft and hard regions, as well as certain logarithmically enhanced contributions for general kinematics. We use our results to perform a critical appraisal of the validity of the threshold approximation at N3LO in perturbative QCD.

  11. Blurred Star Image Processing for Star Sensors under Dynamic Conditions

    PubMed Central

    Zhang, Weina; Quan, Wei; Guo, Lei

    2012-01-01

    The precision of star point location is significant for identifying the star map and acquiring the aircraft attitude with star sensors. Under dynamic conditions, star images are not only corrupted by various noises but also blurred due to the angular rate of the star sensor. According to the different angular rates encountered under dynamic conditions, a novel method is proposed in this article that combines a denoising method based on an adaptive wavelet threshold with a restoration method for large angular rates. The adaptive threshold is adopted to denoise the star image when the angular rate is in the dynamic range. Then, the mathematical model of motion blur is deduced so as to restore the star map blurred by a large angular rate. Simulation results validate the effectiveness of the proposed method, which is suitable for blurred star image processing and practical for attitude determination of satellites under dynamic conditions. PMID:22778666
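
    The abstract does not spell out the adaptive threshold rule, but the wavelet-thresholding idea it builds on can be sketched with a one-level Haar transform, a MAD noise estimate and Donoho's universal soft threshold (standard choices used here for illustration, not necessarily the authors'):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform of an even-length 1-D signal.

    Returns (approximation, detail) coefficient arrays of half length."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: perfect reconstruction of the original signal."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t; zero those with |c| <= t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x):
    """Soft-threshold the Haar detail coefficients and reconstruct."""
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745        # robust MAD noise estimate
    t = sigma * np.sqrt(2.0 * np.log(len(x)))    # Donoho's universal threshold
    return haar_idwt(a, soft_threshold(d, t))
```

    An adaptive scheme would let the threshold `t` vary, e.g. with the estimated angular rate; the fixed universal threshold above is the simplest stand-in.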

  12. Multisensor signal denoising based on matching synchrosqueezing wavelet transform for mechanical fault condition assessment

    NASA Astrophysics Data System (ADS)

    Yi, Cancan; Lv, Yong; Xiao, Han; Huang, Tao; You, Guanghui

    2018-04-01

    Since it is difficult to obtain the accurate running status of mechanical equipment with only one sensor, multisensor measurement technology has attracted extensive attention. In the field of mechanical fault diagnosis and condition assessment based on vibration signal analysis, multisensor signal denoising has emerged as an important tool to improve the reliability of the measurement result. A reassignment technique termed the synchrosqueezing wavelet transform (SWT) has obvious superiority in slowly time-varying signal representation and denoising for fault diagnosis applications. The SWT uses a time-frequency reassignment scheme, which can provide signal properties in two dimensions (time and frequency). However, when the measured signal contains strong noise components and fast-varying instantaneous frequency, the performance of SWT-based analysis still depends on the accuracy of instantaneous frequency estimation. In this paper, a matching synchrosqueezing wavelet transform (MSWT) is investigated as a potential candidate to replace the conventional synchrosqueezing transform for denoising and fault feature extraction. The improved technique refines the instantaneous frequency estimate with a chirp-rate estimate to achieve a highly concentrated time-frequency representation, so that the signal resolution can be significantly improved. To exploit inter-channel dependencies, the multisensor denoising strategy partitions the time-frequency domain using a modulated multivariate oscillation model; the common characteristics of the multivariate data can then be effectively identified. Furthermore, a modified universal threshold is utilized to remove noise components while the signal components of interest are retained. Thus, a novel MSWT-based multisensor signal denoising algorithm is proposed in this paper. The validity of this method is verified by numerical simulation and by experiments on a rolling bearing system and a gear system. The results show that the proposed multisensor matching synchrosqueezing wavelet transform is superior to existing methods.
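
    The full MSWT pipeline involves chirp-rate-matched synchrosqueezing and is beyond a short sketch, but the inter-channel idea on its own can be illustrated with a group (joint) soft-thresholding rule over the sensors' wavelet coefficients. The rule below is a generic illustration using a plain universal threshold, not the paper's modified one:

```python
import numpy as np

def group_soft_threshold(details):
    """Jointly shrink wavelet detail coefficients from several sensors.

    details: (channels, n) array. The shrinkage factor at each coefficient
    index is driven by the combined magnitude across channels, so components
    shared by the sensors survive while channel-independent noise is removed.
    """
    d = np.asarray(details, dtype=float)
    joint = np.sqrt((d ** 2).sum(axis=0))                  # joint magnitude per index
    sigma = np.median(np.abs(d), axis=1).mean() / 0.6745   # MAD noise estimate
    t = sigma * np.sqrt(2.0 * np.log(d.shape[1]))          # universal threshold
    shrink = np.maximum(1.0 - t / np.maximum(joint, 1e-12), 0.0)
    return d * shrink
```

    A coefficient that is weak in every channel is zeroed, while one that is strong in at least some channels is only mildly shrunk in all of them.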

  13. Comparative Study of Speckle Filtering Methods in PolSAR Radar Images

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Bouchemakh, L.; Smara, Y.

    2015-04-01

    Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature and corrupts both the amplitude and phase images, which complicates data interpretation, degrades segmentation performance and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The developed filters are: the refined Lee filter, based on minimum mean square error (MMSE) estimation; the improved Sigma filter with detection of strong scatterers, based on the calculation of the coherency matrix to detect the different scatterers in order to preserve the polarization signature and maintain structures that are necessary for image interpretation; filtering by the stationary wavelet transform (SWT), using multi-scale edge detection and the technique for improving the wavelet coefficients called SSC (sum of squared coefficients); and the Turbo filter, a combination of two complementary filters, the refined Lee filter and the SWT, in which each can reinforce the results of the other. The originality of our work lies in the application of these methods to several types of images (amplitude, intensity and complex, from a satellite or an airborne radar) and in the optimization of wavelet filtering by adding a parameter to the calculation of the threshold. This parameter controls the filtering effect and achieves a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band and to three-polarization E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.

  14. A wavelet based method for automatic detection of slow eye movements: a pilot study.

    PubMed

    Magosso, Elisa; Provini, Federica; Montagna, Pasquale; Ursino, Mauro

    2006-11-01

    Electro-oculographic (EOG) activity during the wake-sleep transition is characterized by the appearance of slow eye movements (SEM). The present work describes an algorithm for the automatic localisation of SEM events from EOG recordings. The algorithm is based on a wavelet multiresolution analysis of the difference between right and left EOG tracings, and includes three main steps: (i) wavelet decomposition down to 10 detail levels (i.e., 10 scales), using a Daubechies order-4 wavelet; (ii) computation of energy in 0.5 s time steps at every level of decomposition; (iii) construction of a non-linear discriminant function expressing the relative energy of high-scale details to both high- and low-scale details. The main assumption is that the value of the discriminant function increases above a given threshold during SEM episodes due to energy redistribution toward higher scales. Ten EOG recordings from ten male patients with obstructive sleep apnea syndrome were used. All tracings included a period from pre-sleep wakefulness to stage 2 sleep. Two experts inspected the tracings separately to score SEMs. A reference set of SEMs (gold standard) was obtained by joint examination by both experts. Parameters of the discriminant function were assigned on three tracings (design set) to minimize the disagreement between the system classification and classification by the two experts; the algorithm was then tested on the remaining seven tracings (test set). Results show that the agreement between the algorithm and the gold standard was 80.44±4.09%, the sensitivity of the algorithm was 67.2±7.37% and the selectivity 83.93±8.65%. However, most errors were not caused by an inability of the system to detect intervals with SEM activity against NON-SEM intervals, but were due to a different localisation of the beginning and end of some SEM episodes. The proposed method may be a valuable tool for computerized EOG analysis.
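
    The discriminant in step (iii) is, in essence, a ratio of slow-scale energy to total energy per 0.5 s step. A toy stand-in (plain pairwise averaging instead of the paper's order-4 Daubechies decomposition, and illustrative window/level settings) can be sketched as:

```python
import numpy as np

def sem_discriminant(x, fs, win=0.5, levels=3):
    """Energy-ratio discriminant computed in successive `win`-second steps.

    Repeated pairwise averaging plays the role of the high-scale (slow)
    approximation; the score is the fraction of window energy carried by
    that slow part, which rises during slow-eye-movement episodes."""
    step = int(win * fs)
    scores = []
    for i in range(0, len(x) - step + 1, step):
        w = np.asarray(x[i:i + step], dtype=float)
        slow = w
        for _ in range(levels):
            slow = 0.5 * (slow[0::2] + slow[1::2])   # crude high-scale approximation
        e_slow = (2 ** levels) * np.sum(slow ** 2)   # compensate for decimation
        e_tot = np.sum(w ** 2) + 1e-12
        scores.append(min(e_slow / e_tot, 1.0))
    return np.array(scores)
```

    Flagging SEM episodes then amounts to thresholding the returned score sequence, with the threshold tuned on a design set as the abstract describes.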

  15. Effects of Temperature on the Histotripsy Intrinsic Threshold for Cavitation.

    PubMed

    Vlaisavljevich, Eli; Xu, Zhen; Maxwell, Adam; Mancia, Lauren; Zhang, Xi; Lin, Kuang-Wei; Duryea, Alexander; Sukovich, Jonathan; Hall, Tim; Johnsen, Eric; Cain, Charles

    2016-05-10

    Histotripsy is an ultrasound ablation method that depends on the initiation of a dense cavitation bubble cloud to fractionate soft tissue. Previous work has demonstrated that a cavitation cloud can be formed by a single acoustic pulse with one high amplitude negative cycle, when the negative pressure amplitude exceeds a threshold intrinsic to the medium. The intrinsic thresholds in soft tissues and tissue phantoms that are water-based are similar to the intrinsic threshold of water over an experimentally verified frequency range of 0.3-3 MHz. Previous work studying the histotripsy intrinsic threshold has been limited to experiments performed at room temperature (~20°C). In this study, we investigate the effects of temperature on the histotripsy intrinsic threshold in water, which is essential to accurately predict the intrinsic thresholds expected over the full range of in vivo therapeutic temperatures. Based on previous work studying the histotripsy intrinsic threshold and classical nucleation theory, we hypothesize that the intrinsic threshold will decrease with increasing temperature. To test this hypothesis, the intrinsic threshold in water was investigated both experimentally and theoretically. The probability of generating cavitation bubbles was measured by applying a single pulse with one high amplitude negative cycle at 1 MHz to distilled, degassed water at temperatures ranging from 10°C-90°C. Cavitation was detected and characterized by passive cavitation detection and high-speed photography, from which the probability of cavitation was measured vs. pressure amplitude. The results indicate that the intrinsic threshold (the negative pressure at which the cavitation probability = 0.5) significantly decreases with increasing temperature, showing a nearly linear decreasing trend from 29.8±0.4 MPa at 10°C to 14.9±1.4 MPa at 90°C. Overall, the results of this study support our hypothesis that the intrinsic threshold is highly dependent upon the temperature of the medium, which may allow for better predictions of cavitation generation at body temperature in vivo and at the elevated temperatures commonly seen in high intensity focused ultrasound (HIFU) regimes.
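
    The threshold extraction described here (the negative pressure at which the measured cavitation probability crosses 0.5) can be sketched by fitting a logistic curve to probability-vs-pressure data. The two-parameter grid-search fit below is an illustrative reconstruction, not the authors' fitting procedure:

```python
import numpy as np

def fit_intrinsic_threshold(pressure, probability):
    """Grid-search fit of prob(p) = 1 / (1 + exp(-(p - p50) / k)).

    Returns p50, the pressure magnitude at which the fitted cavitation
    probability equals 0.5, i.e. the intrinsic threshold."""
    p = np.asarray(pressure, dtype=float)
    q = np.asarray(probability, dtype=float)
    best_err, best_p50 = np.inf, None
    for p50 in np.arange(p.min(), p.max(), 0.05):
        for k in np.arange(0.2, 5.0, 0.1):         # k controls the curve's steepness
            model = 1.0 / (1.0 + np.exp(-(p - p50) / k))
            err = np.sum((model - q) ** 2)
            if err < best_err:
                best_err, best_p50 = err, p50
    return best_p50
```

    Repeating the fit at each water temperature would yield the threshold-vs-temperature trend reported in the abstract.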

  16. Effects of Temperature on the Histotripsy Intrinsic Threshold for Cavitation

    PubMed Central

    Vlaisavljevich, Eli; Xu, Zhen; Maxwell, Adam; Mancia, Lauren; Zhang, Xi; Lin, Kuang-Wei; Duryea, Alexander; Sukovich, Jonathan; Hall, Tim; Johnsen, Eric; Cain, Charles

    2018-01-01

    Histotripsy is an ultrasound ablation method that depends on the initiation of a dense cavitation bubble cloud to fractionate soft tissue. Previous work has demonstrated that a cavitation cloud can be formed by a single acoustic pulse with one high amplitude negative cycle, when the negative pressure amplitude exceeds a threshold intrinsic to the medium. The intrinsic thresholds in soft tissues and tissue phantoms that are water-based are similar to the intrinsic threshold of water over an experimentally verified frequency range of 0.3–3 MHz. Previous work studying the histotripsy intrinsic threshold has been limited to experiments performed at room temperature (~20°C). In this study, we investigate the effects of temperature on the histotripsy intrinsic threshold in water, which is essential to accurately predict the intrinsic thresholds expected over the full range of in vivo therapeutic temperatures. Based on previous work studying the histotripsy intrinsic threshold and classical nucleation theory, we hypothesize that the intrinsic threshold will decrease with increasing temperature. To test this hypothesis, the intrinsic threshold in water was investigated both experimentally and theoretically. The probability of generating cavitation bubbles was measured by applying a single pulse with one high amplitude negative cycle at 1 MHz to distilled, degassed water at temperatures ranging from 10°C–90°C. Cavitation was detected and characterized by passive cavitation detection and high-speed photography, from which the probability of cavitation was measured vs. pressure amplitude. The results indicate that the intrinsic threshold (the negative pressure at which the cavitation probability = 0.5) significantly decreases with increasing temperature, showing a nearly linear decreasing trend from 29.8±0.4 MPa at 10°C to 14.9±1.4 MPa at 90°C. Overall, the results of this study support our hypothesis that the intrinsic threshold is highly dependent upon the temperature of the medium, which may allow for better predictions of cavitation generation at body temperature in vivo and at the elevated temperatures commonly seen in high intensity focused ultrasound (HIFU) regimes. PMID:28113706

  17. 'Soft' amplifier circuits based on field-effect ionic transistors.

    PubMed

    Boon, Niels; Olvera de la Cruz, Monica

    2015-06-28

    Soft materials can be used as the building blocks for electronic devices with extraordinary properties. We introduce a theoretical model for a field-effect transistor in which ions are the gated species instead of electrons. Our model incorporates readily-available soft materials, such as conductive porous membranes and polymer-electrolytes to represent a device that regulates ion currents and can be integrated as a component in larger circuits. By means of Nernst-Planck numerical simulations as well as an analytical description of the steady-state current we find that the responses of the system to various input voltages can be categorized into ohmic, sub-threshold, and active modes. This is fully analogous to what is known for the electronic field-effect transistor (FET). Pivotal FET properties such as the threshold voltage and the transconductance crucially depend on the half-cell redox potentials of the source and drain electrodes as well as on the polyelectrolyte charge density and the gate material work function. We confirm the analogy with the electronic FETs through numerical simulations of elementary amplifier circuits in which we successfully substitute the electronic transistor by an ionic transistor.

  18. Comparison of Fluoroplastic Causse Loop Piston and Titanium Soft-Clip in Stapedotomy

    PubMed Central

    Faramarzi, Mohammad; Gilanifar, Nafiseh; Roosta, Sareh

    2017-01-01

    Introduction: Different types of prosthesis are available for stapes replacement. Because there has been no published report on the efficacy of the titanium soft-clip vs the fluoroplastic Causse loop Teflon piston, we compared short-term hearing results of both types of prosthesis in patients who underwent stapedotomy due to otosclerosis. Materials and Methods: A total of 57 ears were included in the soft-clip group and 63 ears were included in the Teflon-piston group. Pre-operative and post-operative air conduction, bone conduction, air-bone gaps, speech discrimination score, and speech reception thresholds were analyzed. Results: Post-operative speech reception threshold gains did not differ significantly between the two groups (P=0.919). However, better post-operative air-bone gap improvement at low frequencies was observed in the Teflon-piston group over the short-term follow-up (at frequencies of 0.25 and 0.50 kHz; P=0.007 and P=0.001, respectively). Conclusion: Similar post-operative hearing results were observed in the two groups in the short-term. PMID:28229059

  19. The coolest DA white dwarfs detected at soft X-ray wavelengths

    NASA Technical Reports Server (NTRS)

    Kidder, K. M.; Holberg, J. B.; Barstow, M. A.; Tweedy, R. W.; Wesemael, F.

    1992-01-01

    New soft X-ray/EUV photometric observations of the DA white dwarfs KPD 0631 + 1043 = WD 0631 + 107 and PG 1113 + 413 = WD 1113 + 413 are analyzed. Previously reported soft X-ray detections of three other DAs and the failure to detect a fourth DA in deep Exosat observations are investigated. New ground-based spectra are presented for all of the objects, with IUE Ly-alpha spectra for some. These data are used to constrain the effective temperatures and surface gravities. The improved estimates of these parameters are employed to infer a photospheric He abundance for the hotter objects and to elucidate an effective observational low-temperature threshold for the detection of pure hydrogen DA white dwarfs at soft X-ray wavelengths.

  20. Mechanical sensibility in free and island flaps of the foot.

    PubMed

    Rautio, J; Kekoni, J; Hämäläinen, H; Härmä, M; Asko-Seljavaara, S

    1989-04-01

    Mechanical sensibility in 20 free skin flaps and four dorsalis pedis island flaps, used for the reconstruction of foot defects, was analyzed with conventional clinical methods and by determining sensibility thresholds to vibration frequencies of 20, 80, and 240 Hz. To eliminate inter-individual variability, a score was calculated for each frequency by dividing the thresholds determined for each flap by the values obtained from the corresponding area on the uninjured foot. The soft tissue stability of the reconstruction was assessed. Patients were divided into three groups according to the scores. In the group of flaps with the best sensibility, the threshold increases were low at all frequencies. In the group with intermediate sensibility, the relative threshold increases were greater the higher the frequency. In the group with the poorest sensibility, no thresholds were obtained at the 240 Hz frequency and the threshold increases were very high at all frequencies. Sensibility was not related to the length of follow-up time, nor to the type or size of the flap. However, flap sensibility was closely associated with that of the recipient area, where sensibility was usually inferior to that of normal skin. The island flaps generally had better sensibility than the free flaps. There was good correspondence between the levels of sensibility determined by clinical and quantitative methods. The quantitative data on the level of sensibility obtained with the psychophysical method were found to be reliable and free from observer bias, and are therefore recommended for future studies. The degree of sensibility may have contributed to, but was not essential for, good soft-tissue stability of the reconstruction.

  1. The norms and variances of the Gabor, Morlet and general harmonic wavelet functions

    NASA Astrophysics Data System (ADS)

    Simonovski, I.; Boltežar, M.

    2003-07-01

    This paper deals with certain properties of the continuous wavelet transform and wavelet functions. The norms and the spreads in time and frequency of the common Gabor and Morlet wavelet functions are presented. It is shown that the norm of the Morlet wavelet function does not satisfy the normalization condition and that the normalized Morlet wavelet function is identical to the Gabor wavelet function with the parameter σ=1. The general harmonic wavelet function is developed using frequency modulation of the Hanning and Hamming window functions. Several properties of the general harmonic wavelet function are also presented and compared to those of the Gabor wavelet function. The time and frequency spreads of the general harmonic wavelet function are only slightly higher than those of the Gabor wavelet function. However, the general harmonic wavelet function is simpler to use than the Gabor wavelet function. In addition, the general harmonic wavelet function can be constructed in such a way that the zero-average condition is truly satisfied; the average value of the Gabor wavelet function can approach zero but cannot reach it. When calculating the continuous wavelet transform, errors occur at the start and end time indexes. This is called the edge effect and is caused by the fact that the wavelet transform is calculated from a signal of finite length. In this paper, we propose a method that uses signal mirroring to reduce the errors caused by the edge effect. The success of the proposed method is demonstrated on a simulated signal.
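
    The proposed mirroring can be sketched as reflect-padding: the signal is extended by its mirror image at both ends before the transform, and the transformed output is cropped afterwards so the edge errors fall in the discarded margins. A minimal version (equivalent to NumPy's 'reflect' padding):

```python
import numpy as np

def mirror_pad(x, n):
    """Extend a signal by n samples on each side, reflecting it about its
    endpoints (endpoint samples not repeated); requires n < len(x)."""
    x = np.asarray(x)
    return np.concatenate([x[n:0:-1], x, x[-2:-n - 2:-1]])
```

    After computing the wavelet transform of the padded signal, the first and last n coefficients per scale are dropped to recover the original support.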

  2. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate: optimal wavelet, mean±SD, 5.46±1.01%; worst wavelet, 12.76±2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.

  3. Threshold expansion of the gg (qq̄) → QQ̄ + X cross section at O(αs⁴)

    NASA Astrophysics Data System (ADS)

    Beneke, Martin; Czakon, Michal; Falgari, Pietro; Mitov, Alexander; Schwinn, Christian

    2010-07-01

    We derive the complete set of velocity-enhanced terms in the expansion of the total cross section for heavy-quark pair production in hadronic collisions at next-to-next-to-leading order. Our expression takes into account the effects of soft-gluon emission as well as that of potential-gluon exchanges. We prove that there are no enhancements due to subleading soft-gluon couplings multiplying the leading Coulomb singularity.

  4. Wavelets in Physics

    NASA Astrophysics Data System (ADS)

    van den Berg, J. C.

    2004-03-01

    A guided tour J. C. van den Berg; 1. Wavelet analysis, a new tool in physics J.-P. Antoine; 2. The 2-D wavelet transform, physical applications J.-P. Antoine; 3. Wavelets and astrophysical applications A. Bijaoui; 4. Turbulence analysis, modelling and computing using wavelets M. Farge, N. K.-R. Kevlahan, V. Perrier and K. Schneider; 5. Wavelets and detection of coherent structures in fluid turbulence L. Hudgins and J. H. Kaspersen; 6. Wavelets, non-linearity and turbulence in fusion plasmas B. Ph. van Milligen; 7. Transfers and fluxes of wind kinetic energy between orthogonal wavelet components during atmospheric blocking A. Fournier; 8. Wavelets in atomic physics and in solid state physics J.-P. Antoine, Ph. Antoine and B. Piraux; 9. The thermodynamics of fractals revisited with wavelets A. Arneodo, E. Bacry and J. F. Muzy; 10. Wavelets in medicine and physiology P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, M. G. Rosenblum and H. E. Stanley; 11. Wavelet dimension and time evolution Ch.-A. Guérin and M. Holschneider.

  5. Wavelets in Physics

    NASA Astrophysics Data System (ADS)

    van den Berg, J. C.

    1999-08-01

    A guided tour J. C. van den Berg; 1. Wavelet analysis, a new tool in physics J.-P. Antoine; 2. The 2-D wavelet transform, physical applications J.-P. Antoine; 3. Wavelets and astrophysical applications A. Bijaoui; 4. Turbulence analysis, modelling and computing using wavelets M. Farge, N. K.-R. Kevlahan, V. Perrier and K. Schneider; 5. Wavelets and detection of coherent structures in fluid turbulence L. Hudgins and J. H. Kaspersen; 6. Wavelets, non-linearity and turbulence in fusion plasmas B. Ph. van Milligen; 7. Transfers and fluxes of wind kinetic energy between orthogonal wavelet components during atmospheric blocking A. Fournier; 8. Wavelets in atomic physics and in solid state physics J.-P. Antoine, Ph. Antoine and B. Piraux; 9. The thermodynamics of fractals revisited with wavelets A. Arneodo, E. Bacry and J. F. Muzy; 10. Wavelets in medicine and physiology P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, M. G. Rosenblum and H. E. Stanley; 11. Wavelet dimension and time evolution Ch.-A. Guérin and M. Holschneider.

  6. Influence of abutment material on peri-implant soft tissues in anterior areas with thin gingival biotype: a multicentric prospective study.

    PubMed

    Lops, Diego; Stellini, Edoardo; Sbricoli, Luca; Cea, Niccolò; Romeo, Eugenio; Bressan, Eriberto

    2017-10-01

    The aim of the present clinical trial was to analyze, through spectrophotometric digital technology, the influence of the abutment material on the color of the peri-implant soft tissue in patients with a thin gingival biotype. Thirty-seven patients received an endosseous dental implant in the anterior maxilla. At the time of each definitive prosthesis delivery, an all-ceramic crown was tried on gold, titanium and zirconia abutments. Peri-implant soft-tissue color was measured with a spectrophotometer after the insertion of each single abutment. Facial peri-implant soft-tissue thickness was also measured at the level of the implant neck with a caliper. Specific software was utilized to identify a standardized tissue area and to collect the data before the statistical analysis in L*a*b* color space. ΔE parameters of the selected abutments were tested for correlation with mucosal thickness using the Pearson correlation test. Only 15 patients met the study inclusion criteria on peri-implant soft-tissue thickness. Peri-implant soft-tissue color was different from that around natural teeth, no matter which type of restorative material was selected. Measurements for all the abutments were above the critical threshold of ΔE 8.74 for intraoral color distinction by the naked eye. The ΔE mean values of the gold and zirconia abutments were similar (11.43 and 11.37, respectively) and significantly lower (P = 0.03 and P = 0.04, respectively) than that of the titanium abutment (13.55). In patients with a facial soft-tissue thickness ≤2 mm, the ΔE mean values of the gold and zirconia abutments were significantly lower than that of the titanium abutment (P = 0.03 and P = 0.04, respectively) and much closer to the reference threshold of 8.74. For peri-implant soft tissue of ≤2 mm, gold or zirconia abutments could therefore be selected for the treatment of anterior areas. Moreover, the thickness of the peri-implant soft tissue appeared to be a crucial factor in the abutment's impact on the color of soft tissues with a thickness of ≤2 mm. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  7. Influence of Abutment Color and Mucosal Thickness on Soft Tissue Color.

    PubMed

    Ferrari, Marco; Carrabba, Michele; Vichi, Alessandro; Goracci, Cecilia; Cagidiaco, Maria Crysanti

    Zirconia (ZrO₂) and titanium nitride (TiN) implant abutments were introduced mainly for esthetic purposes, as titanium's gray color can be visible through mucosal tissues. This study was aimed at assessing whether ZrO₂ and TiN abutments could achieve better esthetics in comparison with titanium (Ti) abutments, regarding the appearance of soft tissues. Ninety patients were included in the study. Each patient was provided with an implant (OsseoSpeed, Dentsply Implant System). A two-stage surgical technique was performed. Six months later, surgical reentry was performed. After 1 week, provisional restorations were screwed onto the implants. After 8 weeks, implant-level impressions were taken and soft tissue thickness was recorded, ranked thin (≤ 2 mm) or thick (≥ 2 mm). Patients were randomly allocated to three experimental groups, based on abutment type: (1) Ti, (2) TiN, and (3) ZrO₂. After 15 weeks, the final restorations were delivered. The mucosal area referring to each abutment was measured for color using a clinical spectrophotometer (Easyshade, VITA); color measurements of the contralateral areas referring to natural teeth were performed at the same time. The data were collected using the Commission Internationale de l'Eclairage (CIE) L*a*b* color system, and ΔE was calculated between peri-implant and contralateral soft tissues. A critical threshold of ΔE = 3.7 was selected. The chi-square test was used to identify statistically significant differences in ΔE between thin and thick mucosal tissues and among the abutment types. Three patients were lost to follow-up. No statistically significant differences were noticed as to the abutment type (P = .966). Statistically significant differences in ΔE were recorded between thick and thin peri-implant soft tissues (P < .001). Only 2 out of 64 patients with thick soft tissues showed a ΔE higher than 3.7: 1 in the TiN group and 1 in the ZrO₂ group. All the patients with thin soft tissues showed color changes that exceeded the critical threshold. The different abutment materials showed comparable results in terms of influence on soft tissue color. Regarding peri-implant soft tissue thickness, the influence of the tested abutments on soft tissue color became clinically relevant for values ≤ 2 mm.
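
    The ΔE values compared against the 3.7 perceptibility threshold here (and the 8.74 threshold in the preceding record) are CIE76 color differences, i.e. the Euclidean distance between two (L*, a*, b*) triples:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference ΔE*ab between two CIELAB (L*, a*, b*) triples."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(lab1, lab2)))
```

    For example, peri-implant tissue measured at (52, 1, 2) against a contralateral reading of (50, 0, 0) gives ΔE = 3.0, below the 3.7 threshold and hence clinically imperceptible by this criterion.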

  8. Time delay estimation using new spectral and adaptive filtering methods with applications to underwater target detection

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammed A.

    1997-11-01

    In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from the acoustic backscattered data and using the incident waveform. This problem is formulated as time- delay estimation and sinusoidal frequency estimation problems which both have applications in many other important areas in signal processing. Solving time-delay estimation problem allows the identification of the specular components in the backscattered signal from elastic and non-elastic targets. Thus, accurate estimation of these time delays would help in determining the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data and thus it is computationally very efficient compared with other LS- based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft constrained parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigen- structure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived these subspaces. 
    The effectiveness of the method is demonstrated on the problem of detecting multiple specular components in acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates of the delays associated with the specular components. Simulation results on simulated and real shallow-water data are provided which show the promise of this new scheme for target detection in a heavily cluttered environment.
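
    The specular-component picture above can be illustrated with a toy time-delay estimator. This is a hedged sketch, not the dissertation's BFTF or subspace methods: it simply locates a delayed copy of the incident pulse by cross-correlation, and the pulse shape and signals are invented for illustration.

    ```python
    def cross_correlate(x, h):
        """Full cross-correlation of signal x with template h."""
        n, m = len(x), len(h)
        return [sum(x[k + j] * h[j] for j in range(m) if 0 <= k + j < n)
                for k in range(-(m - 1), n)]

    def estimate_delay(x, h):
        """Lag at which the template h best matches x."""
        c = cross_correlate(x, h)
        best = max(range(len(c)), key=lambda i: c[i])
        return best - (len(h) - 1)  # convert list index back to lag

    # Toy backscatter: the incident pulse echoed at delay 7 samples.
    pulse = [1.0, -2.0, 1.0]
    echo = [0.0] * 7 + pulse + [0.0] * 5
    print(estimate_delay(echo, pulse))  # -> 7
    ```

    In a multi-specular setting, each sufficiently strong local maximum of the correlation would mark one candidate specular component.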

  9. Enhancing seismic P phase arrival picking based on wavelet denoising and kurtosis picker

    NASA Astrophysics Data System (ADS)

    Shang, Xueyi; Li, Xibing; Weng, Lei

    2018-01-01

    P phase arrival picking of weak signals is still challenging in seismology. A wavelet denoising is proposed to enhance seismic P phase arrival picking, and the kurtosis picker is applied to the wavelet-denoised signal to identify the P phase arrival; the combination is called the WD-K picker. The WD-K picker differs from traditional wavelet-based pickers built on a single wavelet component or on certain main wavelet components: it takes full advantage of the reconstruction of the main detail wavelet components and the approximate wavelet component, thereby considering more wavelet components and presenting a better P phase arrival feature. The WD-K picker has been evaluated on 500 micro-seismic signals recorded in the Chinese Yongshaba mine. The comparison between the WD-K pickings and manual pickings shows the good picking accuracy of the WD-K picker. Furthermore, the WD-K picking performance has been compared with the main detail wavelet component combining-based kurtosis (WDC-K) picker, the single wavelet component-based kurtosis (SW-K) picker, and the certain main wavelet component-based maximum kurtosis (MMW-K) picker. The comparison demonstrates that the WD-K picker has better picking accuracy than the other three wavelet- and kurtosis-based pickers, thus showing the enhanced ability of wavelet denoising.
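
    A minimal sketch of the WD-K idea, under assumed parameters (a one-level Haar step, soft threshold 0.5, window length 8); the authors' multi-component reconstruction is not reproduced here:

    ```python
    def haar_fwd(x):
        """One level of the orthonormal Haar DWT: approximation + detail."""
        r = 0.5 ** 0.5
        a = [(x[2 * i] + x[2 * i + 1]) * r for i in range(len(x) // 2)]
        d = [(x[2 * i] - x[2 * i + 1]) * r for i in range(len(x) // 2)]
        return a, d

    def haar_inv(a, d):
        r = 0.5 ** 0.5
        out = []
        for ai, di in zip(a, d):
            out += [(ai + di) * r, (ai - di) * r]
        return out

    def soft_threshold(c, t):
        """Soft-thresholding: shrink each coefficient toward zero by t."""
        return [(abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0
                for v in c]

    def kurtosis(w):
        n = len(w)
        m = sum(w) / n
        var = sum((v - m) ** 2 for v in w) / n
        return 0.0 if var == 0 else sum((v - m) ** 4 for v in w) / (n * var ** 2)

    def wd_k_pick(x, win=8, thr=0.5):
        """Denoise, then return the window index with the largest kurtosis."""
        a, d = haar_fwd(x)
        denoised = haar_inv(a, soft_threshold(d, thr))
        ks = [kurtosis(denoised[i:i + win]) for i in range(len(denoised) - win)]
        return max(range(len(ks)), key=lambda i: ks[i])

    # Toy trace: flat noise-free background with an impulsive onset at sample 20.
    trace = [0.0] * 40
    trace[20] = 5.0
    print(wd_k_pick(trace))  # window index near the sample-20 onset
    ```

    The kurtosis characteristic function spikes when an impulsive arrival enters the analysis window, which is what makes it a convenient onset picker.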

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudgins, L.H.

    After a brief review of the elementary properties of Fourier transforms, the Wavelet Transform is defined in Part I. Basic results are given for admissible wavelets. The Multiresolution Analysis, or MRA (a mathematical structure which unifies a large class of wavelets with Quadrature Mirror Filters), is then introduced. Some fundamental aspects of wavelet design are then explored. The Discrete Wavelet Transform is discussed and, in the context of an MRA, is seen to supply a Fast Wavelet Transform which competes with the Fast Fourier Transform for efficiency. In Part II, the Wavelet Transform is developed in terms of the scale number variable s instead of the scale length variable a, where a = 1/s. Basic results such as the admissibility condition, conservation of energy, and the reconstruction theorem are proven in this context. After reviewing some motivation for the usual Fourier power spectrum, a definition is given for the wavelet power spectrum. This 'spectral density' is then interpreted in the context of spectral estimation theory. Parseval's theorem for wavelets then leads naturally to the Wavelet Cross Spectrum, Wavelet Cospectrum, and Wavelet Quadrature Spectrum. Wavelet Transforms are then applied in Part III to the analysis of atmospheric turbulence. Data collected over the ocean are examined in the wavelet transform domain for underlying structure. A brief overview of atmospheric turbulence is provided. Then the overall method of applying Wavelet Transform techniques to time series data is described. A trace study is included, showing some of the aspects of choosing the computational algorithm and selecting a specific analyzing wavelet. A model for generating synthetic turbulence data is developed, and is seen to yield useful results in comparisons with real data for structural transitions. Results from the theory of Wavelet Spectral Estimation and Wavelet Cross-Transforms are applied to studying the momentum transport and the heat flux.

  11. Wavelet transforms with discrete-time continuous-dilation wavelets

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Rao, Raghuveer M.

    1999-03-01

    Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.

  12. Seismic signals hard clipping overcoming

    NASA Astrophysics Data System (ADS)

    Olszowa, Paula; Sokolowski, Jakub

    2018-01-01

    In signal processing, clipping is understood as the phenomenon of limiting the signal beyond a certain threshold. It is often related to overloading of a sensor. Two particular types of clipping are recognized: soft and hard. Beyond the limiting value, soft clipping reduces the signal's real gain, while hard clipping stiffly sets the signal values at the limit. In both cases a certain amount of signal information is lost. Obviously, if one possesses a model which describes the considered signal and the threshold value (which might be slightly more difficult to obtain in the soft clipping case), an attempt at restoring the signal can be made. Commonly it is assumed that seismic signals take the form of an impulse response of some specific system. This may lead to the belief that a sine wave is the most appropriate function to fit over the clipped period; however, this should be tested. In this paper, the possibility of overcoming hard clipping in seismic signals originating from a geoseismic station belonging to an underground mine is considered. A set of raw signals is hard-clipped manually, and then several different functions are fitted and compared in terms of least squares. The results are then analysed.
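
    The fitting experiment described above can be sketched as follows; the signal model, clipping threshold, and amplitude grid are assumptions for illustration, not the paper's data:

    ```python
    import math

    def hard_clip(x, limit):
        """Hard clipping: stiffly pin values beyond the limit to the limit."""
        return [max(-limit, min(limit, v)) for v in x]

    def sse(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    n = 64
    true = [2.0 * math.sin(2 * math.pi * k / n) for k in range(n)]
    clipped = hard_clip(true, 1.5)

    def fit_amplitude(obs, limit):
        """Least-squares amplitude of a sine model over the unclipped samples."""
        keep = [k for k, v in enumerate(obs) if abs(v) < limit]
        grid = [0.5 + 0.05 * i for i in range(61)]  # candidate amplitudes
        return min(grid, key=lambda amp: sse(
            [amp * math.sin(2 * math.pi * k / n) for k in keep],
            [obs[k] for k in keep]))

    amp = fit_amplitude(clipped, 1.5)
    print(round(amp, 2))  # recovers the true amplitude 2.0
    ```

    The fitted sine can then replace the samples pinned at the limit, restoring the clipped span under the assumed model.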

  13. [Analysis of electrically evoked response (EER) in relation to the central visual pathway of the cat (1). Wave shape of the cat EER].

    PubMed

    Fukatsu, Y; Miyake, Y; Sugita, S; Saito, A; Watanabe, S

    1990-11-01

    To analyze the electrically evoked response (EER) in relation to the central visual pathway, the authors studied the properties of wave patterns and peak latencies of the EER in 35 anesthetized adult cats. The cat EER showed two early positive waves on outward current (cornea cathode) stimulus and three or four early positive waves on inward current (cornea anode) stimulus. These waves were recorded within 50 ms after stimulus onset, and were the most consistent components in the cat EER. The stimulus threshold for the EER showed less individual variation than the amplitude. The difference in stimulus threshold between outward and inward current stimulus was also essentially negligible. The stimulus threshold was higher for early components than for late components. The peak latency of the EER became shorter and the amplitude became higher as the stimulus intensity was increased. However, this tendency was reversed and some wavelets started to appear when the stimulus was extremely strong. Recording with a short stimulus duration and bipolar electrodes enabled us to reduce the electrical artifact of the EER. These results obtained from cats were compared with those of humans and rabbits.

  14. Time-frequency analysis of phonocardiogram signals using wavelet transform: a comparative study.

    PubMed

    Ergen, Burhan; Tatar, Yetkin; Gulcur, Halil Ozcan

    2012-01-01

    Analysis of phonocardiogram (PCG) signals provides a non-invasive means to determine the abnormalities caused by cardiovascular system pathology. In general, time-frequency representation (TFR) methods are used to study the PCG signal because it is one of the non-stationary bio-signals. The continuous wavelet transform (CWT) is especially suitable for the analysis of non-stationary signals and for obtaining the TFR, due to its high resolution both in time and in frequency, and it has recently become a favourite tool. It decomposes a signal in terms of elementary contributions called wavelets, which are shifted and dilated copies of a fixed mother wavelet function, and yields a joint TFR. Although the basic characteristics of the wavelets are similar, each type of wavelet produces a different TFR. In this study, eight of the best-known real wavelet types are examined on typical PCG signals indicating heart abnormalities in order to determine the best wavelet for obtaining a reliable TFR. For this purpose, the wavelet energy and frequency spectrum estimations based on the CWT and the spectra of the chosen wavelets were compared with the energy distribution and the autoregressive frequency spectra in order to determine the most suitable wavelet. The results show that the Morlet wavelet is the most reliable wavelet for the time-frequency analysis of PCG signals.
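
    A rough sketch of a CWT with a real Morlet mother wavelet, in the spirit of the comparison above; the test signal and scale set are assumptions, not PCG data:

    ```python
    import math

    def morlet(t, w0=6.0):
        """Real Morlet wavelet: cosine carrier under a Gaussian envelope."""
        return math.exp(-t * t / 2.0) * math.cos(w0 * t)

    def cwt(x, scales, dt=1.0):
        """CWT by direct correlation; rows index scale, columns index time."""
        n = len(x)
        rows = []
        for s in scales:
            norm = dt / math.sqrt(s)
            rows.append([norm * sum(x[k] * morlet((k - b) * dt / s)
                                    for k in range(n))
                         for b in range(n)])
        return rows

    # 64-sample cosine of period 16: energy should peak at the scale whose
    # Morlet centre frequency, w0 / (2*pi*s), is closest to 1/16.
    n = 64
    sig = [math.cos(2 * math.pi * k / 16) for k in range(n)]
    scales = [2.0, 6.0, 16.0]
    coef = cwt(sig, scales)
    energy = [sum(v * v for v in row) for row in coef]
    print(scales[energy.index(max(energy))])  # -> 16.0
    ```

    Plotting `coef` as an image over (time, scale) gives the joint TFR the abstract refers to; the scale-to-frequency mapping is what lets different mother wavelets yield visibly different TFRs.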

  15. Wavelets and distributed approximating functionals

    NASA Astrophysics Data System (ADS)

    Wei, G. W.; Kouri, D. J.; Hoffman, D. K.

    1998-07-01

    A general procedure is proposed for constructing father and mother wavelets that have excellent time-frequency localization and can be used to generate entire wavelet families for use as wavelet transforms. One interesting feature of our father wavelets (scaling functions) is that they belong to a class of generalized delta sequences, which we refer to as distributed approximating functionals (DAFs). We indicate this by the notation wavelet-DAFs. Correspondingly, the mother wavelets generated from these wavelet-DAFs are appropriately called DAF-wavelets. Wavelet-DAFs can be regarded as providing a pointwise (localized) spectral method, which furnishes a bridge between the traditional global methods and local methods for solving partial differential equations. They are shown to provide extremely accurate numerical solutions for a number of nonlinear partial differential equations, including the Korteweg-de Vries (KdV) equation, for which a previous method has encountered difficulties (J. Comput. Phys. 132 (1997) 233).

  16. An Exponential Regulator for Rapidity Divergences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ye; Neill, Duff; Zhu, Hua Xing

    2016-04-01

    Finding an efficient and compelling regularization of soft and collinear degrees of freedom at the same invariant mass scale, but separated in rapidity, is a persistent problem in high-energy factorization. In the course of a calculation, one encounters divergences unregulated by dimensional regularization, often called rapidity divergences. Once regulated, a general framework exists for their renormalization, the rapidity renormalization group (RRG), leading to fully resummed calculations of transverse momentum (to the jet axis) sensitive quantities. We examine how this regularization can be implemented via a multi-differential factorization of the soft-collinear phase-space, leading to an (in principle) alternative non-perturbative regularization of rapidity divergences. As an example, we examine the fully-differential factorization of a color singlet's momentum spectrum in a hadron-hadron collision at threshold. We show how this factorization acts as a mother theory to both traditional threshold and transverse momentum resummation, recovering the classical results for both resummations. Examining the refactorization of the transverse momentum beam functions in the threshold region, we show that one can directly calculate the rapidity renormalized function, while shedding light on the structure of joint resummation. Finally, we show how, using modern bootstrap techniques, the transverse momentum spectrum is determined by an expansion about the threshold factorization, leading to a viable higher loop scheme for calculating the relevant anomalous dimensions for the transverse momentum spectrum.

  17. Applying wavelet transforms to analyse aircraft-measured turbulence and turbulent fluxes in the atmospheric boundary layer over eastern Siberia

    NASA Astrophysics Data System (ADS)

    Strunin, M. A.; Hiyama, T.

    2004-11-01

    The wavelet spectral method was applied to aircraft-based measurements of atmospheric turbulence obtained during joint Russian-Japanese research on the atmospheric boundary layer near Yakutsk (eastern Siberia) in April-June 2000. Practical ways to apply Fourier and wavelet methods for aircraft-based turbulence data are described. Comparisons between Fourier and wavelet transform results are shown and they demonstrate, in conjunction with theoretical and experimental restrictions, that the Fourier transform method is not useful for studying non-homogeneous turbulence. The wavelet method is free from many disadvantages of Fourier analysis and can yield more informative results. Comparison of Fourier and Morlet wavelet spectra showed good agreement at high frequencies (small scales). The quality of the wavelet transform and corresponding software was estimated by comparing the original data with restored data constructed with an inverse wavelet transform. A Haar wavelet basis was inappropriate for the turbulence data; the mother wavelet function recommended in this study is the Morlet wavelet. Good agreement was also shown between variances and covariances estimated with different mathematical techniques, i.e. through non-orthogonal wavelet spectra and through eddy correlation methods.

  18. VizieR Online Data Catalog: Kepler planetary candidates. V. 3yr Q1-Q12 (Rowe+, 2015)

    NASA Astrophysics Data System (ADS)

    Rowe, J. F.; Coughlin, J. L.; Antoci, V.; Barclay, T.; Batalha, N. M.; Borucki, W. J.; Burke, C. J.; Bryson, S. T.; Caldwell, D. A.; Campbell, J. R.; Catanzarite, J. H.; Christiansen, J. L.; Cochran, W.; Gilliland, R. L.; Girouard, F. R.; Haas, M. R.; Helminiak, K. G.; Henze, C. E.; Hoffman, K. L.; Howell, S. B.; Huber, D.; Hunter, R. C.; Jang-Condell, H.; Jenkins, J. M.; Klaus, T. C.; Latham, D. W.; Li, J.; Lissauer, J. J.; McCauliff, S. D.; Morris, R. L.; Mullally, F.; Ofir, A.; Quarles, B.; Quintana, E.; Sabale, A.; Seader, S.; Shporer, A.; Smith, J. C.; Steffen, J. H.; Still, M.; Tenenbaum, P.; Thompson, S. E.; Twicken, J. D.; van Laerhoven, C.; Wolfgang, A.; Zamudio, K. A.

    2015-04-01

    We began with the transit-event candidate list from Tenenbaum et al. (2013ApJS..206....5T), based on a wavelet-based, adaptive matched filter that searched 192313 Kepler targets for periodic drops in flux indicative of a transiting planet. Detections are known as Threshold Crossing Events (TCEs). Tenenbaum et al. utilized three years of Kepler photometric observations (Q1-Q12), the same data span employed by this study, based on SOC 8.3 as part of Data Release 21 (Thompson S. E., Christiansen J. L., Jenkins J. M. et al., Kepler KSCI-19061-001). (3 data files).

  19. Soft x-ray free-electron laser induced damage to inorganic scintillators

    DOE PAGES

    Burian, Tomáš; Hájková, Věra; Chalupský, Jaromír; ...

    2015-01-07

    An irreversible response of inorganic scintillators to intense soft x-ray laser radiation was investigated at the FLASH (Free-electron LASer in Hamburg) facility. Three ionic crystals, namely, Ce:YAG (cerium-doped yttrium aluminum garnet), PbWO4 (lead tungstate), and ZnO (zinc oxide), were exposed to single 4.6 nm ultra-short laser pulses of variable pulse energy (up to 12 μJ) under normal incidence conditions with tight focus. Damaged areas produced at various levels of pulse fluence were analyzed on the surface of the irradiated samples using differential interference contrast (DIC) and atomic force microscopy (AFM). The effective beam area of 22.2 ± 2.2 μm2 was determined by means of the ablation imprint method with the use of poly(methyl methacrylate) (PMMA). Applied to the three inorganic materials, this procedure gave almost the same values of effective area. The single-shot damage threshold fluence was determined for each of these inorganic materials. The Ce:YAG sample seems to be the most radiation resistant under the given irradiation conditions; its damage threshold was determined to be as high as 660.8 ± 71.2 mJ/cm2. Contrary to that, the PbWO4 sample exhibited the lowest radiation resistance, with a threshold fluence of 62.6 ± 11.9 mJ/cm2. The threshold for ZnO was found to be 167.8 ± 30.8 mJ/cm2. Both the interaction and material characteristics responsible for the damage threshold differences are discussed in the article.

  20. The effect of instantaneous input dynamic range setting on the speech perception of children with the nucleus 24 implant.

    PubMed

    Davidson, Lisa S; Skinner, Margaret W; Holstad, Beth A; Fears, Beverly T; Richter, Marie K; Matusofsky, Margaret; Brenner, Christine; Holden, Timothy; Birath, Amy; Kettel, Jerrica L; Scollie, Susan

    2009-06-01

    The purpose of this study was to examine the effects of a wider instantaneous input dynamic range (IIDR) setting on speech perception and comfort in quiet and noise for children wearing the Nucleus 24 implant system and the Freedom speech processor. In addition, children's ability to understand soft and conversational-level speech in relation to aided sound-field thresholds was examined. Thirty children (age, 7 to 17 years) with the Nucleus 24 cochlear implant system and the Freedom speech processor with two different IIDR settings (30 versus 40 dB) were tested on the Consonant Nucleus Consonant (CNC) word test at 50 and 60 dB SPL, the Bamford-Kowal-Bench Speech in Noise Test, and a loudness rating task for four-talker speech noise. Aided thresholds for frequency-modulated tones, narrowband noise, and recorded Ling sounds were obtained with the two IIDRs and examined in relation to CNC scores at 50 dB SPL. Speech Intelligibility Indices were calculated using the long-term average speech spectrum of the CNC words at 50 dB SPL measured at each test site and the aided thresholds. Group mean CNC scores at 50 dB SPL with the 40 dB IIDR were significantly higher (p < 0.001) than with the 30 dB IIDR. Group mean CNC scores at 60 dB SPL, loudness ratings, and the signal-to-noise ratios for 50% correct (SNR-50) on the Bamford-Kowal-Bench Speech in Noise Test were not significantly different for the two IIDRs. Significantly improved aided thresholds at 250 to 6000 Hz as well as higher Speech Intelligibility Indices afforded improved audibility for speech presented at soft levels (50 dB SPL). These results indicate that an increased IIDR provides improved word recognition for soft levels of speech without compromising the comfort of higher-level speech sounds or sentence recognition in noise.

  1. High-performance wavelet engine

    NASA Astrophysics Data System (ADS)

    Taylor, Fred J.; Mellot, Jonathon D.; Strom, Erik; Koren, Iztok; Lewis, Michael P.

    1993-11-01

    Wavelet processing has shown great promise for a variety of image and signal processing applications. Wavelets are also among the most computationally expensive techniques in signal processing. It is demonstrated that a wavelet engine constructed with residue number system arithmetic elements offers significant advantages over commercially available wavelet accelerators based upon conventional arithmetic elements. An analysis is presented predicting the dynamic range requirements of the reported residue number system-based wavelet accelerator.
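
    For background, the residue number system (RNS) idea underlying the engine can be sketched as follows; the moduli are illustrative, not those of the reported accelerator. Each integer is represented by its residues modulo pairwise-coprime bases, so additions and multiplications run independently per channel, and the result is reconstructed by the Chinese remainder theorem:

    ```python
    MODULI = (7, 11, 13)  # pairwise coprime; dynamic range = 7*11*13 = 1001

    def to_rns(x):
        """Represent x by its residues in each channel."""
        return tuple(x % m for m in MODULI)

    def rns_mul(a, b):
        """Channel-wise multiply: no carries propagate between channels."""
        return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

    def from_rns(r):
        """Chinese remainder theorem reconstruction."""
        M = 1
        for m in MODULI:
            M *= m
        x = 0
        for ri, m in zip(r, MODULI):
            Mi = M // m
            x += ri * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
        return x % M

    print(from_rns(rns_mul(to_rns(23), to_rns(19))))  # -> 437
    ```

    The carry-free channels are what make RNS multiply-accumulate hardware attractive for filter banks, at the cost of the dynamic-range analysis the abstract mentions (results must stay below the product of the moduli).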

  2. The immediate effects of atlanto-occipital joint manipulation and suboccipital muscle inhibition technique on active mouth opening and pressure pain sensitivity over latent myofascial trigger points in the masticatory muscles.

    PubMed

    Oliveira-Campelo, Natalia M; Rubens-Rebelatto, José; Martín-Vallejo, Francisco J; Alburquerque-Sendín, Francisco; Fernández-de-Las-Peñas, César

    2010-05-01

    A randomized controlled trial. To investigate the immediate effects on pressure pain thresholds over latent trigger points (TrPs) in the masseter and temporalis muscles and active mouth opening following atlanto-occipital joint thrust manipulation or a soft tissue manual intervention targeted to the suboccipital muscles. Previous studies have described hypoalgesic effects of neck manipulative interventions over TrPs in the cervical musculature. There is a lack of studies analyzing these mechanisms over TrPs of muscles innervated by the trigeminal nerve. One hundred twenty-two volunteers, 31 men and 91 women, between the ages of 18 and 30 years, with latent TrPs in the masseter muscle, were randomly divided into 3 groups: a manipulative group who received an atlanto-occipital joint thrust, a soft tissue group who received an inhibition technique over the suboccipital muscles, and a control group who did not receive an intervention. Pressure pain thresholds over latent TrPs in the masseter and temporalis muscles, and active mouth opening were assessed pretreatment and 2 minutes posttreatment by a blinded assessor. Mixed-model analyses of variance (ANOVA) were used to examine the effects of interventions on each outcome, with group as the between-subjects variable and time as the within-subjects variable. The primary analysis was the group-by-time interaction. The 2-by-3 mixed-model ANOVA revealed a significant group-by-time interaction for changes in pressure pain thresholds over masseter (P<.01) and temporalis (P = .003) muscle latent TrPs and also for active mouth opening (P<.001) in favor of the manipulative and soft tissue groups. Between-group effect sizes were small. The application of an atlanto-occipital thrust manipulation or soft tissue technique targeted to the suboccipital muscles led to an immediate increase in pressure pain thresholds over latent TrPs in the masseter and temporalis muscles and an increase in maximum active mouth opening. 
Nevertheless, the effects of both interventions were small, and future studies are required to elucidate the clinical relevance of these changes. Therapy, level 1b. J Orthop Sports Phys Ther 2010;40(5):310-317. Epub 12 April 2010. doi:10.2519/jospt.2010.3257.

  3. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of the video reconstruction is affected. Firstly, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of a video frame effectively, and that it provides a sound basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected according to the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
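
    The sparsity-estimation step described above (energy normalization, descending sort, energy threshold, proportion of dominant coefficients) can be sketched as follows; the toy coefficients and the 0.99 threshold are assumptions:

    ```python
    def sparsity_from_coeffs(coeffs, energy_threshold=0.99):
        """Fraction of dominant DWT coefficients needed to reach the
        given share of the total energy."""
        e = [c * c for c in coeffs]
        total = sum(e)
        acc, count = 0.0, 0
        for v in sorted(e, reverse=True):  # dominant coefficients first
            acc += v
            count += 1
            if acc >= energy_threshold * total:
                break
        return count / len(coeffs)

    # Toy frame: a few large DWT coefficients carry almost all the energy.
    coeffs = [10.0, -8.0, 5.0] + [0.1] * 97
    print(sparsity_from_coeffs(coeffs))  # -> 0.03
    ```

    The estimated sparsity then drives how many compressive measurements each block is given, which is the "active basis" role the abstract describes.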

  4. [Study on sensitivity of climatic factors on influenza A (H1N1) based on classification and regression tree and wavelet analysis].

    PubMed

    Xiao, Hong; Lin, Xiao-ling; Dai, Xiang-yu; Gao, Li-dong; Chen, Bi-yun; Zhang, Xi-xing; Zhu, Pei-juan; Tian, Huai-yu

    2012-05-01

    To analyze the periodicity of pandemic influenza A (H1N1) in Changsha in 2009 and its correlation with sensitive climatic factors. Information on 5439 cases of influenza A (H1N1) and synchronous meteorological data during the period between May 22nd and December 31st, 2009 (223 days in total) in Changsha city were collected. The classification and regression tree (CART) was employed to screen the climatic factors to which influenza A (H1N1) is sensitive; meanwhile, cross wavelet transform and wavelet coherence analysis were applied to assess and compare the periodicity of the pandemic disease and its association with the time-lag phase features of the sensitive climatic factors. The results of CART indicated that the daily minimum temperature and daily absolute humidity were the sensitive climatic factors for the epidemic of influenza A (H1N1) in Changsha. The peak of the incidence of influenza A (H1N1) was in the period between October and December (Median (M) = 44.00 cases per day), when the daily minimum temperature (M = 13°C) and daily absolute humidity (M = 6.69 g/m(3)) were relatively low. The results of wavelet analysis demonstrated that a period of 16 days was found in the epidemic in Changsha, while the daily minimum temperature and daily absolute humidity were the relatively sensitive climatic factors. The number of daily reported patients was statistically related to the daily minimum temperature and daily absolute humidity. The frequency domain was mostly in the period of (16 ± 2) days. In the initial stage of the epidemic (between August 9th and September 8th), a 6-day lag was found between the incidence and the daily minimum temperature. In the peak period of the epidemic, the daily minimum temperature and daily absolute humidity were negatively related to the incidence of the disease.
In the pandemic period, the incidence of influenza A (H1N1) showed periodic features, and the sensitive climatic factors had a "driving effect" on the incidence of influenza A (H1N1).

  5. Design of a Biorthogonal Wavelet Transform Based R-Peak Detection and Data Compression Scheme for Implantable Cardiac Pacemaker Systems.

    PubMed

    Kumar, Ashish; Kumar, Manjeet; Komaragiri, Rama

    2018-04-19

    Bradycardia can be modulated using a cardiac pacemaker, an implantable medical device which sets and balances the patient's cardiac health. The device has been widely used to detect and monitor the patient's heart rate, and the data collected hence has the highest authenticity assurance and is convenient for further electric stimulation. In the pacemaker, the ECG detector is one of the most important elements. The device is available in a new digital form, which is more efficient and accurate in performance, with the added advantage of economical power consumption. In this work, a joint algorithm based on the biorthogonal wavelet transform and run-length encoding (RLE) is proposed for QRS complex detection of the ECG signal and for compressing the detected ECG data. The biorthogonal wavelet transform of the input ECG signal is first calculated using a modified demand-based filter bank architecture which consists of a series combination of three lowpass filters with a highpass filter. The lowpass and highpass filters are realized using a linear-phase structure, which reduces the hardware cost of the proposed design by approximately 50%. Then, the location of the R-peak is found by comparing the denoised ECG signal with the threshold value. The proposed R-peak detector achieves the highest sensitivity and positive predictivity, of 99.75 and 99.98 respectively, on the MIT-BIH arrhythmia database. Also, the proposed R-peak detector achieves a comparatively low data error rate (DER) of 0.002. The use of RLE for the compression of the detected ECG data achieves a high compression ratio (CR) of 17.1. To justify the effectiveness of the proposed algorithm, the results have been compared with existing methods such as Huffman coding/simple predictor, Huffman coding/adaptive, and slope predictor/fixed-length packaging.
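
    Two ingredients of the abstract, threshold-based peak picking and run-length encoding, can be sketched as follows; the biorthogonal filter bank itself is not reproduced, and the toy signal and threshold are assumptions:

    ```python
    def detect_peaks(sig, threshold):
        """Indices of local maxima exceeding the threshold (toy R-peak pick)."""
        return [i for i in range(1, len(sig) - 1)
                if sig[i] > threshold and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]]

    def rle_encode(bits):
        """Run-length encode a binary annotation stream as [value, count] runs."""
        runs = []
        for b in bits:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1
            else:
                runs.append([b, 1])
        return runs

    # Toy denoised trace with two QRS-like spikes.
    sig = [0, 1, 9, 1, 0, 0, 1, 8, 1, 0]
    peaks = detect_peaks(sig, threshold=5)
    marks = [1 if i in peaks else 0 for i in range(len(sig))]
    print(peaks)             # -> [2, 7]
    print(rle_encode(marks)) # -> [[0, 2], [1, 1], [0, 4], [1, 1], [0, 2]]
    ```

    RLE pays off here because the detection stream is mostly zeros with sparse ones, which is also why the paper reports a high compression ratio on detected ECG data.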

  6. Designing an Algorithm for Cancerous Tissue Segmentation Using Adaptive K-means Clustering and Discrete Wavelet Transform.

    PubMed

    Rezaee, Kh; Haddadnia, J

    2013-09-01

    Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and have always posed a major challenge to radiologists and physicians. This paper proposes a new algorithm which draws on the discrete wavelet transform and adaptive K-means techniques to transform the medical images, implement the tumor estimation, and detect breast cancer tumors in mammograms at early stages; it also allows rapid processing of the input data. In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximate coefficients of the scaling components are constructed. Then, the different parts of the image are classified in a continuous spectrum. In the next step, by using the adaptive K-means algorithm for initialization and a smart choice of the number of clusters, the appropriate threshold is selected. Finally, the suspicious cancerous mass is separated by implementing image processing techniques. We received 120 mammographic images in LJPEG format, which had been scanned in gray scale with 50 micron pixel size, 3% noise and 20% INU, from clinical data taken from two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level with an average accuracy of 92.32% and sensitivity of 90.24%. Also, the Kappa coefficient was approximately 0.85, which proved the suitable reliability of the system performance. The exact positioning of the cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The low PPV and high NPV of the system are a warranty of the system, and both clinical specialists and patients can trust its output.
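
    The K-means step can be illustrated with a minimal one-dimensional intensity-clustering sketch; the cluster count, toy intensities, and thresholding rule are assumptions, not the paper's adaptive procedure:

    ```python
    def kmeans_1d(values, k, iters=20):
        """Plain 1D K-means on intensity values (adaptive refinements omitted)."""
        lo, hi = min(values), max(values)
        centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
        for _ in range(iters):
            groups = [[] for _ in range(k)]
            for v in values:
                nearest = min(range(k), key=lambda j: abs(v - centers[j]))
                groups[nearest].append(v)
            centers = [sum(g) / len(g) if g else centers[j]
                       for j, g in enumerate(groups)]
        return centers

    # Toy intensities: dark background, mid-gray tissue, and a bright mass.
    pixels = [0.1] * 10 + [0.5] * 5 + [0.9] * 6
    centers = sorted(kmeans_1d(pixels, 3))
    # A segmentation threshold can then sit between the two brightest clusters.
    threshold = (centers[1] + centers[2]) / 2
    print([round(c, 2) for c in centers])  # -> [0.1, 0.5, 0.9]
    ```

    In the paper's pipeline the clustering runs on wavelet approximation coefficients rather than raw pixels, and the chosen threshold isolates the suspicious mass.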

  7. A new stationary gridline artifact suppression method based on the 2D discrete wavelet transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Hui, E-mail: corinna@seu.edu.cn; Key Laboratory of Computer Network and Information Integration; Centre de Recherche en Information Biomédicale sino-français, Laboratoire International Associé, Inserm, Université de Rennes 1, Rennes 35000

    2015-04-15

    Purpose: In digital x-ray radiography, an antiscatter grid is inserted between the patient and the image receptor to reduce scattered radiation. If the antiscatter grid is used in a stationary way, gridline artifacts will appear in the final image. In most gridline removal image processing methods, the useful information with spatial frequencies close to that of the gridline is usually lost or degraded. In this study, a new stationary gridline suppression method is designed to preserve more of the useful information. Methods: The method is as follows. The input image is first recursively decomposed into several smaller subimages using a multiscale 2D discrete wavelet transform. The decomposition process stops when the gridline signal is found to be greater than a threshold in one or several of these subimages using a gridline detection module. An automatic Gaussian band-stop filter is then applied to the detected subimages to remove the gridline signal. Finally, the restored image is achieved using the corresponding 2D inverse discrete wavelet transform. Results: The processed images show that the proposed method can remove the gridline signal efficiently while maintaining the image details. The spectra of a 1D Fourier transform of the processed images demonstrate that, compared with some existing gridline removal methods, the proposed method has better information preservation after the removal of the gridline artifacts. Additionally, the performance speed is relatively high. Conclusions: The experimental results demonstrate the efficiency of the proposed method. Compared with some existing gridline removal methods, the proposed method can preserve more information within an acceptable execution time.
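
    The band-stop idea can be sketched in one dimension: locate the strongest periodic component of a profile in the DFT domain and suppress it with a Gaussian band-stop weight. The toy profile, gridline frequency, and filter width are assumptions, not the paper's automatic design:

    ```python
    import cmath, math

    def dft(x):
        n = len(x)
        return [sum(x[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n)) for f in range(n)]

    def idft(X):
        n = len(X)
        return [sum(X[f] * cmath.exp(2j * math.pi * f * t / n)
                    for f in range(n)).real / n for t in range(n)]

    def gridline_notch(profile, sigma=1.0):
        """Suppress the strongest periodic component with a Gaussian band-stop."""
        X = dft(profile)
        n = len(profile)
        f0 = max(range(1, n // 2 + 1), key=lambda f: abs(X[f]))  # gridline bin
        w = [1.0 - math.exp(-((min(f, n - f) - f0) ** 2) / (2 * sigma ** 2))
             for f in range(n)]
        return idft([Xf * wf for Xf, wf in zip(X, w)])

    # Toy profile: flat anatomy (level 5) plus a periodic gridline pattern.
    n = 32
    profile = [5.0 + 2.0 * math.cos(2 * math.pi * 8 * t / n) for t in range(n)]
    clean = gridline_notch(profile)
    print(round(max(abs(v - 5.0) for v in clean), 6))  # -> 0.0
    ```

    Because the stop band is narrow, frequencies away from the gridline bin pass almost unchanged, which mirrors the paper's goal of preserving detail near the gridline frequency.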

  8. Wavelet transforms as solutions of partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zweig, G.

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Wavelet transforms are useful in representing transients whose time and frequency structure reflect the dynamics of an underlying physical system. Speech sound, pressure in turbulent fluid flow, or engine sound in automobiles are excellent candidates for wavelet analysis. This project focused on (1) methods for choosing the parent wavelet for a continuous wavelet transform in pattern recognition applications and (2) the more efficient computation of continuous wavelet transforms by understanding the relationship between discrete wavelet transforms and discretized continuous wavelet transforms. The most interesting result of this research is the finding that the generalized wave equation, on which the continuous wavelet transform is based, can be used to understand phenomena that relate to the process of hearing.

  9. Wavelet Transforms using VTK-m

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shaomeng; Sewell, Christopher Meyer

    2016-09-27

    These are a set of slides that deal with the topic of wavelet transforms using VTK-m. First, wavelets are discussed and detailed, then VTK-m is discussed and detailed, then wavelets and VTK-m are compared on performance and on accuracy, and finally lessons learned, conclusions, and next steps are presented. The lessons learned are the following: Launching worklets is expensive. The natural logic of performing a 2D wavelet transform is to repeat the same 1D wavelet transform on every row, repeat the same 1D wavelet transform on every column, and invoke the 1D wavelet worklet every time: num_rows x num_columns invocations. The VTK-m approach to performing a 2D wavelet transform is to create a worklet for 2D that handles both rows and columns and invoke this new worklet only one time; this yields fast calculation, but cannot reuse the 1D implementations.

  10. Use of local noise power spectrum and wavelet analysis in quantitative image quality assurance for EPIDs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Soyoung

    Purpose: To investigate the use of the local noise power spectrum (NPS) to characterize image noise and wavelet analysis to isolate defective pixels and inter-subpanel flat-fielding artifacts for quantitative quality assurance (QA) of electronic portal imaging devices (EPIDs). Methods: A total of 93 image sets including custom-made bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Global quantitative metrics such as the modulation transfer function (MTF), NPS, and detective quantum efficiency (DQE) were computed for each image set. Local NPS was also calculated for individual subpanels by sampling regions of interest within each subpanel of the EPID. The 1D NPS, obtained by radially averaging the 2D NPS, was fitted to a power-law function. The r-square value of the linear regression analysis was used as a singular metric to characterize the noise properties of individual subpanels of the EPID. The sensitivity of the local NPS was first compared with the global quantitative metrics using historical image sets. It was then compared with two commonly used commercial QA systems with images collected after applying two different EPID calibration methods (single-level gain and multilevel gain). To detect isolated defective pixels and inter-subpanel flat-fielding artifacts, the Haar wavelet transform was applied to the images. Results: Global quantitative metrics including MTF, NPS, and DQE showed little change over the period of data collection. On the contrary, a strong correlation between the local NPS (r-square values) and the variation of the EPID noise condition was observed.
The local NPS analysis indicated image quality improvement, with the r-square values increasing from 0.80 ± 0.03 (before calibration) to 0.85 ± 0.03 (after single-level gain calibration) and to 0.96 ± 0.03 (after multilevel gain calibration), while the commercial QA systems failed to distinguish the image quality improvement between the two calibration methods. With wavelet analysis, defective pixels and inter-subpanel flat-fielding artifacts were clearly identified as spikes after thresholding the inversely transformed images. Conclusions: The proposed local NPS (r-square values) showed superior sensitivity to the noise level variations of individual subpanels compared with global quantitative metrics such as MTF, NPS, and DQE. Wavelet analysis was effective in detecting isolated defective pixels and inter-subpanel flat-fielding artifacts. The proposed methods are promising for the early detection of imaging artifacts of EPIDs.
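    On log-log axes a power-law NPS fit reduces to ordinary linear regression, and the r-square metric used above is its coefficient of determination. A minimal pure-Python sketch (function and variable names are illustrative):

```python
def r_squared(x, y):
    """Coefficient of determination of a straight-line least-squares fit.
    For a power-law NPS, x and y would be log-frequency and log-NPS samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```

    A subpanel whose radially averaged NPS follows the power law closely yields r-square near 1; noise artifacts pull the value down.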

  11. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    PubMed

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-01-01

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem in which image quality suffers from slope artifacts. The objective of this study is to first investigate the distorted regions of the reconstructed images that exhibit the slope artifacts and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method comprises four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified non-local means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by using iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of some features as compared to the commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared to the existing algorithms. 
This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics: (1) it utilizes the structural similarity between the reconstructed image and the prior image to correct the edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain the first and higher derivatives in several directions and levels; and (3) it takes advantage of l0 regularization to promote the sparsity of wavelet coefficients, which is effective for the inhibition of the slope artifacts. Therefore, the new method can address the limited-angle CT reconstruction problem effectively and has practical significance.
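    Step (3) above, iterative hard thresholding, is built on the elementwise hard-thresholding operator. A minimal sketch of that operator (the threshold value lam below is illustrative, not a tuned value from the paper):

```python
def hard_threshold(coeffs, lam):
    """Hard-thresholding operator used to promote l0 sparsity:
    a wavelet coefficient survives only if its magnitude exceeds lam."""
    return [c if abs(c) > lam else 0.0 for c in coeffs]

# The small coefficient is zeroed; the larger ones pass through unshrunk,
# which distinguishes hard thresholding from the soft (shrinkage) rule.
sparse = hard_threshold([0.1, -2.0, 0.5], lam=0.4)
```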

  12. Dynamic Bayesian wavelet transform: New methodology for extraction of repetitive transients

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tsui, Kwok-Leung

    2017-05-01

    Building on recent research, the dynamic Bayesian wavelet transform is proposed in this short communication as a new methodology for the extraction of repetitive transients, to reveal fault signatures hidden in rotating machines. The main idea of the dynamic Bayesian wavelet transform is to iteratively estimate the posterior parameters of the wavelet transform via artificial observations and dynamic Bayesian inference. First, a prior wavelet parameter distribution can be established by one of many fast detection algorithms, such as the fast kurtogram, the improved kurtogram, the enhanced kurtogram, the sparsogram, the infogram, the continuous wavelet transform, the discrete wavelet transform, wavelet packets, multiwavelets, the empirical wavelet transform, empirical mode decomposition, local mean decomposition, etc. Second, artificial observations can be constructed based on one of many metrics, such as kurtosis, the sparsity measurement, entropy, approximate entropy, the smoothness index, a synthesized criterion, etc., which are able to quantify repetitive transients. Finally, given the artificial observations, the prior wavelet parameter distribution can be posteriorly updated over iterations by using dynamic Bayesian inference. More importantly, the proposed new methodology can be extended to establish the optimal parameters required by many other signal processing methods for the extraction of repetitive transients.
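    Kurtosis, the first of the metrics listed for constructing artificial observations, is the normalized fourth central moment. A pure-Python sketch (an illustration of the metric, not the authors' code):

```python
def kurtosis(x):
    """Normalized fourth central moment of a signal. Repetitive impulsive
    transients push this value well above that of a smooth or steady signal,
    which is why it can serve as an artificial observation."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m4 / (m2 * m2)
```

    A signal dominated by a single impulse scores far higher than a bounded oscillation, so iterations that raise kurtosis steer the wavelet parameters toward the transients.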

  13. Wavelet denoising during optical coherence tomography of the prostate nerves using the complex wavelet transform.

    PubMed

    Chitchian, Shahab; Fiddy, Michael; Fried, Nathaniel M

    2008-01-01

    Preservation of the cavernous nerves during prostate cancer surgery is critical in preserving sexual function after surgery. Optical coherence tomography (OCT) of the prostate nerves has recently been studied for potential use in nerve-sparing prostate surgery. In this study, the discrete wavelet transform and complex dual-tree wavelet transform are implemented for wavelet shrinkage denoising in OCT images of the rat prostate. Applying the complex dual-tree wavelet transform provides improved results for speckle noise reduction in the OCT prostate image. Image quality metrics of the cavernous nerves and signal-to-noise ratio (SNR) were improved significantly using this complex wavelet denoising technique.
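    Wavelet shrinkage with a complex transform typically soft-thresholds the coefficient magnitude while preserving its phase. A hedged pure-Python sketch of that standard rule (not necessarily the paper's exact implementation):

```python
def complex_soft_threshold(re, im, lam):
    """Shrink the magnitude of a complex wavelet coefficient by lam,
    zeroing it when the magnitude falls below lam; the phase is kept.
    This is the usual shrinkage rule for complex dual-tree denoising."""
    mag = (re * re + im * im) ** 0.5
    if mag <= lam:
        return 0.0, 0.0
    scale = (mag - lam) / mag
    return re * scale, im * scale
```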

  14. Optical phase distribution evaluation by using zero order Generalized Morse Wavelet

    NASA Astrophysics Data System (ADS)

    Kocahan, Özlem; Elmas, Merve Naz; Durmuş, Çağla; Coşkun, Emre; Tiryaki, Erhan; Özder, Serhat

    2017-02-01

    When determining the phase from projected fringes by using the continuous wavelet transform (CWT), selection of the wavelet is an important step. A new wavelet for phase retrieval from fringe patterns with the spatial carrier frequency in the x direction is presented. As the mother wavelet, the zero-order generalized Morse wavelet (GMW) is chosen because of its flexible spatial and frequency localization property and because it is exactly analytic. In this study, the GMW method is explained and numerical simulations are carried out to show the validity of this technique for finding phase distributions. Results for the Morlet and Paul wavelets are compared with the results of the GMW analysis.

  15. Computerized Cuff Pressure Algometry as Guidance for Circumferential Tissue Compression for Wearable Soft Robotic Applications: A Systematic Review.

    PubMed

    Kermavnar, Tjaša; Power, Valerie; de Eyto, Adam; O'Sullivan, Leonard W

    2018-02-01

    In this article, we review the literature on quantitative sensory testing of deep somatic pain by means of computerized cuff pressure algometry (CPA) in search of pressure-related safety guidelines for wearable soft exoskeleton and robotics design. Most pressure-related safety thresholds to date are based on interface pressures and skin perfusion, although clinical research suggests the deep somatic tissues to be the most sensitive to excessive loading. With CPA, pain is induced in deeper layers of soft tissue at the limbs. The results indicate that circumferential compression leads to discomfort at ∼16-34 kPa, becomes painful at ∼20-27 kPa, and can become unbearable even below 40 kPa.

  16. Comparisons between real and complex Gauss wavelet transform methods of three-dimensional shape reconstruction

    NASA Astrophysics Data System (ADS)

    Xu, Luopeng; Dan, Youquan; Wang, Qingyuan

    2015-10-01

    The continuous wavelet transform (CWT) introduces an adjustable spatial and frequency window which can overcome the poor localization characteristic of the Fourier transform and the windowed Fourier transform. The CWT method is widely applied in the non-stationary signal analysis field, including optical 3D shape reconstruction, with remarkable performance. In optical 3D surface measurement, the performance of CWT for optical fringe pattern phase reconstruction usually depends on the choice of wavelet function. A large family of CWT wavelet functions, such as the Mexican hat wavelet, Morlet wavelet, DOG wavelet, Gabor wavelet, and so on, can be generated from the Gauss wavelet function. However, application of the Gauss wavelet transform (GWT) method (i.e., CWT with a Gauss wavelet function) in optical profilometry has so far rarely been reported. In this paper, the method using GWT for optical fringe pattern phase reconstruction is presented first, and the comparisons between real and complex GWT methods are discussed in detail. Examples of numerical simulations are also given and analyzed. The results show that both the real GWT method, along with a Hilbert transform, and the complex GWT method can realize three-dimensional surface reconstruction, and the performance of reconstruction generally depends on the frequency domain appearance of the Gauss wavelet functions. For optical fringe patterns whose phase varies strongly with position, the performance of the real GWT is better than that of the complex one because complex Gauss-series wavelets exhibit frequency sidelobes. Finally, experiments are carried out, and the experimental results agree well with our theoretical analysis.

  17. Threshold Resummation for Squark-Antisquark and Gluino-Pair Production at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulesza, A.; Motyka, L.; II Institute for Theoretical Physics, University of Hamburg, Luruper Chaussee 149, D-22761, Germany and Institute of Physics, Jagellonian University, Reymonta 4, 30-059 Krakow

    2009-03-20

    We study the effect of soft gluon emission in the hadroproduction of squark-antisquark and gluino-gluino pairs at the next-to-leading logarithmic (NLL) accuracy within the framework of the minimal supersymmetric model. The one-loop soft anomalous dimension matrices controlling the color evolution of the underlying hard-scattering processes are calculated. We present the resummed total cross sections and show numerical results for proton-proton collisions at 14 TeV. For the gluino-pair production, the theoretical uncertainty due to scale variation is reduced to the few-percent level.

  18. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets for fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights. Therefore, the wavelet transform and the neural network training procedure become one stage, which avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.

  19. Comparison between deterministic and statistical wavelet estimation methods through predictive deconvolution: Seismic to well tie example from the North Sea

    NASA Astrophysics Data System (ADS)

    de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode

    2017-01-01

    Wavelet estimation as well as seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms for both wavelet estimation methods introduce a semi-automatic approach to determine the optimum parameters and, further, to estimate the optimum seismic wavelets by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data yield some qualitative conclusions, likely useful for seismic inversion and interpretation of field data, by comparing deterministic and statistical wavelet estimation in detail, especially for the field data example. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the well-to-seismic tie.
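    The selection criterion above, the correlation coefficient between the recorded and synthetic traces, is the ordinary Pearson correlation. A minimal pure-Python sketch (names are illustrative):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two traces of equal length.
    In a seismic-to-well tie, the candidate wavelet whose synthetic trace
    maximizes this value against the recorded trace would be selected."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```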

  20. An observation of ablation effect of soft biotissue by pulsed Er:YAG laser

    NASA Astrophysics Data System (ADS)

    Zhang, Xianzeng; Xie, Shusen; Ye, Qing; Zhan, Zhenlin

    2007-02-01

    Because of its unique absorption properties in organic tissue, the pulsed Er:YAG laser has attracted great interest for various applications in medicine, such as dermatology, dentistry, and cosmetic surgery. However, consensus regarding the optimal parameters for clinical use of this tool has not been reached. In this paper, the ablation characteristics of soft tissue under Er:YAG laser irradiation were studied. Porcine skin tissue in vitro was used in the experiment. Laser fluences ranged from 25 mJ/mm2 to 200 mJ/mm2, the repetition rate was 5 Hz, and the spot size on the tissue surface was 2 mm. The ablation effects were assessed by means of an optical microscope; ablation diameters and depths were measured with a reading microscope. It was shown that the ablation of soft biotissue by the pulsed Er:YAG laser is a threshold process. With an appropriate choice of irradiation parameters, high quality ablation with clean, sharp cuts following closely the spatial contour of the incident beam can be achieved. The curves of ablation crater diameter and depth versus laser fluence were obtained, the ablation threshold and ablation yield were then calculated, and the influence of the number of pulses fired into a crater on the ablation crater depth was also discussed.
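    A common way to model threshold ablation of this kind is the blow-off model, in which depth per pulse grows logarithmically with fluence above threshold. This is a textbook assumption offered for illustration only, not necessarily the fitting procedure the authors used, and the parameter values below are hypothetical:

```python
import math

def ablation_depth(F, F_th, mu_eff):
    """Blow-off model sketch: ablation depth per pulse is
    ln(F / F_th) / mu_eff above the threshold fluence F_th and zero below.
    F_th (threshold fluence) and mu_eff (effective absorption coefficient)
    are hypothetical illustration values, not measured results."""
    return math.log(F / F_th) / mu_eff if F > F_th else 0.0
```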

  1. Denoising embolic Doppler ultrasound signals using Dual Tree Complex Discrete Wavelet Transform.

    PubMed

    Serbes, Gorkem; Aydin, Nizamettin

    2010-01-01

    Early and accurate detection of asymptomatic emboli is important for monitoring of preventive therapy in stroke-prone patients. One of the problems in the detection of emboli is the identification of embolic signals caused by very small emboli. The amplitude of the embolic signal may be so small that advanced processing methods are required to distinguish these signals from Doppler signals arising from red blood cells. In this study, instead of the conventional discrete wavelet transform, the Dual Tree Complex Discrete Wavelet Transform was used for denoising embolic signals, and the performances of both approaches were compared. Unlike the conventional discrete wavelet transform, the dual tree complex discrete wavelet transform is a shift-invariant transform with limited redundancy. Results demonstrate that denoising based on the Dual Tree Complex Discrete Wavelet Transform outperforms conventional discrete wavelet denoising: approximately 8 dB improvement is obtained, compared with less than 5 dB for the conventional Discrete Wavelet Transform.
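    The reported improvements are signal-to-noise figures in decibels; the conversion from signal and noise energies is a one-liner. A generic sketch (not the authors' exact estimator):

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB computed from sample energies:
    10 * log10(signal power / noise power)."""
    ps = sum(s * s for s in signal)
    pn = sum(n * n for n in noise)
    return 10.0 * math.log10(ps / pn)
```

    An "8 dB improvement" means the post-denoising SNR exceeds the pre-denoising SNR by 8 on this scale, i.e. the power ratio improves by a factor of about 6.3.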

  2. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  3. Pulsar signal denoising method based on Laplace distribution in no-subsampling wavelet packet domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo, Wang; Yanchao, Zhao; Xiangli, Wang

    2016-11-01

    In order to improve the denoising effect on the pulsar signal, a new denoising method is proposed in the no-subsampling wavelet packet domain based on the local Laplace prior model. First, we characterize the wavelet packet coefficient distribution of the true noise-free pulsar signal and construct a Laplace probability density function model for the true signal's wavelet packet coefficients. Then, we estimate the denoised wavelet packet coefficients from the noisy pulsar wavelet coefficients based on the maximum a posteriori criterion. Finally, we obtain the denoised pulsar signal through no-subsampling wavelet packet reconstruction of the estimated coefficients. The experimental results show that the proposed method performs better when calculating the pulsar time of arrival than the translation-invariant wavelet denoising method.
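    Under a Laplace prior with additive Gaussian noise, the maximum a posteriori estimate of a coefficient reduces to a soft-thresholding rule; the threshold formula below is the standard textbook result, shown here as a hedged sketch rather than the paper's exact estimator:

```python
import math

def map_laplace_estimate(y, sigma_n, sigma_x):
    """MAP estimate of a wavelet packet coefficient under a Laplace
    signal prior (standard deviation sigma_x) and Gaussian noise
    (standard deviation sigma_n): soft thresholding with threshold
    sqrt(2) * sigma_n**2 / sigma_x (textbook result, assumed here)."""
    lam = math.sqrt(2.0) * sigma_n ** 2 / sigma_x
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0
```

    The threshold adapts locally: where the signal variance sigma_x is large the shrinkage is gentle, and where noise dominates the coefficients are zeroed.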

  4. Wavelet entropy characterization of elevated intracranial pressure.

    PubMed

    Xu, Peng; Scalzo, Fabien; Bergsneider, Marvin; Vespa, Paul; Chad, Miller; Hu, Xiao

    2008-01-01

    Intracranial hypertension (ICH) often occurs in patients with traumatic brain injury (TBI), stroke, tumor, etc. The pathology of ICH is still controversial. In this work, we used wavelet entropy and relative wavelet entropy to study, for the first time, the differences between normal and hypertensive states of ICP. The wavelet entropy revealed findings similar to approximate entropy: entropy during the ICH state is smaller than in the normal state. Moreover, wavelet entropy shows that the ICH state has more focused energy in the low wavelet frequency band (0-3.1 Hz) than the normal state. The relative wavelet entropy shows that the energy distribution across the wavelet bands is actually different between these two states. Based on these results, we suggest that ICH may arise from the re-allocation of oscillation energy within the brain.
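    Wavelet entropy is the Shannon entropy of the normalized energy distribution across wavelet bands. A minimal sketch (the band energies passed in are illustrative inputs):

```python
import math

def wavelet_entropy(band_energies):
    """Shannon entropy of the normalized band-energy distribution.
    Energy focused in few bands (as reported for the ICH state)
    gives lower entropy than an even spread across bands."""
    total = sum(band_energies)
    probs = [e / total for e in band_energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```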

  5. Iterated oversampled filter banks and wavelet frames

    NASA Astrophysics Data System (ADS)

    Selesnick, Ivan W.; Sendur, Levent

    2000-12-01

    This paper takes up the design of wavelet tight frames that are analogous to Daubechies orthonormal wavelets - that is, the design of minimal length wavelet filters satisfying certain polynomial properties, but now in the oversampled case. The oversampled dyadic DWT considered in this paper is based on a single scaling function and two distinct wavelets. Having more wavelets than necessary gives a closer spacing between adjacent wavelets within the same scale. As a result, the transform is nearly shift-invariant and can be used to improve denoising. Because the associated time-frequency lattice preserves the dyadic structure of the critically sampled DWT, it can be used with tree-based denoising algorithms that exploit parent-child correlation.

  6. Acoustical Emission Source Location in Thin Rods Through Wavelet Detail Crosscorrelation

    DTIC Science & Technology

    1998-03-01

    NAVAL POSTGRADUATE SCHOOL, Monterey, California. Thesis: Acoustical Emission Source Location in Thin Rods Through Wavelet Detail Crosscorrelation. Author: Jerauld, Joseph G. ...frequency characteristics of Wavelet Analysis. Software implementation now enables the exploration of the Wavelet Transform to identify the time of...

  7. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification

    NASA Astrophysics Data System (ADS)

    Sharif, I.; Khare, S.

    2014-11-01

    With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data poses a challenge to current techniques for analyzing the data, and conventional classification methods may not be useful without dimension-reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in achieving image classification. Spectral data reduction using wavelet decomposition can be useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction yields more separable classes and better or comparable classification accuracy. In the context of the dimensionality reduction algorithm, it is found that the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time compared to Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
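    One level of the Haar decomposition halves the spectral dimensionality by pairwise averaging and differencing. A minimal pure-Python sketch of the reduction step (an unnormalized variant, shown for clarity):

```python
def haar_step(spectrum):
    """One Haar level: pairwise averages keep the coarse spectral shape
    (the reduced feature vector for classification); the pairwise
    differences hold the discarded fine detail."""
    approx = [(spectrum[i] + spectrum[i + 1]) / 2.0
              for i in range(0, len(spectrum), 2)]
    detail = [(spectrum[i] - spectrum[i + 1]) / 2.0
              for i in range(0, len(spectrum), 2)]
    return approx, detail
```

    Repeating the step reduces a few-hundred-band signature to a handful of coefficients while preserving the distinctions among spectral signatures.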

  8. Wavelet detection of singularities in the presence of fractal noise

    NASA Astrophysics Data System (ADS)

    Noel, Steven E.; Gohel, Yogesh J.; Szu, Harold H.

    1997-04-01

    Here we detect singularities with generalized quadrature processing using the recently developed Hermitian Hat wavelet. Our intended application is radar target detection for the optimal fuzing of ship self-defense munitions. We first develop a wavelet-based fractal noise model to represent sea clutter. We then investigate wavelet shrinkage as a way to reduce and smooth the noise before attempting wavelet detection. Finally, we use the complex phase of the Hermitian Hat wavelet to detect a simulated target singularity in the presence of our fractal noise.

  9. Double Density Dual Tree Discrete Wavelet Transform implementation for Degraded Image Enhancement

    NASA Astrophysics Data System (ADS)

    Vimala, C.; Aruna Priya, P.

    2018-04-01

    The wavelet transform is a main tool in modern image processing applications. A Double Density Dual Tree Discrete Wavelet Transform is used and investigated for image denoising. Images are considered for the analysis, and the performance is compared with the discrete wavelet transform and the Double Density DWT. Peak signal-to-noise ratio values and root mean square errors are calculated for the denoised images under all three wavelet techniques, and the performance is evaluated. The proposed technique gives better performance than the other two wavelet techniques.

  10. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components. The decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively. Based on statistical performance indexes, the WANN and WANFIS models are found to produce better efficiency than the ANN and ANFIS models, with WANFIS7-sym10 yielding the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that the model performance is dependent on input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
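    The decomposition that feeds the hybrid models splits the series into one final approximation plus per-level detail components. A sketch with the Haar wavelet standing in for the paper's Daubechies/Symmlet/Coiflet choices (an illustration, not the authors' code):

```python
def wavedec_haar(series, levels):
    """Multi-level Haar decomposition (unnormalized averages/differences):
    returns the final approximation and the detail vector of each level,
    the kind of inputs the hybrid WANN/WANFIS models are trained on.
    Assumes len(series) is divisible by 2**levels."""
    details = []
    approx = list(series)
    for _ in range(levels):
        avg = [(approx[i] + approx[i + 1]) / 2.0 for i in range(0, len(approx), 2)]
        dif = [(approx[i] - approx[i + 1]) / 2.0 for i in range(0, len(approx), 2)]
        details.append(dif)
        approx = avg
    return approx, details
```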

  11. Intelligent multi-spectral IR image segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Thomas; Luong, Andrew; Heim, Stephen; Patel, Maharshi; Chen, Kang; Chao, Tien-Hsin; Chow, Edward; Torres, Gilbert

    2017-05-01

    This article presents a neural network based multi-spectral image segmentation method. A neural network is trained on selected features of both the objects and background in longwave (LW) infrared (IR) images. Multiple iterations of training are performed until the accuracy of the segmentation reaches a satisfactory level. The segmentation boundary of the LW image is used to segment the midwave (MW) and shortwave (SW) IR images. A second neural network detects local discontinuities and refines the accuracy of the local boundaries. This article compares the neural network based segmentation method to the wavelet-threshold and Grab-Cut methods. Test results have shown increased accuracy and robustness of this segmentation scheme for multi-spectral IR images.

  12. A signal processing method for the friction-based endpoint detection system of a CMP process

    NASA Astrophysics Data System (ADS)

    Chi, Xu; Dongming, Guo; Zhuji, Jin; Renke, Kang

    2010-12-01

    A signal processing method for the friction-based endpoint detection system of a chemical mechanical polishing (CMP) process is presented. The method uses wavelet threshold denoising to reduce the noise contained in the measured original signal, extracts the Kalman filter innovation from the denoised signal as the feature signal, and judges the CMP endpoint based on the behavior of the Kalman filter innovation sequence during the CMP process. Applying this signal processing method, endpoint detection experiments for the Cu CMP process were carried out. The results show that the signal processing method can identify the endpoint of the Cu CMP process.
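    The Kalman-filter innovation used as the feature signal is the difference between each new measurement and the filter's one-step prediction. A scalar random-walk sketch (q and r below are illustrative noise variances, not the paper's tuning):

```python
def kalman_innovations(z, q, r):
    """Scalar random-walk Kalman filter over measurements z; returns the
    innovation sequence (measurement minus one-step prediction), whose
    behavior flags the CMP endpoint in the described system.
    q = process noise variance, r = measurement noise variance."""
    x, p = z[0], 1.0
    innovations = []
    for zk in z[1:]:
        p = p + q              # predict: state modeled as a random walk
        nu = zk - x            # innovation (prediction residual)
        k = p / (p + r)        # Kalman gain
        x = x + k * nu         # update state estimate
        p = (1.0 - k) * p      # update error covariance
        innovations.append(nu)
    return innovations
```

    While friction is steady the innovations hover near zero; a sustained shift at the endpoint shows up as a run of large innovations.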

  13. Wavelet transform: fundamentals, applications, and implementation using acousto-optic correlators

    NASA Astrophysics Data System (ADS)

    DeCusatis, Casimer M.; Koay, J.; Litynski, Daniel M.; Das, Pankaj K.

    1995-10-01

    In recent years there has been a great deal of interest in the use of wavelets to supplement or replace conventional Fourier transform signal processing. This paper provides a review of wavelet transforms for signal processing applications, and discusses several emerging applications which benefit from the advantages of wavelets. The wavelet transform can be implemented as an acousto-optic correlator; perfect reconstruction of digital signals may also be achieved using acousto-optic finite impulse response filter banks. Acousto-optic image correlators are discussed as a potential implementation of the wavelet transform, since a 1D wavelet filter bank may be encoded as a 2D image. We discuss applications of the wavelet transform including nondestructive testing of materials, biomedical applications in the analysis of EEG signals, and interference excision in spread spectrum communication systems. Computer simulations and experimental results for these applications are also provided.

  14. The application of super wavelet finite element on temperature-pressure coupled field simulation of LPG tank under jet fire

    NASA Astrophysics Data System (ADS)

    Zhao, Bin

    2015-02-01

Temperature-pressure coupled field analysis of a liquefied petroleum gas (LPG) tank under jet fire can offer theoretical guidance for preventing LPG tank fire accidents; the application of the super wavelet finite element method to this problem is studied in depth. First, related research on heat transfer analysis of LPG tanks under fire and on super wavelets is reviewed. Second, the basic theory of the super wavelet transform is presented. Third, the temperature-pressure coupled model of the gas phase and liquid LPG under jet fire is established based on the equation of state, the VOF model and the RNG k-ɛ model. The super wavelet finite element formulation is then constructed using the super wavelet scale function as the interpolating function. Finally, the simulation is carried out, and the results show that the super wavelet finite element method achieves higher computational precision than the wavelet finite element method.

  15. Wavelets

    NASA Astrophysics Data System (ADS)

    Strang, Gilbert

    1994-06-01

    Several methods are compared that are used to analyze and synthesize a signal. Three ways are mentioned to transform a symphony: into cosine waves (Fourier transform), into pieces of cosines (short-time Fourier transform), and into wavelets (little waves that start and stop). Choosing the best basis, higher dimensions, fast wavelet transform, and Daubechies wavelets are discussed. High-definition television is described. The use of wavelets in identifying fingerprints in the future is related.

  16. Simple rules for passive diffusion through the nuclear pore complex

    PubMed Central

    Mironska, Roxana; Kim, Seung Joong

    2016-01-01

    Passive macromolecular diffusion through nuclear pore complexes (NPCs) is thought to decrease dramatically beyond a 30–60-kD size threshold. Using thousands of independent time-resolved fluorescence microscopy measurements in vivo, we show that the NPC lacks such a firm size threshold; instead, it forms a soft barrier to passive diffusion that intensifies gradually with increasing molecular mass in both the wild-type and mutant strains with various subsets of phenylalanine-glycine (FG) domains and different levels of baseline passive permeability. Brownian dynamics simulations replicate these findings and indicate that the soft barrier results from the highly dynamic FG repeat domains and the diffusing macromolecules mutually constraining and competing for available volume in the interior of the NPC, setting up entropic repulsion forces. We found that FG domains with exceptionally high net charge and low hydropathy near the cytoplasmic end of the central channel contribute more strongly to obstruction of passive diffusion than to facilitated transport, revealing a compartmentalized functional arrangement within the NPC. PMID:27697925

  17. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balsa Terzic, Gabriele Bassi

In this paper we discuss representations of charged particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for its removal. We devise two alternative estimation methods for the charged particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation: the distribution is first binned onto a finite grid, after which two grid-based methods are employed to approximate the particle distribution: (i) the truncated fast cosine transform (TFCT); and (ii) the thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a substantial upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The gain in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
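The thresholded wavelet transform (TWT) step can be illustrated with a small sketch: bin the samples onto a grid, apply an orthonormal multilevel wavelet transform (Haar here, purely for simplicity; the paper's choice of wavelet may differ), keep only the largest-magnitude detail coefficients, and invert. All sizes and the test distribution are invented for the demo.

```python
import numpy as np

def haar_fwd(x):
    """Full orthonormal Haar analysis; len(x) must be a power of two.
    Returns the single scaling coefficient and detail bands, coarse to fine."""
    x = np.asarray(x, dtype=float)
    details = []
    while x.size > 1:
        details.append((x[0::2] - x[1::2]) / np.sqrt(2.0))
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    return x, details[::-1]

def haar_inv(approx, details):
    x = approx.copy()
    for d in details:
        y = np.empty(2 * x.size)
        y[0::2] = (x + d) / np.sqrt(2.0)
        y[1::2] = (x - d) / np.sqrt(2.0)
        x = y
    return x

# noisy "particle density": histogram of samples from a smooth distribution
rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 4000)
edges = np.linspace(-4.0, 4.0, 257)
hist, _ = np.histogram(samples, bins=edges, density=True)

# thresholded wavelet transform: keep only the K largest detail coefficients
approx, det = haar_fwd(hist)
K = 40
tau = np.sort(np.abs(np.concatenate(det)))[-K]          # K-th largest magnitude
det_t = [np.where(np.abs(d) >= tau, d, 0.0) for d in det]
density_twt = haar_inv(approx, det_t)

centers = 0.5 * (edges[:-1] + edges[1:])
true_pdf = np.exp(-centers ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
rmse_raw = np.sqrt(np.mean((hist - true_pdf) ** 2))
rmse_twt = np.sqrt(np.mean((density_twt - true_pdf) ** 2))
```

Discarding the small coefficients removes most of the binning noise while the smooth density survives, which is the mechanism behind both the efficiency and the accuracy gain reported above.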

  19. Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach

    NASA Technical Reports Server (NTRS)

    Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.

    2000-01-01

The objective of this study is to investigate the use of the second generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performances of the bi-orthogonal second generation wavelet transform and the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments are compared in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256^3 and 512^3 DNS of forced homogeneous turbulence (Re_λ = 168) and 256^3 and 512^3 DNS of decaying homogeneous turbulence (Re_λ = 55). It is found that bi-orthogonal second generation wavelets can be used for coherent vortex extraction. The results of a priori tests indicate that second generation wavelets have better compression and the residual field is closer to Gaussian. However, it was found that the use of second generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.

  20. Enhancement of Signal-to-noise Ratio in Natural-source Transient Magnetotelluric Data with Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Paulson, K. V.

    For audio-frequency magnetotelluric surveys where the signals are lightning-stroke transients, the conventional Fourier transform method often fails to produce a high quality impedance tensor. An alternative approach is to use the wavelet transform method which is capable of localizing target information simultaneously in both the temporal and frequency domains. Unlike Fourier analysis that yields an average amplitude and phase, the wavelet transform produces an instantaneous estimate of the amplitude and phase of a signal. In this paper a complex well-localized wavelet, the Morlet wavelet, has been used to transform and analyze audio-frequency magnetotelluric data. With the Morlet wavelet, the magnetotelluric impedance tensor can be computed directly in the wavelet transform domain. The lightning-stroke transients are easily identified on the dilation-translation plane. Choosing those wavelet transform values where the signals are located, a higher signal-to-noise ratio estimation of the impedance tensor can be obtained. In a test using real data, the wavelet transform showed a significant improvement in the signal-to-noise ratio over the conventional Fourier transform.
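A minimal Morlet CWT, sufficient to localize a lightning-stroke-like transient on the dilation-translation plane, might look like this. It is a direct-convolution sketch; the synthetic burst and all parameters (carrier frequency, scales, noise level) are invented for illustration and are not taken from the paper.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet,
    psi(t) = exp(i*w0*t) * exp(-t**2 / 2), computed by direct correlation.
    Returns an array of shape (len(scales), len(x))."""
    out = np.empty((len(scales), x.size), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        psi = np.exp(1j * w0 * t / s - (t / s) ** 2 / 2.0) / np.sqrt(s)
        out[i] = np.convolve(x, psi[::-1].conj(), mode="same")  # correlation
    return out

# a lightning-stroke-like transient buried in noise
rng = np.random.default_rng(2)
n = 1024
t = np.arange(n)
burst = np.exp(-((t - 500) / 12.0) ** 2) * np.cos(0.6 * (t - 500))
x = burst + 0.2 * rng.standard_normal(n)

scales = np.array([6.0, 10.0, 16.0, 25.0])
W = np.abs(morlet_cwt(x, scales))
s_idx, t_idx = np.unravel_index(np.argmax(W), W.shape)  # matched scale and time
```

The magnitude peak on the dilation-translation plane lands at the scale matching the burst's carrier (s = w0 / 0.6 = 10) and at the burst's time, which is exactly the selection step used to build a higher signal-to-noise impedance estimate.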

  1. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which leads to time-consuming training and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions. Implications for semiparametric estimation are also proposed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with traditional classifiers, including Spectral Angle Mapping (SAM) and Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
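The kind of translation-invariant wavelet kernel referred to here can be sketched as below, using the mother wavelet h(t) = cos(1.75t)·exp(-t²/2) that is common in the wavelet-SVM literature (shown by Zhang et al. to be an admissible SVM kernel); the paper's preferred Coiflet kernel is not reproduced here, and the toy data are invented.

```python
import numpy as np

def wavelet_kernel(X, Z, a=1.0):
    """Translation-invariant wavelet kernel
    k(x, z) = prod_i h((x_i - z_i) / a), with mother wavelet
    h(t) = cos(1.75 t) * exp(-t**2 / 2)."""
    X = np.atleast_2d(X)
    Z = np.atleast_2d(Z)
    u = (X[:, None, :] - Z[None, :, :]) / a   # pairwise per-dimension diffs
    return np.prod(np.cos(1.75 * u) * np.exp(-u ** 2 / 2.0), axis=-1)

rng = np.random.default_rng(3)
pixels = rng.normal(size=(20, 8))   # 20 toy "pixels" with 8 spectral bands
K = wavelet_kernel(pixels, pixels, a=2.0)
```

Because the Fourier transform of h is a sum of shifted Gaussians and hence nonnegative, the kernel matrix is symmetric positive semidefinite, so it can be dropped into any standard SVM solver in place of an RBF kernel.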

  2. Sparse Covariance Matrix Estimation With Eigenvalue Constraints

    PubMed Central

    LIU, Han; WANG, Lie; ZHAO, Tuo

    2014-01-01

    We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
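As a simplified sketch of the two ingredients above (generalized thresholding plus an explicit eigenvalue constraint), one can soft-threshold the off-diagonal entries of the sample covariance and then floor the eigenvalues at a small ε. Note that this composition is not the authors' ADMM algorithm, which solves a joint optimization; it only illustrates the two operations, and the data and tuning values are invented.

```python
import numpy as np

def soft(x, lam):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_pd_covariance(S, lam=0.1, eps=1e-2):
    """Soft-threshold the off-diagonal entries, then project onto the
    positive-definite cone by flooring the eigenvalues at eps."""
    T = soft(S, lam)
    np.fill_diagonal(T, np.diag(S))          # leave the diagonal unpenalized
    w, V = np.linalg.eigh(T)
    return (V * np.maximum(w, eps)) @ V.T    # eigenvalue floor at eps

rng = np.random.default_rng(4)
p, n = 8, 400
truth = np.eye(p)
truth[0, 1] = truth[1, 0] = 0.5              # one true off-diagonal entry
X = rng.multivariate_normal(np.zeros(p), truth, size=n)
S = np.cov(X, rowvar=False)
est = sparse_pd_covariance(S, lam=0.1, eps=1e-2)
```

The estimate is simultaneously sparse (most spurious off-diagonal entries are zeroed) and positive definite (smallest eigenvalue at least ε), which is the pair of properties the paper's estimator guarantees jointly.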

  3. Representation and design of wavelets using unitary circuits

    NASA Astrophysics Data System (ADS)

    Evenbly, Glen; White, Steven R.

    2018-05-01

    The representation of discrete, compact wavelet transformations (WTs) as circuits of local unitary gates is discussed. We employ a similar formalism as used in the multiscale representation of quantum many-body wave functions using unitary circuits, further cementing the relation established in the literature between classical and quantum multiscale methods. An algorithm for constructing the circuit representation of known orthogonal, dyadic, discrete WTs is presented, and the explicit representation for Daubechies wavelets, coiflets, and symlets is provided. Furthermore, we demonstrate the usefulness of the circuit formalism in designing WTs, including various classes of symmetric wavelets and multiwavelets, boundary wavelets, and biorthogonal wavelets.

  4. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

The present invention provides a data denoising system utilizing multiple processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements among the processors are determined according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  5. F-wave decomposition for time of arrival profile estimation.

    PubMed

    Han, Zhixiu; Kong, Xuan

    2007-01-01

F-waves are distally recorded muscle responses that result from "backfiring" of motor neurons following stimulation of peripheral nerves. Each F-wave response is a superposition of several motor unit responses (F-wavelets). The initial deflection of the earliest F-wavelet defines the traditional F-wave latency (FWL), and earlier F-wavelets may mask F-wavelets traveling along slower (and possibly diseased) fibers. Unmasking the time of arrival (TOA) of late F-wavelets could improve the diagnostic value of F-waves. An algorithm for F-wavelet decomposition is presented, followed by results of experimental data analysis.

  6. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizures, triggers a self-organized brain state characterized by both order and maximal complexity.
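The quantities named above (relative wavelet energies and the wavelet entropy built from them) can be sketched as follows, using a Haar decomposition for simplicity (the EEG literature typically uses smoother orthogonal wavelets); the two test signals are invented stand-ins for a seizure rhythm and broadband background.

```python
import numpy as np

def relative_wavelet_energies(x, levels=5):
    """Energy of each Haar detail band, normalized to sum to one."""
    x = np.asarray(x, dtype=float)
    e = []
    for _ in range(levels):
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band energy
        e.append(np.sum(d ** 2))
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # carry approximation down
    p = np.array(e)
    return p / p.sum()

def wavelet_entropy(x, levels=5):
    """Normalized Shannon entropy of the relative wavelet energies:
    near 0 when energy sits in one band (order), larger when spread
    across bands (complexity)."""
    p = relative_wavelet_energies(x, levels)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(levels))

rng = np.random.default_rng(5)
rhythm = np.tile([1.0, -1.0], 512)       # perfectly ordered oscillation
background = rng.standard_normal(1024)   # broadband activity

H_rhythm = wavelet_entropy(rhythm)
H_background = wavelet_entropy(background)
```

The ordered rhythm concentrates all its energy in a single band and so has near-zero entropy, while the broadband signal spreads energy across bands and scores high, mirroring the entropy drop reported during recruitment-rhythm epochs.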

  7. SU-E-J-219: A Dixon Based Pseudo-CT Generation Method for MR-Only Radiotherapy Treatment Planning of the Pelvis and Head and Neck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maspero, M.; Meijer, G.J.; Lagendijk, J.J.W.

    2015-06-15

Purpose: To develop an image processing method for MRI-based generation of electron density maps, known as pseudo-CT (pCT), without the use of model- or atlas-based segmentation, and to evaluate the method in the pelvic and head-and-neck regions against CT. Methods: CT and MRI scans were obtained from the pelvic region of four patients in the supine position, using a flat table top only for CT. Stratified CT maps were generated by classifying each voxel, based on HU ranges, into one of four classes: air, adipose tissue, soft tissue or bone. A hierarchical region-selective algorithm, based on automatic thresholding and clustering, was used to classify tissues from MR Dixon reconstructed fat, In-Phase (IP) and Opposed-Phase (OP) images. First, a body mask was obtained by thresholding the IP image. Subsequently, an automatic threshold on the Dixon fat image differentiated soft and adipose tissue. K-means clustering on the IP and OP images resulted in a mask that, via a connected-neighborhood analysis, allowed the user to select the components corresponding to bone structures. The pCT was estimated through assignment of bulk HU values to the tissue classes. Bone-only Digital Reconstructed Radiographs (DRR) were generated as well. The pCT images were rigidly registered to the stratified CT to allow a volumetric and voxelwise comparison. Moreover, pCTs were also calculated within the head-and-neck region in two volunteers using the same pipeline. Results: The volumetric comparison resulted in differences <1% for each tissue class. A voxelwise comparison showed good classification, ranging from 64% to 98%. The primary misclassified classes were adipose/soft tissue and bone/soft tissue. As the patients were imaged on different table tops, part of the misclassification error can be explained by misregistration. Conclusion: The proposed approach does not rely on an anatomy model, providing the flexibility to successfully generate the pCT in two different body sites.
This research is funded by the ZonMw IMDI Programme, project name: “RASOR sharp: MRI based radiotherapy planning using a single MRI sequence”, project number: 10-104003010.
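The voxel stratification and bulk-HU assignment steps can be sketched as below. The HU class boundaries and bulk values are assumed for the illustration only; the abstract does not state the authors' exact numbers.

```python
import numpy as np

# illustrative HU class boundaries and bulk values (assumed for this sketch)
HU_RANGES = [
    (-1000.0,  -200.0, "air"),
    ( -200.0,   -20.0, "adipose"),
    (  -20.0,   120.0, "soft"),
    (  120.0,  3000.0, "bone"),
]
BULK_HU = {"air": -1000.0, "adipose": -100.0, "soft": 30.0, "bone": 700.0}

def stratify(ct):
    """Assign each voxel a tissue class from its HU value and replace it
    with the bulk HU of that class, yielding a stratified pseudo-CT-like map."""
    labels = np.empty(ct.shape, dtype=object)
    bulk = np.empty(ct.shape, dtype=float)
    for lo, hi, name in HU_RANGES:
        m = (ct >= lo) & (ct < hi)
        labels[m] = name
        bulk[m] = BULK_HU[name]
    return labels, bulk

voxels = np.array([-900.0, -50.0, 40.0, 500.0])
labels, bulk = stratify(voxels)
```

In the paper the class mask comes from the MR Dixon images rather than from HU thresholds, but the final bulk-assignment step is the same mapping from class label to a single representative HU value.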

  8. Optimization of programming parameters in children with the advanced bionics cochlear implant.

    PubMed

    Baudhuin, Jacquelyn; Cadieux, Jamie; Firszt, Jill B; Reeder, Ruth M; Maxson, Jerrica L

    2012-05-01

    Cochlear implants provide access to soft intensity sounds and therefore improved audibility for children with severe-to-profound hearing loss. Speech processor programming parameters, such as threshold (or T-level), input dynamic range (IDR), and microphone sensitivity, contribute to the recipient's program and influence audibility. When soundfield thresholds obtained through the speech processor are elevated, programming parameters can be modified to improve soft sound detection. Adult recipients show improved detection for low-level sounds when T-levels are set at raised levels and show better speech understanding in quiet when wider IDRs are used. Little is known about the effects of parameter settings on detection and speech recognition in children using today's cochlear implant technology. The overall study aim was to assess optimal T-level, IDR, and sensitivity settings in pediatric recipients of the Advanced Bionics cochlear implant. Two experiments were conducted. Experiment 1 examined the effects of two T-level settings on soundfield thresholds and detection of the Ling 6 sounds. One program set T-levels at 10% of most comfortable levels (M-levels) and another at 10 current units (CUs) below the level judged as "soft." Experiment 2 examined the effects of IDR and sensitivity settings on speech recognition in quiet and noise. Participants were 11 children 7-17 yr of age (mean 11.3) implanted with the Advanced Bionics High Resolution 90K or CII cochlear implant system who had speech recognition scores of 20% or greater on a monosyllabic word test. Two T-level programs were compared for detection of the Ling sounds and frequency modulated (FM) tones. 
Differing IDR/sensitivity programs (50/0, 50/10, 70/0, 70/10) were compared using Ling and FM tone detection thresholds, CNC (consonant-vowel nucleus-consonant) words at 50 dB SPL, and Hearing in Noise Test for Children (HINT-C) sentences at 65 dB SPL in the presence of four-talker babble (+8 signal-to-noise ratio). Outcomes were analyzed using a paired t-test and a mixed-model repeated measures analysis of variance (ANOVA). T-levels set 10 CUs below "soft" resulted in significantly lower detection thresholds for all six Ling sounds and FM tones at 250, 1000, 3000, 4000, and 6000 Hz. When comparing programs differing by IDR and sensitivity, a 50 dB IDR with a 0 sensitivity setting showed significantly poorer thresholds for low frequency FM tones and voiced Ling sounds. Analysis of group mean scores for CNC words in quiet or HINT-C sentences in noise indicated no significant differences across IDR/sensitivity settings. Individual data, however, showed significant differences between IDR/sensitivity programs in noise; the optimal program differed across participants. In pediatric recipients of the Advanced Bionics cochlear implant device, manually setting T-levels with ascending loudness judgments should be considered when possible or when low-level sounds are inaudible. Study findings confirm the need to determine program settings on an individual basis as well as the importance of speech recognition verification measures in both quiet and noise. Clinical guidelines are suggested for selection of programming parameters in both young and older children. American Academy of Audiology.

  9. Use of the wavelet transform to investigate differences in brain PET images between patient groups

    NASA Astrophysics Data System (ADS)

    Ruttimann, Urs E.; Unser, Michael A.; Rio, Daniel E.; Rawlings, Robert R.

    1993-06-01

Suitability of the wavelet transform was studied for the analysis of glucose utilization differences between subject groups as displayed in PET images. To strengthen statistical inference, it was of particular interest to investigate the tradeoff between signal localization and image decomposition into uncorrelated components. This tradeoff is shown to be controlled by wavelet regularity, with the optimal compromise attained by third-order orthogonal spline wavelets. Testing of the ensuing wavelet coefficients identified only about 1.5% as statistically different (p < .05) from noise; these coefficients then served to resynthesize the difference images by the inverse wavelet transform. The resulting images displayed relatively uniform, noise-free regions of significant differences with, owing to the good localization maintained by the wavelets, very few reconstruction artifacts.

  10. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images.

  11. iSAP: Interactive Sparse Astronomical Data Analysis Packages

    NASA Astrophysics Data System (ADS)

    Fourt, O.; Starck, J.-L.; Sureau, F.; Bobin, J.; Moudden, Y.; Abrial, P.; Schmitt, J.

    2013-03-01

iSAP consists of three programs, written in IDL, which together are useful for spherical data analysis. MR/S (MultiResolution on the Sphere) contains routines for the wavelet, ridgelet and curvelet transforms on the sphere, and applications such as denoising on the sphere using wavelets and/or curvelets, Gaussianity tests and Independent Component Analysis on the sphere. MR/S was designed for the PLANCK project, but can be used for many other applications. SparsePol (Polarized Spherical Wavelets and Curvelets) has routines for the polarized wavelet, polarized ridgelet and polarized curvelet transforms on the sphere, and applications such as denoising on the sphere using wavelets and/or curvelets, Gaussianity tests and blind source separation on the sphere. SparsePol was also designed for the PLANCK project. MS-VSTS (Multi-Scale Variance Stabilizing Transform on the Sphere), designed initially for the FERMI project, is useful for spherical mono-channel and multi-channel data analysis when the data are contaminated by Poisson noise. It contains routines for wavelet/curvelet denoising, wavelet deconvolution, and multichannel wavelet denoising and deconvolution.

  12. Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer

    NASA Astrophysics Data System (ADS)

    Sreewirote, Bancha; Ngaopitakkul, Atthapol

    2018-03-01

The protection system for a transformer plays a significant role in avoiding severe damage to equipment when a disturbance occurs and in ensuring overall system reliability. One of the methodologies widely used in protection schemes and algorithms is the discrete wavelet transform. However, the characteristics of the coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore presents a study and analysis of wavelet coefficient characteristics when a fault occurs in a transformer, in both the high- and low-frequency components of the discrete wavelet transform. The effects of internal and external faults on the wavelet coefficients of both the faulted and normal phases are taken into consideration. The fault signals were simulated using a laboratory-scale experimental setup of a transmission line connected to a transformer, modelled after an actual system. The results show a clear difference between the wavelet coefficient characteristics in the high- and low-frequency components, which can be used to further design and improve detection and classification algorithms based on the discrete wavelet transform methodology in the future.
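The behaviour described, a fault transient standing out in the high-frequency detail coefficients against the power-frequency component, can be reproduced in a toy simulation. This sketch uses level-1 Haar details and invented signal parameters (sampling rate, fault time, transient frequency and decay); it is not the paper's experimental setup.

```python
import numpy as np

fs = 6400.0                                  # sampling rate (assumed)
n = 2048
t = np.arange(n) / fs
load = np.sin(2 * np.pi * 50.0 * t)          # 50 Hz load current

fault_idx = 1000                             # fault inception sample
transient = np.zeros(n)
k = np.arange(n - fault_idx)
transient[fault_idx:] = (0.5 * np.exp(-k / 60.0)
                         * np.sin(2 * np.pi * 1600.0 * t[fault_idx:]))
signal = load + transient

# level-1 Haar detail coefficients: the high-frequency band, where the
# fault transient dominates and the 50 Hz component is nearly invisible
d1 = (signal[0::2] - signal[1::2]) / np.sqrt(2.0)
detect = 2 * int(np.argmax(np.abs(d1)))      # map back to a sample index
```

Before the fault the detail coefficients stay near zero (the 50 Hz component changes slowly within each pair of samples), so the burst of large coefficients both detects the fault and localizes its inception time, which is the basis of DWT protection algorithms.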

  13. Adjusting Wavelet-based Multiresolution Analysis Boundary Conditions for Robust Long-term Streamflow Forecasting Model

    NASA Astrophysics Data System (ADS)

    Maslova, I.; Ticlavilca, A. M.; McKee, M.

    2012-12-01

There has been increased interest in wavelet-based streamflow forecasting models in recent years. Often overlooked in this approach are the circularity assumptions of the wavelet transform. We propose a novel technique for minimizing the effect of the wavelet decomposition boundary conditions to produce long-term, up to 12 months ahead, forecasts of streamflow. A simulation study is performed to evaluate the effects of different wavelet boundary rules using synthetic and real streamflow data. A hybrid wavelet-multivariate relevance vector machine model is developed for forecasting the streamflow in real time for the Yellowstone River, Uinta Basin, Utah, USA. The inputs of the model use only past monthly streamflow records, decomposed into components formulated in terms of wavelet multiresolution analysis. It is shown that the model accuracy can be increased by using the wavelet boundary rule introduced in this study. This long-term streamflow modeling and forecasting methodology would enable better decision-making and management of water availability risk.
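The boundary-condition issue is easy to demonstrate: under a periodic ("wrap") extension, a trending signal such as a streamflow record is treated as if it jumped from its last value back to its first, which inflates the boundary wavelet coefficients, while a symmetric extension does not. Below is a generic sketch with a level-1 Daubechies-2 transform; it illustrates the circularity problem, not the specific boundary rule the study introduces.

```python
import numpy as np

S3 = np.sqrt(3.0)
H = np.array([1 + S3, 3 + S3, 3 - S3, 1 - S3]) / (4.0 * np.sqrt(2.0))  # db2 lowpass
G = np.array([H[3], -H[2], H[1], -H[0]])                               # db2 highpass

def db2_details(x, mode):
    """Level-1 Daubechies-2 detail coefficients, with the signal extended
    at the right end by np.pad: mode='wrap' (periodic) or 'symmetric'."""
    xe = np.pad(x, (0, 3), mode=mode)
    return np.array([G @ xe[2 * k : 2 * k + 4] for k in range(x.size // 2)])

ramp = np.linspace(0.0, 1.0, 256)       # trending signal: circularity is violated
d_per = db2_details(ramp, "wrap")       # periodic rule sees a jump at the boundary
d_sym = db2_details(ramp, "symmetric")  # symmetric rule does not
```

Because db2 has two vanishing moments, the interior details of a linear ramp vanish to rounding error; only the boundary coefficient differs between the two rules, and the periodic rule leaves a large spurious value there, exactly the artifact a forecasting model fed with boundary coefficients would inherit.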

  14. The 4D hyperspherical diffusion wavelet: A new method for the detection of localized anatomical variation.

    PubMed

    Hosseinbor, Ameer Pasha; Kim, Won Hwa; Adluru, Nagesh; Acharya, Amit; Vorperian, Houri K; Chung, Moo K

    2014-01-01

Recently, the HyperSPHARM algorithm was proposed to parameterize multiple disjoint objects in a holistic manner using the 4D hyperspherical harmonics. The HyperSPHARM coefficients are global; they cannot be used to directly infer localized variations in signal. In this paper, we present a unified wavelet framework that links HyperSPHARM to the diffusion wavelet transform. Specifically, we will show that the HyperSPHARM basis forms a subset of a wavelet-based multiscale representation of surface-based signals. This wavelet, termed the hyperspherical diffusion wavelet, is a consequence of the equivalence of isotropic heat diffusion smoothing and the diffusion wavelet transform on the hypersphere. Our framework allows for the statistical inference of highly localized anatomical changes, which we demonstrate in the first-ever developmental study on the hyoid bone investigating gender and age effects. We also show that the hyperspherical wavelet successfully picks up group-wise differences that are barely detectable using SPHARM.

  15. The 4D Hyperspherical Diffusion Wavelet: A New Method for the Detection of Localized Anatomical Variation

    PubMed Central

    Hosseinbor, A. Pasha; Kim, Won Hwa; Adluru, Nagesh; Acharya, Amit; Vorperian, Houri K.; Chung, Moo K.

    2014-01-01

Recently, the HyperSPHARM algorithm was proposed to parameterize multiple disjoint objects in a holistic manner using the 4D hyperspherical harmonics. The HyperSPHARM coefficients are global; they cannot be used to directly infer localized variations in signal. In this paper, we present a unified wavelet framework that links HyperSPHARM to the diffusion wavelet transform. Specifically, we will show that the HyperSPHARM basis forms a subset of a wavelet-based multiscale representation of surface-based signals. This wavelet, termed the hyperspherical diffusion wavelet, is a consequence of the equivalence of isotropic heat diffusion smoothing and the diffusion wavelet transform on the hypersphere. Our framework allows for the statistical inference of highly localized anatomical changes, which we demonstrate in the first-ever developmental study on the hyoid bone investigating gender and age effects. We also show that the hyperspherical wavelet successfully picks up group-wise differences that are barely detectable using SPHARM. PMID:25320783

  16. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering. PMID:25530928
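For reference, the three pointwise operators compared above can be sketched together. The half-thresholding formula follows the analytic form given by Xu et al. for ℓ1/2 regularization (worth verifying against the original paper before reuse); the soft and hard operators are standard.

```python
import numpy as np

def soft_threshold(x, lam):
    """ell_1 shrinkage: move every surviving coefficient toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    """Keep-or-kill: zero the small coefficients, leave the rest untouched."""
    return np.where(np.abs(x) > lam, x, 0.0)

def half_threshold(x, lam):
    """Analytic ell_{1/2} ('half') thresholding operator (after Xu et al.):
    zeros coefficients below ~0.945 * lam**(2/3) and shrinks the survivors
    much less than soft thresholding does."""
    x = np.asarray(x, dtype=float)
    t = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)   # zeroing threshold
    out = np.zeros_like(x)
    big = np.abs(x) > t
    xb = x[big]
    phi = np.arccos((lam / 8.0) * (np.abs(xb) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * xb * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out
```

For a large coefficient the half-threshold output approaches the input (low bias, like hard thresholding) while still zeroing small coefficients; with lam = 1, soft thresholding maps 5.0 to 4.0 while half thresholding maps it to close to 4.9.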

  17. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define sparsity. However, the DGT is noninvertible and cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct pseudoinverse transforms for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to their state-of-the-art soft-threshold and hard-threshold filtering counterparts.

  18. Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anant, K.S.

    1997-06-01

    In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P as well as the S phase using only information from single-station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram, which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. 
The methods developed in this thesis (the feature identification methods of Chapter 3, the compression methods of Chapter 4, and the wavelet design methods of Chapter 5) are general enough to be easily applied to other transient signals.

  19. Islanding detection technique using wavelet energy in grid-connected PV system

    NASA Astrophysics Data System (ADS)

    Kim, Il Song

    2016-08-01

    This paper proposes a new islanding detection method using wavelet energy in a grid-connected photovoltaic system. The method detects spectral changes in the higher-frequency components of the point-of-common-coupling voltage and obtains wavelet coefficients by multilevel wavelet analysis. The autocorrelation of the wavelet coefficients can clearly identify islanding, even under variations of the grid voltage harmonics during normal operating conditions. The advantage of the proposed method is that it can detect islanding conditions that the conventional under-voltage/over-voltage/under-frequency/over-frequency methods fail to detect. The theoretical method for obtaining the wavelet energies is developed and verified by experimental results.
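
    The multilevel wavelet-energy idea can be sketched with an orthonormal Haar decomposition. This is a toy illustration, not the paper's method: the paper's actual wavelet, level count, and autocorrelation step are not specified here, and all names and the injected high-frequency component are assumptions.

```python
import numpy as np

def haar_dwt_energies(x, levels):
    # Multilevel orthonormal Haar decomposition; returns the energy of the
    # detail coefficients at each level (level 1 = highest-frequency band)
    # plus the final approximation coefficients.
    x = np.asarray(x, dtype=float)
    energies = []
    approx = x
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)  # approximation
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)  # detail
        energies.append(float(np.sum(d ** 2)))
        approx = a
    return energies, approx

# A spectral change in the high-frequency content of a voltage-like signal
# shows up as a jump in the level-1 detail energy.
t = np.arange(256)
clean = np.sin(2 * np.pi * t / 32)                # fundamental only
noisy = clean + 0.5 * np.sin(2 * np.pi * t / 4)   # added high-frequency component
```

    Because the Haar transform is orthonormal, the detail energies plus the residual approximation energy sum exactly to the signal energy, which makes the per-level energies a well-defined feature to track over time.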

  20. Design of almost symmetric orthogonal wavelet filter bank via direct optimization.

    PubMed

    Murugesan, Selvaraaju; Tay, David B H

    2012-05-01

    It is a well-known fact that (compact-support) dyadic wavelets [based on two-channel filter banks (FBs)] cannot be simultaneously orthogonal and symmetric. Although orthogonal wavelets have the energy preservation property, biorthogonal wavelets are preferred in image processing applications because of their symmetry. In this paper, a novel method is presented for the design of almost symmetric orthogonal wavelet FBs. Orthogonality is structurally imposed by using the unnormalized lattice structure, and this leads to an objective function which is relatively simple to optimize. The designed filters have good frequency response, flat group delay, almost symmetric filter coefficients, and a symmetric wavelet function.

  1. Wavelets and molecular structure

    NASA Astrophysics Data System (ADS)

    Carson, Mike

    1996-08-01

    The wavelet method offers possibilities for display, editing, and topological comparison of proteins at a user-specified level of detail. Wavelets are a mathematical tool that first found application in signal processing. The multiresolution analysis of a signal via wavelets provides a hierarchical series of 'best' lower-resolution approximations. B-spline ribbons model the protein fold, with one control point per residue. Wavelet analysis sets limits on the information required to define the winding of the backbone through space, suggesting that a recognizable fold is generated from a number of points equal to one-quarter or less of the number of residues. Wavelets applied to surfaces and volumes show promise in structure-based drug design.

  2. Polar Wavelet Transform and the Associated Uncertainty Principles

    NASA Astrophysics Data System (ADS)

    Shah, Firdous A.; Tantary, Azhar Y.

    2018-06-01

    The polar wavelet transform, a generalized form of the classical wavelet transform, has been extensively used in science and engineering for finding directional representations of signals in higher dimensions. The aim of this paper is to establish new uncertainty principles associated with the polar wavelet transforms in L2(R2). Firstly, we study some basic properties of the polar wavelet transform and then derive the associated generalized version of the Heisenberg-Pauli-Weyl inequality. Finally, following the idea of Beckner (Proc. Amer. Math. Soc. 123, 1897-1905, 1995), we derive the logarithmic version of the uncertainty principle for the polar wavelet transforms in L2(R2).

  3. Sleep spindle and K-complex detection using tunable Q-factor wavelet transform and morphological component analysis

    PubMed Central

    Lajnef, Tarek; Chaibi, Sahbi; Eichenlaub, Jean-Baptiste; Ruby, Perrine M.; Aguera, Pierre-Emmanuel; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim

    2015-01-01

    A novel framework for joint detection of sleep spindles and K-complex events, two hallmarks of sleep stage S2, is proposed. Sleep electroencephalography (EEG) signals are split into oscillatory (spindles) and transient (K-complex) components. This decomposition is conveniently achieved by applying morphological component analysis (MCA) to a sparse representation of EEG segments obtained by the recently introduced discrete tunable Q-factor wavelet transform (TQWT). Tuning the Q-factor provides a convenient and elegant tool to naturally decompose the signal into an oscillatory and a transient component. The actual detection step relies on thresholding (i) the transient component to reveal K-complexes and (ii) the time-frequency representation of the oscillatory component to identify sleep spindles. Optimal thresholds are derived from ROC-like curves (sensitivity vs. FDR) on training sets and the performance of the method is assessed on test data sets. We assessed the performance of our method using full-night sleep EEG data we collected from 14 participants. In comparison to visual scoring (Expert 1), the proposed method detected spindles with a sensitivity of 83.18% and false discovery rate (FDR) of 39%, while K-complexes were detected with a sensitivity of 81.57% and an FDR of 29.54%. Similar performances were obtained when using a second expert as benchmark. In addition, when the TQWT and MCA steps were excluded from the pipeline the detection sensitivities dropped down to 70% for spindles and to 76.97% for K-complexes, while the FDR rose up to 43.62 and 49.09%, respectively. Finally, we also evaluated the performance of the proposed method on a set of publicly available sleep EEG recordings. Overall, the results we obtained suggest that the TQWT-MCA method may be a valuable alternative to existing spindle and K-complex detection methods. Paths for improvements and further validations with large-scale standard open-access benchmarking data sets are discussed. 
PMID:26283943

  4. Wavelet-based 3-D inversion for frequency-domain airborne EM data

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.

    2018-04-01

    In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion has inherent multiresolution. In order to impose a sparsity constraint, we minimize an L1-norm measure in the wavelet domain that mostly gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, can deliver a more stable inversion process and give better local resolution, while the lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test this new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared to the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
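
    The L1-minimization-by-iteratively-reweighted-least-squares idea can be sketched on a small dense problem. This is a generic sketch under stated assumptions, not the paper's AEM operator: the matrix, regularization weight, and sparse test vector below are all illustrative.

```python
import numpy as np

def irls_l1(A, b, lam=0.05, iters=50, eps=1e-8):
    # IRLS for min 0.5*||Ax - b||^2 + lam*||x||_1: at each step the l1
    # penalty is replaced by a weighted l2 penalty with weights
    # lam / (|x_i| + eps), and the resulting normal equations are solved.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        W = np.diag(lam / (np.abs(x) + eps))
        x = np.linalg.solve(A.T @ A + W, A.T @ b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 3.0, -2.0      # sparse "model" in the transform domain
b = A @ x_true
x_hat = irls_l1(A, b)
```

    The reweighting drives coefficients that are already small toward zero (their weight grows like lam/eps), while large coefficients are barely penalized, which is why the iteration tends to a sparse solution.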

  5. On the wavelet optimized finite difference method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1994-01-01

    When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
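
    The correspondence described above (a large wavelet coefficient at some location calls for grid points there) can be illustrated with a one-level Haar detail indicator. This is a toy sketch with illustrative names, thresholds, and test function, not the paper's scheme.

```python
import numpy as np

def refinement_flags(u, eps=1e-3):
    # One level of orthonormal Haar analysis: each detail coefficient
    # measures the local (two-sample) variation, so intervals whose detail
    # magnitude exceeds eps are flagged for grid refinement.
    u = np.asarray(u, dtype=float)
    d = (u[0::2] - u[1::2]) / np.sqrt(2.0)
    return np.abs(d) > eps

# A steep front at x = 0 should be the only region flagged.
x = np.linspace(-1.0, 1.0, 256)
u = np.tanh(50.0 * x)
flags = refinement_flags(u, eps=1e-3)
```

    In a full method the flagged intervals would receive finer grid points (and smooth regions would be coarsened), mirroring the add-a-wavelet / add-a-grid-point equivalence in the abstract.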

  6. Nonlinearity and Scaling Behavior in Lead Zirconate Titanate Piezoceramic

    NASA Astrophysics Data System (ADS)

    Mueller, V.

    1998-03-01

    The results of a comprehensive study of the nonlinear dielectric and electromechanical response of lead zirconate titanate (PZT) piezoceramics are presented. The piezoelectric strain of a series of donor-doped (soft PZT) and acceptor-doped (hard PZT) polycrystalline systems was measured under quasistatic (nonresonant) conditions. The measuring field was applied both parallel and perpendicular to the poling direction of the ceramic in order to investigate the influence of different symmetry conditions. Dielectric properties were studied in addition to the electromechanical measurements, which enables us to compare piezoelectric and dielectric nonlinearities. Due to the different levels and types of dopants, the piezoceramics examined differ significantly with regard to their Curie temperatures (190 °C and above); above a threshold field the nonlinearity can be described in the same way as in soft PZT. The results indicate that irreversible motion of (ferroelastic) non-180° walls causes the nonlinearity of PZT and that the contribution of (non-ferroelastic) 180° walls to the linear and nonlinear coefficients is negligibly small. The experimentally observed non-analytic scaling behavior is qualitatively inconsistent with the assumption that the nonlinearity is related to the anharmonicity of the domain wall potential. We suggest that the dynamics of domain walls in a randomly pinned medium dominates the piezoelectric and dielectric nonlinearity at field strengths well below the limiting field necessary to depole the piezoceramic. The analysis of results obtained on different ceramic systems indicates that the linear and nonlinear coefficients are not independent of each other. The observed relationship between linear and nonlinear properties leads us to suggest that another extrinsic contribution to the permittivity exists in PZT which may not be attributed to domain wall motion but rather is related to the dielectric dispersion at microwave frequencies.

  7. Analysis of the tennis racket vibrations during forehand drives: Selection of the mother wavelet.

    PubMed

    Blache, Y; Hautier, C; Lefebvre, F; Djordjevic, A; Creveaux, T; Rogowski, I

    2017-08-16

    The time-frequency analysis of tennis racket and hand vibrations is of great interest for discomfort and pathology prevention. This study aimed (i) to assess the stationarity of the vibratory signals of the racket and hand, (ii) to identify the best mother wavelet for future time-frequency analysis, and (iii) to determine whether stroke spin, racket characteristics and impact zone influence the selection of the best mother wavelet. A total of 2364 topspin and flat forehand drives were performed by fourteen male competitive tennis players with six different rackets. One tri-axial and one mono-axial accelerometer were taped on the racket throat and dominant hand, respectively. Signal stationarity was tested through the wavelet spectrum test. Eighty-nine mother wavelets were tested, based on continuous and discrete transforms, to select the best one. On average only 25±17%, 2±5%, 5±7% and 27±27% of the signals tested respected the hypothesis of stationarity for the three axes of the racket and the hand, respectively. Across the two methods for detecting the best mother wavelet, the Daubechies 45 wavelet presented the highest average ranking. No effect of stroke spin, racket characteristics or impact zone was observed on the selection of the best mother wavelet. It was concluded that an alternative approach to the Fast Fourier Transform should be used to interpret tennis vibration signals. Where the wavelet transform is chosen, the Daubechies 45 mother wavelet appeared to be the most suitable. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Nonlinear negative refraction in reorientational soft matter

    NASA Astrophysics Data System (ADS)

    Alberucci, Alessandro; Jisha, Chandroth P.; Assanto, Gaetano

    2015-09-01

    We analyze the propagation of self-trapped optical beams close to the Fréedericksz threshold in nematic liquid crystals. Accounting for power-dependent changes in walk-off due to the all-optical response, we demonstrate that light beams can switch from positive to negative refraction according to the excitation.

  9. Evaluating the accuracy of the XVI dual registration tool compared with manual soft tissue matching to localise tumour volumes for post-prostatectomy patients receiving radiotherapy.

    PubMed

    Campbell, Amelia; Owen, Rebecca; Brown, Elizabeth; Pryor, David; Bernard, Anne; Lehman, Margot

    2015-08-01

    Cone beam computerised tomography (CBCT) enables soft tissue visualisation to optimise matching in the post-prostatectomy setting, but is associated with inter-observer variability. This study assessed the accuracy and consistency of automated soft tissue localisation using XVI's dual registration tool (DRT). Sixty CBCT images from ten post-prostatectomy patients were matched using: (i) the DRT and (ii) manual soft tissue registration by six radiation therapists (RTs). Shifts in the three Cartesian planes were recorded. The accuracy of the match was determined by comparing shifts to matches performed by two genitourinary radiation oncologists (ROs). A Bland-Altman method was used to assess the 95% levels of agreement (LoA). A clinical threshold of 3 mm was used to define equivalence between methods of matching. The 95% LoA between DRT-ROs in the superior/inferior, left/right and anterior/posterior directions were -2.21 to +3.18 mm, -0.77 to +0.84 mm, and -1.52 to +4.12 mm, respectively. The 95% LoA between RTs-ROs in the superior/inferior, left/right and anterior/posterior directions were -1.89 to +1.86 mm, -0.71 to +0.62 mm and -2.8 to +3.43 mm, respectively. Five DRT CBCT matches (8.33%) were outside the 3-mm threshold, all in the setting of bladder underfilling or rectal gas. The mean time for manual matching was 82 s, versus 65 s for the DRT. XVI's DRT is comparable with RTs' manual soft tissue matching on CBCT. The DRT can minimise RT inter-observer variability; however, involuntary bladder and rectal filling can influence the tool's accuracy, highlighting the need for RT evaluation of the DRT match. © 2015 The Royal Australian and New Zealand College of Radiologists.

  10. Advantages of soft subdural implants for the delivery of electrochemical neuromodulation therapies to the spinal cord

    NASA Astrophysics Data System (ADS)

    Capogrosso, Marco; Gandar, Jerome; Greiner, Nathan; Moraud, Eduardo Martin; Wenger, Nikolaus; Shkorbatova, Polina; Musienko, Pavel; Minev, Ivan; Lacour, Stephanie; Courtine, Grégoire

    2018-04-01

    Objective. We recently developed soft neural interfaces enabling the delivery of electrical and chemical stimulation to the spinal cord. These stimulations restored locomotion in animal models of paralysis. Soft interfaces can be placed either below or above the dura mater. Theoretically, the subdural location combines many advantages, including increased selectivity of electrical stimulation, lower stimulation thresholds, and targeted chemical stimulation through local drug delivery. However, these advantages have not been documented, nor has their functional impact been studied in silico or in a relevant animal model of neurological disorders using a multimodal neural interface. Approach. We characterized the recruitment properties of subdural interfaces using a realistic computational model of the rat spinal cord that included explicit representation of the spinal roots. We then validated and complemented computer simulations with electrophysiological experiments in rats. We additionally performed behavioral experiments in rats that received a lateral spinal cord hemisection and were implanted with a soft interface. Main results. In silico and in vivo experiments showed that the subdural location decreased stimulation thresholds compared to the epidural location while retaining high specificity. This feature reduces power consumption and risks of long-term damage in the tissues, thus increasing the clinical safety profile of this approach. The hemisection induced a transient paralysis of the leg ipsilateral to the injury. During this period, the delivery of electrical stimulation restricted to the injured side combined with local chemical modulation enabled coordinated locomotor movements of the paralyzed leg without affecting the non-impaired leg in all tested rats. Electrode properties remained stable over time, while anatomical examinations revealed excellent bio-integration properties. Significance. 
Soft neural interfaces inserted subdurally provide the opportunity to deliver electrical and chemical neuromodulation therapies using a single, bio-compatible and mechanically compliant device that effectively alleviates locomotor deficits after spinal cord injury.

  11. A Framework for Final Drive Simultaneous Failure Diagnosis Based on Fuzzy Entropy and Sparse Bayesian Extreme Learning Machine

    PubMed Central

    Ye, Qing; Pan, Hao; Liu, Changhua

    2015-01-01

    This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, the wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on paired sparse Bayesian extreme learning machines, which are trained only on single failure modes and inherit the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs of the classifiers into final simultaneous failure modes, this research proposes using samples containing both single and simultaneous failure modes together with a grid search, which is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probabilistic neural networks, experimental results measured by the F1 value verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
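
    The grid-search step for the decision threshold can be sketched as an exhaustive scan that maximizes F1 over candidate cutoffs. This is a minimal single-label sketch with illustrative names and synthetic data, not the paper's multi-label procedure.

```python
import numpy as np

def best_threshold(probs, labels, grid=None):
    # Exhaustive (grid) search for the decision threshold that converts
    # probabilistic classifier outputs into hard labels with maximal F1.
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    if grid is None:
        grid = np.linspace(0.05, 0.95, 19)
    best_t, best_f1 = float(grid[0]), -1.0
    for t in grid:
        pred = (probs >= t).astype(int)
        tp = int(np.sum((pred == 1) & (labels == 1)))
        fp = int(np.sum((pred == 1) & (labels == 0)))
        fn = int(np.sum((pred == 0) & (labels == 1)))
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_t, best_f1 = float(t), f1
    return best_t, best_f1

# Synthetic classifier outputs: any cutoff between 0.28 and 0.45 is perfect.
probs = [0.10, 0.20, 0.28, 0.45, 0.60, 0.70, 0.80, 0.90]
labels = [0, 0, 0, 1, 1, 1, 1, 1]
t_star, f1_star = best_threshold(probs, labels)
```

    In the paper's setting the search would be run jointly over per-class thresholds on a validation set containing both single and simultaneous failure samples; the scan above shows only the core mechanism.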

  12. It's Harder to Splash on Soft Solids.

    PubMed

    Howland, Christopher J; Antkowiak, Arnaud; Castrejón-Pita, J Rafael; Howison, Sam D; Oliver, James M; Style, Robert W; Castrejón-Pita, Alfonso A

    2016-10-28

    Droplets splash when they impact dry, flat substrates above a critical velocity that depends on parameters such as droplet size, viscosity, and air pressure. By imaging ethanol drops impacting silicone gels of different stiffnesses, we show that substrate stiffness also affects the splashing threshold. Splashing is reduced or even eliminated: droplets on the softest substrates need over 70% more kinetic energy to splash than they do on rigid substrates. We show that this is due to energy losses caused by deformations of soft substrates during the first few microseconds of impact. We find that solids with Young's moduli ≲100  kPa reduce splashing, in agreement with simple scaling arguments. Thus, materials like soft gels and elastomers can be used as simple coatings for effective splash prevention. Soft substrates also serve as a useful system for testing splash-formation theories and sheet-ejection mechanisms, as they allow the characteristics of ejection sheets to be controlled independently of the bulk impact dynamics of droplets.

  13. It's Harder to Splash on Soft Solids

    NASA Astrophysics Data System (ADS)

    Howland, Christopher J.; Antkowiak, Arnaud; Castrejón-Pita, J. Rafael; Howison, Sam D.; Oliver, James M.; Style, Robert W.; Castrejón-Pita, Alfonso A.

    2016-10-01

    Droplets splash when they impact dry, flat substrates above a critical velocity that depends on parameters such as droplet size, viscosity, and air pressure. By imaging ethanol drops impacting silicone gels of different stiffnesses, we show that substrate stiffness also affects the splashing threshold. Splashing is reduced or even eliminated: droplets on the softest substrates need over 70% more kinetic energy to splash than they do on rigid substrates. We show that this is due to energy losses caused by deformations of soft substrates during the first few microseconds of impact. We find that solids with Young's moduli ≲100 kPa reduce splashing, in agreement with simple scaling arguments. Thus, materials like soft gels and elastomers can be used as simple coatings for effective splash prevention. Soft substrates also serve as a useful system for testing splash-formation theories and sheet-ejection mechanisms, as they allow the characteristics of ejection sheets to be controlled independently of the bulk impact dynamics of droplets.

  14. Wavelets, non-linearity and turbulence in fusion plasmas

    NASA Astrophysics Data System (ADS)

    van Milligen, B. Ph.

    Contents: Introduction; Linear spectral analysis tools; Wavelet analysis; Wavelet spectra and coherence; Joint wavelet phase-frequency spectra; Non-linear spectral analysis tools; Wavelet bispectra and bicoherence; Interpretation of the bicoherence; Analysis of computer-generated data; Coupled van der Pol oscillators; A large eddy simulation model for two-fluid plasma turbulence; A long wavelength plasma drift wave model; Analysis of plasma edge turbulence from Langmuir probe data; Radial coherence observed on the TJ-IU torsatron; Bicoherence profile at the L/H transition on CCT; Conclusions.

  15. Bayesian reconstruction of gravitational wave bursts using chirplets

    NASA Astrophysics Data System (ADS)

    Millhouse, Margaret; Cornish, Neil; Littenberg, Tyson

    2017-01-01

    The BayesWave algorithm has been shown to accurately reconstruct unmodeled short duration gravitational wave bursts and to distinguish between astrophysical signals and transient noise events. BayesWave does this by using a variable number of sine-Gaussian (Morlet) wavelets to reconstruct data in multiple interferometers. While the Morlet wavelets can be summed together to produce any possible waveform, there could be other wavelet functions that improve the performance. Because we expect most astrophysical gravitational wave signals to evolve in frequency, modified Morlet wavelets with linear frequency evolution - called chirplets - may better reconstruct signals with fewer wavelets. We compare the performance of BayesWave using Morlet wavelets and chirplets on a variety of simulated signals.
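
    A chirplet in the sense used here, a Gaussian-windowed sinusoid whose frequency evolves linearly in time, can be sketched as follows. The function name and parameter values are illustrative, not BayesWave's implementation.

```python
import numpy as np

def chirplet(t, f0=10.0, beta=20.0, tau=0.1):
    # Gaussian-windowed sinusoid with linearly evolving instantaneous
    # frequency f(t) = f0 + beta * t.  Setting beta = 0 recovers a
    # standard (real) Morlet-style wavelet.
    t = np.asarray(t, dtype=float)
    phase = 2.0 * np.pi * (f0 * t + 0.5 * beta * t ** 2)
    return np.exp(-t ** 2 / (2.0 * tau ** 2)) * np.cos(phase)

t = np.linspace(-0.5, 0.5, 2048)
w = chirplet(t)
```

    The extra chirp-rate parameter (beta) lets a single basis element track a signal whose frequency drifts across the wavelet's duration, which is why fewer chirplets than Morlet wavelets may be needed for frequency-evolving signals.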

  16. Research on artificial neural network intrusion detection photochemistry based on the improved wavelet analysis and transformation

    NASA Astrophysics Data System (ADS)

    Li, Hong; Ding, Xue

    2017-03-01

    This paper combines wavelet analysis and wavelet transform theory with an artificial neural network, preprocessing point feature attributes for intrusion detection so that they are suitable for an improved wavelet neural network. The resulting intrusion classification model achieves better adaptability and self-learning ability, greatly enhances the wavelet neural network's ability to solve the field intrusion detection problem, reduces storage space, improves the performance of the constructed neural network, and reduces training time. Simulation experiments on the KDDCup99 data set show that this method reduces the complexity of constructing the wavelet neural network while ensuring the accuracy of intrusion classification.

  17. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  18. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor)

    2010-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  19. The Wavelet Element Method. Part 2; Realization and Additional Features in 2D and 3D

    NASA Technical Reports Server (NTRS)

    Canuto, Claudio; Tabacco, Anita; Urban, Karsten

    1998-01-01

    The Wavelet Element Method (WEM) provides a construction of multiresolution systems and biorthogonal wavelets on fairly general domains. These are split into subdomains that are mapped to a single reference hypercube. Tensor products of scaling functions and wavelets defined on the unit interval are used on the reference domain. By introducing appropriate matching conditions across the interelement boundaries, a globally continuous biorthogonal wavelet basis on the general domain is obtained. This construction does not uniquely define the basis functions but rather leaves some freedom for fulfilling additional features. In this paper we detail the general construction principle of the WEM in the 1D, 2D and 3D cases. We address additional features such as symmetry, vanishing moments and minimal support of the wavelet functions in each particular dimension. The construction is illustrated by using biorthogonal spline wavelets on the interval.

  20. Two-dimensional wavelet transform for reliability-guided phase unwrapping in optical fringe pattern analysis.

    PubMed

    Li, Sikun; Wang, Xiangzhao; Su, Xianyu; Tang, Feng

    2012-04-20

    This paper theoretically discusses the modulus of two-dimensional (2D) wavelet transform (WT) coefficients, calculated using two frequently used 2D daughter wavelet definitions, in optical fringe pattern analysis. The discussion shows that neither is good enough to represent the reliability of the phase data. The differences between the two frequently used 2D daughter wavelet definitions in the performance of the 2D WT are also discussed. We propose a new 2D daughter wavelet definition for reliability-guided phase unwrapping of optical fringe patterns. The modulus of the 2D WT coefficients, obtained by using a daughter wavelet under this new definition, includes not only modulation information but also local frequency information of the deformed fringe pattern. It can therefore be treated as a good parameter representing the reliability of the retrieved phase data. Computer simulations and experiments show the validity of the proposed method.

  1. Wavelet-based Adaptive Mesh Refinement Method for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.

    2011-12-01

    Numerical modeling of global atmospheric chemical transport presents enormous computational difficulties associated with simulating a wide range of time and spatial scales. These difficulties are exacerbated by the fact that hundreds of chemical species and thousands of chemical reactions are typically used to describe the chemical kinetic mechanism. These computational requirements very often force researchers to use relatively crude quasi-uniform numerical grids with inadequate spatial resolution, which introduces significant numerical diffusion into the system. It was shown that this spurious diffusion significantly distorts pollutant mixing and transport dynamics at typically used grid resolutions. These numerical difficulties must be systematically addressed, considering that the demand for fast, high-resolution chemical transport models will only grow over the next decade with the need to interpret satellite observations of tropospheric ozone and related species. In this study we offer a dynamically adaptive multilevel Wavelet-based Adaptive Mesh Refinement (WAMR) method for the numerical modeling of atmospheric chemical evolution equations. The adaptive mesh refinement is performed by adding finer levels of resolution where fine-scale features develop and removing them where the solution behaves smoothly. The algorithm is based on mathematically well-established wavelet theory, which allows us to provide error estimates of the solution that are used in conjunction with an appropriate threshold criterion to adapt the non-uniform grid. Other essential features of the numerical algorithm include: an efficient wavelet spatial discretization that minimizes the number of degrees of freedom for a prescribed accuracy, a fast algorithm for computing wavelet amplitudes, and efficient and accurate derivative approximations on an irregular grid. The method has been tested on a variety of benchmark problems, including numerical simulation of transpacific traveling pollution plumes. The generated pollution plumes are diluted by turbulent mixing as they are advected downwind. Despite this dilution, it was recently discovered that pollution plumes in the remote troposphere can preserve their identity as well-defined structures for two weeks or more as they circle the globe. Present global Chemical Transport Models (CTMs) implemented on quasi-uniform grids are incapable of reproducing these layered structures because of the strong numerical plume dilution caused by numerical diffusion combined with the non-uniformity of atmospheric flow. It is shown that WAMR solutions of accuracy comparable to conventional numerical techniques are obtained with more than an order-of-magnitude reduction in the number of grid points; the adaptive algorithm is therefore capable of producing accurate results at a relatively low computational cost. The numerical simulations demonstrate that the WAMR algorithm, applied to the traveling plume problem, accurately reproduces the plume dynamics, unlike conventional numerical methods that utilize quasi-uniform grids.
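    The core refinement test, comparing wavelet detail amplitudes against a threshold, can be sketched in 1D. This is an illustration only (the paper's WAMR machinery is multilevel and multidimensional); `refinement_flags`, the threshold value, and the tanh "plume front" are our illustrative stand-ins:

```python
import numpy as np

def refinement_flags(u, eps):
    """Flag cell pairs whose Haar-style detail amplitude exceeds eps.
    u: samples on a uniform 1D grid (even length)."""
    d = (u[0::2] - u[1::2]) / 2.0   # detail amplitudes, one per cell pair
    return np.abs(d) > eps          # True -> refine here, False -> stay coarse

# a plume-like sharp front: refinement concentrates at the front,
# while the smooth (saturated) regions stay on the coarse grid
x = np.linspace(0.0, 1.0, 64)
u = np.tanh((x - 0.5) * 80.0)
flags = refinement_flags(u, eps=1e-3)
```

    In the real algorithm this test is applied per level, so resolution is added and removed recursively rather than in a single pass.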

  2. Discrete wavelet approach to multifractality

    NASA Astrophysics Data System (ADS)

    Isaacson, Susana I.; Gabbanelli, Susana C.; Busch, Jorge R.

    2000-12-01

    The use of wavelet techniques for multifractal analysis generalizes the box-counting approach and, in addition, provides information on eventual deviations from multifractal behavior. By introducing a wavelet partition function Wq and its corresponding free energy β(q), the discrepancies between β(q) and the multifractal free energy r(q) are shown to be indicative of these deviations. We study with Daubechies wavelets (D4) some 1D examples previously treated with Haar wavelets, and we apply the same ideas to some 2D Monte Carlo configurations that simulate a solution under the action of an attractive potential. In this last case, we study the influence on the multifractal spectra and partition functions of four physical parameters: the intensity of the pairwise potential, the temperature, the range of the model potential, and the concentration of the solution. The wavelet partition function Wq carries more information about the cluster statistics than the multifractal partition function Zq, and the location of its peaks contributes to the determination of characteristic scales of the measure. In our experiments, the information provided by Daubechies wavelets is slightly more accurate than that obtained by Haar wavelets.
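    The wavelet partition function described here sums |d_{j,k}|^q over the detail coefficients at each scale j. A minimal sketch using Haar details (the paper also uses Daubechies D4, which needs longer filters; the function names and toy profile are ours):

```python
import numpy as np

def haar_details(u):
    """Haar detail coefficients per dyadic level, finest scale first.
    u must have power-of-two length."""
    levels, a = [], np.asarray(u, float)
    while a.size >= 2:
        levels.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return levels

def partition_function(u, q):
    """W_q per level: sum over k of |d_{j,k}|**q at each scale j."""
    return [float(np.sum(np.abs(d) ** q)) for d in haar_details(u)]

u = np.arange(16.0)            # toy 1D "measure" profile
W2 = partition_function(u, q=2.0)
```

    The scaling of W_q across levels (a log-log fit of W_q against scale) gives the free energy β(q) the abstract refers to.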

  3. A Comparative Analysis for Selection of Appropriate Mother Wavelet for Detection of Stationary Disturbances

    NASA Astrophysics Data System (ADS)

    Kamble, Saurabh Prakash; Thawkar, Shashank; Gaikwad, Vinayak G.; Kothari, D. P.

    2017-12-01

    Detection of disturbances is the first step of mitigation. Power electronics plays a crucial role in the modern power system, making system operation efficient, but it also introduces stationary disturbances and adds impurities to the supply. This happens because the non-linear loads used in modern power systems inject disturbances such as harmonics, flicker, sag, etc. into the power grid. These impurities can damage equipment, so it is necessary to mitigate them very quickly. Digital signal processing techniques are therefore incorporated for detection. Techniques such as the fast Fourier transform, the short-time Fourier transform, and the wavelet transform are widely used for the detection of disturbances. Among them, the wavelet transform is preferred because of its better detection capabilities. However, which mother wavelet to use for detection remains an open question. Depending on their periodicity, disturbances are classified as stationary or non-stationary. This paper presents the importance of the selection of the mother wavelet for analyzing stationary disturbances using the discrete wavelet transform. Signals with stationary disturbances of various frequencies are generated using MATLAB. These signals are analyzed using various mother wavelets, such as Daubechies and biorthogonal wavelets, and the measured root mean square (RMS) value of the stationary disturbance is obtained. The value measured by the discrete wavelet transform is compared with the exact RMS value of the frequency component, and the percentage differences are presented, which helps to select the optimum mother wavelet.
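    One way to run the comparison described above: synthesize a disturbance with a known harmonic, push it through the level-1 high-pass branch of each candidate mother wavelet, and compare the RMS each detail band reports. A single-level sketch under our own assumptions (the `FILTERS` table, `detail_rms`, and the 50 Hz test signal are illustrative; the paper also evaluates biorthogonal families):

```python
import numpy as np

# analysis low-pass filters for two candidate mother wavelets
FILTERS = {
    "haar": np.array([1.0, 1.0]) / np.sqrt(2.0),
    "db2":  np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
                      3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2.0)),
}

def detail_rms(signal, name):
    """RMS of the level-1 detail band for the chosen mother wavelet."""
    h = FILTERS[name]
    g = h[::-1] * (-1.0) ** np.arange(h.size)   # quadrature-mirror high-pass
    d = np.convolve(signal, g)[::2]             # high-pass filter + dyadic downsample
    return float(np.sqrt(np.mean(d ** 2)))

# synthetic stationary disturbance: 50 Hz fundamental plus a 7th harmonic
t = np.arange(0, 1, 1 / 1024)
sig = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 350 * t)
rms_by_wavelet = {name: detail_rms(sig, name) for name in FILTERS}
```

    The wavelet whose band RMS deviates least from the exact RMS of the targeted harmonic would be the preferred choice in the paper's procedure.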

  4. Wavelet extractor: A Bayesian well-tie and wavelet extraction program

    NASA Astrophysics Data System (ADS)

    Gunning, James; Glinsky, Michael E.

    2006-06-01

    We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and is thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but it will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.

  5. A novel neural-wavelet approach for process diagnostics and complex system modeling

    NASA Astrophysics Data System (ADS)

    Gao, Rong

    Neural networks have been effective in several engineering applications because of their learning abilities and robustness. However, certain shortcomings, such as slow convergence and local minima, are always associated with neural networks, especially neural networks applied to highly nonlinear and non-stationary problems. These problems can be effectively alleviated by integrating a new powerful tool, wavelets, into conventional neural networks. The multi-resolution analysis and feature localization capabilities of the wavelet transform offer neural networks new possibilities for learning. The neural-wavelet network approach developed in this thesis enjoys a fast convergence rate with little risk of being trapped in a local minimum. It combines the localization properties of wavelets with the learning abilities of neural networks. Two different testbeds are used for testing the efficiency of the new approach. The first is magnetic flowmeter-based process diagnostics: here we extend previous work, which demonstrated that wavelet groups contain process information, to more general process diagnostics. A loop at the Applied Intelligent Systems Lab (AISL) is used for collecting and analyzing data through the neural-wavelet approach. The research is important for thermal-hydraulic processes in nuclear and other engineering fields. The neural-wavelet approach developed is also tested with data from the electric power grid. More specifically, the neural-wavelet approach is used for short-term and mid-term prediction of power load demand. In addition, the feasibility of determining the type of load using the proposed neural-wavelet approach is also examined. The notion of the cross-scale product has been developed as an expedient yet reliable discriminator of loads. Theoretical issues involved in the integration of wavelets and neural networks are discussed and future work is outlined.

  6. Multiresolution Distance Volumes for Progressive Surface Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laney, D E; Bertram, M; Duchaineau, M A

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero-set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction of complex, high-genus surfaces.

  7. A user's guide to the ssWavelets package

    Treesearch

    J.H. ​Gove

    2017-01-01

    ssWavelets is an R package that is meant to be used in conjunction with the sampSurf package (Gove, 2012) to perform wavelet decomposition on the results of a sampling surface simulation. In general, the wavelet filter decomposes the sampSurf simulation results by scale (distance), with each scale corresponding to a different level of the...

  8. A novel method of identifying motor primitives using wavelet decomposition*

    PubMed Central

    Popov, Anton; Olesh, Erienne V.; Yakovenko, Sergiy; Gritsenko, Valeriya

    2018-01-01

    This study reports a new technique for extracting muscle synergies using the continuous wavelet transform. The method quantifies the coincident activation of muscle groups caused by physiological processes of fixed duration, enabling the extraction of wavelet modules for arbitrary groups of muscles. Hierarchical clustering and identification of the repeating wavelet modules across subjects and across movements were used to identify consistent muscle synergies. Results indicate that the most frequently repeated wavelet modules comprised combinations of two muscles that are not traditional agonists and span different joints. We also found that these wavelet modules were flexibly combined across different movement directions in a pattern resembling directional tuning. This method is extendable to multiple frequency domains and signal modalities.

  9. EnvironmentalWaveletTool: Continuous and discrete wavelet analysis and filtering for environmental time series

    NASA Astrophysics Data System (ADS)

    Galiana-Merino, J. J.; Pla, C.; Fernandez-Cortes, A.; Cuezva, S.; Ortiz, J.; Benavente, D.

    2014-10-01

    A MATLAB-based computer code has been developed for the simultaneous wavelet analysis and filtering of several environmental time series, particularly focused on the analysis of cave monitoring data. The continuous wavelet transform, the discrete wavelet transform and the discrete wavelet packet transform have been implemented to provide a fast and precise time-period examination of the time series at different period bands. Moreover, statistical methods to examine the relation between two signals have been included. Finally, entropy-of-curves and spline-based methods have also been developed for segmenting and modeling the analyzed time series. Together, these methods provide a user-friendly and fast program for environmental signal analysis, with useful, practical and understandable results.
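    The period-band filtering such a tool performs can be approximated with a multilevel DWT in a few lines: decompose, zero every band except one, reconstruct. A Haar-based sketch under our own assumptions (the MATLAB tool presumably uses richer wavelets; `band_filter` and the toy series are ours):

```python
import numpy as np

def haar_dwt(u):
    """Multilevel Haar DWT of a power-of-two length series: [a_J, d_J, ..., d_1]."""
    coeffs, a = [], np.asarray(u, float)
    while a.size >= 2:
        coeffs.insert(0, (a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    coeffs.insert(0, a)
    return coeffs

def haar_idwt(coeffs):
    """Exact inverse of haar_dwt."""
    a = coeffs[0]
    for d in coeffs[1:]:
        out = np.empty(2 * d.size)
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def band_filter(u, keep_level):
    """Reconstruct using only the detail band at keep_level (1 = finest, shortest periods)."""
    coeffs = haar_dwt(u)
    J = len(coeffs) - 1
    kept = [np.zeros_like(c) for c in coeffs]
    kept[J - keep_level + 1] = coeffs[J - keep_level + 1]
    return haar_idwt(kept)

# toy "environmental series": an 8-sample periodic component
series = np.sin(2 * np.pi * np.arange(64) / 8.0)
fast_band = band_filter(series, keep_level=1)
```

    Level j roughly captures periods between 2^j and 2^(j+1) samples, which is the "time-period examination at different period bands" the abstract describes.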

  10. Threshold and Jet Radius Joint Resummation for Single-Inclusive Jet Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Xiaohui; Moch, Sven -Olaf; Ringer, Felix

    Here, we present the first threshold and jet radius jointly resummed cross section for single-inclusive hadronic jet production. We work at next-to-leading logarithmic accuracy and our framework allows for a systematic extension beyond the currently achieved precision. Long-standing numerical issues are overcome by performing the resummation directly in momentum space within soft collinear effective theory. We present the first numerical results for the LHC and observe an improved description of the available data. Our results are of immediate relevance for LHC precision phenomenology including the extraction of parton distribution functions and the QCD strong coupling constant.

  11. Threshold and Jet Radius Joint Resummation for Single-Inclusive Jet Production

    DOE PAGES

    Liu, Xiaohui; Moch, Sven -Olaf; Ringer, Felix

    2017-11-20

    Here, we present the first threshold and jet radius jointly resummed cross section for single-inclusive hadronic jet production. We work at next-to-leading logarithmic accuracy and our framework allows for a systematic extension beyond the currently achieved precision. Long-standing numerical issues are overcome by performing the resummation directly in momentum space within soft collinear effective theory. We present the first numerical results for the LHC and observe an improved description of the available data. Our results are of immediate relevance for LHC precision phenomenology including the extraction of parton distribution functions and the QCD strong coupling constant.

  12. Threshold resummation of the rapidity distribution for Higgs production at NNLO +NNLL

    NASA Astrophysics Data System (ADS)

    Banerjee, Pulak; Das, Goutam; Dhani, Prasanna K.; Ravindran, V.

    2018-03-01

    We present a formalism that resums threshold-enhanced logarithms to all orders in perturbative QCD for the rapidity distribution of any colorless particle produced in hadron colliders. We achieve this by exploiting the factorization properties and K+G equations satisfied by the soft and virtual parts of the cross section. We compute for the first time compact and most general expressions in two-dimensional Mellin space for the resummed coefficients. Using various state-of-the-art multiloop and multileg results, we demonstrate the numerical impact of our resummed results up to next-to-next-to-leading order for the rapidity distribution of the Higgs boson at the LHC. We find that inclusion of these threshold logarithms through resummation improves the reliability of perturbative predictions.

  13. Optimizing the motion of a folding molecular motor in soft matter.

    PubMed

    Rajonson, Gabriel; Ciobotarescu, Simona; Teboul, Victor

    2018-04-18

    We use molecular dynamics simulations to investigate the displacement of a periodically folding molecular motor in a viscous environment. Our aim is to find the significant parameters for optimizing the displacement of the motor. We find that the choice of a massy host or of small host molecules significantly increases the motor displacement. In the same environment, the motor moves with hopping, solid-like motions while the host moves with diffusive, liquid-like motions, a result that originates from the motor's larger size. Due to the hopping motions, there are thresholds on the force necessary for the motor to reach stable positions in the medium. These force thresholds result in a threshold on the size of the motor needed to induce a significant displacement, which is followed by plateaus in the motor displacement.

  14. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html
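    The denoising step used here, thresholding in wavelet space, can be sketched in 1D. Below is a minimal soft-threshold denoiser on a single-level Haar transform, in the spirit of the toy denoising application above; the paper's transform is spherical 3D (the MRS3D code), and the Haar transform, threshold value, and piecewise-constant test signal are our illustrative assumptions:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator: shrink magnitudes by t, preserving sign."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(y, t):
    """One-level Haar analysis, soft-threshold the detail band, then synthesize."""
    a = (y[0::2] + y[1::2]) / np.sqrt(2.0)
    d = soft((y[0::2] - y[1::2]) / np.sqrt(2.0), t)
    out = np.empty_like(y)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 4.0, 1.0, 3.0], 64)     # piecewise-constant truth
noisy = clean + 0.3 * rng.standard_normal(clean.size)
denoised = denoise(noisy, t=0.5)
```

    Structure concentrates in few coefficients while noise spreads over all of them, so shrinking small coefficients removes mostly noise; the same logic carries over to the spherical 3D setting.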

  15. Remote sensing of soil organic matter of farmland with hyperspectral image

    NASA Astrophysics Data System (ADS)

    Gu, Xiaohe; Wang, Lei; Yang, Guijun; Zhang, Liyan

    2017-10-01

    Monitoring the soil organic matter (SOM) of cultivated land quantitatively and understanding its spatial variation are helpful for fertility adjustment and the sustainable development of agriculture. The study aimed to analyze the response between SOM and the reflectivity of hyperspectral images with different pixel sizes, and to develop an optimal model for estimating SOM with imaging spectral technology. The wavelet transform method was used to analyze the correlation between hyperspectral reflectivity and SOM. The optimal pixel size and sensitive wavelet feature scales were then screened to develop an inversion model of SOM. Results showed that the wavelet transform of the soil hyperspectrum helped to improve the correlation between the wavelet features and SOM. In the visible wavelength range, the SOM-sensitive wavelet features were mainly concentrated in the 460-603 nm range. As the wavelength increased, the correlation coefficient corresponding to the wavelet scale rose to a maximum and then gradually decreased. In the near-infrared wavelength range, the SOM-sensitive wavelet features were mainly concentrated in the 762-882 nm range. As the wavelength increased, the wavelet scale gradually decreased. The study developed a multivariate model of continuous wavelet transform (CWT) features by the method of stepwise linear regression (SLR). The CWT-SLR models reached higher accuracies than the univariate models. As the resampling scale increased, the accuracies of the CWT-SLR models gradually increased, while the determination coefficients (R2) fluctuated from 0.52 to 0.59. The R2 of the 5x5 scale was the highest (0.5954), while the RMSE was the lowest (2.41 g/kg). This indicates that a multivariate model based on the continuous wavelet transform has a better ability to estimate SOM than a univariate model.

  16. The effects of preferred and non-preferred running strike patterns on tissue vibration properties.

    PubMed

    Enders, Hendrik; von Tscharner, Vinzenz; Nigg, Benno M

    2014-03-01

    To characterize soft tissue vibrations during running with a preferred and a non-preferred strike pattern in shoes and barefoot. Cross-sectional study. Participants ran at 3.5 m/s on a treadmill in shoes and barefoot using a rearfoot and a forefoot strike for each footwear condition. The preferred strike patterns for the subjects were a rearfoot strike and a forefoot strike for shod and barefoot running, respectively. Vibrations were recorded with an accelerometer overlying the belly of the medial gastrocnemius. Thirteen non-linearly scaled wavelets were used for the analysis. Damping was calculated as the overall decay of power in the acceleration signal post ground contact. A higher damping coefficient indicates higher damping capacities of the soft tissue. The shod rearfoot strike showed a 93% lower damping coefficient than the shod forefoot strike (p<0.001). A lower damping coefficient indicates less damping of the vibrations. The barefoot forefoot strike showed a trend toward a lower damping coefficient compared to a barefoot rearfoot strike. Running barefoot with a forefoot strike resulted in a significantly lower damping coefficient than a forefoot strike when wearing shoes (p<0.001). The shod rearfoot strike showed lower damping compared to a barefoot rearfoot strike (p<0.001). While rearfoot striking showed lower vibration frequencies in shod and barefoot running, it did not consistently result in lower damping coefficients. This study showed that the use of a preferred movement resulted in lower damping coefficients of running related soft tissue vibrations. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  17. Segmentation and detection of breast cancer in mammograms combining wavelet analysis and genetic algorithm.

    PubMed

    Pereira, Danilo Cesar; Ramos, Rodrigo Pereira; do Nascimento, Marcelo Zanchetta

    2014-04-01

    In Brazil, the National Cancer Institute (INCA) reports more than 50,000 new cases of the disease, with a risk of 51 cases per 100,000 women. Radiographic images obtained from mammography equipment are among the most frequently used techniques for aiding early diagnosis. Due to factors related to cost and professional experience, in the last two decades computer systems to support detection (Computer-Aided Detection - CADe) and diagnosis (Computer-Aided Diagnosis - CADx) have been developed to assist experts in the detection of abnormalities in their initial stages. Despite the large amount of research on CADe and CADx systems, there is still a need for improved computerized methods. Nowadays, there is growing concern with the sensitivity and reliability of abnormality diagnosis in both views of breast mammographic images, namely cranio-caudal (CC) and medio-lateral oblique (MLO). This paper presents a set of computational tools to aid the segmentation and detection of mammograms containing a mass or masses in the CC and MLO views. An artifact removal algorithm is first implemented, followed by an image denoising and gray-level enhancement method based on the wavelet transform and the Wiener filter. Finally, a method for the detection and segmentation of masses using multiple thresholding, the wavelet transform, and a genetic algorithm is employed on mammograms randomly selected from the Digital Database for Screening Mammography (DDSM). The developed method was quantitatively evaluated using the area overlap metric (AOM); the mean ± standard deviation of the AOM for the proposed method was 79.2 ± 8%. The experiments demonstrate that the proposed method has strong potential to be used as the basis for mammogram mass segmentation in the CC and MLO views. Another important aspect is that the method overcomes the limitation of analyzing only the CC and MLO views. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Designing an Algorithm for Cancerous Tissue Segmentation Using Adaptive K-means Clustering and Discrete Wavelet Transform

    PubMed Central

    Rezaee, Kh.; Haddadnia, J.

    2013-01-01

    Background: Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and have always posed a major challenge to radiologists and physicians. Objective: This paper proposes a new algorithm which draws on the discrete wavelet transform and adaptive K-means techniques to transform the medical images, implement tumor estimation, and detect breast cancer tumors in mammograms in their early stages. It also allows rapid processing of the input data. Method: In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximate coefficients of the scaling components are constructed. Then, the different parts of the image are classified in a continuous spectrum. In the next step, an adaptive K-means algorithm with smart initialization and choice of the number of clusters is used to select the appropriate threshold. Finally, the suspicious cancerous mass is separated by implementing image processing techniques. Results: We received 120 mammographic images in LJPEG format, which had been scanned in gray scale at 50 micron resolution with 3% noise and 20% INU, from clinical data taken from two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level, with an average accuracy of 92.32% and sensitivity of 90.24%. Also, the Kappa coefficient was approximately 0.85, which demonstrates the reliability of the system's performance. Conclusion: The exact positioning of the cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The low PPV and high NPV of the system are a guarantee of its performance, and both clinical specialists and patients can trust its output. PMID:25505753
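    A hedged sketch of the two core steps, a DWT approximation followed by K-means on intensities, using a one-level Haar approximation and a tiny 1D Lloyd's iteration. The paper's adaptive initialization, cluster-count selection, and post-processing are not reproduced, and all names and the toy image here are ours:

```python
import numpy as np

def haar_approx(img):
    """One-level 2D Haar approximation: halves resolution, keeps coarse scaling coefficients."""
    rows = (img[:, 0::2] + img[:, 1::2]) / 2.0
    return (rows[0::2, :] + rows[1::2, :]) / 2.0

def kmeans1d(values, k, iters=20):
    """Plain Lloyd's algorithm on scalar intensities."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# toy "mammogram": dark background with one bright square mass
img = np.zeros((16, 16))
img[4:10, 6:12] = 1.0
coarse = haar_approx(img)
labels, centers = kmeans1d(coarse.ravel(), k=2)
mask = labels.reshape(coarse.shape) == np.argmax(centers)
```

    Working on the coarse approximation rather than the raw pixels is what makes the clustering fast and less noise-sensitive, which is the role the DWT plays in the proposed pipeline.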

  19. Dynamic Neural State Identification in Deep Brain Local Field Potentials of Neuropathic Pain.

    PubMed

    Luo, Huichun; Huang, Yongzhi; Du, Xueying; Zhang, Yunpeng; Green, Alexander L; Aziz, Tipu Z; Wang, Shouyan

    2018-01-01

    In neuropathic pain, the neurophysiological and neuropathological function of the ventro-posterolateral nucleus of the thalamus (VPL) and the periventricular gray/periaqueductal gray area (PVAG) involves multiple frequency oscillations. Moreover, oscillations related to pain perception and modulation change dynamically over time. Fluctuations in these neural oscillations reflect the dynamic neural states of the nucleus. In this study, an approach to classifying the synchronization level was developed to dynamically identify the neural states. An oscillation extraction model based on a windowed wavelet packet transform was designed to characterize the activity level of the oscillations. The wavelet packet coefficients sparsely represented the activity level of theta and alpha oscillations in local field potentials (LFPs). Then, a state discrimination model was designed to calculate an adaptive threshold for determining the activity level of the oscillations. Finally, the neural state was represented by the activity levels of both theta and alpha oscillations. The relationship between neural states and pain relief was further evaluated. The state identification approach achieved sensitivity and specificity beyond 80% on simulated signals. Neural states of the PVAG and VPL were dynamically identified from the LFPs of neuropathic pain patients. The occurrence of neural states based on theta and alpha oscillations was correlated with the degree of pain relief by deep brain stimulation. In the PVAG LFPs, the occurrence of the state with high activity levels of theta oscillations independent of alpha, and of the state with low-level alpha and high-level theta oscillations, was significantly correlated with pain relief by deep brain stimulation. This study provides a reliable approach to identifying dynamic neural states in LFPs with a low signal-to-noise ratio by using sparse representation based on the wavelet packet transform. Furthermore, it may advance closed-loop deep brain stimulation based on neural states integrating multiple neural oscillations.
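    The oscillation-extraction and state-discrimination steps can be caricatured as: compute band energies from a shallow wavelet packet tree, set an adaptive threshold (here, the median energy across windows), and call a band "active" when its energy exceeds that threshold. A Haar-packet sketch with our own function names and toy windows, not the authors' model:

```python
import numpy as np

def haar_split(x):
    """One Haar filter-bank split into half-rate (low, high) branches."""
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def packet_energies(x):
    """Two-level wavelet packet: split both branches again -> 4 sub-bands; return mean energies."""
    lo, hi = haar_split(x)
    bands = [*haar_split(lo), *haar_split(hi)]
    return np.array([float(np.mean(b ** 2)) for b in bands])

def activity_levels(windows):
    """Adaptive threshold per band = median energy across windows; 'active' where exceeded."""
    E = np.vstack([packet_energies(w) for w in windows])
    return E > np.median(E, axis=0)

# two caricature LFP windows: a constant (all-low-band) one and a Nyquist-rate one
windows = [np.ones(64), (-1.0) ** np.arange(64)]
active = activity_levels(windows)
```

    In the paper's setting the sub-bands would be chosen to cover the theta and alpha ranges, and the per-window activity pattern is the "neural state".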

  20. Dynamic Neural State Identification in Deep Brain Local Field Potentials of Neuropathic Pain

    PubMed Central

    Luo, Huichun; Huang, Yongzhi; Du, Xueying; Zhang, Yunpeng; Green, Alexander L.; Aziz, Tipu Z.; Wang, Shouyan

    2018-01-01

    In neuropathic pain, the neurophysiological and neuropathological function of the ventro-posterolateral nucleus of the thalamus (VPL) and the periventricular gray/periaqueductal gray area (PVAG) involves multiple frequency oscillations. Moreover, oscillations related to pain perception and modulation change dynamically over time. Fluctuations in these neural oscillations reflect the dynamic neural states of the nucleus. In this study, an approach to classifying the synchronization level was developed to dynamically identify the neural states. An oscillation extraction model based on windowed wavelet packet transform was designed to characterize the activity level of oscillations. The wavelet packet coefficients sparsely represented the activity level of theta and alpha oscillations in local field potentials (LFPs). Then, a state discrimination model was designed to calculate an adaptive threshold to determine the activity level of oscillations. Finally, the neural state was represented by the activity levels of both theta and alpha oscillations. The relationship between neural states and pain relief was further evaluated. The performance of the state identification approach achieved sensitivity and specificity beyond 80% in simulation signals. Neural states of the PVAG and VPL were dynamically identified from LFPs of neuropathic pain patients. The occurrence of neural states based on theta and alpha oscillations were correlated to the degree of pain relief by deep brain stimulation. In the PVAG LFPs, the occurrence of the state with high activity levels of theta oscillations independent of alpha and the state with low-level alpha and high-level theta oscillations were significantly correlated with pain relief by deep brain stimulation. This study provides a reliable approach to identifying the dynamic neural states in LFPs with a low signal-to-noise ratio by using sparse representation based on wavelet packet transform. 
Furthermore, it may advance closed-loop deep brain stimulation based on neural states integrating multiple neural oscillations. PMID:29695951
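
    The adaptive-threshold state discrimination described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the moving-RMS envelope, the window length, and the median + 3×MAD threshold rule are all assumptions.

```python
import numpy as np

def moving_rms(x, win=100):
    """Activity level: moving RMS envelope of a band-limited signal."""
    return np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))

def high_activity_states(band_signal, win=100, k=3.0):
    """Binary states from an adaptive threshold (median + k * MAD of the envelope)."""
    env = moving_rms(band_signal, win)
    med = np.median(env)
    mad = np.median(np.abs(env - med))
    return env > med + k * mad

rng = np.random.default_rng(0)
n = 10000
x = 0.1 * rng.standard_normal(n)                   # background LFP-like noise
burst = np.arange(2000, 3000)
x[burst] += np.sin(2 * np.pi * 6 * burst / 1000)   # a 1000-sample theta-like burst
states = high_activity_states(x)
occurrence = float(states.mean())                  # fraction of time in the "active" state
```

    On this synthetic trace the detected high-activity fraction should land near the true burst fraction of 10%; the median/MAD rule keeps the threshold robust to the burst itself.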

  1. Fast generation of computer-generated holograms using wavelet shrinkage.

    PubMed

    Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2017-01-09

    Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
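
    The shrinkage idea (approximate a field using only a few large wavelet coefficients) can be illustrated with a plain Haar transform. The Haar wavelet, the smooth test field, and the keep-top-10% rule are assumptions for illustration; the paper's wavelet and threshold choice may differ.

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level Haar analysis: returns [d1, d2, ..., dL, aL]."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2.0))  # detail band
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)             # approximation
    coeffs.append(a)
    return coeffs

def haar_idwt(coeffs):
    """Inverse of haar_dwt (signal length must be divisible by 2**levels)."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(d))
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def shrink(coeffs, keep_fraction=0.1):
    """Zero all but the largest-magnitude fraction of coefficients."""
    flat = np.concatenate(coeffs)
    k = max(1, int(keep_fraction * flat.size))
    thresh = np.sort(np.abs(flat))[-k]
    return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

# Smooth test field (a stand-in for a slowly varying amplitude profile).
x = np.exp(-0.5 * ((np.arange(512) - 256) / 40.0) ** 2)
approx = haar_idwt(shrink(haar_dwt(x, levels=4), keep_fraction=0.1))
rel_err = np.linalg.norm(approx - x) / np.linalg.norm(x)
```

    Keeping 10% of the coefficients reconstructs the smooth field with small relative error, which is the speed/accuracy trade the abstract describes.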

  2. Determination of phase from the ridge of CWT using generalized Morse wavelet

    NASA Astrophysics Data System (ADS)

    Kocahan, Ozlem; Tiryaki, Erhan; Coskun, Emre; Ozder, Serhat

    2018-03-01

    The selection of the wavelet is an important step in determining the phase from fringe patterns. In the present work, a new wavelet for phase retrieval from the ridge of the continuous wavelet transform (CWT) is presented. The phase distributions have been extracted from the optical fringe pattern by choosing the zero-order generalized Morse wavelet (GMW) as the mother wavelet. The aim of the study is to reveal the ways in which the two varying parameters of the GMW affect the phase calculation. To show the validity of this method, an experimental study has been conducted using the diffraction phase microscopy (DPM) setup; consequently, the profiles of red blood cells have been retrieved. The results for the CWT ridge technique with the GMW have been compared with the results for the Morlet wavelet and the Paul wavelet; the results are almost identical for the Paul and zero-order generalized Morse wavelets because of their degrees of freedom. Also, for further discussion, the Fourier transform and the Stockwell transform have been applied comparatively. The outcome of the comparison reveals that GMWs are highly applicable to research in various areas, predominantly biomedicine.
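
    Ridge-based frequency and phase reading can be sketched with a Morlet wavelet, substituted here because its closed form is compact (the paper's generalized Morse wavelets are not reproduced); the value ω0 = 6, the scale grid, and the synthetic carrier are assumptions.

```python
import numpy as np

def morlet(scale, w0=6.0, width=8):
    """Sampled, approximately analytic Morlet wavelet at a given scale."""
    t = np.arange(-int(width * scale), int(width * scale) + 1)
    return np.exp(1j * w0 * t / scale - 0.5 * (t / scale) ** 2) / np.sqrt(scale)

def cwt_ridge(signal, scales, w0=6.0):
    """Return (frequency, phase) read off the CWT ridge at the signal centre.
    Convolving with the Morlet kernel equals the CWT correlation because
    the Morlet satisfies psi(-t) = conj(psi(t))."""
    mid = len(signal) // 2
    rows = [np.convolve(signal, morlet(a, w0), mode="same")[mid] for a in scales]
    i = int(np.argmax(np.abs(rows)))
    return w0 / (2 * np.pi * scales[i]), float(np.angle(rows[i]))

# Fringe-like carrier at 0.05 cycles/sample with a known phase offset.
n = np.arange(512)
f0, phi0 = 0.05, 0.7
s = np.cos(2 * np.pi * f0 * n + phi0)
f_est, phase_est = cwt_ridge(s, scales=np.arange(5, 40))
true_phase = (2 * np.pi * f0 * 256 + phi0) % (2 * np.pi)
```

    The scale maximizing the coefficient modulus gives the local frequency, and the argument of the complex coefficient at the ridge gives the wrapped phase, which is the mechanism the abstract relies on.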

  3. On-Line Loss of Control Detection Using Wavelets

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J. (Technical Monitor); Thompson, Peter M.; Klyde, David H.; Bachelder, Edward N.; Rosenthal, Theodore J.

    2005-01-01

    Wavelet transforms are used for on-line detection of aircraft loss of control. Wavelet transforms are compared with Fourier transform methods and shown to more rapidly detect changes in the vehicle dynamics. This faster response is due to a time window that decreases in length as the frequency increases. New wavelets are defined that further decrease the detection time by skewing the shape of the envelope. The wavelets are used for power spectrum and transfer function estimation. Smoothing is used to trade off the variance of the estimate with detection time. Wavelets are also used as a front-end to the eigensystem reconstruction algorithm. Stability metrics are estimated from the frequency response and models, and it is these metrics that are used for loss of control detection. A Matlab toolbox was developed for post-processing simulation and flight data using the wavelet analysis methods. A subset of these methods was implemented in real time and named the Loss of Control Analysis Tool Set or LOCATS. A manual control experiment was conducted using a hardware-in-the-loop simulator for a large transport aircraft, in which the real time performance of LOCATS was demonstrated. The next step is to use these wavelet analysis tools for flight test support.

  4. New Energy-Dependent Soft X-Ray Damage In MOS Devices

    NASA Astrophysics Data System (ADS)

    Chan, Tung-Yi; Gaw, Henry; Seligson, Daniel; Pan, Lawrence; King, Paul L.; Pianetta, Piero

    1988-06-01

    An energy-dependent soft x-ray-induced device damage has been discovered in MOS devices fabricated using a standard CMOS process. MOS devices were irradiated by monochromatic x-rays in the energy range just above and below the silicon K-edge (1.84 keV). Photons below the K-edge are found to create more damage in the oxide and at the oxide/silicon interface than photons above the K-edge. This energy-dependent damage effect is believed to be due to charge traps generated during device fabrication. It is found that data for both n- and p-type devices lie along a universal curve if normalized threshold voltage shifts are plotted against absorbed dose in the oxide. The threshold voltage shift saturates when the absorbed dose in the oxide exceeds 1.4×10^5 mJ/cm^3, corresponding to 6 Mrad in the oxide. Using isochronal anneals, the trapped charge damage is found to recover with an activation energy of 0.38 eV. A discrete radiation-induced damage state appears in the low frequency C-V curve in the temperature range from 175°C to 325°C.

  5. Time Domain Propagation of Quantum and Classical Systems using a Wavelet Basis Set Method

    NASA Astrophysics Data System (ADS)

    Lombardini, Richard; Nowara, Ewa; Johnson, Bruce

    2015-03-01

    The use of an orthogonal wavelet basis set (Optimized Maximum-N Generalized Coiflets) to effectively model physical systems in the time domain, in particular the electromagnetic (EM) pulse and quantum mechanical (QM) wavefunction, is examined in this work. Although past research has demonstrated the benefits of wavelet basis sets in handling computationally expensive problems due to their multiresolution properties, the overlapping supports of neighboring wavelet basis functions pose problems when dealing with boundary conditions, especially with material interfaces in the EM case. Specifically, this talk addresses this issue using the idea of derivative matching creating fictitious grid points (T.A. Driscoll and B. Fornberg), but replaces the latter element with fictitious wavelet projections in conjunction with wavelet reconstruction filters. Two-dimensional (2D) systems are analyzed, an EM pulse incident on silver cylinders and the QM electron wave packet circling the proton in a hydrogen atom system (reduced to 2D), and the new wavelet method is compared to the popular finite-difference time-domain technique.

  6. Efficacy Evaluation of Different Wavelet Feature Extraction Methods on Brain MRI Tumor Detection

    NASA Astrophysics Data System (ADS)

    Nabizadeh, Nooshin; John, Nigel; Kubat, Miroslav

    2014-03-01

    Automated Magnetic Resonance Imaging brain tumor detection and segmentation is a challenging task. Among the available methods, feature-based methods are dominant. While many feature extraction techniques have been employed, it is still not clear which feature extraction method should be preferred. To help improve the situation, we present the results of a study in which we evaluate the efficiency of different wavelet transform feature extraction methods in brain MRI abnormality detection. Using T1-weighted brain images, Discrete Wavelet Transform (DWT), Discrete Wavelet Packet Transform (DWPT), Dual Tree Complex Wavelet Transform (DTCWT), and Complex Morlet Wavelet Transform (CMWT) methods are applied to construct the feature pool. Three classifiers, namely Support Vector Machine (SVM), K Nearest Neighbor, and Sparse Representation-Based Classifier, are applied and compared for classifying the selected features. The results show that DTCWT and CMWT features classified with SVM result in the highest classification accuracy, demonstrating the capability of wavelet transform features to be informative in this application.

  7. Fault Diagnosis for Micro-Gas Turbine Engine Sensors via Wavelet Entropy

    PubMed Central

    Yu, Bing; Liu, Dongdong; Zhang, Tianhong

    2011-01-01

    Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources, a need that cannot be satisfied in some situations. Since the sensor readings are directly affected by the sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on wavelet theory, wavelet decomposition is utilized to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on the previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient. PMID:22163734
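
    The wavelet energy entropy idea reduces to: decompose the signal, treat the per-band energies as a probability distribution, and take the Shannon entropy. A minimal sketch follows, assuming a Haar decomposition (the paper does not prescribe one) and a whole-signal rather than truly instantaneous window:

```python
import numpy as np

def haar_bands(x, levels=5):
    """Multi-level Haar decomposition: detail bands plus final approximation."""
    bands, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        bands.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    bands.append(a)
    return bands

def wavelet_energy_entropy(x, levels=5):
    """Shannon entropy of the per-band energy distribution."""
    e = np.array([np.sum(b ** 2) for b in haar_bands(x, levels)])
    p = e / e.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(4)
n = np.arange(1024)
e_sine = wavelet_energy_entropy(np.sin(2 * np.pi * 4 * n / 1024))  # energy in one band
e_noise = wavelet_energy_entropy(rng.standard_normal(1024))        # energy spread out
```

    A narrowband (healthy-looking) signal concentrates energy in one band and yields low entropy, while a disordered signal spreads energy and yields high entropy; a fault-induced change in this number is what the diagnosis keys on.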

  8. Fault diagnosis for micro-gas turbine engine sensors via wavelet entropy.

    PubMed

    Yu, Bing; Liu, Dongdong; Zhang, Tianhong

    2011-01-01

    Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources, a need that cannot be satisfied in some situations. Since the sensor readings are directly affected by the sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on wavelet theory, wavelet decomposition is utilized to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on the previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient.

  9. Research on the fault diagnosis of bearing based on wavelet and demodulation

    NASA Astrophysics Data System (ADS)

    Li, Jiapeng; Yuan, Yu

    2017-05-01

    As a most commonly-used machine part, the antifriction bearing is extensively used in mechanical equipment. Vibration signal analysis is one of the methods to monitor and diagnose the running status of antifriction bearings. Therefore, using wavelet analysis for denoising is of great importance in engineering practice. This paper first presents the basic theory of wavelet analysis, covering wavelet transformation, decomposition and reconstruction. In addition, the LabVIEW software was adopted to perform wavelet denoising and demodulation on the collected vibration signal of an antifriction bearing. In combination with Hilbert envelope demodulation analysis, the fault characteristic frequencies of the denoised signal were extracted to conduct fault diagnosis analysis, which serves as a reference for wavelet denoising and demodulation of vibration signals in engineering practice.
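
    The envelope-demodulation step can be sketched with an FFT-based analytic signal. The fault repetition rate, the carrier resonance, and the amplitude-modulation fault model below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def envelope(x):
    """Magnitude of the FFT-based analytic signal (discrete Hilbert envelope)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0          # n is assumed even here
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

fs, n = 4096, 4096
t = np.arange(n) / fs
fault_hz = 50.0              # assumed bearing-fault repetition rate
carrier_hz = 1000.0          # assumed resonance excited by the fault impacts
x = (1.0 + 0.8 * np.cos(2 * np.pi * fault_hz * t)) * np.cos(2 * np.pi * carrier_hz * t)

env = envelope(x)                                  # demodulated envelope
spec = np.abs(np.fft.rfft(env - env.mean()))       # envelope spectrum
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
detected_hz = freqs[int(np.argmax(spec[1:])) + 1]  # skip the DC bin
```

    The fault frequency shows up as the dominant line of the envelope spectrum even though it is absent from the raw spectrum, which is why demodulation precedes the frequency reading.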

  10. Simplified flexible-PON upstream transmission using pulse position modulation at ONU and DSP-enabled soft-combining at OLT for adaptive link budgets.

    PubMed

    Liu, Xiang; Effenberger, Frank; Chand, Naresh

    2015-03-09

    We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at the optical network unit, facilitating burst-mode detection with automatic decision threshold tracking, and DSP-enabled soft-combining at the optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10^-4 are respectively achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s after transmission over a 20-km standard single-mode fiber without any optical amplification.

  11. Poiseuille flow of soft glasses in narrow channels: from quiescence to steady state.

    PubMed

    Chaudhuri, Pinaki; Horbach, Jürgen

    2014-10-01

    Using numerical simulations, the onset of Poiseuille flow in a confined soft glass is investigated. Starting from the quiescent state, steady flow sets in at a time scale which increases with a decrease in applied forcing. At this onset time scale, a rapid transition occurs via the simultaneous fluidization of regions having different local stresses. In the absence of steady flow at long times, creep is observed even in regions where the local stress is larger than the bulk yielding threshold. Finally, we show that the time scale to attain steady flow depends strongly on the history of the initial state.

  12. Scope and applications of translation invariant wavelets to image registration

    NASA Technical Reports Server (NTRS)

    Chettri, Samir; LeMoigne, Jacqueline; Campbell, William

    1997-01-01

    The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses the possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we would conclude that the notion of translation invariance is not really necessary. What is needed is affine invariance and one way to do this is via the method of moment invariants. Wavelets or, in general, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.

  13. Correlation Filtering of Modal Dynamics using the Laplace Wavelet

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Lind, Rick; Brenner, Martin J.

    1997-01-01

    Wavelet analysis allows processing of transient response data commonly encountered in vibration health monitoring tasks such as aircraft flutter testing. The Laplace wavelet is formulated as an impulse response of a single mode system to be similar to data features commonly encountered in these health monitoring tasks. A correlation filtering approach is introduced using the Laplace wavelet to decompose a signal into impulse responses of single mode subsystems. Applications using responses from flutter testing of aeroelastic systems demonstrate that modal parameters and stability estimates can be obtained by correlation filtering free decay data with a set of Laplace wavelets.

  14. Alcoholism detection in magnetic resonance imaging by Haar wavelet transform and back propagation neural network

    NASA Astrophysics Data System (ADS)

    Yu, Yali; Wang, Mengxia; Lima, Dimas

    2018-04-01

    In order to develop a novel alcoholism detection method, we proposed a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform Haar wavelet transform and principal component analysis. Finally, we use back propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than db4 wavelet and sym3 wavelet.

  15. Efficient Prediction of Low-Visibility Events at Airports Using Machine-Learning Regression

    NASA Astrophysics Data System (ADS)

    Cornejo-Bueno, L.; Casanova-Mateo, C.; Sanz-Justo, J.; Cerro-Prada, E.; Salcedo-Sanz, S.

    2017-11-01

    We address the prediction of low-visibility events at airports using machine-learning regression. The proposed model successfully forecasts low-visibility events in terms of the runway visual range at the airport, with the use of support-vector regression, neural networks (multi-layer perceptrons and extreme-learning machines) and Gaussian-process algorithms. We assess the performance of these algorithms based on real data collected at the Valladolid airport, Spain. We also propose a study of the atmospheric variables measured at a nearby tower related to low-visibility atmospheric conditions, since they are considered as the inputs of the different regressors. A pre-processing procedure of these input variables with wavelet transforms is also described. The results show that the proposed machine-learning algorithms are able to predict low-visibility events well. The Gaussian process is the best algorithm among those analyzed, obtaining over 98% of the correct classification rate in low-visibility events when the runway visual range is >1000 m, and about 80% under this threshold. The performance of all the machine-learning algorithms tested is clearly affected in extreme low-visibility conditions (<500 m). However, we show improved results of all the methods when data from a neighbouring meteorological tower are included, and also with a pre-processing scheme using a wavelet transform. Also presented are results of the algorithm performance in daytime and nighttime conditions, and for different prediction time horizons.

  16. Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis

    PubMed Central

    Noureldin, Aboelmagd; Armstrong, Justin; El-Shafie, Ahmed; Karamat, Tashfeen; McGaughey, Don; Korenberg, Michael; Hussain, Aini

    2012-01-01

    In both military and civilian applications, the inertial navigation system (INS) and the global positioning system (GPS) are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency) inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS) algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.
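
    The FOS stopping rule described above can be caricatured with a greedy picker restricted to the DFT grid. This is not the full FOS algorithm (which performs orthogonal search over arbitrary candidate functions); the 20×-median noise-floor test below is an ad hoc stand-in for the fit-no-better-than-white-noise criterion:

```python
import numpy as np

def fos_like_frequencies(x, max_terms=8, stop_factor=20.0):
    """Greedy spectral fitting with an adaptive stopping threshold, loosely in
    the spirit of fast orthogonal search but restricted to the DFT grid."""
    n = len(x)
    t = np.arange(n)
    r = x - x.mean()
    picked = []
    for _ in range(max_terms):
        spec = np.abs(np.fft.rfft(r)) ** 2
        spec[0] = 0.0
        k = int(np.argmax(spec))
        # adaptive threshold: stop when the best remaining candidate is not
        # clearly above the white-noise floor (median of the spectrum)
        if spec[k] < stop_factor * np.median(spec[1:]):
            break
        c = np.fft.rfft(r)[k]
        theta = 2 * np.pi * k * t / n
        r = r - (2.0 / n) * (c.real * np.cos(theta) - c.imag * np.sin(theta))
        picked.append(k)
    return sorted(picked)

rng = np.random.default_rng(3)
n = 1024
t = np.arange(n)
x = (3.0 * np.cos(2 * np.pi * 10 * t / n + 0.5)
     + 2.0 * np.cos(2 * np.pi * 37 * t / n - 1.0)
     + 0.1 * rng.standard_normal(n))
bins = fos_like_frequencies(x)
```

    The picker models the highest-energy components first and stops once further terms would only be fitting the noise floor, mirroring the energy-first ordering and adaptive stop described in the abstract.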

  17. Automatic seizure detection in SEEG using high frequency activities in wavelet domain.

    PubMed

    Ayoubian, L; Lacoma, H; Gotman, J

    2013-03-01

    Existing automatic detection techniques show high sensitivity and moderate specificity, and detect seizures a relatively long time after onset. High frequency (80-500 Hz) activity has recently been shown to be prominent in the intracranial EEG of epileptic patients but has not been used in seizure detection. The purpose of this study is to investigate if these frequencies can contribute to seizure detection. The system was designed using 30 h of intracranial EEG, including 15 seizures in 15 patients. Wavelet decomposition, feature extraction, adaptive thresholding and artifact removal were employed in training data. An EMG removal algorithm was developed based on two features: Lack of correlation between frequency bands and energy-spread in frequency. Results based on the analysis of testing data (36 h of intracranial EEG, including 18 seizures) show a sensitivity of 72%, a false detection of 0.7/h and a median delay of 5.7 s. Missed seizures originated mainly from seizures with subtle or absent high frequencies or from EMG removal procedures. False detections were mainly due to weak EMG or interictal high frequency activities. The system performed sufficiently well to be considered for clinical use, despite the exclusive use of frequencies not usually considered in clinical interpretation. High frequencies have the potential to contribute significantly to the detection of epileptic seizures. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  18. Automatic seizure detection in SEEG using high frequency activities in wavelet domain

    PubMed Central

    Ayoubian, L.; Lacoma, H.; Gotman, J.

    2015-01-01

    Existing automatic detection techniques show high sensitivity and moderate specificity, and detect seizures a relatively long time after onset. High frequency (80–500 Hz) activity has recently been shown to be prominent in the intracranial EEG of epileptic patients but has not been used in seizure detection. The purpose of this study is to investigate if these frequencies can contribute to seizure detection. The system was designed using 30 h of intracranial EEG, including 15 seizures in 15 patients. Wavelet decomposition, feature extraction, adaptive thresholding and artifact removal were employed in training data. An EMG removal algorithm was developed based on two features: Lack of correlation between frequency bands and energy-spread in frequency. Results based on the analysis of testing data (36 h of intracranial EEG, including 18 seizures) show a sensitivity of 72%, a false detection of 0.7/h and a median delay of 5.7 s. Missed seizures originated mainly from seizures with subtle or absent high frequencies or from EMG removal procedures. False detections were mainly due to weak EMG or interictal high frequency activities. The system performed sufficiently well to be considered for clinical use, despite the exclusive use of frequencies not usually considered in clinical interpretation. High frequencies have the potential to contribute significantly to the detection of epileptic seizures. PMID:22647836

  19. Wavelet analysis in two-dimensional tomography

    NASA Astrophysics Data System (ADS)

    Burkovets, Dimitry N.

    2002-02-01

    The diagnostic possibilities of wavelet analysis of coherent images of connective tissue are examined for the diagnosis of its pathological changes. The effectiveness of polarization selection in obtaining images of wavelet coefficients is also shown. The wavelet structures characterizing skin psoriasis and bone-tissue osteoporosis have been analyzed. Histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue have been analyzed.

  20. Adaptive Filtering in the Wavelet Transform Domain via Genetic Algorithms

    DTIC Science & Technology

    2004-08-06

    wavelet transforms, whereas the term "evolved" pertains only to the altered wavelet coefficients used during the inverse transform process. ... In other words, the inverse transform produces the original signal x(t) from the wavelet and scaling coefficients: x(t) = sum_k sum_n d_{k,n} psi_{k,n}(t) ... reconstruct the original signal as accurately as possible. The inverse transform reconstructs an approximation of the original signal (Burrus

  1. Sediment Dynamics in a Vegetated Tidally Influenced Interdistributary Island: Wax Lake, Louisiana

    DTIC Science & Technology

    2017-07-01

    60 Appendix A: Time Series of Wax Lake Hydrological Measurements...north-south wind stress (right). In each plot, the global wavelet spectrum is shown to the right of the wavelet plot, and the original time series ...for Hs. The global wavelet spectrum is shown to the right of the wavelet plot, and the original time series is shown below

  2. Implementing wavelet inverse-transform processor with surface acoustic wave device.

    PubMed

    Lu, Wenke; Zhu, Changchun; Liu, Qinghong; Zhang, Jingduan

    2013-02-01

    The objective of this research was to investigate the implementation schemes of the wavelet inverse-transform processor using a surface acoustic wave (SAW) device, the length function defining the electrodes, and the possibility of solving the load resistance and the internal resistance for the wavelet inverse-transform processor using a SAW device. In this paper, we investigate the implementation schemes of the wavelet inverse-transform processor using a SAW device. In the implementation scheme in which the input interdigital transducer (IDT) and output IDT stand in a line, because the electrode-overlap envelope of the input IDT is identical with that of the output IDT (i.e. the two transducers are identical), the product of the input IDT's frequency response and the output IDT's frequency response can be implemented, so that the wavelet inverse-transform processor can be fabricated. X-112°Y LiTaO3 is used as the substrate material to fabricate the wavelet inverse-transform processor. The size of the wavelet inverse-transform processor using this implementation scheme is small, so its cost is low. First, according to the envelope function of the wavelet function, the length function of the electrodes is defined; then, the lengths of the electrodes can be calculated from the length function of the electrodes; finally, the input IDT and output IDT can be designed according to the lengths and widths of the electrodes. In this paper, we also present the load resistance and the internal resistance as two problems of the wavelet inverse-transform processor using SAW devices. The solutions to these problems are achieved in this study. When amplifiers are connected to the input end and output end of the wavelet inverse-transform processor, they can eliminate the influence of the load resistance and the internal resistance on the output voltage of the wavelet inverse-transform processor using a SAW device. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Comparison between wavelet and wavelet packet transform features for classification of faults in distribution system

    NASA Astrophysics Data System (ADS)

    Arvind, Pratul

    2012-11-01

    The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are subjected to the wavelet packet transform and an artificial neural network in order to yield better classification results. A comparison of results between the wavelet transform and the wavelet packet transform is also presented, showing that the features extracted from the wavelet packet transform yield promising results. It should also be noted that the current samples are collected after simulating a 25 kV distribution system in PSCAD software.

  4. Wavelets on the Group SO(3) and the Sphere S3

    NASA Astrophysics Data System (ADS)

    Bernstein, Swanhild

    2007-09-01

    The construction of wavelets relies on translations and dilations, which are perfectly given in R. On the sphere, translations can be considered as rotations, but it is difficult to say what dilations are. For the 2-dimensional sphere there exist two different approaches to obtain wavelets which are worth considering. The first concept goes back to Freeden and collaborators [2], who define wavelets by means of kernels of spherical singular integrals. The other concept, developed by Antoine and Vandergheynst and coworkers [3], is a purely group-theoretical approach and defines dilations as dilations in the tangent plane. Surprisingly, both concepts coincide for zonal functions. We will define wavelets on the 3-dimensional sphere by means of kernels of singular integrals and demonstrate that the wavelets constructed by Antoine and Vandergheynst for zonal functions meet our definition.

  5. Simulation of groundwater level variations using wavelet combined with neural network, linear regression and support vector machine

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Hadi; Rajaee, Taher

    2017-01-01

    Simulation of groundwater level (GWL) fluctuations is an important task in management of groundwater resources. In this study, the effect of wavelet analysis on the training of the artificial neural network (ANN), multi linear regression (MLR) and support vector regression (SVR) approaches was investigated, and the ANN, MLR and SVR along with the wavelet-ANN (WNN), wavelet-MLR (WLR) and wavelet-SVR (WSVR) models were compared in simulating GWL one month ahead. The only variable used to develop the models was the monthly GWL data recorded over a period of 11 years from two wells in the Qom plain, Iran. The results showed that decomposing the GWL time series into several sub-time series greatly improved the training of the models. For both wells 1 and 2, the Meyer and Db5 wavelets produced better results compared to the other wavelets, which indicated that wavelet types had similar behavior in similar case studies. The optimal number of delays was 6 months, which seems to be due to natural phenomena. The best WNN model, using the Meyer mother wavelet with two decomposition levels, simulated one month ahead with RMSE values of 0.069 m and 0.154 m for wells 1 and 2, respectively. The RMSE values for the WLR model were 0.058 m and 0.111 m, and for the WSVR model were 0.136 m and 0.060 m for wells 1 and 2, respectively.

  6. A wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Yang, Yanxi

    2018-05-01

    We present a new wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry (2-D WTP). First, the maximum value point is extracted from the two-dimensional wavelet transform coefficient modulus, and the local extreme value points over 90% of the maximum value are also obtained; together they constitute the wavelet ridge candidates. Then, the gradient of the rotate factor is introduced into Abid's cost function, and the logarithmic Logistic model is used to adjust and improve the cost function weights so as to obtain a more reasonable value estimation. At last, the dynamic programming method is used to accurately find the optimal wavelet ridge, and the wrapped phase can be obtained by extracting the phase at the ridge. The advantage is that fringe patterns with a low signal-to-noise ratio can be demodulated accurately, with better noise immunity. Meanwhile, only one fringe pattern needs to be projected onto the measured object, so dynamic three-dimensional (3-D) measurement in harsh environments can be realized. Computer simulation and experimental results show that, for fringe patterns with noise pollution, the 3-D surface recovery accuracy of the proposed algorithm is increased. In addition, the demodulation phase accuracies of the Morlet, Fan and Cauchy mother wavelets are compared.
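
    The dynamic-programming ridge search can be sketched as a max-sum path problem over the coefficient-modulus map. The quadratic scale-jump penalty below is a generic stand-in for the paper's cost function (which also uses the rotate-factor gradient and a logistic weighting):

```python
import numpy as np

def dp_ridge(mag, penalty=0.5):
    """Maximize the sum of log-magnitudes along a scale-vs-position path,
    penalizing jumps across scales; backtrack to recover the optimal ridge."""
    S, X = mag.shape
    logm = np.log(mag + 1e-12)
    score = logm[:, 0].copy()
    back = np.zeros((S, X), dtype=int)
    jump = penalty * (np.arange(S)[:, None] - np.arange(S)[None, :]) ** 2
    for x in range(1, X):
        cand = score[None, :] - jump               # cand[s, s_prev]
        back[:, x] = np.argmax(cand, axis=1)
        score = cand[np.arange(S), back[:, x]] + logm[:, x]
    ridge = np.empty(X, dtype=int)
    ridge[-1] = int(np.argmax(score))
    for x in range(X - 1, 0, -1):                  # backtrack
        ridge[x - 1] = back[ridge[x], x]
    return ridge

# Synthetic |CWT| map with a known sinusoidal ridge plus noise.
rng = np.random.default_rng(1)
S, X = 30, 120
s_true = 14 + 6 * np.sin(2 * np.pi * np.arange(X) / X)
mag = np.exp(-0.25 * (np.arange(S)[:, None] - s_true[None, :]) ** 2)
mag += 0.1 * rng.random((S, X))
ridge = dp_ridge(mag)
```

    The penalty term is what keeps the ridge continuous through noisy columns where the raw per-column maximum would jump to a spurious scale.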

  7. Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy

    NASA Astrophysics Data System (ADS)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed railway and heavy-haul transport, the locomotive and traction power system has become the main harmonic source of China's power grid. In response to this phenomenon, the system's power quality issues need timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of the wavelet transform, singular value decomposition and information entropy theory, which combines the unique advantages of the three in signal processing: the time-frequency local characteristics of the wavelet transform, the exploration of the basic modal characteristics of data by singular value decomposition, and the quantification of feature data by information entropy. Based on the theory of singular value decomposition, the wavelet coefficient matrix after the wavelet transform is decomposed into a series of singular values that can reflect the basic characteristics of the original coefficient matrix. Then the statistical properties of information entropy are used to analyze the uncertainty of the singular value set, so as to give a definite measurement of the complexity of the original signal. It can be said that wavelet entropy has a good application prospect in fault detection, classification and protection. The MATLAB simulation shows that the use of wavelet singular entropy for harmonic analysis of the locomotive and traction power system is effective.
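
    Wavelet singular entropy can be sketched as: build a scale-by-time coefficient matrix, take its singular values, and compute the Shannon entropy of their normalized distribution. A Haar band matrix and the harmonic mix below are assumptions made for brevity:

```python
import numpy as np

def haar_bands(x, levels=6):
    """Rows = Haar detail bands (upsampled back to full length) + final approx."""
    rows, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        rows.append(np.repeat(d, len(x) // len(d)))
    rows.append(np.repeat(a, len(x) // len(a)))
    return np.vstack(rows)

def wavelet_singular_entropy(x, levels=6):
    """Shannon entropy of the normalized singular values of the band matrix."""
    s = np.linalg.svd(haar_bands(x, levels), compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

n = np.arange(1024)
clean = np.sin(2 * np.pi * 12 * n / 1024)                    # fundamental only
distorted = (clean
             + 0.6 * np.sin(2 * np.pi * 36 * n / 1024)       # 3rd harmonic
             + 0.4 * np.sin(2 * np.pi * 84 * n / 1024))      # 7th harmonic
wse_clean = wavelet_singular_entropy(clean)
wse_distorted = wavelet_singular_entropy(distorted)
```

    A pure fundamental yields a nearly rank-one band matrix and low entropy; harmonic pollution spreads energy over several independent singular directions and raises the entropy, which is the complexity measure the abstract describes.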

  8. Experimental study on the crack detection with optimized spatial wavelet analysis and windowing

    NASA Astrophysics Data System (ADS)

    Ghanbari Mardasi, Amir; Wu, Nan; Wu, Christine

    2018-05-01

    In this paper, highly sensitive crack detection is experimentally realized and presented on a beam under a certain deflection by optimizing spatial wavelet analysis. Due to the presence of a crack in the beam structure, a perturbation/slope singularity is induced in the deflection profile. The spatial wavelet transform acts as a magnifier, amplifying the small perturbation signal at the crack location to detect and localize the damage. The profile of a deflected aluminum cantilever beam is obtained for both intact and cracked beams by a high-resolution laser profile sensor. The Gabor wavelet transform is applied to the difference of the intact and cracked data sets. To improve detection sensitivity, the scale factor of the spatial wavelet transform and the number of transform repetitions are optimized. Furthermore, to detect possible cracks close to the measurement boundaries, the wavelet transform edge effect, which induces large wavelet coefficient values around the measurement boundaries, is efficiently reduced by introducing different windowing functions. The results show that a small crack with a depth of less than 10% of the beam height can be localized with a clear perturbation. Moreover, the perturbation caused by a crack 0.85 mm away from one end of the measurement range, which is covered by the wavelet transform edge effect, emerges by applying proper window functions.
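    The idea of amplifying a slope singularity while windowing away the edge effect can be sketched as follows. This is a toy example under stated assumptions: a Mexican-hat wavelet stands in for the Gabor wavelet used in the paper, a Hann window for the windowing functions, noise is omitted, and all sizes and amplitudes are hypothetical.

```python
import numpy as np

def mexican_hat(scale, half=None):
    """Sampled Mexican-hat wavelet at the given scale."""
    half = half or int(8 * scale)
    t = np.arange(-half, half + 1) / scale
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

# Difference of cracked and intact deflection profiles: a tiny slope
# change (kink) at the crack location.
n, crack = 1000, 600
x = np.arange(n, dtype=float)
diff = 1e-3 * np.maximum(0.0, x - crack)

window = np.hanning(n)                  # suppresses the wavelet edge effect
coef = np.convolve(window * diff, mexican_hat(10), mode='same')
loc = int(np.argmax(np.abs(coef)))      # estimated crack location
```

    The wavelet responds to the curvature singularity, so the coefficient magnitude peaks near the kink even though the perturbation itself is small; without the window, boundary truncation would create competing large coefficients at the ends of the measurement range.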

  9. A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.

    PubMed

    Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W

    2005-01-01

    We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.

  10. Image Retrieval using Integrated Features of Binary Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Agarwal, Megha; Maheshwari, R. P.

    2011-12-01

    In this paper a new approach for image retrieval is proposed based on the binary wavelet transform. The new approach facilitates feature calculation by integrating histogram and correlogram features extracted from binary wavelet subbands. Experiments are performed to evaluate and compare the performance of the proposed method against the published literature. It is verified that the average precision and average recall of the proposed method (69.19%, 41.78%) are significantly improved compared to the optimal quantized wavelet correlogram (OQWC) [6] (64.3%, 38.00%) and the Gabor wavelet correlogram (GWC) [10] (64.1%, 40.6%). All experiments are performed on the Corel 1000 natural image database [20].

  11. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and of reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  12. Watermarking on 3D mesh based on spherical wavelet transform.

    PubMed

    Jin, Jian-Qiu; Dai, Min-Ya; Bao, Hu-Jun; Peng, Qun-Sheng

    2004-03-01

    In this paper we propose a robust watermarking algorithm for 3D meshes based on the spherical wavelet transform. Our basic idea is to decompose the original mesh into a series of details at different scales using the spherical wavelet transform; the watermark is then embedded into the different levels of detail. The embedding process consists of: global sphere parameterization, spherical uniform sampling, the forward spherical wavelet transform, watermark embedding, the inverse spherical wavelet transform, and finally resampling of the watermarked mesh to recover the topological connectivity of the original model. Experiments showed that our algorithm can improve the capacity of the watermark and the robustness of the watermarking against attacks.

  13. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H(sup 2)(sub 0)(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for initial value boundary problems of nonlinear PDE's. Then, we test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE's.
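    As a minimal stand-in for the cubic spline wavelet DWT described above (which is not reproduced here), the following sketch shows the same samples-to-coefficients mapping with the Haar wavelet; each level halves the data, so this version costs O(N).

```python
import numpy as np

def haar_dwt(samples, levels):
    """Map samples (length divisible by 2**levels) to Haar wavelet
    expansion coefficients: one smooth vector plus per-level details."""
    smooth = np.asarray(samples, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = smooth[0::2], smooth[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # detail coefficients
        smooth = (even + odd) / np.sqrt(2.0)          # running average
    return smooth, details

def haar_idwt(smooth, details):
    """Inverse transform: rebuild the samples level by level."""
    for d in reversed(details):
        even = (smooth + d) / np.sqrt(2.0)
        odd = (smooth - d) / np.sqrt(2.0)
        smooth = np.empty(2 * len(d))
        smooth[0::2], smooth[1::2] = even, odd
    return smooth
```

    The forward/inverse pair is orthonormal, so reconstruction is exact up to floating-point error.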

  14. Continuous time wavelet entropy of auditory evoked potentials.

    PubMed

    Cek, M Emre; Ozgoren, Murat; Savaci, F Acar

    2010-01-01

    In this paper, the continuous time wavelet entropy (CTWE) of auditory evoked potentials (AEP) has been characterized by evaluating the relative wavelet energies (RWE) in specified EEG frequency bands. Thus, the rapid variations of CTWE due to auditory stimulation could be detected in the post-stimulus time interval. This approach removes the risk of missing information hidden in short time intervals. The discrete time and continuous time wavelet entropy variations were compared on non-target and target AEP data. It was observed that CTWE can be an alternative method to analyze entropy as a function of time. Copyright © 2009 Elsevier Ltd. All rights reserved.
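    The relative-wavelet-energy entropy described above can be sketched with a complex Morlet filter bank: at each sample, the Shannon entropy of the relative energies across scales. The scale set and the plain (non-admissible) Morlet kernel are assumptions for illustration, not the paper's exact bands.

```python
import numpy as np

def morlet(scale, w0=6.0):
    """Sampled complex Morlet wavelet at the given scale."""
    half = int(5 * scale)
    t = np.arange(-half, half + 1) / scale
    return np.exp(1j * w0 * t) * np.exp(-t**2 / 2.0)

def cwt_entropy(signal, scales):
    """Continuous-time wavelet entropy: at each sample, the Shannon
    entropy of the relative wavelet energies across scales."""
    power = np.vstack([np.abs(np.convolve(signal, morlet(s), mode='same'))**2
                       for s in scales])
    p = power / power.sum(axis=0, keepdims=True)
    with np.errstate(divide='ignore', invalid='ignore'):
        h = -(p * np.log(p))
    return np.nansum(h, axis=0)          # 0*log(0) treated as 0
```

    A narrowband tone keeps its energy in one scale band (entropy near zero), whereas broadband noise spreads energy over the bands and yields a higher entropy time course.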

  15. A lung sound classification system based on the rational dilation wavelet transform.

    PubMed

    Ulukaya, Sezer; Serbes, Gorkem; Sen, Ipek; Kahya, Yasemin P

    2016-08-01

    In this work, a wavelet-based classification system that aims to discriminate crackle, normal and wheeze lung sounds is presented. While previous works on this problem use constant low-Q-factor wavelets, which have limited frequency resolution and cannot cope with oscillatory signals, the proposed system employs the Rational Dilation Wavelet Transform, whose Q-factor can be tuned. The proposed system yields accuracies of 95% for crackle, 97% for wheeze, 93.50% for normal and 95.17% over all sound signal types using the energy feature subset, and the proposed approach is superior to conventional low-Q-factor wavelet analysis.

  16. A human auditory tuning curves matched wavelet function.

    PubMed

    Abolhassani, Mohammad D; Salimpour, Yousef

    2008-01-01

    This paper proposes a new quantitative approach to the problem of matching a wavelet function to human auditory tuning curves. The auditory filter shapes were derived from psychophysical measurements in normal-hearing listeners using a variant of the notched-noise method for brief signals in forward and simultaneous masking. These filters were used as templates for designing a wavelet function that best matches a tuning curve. The scaling function was calculated from the matched wavelet function, and from these functions low-pass and high-pass filters were derived for the implementation of a filter bank. In this way, new wavelet families were derived.

  17. Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.

    PubMed

    Punys, Vytenis; Maknickas, Ramunas

    2011-01-01

    Big virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification based on image analysis might be faster if performed on the compressed data (approximately 20 times smaller than the original amount), which represent the coefficients of the wavelet transform. An analysis of possible edge detection without the inverse wavelet transform is presented in the paper. Two edge detection methods suitable for JPEG2000 bi-orthogonal wavelets are proposed. The methods are adjusted according to the calculated parameters of a sigmoid edge model. The results of the model analysis indicate which method is more suitable for a given bi-orthogonal wavelet.

  18. A simulation study for determination of refractive index dispersion of dielectric film from reflectance spectrum by using Paul wavelet

    NASA Astrophysics Data System (ADS)

    Tiryaki, Erhan; Coşkun, Emre; Kocahan, Özlem; Özder, Serhat

    2017-02-01

    In this work, the Continuous Wavelet Transform (CWT) with the Paul wavelet was developed as a tool for determining the refractive index dispersion of a dielectric film from the reflectance spectrum of the film. The reflectance spectrum was generated theoretically over the wavenumber range 0.8333-3.3333 μm⁻¹ and analyzed with the presented method. The refractive index values obtained at various resolutions of the Paul wavelet were compared with the input values, and the importance of the tunable resolution of the Paul wavelet is discussed briefly. The noise immunity and uncertainty of the method were also studied.

  19. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is on par with SPIHT. Furthermore, it is observed that SLCCA generally performs best on images with a large proportion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
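    The within-subband clustering step can be illustrated by labeling connected components of the significance map (coefficients whose magnitude reaches a threshold). This sketch assumes 4-connectivity and a simple magnitude threshold, and omits the cross-subband significance links and bit-plane coding of the actual codec.

```python
import numpy as np
from collections import deque

def significant_components(subband, threshold):
    """Label 4-connected components of the significance map
    |coefficient| >= threshold (within-subband clustering)."""
    sig = np.abs(subband) >= threshold
    labels = np.zeros(sig.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(sig)):
        if labels[i, j]:
            continue                      # already swept into a component
        current += 1
        queue = deque([(i, j)])
        labels[i, j] = current
        while queue:                      # breadth-first flood fill
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < sig.shape[0] and 0 <= cc < sig.shape[1]
                        and sig[rr, cc] and not labels[rr, cc]):
                    labels[rr, cc] = current
                    queue.append((rr, cc))
    return labels, current
```

    Each labeled cluster would then be linked to related clusters in other subbands and its coefficients coded by magnitude.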

  20. Characteristic Analysis of Air-gun Source Wavelet based on the Vertical Cable Data

    NASA Astrophysics Data System (ADS)

    Xing, L.

    2016-12-01

    Air guns are important sources for marine seismic exploration. The far-field wavelet of an air gun array, as a necessary input for pre-stack processing and source modeling, plays an important role in marine seismic data processing and interpretation. When an air gun fires, it generates a series of air bubbles. Similar to onshore seismic exploration, the water behaves as a plastic fluid near the bubble; the farther the receiver is located from the air gun, the more stable and more accurately represented the wavelet will be. In practice, hydrophones should be placed more than 100 m from the air gun; however, traditional seismic cables cannot meet this requirement. Vertical cables, on the other hand, provide a viable solution to this problem. This study uses a vertical cable to receive wavelets from 38 air guns, with data collected offshore southeast Qiong, where the water depth is over 1000 m. The wavelets measured using this technique coincide very well with simulated wavelets and can therefore represent the real shape of the wavelets. This experiment fills a technology gap in China.

  1. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the wavelet basis employed in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients in all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
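    Per-region predictor selection by Lagrangian rate-distortion optimization reduces to an argmin over candidate predictors. A minimal sketch, assuming the distortion D(p) and rate R(p) of each candidate predictor have already been measured for the region:

```python
import numpy as np

def pick_predictor(distortions, rates, lam):
    """Rate-distortion optimal choice for one region:
    argmin_p D(p) + lambda * R(p)."""
    cost = np.asarray(distortions) + lam * np.asarray(rates)
    return int(np.argmin(cost))
```

    Sweeping the Lagrange multiplier trades rate against distortion: lambda = 0 picks the lowest-distortion predictor regardless of the side-information cost, while larger lambda favors cheaper-to-signal predictors.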

  2. A Wavelet Model for Vocalic Speech Coarticulation

    DTIC Science & Technology

    1994-10-01

    control vowel's signal as the mother wavelet. A practical experiment is conducted to evaluate the coarticulation channel using samples of real speech...transformation from a control speech state (input) to an effected speech state (output). Specifically, a vowel produced in isolation is transformed into an...the wavelet transform of the effected vowel's signal, using the control vowel's signal as the mother wavelet. A practical experiment is conducted to

  3. Signal processing method and system for noise removal and signal extraction

    DOEpatents

    Fu, Chi Yung; Petrich, Loren

    2009-04-14

    A signal processing method and system combining smooth level wavelet pre-processing together with artificial neural networks all in the wavelet domain for signal denoising and extraction. Upon receiving a signal corrupted with noise, an n-level decomposition of the signal is performed using a discrete wavelet transform to produce a smooth component and a rough component for each decomposition level. The nth-level smooth component is then inputted into a corresponding neural network pre-trained to filter out noise in that component by pattern recognition in the wavelet domain. Additional rough components, beginning at the highest level, may also be retained and inputted into corresponding neural networks pre-trained to filter out noise in those components also by pattern recognition in the wavelet domain. In any case, an inverse discrete wavelet transform is performed on the combined output from all the neural networks to recover a clean signal back in the time domain.
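    A hedged sketch of the pipeline (decompose, filter the components in the wavelet domain, inverse transform). Here a Haar transform and a soft threshold with a median-based noise estimate stand in for the patent's pre-trained neural networks; the level count and threshold factor are hypothetical.

```python
import numpy as np

def haar_forward(x, levels):
    """n-level decomposition into one smooth and n rough components."""
    smooth, rough = np.asarray(x, dtype=float), []
    for _ in range(levels):
        e, o = smooth[0::2], smooth[1::2]
        rough.append((e - o) / np.sqrt(2.0))
        smooth = (e + o) / np.sqrt(2.0)
    return smooth, rough

def haar_inverse(smooth, rough):
    """Inverse discrete wavelet transform back to the time domain."""
    for d in reversed(rough):
        e, o = (smooth + d) / np.sqrt(2.0), (smooth - d) / np.sqrt(2.0)
        smooth = np.empty(2 * len(d))
        smooth[0::2], smooth[1::2] = e, o
    return smooth

def denoise(noisy, levels=4, k=3.0):
    """Filter each rough component in the wavelet domain (soft threshold
    standing in for a pre-trained network), then invert."""
    smooth, rough = haar_forward(noisy, levels)
    filtered = []
    for d in rough:
        t = k * np.median(np.abs(d)) / 0.6745   # robust noise estimate
        filtered.append(np.sign(d) * np.maximum(np.abs(d) - t, 0.0))
    return haar_inverse(smooth, filtered)
```

    For a smooth signal plus white noise, most of the noise energy lives in the rough components, so filtering them and inverting reduces the overall error.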

  4. A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum.

    PubMed

    Liu, Pan; Deng, Xiaoyan; Tang, Xin; Shen, Shijian

    2017-05-01

    This paper presents a wavelet-based Gaussian method (WGM) for peak intensity estimation in energy dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of a Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, which has to be known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters based on the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested on simulated and measured spectra from an energy X-ray spectrometer and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity in EDXRF free from background information, and can also effectively distinguish overlapping peaks in an EDXRF spectrum.

  5. Use of the Morlet mother wavelet in the frequency-scale domain decomposition technique for the modal identification of ambient vibration responses

    NASA Astrophysics Data System (ADS)

    Le, Thien-Phu

    2017-10-01

    The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.

  6. The massive soft anomalous dimension matrix at two loops

    NASA Astrophysics Data System (ADS)

    Mitov, Alexander; Sterman, George; Sung, Ilmo

    2009-05-01

    We study two-loop anomalous dimension matrices in QCD and related gauge theories for products of Wilson lines coupled at a point. We verify by an analysis in Euclidean space that the contributions to these matrices from diagrams that link three massive Wilson lines do not vanish in general. We show, however, that for two-to-two processes the two-loop anomalous dimension matrix is diagonal in the same color-exchange basis as the one-loop matrix for arbitrary masses at absolute threshold and for scattering at 90 degrees in the center of mass. This result is important for applications of threshold resummation in heavy quark production.

  7. Impact of view reduction in CT on radiation dose for patients

    NASA Astrophysics Data System (ADS)

    Parcero, E.; Flores, L.; Sánchez, M. G.; Vidal, V.; Verdú, G.

    2017-08-01

    Iterative methods have become a hot topic of research in computed tomography (CT) imaging because of their capacity to resolve the reconstruction problem from a limited number of projections. This allows the reduction of radiation exposure on patients during the data acquisition. The reconstruction time and the high radiation dose imposed on patients are the two major drawbacks in CT. To solve them effectively we adapted the method for sparse linear equations and sparse least squares (LSQR) with soft threshold filtering (STF) and the fast iterative shrinkage-thresholding algorithm (FISTA) to computed tomography reconstruction. The feasibility of the proposed methods is demonstrated numerically.
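    The FISTA variant with soft thresholding mentioned above is, in its generic form, the accelerated proximal gradient method for an l1-regularized least-squares problem. A minimal dense-matrix sketch (the system matrix and tuning values below are placeholders for illustration, not the authors' CT setup):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator: the proximal map of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft(y - grad / L, lam / L)  # gradient step + shrinkage
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```

    With noiseless data from a Gaussian sensing matrix and a sparse unknown, the iteration recovers the sparse solution even when the system is underdetermined, which is the property exploited for few-view reconstruction.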

  8. MO-F-CAMPUS-J-05: Toward MRI-Only Radiotherapy: Novel Tissue Segmentation and Pseudo-CT Generation Techniques Based On T1 MRI Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aouadi, S; McGarry, M; Hammoud, R

    Purpose: To develop and validate a 4-class tissue segmentation approach (air cavities, background, bone and soft-tissue) on T1-weighted brain MRI and to create a pseudo-CT for MRI-only radiation therapy verification. Methods: Contrast-enhanced T1-weighted fast-spin-echo sequences (TR = 756 ms, TE = 7.152 ms), acquired on a 1.5T GE MRI-Simulator, are used. MRIs are first pre-processed to correct for non-uniformity using the non-parametric non-uniformity intensity normalization algorithm. Subsequently, a logarithmic inverse scaling log(1/image) is applied prior to segmentation to better differentiate bone and air from soft-tissue. Finally, the following method is employed to classify intensities into air cavities, background, bone and soft-tissue: thresholded region growing with seed points in the image corners is applied to get a mask of Air+Bone+Background. The background is then separated by the scan-line filling algorithm. The air mask is extracted by morphological opening followed by post-processing based on knowledge of air region geometry. The remaining rough bone pre-segmentation is refined by applying 3D geodesic active contours; the bone segmentation evolves under the sum of internal forces from the contour geometry and an external force derived from the image gradient magnitude. The pseudo-CT is obtained by assigning −1000 HU to air and background voxels, performing a linear mapping of soft-tissue MR intensities into [−400 HU, 200 HU] and an inverse linear mapping of bone MR intensities into [200 HU, 1000 HU]. Results: Three brain patients having registered MRI and CT are used for validation. CT intensity classification into 4 classes is performed by thresholding. Dice and misclassification errors are quantified. Correct classifications for soft-tissue, bone, and air are respectively 89.67%, 77.8%, and 64.5%. Dice indices are acceptable for bone (0.74) and soft-tissue (0.91) but low for air regions (0.48). The pseudo-CT produces DRRs with acceptable clinical visual agreement to CT-based DRRs. Conclusion: The proposed approach makes it possible to use T1-weighted MRI to generate an accurate pseudo-CT from 4-class segmentation.

  9. A new wavelet transform to sparsely represent cortical current densities for EEG/MEG inverse problems.

    PubMed

    Liao, Ke; Zhu, Min; Ding, Lei

    2013-08-01

    The present study investigated the use of transform sparseness of the cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structures, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built with three EEG/MEG software packages and their structural compressibility was evaluated and compared using the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structures is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. Exploration of EEG features of Alzheimer's disease using continuous wavelet transform.

    PubMed

    Ghorbanian, Parham; Devilbiss, David M; Hess, Terry; Bernstein, Allan; Simon, Adam J; Ashrafiuon, Hashem

    2015-09-01

    We have developed a novel approach to elucidate several discriminating EEG features of Alzheimer's disease. The approach is based on the use of a variety of continuous wavelet transforms, pairwise statistical tests with multiple comparison correction, and several decision tree algorithms, in order to choose the most prominent EEG features from a single sensor. A pilot study was conducted to record EEG signals from Alzheimer's disease (AD) patients and healthy age-matched control (CTL) subjects using a single dry electrode device during several eyes-closed (EC) and eyes-open (EO) resting conditions. We computed the power spectrum distribution properties and wavelet and sample entropy of the wavelet coefficients time series at scale ranges approximately corresponding to the major brain frequency bands. A predictive index was developed using the results from statistical tests and decision tree algorithms to identify the most reliable significant features of the AD patients when compared to healthy controls. The three most dominant features were identified as larger absolute mean power and larger standard deviation of the wavelet scales corresponding to 4-8 Hz (θ) during EO and lower wavelet entropy of the wavelet scales corresponding to 8-12 Hz (α) during EC, respectively. The fourth reliable set of distinguishing features of AD patients was lower relative power of the wavelet scales corresponding to 12-30 Hz (β) followed by lower skewness of the wavelet scales corresponding to 2-4 Hz (upper δ), both during EO. In general, the results indicate slowing and lower complexity of EEG signal in AD patients using a very easy-to-use and convenient single dry electrode device.

  11. Fuzzy entropy thresholding and multi-scale morphological approach for microscopic image enhancement

    NASA Astrophysics Data System (ADS)

    Zhou, Jiancan; Li, Yuexiang; Shen, Linlin

    2017-07-01

    Microscopic images provide much useful information for modern diagnosis and biological research. However, due to unstable lighting conditions during image capture, two main problems, i.e., high noise levels and low image contrast, occur in the generated cell images. In this paper, a simple but efficient enhancement framework is proposed to address these problems. The framework removes image noise using a hybrid method based on the wavelet transform and fuzzy entropy, and enhances the image contrast with an adaptive morphological approach. Experiments on a real cell dataset were conducted to assess the performance of the proposed framework. The experimental results demonstrate that the proposed enhancement framework increases the cell tracking accuracy to an average of 74.49%, which outperforms the benchmark algorithm (46.18%).

  12. Parameters effective on estimating a nonstationary mixed-phase wavelet using cumulant matching approach

    NASA Astrophysics Data System (ADS)

    Vosoughi, Ehsan; Javaherian, Abdolrahim

    2018-01-01

    Seismic inversion is a process performed to remove the effects of the propagated wavelet in order to recover the acoustic impedance. To obtain valid velocity and density values for the subsurface layers through the inversion process, it is essential to perform reliable wavelet estimation, for example with the cumulant matching approach. For this purpose, the seismic data were windowed in this work in such a way that two consecutive windows were only one sample apart. Also, we did not assume a fixed wavelet for any window and let the phase of each wavelet rotate at each sample in the window. Comparing the fourth-order cumulant of the whitened trace and the fourth-order moment of the all-pass operator in each window generated a cost function to be minimized with a non-linear optimization method. In this regard, the parameters affecting the estimation of nonstationary mixed-phase wavelets were tested on the created nonstationary seismic trace at 0.82 s and 1.6 s, and the consequences of each parameter for the wavelets estimated at the two times were compared. The parameters studied in this work are window length, taper type, number of iterations, signal-to-noise ratio, ratio of bandwidth to central frequency, and Q factor. The results show that, applying the optimum values of the effective parameters, the average correlation of the estimated mixed-phase wavelets with the original ones is about 87%. Moreover, the effectiveness of the proposed approach was examined on a synthetic nonstationary seismic section with variable Q factor values along the time and offset axes. The cumulant matching method was then applied to a cross line of the migrated data from a 3D data set of an oilfield in the Persian Gulf, and the effect of a wrong Q estimate on the estimated mixed-phase wavelet was considered on the real data set.
It is concluded that the accuracy of the estimated wavelet relies on the estimated Q, and an error of up to 10% in the estimated value of Q is acceptable. Finally, an 88% correlation was found between the estimated mixed-phase wavelets and the original ones for three horizons. The estimated wavelets were applied to the data and the result of the deconvolution process was presented.

  13. Effect of Age and Severity of Facial Palsy on Taste Thresholds in Bell's Palsy Patients

    PubMed Central

    Park, Jung Min; Kim, Myung Gu; Jung, Junyang; Kim, Sung Su; Jung, A Ra; Kim, Sang Hoon

    2017-01-01

    Background and Objectives To investigate whether taste thresholds, as determined by electrogustometry (EGM) and chemical taste tests, differ by age and the severity of facial palsy in patients with Bell's palsy. Subjects and Methods This study included 29 patients diagnosed with Bell's palsy between January 2014 and May 2015 in our hospital. Patients were divided into groups by age and by severity of facial palsy, as determined by the House-Brackmann scale, and their taste thresholds were assessed by EGM and chemical taste tests. Results EGM showed that taste thresholds at four locations on the tongue and one location on the central soft palate, 1 cm from the palatine uvula, were significantly higher in Bell's palsy patients than in controls (p<0.05). In contrast, chemical taste tests showed no significant differences in taste thresholds between the two groups (p>0.05). The severity of facial palsy did not affect taste thresholds, as determined by both EGM and chemical taste tests (p>0.05). The overall mean electrical taste thresholds on EGM were higher in younger Bell's palsy patients than in healthy subjects, with the difference at the back-right area of the tongue being significant (p<0.05). In older individuals, however, no significant differences in taste thresholds were observed between Bell's palsy patients and healthy subjects (p>0.05). Conclusions Electrical taste thresholds were higher in Bell's palsy patients than in controls. These differences were observed in younger, but not in older, individuals. PMID:28417103

  14. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

Wireless devices can be identified by a fingerprint extracted from their transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition. Statistics of each wavelet coefficient vector are used to construct the fingerprint. In addition, the relationship between the wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
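
    The record names only "the statistic value of each wavelet coefficient vector" as the feature set. A minimal numpy sketch of that idea follows; the Haar wavelet, the four-level depth, and the choice of mean and standard deviation as the per-level statistics are illustrative assumptions, not details from the paper.

```python
import numpy as np

def haar_dwt_levels(x, levels):
    """Multi-level Haar DWT: return the detail coefficient vector at each level."""
    h = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        h = h[: len(h) // 2 * 2]                   # ensure even length
        approx = (h[0::2] + h[1::2]) / np.sqrt(2)
        detail = (h[0::2] - h[1::2]) / np.sqrt(2)
        details.append(detail)
        h = approx
    return details

def phase_fingerprint(phase, levels=4):
    """Fingerprint = (mean, std) of each level's detail coefficients."""
    feats = []
    for d in haar_dwt_levels(phase, levels):
        feats.extend([d.mean(), d.std()])
    return np.array(feats)

rng = np.random.default_rng(0)
phase = np.cumsum(rng.normal(scale=0.01, size=1024))  # toy phase-noise track
fp = phase_fingerprint(phase, levels=4)               # 8-element feature vector
```

    Device classification would then reduce to comparing such feature vectors, e.g. by nearest-neighbour distance.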

  15. The generalized Morse wavelet method to determine refractive index dispersion of dielectric films

    NASA Astrophysics Data System (ADS)

    Kocahan, Özlem; Özcan, Seçkin; Coşkun, Emre; Özder, Serhat

    2017-04-01

The continuous wavelet transform (CWT) method is a useful tool for determining the refractive index dispersion of dielectric films. Mother wavelet selection is an important factor in the accuracy of the results when using the CWT. In this study, the generalized Morse wavelet (GMW) was proposed as the mother wavelet because it has two degrees of freedom. Simulation studies, based on error calculations and comparisons of Cauchy coefficients, were presented, and the CWT method with the GMW was also tested on a noisy signal. The experimental validity of the method was checked using a D263 T Schott glass of 100 μm thickness, and the results were compared with the catalog value.
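
    The record gives no formulas. A common frequency-domain form of the generalized Morse wavelet is ψ̂(ω) ∝ ω^β e^(−ω^γ) for ω > 0, whose two parameters β and γ are the "two degrees of freedom" mentioned above; the sketch below builds an FFT-based CWT on that form (normalization omitted, parameter values arbitrary):

```python
import numpy as np

def morse_wavelet_freq(omega, beta=3.0, gamma=3.0):
    """Generalized Morse wavelet in the frequency domain (analytic: zero for omega <= 0)."""
    psi = np.zeros_like(omega)
    pos = omega > 0
    psi[pos] = omega[pos] ** beta * np.exp(-(omega[pos] ** gamma))
    return psi

def cwt_morse(x, scales, beta=3.0, gamma=3.0):
    """CWT via the FFT: multiply the signal spectrum by the scaled wavelet spectrum."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n)     # angular frequency per sample
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        W[i] = np.fft.ifft(X * morse_wavelet_freq(s * omega, beta, gamma))
    return W

t = np.arange(512)
x = np.cos(2 * np.pi * t / 32)                # tone with period 32 samples
scales = np.array([2.0, 5.0, 10.0, 20.0])
W = cwt_morse(x, scales)
power = np.abs(W).mean(axis=1)                # the scale best matched to the tone dominates
```

    For β = γ = 3 the wavelet peaks near unit (scaled) frequency, so the tone at ω₀ = 2π/32 is picked out by the scale closest to 1/ω₀ ≈ 5.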

  16. The minimal SUSY B - L model: simultaneous Wilson lines and string thresholds

    DOE PAGES

    Deen, Rehan; Ovrut, Burt A.; Purves, Austin

    2016-07-08

In previous work, we presented a statistical scan over the soft supersymmetry breaking parameters of the minimal SUSY B - L model. For specificity of calculation, unification of the gauge parameters was enforced by allowing the two Z 3×Z 3 Wilson lines to have mass scales separated by approximately an order of magnitude. This introduced an additional "left-right" sector below the unification scale. In this paper, for three important reasons, we modify our previous analysis by demanding that the mass scales of the two Wilson lines be simultaneous and equal to an "average unification" mass. The present analysis is 1) more "natural" than the previous calculations, which were only valid in a very specific region of the Calabi-Yau moduli space, 2) the theory is conceptually simpler in that the left-right sector has been removed and 3) in the present analysis the lack of gauge unification is due to threshold effects — particularly heavy string thresholds, which we calculate statistically in detail. As in our previous work, the theory is renormalization group evolved from the average unification scale to the electroweak scale — being subjected, sequentially, to the requirement of radiative B - L and electroweak symmetry breaking, the present experimental lower bounds on the B - L vector boson and sparticle masses, as well as the lightest neutral Higgs mass of ~125 GeV. The subspace of soft supersymmetry breaking masses that satisfies all such constraints is presented and shown to be substantial.

  17. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
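
    The essence of the scheme above is energy compaction by the wavelet packet transform, followed by embedded (largest-first) coding of the coefficients. A toy numpy illustration of the compaction step with a full Haar packet tree (Haar and a 1-D signal are simplifications; the paper works with 2-D textures):

```python
import numpy as np

def haar_step(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_packet(x, depth):
    """Full Haar wavelet packet decomposition: split every band at every level."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nxt = []
        for b in bands:
            a, d = haar_step(b)
            nxt.extend([a, d])
        bands = nxt
    return np.concatenate(bands)

n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 64)           # smooth stand-in for a texture row
coeffs = wavelet_packet(signal, depth=4)

# Energy compaction: fraction of total energy held by the 16 largest coefficients.
energy = np.sort(coeffs ** 2)[::-1]
fraction = energy[:16].sum() / energy.sum()
```

    Because each Haar step is orthonormal, total energy is preserved exactly; an embedded coder then transmits coefficients in decreasing-magnitude order, so most of the energy arrives in the first few symbols.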

  18. Wavelet Decomposition for Discrete Probability Maps

    DTIC Science & Technology

    2007-08-01

Only fragments of the report's text were indexed: "…using other wavelet basis functions, such as those mentioned in Section 7" (DSTO–TN–0760); reference fragments including P. M. Bentley and J. T. E. McDonnell, "Wavelet…", 1995, and E. J. Stollnitz, T. D. DeRose, and D. H. Salesin, "Wavelets for computer graphics: a primer," Computer Graphics and…; and a biographical note that part of the author's degree in Computer Modelling, completed in 2006 at the University of South Australia, Mawson Lakes, was undertaken at the University of Twente.

  19. Evidence for asymmetric inertial instability in the FIRE satellite dataset

    NASA Technical Reports Server (NTRS)

    Stevens, Duane E.; Ciesielski, Paul E.

    1990-01-01

One of the main goals of the First ISCCP Regional Experiment (FIRE) is obtaining the basic knowledge needed to better interpret satellite images of clouds on regional and smaller scales. An analysis of a mesoscale circulation phenomenon as observed in hourly FIRE satellite images is presented. Specifically, the phenomenon of interest appeared on satellite images as a group of propagating cloud wavelets located on the edge of a cirrus canopy on the anticyclonic side of a strong, upper-level subtropical jet. These wavelets, which were observed between 1300 and 2200 GMT on 25 February 1987, are seen most distinctly in the GOES-West infrared satellite picture at 1800 GMT. The purpose is to document that these wavelets were a manifestation of asymmetric inertial instability. During their lifetime, the wavelets were located over the North American synoptic sounding network, so that the meteorological conditions surrounding their occurrence could be examined. A particular emphasis of the analysis is on the jet streak in which the wavelets were embedded. The characteristics of the wavelets are examined using hourly satellite imagery. The hypothesis that inertial instability is the dynamical mechanism responsible for generating the observed cloud wavelets was examined. To further substantiate this contention, the observed characteristics of the wavelets are compared to, and found to be consistent with, a theoretical model of inertial instability by Stevens and Ciesielski.

  20. Most suitable mother wavelet for the analysis of fractal properties of stride interval time series via the average wavelet coefficient

    PubMed Central

    Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan

    2016-01-01

Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of related diseases. A previous study showed that the average wavelet method provides the most accurate estimates of this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that span a range of physiologically conceivable fractal signals. Five candidate mother wavelets were chosen due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet, without the need to account for signal length. PMID:27960102
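
    The average wavelet method estimates the scaling exponent from the slope of the average absolute wavelet coefficient against scale on a log-log plot. A simplified numpy sketch using Haar detail coefficients follows (the paper evaluates sixteen mother wavelets, recommending symlet 2; Haar is used here only to keep the sketch self-contained):

```python
import numpy as np

def haar_details(x, levels):
    """Per-level Haar detail coefficients of a 1-D series."""
    h = np.asarray(x, dtype=float)
    out = []
    for _ in range(levels):
        a = (h[0::2] + h[1::2]) / np.sqrt(2)
        d = (h[0::2] - h[1::2]) / np.sqrt(2)
        out.append(d)
        h = a
    return out

def awc_scaling_exponent(x, levels=6):
    """Average-wavelet-coefficient method: slope of log2(mean |detail|) vs level."""
    means = [np.abs(d).mean() for d in haar_details(x, levels)]
    j = np.arange(1, levels + 1)
    return np.polyfit(j, np.log2(means), 1)[0]

rng = np.random.default_rng(42)
bm = np.cumsum(rng.normal(size=4096))   # Brownian motion, Hurst exponent H = 0.5
slope = awc_scaling_exponent(bm)
```

    For fractional Brownian motion this slope is approximately H + 1/2, so the Brownian test signal (H = 0.5) should yield a slope near 1.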

  1. Hierarchical analysis of spatial pattern and processes of Douglas-fir forests. Ph.D. Thesis, 10 Sep. 1991 Abstract Only

    NASA Technical Reports Server (NTRS)

    Bradshaw, G. A.

    1995-01-01

There has been increased interest in the quantification of pattern in ecological systems in recent years. This interest is motivated by the desire to construct valid models which extend across many scales. Spatial methods must quantify pattern, discriminate types of pattern, and relate hierarchical phenomena across scales. Wavelet analysis is introduced as a method to identify spatial structure in ecological transect data. The main advantage of the wavelet transform over other methods is its ability to preserve and display hierarchical information while allowing for pattern decomposition. Two applications of wavelet analysis are illustrated, as a means to: (1) quantify known spatial patterns in Douglas-fir forests at several scales, and (2) construct spatially-explicit hypotheses regarding pattern-generating mechanisms. Application of the wavelet variance, derived from the wavelet transform, is developed for forest ecosystem analysis to obtain additional insight into spatially-explicit data. Specifically, the resolution capabilities of the wavelet variance are compared to the semi-variogram and Fourier power spectra for the description of spatial data using a set of one-dimensional stationary and non-stationary processes. The wavelet cross-covariance function is derived from the wavelet transform and introduced as an alternative method for the analysis of multivariate spatial data of understory vegetation and canopy in Douglas-fir forests of the western Cascades of Oregon.

  2. Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2015-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.

  3. Image processing for quantifying fracture orientation and length scale transitions during brittle deformation

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; Farrell, N. J.

    2017-12-01

We have implemented a novel image processing tool, two-dimensional (2D) Morlet wavelet analysis, capable of detecting changes occurring in fracture patterns at different scales of observation, and of recognising the dominant fracture orientations and spatial configurations for progressively larger (or smaller) scales of analysis. Because of its inherent anisotropy, the Morlet wavelet proves to be an excellent choice for detecting directional linear features, i.e. regions where the amplitude of the signal is regular along one direction and has sharp variation along the perpendicular direction. The performance of the Morlet wavelet is tested against the 'classic' Mexican hat wavelet using a complex synthetic fracture network. When applied to a natural fracture network, formed by triaxially (σ1 > σ2 = σ3) deforming a core sample of the Hopeman sandstone, the combination of the 2D Morlet wavelet and wavelet coefficient maps allows for the detection of characteristic orientation and length-scale transitions associated with the shift from distributed damage to the growth of a localised macroscopic shear fracture. A complementary outcome arises from the wavelet coefficient maps produced by increasing the wavelet scale parameter. These maps can be used to chart variations in the spatial distribution of the analysed entities, meaning that it is possible to retrieve information on the density of fracture patterns at specific length scales during deformation.
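
    The directional selectivity that makes the Morlet wavelet suited to linear features can be shown with a small numpy sketch: a 2D Morlet-style kernel responds strongly when its oscillation runs across a linear feature and weakly when it runs along it. The kernel size, σ, and k0 below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def morlet_2d(size, k0=5.0, theta=0.0, sigma=2.0):
    """2D Morlet-style kernel: a plane-wave oscillation at angle theta under a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)        # coordinate along the oscillation
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * k0 * u / sigma)

def peak_response(img, kernel):
    """Maximum magnitude of the (circular) convolution of img with kernel."""
    kpad = np.zeros(img.shape, dtype=complex)
    s = kernel.shape[0]
    kpad[:s, :s] = kernel
    return np.abs(ifft2(fft2(img) * fft2(kpad))).max()

img = np.zeros((64, 64))
img[32, :] = 1.0                                      # a horizontal "fracture"

r_across = peak_response(img, morlet_2d(15, theta=np.pi / 2))  # oscillation across the line
r_along = peak_response(img, morlet_2d(15, theta=0.0))         # oscillation along the line
```

    Sweeping theta and the scale of the kernel is, in spirit, how orientation- and scale-resolved coefficient maps are built.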

  4. Automatic Image Registration of Multi-Modal Remotely Sensed Data with Global Shearlet Features

    PubMed Central

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2017-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone. PMID:29123329

  5. The ssWavelets package

    Treesearch

    Jeffrey H. Gove

    2017-01-01

This package adds several classes, generics and associated methods, as well as various utility functions, to help with wavelet decomposition of sampling surfaces generated using sampSurf. As such, it can be thought of as an extension to sampSurf for wavelet analysis.

  6. Laser therapy in the management of dental and oro-facial trauma

    NASA Astrophysics Data System (ADS)

    Darbar, Arun A.

    2007-02-01

This is a clinical presentation demonstrating the efficacy of laser therapy in the treatment of patients presenting with trauma to both the hard and soft tissue of the orofacial region. Laser therapy aids the management of these cases, where the patients often present with anxiety and a low pain threshold. The outcomes in these cases indicate good patient acceptance of the treatment and enhanced repair and tissue response, suggesting that this form of treatment can be indicated for these patients. A combination of hard and soft lasers is used for the comprehensive dental management and treatment of these cases. The lasers used are an 810 nm diode and an Er,Cr:YSGG.

  7. [Thinking of therapeutic mechanism of small knife needle in treating closed myofascitis].

    PubMed

    Zhao, Yong; Fang, Wei; Qin, Wei-kai

    2014-09-01

The authors investigated and discussed the therapeutic mechanism of small knife needle in treating closed myofascitis on the basis of the pathomechanism of modern medicine and the acupuncture theory of TCM, drawing on numerous clinical cases and experimental data. The therapeutic mechanism lies in 6 aspects: (1) relieve the energy crisis of the tenderness point on the muscular fasciae; (2) affect the nervous system and reduce the induction of harmful stimulating signals; (3) inhibit aseptic inflammatory reaction on the muscular fasciae; (4) regulate the dynamic equilibrium of soft tissue by cutting scars and releasing adhesions; (5) increase the patient's regional sensory threshold; (6) reduce the tension and pressure of the soft tissue at the tenderness point so as to relieve compression of the cutaneous nerves.

  8. Direct micromachining of quartz glass plates using pulsed laser plasma soft x-rays

    NASA Astrophysics Data System (ADS)

    Makimura, Tetsuya; Miyamoto, Hisao; Kenmotsu, Youichi; Murakami, Kouichi; Niino, Hiroyuki

    2005-03-01

We have investigated direct micromachining of quartz glass using pulsed laser plasma soft x-rays (LPSXs), which have potential for nanomachining because the diffraction limit is ~10 nm. The LPSXs were generated by irradiating a Ta target with 532 nm light from a conventional Q-switched Nd:YAG laser at 700 mJ/pulse. In order to achieve a power density of LPSXs beyond the ablation threshold, we developed an ellipsoidal mirror to obtain efficient focusing of LPSXs at around 10 nm. It was found that quartz glass plates are smoothly ablated at 45 nm/shot using the focused, pulsed LPSXs.

  9. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    PubMed

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
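
    The record describes the VET procedure only at a high level. Its underlying idea, keeping the smallest set of wavelet coefficients that retains a target fraction of signal energy, can be sketched as follows (the Haar transform and the 99% energy target are illustrative assumptions, not the paper's choices):

```python
import numpy as np

def haar_dwt(x):
    """Full Haar DWT of a power-of-two-length series; returns a flat coefficient array."""
    h = np.asarray(x, dtype=float)
    coeffs = []
    while len(h) > 1:
        a = (h[0::2] + h[1::2]) / np.sqrt(2)
        d = (h[0::2] - h[1::2]) / np.sqrt(2)
        coeffs.append(d)
        h = a
    coeffs.append(h)                       # final approximation coefficient
    return np.concatenate(coeffs[::-1])

def energy_threshold(coeffs, keep_fraction=0.99):
    """Zero out small coefficients, keeping the fewest that hold keep_fraction of energy."""
    order = np.argsort(np.abs(coeffs))[::-1]
    cum = np.cumsum(coeffs[order] ** 2) / np.sum(coeffs ** 2)
    k = int(np.searchsorted(cum, keep_fraction)) + 1
    reduced = np.zeros_like(coeffs)
    reduced[order[:k]] = coeffs[order[:k]]
    return reduced, k

t = np.linspace(0, 1, 512)
curve = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
coeffs = haar_dwt(curve)
reduced, k = energy_threshold(coeffs, keep_fraction=0.99)   # k coefficients survive
```

    The k retained coefficients then serve as the "reduced-size" data for downstream fault detection or classification.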

  10. The Principle of the Micro-Electronic Neural Bridge and a Prototype System Design.

    PubMed

    Huang, Zong-Hao; Wang, Zhi-Gong; Lu, Xiao-Ying; Li, Wen-Yuan; Zhou, Yu-Xuan; Shen, Xiao-Yan; Zhao, Xin-Tai

    2016-01-01

The micro-electronic neural bridge (MENB) aims to rebuild the lost motor function of paralyzed humans by routing movement-related signals from the brain, around the damaged part of the spinal cord, to external effectors. This study focused on the prototype system design of the MENB, including the principle of the MENB, the design of the neural signal detecting circuit and the functional electrical stimulation (FES) circuit, and the spike detecting and sorting algorithm. In this study, we developed a novel improved amplitude-threshold spike detecting method based on a variable forward difference threshold for both the training and bridging phases. The discrete wavelet transform (DWT), a new level feature coefficient selection method based on the Lilliefors test, and k-means clustering based on the Mahalanobis distance were used for spike sorting. A real-time online spike detecting and sorting algorithm based on the DWT and Euclidean distance was also implemented for the bridging phase. Tested on the data sets available at Caltech, in the training phase, the average sensitivity, specificity, and clustering accuracies are 99.43%, 97.83%, and 95.45%, respectively. Validated by the three-fold cross-validation method, the average sensitivity, specificity, and classification accuracy are 99.43%, 97.70%, and 96.46%, respectively.
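
    The paper's variable forward difference threshold is not specified in the record. The sketch below shows only a generic amplitude-threshold detection stage, with a robust MAD-based noise estimate (a standard choice in spike detection, assumed here rather than taken from the paper) and a refractory period:

```python
import numpy as np

def detect_spikes(x, c=4.0, refractory=16):
    """Amplitude-threshold spike detection; threshold = c * robust noise std estimate."""
    sigma = np.median(np.abs(x)) / 0.6745      # MAD-based estimate of the noise std
    thr = c * sigma
    idx = []
    last = -refractory
    for i, v in enumerate(np.abs(x)):
        if v > thr and i - last >= refractory:  # suppress re-triggering within a spike
            idx.append(i)
            last = i
    return np.array(idx), thr

rng = np.random.default_rng(1)
x = rng.normal(scale=0.1, size=2000)           # background noise
true_pos = [300, 900, 1500]
for p in true_pos:
    x[p:p + 5] += 1.0                          # three injected spikes
spikes, thr = detect_spikes(x)
```

    Detected spike waveforms would then be wavelet-transformed and clustered (the paper uses DWT features with k-means) to assign each spike to a source neuron.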

  11. The Wavelet ToolKat: A set of tools for the analysis of series through wavelet transforms. Application to the channel curvature and the slope control of three free meandering rivers in the Amazon basin.

    NASA Astrophysics Data System (ADS)

    Vaudor, Lise; Piegay, Herve; Wawrzyniak, Vincent; Spitoni, Marie

    2016-04-01

    The form and functioning of a geomorphic system result from processes operating at various spatial and temporal scales. Longitudinal channel characteristics thus exhibit complex patterns which vary according to the scale of study, might be periodic or segmented, and are generally blurred by noise. Describing the intricate, multiscale structure of such signals, and identifying at which scales the patterns are dominant and over which sub-reach, could help determine at which scales they should be investigated, and provide insights into the main controlling factors. Wavelet transforms aim at describing data at multiple scales (either in time or space), and are now exploited in geophysics for the analysis of nonstationary series of data. They provide a consistent, non-arbitrary, and multiscale description of a signal's variations and help explore potential causalities. Nevertheless, their use in fluvial geomorphology, notably to study longitudinal patterns, is hindered by a lack of user-friendly tools to help understand, implement, and interpret them. We have developed a free application, The Wavelet ToolKat, designed to facilitate the use of wavelet transforms on temporal or spatial series. We illustrate its usefulness describing longitudinal channel curvature and slope of three freely meandering rivers in the Amazon basin (the Purus, Juruá and Madre de Dios rivers), using topographic data generated from NASA's Shuttle Radar Topography Mission (SRTM) in 2000. Three types of wavelet transforms are used, with different purposes. Continuous Wavelet Transforms are used to identify in a non-arbitrary way the dominant scales and locations at which channel curvature and slope vary. Cross-wavelet transforms, and wavelet coherence and phase are used to identify scales and locations exhibiting significant channel curvature and slope co-variations. 
Maximal Overlap Discrete Wavelet Transforms decompose data into their variations at a series of scales and are used to provide smoothed descriptions of the series at the scales deemed relevant.
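
    The MODWT usage described above, decomposing a series into per-scale variations that sum back to the original and then keeping smoothed versions at the relevant scales, can be imitated with a simple undecimated moving-average pyramid (a simplification for illustration, not the exact MODWT filter bank):

```python
import numpy as np

def undecimated_mra(x, levels):
    """Additive multiresolution decomposition: x = detail_1 + ... + detail_J + smooth_J,
    one full-length series per scale, with no downsampling."""
    x = np.asarray(x, dtype=float)
    smooth = x.copy()
    details = []
    for j in range(1, levels + 1):
        width = 2 ** j
        kernel = np.ones(width) / width
        new_smooth = np.convolve(smooth, kernel, mode="same")
        details.append(smooth - new_smooth)    # variation at this scale
        smooth = new_smooth
    return details, smooth

t = np.arange(512)
series = np.sin(2 * np.pi * t / 128) + 0.3 * np.random.default_rng(7).normal(size=512)
details, smooth = undecimated_mra(series, levels=4)
recon = sum(details) + smooth                  # telescoping sum restores the input exactly
```

    For a curvature or slope profile, the `details` series chart the variation at each spatial scale, while `smooth` gives the smoothed description retained after removing fine-scale fluctuation.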

  12. Forecasting of particulate matter time series using wavelet analysis and wavelet-ARMA/ARIMA model in Taiyuan, China.

    PubMed

    Zhang, Hong; Zhang, Sheng; Wang, Ping; Qin, Yuzhe; Wang, Huifeng

    2017-07-01

Particulate matter with aerodynamic diameter below 10 μm (PM10) forecasting is difficult because of the uncertainties in describing the emission and meteorological fields. This paper proposed a wavelet-ARMA/ARIMA model to forecast short-term series of PM10 concentrations. It was evaluated by experiments using a 10-year data set of daily PM10 concentrations from 4 stations located in Taiyuan, China. The results indicated the following: (1) PM10 concentrations of Taiyuan had a decreasing trend during 2005 to 2012 but increased in 2013. PM10 concentrations had an obvious seasonal fluctuation related to coal-fired heating in winter and early spring. (2) Spatial differences among the four stations showed that the PM10 concentrations in industrial and heavily trafficked areas were higher than those in residential and suburban areas. (3) Wavelet analysis revealed that the trend variation and the changes of the PM10 concentrations of Taiyuan were complicated. (4) The proposed wavelet-ARIMA model could be efficiently and successfully applied to PM10 forecasting. Compared with the traditional ARMA/ARIMA methods, the wavelet-ARMA/ARIMA method could effectively reduce the forecasting error, improve the prediction accuracy, and realize multiple-time-scale prediction. Wavelet analysis can filter noisy signals and identify the variation trend and the fluctuation of the PM10 time-series data. Wavelet decomposition and reconstruction reduce the nonstationarity of the PM10 time-series data, and thus improve the accuracy of the prediction.
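
    A hedged sketch of the hybrid idea: decompose the series into scale components, fit a simple autoregressive model to each component, and sum the component forecasts. AR(1) stands in here for the paper's ARMA/ARIMA models, and an undecimated moving-average pyramid stands in for its wavelet decomposition; both are simplifications.

```python
import numpy as np

def fit_ar1(y):
    """Least-squares AR(1): y[t] = a * y[t-1] + b."""
    X = np.column_stack([y[:-1], np.ones(len(y) - 1)])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a, b

def wavelet_ar_forecast(x, levels=3, steps=5):
    """Forecast each scale component with AR(1), then recombine."""
    comps = []
    smooth = np.asarray(x, dtype=float)
    for j in range(1, levels + 1):             # additive scale decomposition
        k = np.ones(2 ** j) / 2 ** j
        ns = np.convolve(smooth, k, mode="same")
        comps.append(smooth - ns)
        smooth = ns
    comps.append(smooth)

    fc = np.zeros(steps)
    for c in comps:                            # forecast each component separately
        a, b = fit_ar1(c)
        y = c[-1]
        for h in range(steps):
            y = a * y + b
            fc[h] += y
    return fc

rng = np.random.default_rng(3)
t = np.arange(400)
x = 10 + 0.01 * t + np.sin(2 * np.pi * t / 50) + 0.2 * rng.normal(size=400)
forecast = wavelet_ar_forecast(x)              # 5-step-ahead forecast
```

    Modeling each scale separately is what lets the hybrid approach track both the slow trend and the seasonal fluctuation that a single ARMA fit tends to blur together.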

  13. Wavelet entropy of BOLD time series: An application to Rolandic epilepsy.

    PubMed

    Gupta, Lalit; Jansen, Jacobus F A; Hofman, Paul A M; Besseling, René M H; de Louw, Anton J A; Aldenkamp, Albert P; Backes, Walter H

    2017-12-01

To assess the wavelet entropy for the characterization of intrinsic aberrant temporal irregularities in the time series of resting-state blood-oxygen-level-dependent (BOLD) signal fluctuations. Further, to evaluate the temporal irregularities (disorder/order) on a voxel-by-voxel basis in the brains of children with Rolandic epilepsy. The BOLD time series was decomposed using the discrete wavelet transform and the wavelet entropy was calculated. Using a model time series consisting of multiple harmonics and nonstationary components, the wavelet entropy was compared with Shannon and spectral (Fourier-based) entropy. As an application, the wavelet entropy in 22 children with Rolandic epilepsy was compared to 22 age-matched healthy controls. The images were obtained by performing resting-state functional magnetic resonance imaging (fMRI) using a 3T system, an 8-element receive-only head coil, and an echo planar imaging pulse sequence (T2*-weighted). The wavelet entropy was also compared to spectral entropy, regional homogeneity, and Shannon entropy. Wavelet entropy was found to identify the nonstationary components of the model time series. In Rolandic epilepsy patients, a significantly elevated wavelet entropy was observed relative to controls for the whole cerebrum (P = 0.03). Spectral entropy (P = 0.41), regional homogeneity (P = 0.52), and Shannon entropy (P = 0.32) did not reveal significant differences. The wavelet entropy measure appeared more sensitive to detect abnormalities in cerebral fluctuations represented by nonstationary effects in the BOLD time series than more conventional measures. This effect was observed in the model time series as well as in Rolandic epilepsy. These observations suggest that the brains of children with Rolandic epilepsy exhibit stronger nonstationary temporal signal fluctuations than controls. Level of Evidence: 2. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2017;46:1728-1737.
© 2017 International Society for Magnetic Resonance in Medicine.
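
    Wavelet entropy in the sense used above is the Shannon entropy of the relative wavelet energy across decomposition levels: low for a signal concentrated at one scale, high for broadband or irregular signals. A minimal numpy sketch (the Haar wavelet and five levels are illustrative choices, not the paper's):

```python
import numpy as np

def wavelet_entropy(x, levels=5):
    """Shannon entropy of the relative wavelet energy across decomposition levels."""
    h = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        a = (h[0::2] + h[1::2]) / np.sqrt(2)
        d = (h[0::2] - h[1::2]) / np.sqrt(2)
        energies.append((d ** 2).sum())        # detail energy at this level
        h = a
    energies.append((h ** 2).sum())            # residual approximation energy
    p = np.array(energies) / np.sum(energies)  # relative wavelet energy
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(0)
t = np.arange(1024)
tone = np.sin(2 * np.pi * t / 256)    # energy concentrated at one scale -> low entropy
noise = rng.normal(size=1024)         # energy spread over all scales -> high entropy
H_tone = wavelet_entropy(tone)
H_noise = wavelet_entropy(noise)
```

    Applied voxel-wise to BOLD time series, higher values of this measure flag voxels whose fluctuations are spread irregularly across temporal scales.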

  14. Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics

    PubMed Central

    Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.

    2010-01-01

We present a novel integrated wavelet-domain based framework (w-ICA) for 3-D de-noising of functional magnetic resonance imaging (fMRI) data, followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose the idea of a 3-D wavelet-based multi-directional de-noising scheme where each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module, where we combine a new 3-D wavelet denoising approach with the better signal separation properties of ICA in the wavelet domain to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of the shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves. It was observed that the proposed framework was able to preserve the actual activation shape in a consistent manner, even for very high noise levels, in addition to a significant reduction in false-positive voxels. PMID:21034833

  15. Admissible Diffusion Wavelets and Their Applications in Space-Frequency Processing.

    PubMed

    Hou, Tingbo; Qin, Hong

    2013-01-01

    As signal processing tools, diffusion wavelets and biorthogonal diffusion wavelets have been propelled by recent research in mathematics. They employ diffusion as a smoothing and scaling process to empower multiscale analysis. However, their applications in graphics and visualization are overshadowed by nonadmissible wavelets and their expensive computation. In this paper, our motivation is to broaden the application scope to space-frequency processing of shape geometry and scalar fields. We propose the admissible diffusion wavelets (ADW) on meshed surfaces and point clouds. The ADW are constructed in a bottom-up manner that starts from a local operator in a high frequency, and dilates by its dyadic powers to low frequencies. By relieving the orthogonality and enforcing normalization, the wavelets are locally supported and admissible, hence facilitating data analysis and geometry processing. We define the novel rapid reconstruction, which recovers the signal from multiple bands of high frequencies and a low-frequency base in full resolution. It enables operations localized in both space and frequency by manipulating wavelet coefficients through space-frequency filters. This paper aims to build a common theoretic foundation for a host of applications, including saliency visualization, multiscale feature extraction, spectral geometry processing, etc.

  16. S2LET: A code to perform fast wavelet analysis on the sphere

    NASA Astrophysics Data System (ADS)

    Leistedt, B.; McEwen, J. D.; Vandergheynst, P.; Wiaux, Y.

    2013-10-01

    We describe S2LET, a fast and robust implementation of the scale-discretised wavelet transform on the sphere. Wavelets are constructed through a tiling of the harmonic line and can be used to probe spatially localised, scale-dependent features of signals on the sphere. The reconstruction of a signal from its wavelet coefficients is made exact here through the use of a sampling theorem on the sphere. Moreover, a multiresolution algorithm is presented to capture all information of each wavelet scale in the minimal number of samples on the sphere. In addition, S2LET supports the HEALPix pixelisation scheme, in which case the transform is not exact but nevertheless achieves good numerical accuracy. The core routines of S2LET are written in C and have interfaces in Matlab, IDL and Java. Real signals can be written to and read from FITS files and plotted as Mollweide projections. The S2LET code is made publicly available, is extensively documented, and ships with several examples in the four languages supported. At present the code is restricted to axisymmetric wavelets but will be extended to directional, steerable wavelets in a future release.

  17. Acoustic emission detection for mass fractions of materials based on wavelet packet technology.

    PubMed

    Wang, Xianghong; Xiang, Jianjun; Hu, Hongwei; Xie, Wei; Li, Xiongbing

    2015-07-01

    Materials are often damaged during the process of detecting mass fractions by traditional methods. Acoustic emission (AE) technology combined with wavelet packet analysis is used to evaluate the mass fractions of microcrystalline graphite/polyvinyl alcohol (PVA) composites in this study. Attenuation characteristics of AE signals across the composites with different mass fractions are investigated. The AE signals are decomposed by wavelet packet technology to obtain the relationships between the energy and amplitude attenuation coefficients of feature wavelet packets and the mass fractions as well. Furthermore, the relationship is validated by a sample. A larger proportion of microcrystalline graphite corresponds to higher attenuation of energy and amplitude. The attenuation characteristics of feature wavelet packets in the frequency range from 125 kHz to 171.85 kHz are more suitable for the detection of mass fractions than those of the original AE signals. The error of the mass fraction of microcrystalline graphite calculated from the feature wavelet packet (1.8%) is lower than that from the original signal (3.9%). Therefore, AE detection based on wavelet packet analysis is an ideal NDT method for evaluating the mass fractions of composite materials. Copyright © 2015 Elsevier B.V. All rights reserved.
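
The wavelet packet step used in this record (split a signal into frequency bands and compute per-band energy) can be sketched as follows. This is a minimal stdlib-only sketch assuming a Haar filter pair and natural band ordering; the paper's actual filters, depth, and band selection are not specified here.

```python
import math

def haar_split(x):
    # One orthonormal Haar split into lowpass and highpass halves.
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wavelet_packet(x, depth):
    # Unlike a plain DWT, a wavelet packet splits BOTH the lowpass and
    # highpass branches at every level, yielding 2**depth bands.
    bands = [list(x)]
    for _ in range(depth):
        nxt = []
        for b in bands:
            a, d = haar_split(b)
            nxt.extend([a, d])
        bands = nxt
    return bands  # bands in natural (Paley) order

def band_energies(x, depth):
    # Energy of each packet band; the feature used for attenuation analysis.
    return [sum(c * c for c in b) for b in wavelet_packet(x, depth)]
```

Because the Haar split is orthonormal, the band energies sum exactly to the signal energy, which makes per-band energy a well-defined way to attribute attenuation to frequency ranges.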

  18. Segmentation-based wavelet transform for still-image compression

    NASA Astrophysics Data System (ADS)

    Mozelle, Gerard; Seghier, Abdellatif; Preteux, Francoise J.

    1996-10-01

    In order to simultaneously address the content-based and scalability functionalities required by MPEG-4, we introduce a segmentation-based wavelet transform (SBWT). SBWT takes into account both the mathematical properties of multiresolution analysis and the flexibility of region-based approaches for image compression. The associated methodology has two stages: 1) image segmentation into convex and polygonal regions; 2) 2D-wavelet transform of the signal corresponding to each region. In this paper, we have mathematically studied a method for constructing a multiresolution analysis (V_j(Omega))_{j in N} adapted to a polygonal region, which provides an adaptive region-based filtering. The explicit construction of scaling functions, pre-wavelets and orthonormal wavelet bases defined on a polygon is carried out by using the theory of Toeplitz operators. The corresponding expression can be interpreted as a location property which allows defining interior and boundary scaling functions. Concerning orthonormal wavelets and pre-wavelets, a similar expansion is obtained by taking advantage of the properties of the orthogonal projector P_{(V_j(Omega))^perp} from the space V_{j+1}(Omega) onto the space (V_j(Omega))^perp. Finally, the mathematical results provide a simple and fast algorithm adapted to polygonal regions.

  19. Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction

    NASA Astrophysics Data System (ADS)

    Rizal Isnanto, R.

    2015-06-01

    The human iris has a very unique pattern, which makes it usable for biometric recognition. To identify texture in an image, texture analysis methods can be used. One such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five mentioned wavelets was performed, and a comparison analysis was then conducted, from which some conclusions were drawn. Some steps had to be carried out in the research. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using normalized Euclidean distance. The comparison analysis is based on the recognition-rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, some tests were conducted using energy compaction for all five types of wavelets above. As the result, the highest recognition rate is achieved using Haar; likewise, for coefficient cutting with C(i) < 0.1, the Haar wavelet has the highest percentage, therefore the retention rate, or significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
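
The pipeline described above (wavelet decomposition, energy value as the feature, normalized Euclidean distance for recognition) can be sketched in a few lines. This is an illustrative stdlib-only sketch using a 1-D Haar decomposition; the record's actual wavelets, preprocessing, and 2-D iris images are not reproduced, and all names are hypothetical.

```python
import math

def haar_dwt_energies(x, levels):
    # Feature vector: energy fraction of each detail subband plus the
    # final approximation, from a multilevel Haar decomposition.
    feats = []
    cur = list(x)
    for _ in range(levels):
        a = [(cur[2*i] + cur[2*i+1]) / math.sqrt(2) for i in range(len(cur) // 2)]
        d = [(cur[2*i] - cur[2*i+1]) / math.sqrt(2) for i in range(len(cur) // 2)]
        feats.append(sum(v * v for v in d))
        cur = a
    feats.append(sum(v * v for v in cur))
    total = sum(feats) or 1.0
    return [f / total for f in feats]

def ndist(u, v):
    # Normalized Euclidean distance between two feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)) / len(u))

def recognize(query, database):
    # Return the reference whose stored features are nearest to the query.
    return min(database, key=lambda name: ndist(query, database[name]))
```

Smooth signals put their energy fraction in the approximation band, textured ones in the detail bands, so the normalized energy vector separates the two classes.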

  20. Necessary and sufficient conditions for discrete wavelet frames in CN

    NASA Astrophysics Data System (ADS)

    Deepshikha; Vashisht, Lalit K.

    2017-07-01

    We present necessary and sufficient conditions, with explicit frame bounds, for a discrete wavelet system of the form {D_a T_k phi}_{a in U(N), k in I_N} to be a frame for the unitary space C^N. It is shown that the canonical dual of a discrete wavelet frame for C^N has the same structure. This is well known not to be true for the canonical dual of a wavelet frame for L^2(R). Several numerical examples are given to illustrate the results.
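
The frame condition behind this record, A·||x||^2 <= sum_k |<x, phi_k>|^2 <= B·||x||^2 for all x, can be checked numerically for any finite system. The sketch below uses the classical Mercedes-Benz frame in R^2 rather than the paper's dilation-translation system in C^N; it only illustrates the frame inequality and its bounds, with all names hypothetical.

```python
import math

def frame_sums(frame, x):
    # sum_k |<x, f_k>|^2 for a finite collection of real vectors.
    return sum(sum(xi * fi for xi, fi in zip(x, f)) ** 2 for f in frame)

# Mercedes-Benz frame: three unit vectors at 120-degree spacing, a tight
# frame for R^2 with frame bounds A = B = 3/2.
mb = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
      for k in range(3)]

def bound_estimates(frame, samples):
    # Crude numerical frame bounds: min and max of the Rayleigh-type
    # ratio frame_sums(x) / ||x||^2 over a set of test vectors.
    ratios = [frame_sums(frame, x) / sum(v * v for v in x) for x in samples]
    return min(ratios), max(ratios)
```

For a tight frame (A = B) the canonical dual is simply {f_k / A}, a small finite-dimensional analogue of the structural statement in the record.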

  1. Skin image retrieval using Gabor wavelet texture feature.

    PubMed

    Ou, X; Pan, W; Zhang, X; Xiao, P

    2016-12-01

    Skin imaging plays a key role in many clinical studies. We have used many skin imaging techniques, including the recently developed capacitive contact skin imaging based on fingerprint sensors. The aim of this study was to develop an effective skin image retrieval technique using the Gabor wavelet transform, which can be used on different types of skin images, but with a special focus on skin capacitive contact images. Content-based image retrieval (CBIR) is a useful technology to retrieve stored images from a database by supplying query images. In a typical CBIR, images are retrieved based on colour, shape, texture, etc. In this study, texture features are used for retrieving skin images, and the Gabor wavelet transform is used for texture feature description and extraction. The results show that Gabor wavelet texture features can work efficiently on different types of skin images. Although the Gabor wavelet transform is slower compared with other image retrieval techniques, such as principal component analysis (PCA) and the grey-level co-occurrence matrix (GLCM), the Gabor wavelet transform is the best for retrieving skin capacitive contact images and facial images with different orientations. The Gabor wavelet transform can also work well on facial images with different expressions and skin cancer/disease images. We have developed an effective skin image retrieval method based on the Gabor wavelet transform, which is useful for retrieving different types of images, namely digital colour face images, digital colour skin cancer and skin disease images, and particularly greyscale skin capacitive contact images. The Gabor wavelet transform can also be potentially useful for face recognition (with different orientations and expressions) and skin cancer/disease diagnosis. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
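
Gabor wavelet texture description, as used in this record, typically filters the image with a bank of oriented Gabor kernels and summarizes each response by simple statistics. The stdlib-only sketch below assumes a real (even-symmetric) Gabor kernel and mean/standard-deviation features; the paper's exact filter bank and distance measure are not specified here, and all names are illustrative.

```python
import math

def gabor_kernel(ksize, sigma, theta, lam):
    # Real even-symmetric Gabor wavelet: Gaussian envelope times a cosine
    # carrier of wavelength lam, oriented at angle theta.
    half = ksize // 2
    kern = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            env = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(env * math.cos(2 * math.pi * xr / lam))
        kern.append(row)
    return kern

def convolve_valid(img, ker):
    # Plain 'valid' 2-D correlation (no padding), lists of lists in/out.
    kh, kw = len(ker), len(ker[0])
    return [[sum(ker[u][v] * img[i + u][j + v]
                 for u in range(kh) for v in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def gabor_features(img, kernels):
    # Texture signature: mean and standard deviation of each response.
    feats = []
    for ker in kernels:
        flat = [v for row in convolve_valid(img, ker) for v in row]
        m = sum(flat) / len(flat)
        feats += [m, math.sqrt(sum((v - m) ** 2 for v in flat) / len(flat))]
    return feats
```

A kernel oriented at theta = 0 varies horizontally and therefore responds strongly to vertical stripes, while the 90-degree kernel barely does; that contrast is what makes the feature vector discriminative for texture retrieval.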

  2. Necessary and sufficient condition for the realization of the complex wavelet

    NASA Astrophysics Data System (ADS)

    Keita, Alpha; Qing, Qianqin; Wang, Nengchao

    1997-04-01

    Wavelet theory is a whole new signal analysis theory developed in recent years, and its appearance is attracting experts in many different fields to study it in depth. The wavelet transformation is a new kind of time-frequency domain analysis method, with localization realizable in the time domain or the frequency domain. It has many desirable characteristics that other time-frequency domain analysis methods, such as the Gabor transformation or the Wigner-Ville distribution, lack. For example, it has orthogonality, direction selectivity, a variable time-frequency resolution ratio, adjustable local support, and parsimonious representation of data. All of the above make the wavelet transformation a very important new tool and method in the field of signal analysis. Because the calculation of complex wavelets is very difficult, real wavelet functions are used in applications. In this paper, we present a necessary and sufficient condition under which the real wavelet function can be obtained from the complex wavelet function. This theorem has significant value in theory. The paper builds its technique on the Hartley transformation. Hartley was a signal engineering expert; his transformation had been overlooked for about 40 years, since the conditions of production at that time could not show its superiority. Only at the end of the 1970s and the early 1980s, after the development of the fast algorithm of the Fourier transformation and of hardware implementations to some degree, did this completely real positive-negative transforming method come to be taken seriously. The W transformation, introduced by Zhongde Wang, pushed forward the study of the Hartley transformation and its fast algorithms. The kernel function of the Hartley transformation is cas(x) = cos x + sin x.
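
The Hartley transformation underlying this record is easy to state concretely: it uses the cas kernel and, unlike the Fourier transform, maps real sequences to real sequences. Below is a direct O(N^2) sketch (not the fast algorithm the record mentions); the function name is illustrative.

```python
import math

def dht(x):
    # Discrete Hartley transform with the cas kernel,
    # cas(t) = cos(t) + sin(t).
    # Relation to the DFT: DHT(x)[k] = Re(FFT(x)[k]) - Im(FFT(x)[k]).
    N = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / N) +
                        math.sin(2 * math.pi * k * n / N))
                for n in range(N))
            for k in range(N)]
```

A convenient property is that the transform is its own inverse up to a factor of N: applying dht twice returns N times the original sequence, which is why the same routine serves for both the forward and the inverse ("positive-negative") transform.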

  3. Continuous wavelet transforms for the simultaneous quantitative analysis and dissolution testing of lamivudine-zidovudine tablets.

    PubMed

    Dinç, Erdal; Özdemir, Nurten; Üstündağ, Özgür; Tilkan, Müşerref Günseli

    2013-01-01

    Dissolution testing has a very vital importance as a quality control test and for prediction of the in vivo behavior of an oral dosage formulation. This requires the use of a powerful analytical method to obtain reliable, accurate and precise results for the dissolution experiments. In this context, new signal processing approaches, continuous wavelet transforms (CWTs), were developed for the simultaneous quantitative estimation and dissolution testing of lamivudine (LAM) and zidovudine (ZID) in a tablet dosage form. The CWT approaches are based on the application of continuous wavelet functions to the absorption-spectra data vectors of LAM and ZID in the wavelet domain. After applying many wavelet functions, the Mexican hat wavelet with the scaling factor a=256, the Symlets wavelet of order 5 with the scaling factor a=512, and the Daubechies wavelet of order 10 at the scale factor a=450 were found to be suitable for the quantitative determination of the mentioned drugs. These wavelet applications were named the mexh-CWT, sym5-CWT and db10-CWT methods. Calibration graphs for LAM and ZID in the working ranges of 2.0-50.0 µg/mL and 2.0-60.0 µg/mL were obtained by measuring the mexh-CWT, sym5-CWT and db10-CWT amplitudes at the wavelength points corresponding to zero-crossing points. The validity and applicability of the improved mexh-CWT, sym5-CWT and db10-CWT approaches were verified by the analysis of synthetic mixtures containing the analyzed drugs. Simultaneous determination of LAM and ZID in tablets was accomplished by the proposed CWT methods and their dissolution profiles were graphically explored.
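
The mexh-CWT amplitude used in this record is just the correlation of a data vector with a scaled Mexican hat wavelet at each position. The sketch below is a stdlib-only illustration of computing that amplitude at one scale; the record's spectra, scale factors, and zero-crossing calibration procedure are not reproduced, and the names are hypothetical.

```python
import math

def mexh(t):
    # Mexican hat (Ricker) wavelet: second derivative of a Gaussian,
    # normalized to unit L2 norm; zero mean by construction.
    return (2 / (math.sqrt(3) * math.pi ** 0.25)) * (1 - t * t) * math.exp(-t * t / 2)

def cwt_at(signal, scale, positions, dt=1.0):
    # CWT amplitude at a single scale: correlate the sampled signal with
    # the wavelet translated to each requested position b.
    out = []
    half = int(5 * scale)  # wavelet support is effectively +/- 5 scales
    for b in positions:
        acc = 0.0
        for n in range(max(0, b - half), min(len(signal), b + half + 1)):
            acc += signal[n] * mexh((n - b) / scale)
        out.append(acc * dt / math.sqrt(scale))
    return out
```

Because the wavelet has zero mean, constant baselines cancel, which is what lets CWT amplitudes at zero-crossing wavelengths isolate one analyte from an overlapping mixture spectrum.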

  4. Wavelets and the squeezed states of quantum optics

    NASA Technical Reports Server (NTRS)

    Defacio, B.

    1992-01-01

    Wavelets are new mathematical objects which act as 'designer trigonometric functions.' To obtain a wavelet, the original function space of finite energy signals is generalized to a phase-space, and the translation operator in the original space has a scale change in the new variable adjoined to the translation. Localization properties in the phase-space can be improved and unconditional bases are obtained for a broad class of function and distribution spaces. Operators in phase space are 'almost diagonal' instead of the traditional condition of being diagonal in the original function space. These wavelets are applied to the squeezed states of quantum optics. The scale change required for a quantum wavelet is shown to be a Yuen squeeze operator acting on an arbitrary density operator.

  5. Multi-resolution analysis for ear recognition using wavelet features

    NASA Astrophysics Data System (ADS)

    Shoaib, M.; Basit, A.; Faye, I.

    2016-11-01

    Security is very important, and in order to avoid any physical contact, identification of humans while they are moving is necessary. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific details of level two after applying various types of wavelet transforms. Different wavelet transforms are applied to find the most suitable wavelet. Minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising and can be used in a real-time ear recognition system.
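
The matching step described above (level-two approximation coefficients as features, minimum Euclidean distance as the criterion) can be sketched directly. This stdlib-only sketch assumes a Haar filter and 1-D signals in place of ear images; the gallery layout and function names are illustrative only.

```python
import math

def approx_level2(x):
    # Two cascaded Haar lowpass/downsample stages give the level-2
    # approximation coefficients (a coarse, length/4 summary of x).
    for _ in range(2):
        x = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return x

def euclid(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def identify(probe, gallery):
    # gallery: dict mapping identity -> enrolled level-2 feature vector.
    # Minimum Euclidean distance decides the match.
    feats = approx_level2(probe)
    return min(gallery, key=lambda person: euclid(feats, gallery[person]))
```

Using only the approximation band makes the features robust to small perturbations of the probe, since fine-scale detail (where most noise lives) is discarded before matching.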

  6. Non-stationary dynamics in the bouncing ball: A wavelet perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behera, Abhinna K., E-mail: abhinna@iiserkol.ac.in; Panigrahi, Prasanta K., E-mail: pprasanta@iiserkol.ac.in; Sekar Iyengar, A. N., E-mail: ansekar.iyengar@saha.ac.in

    2014-12-01

    The non-stationary dynamics of a bouncing ball, comprising both periodic as well as chaotic behavior, is studied through wavelet transform. The multi-scale characterization of the time series displays clear signatures of self-similarity, complex scaling behavior, and periodicity. Self-similar behavior is quantified by the generalized Hurst exponent, obtained through both wavelet based multi-fractal detrended fluctuation analysis and Fourier methods. The scale dependent variable window size of the wavelets aptly captures both the transients and non-stationary periodic behavior, including the phase synchronization of different modes. The optimal time-frequency localization of the continuous Morlet wavelet is found to delineate the scales corresponding to neutral turbulence, viscous dissipation regions, and different time varying periodic modulations.

  7. [Recognition of landscape characteristic scale based on two-dimension wavelet analysis].

    PubMed

    Gao, Yan-Ni; Chen, Wei; He, Xing-Yuan; Li, Xiao-Yu

    2010-06-01

    Three wavelet bases, i.e., Haar, Daubechies, and Symlet, were chosen to analyze the validity of two-dimension wavelet analysis in recognizing the characteristic scales of the urban, peri-urban, and rural landscapes of Shenyang. Because the transform scale of the two-dimension wavelet must be an integer power of 2, some characteristic scales cannot be accurately recognized. Therefore, the pixel resolution of the images was resampled to 3, 3.5, 4, and 4.5 m to densify the scales in the analysis. It was shown that two-dimension wavelet analysis worked effectively in checking characteristic scale. Haar, Daubechies, and Symlet were the optimal wavelet bases for the peri-urban landscape, urban landscape, and rural landscape, respectively. Both the Haar basis and the Symlet basis played good roles in recognizing the fine characteristic scale of the rural landscape and in detecting the boundary of the peri-urban landscape. The Daubechies basis and the Symlet basis could also be used to detect the boundary of the urban landscape and the rural landscape, respectively.

  8. Wavelet-enhanced convolutional neural network: a new idea in a deep learning paradigm.

    PubMed

    Savareh, Behrouz Alizadeh; Emami, Hassan; Hajiabadi, Mohamadreza; Azimi, Seyed Majid; Ghafoori, Mahyar

    2018-05-29

    Manual brain tumor segmentation is a challenging task, which motivates the use of machine learning techniques. One of the machine learning techniques that has been given much attention is the convolutional neural network (CNN). The performance of a CNN can be enhanced by combining it with other data analysis tools such as the wavelet transform. In this study, one of the well-known implementations of the CNN, the fully convolutional network (FCN), was used for brain tumor segmentation, and its architecture was enhanced by the wavelet transform. In this combination, the wavelet transform was used as a complementary and enhancing tool for the CNN in brain tumor segmentation. Comparing the performance of the basic FCN architecture against the wavelet-enhanced form revealed a remarkable superiority of the enhanced architecture in brain tumor segmentation tasks. Using mathematical functions and enhancing tools such as the wavelet transform can improve the performance of a CNN in any image processing task such as segmentation and classification.

  9. The wavelet response as a multiscale characterization of scattering processes at granular interfaces.

    PubMed

    Le Gonidec, Yves; Gibert, Dominique

    2006-11-01

    We perform a multiscale analysis of the backscattering properties of a complex interface between water and a layer of randomly arranged glass beads with diameter D=1 mm. An acoustical experiment is done to record the wavelet response of the interface in a large frequency range from lambda/D=0.3 to lambda/D=15. The wavelet response is a physical analog of the mathematical wavelet transform, which possesses nice properties for detecting and characterizing abrupt changes in signals. The experimental wavelet response allows us to identify five frequency domains corresponding to different backscattering properties of the complex interface. This puts quantitative limits on the validity domains of the models used to represent the interface, which are flat elastic, flat visco-elastic, rough random half-space with multiple scattering, and rough elastic, from long to short wavelengths respectively. A physical explanation based on Mie scattering theory is proposed to explain the origin of the five frequency domains identified in the wavelet response.

  10. Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin

    2015-10-25

    I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
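
One of the simplest wavelet "configurations" evaluated in studies like this is coefficient prioritization: transform, keep only the k largest-magnitude coefficients, and reconstruct. The stdlib-only sketch below uses a full multilevel Haar transform; the paper's actual wavelets and storage formats are not reproduced, and the names are illustrative.

```python
import math

def haar_full(x):
    # Full multilevel orthonormal Haar transform (length a power of two).
    # Layout: [a_L, d_L, d_{L-1}, ..., d_1].
    out = list(x)
    n = len(x)
    while n > 1:
        a = [(out[2*i] + out[2*i+1]) / math.sqrt(2) for i in range(n // 2)]
        d = [(out[2*i] - out[2*i+1]) / math.sqrt(2) for i in range(n // 2)]
        out[:n] = a + d
        n //= 2
    return out

def haar_full_inv(c):
    # Exact inverse of haar_full.
    out = list(c)
    n = 1
    while n < len(out):
        a, d = out[:n], out[n:2*n]
        x = []
        for ai, di in zip(a, d):
            x.extend([(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)])
        out[:2*n] = x
        n *= 2
    return out

def compress(x, keep):
    # Lossy compression: zero all but (at least) the `keep` largest
    # coefficients by magnitude, then reconstruct. Ties at the cutoff
    # magnitude may retain a few extra coefficients.
    c = haar_full(x)
    cutoff = sorted((abs(v) for v in c), reverse=True)[keep - 1]
    kept = [v if abs(v) >= cutoff else 0.0 for v in c]
    return haar_full_inv(kept)
```

Because the transform is orthonormal, the squared reconstruction error equals exactly the energy of the discarded coefficients, so accuracy degrades monotonically as fewer coefficients are kept.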

  11. Evolutionary Wavelet Neural Network ensembles for breast cancer and Parkinson's disease prediction.

    PubMed

    Khan, Maryam Mahsal; Mendes, Alexandre; Chalup, Stephan K

    2018-01-01

    Wavelet Neural Networks are a combination of neural networks and wavelets and have been mostly used in the area of time-series prediction and control. Recently, Evolutionary Wavelet Neural Networks have been employed to develop cancer prediction models. The present study proposes to use ensembles of Evolutionary Wavelet Neural Networks. The search for a high quality ensemble is directed by a fitness function that incorporates the accuracy of the classifiers both independently and as part of the ensemble itself. The ensemble approach is tested on three publicly available biomedical benchmark datasets, one on Breast Cancer and two on Parkinson's disease, using a 10-fold cross-validation strategy. Our experimental results show that, for the first dataset, the performance was similar to previous studies reported in literature. On the second dataset, the Evolutionary Wavelet Neural Network ensembles performed better than all previous methods. The third dataset is relatively new and this study is the first to report benchmark results.

  12. Evolutionary Wavelet Neural Network ensembles for breast cancer and Parkinson’s disease prediction

    PubMed Central

    Mendes, Alexandre; Chalup, Stephan K.

    2018-01-01

    Wavelet Neural Networks are a combination of neural networks and wavelets and have been mostly used in the area of time-series prediction and control. Recently, Evolutionary Wavelet Neural Networks have been employed to develop cancer prediction models. The present study proposes to use ensembles of Evolutionary Wavelet Neural Networks. The search for a high quality ensemble is directed by a fitness function that incorporates the accuracy of the classifiers both independently and as part of the ensemble itself. The ensemble approach is tested on three publicly available biomedical benchmark datasets, one on Breast Cancer and two on Parkinson’s disease, using a 10-fold cross-validation strategy. Our experimental results show that, for the first dataset, the performance was similar to previous studies reported in literature. On the second dataset, the Evolutionary Wavelet Neural Network ensembles performed better than all previous methods. The third dataset is relatively new and this study is the first to report benchmark results. PMID:29420578

  13. Bootstrapping rapidity anomalous dimensions for transverse-momentum resummation

    DOE PAGES

    Li, Ye; Zhu, Hua Xing

    2017-01-11

    The soft function relevant for transverse-momentum resummation for Drell-Yan or Higgs production at hadron colliders is computed through three loops in the expansion of the strong coupling, with the help of the bootstrap technique and supersymmetric decomposition. The corresponding rapidity anomalous dimension is extracted. Furthermore, an intriguing relation between the anomalous dimensions for transverse-momentum resummation and threshold resummation is found.

  14. Primary stability, insertion torque, and bone density of conical implants with internal hexagon: is there a relationship?

    PubMed

    Trisi, Paolo; Berardi, Davide; Paolantonio, Michele; Spoto, Giuseppe; D'Addona, Antonio; Perfetti, Giorgio

    2013-05-01

    Between implant and peri-implant bone there should be a minimal gap, without micromotion above a threshold that could cause resorption and fibrosis. The higher the implant insertion torque, the higher the initial stability will be. The aim was to evaluate in vitro the correlation between micromotion and insertion torque of implants in bone of different densities. The test was performed on bovine bone of hard, medium, and soft density: 150 implants were used, 10 for each torque (20, 35, 45, 70, and 100 Ncm). Samples were fixed on a loading device. On each sample, we applied a 25-N horizontal force. Insertion torque and micromotion are statistically correlated. In soft bone with an insertion torque of 20 and 35 Ncm, the micromotion was significantly over the risk threshold, which was not found with an insertion torque of 45 and 70 Ncm, nor in hard and medium bone with any insertion torque. The increase in insertion torque reduces the amount of micromotion between implant and bone. Therefore, immediate loading may be considered a valid therapeutic choice, even in low-density bone, as long as at least 45 Ncm of insertion torque is reached.

  15. Experimental Analysis of the Mechanism of Hearing under Water

    PubMed Central

    Chordekar, Shai; Kishon-Rabin, Liat; Kriksunov, Leonid; Adelman, Cahtia; Sohmer, Haim

    2015-01-01

    The mechanism of human hearing under water is debated. Some suggest it is by air conduction (AC), others by bone conduction (BC), and others by a combination of AC and BC. A clinical bone vibrator applied to soft tissue sites on the head, neck, and thorax also elicits hearing by a mechanism called soft tissue conduction (STC) or nonosseous BC. The present study was designed to test whether underwater hearing at low intensities is by AC or by osseous BC based on bone vibrations or by nonosseous BC (STC). Thresholds of normal hearing participants to bone vibrator stimulation with their forehead in air were recorded and again when forehead and bone vibrator were under water. A vibrometer detected vibrations of a dry human skull in all similar conditions (in air and under water) but not when water was the intermediary between the sound source and the skull forehead. Therefore, the intensities required to induce vibrations of the dry skull in water were significantly higher than the underwater hearing thresholds of the participants, under conditions when hearing by AC and osseous BC is not likely. The results support the hypothesis that hearing under water at low sound intensities may be attributed to nonosseous BC (STC). PMID:26770975

  16. A new method to detect transitory signatures and local time/space variability structures in the climate system: the scale-dependent correlation analysis

    NASA Astrophysics Data System (ADS)

    Rodó, Xavier; Rodríguez-Arias, Miquel-Àngel

    2006-10-01

    The study of transitory signals and local variability structures in time and/or space, and of their role as sources of climatic memory, is an important but often neglected topic in climate research, despite its obvious importance and extensive coverage in the literature. Transitory signals arise from non-linearities, transitory atmosphere-ocean couplings, and other processes in the climate system evolving after a critical threshold is crossed. These temporary interactions, which though intense may not last long, can be responsible for a large amount of unexplained variability, but are normally considered of limited relevance and often discarded. With most of the current techniques at hand, this typology of signatures is difficult to isolate, because of the low signal-to-noise ratio in midlatitudes and the limited recurrence of the transitory signals during a customary interval of data considered. There is also often a serious problem arising from the smoothing of local or transitory processes when statistical techniques are applied that consider the full length of available data, rather than taking into account the size of the specific variability structure under investigation. Scale-dependent correlation (SDC) analysis is a new statistical method capable of highlighting the presence of transitory processes, understood as temporary significant lag-dependent autocovariance in a single series, or covariance structures between two series. This approach therefore complements other approaches such as those resulting from the families of wavelet analysis, singular-spectrum analysis and recurrence plots. Main features of SDC are its high performance for short time series and its ability to characterize phase relationships and thresholds in the bivariate domain. Ultimately, SDC helps track short-lagged relationships among processes that locally or temporarily couple and uncouple. 
The use of SDC is illustrated in the present paper by means of some synthetic time-series examples of increasing complexity, and it is compared with wavelet analysis in order to provide a well-known reference for its capabilities. A comparison between SDC and companion techniques is also addressed, and results are exemplified for the specific case of some relevant El Niño-Southern Oscillation teleconnections.
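
The essence of a scale-dependent correlation, as described in this record, is a correlation computed over a short sliding window at a given lag, so that couplings which hold only temporarily still show up. The stdlib-only sketch below is an illustrative simplification (non-negative lags, plain Pearson correlation, no significance testing); all names are hypothetical.

```python
import math

def pearson(u, v):
    # Plain Pearson correlation of two equal-length windows.
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    if su == 0 or sv == 0:
        return 0.0
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

def sdc(x, y, window, lag):
    # Windowed lagged correlation (lag >= 0): for every window start t,
    # correlate x[t:t+window] against y[t+lag:t+lag+window]. A transient
    # coupling shows up as a run of high values at the matching lag.
    out = []
    for t in range(len(x) - window - lag + 1):
        out.append(pearson(x[t:t + window], y[t + lag:t + lag + window]))
    return out
```

Because each correlation uses only `window` samples, the statistic is local in time, which is precisely what lets it detect couplings that a full-length correlation would smooth away.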

  17. Turbidity forecasting at a karst spring using combined machine learning and wavelet multiresolution analysis.

    NASA Astrophysics Data System (ADS)

    Savary, M.; Massei, N.; Johannet, A.; Dupont, J. P.; Hauchard, E.

    2016-12-01

    25% of the world's population drinks water extracted from karst aquifers. The comprehension and protection of these aquifers appear crucial due to increasing drinking-water needs. In Normandie (north-west of France), the principal exploited aquifer is the chalk aquifer. The highly karstified chalk aquifer is, regionally speaking, an important water resource. Connections between surface and underground waters created by karstification introduce turbidity that decreases water quality. Numerous parameters and phenomena, together with the non-linearity of the rainfall/turbidity relation, influence turbidity, causing difficulties in modelling and forecasting turbidity peaks. In this context, the Yport pumping well provides half of the drinking water supply of the Le Havre conurbation (236 000 inhabitants). The aim of this work is thus to predict turbidity peaks in order to help pumping-well managers decrease the impact of turbidity on water treatment. The database consists of hourly rainfall since 2009, from six rain gauges located on the alimentation basin, and hourly turbidity since 1993. Because an accurate physical description of the karst system and its surface basin is lacking, the systemic paradigm is chosen and a black-box model, a neural network, is used. In a first step, correlation analyses are used to design the original model architecture by identifying the relation between output and input. The following optimization phases bring us four different architectures. These models were used to forecast turbidity 12 h ahead, as well as threshold surpassing. The first model is a simple multilayer perceptron. The second is a two-branches model designed to better represent the fast (rainfall) and slow (evapotranspiration) dynamics. Each kind of model is developed using both a recurrent and a feed-forward architecture. 
This work highlights that the feed-forward multilayer perceptron is better at predicting turbidity peaks, while the feed-forward two-branches model is better at predicting threshold surpassing. In a second step, the implementation of wavelet decomposition within the neural network model, to better apprehend slow and fast dynamics, is tested and discussed; this could also allow accounting for the non-linearity of the turbid response to some extent. This second approach is still under development.

  18. Do we need a threshold conception of competence?

    PubMed

    den Hartogh, Govert

    2016-03-01

    On the standard view we assess a person's competence by considering her relevant abilities without reference to the actual decision she is about to make. If she is deemed to satisfy certain threshold conditions of competence, it is still an open question whether her decision could ever be overruled on account of its harmful consequences for her ('hard paternalism'). In practice, however, one normally uses a variable, risk dependent conception of competence, which really means that in considering whether or not to respect a person's decision-making authority we weigh her decision on several relevant dimensions at the same time: its harmful consequences, its importance in terms of the person's own relevant values, the infringement of her autonomy involved in overruling it, and her decision-making abilities. I argue that we should openly recognize the multi-dimensional nature of this judgment. This implies rejecting both the threshold conception of competence and the categorical distinction between hard and soft paternalism.

  19. Behaviour of nematic liquid crystals doped with ferroelectric nanoparticles in the presence of an electric field

    NASA Astrophysics Data System (ADS)

    Emdadi, M.; Poursamad, J. B.; Sahrai, M.; Moghaddas, F.

    2018-06-01

    A planar nematic liquid crystal (NLC) cell doped with spherical ferroelectric nanoparticles is considered. The polarisation of the nanoparticles is assumed to lie along the NLC molecules, parallel and antiparallel to the director with equal probability. Anchoring of the NLC molecules to the cell walls is considered to be strong, while soft anchoring is assumed at the nanoparticle surfaces. The behaviour of the NLC molecules and nanoparticles in the presence of an electric field perpendicular to the NLC cell is theoretically investigated. The electric field of the nanoparticles is taken into account in the calculations. The Freedericksz transition (FT) threshold field in the presence of nanoparticles is found. Then, the director and particle reorientations for electric fields larger than the threshold field are studied. Measuring the onset of the nanoparticle reorientation is proposed as a new method for measuring the FT threshold.

  20. Co:MgF2 laser ablation of tissue: effect of wavelength on ablation threshold and thermal damage.

    PubMed

    Schomacker, K T; Domankevitz, Y; Flotte, T J; Deutsch, T F

    1991-01-01

    The wavelength dependence of the ablation threshold of a variety of tissues has been studied by using a tunable pulsed Co:MgF2 laser to determine how closely it tracks the optical absorption length of water. The Co:MgF2 laser was tuned between 1.81 and 2.14 microns, a wavelength region in which the absorption length varies by a decade. For soft tissues the ablation threshold tracks the optical absorption length; for bone there is little wavelength dependence, consistent with the low water content of bone. Thermal damage vs. wavelength was also studied for cornea and bone. Thermal damage to cornea has a weak wavelength dependence, while that to bone shows little wavelength dependence. Framing-camera pictures of the ablation of both cornea and liver show explosive removal of material, but differ as to the nature of the explosion.

  1. Laser damage threshold of gelatin and a copper phthalocyanine doped gelatin optical limiter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brant, M.C.; McLean, D.G.; Sutherland, R.L.

    1996-12-31

    The authors demonstrate optical limiting in a unique guest-host system which uses neither a typical liquid nor a typical solid host. Instead, they dope a gelatin gel host with water-soluble Copper (II) phthalocyaninetetrasulfonic acid, tetrasodium salt (CuPcTs). They report on the gelatin's viscoelasticity, laser damage threshold, and self-healing of this damage. The viscoelastic gelatin has mechanical properties quite different from those of a liquid or solid. The authors' laser measurements demonstrate that the single-shot damage threshold of the undoped gelatin host increases with decreasing gelatin concentration. The gelatin also has a much higher laser damage threshold than a stiff acrylic. Unlike brittle solids, the soft gelatin self-heals from laser-induced damage. Optical limiting tests also show the utility of a gelatin host doped with CuPcTs. The CuPcTs/gelatin matrix is not damaged at incident laser energies 5 times the single-shot damage threshold of the gelatin host. However, at this high laser energy the CuPcTs is photobleached at the beam waist. The authors repair photobleached sites by annealing the CuPcTs/gelatin matrix.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sattarivand, Mike; Summers, Clare; Robar, James

    Purpose: To evaluate the validity of using the spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment with a 6-degree-of-freedom (6D) couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to the treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around the tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were a 3 mm shift or 3° rotation (in any of the 6D). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from the spine and individual residual errors were calculated. Results: Residual errors were up to 0.5 ± 2.4 mm. Using the 3 mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in the sup-inf, ant-post, and roll directions, which had larger standard deviations. No correlation was found between tumor distance from the spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers quick pre-treatment patient alignment. However, bone matching based on the spine is not reliable for aligning lung SBRT patients, who require soft tissue image registration from CBCT. The spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.

  3. Vector coding of wavelet-transformed images

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua

    1998-09-01

    The wavelet, as a relatively new tool in signal processing, has gained broad recognition. Using the wavelet transform, we obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the human visual system. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.

  4. Fabric wrinkle characterization and classification using modified wavelet coefficients and optimized support-vector-machine classifier

    USDA-ARS?s Scientific Manuscript database

    This paper presents a novel wrinkle evaluation method that uses modified wavelet coefficients and an optimized support-vector-machine (SVM) classification scheme to characterize and classify wrinkle appearance of fabric. Fabric images were decomposed with the wavelet transform (WT), and five parame...

  5. Measurement of entanglement entropy in the two-dimensional Potts model using wavelet analysis.

    PubMed

    Tomita, Yusuke

    2018-05-01

    A method is introduced to measure the entanglement entropy using a wavelet analysis. Using this method, the two-dimensional Haar wavelet transform of a configuration of Fortuin-Kasteleyn (FK) clusters is performed. The configuration represents a direct snapshot of spin-spin correlations, since the spin degrees of freedom are traced out in the FK representation. A snapshot of FK clusters loses image information at each coarse-graining step of the wavelet transform. It is shown that this loss of image information measures the entanglement entropy in the Potts model.
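
    The coarse-graining step described above can be illustrated with a one-level 2-D Haar transform of a toy occupancy map: the approximation sub-band is the coarse-grained snapshot, and the information lost at the step lives in the three detail sub-bands. This is a generic sketch of the transform, not the paper's entropy estimator; the random occupancy map stands in for an FK-cluster snapshot.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: approximation + 3 detail sub-bands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # low-pass along rows
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # high-pass along rows
    ll = (lo[0::2] + lo[1::2]) / 2.0           # coarse-grained snapshot
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, (lh, hl, hh)

rng = np.random.default_rng(1)
snapshot = (rng.random((8, 8)) < 0.5).astype(float)  # toy cluster occupancy map

ll, details = haar2d(snapshot)

# The information discarded when keeping only the coarse-grained map:
lost = sum(np.count_nonzero(d) for d in details)
print(ll.shape, lost)
```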

  6. On the Daubechies-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a Daubechies-based wavelet basis is constructed and superconvergence is proven. That is, it is proven that, under the assumption of periodic boundary conditions, the differentiation matrix is accurate of order 2M, even though the approximation subspace can represent exactly only polynomials up to degree M-1, where M is the number of vanishing moments of the associated wavelet. It is illustrated that Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small-scale structure is present.

  7. An efficient computer based wavelets approximation method to solve Fuzzy boundary value differential equations

    NASA Astrophysics Data System (ADS)

    Alam Khan, Najeeb; Razzaq, Oyoon Abdul

    2016-03-01

    In the present work a wavelets approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelets series together with the Legendre wavelets operational matrix of derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on a second order FBVDE considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of the method.

  8. Periodized Daubechies wavelets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.

    1996-03-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted against their counterparts which form a basis for L²(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.

  9. Interactions between Uterine EMG at Different Sites Investigated Using Wavelet Analysis: Comparison of Pregnancy and Labor Contractions

    NASA Astrophysics Data System (ADS)

    Hassan, Mahmoud; Terrien, Jérémy; Karlsson, Brynjar; Marque, Catherine

    2010-12-01

    This paper describes the use of the Morlet wavelet transform to investigate the difference in the time-frequency plane between uterine EMG signals recorded simultaneously on two different sites on women's abdomen, both during pregnancy and in labor. The methods used are wavelet transform, cross wavelet transform, phase/amplitude correlation, and phase synchronization. We computed the linear relationship and phase synchronization between uterine signals measured during the same contractions at two different sites on data obtained from women during pregnancy and labor. The results show that the Morlet wavelet transform can successfully analyze and quantify the relationship between uterine electrical activities at different sites and could be employed to investigate the evolution of uterine contraction from pregnancy to labor.
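
    The core operations named above, the Morlet wavelet transform and the cross wavelet transform, can be sketched for two phase-shifted toy signals standing in for the two abdominal recording sites. The FFT-based implementation, the scale grid, and the test signals are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0, dt=1.0):
    """Continuous wavelet transform with an analytic Morlet mother wavelet,
    evaluated by FFT convolution (Torrence & Compo style normalisation)."""
    n = x.size
    w = 2 * np.pi * np.fft.fftfreq(n, dt)          # angular frequencies
    xf = np.fft.fft(x)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        psi_f = np.pi**-0.25 * np.exp(-0.5 * (s * w - w0) ** 2) * (w > 0)
        out[i] = np.fft.ifft(xf * psi_f) * np.sqrt(s)
    return out

t = np.arange(512.0)
s1 = np.sin(2 * np.pi * t / 32)                    # "site 1" burst rhythm
s2 = np.sin(2 * np.pi * t / 32 + 0.7)              # "site 2": same rhythm, phase-shifted
scales = np.arange(4.0, 64.0, 4.0)

W1, W2 = morlet_cwt(s1, scales), morlet_cwt(s2, scales)
xwt = W1 * np.conj(W2)                             # cross wavelet transform
peak_scale = scales[np.argmax(np.abs(xwt).mean(axis=1))]
rel_phase = np.angle(xwt.mean(axis=1))             # mean relative phase per scale
print(peak_scale)
```

    At the scale of strongest common power, the relative phase should sit near the imposed 0.7 rad shift (up to sign convention); the phase-synchronization statistics described in the abstract build on exactly this quantity.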

  10. Directional dual-tree rational-dilation complex wavelet transform.

    PubMed

    Serbes, Gorkem; Gulcur, Halil Ozcan; Aydin, Nizamettin

    2014-01-01

    The dyadic discrete wavelet transform (DWT) has been used successfully in processing signals having non-oscillatory transient behaviour. However, due to the low Q-factor of their wavelet atoms, dyadic DWTs are less effective in processing oscillatory signals such as embolic signals (ESs). ESs are extracted from quadrature Doppler signals, which are the output of Doppler ultrasound systems. In order to process ESs, a pre-processing operation known as phase filtering must first be employed to obtain directional signals from the quadrature Doppler signals. Only then can wavelet-based methods be applied to these directional signals for further analysis. In this study, a directional dual-tree rational-dilation complex wavelet transform, which can be applied directly to quadrature signals and can extract directional information during analysis, is introduced.

  11. Performance of the Wavelet Decomposition on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.

  12. Wavelet-space correlation imaging for high-speed MRI without motion monitoring or data segmentation.

    PubMed

    Li, Yu; Wang, Hui; Tkach, Jean; Roach, David; Woods, Jason; Dumoulin, Charles

    2015-12-01

    This study aims to (i) develop a new high-speed MRI approach by implementing correlation imaging in wavelet-space, and (ii) demonstrate the ability of wavelet-space correlation imaging to image human anatomy with involuntary or physiological motion. Correlation imaging is a high-speed MRI framework in which image reconstruction relies on quantification of data correlation. The presented work integrates correlation imaging with a wavelet transform technique developed originally in the field of signal and image processing. This provides a new high-speed MRI approach to motion-free data collection without motion monitoring or data segmentation. The new approach, called "wavelet-space correlation imaging", is investigated in brain imaging with involuntary motion and chest imaging with free-breathing. Wavelet-space correlation imaging can exceed the speed limit of conventional parallel imaging methods. Using this approach with high acceleration factors (6 for brain MRI, 16 for cardiac MRI, and 8 for lung MRI), motion-free images can be generated in static brain MRI with involuntary motion and nonsegmented dynamic cardiac/lung MRI with free-breathing. Wavelet-space correlation imaging enables high-speed MRI in the presence of involuntary motion or physiological dynamics without motion monitoring or data segmentation. © 2014 Wiley Periodicals, Inc.

  13. A high-performance seizure detection algorithm based on Discrete Wavelet Transform (DWT) and EEG

    PubMed Central

    Chen, Duo; Wan, Suiren; Xiang, Jing; Bao, Forrest Sheng

    2017-01-01

    In the past decade, the Discrete Wavelet Transform (DWT), a powerful time-frequency tool, has been widely used in computer-aided signal analysis of epileptic electroencephalography (EEG), such as the detection of seizures. One important hurdle in applications of the DWT is its settings, which were chosen empirically or arbitrarily in previous works. The objective of this study was to develop a framework for automatically searching for the optimal DWT settings, to improve accuracy and to reduce the computational cost of seizure detection. To address this, we developed a method to decompose EEG data into 7 commonly used wavelet families, to the maximum theoretical level of each mother wavelet. Wavelets and decomposition levels providing the highest accuracy in each wavelet family were then searched in an exhaustive selection of frequency bands, which showed optimal accuracy and low computational cost. The selection of frequency bands and features removed approximately 40% of redundancies. The developed algorithm achieved promising performance on two well-tested EEG datasets (accuracy >90% for both datasets). The experimental results demonstrate that the settings of the DWT substantially affect its performance on seizure detection. Compared with existing wavelet-based seizure detection methods, the new approach is more accurate and transferable among datasets. PMID:28278203
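
    A minimal version of the per-band feature extraction that such DWT pipelines rely on can be sketched with a multilevel Haar DWT and log-energy features. The Haar wavelet, the sampling rate, the surrogate signals, and the five-level depth are illustrative assumptions; the study itself searched over seven wavelet families and data-driven decomposition levels.

```python
import numpy as np

def haar_dwt_bands(x, levels):
    """Multilevel Haar DWT: detail bands (finest to coarsest) plus final approximation."""
    bands, a = [], np.asarray(x, float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        bands.append(d)
    bands.append(a)
    return bands

def band_features(x, levels=5):
    """Per-band log-energy, a common wavelet feature for seizure detection."""
    return np.array([np.log(np.sum(b ** 2) + 1e-12) for b in haar_dwt_bands(x, levels)])

rng = np.random.default_rng(2)
t = np.arange(1024) / 256.0                        # 4 s of a 256 Hz "EEG" channel
background = rng.normal(scale=0.5, size=t.size)
spike_wave = 2.0 * np.sin(2 * np.pi * 3.0 * t)     # 3 Hz spike-and-wave surrogate

f_bg = band_features(background)
f_sz = band_features(background + spike_wave)

# The 3 Hz surrogate concentrates energy in the coarsest (0-4 Hz) band:
delta = f_sz[-1] - f_bg[-1]
print(delta)
```

    A classifier (the paper uses these features with standard machine-learning back-ends) would then be trained on such per-band feature vectors.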

  14. Wavelet-space Correlation Imaging for High-speed MRI without Motion Monitoring or Data Segmentation

    PubMed Central

    Li, Yu; Wang, Hui; Tkach, Jean; Roach, David; Woods, Jason; Dumoulin, Charles

    2014-01-01

    Purpose This study aims to 1) develop a new high-speed MRI approach by implementing correlation imaging in wavelet-space, and 2) demonstrate the ability of wavelet-space correlation imaging to image human anatomy with involuntary or physiological motion. Methods Correlation imaging is a high-speed MRI framework in which image reconstruction relies on quantification of data correlation. The presented work integrates correlation imaging with a wavelet transform technique developed originally in the field of signal and image processing. This provides a new high-speed MRI approach to motion-free data collection without motion monitoring or data segmentation. The new approach, called “wavelet-space correlation imaging”, is investigated in brain imaging with involuntary motion and chest imaging with free-breathing. Results Wavelet-space correlation imaging can exceed the speed limit of conventional parallel imaging methods. Using this approach with high acceleration factors (6 for brain MRI, 16 for cardiac MRI and 8 for lung MRI), motion-free images can be generated in static brain MRI with involuntary motion and nonsegmented dynamic cardiac/lung MRI with free-breathing. Conclusion Wavelet-space correlation imaging enables high-speed MRI in the presence of involuntary motion or physiological dynamics without motion monitoring or data segmentation. PMID:25470230

  15. Multidimensional, mapping-based complex wavelet transforms.

    PubMed

    Fernandes, Felix C A; van Spaendonck, Rutger L C; Burrus, C Sidney

    2005-01-01

    Although the discrete wavelet transform (DWT) is a powerful tool for signal and image processing, it has three serious disadvantages: shift sensitivity, poor directionality, and lack of phase information. To overcome these disadvantages, we introduce multidimensional, mapping-based, complex wavelet transforms that consist of a mapping onto a complex function space followed by a DWT of the complex mapping. Unlike other popular transforms that also mitigate DWT shortcomings, the decoupled implementation of our transforms has two important advantages. First, the controllable redundancy of the mapping stage offers a balance between degree of shift sensitivity and transform redundancy. This allows us to create a directional, nonredundant, complex wavelet transform with potential benefits for image coding systems. To the best of our knowledge, no other complex wavelet transform is simultaneously directional and nonredundant. The second advantage of our approach is the flexibility to use any DWT in the transform implementation. As an example, we exploit this flexibility to create the complex double-density DWT: a shift-insensitive, directional, complex wavelet transform with a low redundancy of (3M - 1)/(2M - 1) in M dimensions. No other transform achieves all these properties at a lower redundancy, to the best of our knowledge. By exploiting the advantages of our multidimensional, mapping-based complex wavelet transforms in seismic signal-processing applications, we have demonstrated state-of-the-art results.

  16. Exact reconstruction with directional wavelets on the sphere

    NASA Astrophysics Data System (ADS)

    Wiaux, Y.; McEwen, J. D.; Vandergheynst, P.; Blanc, O.

    2008-08-01

    A new formalism is derived for the analysis and exact reconstruction of band-limited signals on the sphere with directional wavelets. It represents an evolution of the wavelet formalism previously developed by Antoine & Vandergheynst and by Wiaux et al. The translations of the wavelets at any point on the sphere and their proper rotations are still defined through the continuous three-dimensional rotations. The dilations of the wavelets are directly defined in harmonic space through a new kernel dilation, which is a modification of an existing harmonic dilation. A family of factorized steerable functions with compact harmonic support which are suitable for this kernel dilation is first identified. A scale-discretized wavelet formalism is then derived, relying on this dilation. The discrete nature of the analysis scales allows the exact reconstruction of band-limited signals. A corresponding exact multi-resolution algorithm is finally described and an implementation is tested. The formalism is of interest notably for the denoising or the deconvolution of signals on the sphere with a sparse expansion in wavelets. In astrophysics, it finds a particular application in the identification of localized directional features in cosmic microwave background data, such as the imprint of topological defects, in particular cosmic strings, and in their reconstruction after separation from the other signal components.

  17. Texture feature extraction based on wavelet transform and gray-level co-occurrence matrices applied to osteosarcoma diagnosis.

    PubMed

    Hu, Shan; Xu, Chao; Guan, Weiqiao; Tang, Yong; Liu, Yana

    2014-01-01

    Osteosarcoma is the most common malignant bone tumor among children and adolescents. In this study, image texture analysis was used to extract texture features from bone CR images to evaluate the recognition rate of osteosarcoma. To obtain the optimal set of features, Sym4 and Db4 wavelet transforms and gray-level co-occurrence matrices were applied to the images, with statistical methods used to optimize the feature selection. To evaluate the performance of these methods, a support vector machine algorithm was used. The experimental results demonstrated that the Sym4 wavelet had a higher classification accuracy (93.44%) than the Db4 wavelet with respect to osteosarcoma occurrence in the epiphysis, whereas the Db4 wavelet had a higher classification accuracy (96.25%) for osteosarcoma occurrence in the diaphysis. The accuracy, sensitivity, specificity, and ROC results obtained using the wavelet features were all higher than those obtained using the features derived from the GLCM method. It is concluded that a set of texture features can be extracted from the wavelets and used in computer-aided osteosarcoma diagnosis systems. In addition, this study confirms that multi-resolution analysis is a useful tool for texture feature extraction during bone CR image processing.
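
    The GLCM side of the feature set can be sketched directly: co-occurrence counts for one pixel offset, normalized to probabilities, then reduced to standard Haralick-style statistics. The offset, gray-level count, and toy images below are illustrative assumptions, not the study's acquisition settings.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one offset, normalised to probabilities."""
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()

def glcm_features(p):
    """Contrast, energy, and homogeneity (classic Haralick-style statistics)."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))
    energy = float(np.sum(p ** 2))
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    return contrast, energy, homogeneity

rng = np.random.default_rng(3)
flat = np.zeros((16, 16), dtype=int)               # featureless "soft tissue" patch
rough = rng.integers(0, 8, size=(16, 16))          # noisy "trabecular" patch

c_flat, e_flat, _ = glcm_features(glcm(flat))
c_rough, e_rough, _ = glcm_features(glcm(rough))
print(c_flat, c_rough)
```

    Feature vectors like these, concatenated with the wavelet-subband statistics, would then feed the SVM classifier described in the abstract.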

  18. Synthesis of wavelet envelope in 2-D random media having power-law spectra: comparison with FD simulations

    NASA Astrophysics Data System (ADS)

    Sato, Haruo; Fehler, Michael C.

    2016-10-01

    The envelope broadening and the peak delay of the S-wavelet of a small earthquake with increasing travel distance are results of scattering by random velocity inhomogeneities in the earth medium. As a simple mathematical model, Sato proposed a new stochastic synthesis of the scalar wavelet envelope in 3-D von Kármán type random media when the centre wavenumber of the wavelet is in the power-law spectral range of the random velocity fluctuation. The essential idea is to split the random medium spectrum into two components using the centre wavenumber as a reference: the long-scale (low-wavenumber spectral) component produces the peak delay and the envelope broadening by multiple scattering around the forward direction; the short-scale (high-wavenumber spectral) component attenuates wave amplitude by wide angle scattering. The former is calculated by the Markov approximation based on the parabolic approximation and the latter is calculated by the Born approximation. Here, we extend the theory for the envelope synthesis of a wavelet in 2-D random media, which makes it easy to compare with finite difference (FD) simulation results. The synthetic wavelet envelope is analytically written by using the random medium parameters in the angular frequency domain. For the case that the power spectral density function of the random velocity fluctuation has a steep roll-off at large wavenumbers, the envelope broadening is small and frequency independent, and scattering attenuation is weak. For the case of a small roll-off, however, the envelope broadening is large and increases with frequency, and the scattering attenuation is strong and increases with frequency. As a preliminary study, we compare synthetic wavelet envelopes with the average of FD simulation wavelet envelopes in 50 synthesized random media, which are characterized by the RMS fractional velocity fluctuation ε = 0.05, correlation scale a = 5 km and the background wave velocity V0 = 4 km s-1. 
We use the radiation of a 2 Hz Ricker wavelet from a point source. For all the cases of von Kármán order κ = 0.1, 0.5 and 1, we find the synthetic wavelet envelopes are a good match to the characteristics of FD simulation wavelet envelopes in a time window starting from the onset through the maximum peak to the time when the amplitude decreases to half the peak amplitude.
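
    The 2 Hz Ricker wavelet used as the source in the FD comparison has a standard closed form; a sketch of generating it and checking its dominant frequency follows (the sampling interval and time window are assumptions, not the authors' simulation settings).

```python
import numpy as np

def ricker(t, f0):
    """Ricker wavelet with centre frequency f0 (Hz) and unit peak at t = 0."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

dt = 0.01                                  # assumed 100 Hz sampling
t = (np.arange(401) - 200) * dt            # 4 s window centred on zero
src = ricker(t, 2.0)                       # the 2 Hz source pulse

spec = np.abs(np.fft.rfft(src))
freqs = np.fft.rfftfreq(src.size, dt)
dominant = freqs[np.argmax(spec)]          # amplitude spectrum peaks near f0
print(dominant)
```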

  19. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    NASA Astrophysics Data System (ADS)

    Marinkovic, Slavica; Guillemot, Christine

    2006-12-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as a multiple-hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  20. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    The ensemble Kalman filter (EnKF), a Monte Carlo sequential data assimilation method, has emerged as a promising tool for subsurface media characterization during the past decade. Due to the high computational cost of large ensembles, EnKF is limited to small ensemble sets in practice. This results in spurious correlations in the covariance structure, leading to incorrect updates or even divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove or mitigate the spurious-correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered for thresholding the forecast covariance and gain matrices: the hard, soft, lasso, and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water-flooding (petroleum reservoir) cases with different levels of heterogeneity/nonlinearity. Beside the adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be applied judiciously during the early assimilation cycles. The proposed adaptive thresholding scheme outperforms the other methods for subsurface characterization of the underlying benchmarks.
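
    The hard, soft (lasso), and SCAD rules named above have standard closed forms that can be applied entrywise to a covariance or gain matrix. A sketch under the usual SCAD parameter a = 3.7 follows; the toy covariance and threshold value are illustrative, not the paper's adaptive threshold-selection procedure.

```python
import numpy as np

def hard_threshold(x, lam):
    """Keep entries with |x| > lam, zero the rest."""
    return np.where(np.abs(x) > lam, x, 0.0)

def soft_threshold(x, lam):
    """Lasso rule: shrink every entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def scad_threshold(x, lam, a=3.7):
    """Smoothly Clipped Absolute Deviation rule (Fan & Li): soft near zero,
    linear interpolation for 2*lam < |x| <= a*lam, identity beyond a*lam."""
    ax = np.abs(x)
    mid = ((a - 1) * x - np.sign(x) * a * lam) / (a - 2)
    return np.where(ax <= 2 * lam, soft_threshold(x, lam),
                    np.where(ax <= a * lam, mid, x))

cov = np.array([[1.00, 0.04, 0.60],
                [0.04, 1.00, 0.08],
                [0.60, 0.08, 1.00]])   # toy forecast covariance with spurious entries
lam = 0.1

# SCAD zeros the small (likely spurious) entries but, unlike soft thresholding,
# leaves large entries such as the diagonal completely unbiased:
print(scad_threshold(cov, lam))
```

    The bias-free treatment of large entries is why SCAD tends to be the most robust of the three rules in the paper's covariance and gain experiments.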

  1. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial for preventing unexpected accidents and reducing economic loss. In past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called the empirical wavelet transform has attracted much attention from researchers and engineers, and its applications to bearing fault diagnosis have been reported. The main problem of the empirical wavelet transform is that the Fourier segments it requires are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which means that the Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noise and other strong vibration components. In this paper, a sparsity guided empirical wavelet transform is proposed to automatically establish the Fourier segments required in the empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover the required Fourier segments and reveal single and multiple railway axle bearing defects. In addition, comparisons with three popular signal processing methods, including ensemble empirical mode decomposition, the fast kurtogram, and the fast spectral correlation, are conducted to highlight the superiority of the proposed method.

  2. Asymmetric vibration in a two-layer vocal fold model with left-right stiffness asymmetry: Experiment and simulation

    PubMed Central

    Zhang, Zhaoyan; Hieu Luu, Trung

    2012-01-01

    Vibration characteristics of a self-oscillating two-layer vocal fold model with left-right asymmetry in body-layer stiffness were experimentally and numerically investigated. Two regimes of distinct vibratory pattern were identified as a function of left-right stiffness mismatch. In the first regime with extremely large left-right stiffness mismatch, phonation onset resulted from an eigenmode synchronization process that involved only eigenmodes of the soft fold. Vocal fold vibration in this regime was dominated by a large-amplitude vibration of the soft fold, and phonation frequency was determined by the properties of the soft fold alone. The stiff fold was only enslaved to vibrate at a much reduced amplitude. In the second regime with small left-right stiffness mismatch, eigenmodes of both folds actively participated in the eigenmode synchronization process. The two folds vibrated with comparable amplitude, but the stiff fold consistently led the soft fold in phase for all conditions. A qualitatively good agreement was obtained between experiment and simulation, although the simulations generally underestimated phonation threshold pressure and onset frequency. The clinical implications of the results of this study are also discussed. PMID:22978891

  4. A multi-physics model for ultrasonically activated soft tissue.

    PubMed

    Rahul; De, Suvranu

    2017-02-01

    A multi-physics model has been developed to investigate the effects of cellular-level mechanisms on the thermomechanical response of ultrasonically activated soft tissue. Cellular-level cavitation effects have been incorporated in the tissue-level continuum model to accurately determine thermodynamic states such as temperature and pressure. A viscoelastic material model is assumed for the macromechanical response of the tissue. The cavitation-model-based equation of state supplies the continuum-level thermomechanical model with the temperature and the additional pressure arising from evaporation of intracellular and cellular water, which absorbs the heat generated by structural and viscoelastic heating in the tissue. The thermomechanical response of soft tissue is studied over the operational range of oscillation frequencies and applied loads typical of ultrasonically activated surgical instruments. The model is shown to capture characteristics of ultrasonically activated soft-tissue deformation and temperature evolution. At the cellular level, evaporation of water below the boiling temperature under ambient conditions is indicative of protein denaturation around the temperature threshold for coagulation of tissues. Further, with increasing operating frequency (or loading), the temperature rises faster, leading to rapid evaporation of tissue cavity water, which may lead to accelerated protein denaturation and coagulation.

  5. Droplet impact on soft viscoelastic surfaces.

    PubMed

    Chen, Longquan; Bonaccurso, Elmar; Deng, Peigang; Zhang, Haibo

    2016-12-01

    In this work, we experimentally investigate the impact of water droplets onto soft viscoelastic surfaces with a wide range of impact velocities. Several impact phenomena, which depend on the dynamic interaction between the droplets and viscoelastic surfaces, have been identified and analyzed. At low We, complete rebound is observed when the impact velocity is between a lower and an upper threshold, beyond which droplets are deposited on the surface after impact. At intermediate We, entrapment of an air bubble inside the impinging droplets is found on soft surfaces, while a bubble entrapment on the surface is observed on rigid surfaces. At high We, partial rebound is only identified on the most rigid surface at We≳92. Rebounding droplets behave similarly to elastic drops rebounding on superhydrophobic surfaces and the impact process is independent of surface viscoelasticity. Further, surface viscoelasticity does not influence drop spreading after impact, as the surfaces behave like rigid surfaces, but it does affect drop recoiling. Also, the postimpact drop oscillation on soft viscoelastic surfaces is influenced by dynamic wettability of these surfaces. Comparing sessile drop oscillation with a damped harmonic oscillator allows us to conclude that surface viscoelasticity affects the damping coefficient and liquid surface tension sets the spring constant of the system.

  6. Friction on a granular-continuum interface: Effects of granular media

    NASA Astrophysics Data System (ADS)

    Ecke, Robert; Geller, Drew

    We consider the frictional interactions of two soft plates with interposed granular material subject to normal and shear forces. The plates are soft photo-elastic material, have length 50 cm, and are separated by a gap of variable width from 0 to 20 granular particle diameters. The granular materials are two-dimensional rods that are bi-dispersed in size to prevent crystallization. Different rod materials with frictional coefficients between 0.04 < μ < 0.5 are used to explore the effects of inter-granular friction on the effective friction of a granular medium. The gap is varied to test the dependence of the friction coefficient on the thickness of the granular layer. Because the soft plates absorb most of the displacement associated with the compressional normal force, the granular packing fractions are close to a jamming threshold, probably a shear jamming criterion. The overall shear and normal forces are measured using force sensors and the local strain tensor over a central portion of the gap is obtained using relative displacements of fiducial markers on the soft elastic material. These measurements provide a good characterization of the global and local forces giving rise to an effective friction coefficient. Funded by US DOE LDRD Program.

  7. Automated segmentations of skin, soft-tissue, and skeleton, from torso CT images

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Hara, Takeshi; Fujita, Hiroshi; Yokoyama, Ryujiro; Kiryu, Takuji; Hoshi, Hiroaki

    2004-05-01

    We have been developing a computer-aided diagnosis (CAD) scheme for automatically recognizing human tissue and organ regions from high-resolution torso CT images. We show some initial results for extracting skin, soft-tissue and skeleton regions. 139 patient cases of torso CT images (male 92, female 47; age: 12-88) were used in this study. Each case was imaged with a common protocol (120 kV/320 mA) and covered the whole torso with an isotropic spatial resolution of about 0.63 mm and a density resolution of 12 bits. A gray-level-thresholding-based procedure was applied to separate the human body from the background. Density and distance-to-body-surface features were used to determine the skin and to separate soft tissue from the other regions. A 3-D region-growing-based method was used to extract the skeleton. We applied this system to the 139 cases and found that the skin, soft-tissue and skeleton regions were recognized correctly in 93% of the patient cases. Evaluated slice by slice, the accuracy of the segmentation results was acceptable. This scheme will be included in CAD systems for detecting and diagnosing abnormal lesions in multi-slice torso CT images.
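    A minimal sketch of the gray-level thresholding idea, on a synthetic slice rather than real CT data; the Hounsfield-style threshold values and all names below are illustrative assumptions, not the authors' tuned procedure (which also uses distance-to-surface features and 3-D region growing):

```python
import numpy as np

# Minimal sketch of gray-level thresholding on a synthetic CT-like slice.
# Thresholds and names are illustrative assumptions.

def segment(slice_hu, body_thresh=-500.0, bone_thresh=300.0):
    body = slice_hu > body_thresh                # air background is ~ -1000 HU
    bone = body & (slice_hu > bone_thresh)       # dense voxels inside the body
    soft = body & ~bone                          # the rest is soft tissue
    return body, soft, bone

# synthetic 64x64 slice: air background, soft-tissue disc, bone core
yy, xx = np.mgrid[:64, :64]
r = np.hypot(yy - 32, xx - 32)
img = np.full((64, 64), -1000.0)
img[r < 25] = 40.0                               # soft tissue
img[r < 8] = 700.0                               # bone
body, soft, bone = segment(img)
```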

  8. Experimental evidence for an absorbing phase transition underlying yielding of a soft glass

    NASA Astrophysics Data System (ADS)

    Nagamanasa, K. Hima; Gokhale, Shreyas; Sood, A. K.; Ganapathy, Rajesh

    2014-03-01

    A characteristic feature of solids ranging from foams to atomic crystals is the existence of a yield point, which marks the threshold stress beyond which a material undergoes plastic deformation. In hard materials, it is well-known that local yield events occur collectively in the form of intermittent avalanches. The avalanche size distributions exhibit power-law scaling indicating the presence of self-organized criticality. These observations led to predictions of a non-equilibrium phase transition at the yield point. By contrast, for soft solids like gels and dense suspensions, no such predictions exist. In the present work, by combining particle scale imaging with bulk rheology, we provide direct evidence for a non-equilibrium phase transition governing yielding of an archetypal soft solid: a colloidal glass. The order parameter and the relaxation time exponents revealed that yielding is an absorbing phase transition that belongs to the conserved directed percolation universality class. We also identified a growing length scale associated with clusters of particles with high Debye-Waller factor. Our findings highlight the importance of correlations between local yield events and may well stimulate the development of a unified description of yielding of soft solids.

  9. Pattern Transitions in a Soft Cylindrical Shell

    NASA Astrophysics Data System (ADS)

    Yang, Yifan; Dai, Hui-Hui; Xu, Fan; Potier-Ferry, Michel

    2018-05-01

    Instability patterns of rolling up a sleeve appear more intricate than the ones of walking over a rug on floor, both characterized as systems of uniaxially compressed soft film on stiff substrate. This can be explained by curvature effects. To investigate pattern transitions on a curved surface, we study a soft shell sliding on a rigid cylinder by experiments, computations and theoretical analyses. We reveal a novel postbuckling phenomenon involving multiple successive bifurcations: smooth-wrinkle-ridge-sagging transitions. The shell initially buckles into periodic axisymmetric wrinkles at the threshold and then a wrinkle-to-ridge transition occurs upon further axial compression. When the load increases to the third bifurcation, the amplitude of the ridge reaches its limit and the symmetry is broken with the ridge sagging into a recumbent fold. It is identified that hysteresis loops and the Maxwell equal-energy conditions are associated with the coexistence of wrinkle-ridge or ridge-sagging patterns. Such a bifurcation scenario is inherently general and independent of material constitutive models.

  10. Soft Expansion of Double-Real-Virtual Corrections to Higgs Production at N$^3$LO

    DOE PAGES

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; ...

    2015-05-15

    We present methods to compute higher orders in the threshold expansion for the one-loop production of a Higgs boson in association with two partons at hadron colliders. This process contributes to the N$^3$LO Higgs production cross section beyond the soft-virtual approximation. We use reverse unitarity to expand the phase-space integrals in the small kinematic parameters and to reduce the coefficients of the expansion to a small set of master integrals. We describe two methods for the calculation of the master integrals. The first was introduced for the calculation of the soft triple-real radiation relevant to N$^3$LO Higgs production. The second uses a particular factorization of the three-body phase-space measure and the knowledge of the scaling properties of the integral itself. Our result is presented as a Laurent expansion in the dimensional regulator, although some of the master integrals are computed to all orders in this parameter.

  11. Wavelet based de-noising of breath air absorption spectra profiles for improved classification by principal component analysis

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Shapovalov, A. V.; Borisov, A. V.; Vrazhnov, D. A.; Nikolaev, V. V.; Nikiforova, O. Yu.

    2015-11-01

    We compare different mother wavelets used for de-noising model and experimental data in the form of absorption spectra of exhaled air. The impact of wavelet de-noising on the quality of classification by principal component analysis is also discussed.

  12. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  13. MRS3D: 3D Spherical Wavelet Transform on the Sphere

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2011-12-01

    Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D Spherical Fourier-Bessel (SFB) analysis is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. We present a new fast Discrete Spherical Fourier-Bessel Transform (DSFBT) based on both a discrete Bessel Transform and the HEALPix angular pixelisation scheme. We tested the 3D wavelet transform and, as a toy application, applied a denoising algorithm in wavelet space to the Virgo large box cosmological simulations, finding that we can successfully remove noise without much loss to the large scale structure. The new spherical 3D isotropic wavelet transform, called MRS3D, is ideally suited to analysing and denoising future 3D spherical cosmological surveys. It is built on the IDL and HEALPix packages, both of which must be installed before it can be used.

  14. Multiscale Support Vector Learning With Projection Operator Wavelet Kernel for Nonlinear Dynamical System Identification.

    PubMed

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2016-02-03

    A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.
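    The closed-form projection operator kernel itself is not reproduced in this abstract. As a hedged illustration of the general wavelet-kernel family it belongs to, the sketch below implements the classic translation-invariant wavelet kernel k(x, z) = prod_j h((x_j - z_j)/a), with the mother h(u) = cos(1.75u) exp(-u^2/2) that is reported admissible in the kernel-methods literature; this is not the paper's projection operator kernel, and all names are ours:

```python
import numpy as np

# Classic translation-invariant wavelet kernel (illustrative, not the
# paper's projection operator kernel): k(x, z) = prod_j h((x_j - z_j)/a).

def mother(u):
    # Morlet-like mother wavelet commonly used in wavelet kernels
    return np.cos(1.75 * u) * np.exp(-u**2 / 2.0)

def wavelet_kernel(x, z, a=1.0):
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return float(np.prod(mother((x - z) / a)))

k_self = wavelet_kernel([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # identical inputs
k_sym_ab = wavelet_kernel([0.0, 1.0], [1.0, 0.0], a=0.5)
k_sym_ba = wavelet_kernel([1.0, 0.0], [0.0, 1.0], a=0.5)
```

    Because the mother function is even, the kernel is symmetric, and k(x, x) = 1 since h(0) = 1; the dilation parameter a plays the role of the scale in the multiscale learning the abstract describes.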

  15. Segmentation of dermoscopy images using wavelet networks.

    PubMed

    Sadri, Amir Reza; Zekri, Maryam; Sadri, Saeed; Gheissari, Niloofar; Mokhtari, Mojgan; Kolahdouzan, Farzaneh

    2013-04-01

    This paper introduces a new approach for the segmentation of skin lesions in dermoscopic images based on a wavelet network (WN). The WN presented here is a member of fixed-grid WNs that is formed with no need for training. In this WN, after formation of the wavelet lattice, the shift and scale parameters of the wavelets are determined with a two-stage screening that selects effective wavelets, and the orthogonal least squares algorithm is used to calculate the network weights and to optimize the network structure. The two screening stages increase the globality of the wavelet lattice and provide a better estimation of the function, especially at larger scales. The R, G, and B values of a dermoscopy image are used as the network inputs during network structure formation. The image is then segmented and the skin lesion's exact boundary is determined accordingly. The segmentation algorithm was applied to 30 dermoscopic images and evaluated with 11 different metrics, using the segmentation result obtained by a skilled pathologist as the ground truth. Experimental results show that our method acts more effectively in comparison with some modern techniques that have been successfully used in many medical imaging problems.

  16. Speckle reduction during all-fiber common-path optical coherence tomography of the cavernous nerves

    NASA Astrophysics Data System (ADS)

    Chitchian, Shahab; Fiddy, Michael; Fried, Nathaniel M.

    2009-02-01

    Improvements in identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery, which are responsible for erectile function, may improve nerve preservation and postoperative sexual potency. In this study, we use a rat prostate, ex vivo, to evaluate the feasibility of optical coherence tomography (OCT) as a diagnostic tool for real-time imaging and identification of the cavernous nerves. A novel OCT system based on an all single-mode fiber common-path interferometer-based scanning system is used for this purpose. A wavelet shrinkage denoising technique using Stein's unbiased risk estimator (SURE) algorithm to calculate a data-adaptive threshold is implemented for speckle noise reduction in the OCT image. The signal-to-noise ratio (SNR) was improved by 9 dB and the image quality metrics of the cavernous nerves also improved significantly.
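    The shrinkage step behind such speckle reduction can be sketched with a one-level orthonormal Haar transform. Note the hedge: the paper computes a data-adaptive SURE threshold, whereas this sketch substitutes Donoho's universal threshold sigma*sqrt(2 ln n) for brevity, and the piecewise-constant test signal is our own:

```python
import numpy as np

# One-level orthonormal Haar shrinkage. The SURE-based adaptive threshold
# of the paper is replaced by the universal threshold (an assumption).

def soft(x, t):
    # soft-thresholding operator: shrink magnitudes toward zero by t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(x, sigma):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)       # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)       # detail coefficients
    d = soft(d, sigma * np.sqrt(2.0 * np.log(len(x))))
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2.0)             # inverse Haar step
    y[1::2] = (a - d) / np.sqrt(2.0)
    return y

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 4.0, 0.0, -4.0], 64)     # piecewise-constant signal
noisy = clean + 0.5 * rng.standard_normal(clean.size)
denoised = haar_denoise(noisy, sigma=0.5)
```

    Shrinking only the detail band suppresses the noise while leaving the piecewise-constant structure, which is carried by the approximation band, essentially intact.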

  17. A Wavelet Neural Network Optimal Control Model for Traffic-Flow Prediction in Intelligent Transport Systems

    NASA Astrophysics Data System (ADS)

    Huang, Darong; Bai, Xing-Rong

    Based on wavelet transform and neural network theory, a traffic-flow prediction model for use in optimal control of intelligent transport systems is constructed. First, the scale and wavelet coefficients are extracted from online measured raw traffic-flow data via the wavelet transform. Second, an artificial neural network prediction model is constructed and trained using the coefficient sequences as inputs and the raw data as outputs. We also design the operating principle of the optimal control system for the traffic-flow forecasting model, its network topology, and its data-transmission model. Finally, a simulated example shows that the technique is effective and accurate. The theoretical results indicate that the wavelet neural network prediction model and algorithms have broad prospects for practical application.

  18. Effective implementation of wavelet Galerkin method

    NASA Astrophysics Data System (ADS)

    Finěk, Václav; Šimunková, Martina

    2012-11-01

    It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time-consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on the hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identify the nonzero elements of the stiffness matrix corresponding to the above problem and perform the matrix-vector multiplication only with these nonzero elements.
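    The best N-term benchmark mentioned above is easy to state in code: transform, keep the N largest-magnitude coefficients, invert. Below is a numpy sketch with a multilevel Haar transform; the helper names are ours, and the adaptive scheme in the paper targets elliptic equations rather than this toy signal:

```python
import numpy as np

# Best N-term wavelet approximation: keep the N largest-magnitude
# coefficients of an orthonormal multilevel Haar transform, zero the rest.

def haar_fwd(x):
    x = np.asarray(x, dtype=float).copy()
    coeffs = []
    while len(x) > 1:
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
        coeffs.append(d)
        x = a
    coeffs.append(x)                             # coarsest approximation
    return coeffs

def haar_inv(coeffs):
    x = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        y = np.empty(2 * len(d))
        y[0::2] = (x + d) / np.sqrt(2.0)
        y[1::2] = (x - d) / np.sqrt(2.0)
        x = y
    return x

def best_n_term(x, n):
    coeffs = haar_fwd(x)
    flat = np.concatenate(coeffs)
    cutoff = np.sort(np.abs(flat))[-n]           # magnitude of the n-th largest
    flat[np.abs(flat) < cutoff] = 0.0
    out, i = [], 0
    for c in coeffs:                             # restore per-level layout
        out.append(flat[i:i + len(c)])
        i += len(c)
    return haar_inv(out)

x = np.repeat([1.0, 5.0, 2.0], [32, 32, 64])     # piecewise constant: Haar-sparse
approx = best_n_term(x, n=6)
```

    Because the piecewise-constant signal has only three nonzero Haar coefficients, retaining N = 6 terms already reconstructs it exactly, which is the sense in which sparse targets are cheap to approximate.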

  19. Wavelet-based energy features for glaucomatous image classification.

    PubMed

    Dua, Sumeet; Acharya, U Rajendra; Chowriappa, Pradeep; Sree, S Vinitha

    2012-01-01

    Texture features within images are actively pursued for accurate and efficient glaucoma classification. Energy distribution over wavelet subbands is applied to find these important texture features. In this paper, we investigate the discriminatory potential of wavelet features obtained from the Daubechies (db3), Symlet (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. We propose a novel technique to extract energy signatures obtained using the 2-D discrete wavelet transform, and subject these signatures to different feature ranking and feature selection strategies. We have gauged the effectiveness of the resultant ranked and selected subsets of features using support vector machine, sequential minimal optimization, random forest, and naïve Bayes classification strategies. We observed an accuracy of around 93% using tenfold cross-validation, demonstrating the effectiveness of these methods.
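    A hedged 1-D analogue of the energy signature: the fraction of signal energy carried by each subband of an orthonormal Haar decomposition. The paper extracts these features from 2-D transforms with db3, sym3 and biorthogonal filters; Haar on a sine is used here only to keep the sketch self-contained:

```python
import numpy as np

# Relative energy per subband of a multilevel orthonormal Haar transform.
# A 1-D stand-in for the paper's 2-D subband energy features.

def haar_energies(x, levels=3):
    x = np.asarray(x, dtype=float).copy()
    energies = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        energies.append(np.sum(d**2))            # detail-band energy
        x = a
    energies.append(np.sum(x**2))                # residual approximation energy
    return np.array(energies) / np.sum(energies)

t = np.linspace(0, 1, 256, endpoint=False)
feat = haar_energies(np.sin(2 * np.pi * 4 * t))  # low-frequency test signal
```

    By orthonormality the subband energies sum to the total energy, so the feature vector is a proper distribution; for this slowly varying signal almost all energy lands in the approximation band.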

  20. Threshold and flavor effects in the renormalization group equations of the MSSM. II. Dimensionful couplings

    NASA Astrophysics Data System (ADS)

    Box, Andrew D.; Tata, Xerxes

    2009-02-01

    We reexamine the one-loop renormalization group equations (RGEs) for the dimensionful parameters of the minimal supersymmetric standard model (MSSM) with broken supersymmetry, allowing for arbitrary flavor structure of the soft SUSY-breaking parameters. We include threshold effects by evaluating the β-functions in a sequence of (nonsupersymmetric) effective theories with heavy particles decoupled at the scale of their mass. We present the most general form for high-scale, soft SUSY-breaking parameters that obtains if we assume that the supersymmetry-breaking mechanism does not introduce new intergenerational couplings. This form, possibly amended to allow additional sources of flavor-violation, serves as a boundary condition for solving the RGEs for the dimensionful MSSM parameters. We then present illustrative examples of numerical solutions to the RGEs. We find that in a SUSY grand unified theory with the scale of SUSY scalars split from that of gauginos and higgsinos, the gaugino mass unification condition may be violated by O(10%). As another illustration, we show that in mSUGRA, the rate for the flavor-violating $\tilde{t}_1 \to c\tilde{Z}_1$ decay obtained using the complete RGE solution is smaller than that obtained using the commonly used “single-step” integration of the RGEs by a factor 10-25, and so may qualitatively change expectations for topologies from top-squark pair production at colliders. Together with the RGEs for dimensionless couplings presented in a companion paper, the RGEs in Appendix 2 of this paper form a complete set of one-loop MSSM RGEs that include threshold and flavor-effects necessary for two-loop accuracy.

  1. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. 
The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing.

  2. Riding the Right Wavelet: Detecting Fracture and Fault Orientation Scale Transitions Using Morlet Wavelets

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; Farrell, N. J.; Smith, M.

    2016-12-01

    The analysis of images through two-dimensional (2D) continuous wavelet transforms makes it possible to acquire local information at different scales of resolution. This characteristic allows us to use wavelet analysis to quantify anisotropic random fields such as networks of fractures. Previous studies [1] have used 2D anisotropic Mexican hat wavelets to analyse the organisation of fracture networks from cm- to km-scales. However, Antoine et al. [2] explained that this technique can have a relatively poor directional selectivity. This suggests the use of a wavelet whose transform is more sensitive to directions of linear features, i.e. 2D Morlet wavelets [3]. In this work, we use a fully-anisotropic Morlet wavelet as implemented by Neupauer & Powell [4], which is anisotropic in its real and imaginary parts and also in its magnitude. We demonstrate the validity of this analytical technique by application to both synthetic - generated according to known distributions of orientations and lengths - and experimentally produced fracture networks. We have analysed SEM Back Scattered Electron images of thin sections of Hopeman Sandstone (Scotland, UK) deformed under triaxial conditions. We find that the Morlet wavelet, compared to the Mexican hat, is more precise in detecting dominant orientations in fracture scale transition at every scale from intra-grain fractures (µm-scale) up to the faults cutting the whole thin section (cm-scale). Through this analysis we can determine the relationship between the initial orientation of tensile microcracks and the final geometry of the through-going shear fault, with total areal coverage of the analysed image. By comparing thin sections from experiments at different confining pressures, we can quantitatively explore the relationship between the observed geometry and the inferred mechanical processes. [1] Ouillon et al., Nonlinear Processes in Geophysics (1995) 2:158 - 177. 
[2] Antoine et al., Cambridge University Press (2008) 192-194. [3] Antoine et al., Signal Processing (1993) 31:241-272. [4] Neupauer & Powell, Computers & Geosciences (2005) 31:456-471.

  3. Option pricing from wavelet-filtered financial series

    NASA Astrophysics Data System (ADS)

    de Almeida, V. T. X.; Moriconi, L.

    2012-10-01

    We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day.
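    The low-pass filtering described above can be sketched with a Haar transform that keeps only the coarse approximation. Eight levels on a length-1024 series retain 4 of 1024 coefficients, i.e. roughly the 0.4% the paper keeps; the random-walk series below is a stand-in for the FTSE100 data, and the helper names are ours:

```python
import numpy as np

# Haar low-pass filtering: descend `levels` scales keeping only the
# approximation coefficients, then reconstruct with zero details.

def haar_lowpass(x, levels):
    x = np.asarray(x, dtype=float).copy()
    kept = len(x)
    for _ in range(levels):
        x = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation only
        kept = len(x)
    for _ in range(levels):                      # inverse steps with d = 0
        y = np.empty(2 * len(x))
        y[0::2] = x / np.sqrt(2.0)
        y[1::2] = x / np.sqrt(2.0)
        x = y
    return x, kept

rng = np.random.default_rng(1)
series = np.cumsum(rng.standard_normal(1024))    # random-walk stand-in
smooth, n_coeffs = haar_lowpass(series, levels=8)
```

    The reconstruction is piecewise constant over blocks of 256 samples, each block carrying its mean, which is the hugely compressed large-scale component the abstract refers to.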

  4. Variable mass pendulum behaviour processed by wavelet analysis

    NASA Astrophysics Data System (ADS)

    Caccamo, M. T.; Magazù, S.

    2017-01-01

    The present work highlights how, in order to characterize the motion of a variable mass pendulum, wavelet analysis can be an effective tool in furnishing information on the time evolution of the oscillation spectral content. In particular, the wavelet transform is applied to process the motion of a hung funnel that loses fine sand at an exponential rate; it is shown how, in contrast to the Fourier transform which furnishes only an average frequency value for the motion, the wavelet approach makes it possible to perform a joint time-frequency analysis. The work is addressed at undergraduate and graduate students.
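    The joint time-frequency analysis can be illustrated with a toy Morlet continuous wavelet transform in plain numpy: for a chirp whose frequency rises in time, the scale of maximum power drifts downward, which is exactly the kind of spectral drift a Fourier transform would average away. All parameter values here are illustrative assumptions:

```python
import numpy as np

# Toy Morlet continuous wavelet transform via direct convolution.
# Smaller scale corresponds to higher analysed frequency.

def morlet_cwt(x, scales, w0=6.0):
    n = len(x)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        u = np.arange(-(n // 2), n // 2) / s
        psi = np.exp(1j * w0 * u) * np.exp(-u**2 / 2.0) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    return out

fs = 200.0
t = np.arange(0, 4, 1 / fs)
chirp = np.sin(2 * np.pi * (5 + 2 * t) * t)      # instantaneous freq 5 + 4t Hz
scales = np.arange(2, 40)
power = np.abs(morlet_cwt(chirp, scales)) ** 2
early = scales[np.argmax(power[:, 100])]         # dominant scale near t = 0.5 s
late = scales[np.argmax(power[:, 700])]          # dominant scale near t = 3.5 s
```

    As the oscillation frequency rises, the dominant scale shrinks, mirroring how the pendulum's spectral content evolves as it loses mass.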

  5. Identification Method of Mud Shale Fractures Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Xia, Weixu; Lai, Fuqiang; Luo, Han

    2018-01-01

    In recent years, inspired by seismic analysis technology, a new method has emerged for analysing fractured mud shale oil and gas reservoirs from logging attributes. By extracting the high-frequency component of the wavelet transform of the logging attributes, formation information hidden in the logging signal is recovered, and fractures that are not recognized by conventional logging are identified; within the identified fracture segments, response features such as the “cycle jump”, “high value” and “spike” become more evident. This finally forms a complete method of wavelet denoising and wavelet high-frequency fracture identification.

  6. Privacy Preserving Technique for Euclidean Distance Based Mining Algorithms Using a Wavelet Related Transform

    NASA Astrophysics Data System (ADS)

    Kadampur, Mohammad Ali; D. v. L. N., Somayajulu

    Privacy preserving data mining is the art of knowledge discovery without revealing the sensitive data of the data set. In this paper a data transformation technique using wavelets is presented for privacy preserving data mining. Wavelets use a well-known energy compaction approach during data transformation, and only the high-energy coefficients are published to the public domain instead of the actual data. It is found that the transformed data preserve Euclidean distances, so the method can be used in privacy preserving clustering. Wavelets also offer improved time complexity.
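    The distance-preservation claim follows from orthonormality: an orthonormal wavelet transform is an isometry, so pairwise Euclidean distances are exactly preserved, and publishing only the high-energy coefficients (as the paper does) then preserves them approximately. A one-level Haar demonstration, with our own variable names:

```python
import numpy as np

# One-level orthonormal Haar transform applied row-wise to a table of
# records. Being orthonormal, it preserves pairwise Euclidean distances.

def haar(x):
    a = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2.0)
    d = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d], axis=-1)

rng = np.random.default_rng(7)
records = rng.standard_normal((5, 16))           # 5 records, 16 attributes
transformed = haar(records)
d_orig = np.linalg.norm(records[0] - records[1])
d_new = np.linalg.norm(transformed[0] - transformed[1])
```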

  7. Wavelet analysis and scaling properties of time series

    NASA Astrophysics Data System (ADS)

    Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.

    2005-10-01

    We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.

  8. Wavelet Applications for Flight Flutter Testing

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.

    1999-01-01

    Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.

  9. [The Identification of the Origin of Chinese Wolfberry Based on Infrared Spectral Technology and the Artificial Neural Network].

    PubMed

    Li, Zhong; Liu, Ming-de; Ji, Shou-xiang

    2016-03-01

    Fourier Transform Infrared Spectroscopy (FTIR) is employed to determine the geographic origins of Chinese wolfberry quickly. In this paper, 45 samples of Chinese wolfberry from different places in Qinghai Province were surveyed by FTIR. The original FTIR data matrix was pretreated with common preprocessing and the wavelet transform. Compared with common window-shifting smoothing, standard normal variate correction and multiplicative scatter correction, the wavelet transform is an effective spectral preprocessing method. Before modeling with artificial neural networks, the spectral variables were compressed by means of the wavelet transform to increase the training speed of the networks, and the related parameters of the artificial neural network model are discussed in detail. The survey shows that even when the infrared spectroscopy data are compressed to 1/8 of the original, the spectral information and analytical accuracy do not deteriorate. The compressed spectral variables were used as the modeling inputs of a backpropagation artificial neural network (BP-ANN), with the geographic origins of the Chinese wolfberry as the outputs. A three-layer neural network model was built to predict 10 unknown samples, using the MATLAB neural network toolbox to design an error backpropagation network. The number of hidden-layer neurons is 5, and the number of output-layer neurons is 1. The transfer function of the hidden layer is tansig, while the transfer function of the output layer is purelin. The network training function is trainl, and the learning function for weights and thresholds is learngdm, with net.trainParam.epochs = 1 000 and net.trainParam.goal = 0.001. A recognition rate of 100% was achieved. It can be concluded that the method is well suited for quick discrimination of the producing areas of Chinese wolfberry. Infrared spectral analysis combined with artificial neural networks proves to be a reliable new method for identifying the place of origin of Traditional Chinese Medicine.
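    The described architecture (5 tansig hidden neurons, 1 purelin output neuron, the stated epoch limit and error goal, and momentum-based weight learning in the spirit of learngdm) can be sketched in numpy. This is an illustrative re-creation on synthetic data, not the authors' MATLAB toolbox code, and the learning rate and data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(40, 8))            # synthetic stand-in for compressed spectra
w_true = rng.normal(size=(8, 1))
y = np.tanh(X @ w_true)                 # synthetic "origin" target

W1 = rng.normal(scale=0.5, size=(8, 5)); b1 = np.zeros(5)   # 5 tansig neurons
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)   # 1 purelin neuron
params = [W1, b1, W2, b2]
vel = [np.zeros_like(p) for p in params]
lr, mom = 0.05, 0.9                     # gradient descent with momentum

for epoch in range(1000):               # net.trainParam.epochs = 1000
    H = np.tanh(X @ W1 + b1)            # hidden layer (tansig)
    out = H @ W2 + b2                   # output layer (purelin)
    err = out - y
    mse = float((err ** 2).mean())
    if mse < 0.001:                     # net.trainParam.goal = 0.001
        break
    g_out = 2.0 * err / len(X)          # dMSE/d(out)
    g_hid = (g_out @ W2.T) * (1 - H ** 2)
    grads = [X.T @ g_hid, g_hid.sum(0), H.T @ g_out, g_out.sum(0)]
    for i, (p, g) in enumerate(zip(params, grads)):
        vel[i] = mom * vel[i] - lr * g  # momentum update (learngdm-like)
        p += vel[i]
```

    On this toy problem the error drops well below its initial value; the paper's reported 100% recognition of course refers to its FTIR data, not to this sketch.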

  10. Coherent-subspace array processing based on wavelet covariance: an application to broad-band, seismo-volcanic signals

    NASA Astrophysics Data System (ADS)

    Saccorotti, G.; Nisii, V.; Del Pezzo, E.

    2008-07-01

    Long-Period (LP) and Very-Long-Period (VLP) signals are the most characteristic seismic signature of volcano dynamics, and provide important information about the physical processes occurring in magmatic and hydrothermal systems. These events are usually characterized by sharp spectral peaks, which may span several frequency decades, by emergent onsets, and by a lack of clear S-wave arrivals. The latter two features make both signal detection and location a challenging task. In this paper, we propose a processing procedure based on the Continuous Wavelet Transform of multichannel, broad-band data to simultaneously solve the signal detection and location problems. Our method consists of two steps. First, we apply a frequency-dependent threshold to the estimates of the array-averaged wavelet covariance (WCO) in order to locate the time-frequency regions spanned by coherent arrivals. For these data, we then use the time-series of the complex wavelet coefficients to derive the elements of the spatial Cross-Spectral Matrix. From the eigenstructure of this matrix, we eventually estimate the signals' kinematic parameters using the MUltiple SIgnal Characterization (MUSIC) algorithm. The whole procedure greatly facilitates the detection and location of weak, broad-band signals, in turn avoiding the time-frequency resolution trade-off and frequency leakage effects which affect conventional covariance estimates based upon the Windowed Fourier Transform. The method is applied to explosion signals recorded at Stromboli volcano by a short-period, small-aperture array and by a large-aperture, broad-band network. The LP (0.2 < T < 2s) components of the explosive signals are analysed using data from the small-aperture array and under the plane-wave assumption. In this manner, we obtain a precise time- and frequency-localization of the directional properties for waves impinging at the array.
We then extend the wavefield decomposition method using a spherical wave front model, and analyse the VLP components (T > 2s) of the explosion recordings from the broad-band network. Source locations obtained this way are fully compatible with those retrieved from application of more traditional (and computationally expensive) time-domain techniques, such as the Radial Semblance method.
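    The eigenstructure step can be illustrated with a minimal narrowband MUSIC sketch on a synthetic uniform linear array. The paper builds its cross-spectral matrix from wavelet coefficients of volcano-seismic records, so everything below (array geometry, source, noise model) is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 200                  # sensors, snapshots
d = 0.5                        # sensor spacing in wavelengths
theta_true = 30.0              # true source direction (degrees)

def steering(theta_deg):
    """Array response of a uniform linear array for a plane wave."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(M))

s = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
X = np.outer(steering(theta_true), s) + noise

R = X @ X.conj().T / N                 # spatial cross-spectral (covariance) matrix
eigval, eigvec = np.linalg.eigh(R)     # ascending eigenvalues
En = eigvec[:, :-1]                    # noise subspace (one source assumed)

# MUSIC pseudo-spectrum: peaks where the steering vector is orthogonal
# to the noise subspace.
grid = np.arange(-90.0, 90.0, 0.5)
spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                     for t in grid])
doa_est = grid[np.argmax(spectrum)]
```

    In the paper's setting the covariance matrix is estimated per time-frequency region from the coherent wavelet coefficients, and the parameter search runs over slowness rather than a plane-wave angle alone.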

  11. The impact of manual threshold selection in medical additive manufacturing.

    PubMed

    van Eijnatten, Maureen; Koivisto, Juha; Karhu, Kalle; Forouzanfar, Tymour; Wolff, Jan

    2017-04-01

    Medical additive manufacturing requires standard tessellation language (STL) models. Such models are commonly derived from computed tomography (CT) images using thresholding. Threshold selection can be performed manually or automatically. The aim of this study was to assess the impact of manual and default threshold selection on the reliability and accuracy of skull STL models using different CT technologies. One female and one male human cadaver head were imaged using multi-detector row CT, dual-energy CT, and two cone-beam CT scanners. Four medical engineers manually thresholded the bony structures on all CT images. The lowest and highest selected mean threshold values and the default threshold value were used to generate skull STL models. Geometric variations between all manually thresholded STL models were calculated. Furthermore, in order to calculate the accuracy of the manually and default thresholded STL models, all STL models were superimposed on an optical scan of the dry female and male skulls ("gold standard"). The intra- and inter-observer variability of the manual threshold selection was good (intra-class correlation coefficients >0.9). All engineers selected grey values closer to soft tissue to compensate for bone voids. Geometric variations between the manually thresholded STL models were 0.13 mm (multi-detector row CT), 0.59 mm (dual-energy CT), and 0.55 mm (cone-beam CT). All STL models demonstrated inaccuracies ranging from -0.8 to +1.1 mm (multi-detector row CT), -0.7 to +2.0 mm (dual-energy CT), and -2.3 to +4.8 mm (cone-beam CT). This study demonstrates that manual threshold selection results in better STL models than default thresholding. The use of dual-energy CT and cone-beam CT technology in its present form does not deliver reliable or accurate STL models for medical additive manufacturing. New approaches are required that are based on pattern recognition and machine learning algorithms.

  12. Evidence of Large Fluctuations of Stock Return and Financial Crises from Turkey: Using Wavelet Coherency and Varma Modeling to Forecast Stock Return

    NASA Astrophysics Data System (ADS)

    Oygur, Tunc; Unal, Gazanfer

    Shocks, jumps, booms and busts are typical large-fluctuation markers that appear in crises. Although the literature offers many models and leading indicators for characterizing the structure of a crisis, both vary with the crisis type. In this paper, we investigate the structure of the dynamic correlation of stock return, interest rate, exchange rate and trade balance differences during crisis periods in Turkey between October 1990 and March 2015, applying wavelet coherency methodologies to determine the nature of the crises. The period covers Turkey's currency and banking crises, the US sub-prime mortgage crisis and the European sovereign debt crisis, which occurred in 1994, 2001, 2008 and 2009, respectively. Empirical results show that stock return, interest rate, exchange rate and trade balance differences are significantly linked during the financial crises in Turkey. The cross wavelet power, wavelet coherency, multiple wavelet coherency and quadruple wavelet coherency methodologies are used to examine the structure of the dynamic correlation. Moreover, the quadruple and multiple wavelet coherence show that strongly correlated large scales indicate linear behavior, and hence VARMA (vector autoregressive moving average) models give better fitting and forecasting performance. In addition, increasing the dimension of the model for strongly correlated scales leads to more accurate results than scalar counterparts.

  13. What drives high flow events in the Swiss Alps? Recent developments in wavelet spectral analysis and their application to hydrology

    NASA Astrophysics Data System (ADS)

    Schaefli, B.; Maraun, D.; Holschneider, M.

    2007-12-01

    Extreme hydrological events are often triggered by exceptional co-variations of the relevant hydrometeorological processes and in particular by exceptional co-oscillations at various temporal scales. Wavelet and cross-wavelet spectral analysis offers promising time-scale-resolved methods to detect and analyze such exceptional co-oscillations. This paper presents the state-of-the-art methods of wavelet spectral analysis, discusses related subtleties, potential pitfalls and recently developed solutions to overcome them, and shows how wavelet spectral analysis, if combined with a rigorous significance test, can lead to reliable new insights into hydrometeorological processes in real-world applications. The presented methods are applied to detect potentially flood-triggering situations in a high Alpine catchment for which a recent re-estimation of design floods encountered significant problems simulating the observed high flows. For this case study, wavelet spectral analysis of precipitation, temperature and discharge offers a powerful tool to help detect potentially flood-producing meteorological situations and to distinguish between different types of floods with respect to the prevailing critical hydrometeorological conditions. This opens entirely new perspectives for the analysis of model performance focusing on the occurrence and non-occurrence of different types of high flow events. Based on the obtained results, the paper summarizes important recommendations for future applications of wavelet spectral analysis in hydrology.

  14. Application of wavelet analysis for monitoring the hydrologic effects of dam operation: Glen canyon dam and the Colorado River at lees ferry, Arizona

    USGS Publications Warehouse

    White, M.A.; Schmidt, J.C.; Topping, D.J.

    2005-01-01

    Wavelet analysis is a powerful tool with which to analyse the hydrologic effects of dam construction and operation on river systems. Using continuous records of instantaneous discharge from the Lees Ferry gauging station and records of daily mean discharge from upstream tributaries, we conducted wavelet analyses of the hydrologic structure of the Colorado River in Grand Canyon. The wavelet power spectrum (WPS) of daily mean discharge provided a highly compressed and integrative picture of the post-dam elimination of pronounced annual and sub-annual flow features. The WPS of the continuous record showed the influence of diurnal and weekly power generation cycles, shifts in discharge management, and the 1996 experimental flood in the post-dam period. Normalization of the WPS by local wavelet spectra revealed the fine structure of modulation in discharge scale and amplitude and provides an extremely efficient tool with which to assess the relationships among hydrologic cycles and ecological and geomorphic systems. We extended our analysis to sections of the Snake River and showed how wavelet analysis can be used as a data mining technique. The wavelet approach is an especially promising tool with which to assess dam operation in less well-studied regions and to evaluate management attempts to reconstruct desired flow characteristics. Copyright © 2005 John Wiley & Sons, Ltd.
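    The kind of wavelet power spectrum described above can be sketched with a Fourier-domain Morlet wavelet (a Torrence & Compo-style formulation with w0 = 6 is assumed here, and the "discharge" record is synthetic; the study's actual data and wavelet choice are not reproduced):

```python
import numpy as np

def morlet_wps(x, scales, dt=1.0, w0=6.0):
    """Wavelet power spectrum with a Morlet wavelet, computed via the FFT."""
    n = len(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, dt)      # angular frequencies
    xf = np.fft.fft(x - x.mean())
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Fourier-domain Morlet daughter wavelet at scale s (analytic: omega > 0)
        psi_hat = (np.pi ** -0.25 * np.sqrt(2 * np.pi * s / dt)
                   * np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0))
        power[i] = np.abs(np.fft.ifft(xf * psi_hat)) ** 2
    return power

# Toy "discharge" record: a diurnal cycle sampled hourly for 30 days.
t = np.arange(24 * 30)
x = np.sin(2 * np.pi * t / 24) + 0.3 * np.random.default_rng(0).normal(size=t.size)
scales = np.arange(2.0, 80.0, 2.0)
wps = morlet_wps(x, scales)
peak_scale = scales[wps.mean(axis=1).argmax()]     # expect roughly w0*24/(2*pi)
```

    The time-averaged power peaks near the scale corresponding to the 24-sample period, which is how diurnal power-generation cycles show up in the WPS of the continuous record.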

  15. Reversible wavelet filter banks with side informationless spatially adaptive low-pass filters

    NASA Astrophysics Data System (ADS)

    Abhayaratne, Charith

    2011-07-01

    Wavelet transforms that have an adaptive low-pass filter are useful in applications that require the signal singularities, sharp transitions, and image edges to be left intact in the low-pass signal. In scalable image coding, the spatial resolution scalability is achieved by reconstructing the low-pass signal subband, which corresponds to the desired resolution level, and discarding other high-frequency wavelet subbands. In such applications, it is vital to have low-pass subbands that are not affected by smoothing artifacts associated with low-pass filtering. We present the mathematical framework for achieving 1-D wavelet transforms that have a spatially adaptive low-pass filter (SALP) using the prediction-first lifting scheme. The adaptivity decisions are computed using the wavelet coefficients, and no bookkeeping is required for the perfect reconstruction. Then, 2-D wavelet transforms that have a spatially adaptive low-pass filter are designed by extending the 1-D SALP framework. Because the 2-D polyphase decompositions are used in this case, the 2-D adaptivity decisions are made nonseparable as opposed to the separable 2-D realization using 1-D transforms. We present examples using the 2-D 5/3 wavelet transform and their lossless image coding and scalable decoding performances in terms of quality and resolution scalability. The proposed 2-D-SALP scheme results in better performance compared to the existing adaptive update lifting schemes.
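    For reference, the fixed (non-adaptive) reversible 5/3 integer lifting transform that the proposed SALP scheme builds on can be sketched as follows. The circular boundary handling is an assumption for brevity, and the paper's adaptive low-pass step is not reproduced:

```python
import numpy as np

def legall53_forward(x):
    """Reversible integer 5/3 lifting DWT: predict step, then update step."""
    x = np.asarray(x, dtype=int)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd sample minus the floor-mean of its even neighbours.
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update: approximation = even sample plus a rounded quarter-sum of details.
    a = even + ((np.roll(d, 1) + d + 2) >> 2)
    return a, d

def legall53_inverse(a, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = a - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(len(a) + len(d), dtype=int)
    x[0::2], x[1::2] = even, odd
    return x

sig = np.array([5, 7, 6, 9, 12, 11, 8, 6])
a, d = legall53_forward(sig)
assert np.array_equal(legall53_inverse(a, d), sig)   # perfect reconstruction
```

    Because each lifting step is inverted exactly in integer arithmetic, reconstruction is lossless; the SALP idea replaces the fixed update (low-pass) step with a spatially adaptive one whose decisions are recomputable from the wavelet coefficients, so no side information is needed.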

  16. Diagnostic methodology for incipient system disturbance based on a neural wavelet approach

    NASA Astrophysics Data System (ADS)

    Won, In-Ho

    Since incipient system disturbances are easily confused with other events or noise sources, the signal from a system disturbance can be neglected or misidentified as noise. Because the available knowledge and information obtained from the measurements are incomplete or inexact, the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations was explored. A methodology integrating the feature-extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergy of wavelets and neural networks presents more strengths and fewer weaknesses than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features that apply the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor. The second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Comparisons among the three proposed feature vectors and with statistical techniques show that the variance feature extractor provides the better approach in the performed applications.

  17. Wafer plane inspection with soft resist thresholding

    NASA Astrophysics Data System (ADS)

    Hess, Carl; Shi, Rui-fang; Wihl, Mark; Xiong, Yalin; Pang, Song

    2008-10-01

    Wafer Plane Inspection (WPI) is an inspection mode on the KLA-Tencor TeraScan™ platform that uses the high signal-to-noise-ratio images from the high-numerical-aperture microscope and then models the entire lithographic process to enable defect detection on the wafer plane [1]. This technology meets the needs of some advanced mask manufacturers to identify lithographically significant defects while ignoring non-lithographically-significant ones. WPI accomplishes this goal by performing defect detection on a modeled image of how the mask features would actually print in the photoresist. There are several advantages to this approach: (1) the high fidelity of the images provides a sensitivity advantage over competing approaches; (2) performing defect detection on the wafer plane lets one see only those defects that have a printing impact on the wafer; (3) modeling the lithographic portion of the flow enables unprecedented flexibility to support arbitrary illumination profiles, process-window inspection in unit time, and combination modes that find both printing and non-printing defects. WPI is proving to be a valuable addition to the KLA-Tencor detection algorithm suite. The modeling portion of WPI uses a single resist threshold as the final step in the processing. This has been shown to be adequate on several advanced customer layers, but is not ideal for all layers. Actual resist chemistry involves complicated processes, including acid and base diffusion and quench, that are not consistently well modeled by a single resist threshold. We considered using an advanced resist model for WPI but rejected it because the burdensome calibration requirements of such a model are not practical for reticle inspection. This paper describes an alternative approach that applies a "soft" resist threshold, providing a more robust solution for the most challenging processes.
This approach is just finishing beta testing with a customer developing advanced node designs.
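    The contrast between a single hard resist threshold and a "soft" one can be illustrated generically; the sigmoid transition below is a hypothetical stand-in, since the paper does not disclose its actual soft-threshold function:

```python
import numpy as np

def hard_resist(intensity, threshold):
    """Binary print model: resist prints wherever aerial intensity >= threshold."""
    return (np.asarray(intensity) >= threshold).astype(float)

def soft_resist(intensity, threshold, width):
    """'Soft' threshold: a sigmoid transition of the given width around the
    threshold, so near-threshold intensities map to intermediate print values."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(intensity) - threshold) / width))

aerial = np.linspace(0.0, 1.0, 11)    # toy aerial-image intensity samples
binary = hard_resist(aerial, 0.5)     # step response
graded = soft_resist(aerial, 0.5, 0.05)
```

    A graded response near the threshold is what makes the modeled wafer image less brittle for processes, such as diffusion and quench, that blur the effective print boundary.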

  18. Wavelets and Multifractal Analysis

    DTIC Science & Technology

    2004-07-01

    Distribution unlimited. Supplementary notes: see also ADM001750, Wavelets and Multifractal Analysis (WAMA) Workshop held on 19-31 July 2004. The remaining indexed fragments are table-of-contents entries, including "Detrended Fluctuation Analysis [DFA(m)]", "Scale-Independent Measures", "Detrended-Fluctuation-Analysis Power-Law Exponent (αD)" and "Wavelet-Transform Power-Law Exponent".

  19. Target Detection and Classification Using Seismic and PIR Sensors

    DTIC Science & Technology

    2012-06-01

    Indexed excerpt fragments: "…time series analysis via wavelet-based partitioning," Signal Process.…; "this paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of…"; "The work reported in this paper makes use of a wavelet-based feature extraction method, called Symbolic Dynamic Filtering (SDF) [12]-[14]."

  20. Wavelet Transforms in Parallel Image Processing

    DTIC Science & Technology

    1994-01-27

    Indexed excerpt fragments (137 pages): subject terms include object segmentation, texture segmentation, image compression, image halftoning, neural networks, parallel algorithms, and 2D and 3D…; table-of-contents entries include "Vector Quantization of Wavelet Transform Coefficients" and "Adaptive Image Halftoning based on Wavelet…"; one excerpt notes that the "application has been directed to the adaptive image halftoning. The gray information at a pixel, including its gray value and gradient, is represented by…"
