NASA Astrophysics Data System (ADS)
Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua
1997-04-01
Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely, Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significance link between connected components is introduced to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and performs on a par with SPIHT. Furthermore, SLCCA generally performs best on images with a large proportion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
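As a minimal illustration of bit-plane ordering (not the SLCCA coder itself; the function and parameter names here are hypothetical), integer coefficient magnitudes can be emitted one bit-plane at a time, most significant first:

```python
def bitplanes(mags, nbits=4):
    # Emit the magnitudes one bit-plane at a time, most significant plane
    # first, so an arithmetic coder sees the large-magnitude bits early.
    planes = []
    for b in range(nbits - 1, -1, -1):
        planes.append([(m >> b) & 1 for m in mags])
    return planes
```

For example, magnitudes 5 (binary 101) and 2 (binary 010) with 3-bit planes yield the planes [1,0], [0,1], [1,0].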
Analysis on Behaviour of Wavelet Coefficient during Fault Occurrence in Transformer
NASA Astrophysics Data System (ADS)
Sreewirote, Bancha; Ngaopitakkul, Atthapol
2018-03-01
The protection system for transformers plays a significant role in avoiding severe damage to equipment when disturbances occur and in ensuring overall system reliability. One methodology widely used in protection schemes and algorithms is the discrete wavelet transform. However, the characteristics of the coefficients under fault conditions must be analyzed to ensure its effectiveness. This paper therefore presents a study and analysis of wavelet coefficient characteristics when a fault occurs in a transformer, for both the high- and low-frequency components of the discrete wavelet transform. The effect of internal and external faults on the wavelet coefficients of both the faulted and normal phases has been taken into consideration. The fault signals were simulated using a transmission line connected to a transformer in a laboratory-scale experimental setup modelled after an actual system. The results show a clear difference between the wavelet characteristics in the high- and low-frequency components, which can be used to further design and improve detection and classification algorithms based on the discrete wavelet transform methodology in the future.
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by (1) using a variable-length code based on run-length encoding to compress the significance map and (2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
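The thresholding and significance-map steps described above can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (a single coefficient group, run-length pairs rather than a variable-length code), not the paper's exact coder; all names are hypothetical:

```python
def significance_map(coeffs, epe=0.99):
    # Mark the largest-magnitude coefficients as significant until the
    # retained set holds `epe` of the total energy (energy packing
    # efficiency), then emit a binary significance map.
    total = sum(c * c for c in coeffs)
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    sig, energy = [0] * len(coeffs), 0.0
    for i in order:
        if energy >= epe * total:
            break
        sig[i] = 1
        energy += coeffs[i] * coeffs[i]
    return sig

def run_length_encode(bits):
    # Collapse a non-empty binary map into (symbol, run-length) pairs.
    runs, current, count = [], bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = b, 1
    runs.append((current, count))
    return runs
```

Here the two large coefficients carry over 99% of the energy, so only they are flagged in the map before run-length coding.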
Adaptive Filtering in the Wavelet Transform Domain Via Genetic Algorithms
2004-08-01
The inverse transform maps the coefficients from the wavelet domain back into the original signal domain; in other words, it produces the original signal x(t) from the wavelet and scaling coefficients. The image-processing research conducted at the AFRL/IFTA Reconfigurable Computing Laboratory evolves the coefficients used by the inverse wavelet transform, such that the MSE of images reconstructed by this inverse transform is significantly less than the mean squared error.
Content Based Image Retrieval based on Wavelet Transform coefficients distribution
Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice
2007-01-01
In this paper we propose a content-based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features, by building signatures from the distribution of wavelet transform coefficients. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet base is proposed. Retrieval efficiency is given for different databases, including a diabetic retinopathy, a mammography, and a face database. Results are promising: the retrieval efficiency is higher than 95% in some cases using an optimization process. PMID:18003013
Image superresolution of cytology images using wavelet based patch search
NASA Astrophysics Data System (ADS)
Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo
2015-01-01
Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high- and low-frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by decomposing the high-resolution images into wavelet sub-bands and transmitting only the lower-frequency coefficients. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet-reconstructed image with equivalent high-resolution patches from a previously acquired image database. Finally, the original transmitted low-frequency coefficients are used to correct the final image. Results show a higher signal-to-noise ratio for the proposed method than for simply discarding high-frequency wavelet coefficients or directly replacing down-sampled patches from the image database.
Cheremkhin, Pavel A; Kurbatova, Ekaterina A
2018-01-01
Compression of digital holograms can significantly help with the storage, transmission, and reconstruction of objects and data in 2D and 3D form. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20-50 times) with minimal loss of quality. In the case of digital holograms, direct application of wavelets does not yield high compression values. However, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms for the compression of off-axis digital holograms is considered. A combined technique is investigated, based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on the reconstruction of images from the compressed holograms are performed, along with a comparative analysis of the applicability of various wavelets and of methods for additional compression of the wavelet coefficients, from which optimum compression parameters can be estimated. The size of the holographic data was reduced by up to 190 times.
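The final thresholding-plus-quantization stage can be sketched as below. This is a generic illustration with hypothetical names, not the authors' exact scheme:

```python
def compress_coeffs(coeffs, thresh, step):
    # Zero out small wavelet coefficients, then apply uniform scalar
    # quantization (round to the nearest multiple of `step`) to the rest.
    out = []
    for c in coeffs:
        if abs(c) < thresh:
            out.append(0.0)
        else:
            out.append(round(c / step) * step)
    return out
```

Both steps are lossy: thresholding discards low-energy coefficients, and quantization reduces the precision of those that remain, which is what enables the large compression factors quoted above.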
Pulsar Signal Denoising Method Based on Laplace Distribution in No-Subsampling Wavelet Packet Domain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenbo, Wang; Yanchao, Zhao; Xiangli, Wang
2016-11-01
In order to improve the denoising of pulsar signals, a new denoising method is proposed in the no-subsampling wavelet packet domain based on a local Laplace prior model. First, we characterize the wavelet packet coefficient distribution of the noise-free pulsar signal and construct a Laplace probability density function model for the true signal's wavelet packet coefficients. Then, we estimate the denoised wavelet packet coefficients from the noisy pulsar wavelet coefficients using the maximum a posteriori criterion. Finally, we obtain the denoised pulsar signal through no-subsampling wavelet packet reconstruction of the estimated coefficients. The experimental results show that the proposed method performs better when calculating the pulsar time of arrival than the translation-invariant wavelet denoising method.
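Under a Laplace prior with scale b and additive Gaussian noise of variance sigma^2, the maximum a posteriori estimate of a coefficient reduces to soft thresholding with threshold sigma^2/b. A minimal sketch (hypothetical names; the paper's local model adapts the prior per neighborhood, which is omitted here):

```python
def laplace_map_shrink(y, sigma2, b):
    # MAP estimate of a clean coefficient from noisy observation y, with
    # Gaussian noise variance sigma2 and Laplace prior scale b: soft
    # thresholding at t = sigma2 / b.
    t = sigma2 / b
    if y > t:
        return y - t
    if y < -t:
        return y + t
    return 0.0
```

Large coefficients are shrunk toward zero by t; coefficients smaller than t in magnitude are set to zero entirely.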
Wavelet-based fMRI analysis: 3-D denoising, signal separation, and validation metrics
Khullar, Siddharth; Michael, Andrew; Correa, Nicolle; Adali, Tulay; Baum, Stefi A.; Calhoun, Vince D.
2010-01-01
We present a novel integrated wavelet-domain framework (w-ICA) for 3-D de-noising of functional magnetic resonance imaging (fMRI) data followed by source separation analysis using independent component analysis (ICA) in the wavelet domain. We propose a 3-D wavelet-based multi-directional de-noising scheme in which each volume in a 4-D fMRI data set is sub-sampled using the axial, sagittal and coronal geometries to obtain three different slice-by-slice representations of the same data. The filtered intensity value of an arbitrary voxel is computed as an expected value of the de-noised wavelet coefficients corresponding to the three viewing geometries for each sub-band. This results in a robust set of de-noised wavelet coefficients for each voxel. Given the decorrelated nature of these de-noised wavelet coefficients, it is possible to obtain more accurate source estimates using ICA in the wavelet domain. The contributions of this work can be realized as two modules. First, the analysis module, where we combine a new 3-D wavelet denoising approach with the better signal separation properties of ICA in the wavelet domain, to yield an activation component that corresponds closely to the true underlying signal and is maximally independent with respect to other components. Second, we propose and describe two novel shape metrics for post-ICA comparisons between activation regions obtained through different frameworks. We verified our method using simulated as well as real fMRI data and compared our results against the conventional scheme (Gaussian smoothing + spatial ICA: s-ICA). The results show significant improvements based on two important features: (1) preservation of the shape of the activation region (shape metrics) and (2) receiver operating characteristic (ROC) curves.
It was observed that the proposed framework was able to preserve the actual activation shape consistently, even at very high noise levels, in addition to achieving a significant reduction in false-positive voxels. PMID:21034833
Wavelet tree structure based speckle noise removal for optical coherence tomography
NASA Astrophysics Data System (ADS)
Yuan, Xin; Liu, Xuan; Liu, Yang
2018-02-01
We report a new speckle-noise removal algorithm for optical coherence tomography (OCT). Although wavelet-domain thresholding algorithms have demonstrated advantages in suppressing noise magnitude and preserving image sharpness in OCT, the wavelet tree structure has not been investigated in previous applications. In this work, we propose an adaptive wavelet thresholding algorithm that exploits the tree structure of wavelet coefficients to remove speckle noise from OCT images. The threshold for each wavelet band is adaptively selected following a rule designed to retain the structure of the image across different wavelet layers. Our results demonstrate that the proposed algorithm outperforms conventional wavelet thresholding, with significant advantages in preserving image features.
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
Fast wavelet based algorithms for linear evolution equations
NASA Technical Reports Server (NTRS)
Engquist, Bjorn; Osher, Stanley; Zhong, Sifen
1992-01-01
A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the approach is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.
Use of the wavelet transform to investigate differences in brain PET images between patient groups
NASA Astrophysics Data System (ADS)
Ruttimann, Urs E.; Unser, Michael A.; Rio, Daniel E.; Rawlings, Robert R.
1993-06-01
The suitability of the wavelet transform was studied for the analysis of glucose-utilization differences between subject groups as displayed in PET images. To strengthen statistical inference, it was of particular interest to investigate the tradeoff between signal localization and image decomposition into uncorrelated components. This tradeoff is shown to be controlled by wavelet regularity, with the optimal compromise attained by third-order orthogonal spline wavelets. Testing of the ensuing wavelet coefficients identified only about 1.5% as statistically different (p < .05) from noise; these then served to resynthesize the difference images by the inverse wavelet transform. The resulting images displayed relatively uniform, noise-free regions of significant differences with, owing to the good localization maintained by the wavelets, very few reconstruction artifacts.
Fast generation of computer-generated holograms using wavelet shrinkage.
Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2017-01-09
Computer-generated holograms (CGHs) are generated by superimposing complex amplitudes emitted from a number of object points. However, this superposition process remains very time-consuming even when using the latest computers. We propose a fast calculation algorithm for CGHs that uses a wavelet shrinkage method, eliminating small wavelet coefficient values to express approximated complex amplitudes using only a few representative wavelet coefficients.
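Eliminating small wavelet coefficients so that only a few representative ones remain can be sketched as top-k shrinkage. This is an illustrative stand-in for the authors' wavelet shrinkage method, with hypothetical names:

```python
def shrink_top_k(coeffs, k):
    # Keep only the k largest-magnitude coefficients; zero the rest.
    keep = set(sorted(range(len(coeffs)),
                      key=lambda i: abs(coeffs[i]), reverse=True)[:k])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
```

Representing the complex amplitudes with just the retained coefficients is what makes the CGH superposition cheaper to compute.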
USDA-ARS?s Scientific Manuscript database
This paper presents a novel wrinkle evaluation method that uses modified wavelet coefficients and an optimized support-vector-machine (SVM) classification scheme to characterize and classify wrinkle appearance of fabric. Fabric images were decomposed with the wavelet transform (WT), and five parame...
Identification of speech transients using variable frame rate analysis and wavelet packets.
Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung
2006-01-01
Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. were successful in identifying speech transients and, by emphasizing them, improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals, and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for that packet, amplifying the coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. The inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than these earlier methods.
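A transitivity-like weighting can be sketched as below: the weight is large where coefficients change and small where they are steady. This is a simplified illustration (hypothetical names), not the authors' exact VFR definition:

```python
def transitivity(coeffs):
    # Normalized absolute first difference: near 1 where coefficients are
    # changing rapidly, near 0 where they are steady.
    d = [0.0] + [abs(coeffs[i] - coeffs[i - 1]) for i in range(1, len(coeffs))]
    m = max(d) or 1.0  # avoid division by zero for a constant sequence
    return [v / m for v in d]

def emphasize(coeffs):
    # Multiply each coefficient by its transitivity weight, amplifying
    # transient regions and attenuating steady ones.
    return [c * w for c, w in zip(coeffs, transitivity(coeffs))]
```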
Islanding detection technique using wavelet energy in grid-connected PV system
NASA Astrophysics Data System (ADS)
Kim, Il Song
2016-08-01
This paper proposes a new islanding detection method using wavelet energy in a grid-connected photovoltaic system. The method detects spectral changes in the higher-frequency components of the point-of-common-coupling voltage and obtains wavelet coefficients by multilevel wavelet analysis. The autocorrelation of the wavelet coefficients can clearly identify islanding, even under variations of the grid voltage harmonics during normal operating conditions. The advantage of the proposed method is that it can detect islanding conditions that the conventional under-voltage/over-voltage/under-frequency/over-frequency methods fail to detect. The theoretical method for obtaining the wavelet energies is developed and verified by experimental results.
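The per-level wavelet energies underlying such a detector can be sketched with a plain Haar decomposition. This is an illustrative computation with hypothetical names, not the paper's multilevel analysis or its autocorrelation step:

```python
import math

def wavelet_energy(x, levels=3):
    # Energy of the Haar detail coefficients at each decomposition level.
    # A spectral disturbance in the high-frequency band shows up as a jump
    # in the first-level energies.
    s = math.sqrt(2)
    energies, approx = [], list(x)
    for _ in range(levels):
        detail = [(approx[i] - approx[i + 1]) / s
                  for i in range(0, len(approx) - 1, 2)]
        approx = [(approx[i] + approx[i + 1]) / s
                  for i in range(0, len(approx) - 1, 2)]
        energies.append(sum(d * d for d in detail))
    return energies
```

A smooth (constant) voltage segment yields zero detail energy at every level, while a rapidly alternating segment concentrates its energy in the first level.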
A wavelet-based Gaussian method for energy dispersive X-ray fluorescence spectrum.
Liu, Pan; Deng, Xiaoyan; Tang, Xin; Shen, Shijian
2017-05-01
This paper presents a wavelet-based Gaussian method (WGM) for the peak-intensity estimation of energy-dispersive X-ray fluorescence (EDXRF). The relationship between the parameters of a Gaussian curve and the wavelet coefficients at the Gaussian peak point is first established based on the Mexican hat wavelet. It is found that the Gaussian parameters can be accurately calculated from any two wavelet coefficients at the peak point, provided the peak position is known. This fact leads to a local Gaussian estimation method for spectral peaks, which estimates the Gaussian parameters from the detail wavelet coefficients at the Gaussian peak point. The proposed method is tested on simulated and measured spectra from an energy X-ray spectrometer and compared with some existing methods. The results prove that the proposed method can directly estimate the peak intensity of EDXRF free from background information, and can also effectively distinguish overlapping peaks in an EDXRF spectrum.
Kurtosis based weighted sparse model with convex optimization technique for bearing fault diagnosis
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yan, Ruqiang
2016-12-01
Bearing failure, which generates harmful vibrations, is one of the most frequent causes of machine breakdowns. Thus, performing bearing fault diagnosis is an essential procedure to improve the reliability of the mechanical system and reduce its operating expenses. Most previous studies of rolling bearing fault diagnosis fall into two main families: kurtosis-based filter methods and wavelet-based shrinkage methods. Although tremendous progress has been made, their effectiveness suffers from three potential drawbacks: firstly, fault information is often decomposed into proximal frequency bands, resulting in the impulsive feature frequency band splitting (IFFBS) phenomenon, which significantly degrades the performance of capturing the optimal information band; secondly, noise energy spreads throughout all frequency bins and contaminates fault information in the information band, especially under heavily noisy circumstances; thirdly, wavelet coefficients are shrunk equally to satisfy the sparsity constraints, and most of the feature information energy is thus eliminated unreasonably. Therefore, exploiting two pieces of prior information (i.e., that the coefficient sequence of fault information in the wavelet basis is sparse, and that the kurtosis of the envelope spectrum can accurately evaluate the information capacity of rolling bearing faults), a novel weighted sparse model and its corresponding framework for bearing fault diagnosis, coined KurWSD, are proposed in this paper. KurWSD formulates the prior information into weighted sparse regularization terms and then obtains a nonsmooth convex optimization problem. The alternating direction method of multipliers (ADMM) is sequentially employed to solve this problem, and the fault information is extracted through the estimated wavelet coefficients.
Compared with state-of-the-art methods, KurWSD overcomes the three drawbacks and combines the advantages of both families of tools. KurWSD has three main advantages: firstly, all the characteristic information scattered across proximal sub-bands is gathered by synthesizing the impulsive dominant sub-band signals, eliminating the dilemma of the IFFBS phenomenon. Secondly, the noise in the focused sub-bands can be alleviated efficiently by shrinking or removing the dense wavelet coefficients of Gaussian noise. Lastly, wavelet coefficients carrying fault information are reliably detected and preserved by manipulating wavelet coefficients discriminatively based on their contribution to the impulsive components. The reliability and effectiveness of KurWSD are demonstrated through simulated and experimental signals.
Wavelet-linear genetic programming: A new approach for modeling monthly streamflow
NASA Astrophysics Data System (ADS)
Ravansalar, Masoud; Rajaee, Taher; Kisi, Ozgur
2017-06-01
Streamflows are important and effective factors in stream ecosystems, and their accurate prediction is an essential issue in water resources and environmental engineering systems. A hybrid wavelet-linear genetic programming (WLGP) model, which combines a discrete wavelet transform (DWT) and linear genetic programming (LGP), was used in this study to predict the monthly streamflow (Q) at two gauging stations, Pataveh and Shahmokhtar, on the Beshar River near Yasuj, Iran. In the proposed WLGP model, the wavelet analysis was linked to the LGP model: the original streamflow time series were decomposed into sub-time series comprising wavelet coefficients. The results were compared with single LGP, artificial neural network (ANN), hybrid wavelet-ANN (WANN), and multiple linear regression (MLR) models, using commonly utilized statistics. The Nash coefficients (E) were found to be 0.877 and 0.817 for the WLGP model at the Pataveh and Shahmokhtar stations, respectively. The comparison showed that the WLGP model could significantly increase the streamflow prediction accuracy at both stations. Since the WLGP model approximates the peak streamflow values more closely, it could be utilized for one-month-ahead streamflow prediction.
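The Nash coefficient (E) used above to compare models is the Nash-Sutcliffe efficiency, which can be computed as follows (hypothetical function name):

```python
def nash_sutcliffe(obs, sim):
    # Nash-Sutcliffe efficiency: 1 minus the ratio of the squared model
    # error to the variance of the observations. E = 1 is a perfect fit;
    # E = 0 means the model is no better than the mean of the observations.
    mean = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean) ** 2 for o in obs)
    return 1.0 - num / den
```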
Block-based scalable wavelet image codec
NASA Astrophysics Data System (ADS)
Bao, Yiliang; Kuo, C.-C. Jay
1999-10-01
This paper presents a high-performance block-based wavelet image coder designed to be of very low implementational complexity yet rich in features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to the image data to generate wavelet coefficients in fixed-size blocks. Here, a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process, and no intermediate buffering is needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image, which gives more flexibility in the implementation. The codec has very good coding performance even when the block size is 16x16.
Minimum risk wavelet shrinkage operator for Poisson image denoising.
Cheng, Wu; Hirakawa, Keigo
2015-05-01
The pixel values of images taken by an image sensor are said to be corrupted by Poisson noise. To date, multiscale Poisson image denoising techniques have processed Haar frame and wavelet coefficients, with the modeling of the coefficients enabled by Skellam distribution analysis. We extend these results by solving for shrinkage operators for the Skellam distribution that minimize the risk functional in the multiscale Poisson image denoising setting. The resulting minimum-risk shrinkage operator effectively produces denoised wavelet coefficients with the minimum attainable L2 error.
NASA Astrophysics Data System (ADS)
Huang, Darong; Bai, Xing-Rong
Based on wavelet transform and neural network theory, a traffic-flow prediction model for use in the optimal control of intelligent traffic systems is constructed. First, we extract the scale coefficients and wavelet coefficients from the online measured raw traffic-flow data via the wavelet transform. Second, an artificial neural network model for traffic-flow prediction is constructed and trained, using the coefficient sequences as inputs and the raw data as outputs. Simultaneously, we design the operating principle of the optimal control system for the traffic-flow forecasting model, the network topology, and the data transmission model. Finally, a simulated example shows that the technique is effective and accurate. The theoretical results indicate that the wavelet neural network prediction model and algorithms have broad prospects for practical application.
Sparsity prediction and application to a new steganographic technique
NASA Astrophysics Data System (ADS)
Phillips, David; Noonan, Joseph
2004-10-01
Steganography is a technique of embedding information in innocuous data such that only the innocuous data is visible. The wavelet transform lends itself to image steganography because it generates a large number of coefficients representing the information in the image. Altering a small set of these coefficients allows embedding of information (the payload) into an image (the cover) without noticeably altering the original image. We propose a novel dual-wavelet steganographic technique, using transforms selected such that the transform of the cover image has low sparsity, while the payload transform has high sparsity. Maximizing the sparsity of the payload transform reduces the amount of information embedded in the cover, and minimizing the sparsity of the cover increases the number of locations that can be altered without significantly altering the image. Making this system effective on any given image pair requires a metric to indicate the best (maximum sparsity) and worst (minimum sparsity) wavelet transforms to use. This paper develops the first stage of this metric, which can predict, averaged across many wavelet families, which of two images will have the higher sparsity. A prototype implementation of the dual-wavelet system is also developed as a proof of concept.
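A simple sparsity measure of the kind such a metric would rank can be sketched as the fraction of (near-)zero coefficients; the names here are hypothetical:

```python
def sparsity(coeffs, eps=1e-6):
    # Fraction of coefficients whose magnitude is at most eps. A sparser
    # transform concentrates the image's information in fewer coefficients.
    return sum(1 for c in coeffs if abs(c) <= eps) / len(coeffs)
```

Under the scheme described above, one would prefer a payload transform with high sparsity (few coefficients to embed) and a cover transform with low sparsity (many coefficients available to perturb).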
Optimal wavelets for biomedical signal compression.
Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario
2006-07-01
Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/decoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate, optimal wavelet, mean+/-SD, 5.46+/-1.01%; worst wavelet 12.76+/-2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Feurstein, Markus C.; Teich, Malvin C.
1998-02-01
We applied multiresolution wavelet analysis to the sequence of times between human heartbeats (R-R intervals) and have found a scale window, between 16 and 32 heartbeat intervals, over which the widths of the R-R wavelet coefficients fall into disjoint sets for normal and heart-failure patients. This has enabled us to correctly classify every patient in a standard data set as belonging either to the heart-failure or the normal group with 100% accuracy, thereby providing a clinically significant measure of the presence of heart failure from the R-R intervals alone. Comparison is made with previous approaches, which have provided only statistically significant measures.
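The width of the wavelet coefficients at a given scale can be sketched with Haar-style coefficients over non-overlapping blocks of m intervals. This is an illustrative computation with hypothetical names, not the authors' exact multiresolution analysis:

```python
import statistics

def haar_scale_coeffs(rr, m):
    # Haar-style coefficient at scale m: difference of the means of two
    # adjacent blocks of m R-R intervals (non-overlapping placement).
    coeffs = []
    for k in range(0, len(rr) - 2 * m + 1, 2 * m):
        left = sum(rr[k:k + m]) / m
        right = sum(rr[k + m:k + 2 * m]) / m
        coeffs.append(left - right)
    return coeffs

def wavelet_width(rr, m=16):
    # "Width" = standard deviation of the scale-m coefficients; the paper's
    # discriminating scale window is m between 16 and 32 intervals.
    return statistics.pstdev(haar_scale_coeffs(rr, m))
```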
Li, Su-Yi; Ji, Yan-Ju; Liu, Wei-Yu; Wang, Zhi-Hong
2013-04-01
In the present study, an innovative method employing both the wavelet transform and a neural network is proposed to analyze near-infrared spectral data in oil shale surveying. The method entails using the db8 wavelet at 3 decomposition levels to process the raw data, using the transformed data as the input matrix, and creating the model through a neural network. To verify the validity of the method, this study analyzes 30 synthesized oil shale samples, of which 20 are randomly selected for network training and the other 10 for model prediction, and uses the full spectrum and the wavelet-transformed spectrum to build 10 network models each. Results show that the mean modeling time of the full-spectrum neural network is 570.33 seconds, with a predicted residual sum of squares (PRESS) of 0.006012 and a prediction correlation coefficient of 0.84375. In contrast, the mean modeling time of the wavelet network method is 3.15 seconds, with a mean PRESS of 0.002048 and a prediction correlation coefficient of 0.95319. These results demonstrate that the wavelet neural network modeling method is significantly superior to the full-spectrum neural network modeling method. This study not only provides a new method for more efficient and accurate detection of the oil content of oil shale, but also indicates the potential of applying wavelet transforms and neural networks in broader near-infrared spectrum analysis.
Sensor system for heart sound biomonitor
NASA Astrophysics Data System (ADS)
Maple, Jarrad L.; Hall, Leonard T.; Agzarian, John; Abbott, Derek
1999-09-01
Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually, rather than heard through a conventional stethoscope. A system whereby a digital stethoscope interfaces directly to a PC, together with suitable signal processing algorithms, is adopted. The sensor is based on a noise-cancellation microphone with a 450 Hz bandwidth and is sampled at 2250 samples/sec with 12-bit resolution. For comparison, we also discuss a piezo-based sensor with a 1 kHz bandwidth. A major problem is that the recording of the heart sound with these devices is subject to unwanted background noise, which can override the heart sound and result in a poor visual representation. This noise originates from various sources such as skin contact with the stethoscope diaphragm, lung sounds, and other surrounding sounds such as speech. Furthermore, we demonstrate a solution using 'wavelet denoising'. The wavelet transform is used because of the similarity between the shape of wavelets and the time-domain shape of a heartbeat sound. Thus coding of the waveform into the wavelet domain is achieved with relatively few wavelet coefficients, in contrast to the many Fourier components that would result from conventional decomposition. We show that the background noise can be dramatically reduced by a thresholding operation in the wavelet domain. The principle is that the background noise codes into many small broadband wavelet coefficients that can be removed without significant degradation of the signal of interest.
Optimal wavelet denoising for smart biomonitor systems
NASA Astrophysics Data System (ADS)
Messer, Sheila R.; Agzarian, John; Abbott, Derek
2001-03-01
Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
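The thresholding scheme described in these two studies can be sketched with a single-level Haar transform and a fixed soft threshold. The studies compare many wavelet families, decomposition levels and threshold rules; this toy version, on synthetic data, only illustrates the principle:

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero; kill those below t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(signal, threshold):
    a, d = haar_dwt(np.asarray(signal, dtype=float))
    return haar_idwt(a, soft_threshold(d, threshold))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 512))       # stand-in "heartbeat"
noisy = clean + 0.1 * rng.standard_normal(512)
denoised = denoise(noisy, threshold=0.2)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

The slowly varying signal concentrates in a few large coefficients, while the broadband noise spreads across many small detail coefficients that the threshold removes.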
Periodized Daubechies wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.
1996-03-01
The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted with their counterparts which form a basis for L²(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.
Van Dijck, Gert; Van Hulle, Marc M.
2011-01-01
The damage caused by corrosion in chemical process installations can lead to unexpected plant shutdowns and the leakage of potentially toxic chemicals into the environment. When subjected to corrosion, structural changes in the material occur, leading to energy releases as acoustic waves. This acoustic activity can in turn be used for corrosion monitoring, and even for predicting the type of corrosion. Here we apply wavelet packet decomposition to extract features from acoustic emission signals. We then use the extracted wavelet packet coefficients for distinguishing between the most important types of corrosion processes in the chemical process industry: uniform corrosion, pitting and stress corrosion cracking. The local discriminant basis selection algorithm can be considered as a standard for the selection of the most discriminative wavelet coefficients. However, it does not take the statistical dependencies between wavelet coefficients into account. We show that, when these dependencies are ignored, a lower accuracy is obtained in predicting the corrosion type. We compare several mutual information filters to take these dependencies into account in order to arrive at a more accurate prediction. PMID:22163921
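A minimal illustration of wavelet-packet energy features follows, using a hand-rolled Haar packet split instead of the paper's decomposition and synthetic signals in place of acoustic emission data (all names hypothetical):

```python
import numpy as np

def haar_split(x):
    """One Haar split of a node into its low- and high-pass children."""
    return ((x[0::2] + x[1::2]) / np.sqrt(2.0),
            (x[0::2] - x[1::2]) / np.sqrt(2.0))

def packet_leaves(x, depth):
    """Full wavelet packet decomposition: split every node, depth times."""
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(depth):
        nodes = [half for node in nodes for half in haar_split(node)]
    return nodes

def packet_energy_features(x, depth=2):
    """Normalised subband energies, usable as a classification feature."""
    leaves = packet_leaves(x, depth)
    e = np.array([np.sum(c ** 2) for c in leaves])
    return e / e.sum()

# Two synthetic "emission" bursts with different spectral content
t = np.linspace(0, 1, 256, endpoint=False)
low = np.sin(2 * np.pi * 4 * t)      # low-frequency event
high = np.sin(2 * np.pi * 100 * t)   # high-frequency event
print(packet_energy_features(low).round(3))
print(packet_energy_features(high).round(3))
```

Different corrosion mechanisms would, in this picture, shift energy between the packet leaves; a classifier then separates the resulting feature vectors.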
Wavelet multiresolution complex network for decoding brain fatigued behavior from P300 signals
NASA Astrophysics Data System (ADS)
Gao, Zhong-Ke; Wang, Zi-Bo; Yang, Yu-Xuan; Li, Shan; Dang, Wei-Dong; Mao, Xiao-Qian
2018-09-01
Brain-computer interface (BCI) enables users to interact with the environment without relying on neural pathways and muscles. P300-based BCI systems have been extensively used to achieve human-machine interaction. However, the appearance of fatigue symptoms during the operation process leads to a decline in the classification accuracy of P300. Characterizing the brain cognitive process underlying normal and fatigue conditions constitutes a problem of vital importance in the field of brain science. In this paper we propose a novel wavelet decomposition based complex network method to efficiently analyze the P300 signals recorded in an image stimulus test based on the classical 'Oddball' paradigm. Initially, multichannel EEG signals are decomposed into wavelet coefficient series. Then we construct a complex network by treating electrodes as nodes and determining the connections according to the 2-norm distances between wavelet coefficient series. The analysis of the topological structure and statistical indices indicates that the properties of the brain network show significant distinctions between the normal status and the fatigue status. More specifically, the brain network reconfiguration in response to the cognitive task in the fatigue status is reflected in the enhancement of small-worldness.
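The network-construction step (electrodes as nodes, edges wherever the 2-norm distance between coefficient series is small) can be sketched as follows, with synthetic channels standing in for EEG wavelet coefficients and an arbitrary threshold:

```python
import numpy as np

def coefficient_distance_network(coeff_series, threshold):
    """Adjacency matrix: an edge links two nodes when the 2-norm distance
    between their wavelet-coefficient series falls below `threshold`."""
    n = len(coeff_series)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coeff_series[i] - coeff_series[j]) < threshold:
                adj[i, j] = adj[j, i] = 1
    return adj

rng = np.random.default_rng(1)
base = rng.standard_normal(64)
# Three hypothetical channels: two similar, one independent
channels = np.array([base,
                     base + 0.05 * rng.standard_normal(64),
                     rng.standard_normal(64)])
adj = coefficient_distance_network(channels, threshold=2.0)
print(adj)
```

Graph statistics such as small-worldness would then be computed on `adj`; that analysis layer is omitted here.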
Li, Sikun; Wang, Xiangzhao; Su, Xianyu; Tang, Feng
2012-04-20
This paper theoretically discusses the modulus of two-dimensional (2D) wavelet transform (WT) coefficients, calculated by using two frequently used 2D daughter wavelet definitions, in optical fringe pattern analysis. The discussion shows that neither is good enough to represent the reliability of the phase data. The differences between the two frequently used 2D daughter wavelet definitions in the performance of the 2D WT are also discussed. We propose a new 2D daughter wavelet definition for reliability-guided phase unwrapping of optical fringe patterns. The modulus of the advanced 2D WT coefficients, obtained by using a daughter wavelet under this new definition, includes not only modulation information but also local frequency information of the deformed fringe pattern. Therefore, it can be treated as a good parameter that represents the reliability of the retrieved phase data. Computer simulations and experiments show the validity of the proposed method.
Research and Implementation of Heart Sound Denoising
NASA Astrophysics Data System (ADS)
Liu, Feng; Wang, Yutai; Wang, Yanxiang
Heart sound is one of the most important physiological signals. However, the process of acquiring the heart sound signal can be disturbed by many external factors. The heart sound is a weak electric signal, and even weak external noise may lead to the misjudgment of the pathological and physiological information it carries, thus causing a misdiagnosis. As a result, removing the noise mixed with the heart sound is a key step. In this paper, a systematic study and analysis of heart sound denoising based on MATLAB has been made. The study first uses the powerful signal processing functions of MATLAB to transform noisy heart sound signals into the wavelet domain through the wavelet transform and to decompose these signals at multiple levels. Then, for the detail coefficients, soft thresholding is applied to eliminate noise, so that the signal denoising is significantly improved. The reconstructed signals are obtained by stepwise coefficient reconstruction from the processed detail coefficients. Lastly, 50 Hz power-frequency and 35 Hz mechanical and electrical interference signals are eliminated using a notch filter.
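The final notch-filtering step can be sketched with a hand-rolled second-order IIR notch (a stand-in for MATLAB's filter design tools; all parameter values are illustrative):

```python
import numpy as np

def notch_filter(x, f_notch, fs, r=0.95):
    """Second-order IIR notch: zeros on the unit circle at +/- f_notch,
    poles at radius r just inside. r -> 1 narrows the notch."""
    w0 = 2 * np.pi * f_notch / fs
    b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
    a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])
    y = np.zeros(len(x))
    z1 = z2 = 0.0                      # direct form II transposed state
    for n, xn in enumerate(x):
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y[n] = yn
    return y

fs = 1000.0
t = np.arange(2000) / fs
heart = np.sin(2 * np.pi * 2 * t)          # 2 Hz "heart" component
hum = 0.5 * np.sin(2 * np.pi * 50 * t)     # 50 Hz mains interference
filtered = notch_filter(heart + hum, f_notch=50.0, fs=fs)
```

The zeros sit exactly at 50 Hz, so the mains hum is nulled in steady state while the low-frequency heart component passes almost unchanged; a second notch at 35 Hz would follow the same pattern.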
[A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].
Zhao, An; Wu, Baoming
2006-12-01
This paper presents an ECG compression algorithm based on wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are treated as important coefficients and kept. Otherwise, the energy loss in the transform domain is calculated according to the target PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is then determined according to this energy loss. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside the ROI, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding has been applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality and compression ratio are obtained.
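The significance-map and run-length stage might look like the following sketch, with made-up coefficients and ROI mask (the linear quantizer and Huffman stage are omitted):

```python
import numpy as np

def threshold_outside_roi(coeffs, roi_mask, threshold):
    """Keep all ROI coefficients; elsewhere keep only those above threshold."""
    keep = roi_mask | (np.abs(coeffs) >= threshold)
    return np.where(keep, coeffs, 0.0), keep.astype(int)

def run_length_encode(bits):
    """Run-length encode a binary significance map as (value, run) pairs."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count))
            count = 1
    runs.append((bits[-1], count))
    return runs

coeffs = np.array([5.0, 0.2, -3.0, 0.1, 0.05, 4.0, -0.3, 2.5])
roi = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)  # e.g. a QRS region
kept, sig_map = threshold_outside_roi(coeffs, roi, threshold=1.0)
print(sig_map)
print(run_length_encode(list(sig_map)))
```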
Reconstruction of color images via Haar wavelet based on digital micromirror device
NASA Astrophysics Data System (ADS)
Liu, Xingjiong; He, Weiji; Gu, Guohua
2015-10-01
A digital micromirror device (DMD) is introduced to form the Haar wavelet basis, projecting onto the color target image by making use of structured illumination, including red, green and blue light. The light intensity signals reflected from the target image are received synchronously by a bucket detector which has no spatial resolution, converted into voltage signals and then transferred to a PC [1]. To achieve synchronization, several synchronization processes are added during data acquisition. In the data collection process, according to the wavelet tree structure, the locations of significant coefficients at the finer scale are predicted by comparing the coefficients sampled at the coarsest scale with a threshold. The monochrome grayscale images are obtained under red, green and blue structured illumination, respectively, by using the Haar wavelet inverse transform algorithm. A color fusion algorithm is applied to the three monochrome grayscale images to obtain the final color image. According to the imaging principle, the experimental demonstration device is assembled. The letter "K" and the X-rite Color Checker Passport are projected and reconstructed as target images, and the final reconstructed color images have good quality. This article makes use of the Haar wavelet reconstruction method, reducing the sampling rate considerably. It provides color information without compromising the resolution of the final image.
Khalil, Mohammed S.; Kurniawan, Fajri; Khan, Muhammad Khurram; Alginahi, Yasser M.
2014-01-01
This paper presents a novel watermarking method to facilitate the authentication and detection of the image forgery on the Quran images. Two layers of embedding scheme on wavelet and spatial domain are introduced to enhance the sensitivity of fragile watermarking and defend the attacks. Discrete wavelet transforms are applied to decompose the host image into wavelet prior to embedding the watermark in the wavelet domain. The watermarked wavelet coefficient is inverted back to spatial domain then the least significant bits is utilized to hide another watermark. A chaotic map is utilized to blur the watermark to make it secure against the local attack. The proposed method allows high watermark payloads, while preserving good image quality. Experiment results confirm that the proposed methods are fragile and have superior tampering detection even though the tampered area is very small. PMID:25028681
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
Rejection of the maternal electrocardiogram in the electrohysterogram signal.
Leman, H; Marque, C
2000-08-01
The electrohysterogram (EHG) signal is mainly corrupted by the mother's electrocardiogram (ECG), which remains present despite analog filtering during acquisition. Wavelets are a powerful denoising tool and have already proved their efficiency on the EHG. In this paper, we propose a new method that employs the redundant wavelet packet transform. We first study wavelet packet coefficient histograms and propose an algorithm to automatically detect the histogram mode number. Using a new criterion, we compute a best basis adapted to the denoising. After EHG wavelet packet coefficient thresholding in the selected basis, the inverse transform is applied. The ECG seems to be very efficiently removed.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
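A sketch of the codebook-training step follows, using plain K-means on flattened coefficient blocks. The paper's energy-based modification and quadtree partitioning are omitted, and the data and initialization are illustrative:

```python
import numpy as np

def train_codebook(vectors, k, iters=20):
    """Plain K-means codebook training for vector quantization.
    Initial codewords are taken at evenly spaced data indices."""
    codebook = vectors[np.linspace(0, len(vectors) - 1, k).astype(int)]
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # Assign each vector to its nearest codeword
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each codeword to the centroid of its cluster
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook, labels

# Hypothetical 2x2 coefficient blocks flattened to 4-vectors
rng = np.random.default_rng(3)
blocks = np.vstack([rng.normal(0, 0.1, (50, 4)),    # low-energy blocks
                    rng.normal(5, 0.1, (50, 4))])   # high-energy blocks
codebook, labels = train_codebook(blocks, k=2)
```

Each block is then transmitted as the index of its nearest codeword, which is where the compression comes from.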
Wang, Yan-Cang; Yang, Gui-Jun; Zhu, Jin-Shan; Gu, Xiao-He; Xu, Peng; Liao, Qin-Hong
2014-07-01
For improving the estimation accuracy of the soil organic matter content of the north fluvo-aquic soil, wavelet transform technology is introduced. The soil samples were collected from Tongzhou district and Shunyi district in Beijing city, and the data source is soil hyperspectral data obtained under laboratory conditions. First, the discrete wavelet transform efficiently decomposes the hyperspectra into approximation coefficients and detail coefficients. Then, the correlation between the approximation coefficients, the detail coefficients and the organic matter content was analyzed, and the sensitive bands of the organic matter were screened. Finally, models were established to estimate the soil organic content by using partial least squares regression (PLSR). Results show that the NIR bands made more contributions than the visible bands in the models estimating organic matter content; the ability of the approximation coefficients to estimate organic matter content is better than that of the detail coefficients; the estimation precision of the detail coefficients for soil organic matter content decreases as the spectral resolution becomes lower; compared with the three commonly used soil spectral reflectance transforms, the wavelet transform can improve the ability of soil spectra to estimate organic content. The accuracy of the best model established with the approximation or detail coefficients is higher, and the coefficient of determination (R2) and the root mean square error (RMSE) of the best model for the approximation coefficients are 0.722 and 0.221, respectively; the R2 and RMSE of the best model for the detail coefficients are 0.670 and 0.255, respectively.
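The band-screening step, correlating wavelet coefficients with the measured property, can be sketched as follows, with Haar in place of the study's wavelet and synthetic spectra in place of soil data (the PLSR stage is omitted):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: (approximation, detail)."""
    return ((x[0::2] + x[1::2]) / np.sqrt(2.0),
            (x[0::2] - x[1::2]) / np.sqrt(2.0))

def screen_coefficients(spectra, y, top=2):
    """Correlate each approximation coefficient with the target property
    and return the indices of the most strongly correlated ones."""
    approx = np.array([haar_dwt(np.asarray(s, dtype=float))[0] for s in spectra])
    r = np.array([np.corrcoef(approx[:, j], y)[0, 1]
                  for j in range(approx.shape[1])])
    return np.argsort(-np.abs(r))[:top], r

# Synthetic stand-in: the "organic matter content" depends on bands 20-23,
# which map onto approximation coefficients 10 and 11 after one Haar level.
rng = np.random.default_rng(7)
spectra = rng.standard_normal((200, 128))
y = spectra[:, 20:24].mean(axis=1) + 0.05 * rng.standard_normal(200)
top_idx, r = screen_coefficients(spectra, y)
```

The screened coefficient indices would then feed a regression model in place of the full spectrum.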
A Multiscale Vision Model and Applications to Astronomical Image and Data Analyses
NASA Astrophysics Data System (ADS)
Bijaoui, A.; Slezak, E.; Vandame, B.
Many researches were carried out on the automated identification of astrophysical sources and their relevant measurements. Some vision models have been developed for this task, their use depending on the image content. We have developed a multiscale vision model (MVM) \\cite{BR95} well suited for analyzing complex structures such as interstellar clouds, galaxies, or clusters of galaxies. Our model is based on a redundant wavelet transform. For each scale we detect significant wavelet coefficients by applying a decision rule based on their probability density functions (PDF) under the hypothesis of a uniform distribution. In the case of Poisson noise, this PDF can be determined from the autoconvolution of the wavelet function histogram \\cite{SLB93}. We may also apply Anscombe's transform, scale by scale, in order to take into account the integrated number of events at each scale \\cite{FSB98}. Our aim is to compute an image of all detected structural features. MVM allows us to build oriented trees from the neighbourhoods of significant wavelet coefficients. Each tree is also divided into subtrees taking into account the maxima along the scale axis. This leads to identifying objects in scale space, and then to restoring their images by classical inverse methods. This model works only if the sampling is correct at each scale. This is generally not the case for orthogonal wavelets, so we apply the so-called à trous algorithm \\cite{BSM94} or a specific pyramidal one \\cite{RBV98}. It allows us to extract superimposed objects of different sizes, and it gives for each of them a separate image, from which we can obtain position, flux and pattern parameters. We have applied these methods to different kinds of images: photographic plates, CCD frames and X-ray images. We only have to change the statistical rule for extracting significant coefficients to adapt the model from one image class to another.
We have also applied this model to extract clusters hierarchically distributed or to identify regions devoid of objects from galaxy counts.
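The à trous algorithm mentioned above admits a compact sketch. This 1-D version uses the B3-spline kernel common in the astronomical literature and periodic boundaries via `np.roll` (an assumption for brevity, not the authors' exact implementation):

```python
import numpy as np

def a_trous(signal, levels=3):
    """1-D 'a trous' (with holes) wavelet transform, B3-spline kernel.
    Returns the wavelet planes plus the final smooth plane; their sum
    reconstructs the input exactly."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    c = np.asarray(signal, dtype=float)
    planes = []
    for j in range(levels):
        step = 2 ** j                            # 2^j - 1 holes in the kernel
        smooth = np.zeros_like(c)
        for k, h in zip(range(-2, 3), kernel):
            smooth += h * np.roll(c, k * step)
        planes.append(c - smooth)                # wavelet plane w_j
        c = smooth
    planes.append(c)                             # residual smooth plane
    return planes

x = np.sin(np.linspace(0, 4 * np.pi, 128))
planes = a_trous(x)
recon = np.sum(planes, axis=0)
```

Because each plane is simply the difference of two smoothings, the decomposition is redundant (every plane has the full signal length), which is exactly the sampling property the abstract requires.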
NASA Astrophysics Data System (ADS)
Wan, Renzhi; Zu, Yunxiao; Shao, Lin
2018-04-01
The blood echo signal acquired by medical ultrasound Doppler devices always includes a vascular wall pulsation signal. The traditional method to remove the wall signal is a high-pass filter, which also removes the low-frequency part of the blood flow signal. Some scholars have put forward a method based on region-selective reduction, which first estimates the wall pulsation signal and then removes it from the mixed signal. Apparently this method uses the correlation between wavelet coefficients to distinguish the blood signal from the wall signal, but in fact it is a kind of wavelet threshold de-noising method, whose effect is not ideal. In order to obtain a better effect, this paper proposes an improved method based on wavelet coefficient correlation to separate the blood signal and the wall signal, and simulates the algorithm by computer to verify its validity.
NASA Astrophysics Data System (ADS)
Jiang, Zhuo; Xie, Chengjun
2013-12-01
This paper improves the algorithm of reversible integer linear transform on the finite interval [0,255], so that a reversible integer linear transform can be realized on the whole number axis while shielding the data LSB (least significant bit). First, this method uses an integer wavelet transform based on the lifting scheme to transform the original image and selects the transformed high-frequency areas as the information hiding area; it then transforms the high-frequency coefficient blocks in an integer linear way and embeds the secret information in the LSB of each coefficient, completing the information hiding. To extract the data bits and recover the host image, a similar reverse procedure can be conducted, and the original host image can be losslessly recovered. The simulation experimental results show that this method has good secrecy and concealment after conducting the CDF(m,n) and DD(m,n) series of wavelet transforms. This method can be applied to information security domains such as medicine, law and the military.
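The LSB-embedding step can be sketched as follows; the lifting-scheme wavelet stage is omitted, and the integer coefficients are made up:

```python
import numpy as np

def embed_lsb(coeffs, bits):
    """Hide one bit in the LSB of each (integer) wavelet coefficient."""
    out = coeffs.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | np.asarray(bits)
    return out

def extract_lsb(coeffs, n_bits):
    """Read the hidden bits back out of the coefficient LSBs."""
    return list(coeffs[:n_bits] & 1)

coeffs = np.array([14, -7, 22, 9, 31, -4, 18, 5])   # hypothetical values
secret = [1, 0, 1, 1, 0]
stego = embed_lsb(coeffs, secret)
print(extract_lsb(stego, 5))
```

Each coefficient changes by at most 1, which is why the perturbation stays invisible; the lossless-recovery property of the paper comes from the reversible transform wrapped around this step, not from the LSB write itself.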
Identification of large geomorphological anomalies based on 2D discrete wavelet transform
NASA Astrophysics Data System (ADS)
Doglioni, A.; Simeone, V.
2012-04-01
The identification and analysis, based on quantitative evidence, of large geomorphological anomalies is an important stage in the study of large landslides. Numerical geomorphic analyses represent an interesting approach to this kind of study, allowing for a detailed and fairly accurate identification of hidden topographic anomalies that may be related to large landslides. Here a geomorphic numerical analysis of the Digital Terrain Model (DTM) is presented. The introduced approach is based on the 2D discrete wavelet transform (Antoine et al., 2003; Bruun and Nilsen, 2003; Booth et al., 2009). The 2D wavelet decomposition of the DTM, and in particular the analysis of the detail coefficients of the wavelet transform, can provide evidence of anomalies or singularities, i.e. discontinuities of the land surface. These discontinuities are not very evident from the DTM as it is, while the 2D wavelet transform allows for a grid-based analysis of the DTM and for mapping the decomposition. In fact, the grid-based DTM can be treated as a matrix, on which a discrete wavelet transform (Daubechies, 1992) is performed columnwise and linewise, which basically represent the horizontal and vertical directions. The outcomes of this analysis are low-frequency approximation coefficients and high-frequency detail coefficients. The detail coefficients are analyzed, since their variations are associated with discontinuities of the DTM. Detail coefficients are estimated by performing the 2D wavelet transform both in the horizontal direction (east-west) and in the vertical direction (north-south). The detail coefficients are then mapped for both cases, thus allowing one to visualize and quantify potential anomalies of the land surface. Moreover, the wavelet decomposition can be pushed to further levels, assuming a higher scale number of the transform. This may potentially return further interesting results in terms of identification of the anomalies of the land surface.
In this kind of approach, the choice of a proper mother wavelet function is a tricky point, since it conditions the analyses and their outcomes. Therefore multiple levels as well as multiple wavelet analyses are envisaged. Here the introduced approach is applied to some interesting case studies in southern Italy, in particular for the identification of large anomalies associated with large landslides at the transition between the Apennine chain domain and the foredeep domain; the low Biferno valley and the Fortore valley are analyzed here. Finally, the wavelet transforms are performed on multiple levels, thus trying to address the problem of which level extent gives an accurate analysis fit to a specific problem. Antoine J.P., Carrette P., Murenzi R., and Piette B., (2003), Image analysis with two-dimensional continuous wavelet transform, Signal Processing, 31(3), pp. 241-272, doi:10.1016/0165-1684(93)90085-O. Booth A.M., Roering J.J., and Taylor Perron J., (2009), Automated landslide mapping using spectral analysis and high-resolution topographic data: Puget Sound lowlands, Washington, and Portland Hills, Oregon, Geomorphology, 109(3-4), pp. 132-147, doi:10.1016/j.geomorph.2009.02.027. Bruun B.T., and Nilsen S., (2003), Wavelet representation of large digital terrain models, Computers and Geoscience, 29(6), pp. 695-703, doi:10.1016/S0098-3004(03)00015-3. Daubechies, I. (1992), Ten lectures on wavelets, SIAM.
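A sketch of the detail-coefficient mapping on a toy DTM follows, using a one-level 2-D Haar transform (the authors weigh several mother wavelets; Haar is chosen here only for brevity). A scarp between rows 32 and 33 of the synthetic grid shows up as a peak in the row-direction detail subband:

```python
import numpy as np

def haar_dwt2(grid):
    """One-level 2-D Haar DWT of an even-sized grid.
    Returns (approximation, LH, HL, HH) subbands."""
    g = np.asarray(grid, dtype=float)
    a, b = g[0::2, :], g[1::2, :]                   # row pairs
    lo, hi = (a + b) / np.sqrt(2.0), (a - b) / np.sqrt(2.0)
    def cols(m):
        return ((m[:, 0::2] + m[:, 1::2]) / np.sqrt(2.0),
                (m[:, 0::2] - m[:, 1::2]) / np.sqrt(2.0))
    LL, LH = cols(lo)
    HL, HH = cols(hi)
    return LL, LH, HL, HH

# Synthetic DTM: a smooth east-west slope plus a sharp scarp below row 32
y, x = np.mgrid[0:64, 0:64]
dtm = 0.05 * x + np.where(y >= 33, -5.0, 0.0)
LL, LH, HL, HH = haar_dwt2(dtm)
scarp_row = np.abs(HL).argmax() // HL.shape[1]      # subband row of the anomaly
```

The smooth slope cancels inside each row pair, so only the scarp survives in `HL`; on real DTMs the same map highlights candidate landslide boundaries.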
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption have become essential in secure real-time video transmission. Applying both techniques simultaneously is a challenge when both the size and the quality are important in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of the wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transform, discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats the video reference and non-reference frames in two different ways. The encryption algorithm utilizes the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform on each individual frame. Experimental results show that the proposed algorithms have the following features: high compression, acceptable quality, and resistance to statistical and brute-force attacks with low computational processing.
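The chaotic logistic-map component of the cipher can be sketched as a keystream generator (the A5 combination and the coefficient selection are omitted; parameters `x0` and `r` are illustrative):

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    """Chaotic logistic-map keystream: x <- r*x*(1-x), one byte per step."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)          # stays in (0, 1) for r < 4
        out[i] = int(x * 256) & 0xFF
    return out

def xor_encrypt(data, x0=0.7):
    ks = logistic_keystream(len(data), x0)
    return data ^ ks                   # XOR is its own inverse

payload = np.frombuffer(b"wavelet coefficients", dtype=np.uint8)
cipher = xor_encrypt(payload)
plain = xor_encrypt(cipher)            # same initial condition decrypts
print(plain.tobytes())
```

The initial condition `x0` acts as the key: the map's sensitivity to it is what makes the keystream hard to reproduce without the exact value.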
NASA Astrophysics Data System (ADS)
Wanchuliak, O. Ya.; Peresunko, A. P.; Bakko, Bouzan Adel; Kushnerick, L. Ya.
2011-09-01
This paper presents the foundations of a large-scale localized wavelet polarization analysis of inhomogeneous laser images of histological sections of myocardial tissue. Opportunities were identified for defining relations between the structures of wavelet coefficients and causes of death. An optical model of the polycrystalline networks of myocardium protein fibrils is presented. A technique for determining the coordinate distribution of the polarization azimuth at the points of laser images of myocardium histological sections is suggested. The results of investigating the interrelation between the values of statistical parameters (statistical moments of the 1st-4th order) are presented, which characterize the distributions of the wavelet coefficients of polarization maps of myocardium layers and the death reasons.
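The 1st-4th order statistical moments used here to characterize wavelet-coefficient distributions can be computed directly; this is a generic sketch on synthetic coefficients, not the authors' data:

```python
import numpy as np

def first_four_moments(w):
    """Mean, standard deviation, skewness and excess kurtosis of a
    wavelet-coefficient distribution."""
    w = np.asarray(w, dtype=float)
    m = w.mean()
    s = w.std()
    skew = np.mean(((w - m) / s) ** 3)
    kurt = np.mean(((w - m) / s) ** 4) - 3.0
    return m, s, skew, kurt

rng = np.random.default_rng(5)
coeffs = rng.standard_normal(100_000)    # stand-in coefficient map
m, s, skew, kurt = first_four_moments(coeffs)
```

Deviations of the 3rd and 4th moments from their Gaussian values (0 and 0) are the kind of signature the study correlates with tissue state.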
Rank Determination of Mental Functions by 1D Wavelets and Partial Correlation.
Karaca, Y; Aslan, Z; Cattani, C; Galletta, D; Zhang, Y
2017-01-01
The main aim of this paper is to classify mental functions measured by the Wechsler Adult Intelligence Scale-Revised tests with a mixed method based on wavelets and partial correlation. The Wechsler Adult Intelligence Scale-Revised is a widely used test designed and applied for the classification of adults' cognitive skills in a comprehensive manner. In this paper, many different intellectual profiles have been taken into consideration to measure the relationship between mental functioning and psychological disorder. We propose a method based on wavelets and correlation analysis for classifying mental functioning, by the analysis of some selected parameters measured by the Wechsler Adult Intelligence Scale-Revised tests. In particular, 1-D Continuous Wavelet Analysis, the 1-D Wavelet Coefficient Method and the Partial Correlation Method have been applied to some Wechsler Adult Intelligence Scale-Revised parameters such as School Education, Gender, Age, Performance Information Verbal and Full Scale Intelligence Quotient. In particular, we show that the gender variable has a negative but significant role on the age and Performance Information Verbal factors. The age parameter also plays a significant role in the change of Performance Information Verbal and Full Scale Intelligence Quotient.
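Partial correlation, the correlation between two variables after regressing out a control, can be sketched as follows, with synthetic stand-ins for the test variables (all names hypothetical):

```python
import numpy as np

def partial_correlation(x, y, z):
    """Correlation between x and y after regressing out the control z."""
    def residual(v):
        zc = np.column_stack([np.ones_like(z), z])
        beta, *_ = np.linalg.lstsq(zc, v, rcond=None)
        return v - zc @ beta
    return np.corrcoef(residual(x), residual(y))[0, 1]

rng = np.random.default_rng(2)
z = rng.standard_normal(500)                 # e.g. an "age" covariate
x = z + 0.1 * rng.standard_normal(500)       # both scores driven by z
y = z + 0.1 * rng.standard_normal(500)
raw = np.corrcoef(x, y)[0, 1]
partial = partial_correlation(x, y, z)
```

A high raw correlation that collapses once the control is removed, as in this toy example, is the pattern partial correlation is designed to expose.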
Wavelet analysis in two-dimensional tomography
NASA Astrophysics Data System (ADS)
Burkovets, Dimitry N.
2002-02-01
The diagnostic possibilities of wavelet analysis of coherent images of connective tissue for diagnosing its pathological changes are investigated. The effectiveness of polarization selection in obtaining wavelet-coefficient images is also shown. The wavelet structures characterizing skin psoriasis and bone-tissue osteoporosis have been analyzed. Histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue were examined.
Zhang, Heng; Pan, Zhongming; Zhang, Wenna
2018-06-07
An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signals or seismic signals alone.
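The WCER pipeline (à trous decomposition, then per-layer energy ratio) can be sketched as follows. This is a minimal numpy illustration assuming a simple two-tap smoothing filter dilated by 2^j at level j; the paper's actual filter, and the clustering and SVM stages, are not reproduced here:

```python
import numpy as np

def a_trous_decompose(signal, levels=3):
    """À trous (undecimated) wavelet decomposition: at each level the
    smoothing filter is dilated by 2**j, and the detail layer is the
    difference between successive smoothed versions."""
    c = signal.astype(float)
    details = []
    for j in range(levels):
        smooth = 0.5 * (c + np.roll(c, -(2 ** j)))  # dilated two-tap filter
        details.append(c - smooth)                  # wavelet coefficients of layer j
        c = smooth
    return details, c                               # detail layers + approximation

def wcer_features(signal, levels=3):
    """WCER feature vector: energy of each coefficient layer divided by
    the total energy over all layers."""
    details, approx = a_trous_decompose(signal, levels)
    energies = [float(np.sum(d ** 2)) for d in details] + [float(np.sum(approx ** 2))]
    total = sum(energies)
    return [e / total for e in energies]

rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 5 * np.arange(256) / 256) + 0.1 * rng.normal(size=256)
print(wcer_features(sig))   # ratios over 3 detail layers + the approximation
```

The ratios sum to one by construction, so the feature is insensitive to the overall signal amplitude, which is the property that makes it usable across sensors with different gains.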
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. Such a method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame. Thus, they may be adjusted to the spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image. The restored SAR image is synthesized from such coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
Fast reversible wavelet image compressor
NASA Astrophysics Data System (ADS)
Kim, HyungJun; Li, Ching-Chung
1996-10-01
We present a unified image compressor with spline biorthogonal wavelets and dyadic rational filter coefficients, which gives high computational speed and excellent compression performance. Convolutions with these filters can be performed using only arithmetic shifting and addition operations. Wavelet coefficients can be encoded with an arithmetic coder which also uses arithmetic shifting and addition operations. Therefore, from beginning to end, the whole encoding/decoding process can be completed within a short period of time. The proposed method naturally extends from lossless compression to the lossy, high-compression range and can be easily adapted to progressive reconstruction.
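The key point, that dyadic rational filter coefficients reduce the transform to shifts and adds, can be illustrated with the reversible LeGall 5/3 lifting scheme, a standard shift-and-add integer wavelet (chosen here as a familiar example, not necessarily the authors' exact spline filters):

```python
import numpy as np

def fwd_53(x):
    """One level of the reversible LeGall 5/3 lifting transform.
    All dyadic-rational multiplications are realized as right shifts."""
    x = x.astype(np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    d -= (s + np.roll(s, -1)) >> 1        # predict step: detail coefficients
    s += (d + np.roll(d, 1) + 2) >> 2     # update step: approximation
    return s, d

def inv_53(s, d):
    """Exact inverse of fwd_53: undo the lifting steps in reverse order."""
    s = s - ((d + np.roll(d, 1) + 2) >> 2)
    d = d + ((s + np.roll(s, -1)) >> 1)
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

x = np.array([5, 7, 3, 9, 2, 8, 6, 4])
s, d = fwd_53(x)
assert np.array_equal(inv_53(s, d), x)    # lossless round trip
```

Because every step is an integer shift-and-add, the inverse reproduces the input exactly, which is what makes the same transform usable for both lossless and lossy coding.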
Harmonic analysis of electric locomotive and traction power system based on wavelet singular entropy
NASA Astrophysics Data System (ADS)
Dun, Xiaohong
2018-05-01
With the rapid development of high-speed railway and heavy-haul transport, the locomotive and traction power system has become the main harmonic source of China's power grid. In response to this phenomenon, the system's power quality issues need timely monitoring, assessment and governance. Wavelet singular entropy is an organic combination of the wavelet transform, singular value decomposition and information entropy theory, combining the unique advantages of all three in signal processing: the wavelet transform provides time-frequency localization, singular value decomposition extracts the basic modal characteristics of the data, and information entropy quantifies the feature data. Based on singular value decomposition theory, the wavelet coefficient matrix obtained after the wavelet transform is decomposed into a series of singular values that reflect the basic characteristics of the original coefficient matrix. The statistical properties of information entropy are then used to analyze the uncertainty of the singular value set, so as to give a definite measurement of the complexity of the original signal. Wavelet singular entropy thus has good application prospects in fault detection, classification and protection. The MATLAB simulation shows that the use of wavelet singular entropy for harmonic analysis of the locomotive and traction power system is effective.
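The pipeline described above (wavelet coefficient matrix, then SVD, then entropy of the singular-value spectrum) can be sketched directly. The input matrix below is a synthetic stand-in for a real wavelet coefficient matrix; the abstract's MATLAB simulation is not reproduced:

```python
import numpy as np

def wavelet_singular_entropy(coeff_matrix):
    """Shannon entropy of the normalized singular-value spectrum of a
    (wavelet) coefficient matrix: low for signals dominated by one mode,
    higher for complex, many-mode content."""
    sv = np.linalg.svd(coeff_matrix, compute_uv=False)
    p = sv / sv.sum()          # normalize singular values to a distribution
    p = p[p > 0]               # drop zero modes before taking the log
    return float(-(p * np.log(p)).sum())

t = np.linspace(0, 1, 256, endpoint=False)
clean = np.vstack([np.sin(2 * np.pi * 50 * t),
                   0.5 * np.sin(2 * np.pi * 50 * t)])   # rank-1: one mode
rng = np.random.default_rng(1)
noisy = clean + 0.5 * rng.normal(size=clean.shape)      # many modes
print(wavelet_singular_entropy(clean))   # near zero: one dominant mode
print(wavelet_singular_entropy(noisy))   # higher: more complex content
```

The jump in entropy between the clean and perturbed matrices is the quantity the method uses to flag harmonic distortion or faults.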
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become an urgent problem for content owners and distributors, and digital watermarking has provided a valuable solution. Based on the application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarks, a reversible watermark (also called a lossless, invertible, or erasable watermark) enables the recovery of the original, unwatermarked content after the watermarked content has been verified as authentic. Such reversibility to get back the unwatermarked content is highly desired in sensitive imagery, such as military and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients is coded by JBIG2 compression, and the coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
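The bit-expansion idea can be sketched in its simplest scalar form, c' = 2c + b: doubling a coefficient frees its least significant bit for the payload, and both the bit and the original value are recovered exactly. The real scheme additionally maintains the compressed location map and SHA-256 hash described above, which are omitted in this illustration:

```python
def embed_bit(c, b):
    """Expand coefficient c to carry payload bit b (c' = 2c + b)."""
    return 2 * c + b

def extract_bit(c_marked):
    """Recover the payload bit (LSB) and restore the original coefficient."""
    return c_marked & 1, c_marked >> 1

coeffs = [3, -5, 12, 0]                 # toy integer wavelet coefficients
bits = [1, 0, 1, 1]                     # payload to embed
marked = [embed_bit(c, b) for c, b in zip(coeffs, bits)]
recovered = [extract_bit(m) for m in marked]
assert [b for b, _ in recovered] == bits        # payload recovered
assert [c for _, c in recovered] == coeffs      # coefficients restored exactly
```

Note that Python's arithmetic shift makes this work for negative coefficients too, which matters because wavelet detail coefficients are signed.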
Wavelet Types Comparison for Extracting Iris Feature Based on Energy Compaction
NASA Astrophysics Data System (ADS)
Rizal Isnanto, R.
2015-06-01
The human iris has a very unique pattern which can be used for biometric recognition. To identify texture in an image, texture analysis methods can be used. One such method is the wavelet transform, which extracts image features based on energy. The wavelet transforms used are Haar, Daubechies, Coiflets, Symlets, and Biorthogonal. In this research, iris recognition based on the five mentioned wavelets was carried out and a comparison analysis was then conducted, from which some conclusions were drawn. Several steps had to be performed. First, the iris image is segmented from the eye image and then enhanced with histogram equalization. The feature obtained is the energy value. The next step is recognition using the normalized Euclidean distance. The comparison analysis is based on the recognition rate percentage, with two samples stored in the database as reference images. After finding the recognition rate, some tests are conducted using energy compaction for all five types of wavelets above. As the result, the highest recognition rate is achieved using Haar; moreover, for coefficient cutting with C(i) < 0.1, the Haar wavelet has the highest percentage, and therefore the retention rate, or significant coefficients retained, for Haar is lower than for the other wavelet types (db5, coif3, sym4, and bior2.4).
Parallel object-oriented, denoising system using wavelet multiresolution analysis
Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.
2005-04-12
The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
Analysis of spike-wave discharges in rats using discrete wavelet transform.
Ubeyli, Elif Derya; Ilbay, Gül; Sahin, Deniz; Ateş, Nurbay
2009-03-01
A feature is a distinctive or characteristic measurement, transform, or structural component extracted from a segment of a pattern. Features are used to represent patterns with the goal of minimizing the loss of important information. The discrete wavelet transform (DWT) was used as a feature extraction method to represent the spike-wave discharge (SWD) records of Wistar Albino Glaxo/Rijswijk (WAG/Rij) rats. The SWD records of WAG/Rij rats were decomposed into time-frequency representations using the DWT, and statistical features were calculated to depict their distribution. The obtained wavelet coefficients were used to identify characteristics of the signal that were not apparent from the original time domain signal. The present study demonstrates that the wavelet coefficients are useful in determining the dynamics of SWD records in the time-frequency domain.
Stationary wavelet transform for under-sampled MRI reconstruction.
Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M
2014-12-01
In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional lp-penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data with particular emphasis on multiple-channel acquisitions. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen
2013-08-01
We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. This method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore distinguish them easily in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant to complex foregrounds. Furthermore, we find that when the instrument has uncorrected response error, our method also works significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
Bayesian wavelet PCA methodology for turbomachinery damage diagnosis under uncertainty
NASA Astrophysics Data System (ADS)
Xu, Shengli; Jiang, Xiaomo; Huang, Jinzhi; Yang, Shuhua; Wang, Xiaofang
2016-12-01
Centrifugal compressors often suffer various defects, such as impeller cracking, resulting in forced outage of the entire plant. Damage diagnostics and condition monitoring of such turbomachinery systems have become an increasingly important and powerful tool to prevent potential failure in components and reduce unplanned forced outages and further maintenance costs, while improving the reliability, availability and maintainability of a turbomachinery system. This paper presents a probabilistic signal processing methodology for damage diagnostics using multiple time history data collected from different locations of a turbomachine, considering data uncertainty and multivariate correlation. The proposed methodology is based on the integration of three state-of-the-art data mining techniques: discrete wavelet packet transform, Bayesian hypothesis testing, and probabilistic principal component analysis. The multiresolution wavelet analysis approach is employed to decompose a time series signal into different levels of wavelet coefficients. These coefficients represent multiple time-frequency resolutions of a signal. Bayesian hypothesis testing is then applied to each level of wavelet coefficients to remove possible imperfections. The posterior-odds-ratio Bayesian approach provides a direct means to assess whether there is imperfection in the decomposed coefficients, thus avoiding over-denoising. Power spectral density estimated by the Welch method is utilized to evaluate the effectiveness of the Bayesian wavelet cleansing method. Furthermore, the probabilistic principal component analysis approach is developed to reduce the dimensionality of multiple time series and to address multivariate correlation and data uncertainty for damage diagnostics.
The proposed methodology and generalized framework is demonstrated with a set of sensor data collected from a real-world centrifugal compressor with impeller cracks, through both time series and contour analyses of vibration signal and principal components.
NASA Astrophysics Data System (ADS)
Reddy, Ramakrushna; Nair, Rajesh R.
2013-10-01
This work deals with a methodology applied to seismic early warning systems, which are designed to provide real-time estimation of the magnitude of an event. We reappraise the work of Simons et al. (2006), who, on the basis of a wavelet approach, predicted a magnitude error of ±1. We verify and improve upon the methodology of Simons et al. (2006) by applying an SVM statistical learning machine to the time-scale wavelet decomposition methods. We used the data of 108 events in central Japan with magnitudes ranging from 3 to 7.4 recorded at KiK-net network stations, for source-receiver distances of up to 150 km during the period 1998-2011. We applied a wavelet transform to the seismogram data and calculated scale-dependent threshold wavelet coefficients. These coefficients were then classified into low-magnitude and high-magnitude events by constructing a maximum margin hyperplane between the two classes, which forms the essence of SVMs. Further, the classified events from both classes were picked up and linear regressions were plotted to determine the relationship between wavelet coefficient magnitude and earthquake magnitude, which in turn helped us to estimate the earthquake magnitude of an event given its threshold wavelet coefficient. At wavelet scale number 7, we predicted the earthquake magnitude of an event within 2.7 s, meaning that a magnitude determination is available within 2.7 s after the initial onset of the P-wave. These results shed light on the application of SVMs as a way to choose the optimal regression function to estimate the magnitude from a few seconds of an incoming seismogram. This improves on the approach of Simons et al. (2006), which uses an average of two regression functions to estimate the magnitude.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Saleh, Z; Tang, X
Purpose: Segmentation of prostate CBCT images is an essential step towards real-time adaptive radiotherapy. It is challenging for Calypso patients, as more artifacts are generated by the beacon transponders. We herein propose a novel wavelet-based segmentation algorithm for the rectum, bladder, and prostate in CBCT images with implanted Calypso transponders. Methods: Five hypofractionated prostate patients with daily CBCT were studied. Each patient had 3 Calypso beacon transponders implanted, and the patients were set up and treated with the Calypso tracking system. Two sets of CBCT images from each patient were studied. The structures (i.e. rectum, bladder, and prostate) were contoured by a trained expert, and these served as ground truth. For a given CBCT, the moving-window-based double Haar transformation is applied first to obtain the wavelet coefficients. Based on a user-defined point in the object of interest, a cluster-algorithm-based adaptive thresholding is applied to the low-frequency components of the wavelet coefficients, and a Lee-filter-theory-based adaptive thresholding is applied to the high-frequency components. In the next step, wavelet reconstruction is applied to the thresholded wavelet coefficients. A binary/segmented image of the object of interest is thereby obtained. DICE, sensitivity, inclusiveness and ΔV were used to evaluate the segmentation result. Results: Considering all patients, the bladder has DICE, sensitivity, inclusiveness, and ΔV ranges of [0.81–0.95], [0.76–0.99], [0.83–0.94], and [0.02–0.21]. For the prostate, the ranges are [0.77–0.93], [0.84–0.97], [0.68–0.92], and [0.1–0.46]. For the rectum, the ranges are [0.72–0.93], [0.57–0.99], [0.73–0.98], and [0.03–0.42]. Conclusion: The proposed algorithm appeared effective in segmenting prostate CBCT images in the presence of the Calypso artifacts. However, it is not robust in two scenarios: 1) rectum with a significant amount of gas; 2) prostate with very low contrast. A model-based algorithm might improve the segmentation in these two scenarios.
Textural characterization of histopathological images for oral sub-mucous fibrosis detection.
Krishnan, M Muthu Rama; Shah, Pratik; Choudhary, Anirudh; Chakraborty, Chandan; Paul, Ranjan Rashmi; Ray, Ajoy K
2011-10-01
In the field of quantitative microscopy, textural information very often plays a significant role in tissue characterization and diagnosis, in addition to morphology and intensity. The aim of this work is to improve the classification accuracy based on textural features for the development of computer-assisted screening of oral sub-mucous fibrosis (OSF). In fact, a systematic approach is introduced in order to grade the histopathological tissue sections into normal, OSF without dysplasia, and OSF with dysplasia, which would help oral onco-pathologists to screen subjects rapidly. In total, 71 textural features are extracted from the epithelial region of the tissue sections using various wavelet families, Gabor wavelets, local binary patterns, fractal dimension and the Brownian motion curve, following preprocessing and segmentation. The wavelet families contribute a common set of 9 features, out of which 8 are significant, and 61 of the other 62 obtained from the remaining extractors are also statistically significant (p<0.05) in discriminating the three stages. Based on a mean distance criterion, the best wavelet family (i.e., biorthogonal3.1 (bior3.1)) is selected for classifier design. A support vector machine (SVM) is trained on 146 samples based on 69 textural features, and its classification accuracy is computed for each combination of wavelet family and the remaining extractors. Finally, it was found that bior3.1 wavelet coefficients lead to the highest accuracy (88.38%) in combination with LBP and Gabor wavelet features through three-fold cross validation. Results are shown and discussed in detail. It is shown that combining more than one texture measure instead of using just one might improve the overall accuracy. Copyright © 2011 Elsevier Ltd. All rights reserved.
Poisson denoising on the sphere
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Fadili, J.; Grenier, I.; Casandjian, J. M.
2009-08-01
In the scope of the Fermi mission, Poisson noise removal should improve data quality and make source detection easier. This paper presents a method for Poisson data denoising on the sphere, called the Multi-Scale Variance Stabilizing Transform on the Sphere (MS-VSTS). This method is based on a Variance Stabilizing Transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has an (asymptotically) constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. Thus, MS-VSTS consists of decomposing the data into a sparse multi-scale dictionary (wavelets, curvelets, ridgelets, etc.), and then applying a VST to the coefficients in order to obtain quasi-Gaussian stabilized coefficients. In the present article, the multi-scale transform used is the Isotropic Undecimated Wavelet Transform. Hypothesis tests are then made to detect significant coefficients, and the denoised image is reconstructed with an iterative method based on Hybrid Steepest Descent (HSD). The method is tested on simulated Fermi data.
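MS-VSTS uses a VST tuned to the filtered Poisson coefficients at each scale; the classical scalar case is the Anscombe transform, A(x) = 2√(x + 3/8), which maps Poisson data to approximately unit variance. A quick numerical check of this stabilizing property (illustrative only, not the paper's multi-scale transform):

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson data:
    the output variance is approximately 1 regardless of intensity."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(0)
for lam in (5, 20, 100):                   # raw Poisson variance equals lam
    samples = rng.poisson(lam, size=200_000)
    print(lam, anscombe(samples).var())    # all close to 1 after the VST
```

Once the variance is constant and the coefficients are quasi-Gaussian, a single global significance test can be applied at every scale, which is exactly why the hypothesis-testing step in the method becomes tractable.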
Yan, Jian-Jun; Wang, Yi-Qin; Guo, Rui; Zhou, Jin-Zhuan; Yan, Hai-Xia; Xia, Chun-Ming; Shen, Yong
2012-01-01
Auscultation signals are nonstationary in nature. Wavelet packet transform (WPT) has currently become a very useful tool in analyzing nonstationary signals. Sample entropy (SampEn) has recently been proposed to act as a measurement for quantifying regularity and complexity of time series data. WPT and SampEn were combined in this paper to analyze auscultation signals in traditional Chinese medicine (TCM). SampEns for WPT coefficients were computed to quantify the signals from qi- and yin-deficient, as well as healthy, subjects. The complexity of the signal can be evaluated with this scheme in different time-frequency resolutions. First, the voice signals were decomposed into approximated and detailed WPT coefficients. Then, SampEn values for approximated and detailed coefficients were calculated. Finally, SampEn values with significant differences in the three kinds of samples were chosen as the feature parameters for the support vector machine to identify the three types of auscultation signals. The recognition accuracy rates were higher than 90%.
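The SampEn measure used above can be sketched directly. This is a compact O(N²)-memory implementation with the common parameters m = 2 and r = 0.2·std (a standard variant that counts N−m−1 templates at order m+1), applied to synthetic signals rather than auscultation data:

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy: negative log of the ratio of (m+1)-length to
    m-length template matches within tolerance r (scaled by the std)."""
    x = np.asarray(x, dtype=float)
    r *= x.std()
    def count(order):
        templates = np.array([x[i:i + order] for i in range(len(x) - order)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (d <= r).sum() - len(templates)   # exclude self-matches
    return float(-np.log(count(m + 1) / count(m)))

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # predictable signal
noise = rng.normal(size=400)                        # complex signal
print(sampen(regular))   # low: repeating templates keep matching
print(sampen(noise))     # high: matches rarely extend to length m+1
```

The same contrast, low SampEn for regular structure and high SampEn for complex content, is what separates the WPT coefficients of the different subject groups in the study.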
Doppler radar fall activity detection using the wavelet transform.
Su, Bo Yu; Ho, K C; Rantz, Marilyn J; Skubic, Marjorie
2015-03-01
We propose in this paper the use of the wavelet transform (WT) to detect human falls using a ceiling-mounted Doppler range control radar. The radar senses any motions, from falls as well as nonfalls, due to the Doppler effect. The WT is very effective in distinguishing falls from other activities, making it a promising technique for radar fall detection in nonobtrusive in-home elder care applications. The proposed radar fall detector consists of two stages. The prescreen stage uses the coefficients of wavelet decomposition at a given scale to identify the time locations at which fall activities may have occurred. The classification stage extracts the time-frequency content from the wavelet coefficients at many scales to form a feature vector for fall versus nonfall classification. The selection of different wavelet functions is examined to achieve better performance. Experimental results using data from the laboratory and real in-home environments validate the promising and robust performance of the proposed detector.
Design of almost symmetric orthogonal wavelet filter bank via direct optimization.
Murugesan, Selvaraaju; Tay, David B H
2012-05-01
It is a well-known fact that (compact-support) dyadic wavelets [based on two-channel filter banks (FBs)] cannot be simultaneously orthogonal and symmetric. Although orthogonal wavelets have the energy preservation property, biorthogonal wavelets are preferred in image processing applications because of their symmetry. In this paper, a novel method is presented for the design of almost symmetric orthogonal wavelet FBs. Orthogonality is structurally imposed by using the unnormalized lattice structure, and this leads to an objective function which is relatively simple to optimize. The designed filters have good frequency response, flat group delay, almost symmetric filter coefficients, and a symmetric wavelet function.
Sonar target enhancement by shrinkage of incoherent wavelet coefficients.
Hunter, Alan J; van Vossen, Robbert
2014-01-01
Background reverberation can obscure useful features of the target echo response in broadband low-frequency sonar images, adversely affecting detection and classification performance. This paper describes a resolution and phase-preserving means of separating the target response from the background reverberation noise using a coherence-based wavelet shrinkage method proposed recently for de-noising magnetic resonance images. The algorithm weights the image wavelet coefficients in proportion to their coherence between different looks under the assumption that the target response is more coherent than the background. The algorithm is demonstrated successfully on experimental synthetic aperture sonar data from a broadband low-frequency sonar developed for buried object detection.
Wavelet Transforms in Parallel Image Processing
1994-01-27
Keywords: object segmentation, texture segmentation, image compression, image halftoning, neural networks, parallel algorithms. Report sections include vector quantization of wavelet transform coefficients and adaptive image halftoning based on wavelets. One application has been directed to adaptive image halftoning, in which the gray information at a pixel, including its gray value and gradient, is represented by
Wavelet-based adaptive thresholding method for image segmentation
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl
2001-05-01
A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
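The idea of synthesizing a threshold function from attenuated detail coefficients can be sketched in 1-D. This minimal numpy version uses a simple à trous decomposition and an assumed additive offset rather than the paper's exact filters and weighting; with the attenuation set to zero, the synthesis returns the smooth, high-frequency-reduced background against which objects are thresholded:

```python
import numpy as np

def local_threshold(signal, levels=4, atten=0.0, offset=5.0):
    """Threshold function built by wavelet synthesis with attenuated
    detail coefficients: atten=0 drops the details entirely, leaving a
    smooth local background estimate, shifted up by a fixed offset."""
    c = signal.astype(float)
    details = []
    for j in range(levels):                      # à trous analysis
        smooth = 0.5 * (c + np.roll(c, -(2 ** j)))
        details.append(c - smooth)
        c = smooth
    for d in reversed(details):                  # synthesis, details scaled
        c = c + atten * d
    return c + offset                            # local threshold surface

# small "inclusion" objects on a slowly varying, nonuniform background
x = np.arange(512)
background = 40 + 30 * np.sin(2 * np.pi * x / 512)
signal = background.copy()
signal[100:104] += 25
signal[400:404] += 25
mask = signal > local_threshold(signal)
print(np.flatnonzero(mask))                      # only object pixels survive
```

A single global threshold would either miss the objects on the high part of the background or flag the background itself; the wavelet-synthesized threshold follows the background and isolates both inclusions.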
A data-driven wavelet-based approach for generating jumping loads
NASA Astrophysics Data System (ADS)
Chen, Jun; Li, Guo; Racic, Vitomir
2018-06-01
This paper suggests an approach to generating human jumping loads using the wavelet transform and a database of individual jumping force records. A total of 970 individual jumping force records at various frequencies were first collected in three experiments from 147 test subjects. For each record, every jumping pulse was extracted and decomposed into seven levels by the wavelet transform. All the decomposition coefficients were stored in an information database. Probability distributions of the jumping cycle period, contact ratio and energy of the jumping pulse were statistically analyzed. Inspired by the theory of DNA recombination, an approach was developed by interchanging the wavelet coefficients between different jumping pulses. To generate a jumping force time history with N pulses, wavelet coefficients were first selected randomly from the database at each level. They were then used to reconstruct N pulses by the inverse wavelet transform. Jumping cycle periods and contact ratios were then generated randomly based on their probabilistic functions. These parameters were assigned to each of the N pulses, which were in turn scaled by the amplitude factors βi to account for the energy relationship between successive pulses. The final jumping force time history was obtained by linking all the N cycles end to end. This simulation approach preserves the non-stationary features of the jumping force in the time-frequency domain. Application indicates that this approach can be used to generate jumping force time histories due to a single person jumping and can be extended further to stochastic jumping loads due to groups and crowds.
NASA Astrophysics Data System (ADS)
Rizzo, R. E.; Healy, D.; Farrell, N. J.
2017-12-01
We have implemented a novel image processing tool, two-dimensional (2D) Morlet wavelet analysis, capable of detecting changes in fracture patterns at different scales of observation and of recognising the dominant fracture orientations and spatial configurations at progressively larger (or smaller) scales of analysis. Because of its inherent anisotropy, the Morlet wavelet proves to be an excellent choice for detecting directional linear features, i.e. regions where the amplitude of the signal is regular along one direction and varies sharply along the perpendicular direction. The performance of the Morlet wavelet is tested against the 'classic' Mexican hat wavelet using a complex synthetic fracture network. When applied to a natural fracture network, formed by triaxially (σ1>σ2=σ3) deforming a core sample of the Hopeman sandstone, the combination of the 2D Morlet wavelet and wavelet coefficient maps allows the detection of characteristic scale, orientation and length transitions associated with the shift from distributed damage to the growth of a localised macroscopic shear fracture. A complementary outcome arises from the wavelet coefficient maps produced by increasing the wavelet scale parameter. These maps can be used to chart variations in the spatial distribution of the analysed entities, meaning that it is possible to retrieve information on the density of fracture patterns at specific length scales during deformation.
Identification of structural damage using wavelet-based data classification
NASA Astrophysics Data System (ADS)
Koh, Bong-Hwan; Jeong, Min-Joong; Jung, Uk
2008-03-01
Predicted time-history responses from a finite-element (FE) model provide a baseline map where damage locations are clustered and classified by extracted damage-sensitive wavelet coefficients, such as vertical energy threshold (VET) positions having large silhouette statistics. Likewise, the measured data from the damaged structure are decomposed and rearranged according to the most dominant positions of the wavelet coefficients. Having projected the coefficients onto the baseline map, the true location of damage can be identified by investigating the closeness between the measurements and predictions. The statistical confidence of the baseline map improves as the number of prediction cases increases. Simulation results of damage detection in a truss structure show that the proposed approach can successfully locate structural damage even in the presence of a considerable amount of process and measurement noise.
Remote sensing of soil organic matter of farmland with hyperspectral image
NASA Astrophysics Data System (ADS)
Gu, Xiaohe; Wang, Lei; Yang, Guijun; Zhang, Liyan
2017-10-01
Monitoring the soil organic matter (SOM) of cultivated land quantitatively and mapping its spatial variation are helpful for fertility adjustment and sustainable agricultural development. This study aimed to analyze the relationship between SOM and the reflectivity of hyperspectral images at different pixel sizes, and to develop an optimal model for estimating SOM with imaging spectroscopy. The wavelet transform was used to analyze the correlation between hyperspectral reflectivity and SOM; the optimal pixel size and sensitive wavelet feature scales were then screened to develop the SOM inversion model. Results showed that the wavelet transform of the soil hyperspectrum helped improve the correlation between wavelet features and SOM. In the visible range, the sensitive wavelet features of SOM were mainly concentrated at 460-603 nm; as the wavelength increased, the correlation coefficient at the corresponding wavelet scale rose to a maximum and then gradually decreased. In the near-infrared range, the sensitive wavelet features were mainly concentrated at 762-882 nm; as the wavelength increased, the wavelet scale gradually decreased. A multivariate model of continuous wavelet transform (CWT) features was developed by stepwise linear regression (SLR). The CWT-SLR models reached higher accuracies than the univariate models. As the resampling scale increased, the accuracies of the CWT-SLR models gradually increased, with determination coefficients (R2) fluctuating from 0.52 to 0.59. The R2 at the 5*5 scale was highest (0.5954), and the RMSE lowest (2.41 g/kg). This indicates that a multivariate model based on the continuous wavelet transform estimates SOM better than a univariate model.
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
Classification of EEG Signals Based on Pattern Recognition Approach.
Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed
2017-01-01
Feature extraction is an important step in electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates between EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detail and approximation coefficients and relative wavelet energy, were computed. The extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). The proposed method was validated on a high-density (128-channel) EEG dataset with two classes: (1) EEG signals recorded during complex cognitive tasks using Raven's Advanced Progressive Matrices (RAPM) test; and (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), and naive Bayes (NB) were then employed. The SVM classifier yielded 99.11% accuracy for the approximation coefficients (A5) of low frequencies ranging from 0 to 3.90 Hz. For the detail coefficients (D5) derived from the 3.90-7.81 Hz sub-band, accuracy rates were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were comparable, at 97.11-89.63% and 91.60-81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was applied to a public dataset for classification of two cognitive tasks and achieved comparable results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance with machine learning classifiers than extant quantitative feature extraction methods. These results suggest that the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy.
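The relative wavelet energy feature described above can be sketched as follows: each sub-band's energy is divided by the total energy, and the resulting features are standardised to zero mean and unit variance across trials. The coefficient arrays below are synthetic stand-ins for the D1..D5/A5 sub-bands, not real EEG decompositions.

```python
# Hedged sketch of relative wavelet energy feature extraction.

def relative_wavelet_energy(subbands):
    """subbands: list of coefficient lists, one per decomposition level."""
    energies = [sum(c * c for c in band) for band in subbands]
    total = sum(energies)
    return [e / total for e in energies]

def standardise(feature_matrix):
    """Zero-mean, unit-variance normalisation of each feature column."""
    n = len(feature_matrix)
    cols = list(zip(*feature_matrix))
    out_cols = []
    for col in cols:
        mu = sum(col) / n
        var = sum((v - mu) ** 2 for v in col) / n
        sd = var ** 0.5 or 1.0  # guard against constant columns
        out_cols.append([(v - mu) / sd for v in col])
    return [list(row) for row in zip(*out_cols)]

# Two synthetic "trials", each decomposed into three sub-bands.
trial1 = [[1.0, -1.0], [2.0, 0.0], [0.5, 0.5]]
trial2 = [[0.5, 0.5], [3.0, 1.0], [1.0, -1.0]]
features = [relative_wavelet_energy(t) for t in [trial1, trial2]]
normalised = standardise(features)
```

By construction, the relative energies of each trial sum to one, so they behave like a probability distribution over sub-bands.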
Adaptive zero-tree structure for curved wavelet image coding
NASA Astrophysics Data System (ADS)
Zhang, Liang; Wang, Demin; Vincent, André
2006-02-01
We investigate efficient data organization and representation of curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. Whereas in the embedded zero-tree wavelet (EZW) coder and set partitioning in hierarchical trees (SPIHT) the parent-child relationship is defined such that a parent has four children restricted to a square of 2×2 pixels, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results on synthetic and natural images show the effectiveness of the proposed adaptive zero-tree structure for encoding curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in peak SNR (PSNR) over the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Sun, Qi; Pi, Shaohua; Wu, Hongyan
2014-09-01
In this paper, feature extraction and pattern recognition for distributed optical fiber sensing signals are studied. We adopt Mel-frequency cepstral coefficient (MFCC) feature extraction, wavelet packet energy feature extraction and wavelet packet Shannon entropy feature extraction to obtain characteristic vectors of sensing signals (such as speech, wind, thunder and rain signals), and then perform pattern recognition via an RBF neural network. The performance of the three feature extraction methods is compared based on the results. The MFCC characteristic vector is chosen to be 12-dimensional. For wavelet packet feature extraction, signals are decomposed into six levels by the Daubechies wavelet packet transform, from which 64 frequency constituents are extracted as the characteristic vector. In the pattern recognition step, a diffusion coefficient is introduced to increase the recognition accuracy, while keeping the test samples the same. Recognition results show that the wavelet packet Shannon entropy method yields the best recognition accuracy, up to 97%; the 12-dimensional MFCC method is less satisfactory; and the wavelet packet energy method performs worst.
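The wavelet packet Shannon entropy feature can be sketched as follows: the energies of the terminal packet nodes are normalised into a probability distribution, and their Shannon entropy is taken as a scalar descriptor. The node coefficient arrays here are synthetic; a real pipeline would obtain them from a six-level Daubechies wavelet packet decomposition.

```python
# Hedged sketch of the wavelet packet Shannon entropy feature.
import math

def packet_entropy(nodes):
    """Shannon entropy of the normalised node-energy distribution."""
    energies = [sum(c * c for c in node) for node in nodes]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

# A broadband event spreads energy across nodes -> high entropy;
# a narrowband one concentrates it -> low entropy.
broadband = [[1.0], [1.0], [1.0], [1.0]]
narrowband = [[2.0], [0.1], [0.0], [0.0]]
```

This is why the entropy feature separates signal classes: intuitively, speech, wind and rain distribute energy over the packet nodes in characteristically different ways.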
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-03-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images that utilizes the phase information of complex OCT data. In this method, speckle areas are first delineated pixelwise based on a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method, such as wavelet or contourlet shrinkage, is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality.
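The coefficient shrinkage step can be sketched with the classic soft-threshold rule: coefficients below the threshold are treated as noise and zeroed, larger ones are shrunk toward zero. The threshold value is illustrative; the paper's pipeline additionally uses the phase-based speckle delineation, which is not reproduced here.

```python
# Minimal sketch of soft-threshold wavelet shrinkage.

def soft_threshold(coeffs, t):
    """Soft shrinkage: sign(c) * max(|c| - t, 0)."""
    out = []
    for c in coeffs:
        mag = abs(c) - t
        out.append(0.0 if mag <= 0 else (mag if c > 0 else -mag))
    return out

detail = [0.2, -0.1, 3.0, -2.5, 0.05]
shrunk = soft_threshold(detail, 0.5)  # small coefficients vanish
```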
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R [Albuquerque, NM]
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
Efficient hemodynamic event detection utilizing relational databases and wavelet analysis
NASA Technical Reports Server (NTRS)
Saeed, M.; Mark, R. G.
2001-01-01
Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
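A minimal sketch of the relational layout described above, using SQLite in place of MySQL: coefficients are keyed by (series, level, position), so a query over one time scale touches a single table and needs no joins. The table and column names are illustrative, not the paper's schema.

```python
# Hedged sketch: storing wavelet coefficients in a relational database and
# querying one time scale for event-like (large-magnitude) coefficients.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE wavelet_coeffs (
    series_id INTEGER, level INTEGER, position INTEGER, coeff REAL)""")

# Store a toy two-level decomposition of a hemodynamic trend.
rows = [(1, 1, i, c) for i, c in enumerate([0.1, -0.2, 4.0, 0.0])] + \
       [(1, 2, i, c) for i, c in enumerate([2.5, -3.1])]
conn.executemany("INSERT INTO wavelet_coeffs VALUES (?, ?, ?, ?)", rows)

# "Event" query: large coarse-scale coefficients flag abrupt trend changes.
events = conn.execute(
    "SELECT position, coeff FROM wavelet_coeffs "
    "WHERE series_id = 1 AND level = 2 AND ABS(coeff) > 3.0").fetchall()
```

Because the scale is a column rather than a separate table, queries along different time scales stay single-table, which is the join-avoidance property the abstract emphasises.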
Construction of Orthonormal Wavelets Using Symbolic Algebraic Methods
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2009-09-01
Our contribution concerns the solution of systems of nonlinear algebraic equations arising from the computation of scaling coefficients of orthonormal wavelets with compact support, specifically Daubechies wavelets, symmlets, coiflets, and generalized coiflets. These wavelets are defined as solutions of equation systems that are partly linear and partly nonlinear. The idea of the presented methods is to replace the equations for scaling coefficients by equations for scaling moments, which eliminates some quadratic conditions in the original system and simplifies it. The simplified system is solved with the aid of the Gröbner basis method. The advantage of our approach is that in some cases it provides all possible solutions, and these solutions can be computed to arbitrary precision. For small systems, we are even able to find explicit solutions. The computations were carried out with the symbolic algebra software Maple.
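The kind of algebraic system being solved can be illustrated with the four Daubechies D4 scaling coefficients, whose closed form is known. The checks below verify the linear condition (sum equals √2), the quadratic orthonormality conditions, and the vanishing-moment conditions that the Gröbner-basis approach solves for in the general case.

```python
# Daubechies D4 scaling coefficients and the conditions they satisfy.
import math

s3, s2 = math.sqrt(3), math.sqrt(2)
h = [(1 + s3) / (4 * s2), (3 + s3) / (4 * s2),
     (3 - s3) / (4 * s2), (1 - s3) / (4 * s2)]

sum_h = sum(h)                      # linear condition: equals sqrt(2)
norm = sum(c * c for c in h)        # quadratic condition: equals 1
shift2 = h[0] * h[2] + h[1] * h[3]  # shift-2 orthogonality: equals 0
moment0 = sum((-1) ** k * h[k] for k in range(4))      # vanishing moment m=0
moment1 = sum((-1) ** k * k * h[k] for k in range(4))  # vanishing moment m=1
```

For longer filters (symmlets, coiflets) no such closed form exists in general, which is why symbolic methods like Gröbner bases are used.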
The effects of wavelet compression on Digital Elevation Models (DEMs)
Oimoen, M.J.
2004-01-01
This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6) and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and the delineation of drainage basins and streamlines. A simple masking technique that maintains the integrity and flatness of water bodies in the reconstructed DEM is also presented.
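A 1D sketch of the compression experiment: a profile is taken into the Haar wavelet domain, small detail coefficients are set to zero, and the profile is reconstructed. The step profile and the 0.1 threshold are illustrative; the paper uses DAUB6 filters on 2D elevation grids and nulls the smallest 95 percent of coefficients.

```python
# Hedged sketch: sparsifying wavelet coefficients of an elevation profile.

def haar_decompose(x, levels):
    approx, details = list(x), []
    for _ in range(levels):
        a = [(approx[2*i] + approx[2*i+1]) / 2 for i in range(len(approx)//2)]
        d = [(approx[2*i] - approx[2*i+1]) / 2 for i in range(len(approx)//2)]
        details.append(d)
        approx = a
    return approx, details

def haar_reconstruct(approx, details):
    a = list(approx)
    for d in reversed(details):
        a = [v for ai, di in zip(a, d) for v in (ai + di, ai - di)]
    return a

# A step-like "elevation profile" with small measurement noise.
profile = [2.0 + (0.01 if i % 2 == 0 else 0.0) for i in range(8)] + \
          [7.0 + (0.01 if i % 2 == 0 else 0.0) for i in range(8)]
approx, details = haar_decompose(profile, 4)
sparse = [[d if abs(d) >= 0.1 else 0.0 for d in lvl] for lvl in details]
rebuilt = haar_reconstruct(approx, sparse)
kept = sum(1 for lvl in sparse for d in lvl if d != 0.0)
```

Only one detail coefficient survives the thresholding, yet the reconstructed profile deviates from the original by millimetres: this is the sparsity that makes the coefficient raster highly compressible.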
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
The EM Method in a Probabilistic Wavelet-Based MRI Denoising.
Martin-Fernandez, Marcos; Villullas, Sergio
2015-01-01
Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In such images the noise, where no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution; noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these facts. The method performs shrinkage of wavelet coefficients based on the conditional probability of a coefficient being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, and Awate-Whitaker's filters and nonlocal means, on different 2D and 3D images.
A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data
NASA Astrophysics Data System (ADS)
Freeman, P. E.; Kashyap, V.; Rosner, R.; Lamb, D. Q.
2002-01-01
Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have average value zero, and thus may be used to simultaneously characterize the shape, location, and strength of astronomical sources. But in addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly nonzero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. Source pixels are identified by comparing each correlation coefficient with its probability sampling distribution, which is a function of the (estimated or a priori known) background amplitude. In this paper, we describe the mission-independent, wavelet-based source detection algorithm ``WAVDETECT,'' part of the freely available Chandra Interactive Analysis of Observations (CIAO) software package. Our algorithm uses the Marr, or ``Mexican Hat'' wavelet function, but may be adapted for use with other wavelet functions. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e., flat-fielded) background maps; (2) the correction for exposure variations within the field of view (due to, e.g., telescope support ribs or the edge of the field); (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. 
These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-counts regime. We demonstrate the robustness of WAVDETECT by applying it to an image from an idealized detector with a spatially invariant Gaussian PSF and an exposure map similar to that of the Einstein IPC; to Pleiades Cluster data collected by the ROSAT PSPC; and to a simulated Chandra ACIS-I image of the Lockman Hole region.
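The core correlation step can be sketched in one dimension: a Mexican hat ("Marr") wavelet is correlated with binned data, and the correlation peaks where a source-like bump sits on the background. The scale and image values are illustrative; WAVDETECT works in 2D, with significance thresholds derived from the background maps described above.

```python
# 1D sketch of Mexican hat wavelet correlation for source detection.
import math

def mexican_hat(x, sigma):
    """Marr wavelet: second derivative of a Gaussian (up to sign/scale)."""
    u = (x / sigma) ** 2
    return (1.0 - u) * math.exp(-u / 2.0)

sigma = 2.0
support = range(-6, 7)
psi = [mexican_hat(x, sigma) for x in support]

# Flat background with a Gaussian source centred at pixel 10.
image = [1.0 + 5.0 * math.exp(-((i - 10) ** 2) / 4.0) for i in range(21)]

def correlate(image, psi, k):
    """Correlation coefficient of the wavelet centred at pixel k."""
    return sum(w * image[k + x] for w, x in zip(psi, support))

coeffs = {k: correlate(image, psi, k) for k in range(6, 15)}
peak = max(coeffs, key=coeffs.get)
```

Because the wavelet has (near) zero mean, the flat background contributes only a constant offset to every coefficient, so the correlation map responds to the source, not the background level.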
Exploration of EEG features of Alzheimer's disease using continuous wavelet transform.
Ghorbanian, Parham; Devilbiss, David M; Hess, Terry; Bernstein, Allan; Simon, Adam J; Ashrafiuon, Hashem
2015-09-01
We have developed a novel approach to elucidating several discriminating EEG features of Alzheimer's disease. The approach is based on the use of a variety of continuous wavelet transforms, pairwise statistical tests with multiple comparison correction, and several decision tree algorithms, in order to choose the most prominent EEG features from a single sensor. A pilot study was conducted to record EEG signals from Alzheimer's disease (AD) patients and healthy age-matched control (CTL) subjects using a single dry electrode device during several eyes-closed (EC) and eyes-open (EO) resting conditions. We computed the power spectrum distribution properties and the wavelet and sample entropy of the wavelet coefficient time series at scale ranges approximately corresponding to the major brain frequency bands. A predictive index was developed using the results from the statistical tests and decision tree algorithms to identify the most reliable significant features of the AD patients when compared to healthy controls. The three most dominant features were identified as larger absolute mean power and larger standard deviation of the wavelet scales corresponding to 4-8 Hz (θ) during EO, and lower wavelet entropy of the wavelet scales corresponding to 8-12 Hz (α) during EC. The fourth reliable set of distinguishing features was lower relative power of the wavelet scales corresponding to 12-30 Hz (β), followed by lower skewness of the wavelet scales corresponding to 2-4 Hz (upper δ), both during EO. In general, the results indicate slowing and lower complexity of the EEG signal in AD patients using a very easy-to-use and convenient single dry electrode device.
Texture Analysis of Recurrence Plots Based on Wavelets and PSO for Laryngeal Pathologies Detection.
Souza, Taciana A; Vieira, Vinícius J D; Correia, Suzete E N; Costa, Silvana L N C; de A Costa, Washington C; Souza, Micael A
2015-01-01
This paper deals with the discrimination between healthy and pathological speech signals using recurrence plots and wavelet transform with texture features. Approximation and detail coefficients are obtained from the recurrence plots using Haar wavelet transform, considering one decomposition level. The considered laryngeal pathologies are: paralysis, Reinke's edema and nodules. Accuracy rates above 86% were obtained by means of the employed method.
NASA Astrophysics Data System (ADS)
Keylock, Christopher J.
2018-04-01
A technique termed gradual multifractal reconstruction (GMR) is formulated. A continuum is defined from a signal that preserves the pointwise Hölder exponent (multifractal) structure of a signal but randomises the locations of the original data values with respect to this structure (φ = 0), to the original signal itself (φ = 1). We demonstrate that this continuum may be populated with synthetic time series by undertaking selective randomisation of wavelet phases using a dual-tree complex wavelet transform. That is, the φ = 0 end of the continuum is realised using the recently proposed iterated, amplitude-adjusted wavelet transform algorithm (Keylock, 2017) that fully randomises the wavelet phases. This is extended to the GMR formulation by selective phase randomisation depending on whether or not the wavelet coefficient amplitudes exceed a threshold criterion. An econophysics application of the technique is presented: the relations between the normalised log-returns and their Hölder exponents are compared for the daily returns of eight financial indices. One particularly noticeable result is the change for the two American indices (NASDAQ 100 and S&P 500) from a non-significant to a strongly significant (as determined using GMR) cross-correlation between the returns and their Hölder exponents from before the 2008 crash to afterwards. This is also reflected in the skewness of the phase difference distributions, which exhibit a geographical structure, with Asian markets not exhibiting significant skewness, in contrast to those from elsewhere globally.
Diagnostic methodology for incipient system disturbance based on a neural wavelet approach
NASA Astrophysics Data System (ADS)
Won, In-Ho
Since incipient system disturbances are easily confused with other events or noise sources, the signal from a system disturbance can be neglected or misidentified as noise. Because the knowledge and information available from measurements is incomplete or inexact, we explored the use of artificial intelligence (AI) tools to overcome these uncertainties and limitations. A methodology integrating the feature extraction efficiency of the wavelet transform with the classification capabilities of neural networks is developed for signal classification in the context of detecting incipient system disturbances. The synergy of wavelets and neural networks offers more strengths and fewer weaknesses than either technique taken alone. A wavelet feature extractor is developed to form concise feature vectors for neural network inputs. The feature vectors are calculated from wavelet coefficients to reduce redundancy and computational expense. In this procedure, statistical features that apply the fractal concept to the wavelet coefficients play a crucial role in the wavelet feature extractor. To verify the proposed methodology, two applications are investigated and successfully tested. The first involves pump cavitation detection using a dynamic pressure sensor; the second pertains to incipient pump cavitation detection using signals obtained from a current sensor. Comparisons among the three proposed feature vectors and with statistical techniques show that the variance feature extractor provides the better approach in these applications.
Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's
NASA Technical Reports Server (NTRS)
Cai, Wei; Wang, Jian-Zhong
1993-01-01
We have designed a cubic spline wavelet decomposition for the Sobolev space H^2_0(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast discrete wavelet transform (DWT) is constructed. This transform maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for initial value boundary problems of nonlinear PDEs. We then test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDEs.
Cell edge detection in JPEG2000 wavelet domain - analysis on sigmoid function edge model.
Punys, Vytenis; Maknickas, Ramunas
2011-01-01
Large virtual microscopy images (80K x 60K pixels and larger) are usually stored using the JPEG2000 image compression scheme. Diagnostic quantification based on image analysis might be faster if performed on the compressed data (approximately 20 times smaller than the original), which represent the coefficients of the wavelet transform. An analysis of the possibility of edge detection without the inverse wavelet transform is presented in this paper. Two edge detection methods suitable for the JPEG2000 bi-orthogonal wavelets are proposed. The methods are adjusted according to calculated parameters of a sigmoid edge model. The results of the model analysis indicate which method is more suitable for a given bi-orthogonal wavelet.
NASA Astrophysics Data System (ADS)
Huang, Shieh-Kung; Loh, Chin-Hsiung; Chen, Chin-Tsun
2016-04-01
Seismic records from large-magnitude, distant earthquakes may contain long-period seismic waves that have small amplitude but dominant periods of up to 10 s. In general, these long-period waves do not endanger the safety of structural systems or cause discomfort to human activity. For far-distant earthquakes, however, if their amplitude becomes significant, such waves may cause glitches or even breakdowns in important equipment and facilities (such as the high-precision facilities in high-tech fabs) and ultimately damage a company's interests. A previous study showed that ground-motion features such as time-variant dominant frequencies extracted using moving-window singular spectrum analysis (MWSSA), together with the amplitude characteristics of long-period waves identified from the slope change of the ground-motion Arias intensity, can efficiently indicate the damage severity to high-precision facilities. However, embedding a large Hankel matrix to extract long-period seismic waves makes MWSSA a time-consuming process. In this study, seismic ground-motion data collected from the broadband seismometer network in Taiwan were used (with epicentral distances over 1000 km). To monitor significant long-period waves, the low-frequency components of these data are extracted using the wavelet packet transform (WPT), and the wavelet entropy of the coefficients is used to identify the amplitude characteristics of the long-period waves. The proposed method saves considerable time compared to MWSSA and can easily be implemented for real-time detection. The method is compared and discussed across different seismic events and levels of damage severity to high-precision facilities in high-tech fabs.
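A minimal sketch of the wavelet-entropy indicator, assuming per-band energies have already been computed from the wavelet packet coefficients (the function name is hypothetical): low entropy flags energy concentrated in a few bands, e.g. a dominant long-period component.

```python
import math

def wavelet_entropy(band_energies):
    """Shannon entropy of the relative wavelet-packet energy
    distribution across frequency bands (illustrative sketch)."""
    total = sum(band_energies)
    probs = [e / total for e in band_energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```

Uniform energy gives the maximum value log(n_bands); a single dominant band gives a value near zero.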
Noncoding sequence classification based on wavelet transform analysis: part II
NASA Astrophysics Data System (ADS)
Paredes, O.; Strojnik, M.; Romo-Vázquez, R.; Vélez-Pérez, H.; Ranta, R.; Garcia-Torales, G.; Scholl, M. K.; Morales, J. A.
2017-09-01
DNA sequences in the human genome can be divided into coding and noncoding ones. We hypothesize that the characteristic periodicities of noncoding sequences are related to their function. We describe a procedure to identify these characteristic periodicities using wavelet analysis. Our results show that three groups of noncoding sequences, each with a different biological function, may be differentiated by their wavelet coefficients within a specific frequency range.
Li, Sheng; Zöllner, Frank G; Merrem, Andreas D; Peng, Yinghong; Roervik, Jarle; Lundervold, Arvid; Schad, Lothar R
2012-03-01
Renal diseases can lead to kidney failure that requires life-long dialysis or renal transplantation. Early detection and treatment can prevent progression towards end-stage renal disease. MRI has evolved into a standard examination for the assessment of renal morphology and function. We propose a wavelet-based clustering to group the voxel time courses and thereby segment the renal compartments. This approach comprises (1) a nonparametric, discrete wavelet transform of the voxel time course, (2) thresholding of the wavelet coefficients using Stein's Unbiased Risk Estimator, and (3) k-means clustering of the wavelet coefficients to segment the kidneys. Our method was applied to 3D dynamic contrast-enhanced (DCE-) MRI data sets of human kidney in four healthy volunteers and three patients. On average, the renal cortex in the healthy volunteers could be segmented at 88%, the medulla at 91%, and the pelvis at 98% accuracy. In the patient data, with aberrant voxel time courses, the segmentation was also feasible, with good results for the kidney compartments. In conclusion, wavelet-based clustering of DCE-MRI of the kidney is feasible and a valuable tool towards automated perfusion and glomerular filtration rate quantification. Copyright © 2011 Elsevier Ltd. All rights reserved.
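A toy sketch of the clustering step (3), with the thresholded-coefficient vectors of the paper replaced by scalar features so Lloyd's k-means fits in a few lines (all names are illustrative; the DWT and SURE-thresholding steps are taken as given):

```python
import random

def kmeans_1d(values, k=2, iters=25, seed=0):
    """Lloyd's k-means on scalar features; returns sorted centers.
    Stand-in for clustering whole wavelet-coefficient vectors."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)          # random distinct initial centers
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:                      # assign to nearest center
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]  # recompute means
    return sorted(centers)
```

With well-separated voxel features the centers converge to the per-compartment means in a few iterations.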
On the wavelet optimized finite difference method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1994-01-01
When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
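The coefficient-driven refinement criterion described above can be sketched as follows (illustrative only: a Haar detail coefficient stands in as the refinement indicator, and the function name is hypothetical):

```python
import math

def refine_flags(samples, tol):
    """Flag sample pairs whose Haar detail coefficient exceeds tol;
    a grid point would be inserted at each flagged location."""
    flags = []
    for i in range(0, len(samples) - 1, 2):
        detail = (samples[i] - samples[i + 1]) / math.sqrt(2)
        if abs(detail) > tol:
            flags.append(i)
    return flags
```

Smooth regions produce small detail coefficients and stay coarse; sharp local structure triggers refinement, mirroring the grid-refinement equivalence discussed in the abstract.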
Image-adaptive and robust digital wavelet-domain watermarking for images
NASA Astrophysics Data System (ADS)
Zhao, Yi; Zhang, Liping
2018-03-01
We propose a new wavelet-based frequency-domain watermarking technique. The key idea of our scheme is twofold: a multi-tier representation of the image and odd-even quantization for embedding and extracting the watermark. Because many complementary watermarks need to be hidden, the watermark image is designed to be image-adaptive. The meaningful, complementary watermark images were embedded into the original (host) image by odd-even quantization of coefficients selected from the detail wavelet coefficients of the original image whose magnitudes are larger than their corresponding Just Noticeable Difference (JND) thresholds. Tests show good robustness against well-known attacks such as noise addition, image compression, median filtering, and clipping, as well as geometric transforms. Further research may improve the performance by refining the JND thresholds.
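A minimal sketch of odd-even quantization on a single coefficient (the quantization step would come from the JND threshold in the scheme above; these helper names are hypothetical): a bit 0 snaps the coefficient to an even multiple of the step, a bit 1 to an odd multiple, and extraction reads the parity back.

```python
def embed_bit(coeff, bit, step):
    """Quantize a wavelet coefficient to an even multiple of `step`
    for bit 0, an odd multiple for bit 1 (illustrative sketch)."""
    q = round(coeff / step)
    if q % 2 != bit:
        q += 1
    return q * step

def extract_bit(coeff, step):
    """Recover the hidden bit from the parity of the quantized value."""
    return round(coeff / step) % 2
```

The embedded bit survives any perturbation smaller than half the quantization step, which is what ties robustness to the JND-derived step size.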
Wavelet analysis of birefringence images of myocardium tissue
NASA Astrophysics Data System (ADS)
Sakhnovskiy, M. Yu.; Ushenko, Yu. O.; Kushnerik, L.; Soltys, I. V.; Pavlyukovich, N.; Pavlyukovich, O.
2018-01-01
The paper consists of two parts. The first part presents the short theoretical basics of the method of azimuthally invariant Mueller-matrix description of the optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of the linear and circular birefringence of skeletal muscle tissue are provided, and the values of the statistical moments characterizing the distributions of the amplitudes of the wavelet coefficients of the MMI at different scanning scales are determined. The second part presents a statistical analysis of the distributions of the amplitudes of the wavelet coefficients of the linear-birefringence distributions of myocardium tissue from subjects who died after infarction and from ischemic heart disease. Objective criteria for differentiating the cause of death are defined.
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. 
The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, a feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature-extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels, improving the correct classification rate and reducing the energy expended in local algorithm computations. Published field-trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects of the wavelet compression on classification performance. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate the reduction in resource use due to distributed compression.
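The energy-of-coefficients feature described above reduces to a one-liner once the significant coefficients have been grouped (the entropy-based selection is taken as given; the function name is hypothetical):

```python
def band_energy(coeff_groups):
    """Energy of the retained wavelet coefficients per group/cluster,
    forming the classifier's feature vector (illustrative sketch)."""
    return [sum(c * c for c in group) for group in coeff_groups]
```

Each group of coefficients collapses to a single energy value, which keeps the feature vector small and cheap to transmit between nodes.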
[Surface electromyography signal classification using gray system theory].
Xie, Hongbo; Ma, Congbin; Wang, Zhizhong; Huang, Hai
2004-12-01
A new method based on gray correlation was introduced to improve the identification rate for artificial limb control. The electromyography (EMG) signal was first transformed into the time-frequency domain by the wavelet transform. Singular value decomposition (SVD) was then used to extract a feature vector from the wavelet coefficients for pattern recognition. The decision was made according to the maximum gray correlation coefficient. Compared with neural network recognition, this robust method has an almost equivalent recognition rate but much lower computation costs and requires fewer training samples.
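A hedged sketch of the maximum-gray-correlation decision rule, using Deng's grey relational grade with the customary distinguishing coefficient rho = 0.5 (feature extraction via wavelet + SVD is assumed done; all names are illustrative):

```python
def grey_classify(feature, templates, rho=0.5):
    """Return the class whose template has the maximum grey relational
    grade with the feature vector (Deng's grey relational analysis)."""
    deltas = {name: [abs(f - t) for f, t in zip(feature, tmpl)]
              for name, tmpl in templates.items()}
    all_d = [d for ds in deltas.values() for d in ds]
    dmin, dmax = min(all_d), max(all_d)
    if dmax == 0.0:                     # every template matches exactly
        return next(iter(templates))
    def grade(ds):
        return sum((dmin + rho * dmax) / (d + rho * dmax) for d in ds) / len(ds)
    return max(deltas, key=lambda name: grade(deltas[name]))
```

The grade rewards templates that track the query closely on every component, which is what makes it robust with few training samples.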
Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images
Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde
2017-01-01
Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images that utilizes the phase information of the complex OCT data. In this method, the speckle area is first delineated pixelwise using a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet shrinkage is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement of image quality. PMID:28663860
The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm
NASA Astrophysics Data System (ADS)
Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero
1999-10-01
We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.
NASA Astrophysics Data System (ADS)
Zhao, Weichen; Sun, Zhuo; Kong, Song
2016-10-01
Wireless devices can be identified by a fingerprint extracted from their transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition. Statistics of each wavelet coefficient vector are used to construct the fingerprint. In addition, the relationship between wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and the recognition accuracy remains high when the Signal-to-Noise Ratio (SNR) is low.
The generalized Morse wavelet method to determine refractive index dispersion of dielectric films
NASA Astrophysics Data System (ADS)
Kocahan, Özlem; Özcan, Seçkin; Coşkun, Emre; Özder, Serhat
2017-04-01
The continuous wavelet transform (CWT) method is a useful tool for determining the refractive index dispersion of dielectric films. Mother wavelet selection is an important factor for the accuracy of the results when using the CWT. In this study, the generalized Morse wavelet (GMW) was proposed as the mother wavelet because it has two degrees of freedom. Simulation studies based on error calculations and Cauchy coefficient comparisons are presented, and a noisy signal is also tested by the CWT method with the GMW. The experimental validity of the method was checked with a 100 μm thick D263 T Schott glass, and the results were compared with the catalog values.
[An improved medical image fusion algorithm and quality evaluation].
Chen, Meiling; Tao, Ling; Qian, Zhiyu
2009-08-01
Medical image fusion is of great value for application in medical image analysis and diagnosis. In this paper, the conventional method of wavelet fusion is improved and a new algorithm of medical image fusion is presented, in which the high-frequency and low-frequency coefficients are treated separately. When high-frequency coefficients are chosen, the regional edge intensities of each sub-image are calculated to realize adaptive fusion. The choice of low-frequency coefficients is based on the edges of the images, so that the fused image preserves all useful information and appears more distinct. We apply the conventional and the improved fusion algorithms based on the wavelet transform to fuse two images of the human body, and evaluate the fusion results with a quality evaluation method. Experimental results show that this algorithm effectively retains the detailed information of the original images and enhances their edge and texture features. The new algorithm is better than the conventional fusion algorithm based on the wavelet transform.
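For illustration, here is the simplest coefficient-selection fusion rule for a high-frequency band: keep, per position, the coefficient with the larger magnitude. This is a deliberately simpler stand-in for the regional edge-intensity rule of the improved algorithm described above:

```python
def fuse_high_freq(band_a, band_b):
    """Fuse two co-registered high-frequency wavelet bands by picking
    the larger-magnitude coefficient at each position (toy rule)."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(band_a, band_b)]
```

Large detail coefficients mark edges, so this rule carries the sharper edge from either source into the fused band; the paper's regional rule refines this by looking at edge intensity over a neighborhood instead of a single coefficient.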
Tool Condition Monitoring in Micro-End Milling using wavelets
NASA Astrophysics Data System (ADS)
Dubey, N. K.; Roushan, A.; Rao, U. S.; Sandeep, K.; Patra, K.
2018-04-01
In this work, a Tool Condition Monitoring (TCM) strategy is developed for micro-end milling of titanium alloy and mild steel workpieces. Full-immersion slot milling experiments are conducted using a solid tungsten carbide end mill for more than 1900 s to accumulate a reasonable amount of tool wear. During the micro-end milling process, cutting force and vibration signals are acquired using a Kistler piezoelectric 3-component force dynamometer (9256C2) and an accelerometer (NI cDAQ-9188), respectively. The force components and the vibration signals are processed using the Discrete Wavelet Transform (DWT) in both time and frequency windows. A 5-level wavelet packet decomposition using the Db-8 wavelet is carried out, and the detail coefficients D1 to D5 for each of the signals are obtained. The results of the wavelet transformation are correlated with the tool wear. For the vibration signals, de-noising is performed on the higher-frequency components (D1), while the force signals are de-noised on the lower-frequency components (D5). An increasing Mean Absolute Deviation (MAD) of the detail coefficients across successive channels indicated tool wear. The predictions of tool wear are confirmed against the actual wear observed in SEM images of the worn tool.
Option pricing from wavelet-filtered financial series
NASA Astrophysics Data System (ADS)
de Almeida, V. T. X.; Moriconi, L.
2012-10-01
We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day.
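The large-scale/small-scale split described above can be sketched with a pure-Python Haar analysis/synthesis pass that zeroes all but the coarsest detail bands, mimicking the discard of ~99.6% of the coefficients (toy dyadic-length signal; the function name is hypothetical):

```python
import math

def haar_lowpass(x, keep_detail_levels=0):
    """Multilevel Haar transform that keeps only the coarsest
    `keep_detail_levels` detail bands before inverting."""
    r2 = math.sqrt(2)
    approx, details = list(x), []
    while len(approx) > 1:
        a = [(approx[i] + approx[i + 1]) / r2 for i in range(0, len(approx), 2)]
        d = [(approx[i] - approx[i + 1]) / r2 for i in range(0, len(approx), 2)]
        details.append(d)                 # details[0] is the finest band
        approx = a
    for lev in range(len(details) - keep_detail_levels):
        details[lev] = [0.0] * len(details[lev])   # drop fine-scale bands
    for d in reversed(details):           # synthesis, coarsest band first
        approx = [v for a, dd in zip(approx, d)
                  for v in ((a + dd) / r2, (a - dd) / r2)]
    return approx
```

With `keep_detail_levels=0` the output is the running mean (the pure large-scale component); keeping every band reconstructs the input exactly.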
NASA Astrophysics Data System (ADS)
Kadampur, Mohammad Ali; D. v. L. N., Somayajulu
Privacy-preserving data mining is the art of knowledge discovery without revealing the sensitive data of the data set. In this paper, a data transformation technique using wavelets is presented for privacy-preserving data mining. Wavelets use the well-known energy compaction approach during data transformation, and only the high-energy coefficients are published to the public domain instead of the actual data. It is found that the transformed data preserve Euclidean distances, so the method can be used in privacy-preserving clustering. Wavelets also offer improved time complexity.
Shape-driven 3D segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2006-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details.
Multi-threshold de-noising of electrical imaging logging data based on the wavelet packet transform
NASA Astrophysics Data System (ADS)
Xie, Fang; Xiao, Chengwen; Liu, Ruilin; Zhang, Lili
2017-08-01
A key problem in effectiveness evaluation for fractured-vuggy carbonatite reservoirs is how to accurately extract fracture and vug information from electrical imaging logging data. Drill-bit vibration during drilling produces rugged borehole-wall surfaces and thus conductivity fluctuations in the electrical imaging logging data. These conductivity fluctuations (formation background noise) directly affect fracture/vug information extraction and reservoir effectiveness evaluation. We present a multi-threshold de-noising method based on the wavelet packet transform to eliminate the influence of rugged borehole walls. The noise is present as fluctuations in the button-electrode conductivity curves and as pockmarked responses in electrical imaging logging static images. The noise has responses at various scales and frequency ranges and has low conductivity compared with fractures or vugs. Our de-noising method is to decompose the data into coefficients with the wavelet packet transform on a quadratic spline basis, then shrink the high-frequency wavelet packet coefficients at different resolutions with a minimax threshold and hard-threshold function, and finally reconstruct the thresholded coefficients. We use electrical imaging logging data collected from a fractured-vuggy Ordovician carbonatite reservoir in the Tarim Basin to verify the validity of the multi-threshold de-noising method. Segmentation results and extracted parameters are shown as well to prove the effectiveness of the de-noising procedure.
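The hard-threshold function from the shrinkage step above is easy to state exactly; the minimax threshold value itself depends on the data length and noise level and is taken as given here:

```python
def hard_threshold(coeffs, thr):
    """Hard thresholding: wavelet-packet coefficients at or below thr
    in magnitude are zeroed; the rest pass through unchanged."""
    return [c if abs(c) > thr else 0.0 for c in coeffs]
```

Unlike soft thresholding, surviving coefficients are not shrunk, so the amplitude of genuine fracture/vug responses is preserved.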
Rabbani, Hossein; Sonka, Milan; Abramoff, Michael D
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution assumed for the noise-free data plays a key role in the performance of an MMSE estimator, a prior distribution for the noise-free 3D complex wavelet coefficients is proposed that is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters that capture the heavy-tailed property and the inter- and intrascale dependencies of the coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation, which results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained using Gaussian or two-sided Rayleigh noise distributions and homomorphic or nonhomomorphic models. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR.
Hosseinbor, Ameer Pasha; Kim, Won Hwa; Adluru, Nagesh; Acharya, Amit; Vorperian, Houri K; Chung, Moo K
2014-01-01
Recently, the HyperSPHARM algorithm was proposed to parameterize multiple disjoint objects in a holistic manner using the 4D hyperspherical harmonics. The HyperSPHARM coefficients are global; they cannot be used to directly infer localized variations in signal. In this paper, we present a unified wavelet framework that links HyperSPHARM to the diffusion wavelet transform. Specifically, we will show that the HyperSPHARM basis forms a subset of a wavelet-based multiscale representation of surface-based signals. This wavelet, termed the hyperspherical diffusion wavelet, is a consequence of the equivalence of isotropic heat diffusion smoothing and the diffusion wavelet transform on the hypersphere. Our framework allows for the statistical inference of highly localized anatomical changes, which we demonstrate in the first-ever developmental study on the hyoid bone investigating gender and age effects. We also show that the hyperspherical wavelet successfully picks up group-wise differences that are barely detectable using SPHARM.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. Using the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively with a soft-thresholding operation. Choosing the soft-thresholding parameter …
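The soft-thresholding operation used in the iteration above has a closed form: shrink every wavelet coefficient toward zero by the threshold parameter, zeroing those whose magnitude falls below it.

```python
def soft_threshold(coeffs, lam):
    """Soft thresholding: sign(c) * max(|c| - lam, 0) per coefficient."""
    return [max(abs(c) - lam, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]
```

This is the proximal operator of the l1 penalty, which is why it appears inside proximal-type iterations such as the primal-dual fixed point algorithm.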
An intelligent data model for the storage of structured grids
NASA Astrophysics Data System (ADS)
Clyne, John; Norton, Alan
2013-04-01
With support from the U.S. National Science Foundation we have developed, and currently maintain, VAPOR: a geosciences-focused, open source visual data analysis package. VAPOR enables highly interactive exploration, as well as qualitative and quantitative analysis, of high-resolution simulation outputs using only a commodity desktop computer. The enabling technology behind VAPOR's ability to interact with a data set whose size would overwhelm all but the largest analysis computing resources is a progressive data access file format called the VAPOR Data Collection (VDC). The VDC is based on the discrete wavelet transform and its information-compaction properties. Prior to analysis, raw data undergo a wavelet transform, concentrating the information content into a fraction of the coefficients. The coefficients are then sorted by their information content (magnitude) into a small number of bins. Data are reconstructed by applying an inverse wavelet transform. If all of the coefficient bins are used during reconstruction, the process is lossless (up to floating-point round-off). If only a subset of the bins is used, an approximation of the original data is produced. A crucial point is that the principal benefit of reconstructing from a subset of wavelet coefficients is a reduction in I/O. Further, if smaller coefficients are simply discarded, or perhaps stored on more capacious tertiary storage, secondary storage requirements (e.g., disk) can be reduced as well. In practice, these reductions in I/O or storage can be on the order of tens or even hundreds. This talk will briefly describe the VAPOR Data Collection and will present real-world success stories from the geosciences that illustrate how progressive data access enables highly interactive exploration of Big Data.
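The bin-and-reconstruct scheme described above can be sketched as follows (a toy model of the VDC layout; the actual inverse wavelet transform is omitted and all names are hypothetical):

```python
import math

def magnitude_bins(coeffs, n_bins):
    """Sort coefficient indices by information content (magnitude)
    into n_bins bins, most significant bin first."""
    ranked = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    per = math.ceil(len(coeffs) / n_bins)
    return [ranked[b * per:(b + 1) * per] for b in range(n_bins)]

def approximate(coeffs, bins, n_use):
    """Zero every coefficient outside the first n_use bins; inverting
    the wavelet transform on the result gives the approximation."""
    keep = {i for b in bins[:n_use] for i in b}
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]
```

Reading one bin instead of all of them is exactly the I/O reduction the abstract emphasizes: the approximation improves monotonically as more bins are fetched.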
Multi-resolution analysis for ear recognition using wavelet features
NASA Astrophysics Data System (ADS)
Shoaib, M.; Basit, A.; Faye, I.
2016-11-01
Security is very important, and to avoid any physical contact it is necessary to identify humans while they are moving. Ear biometrics is one of the methods by which a person can be identified using surveillance cameras. Various techniques have been proposed to improve ear-based recognition systems. In this work, a feature extraction method for human ear recognition based on wavelet transforms is proposed. The proposed features are the approximation coefficients and specific level-two detail coefficients obtained after applying various types of wavelet transforms; different wavelet transforms are applied to find the most suitable wavelet. Minimum Euclidean distance is used as the matching criterion. Results achieved by the proposed method are promising, and the method can be used in a real-time ear recognition system.
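The minimum-Euclidean-distance matching criterion named above amounts to a nearest-template search over enrolled feature vectors (feature extraction itself is assumed done; the gallery layout is illustrative):

```python
import math

def nearest_template(query, gallery):
    """Return the enrolled identity whose wavelet feature vector is
    closest to the query in Euclidean distance."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(gallery, key=lambda name: dist(query, gallery[name]))
```

In a real system the gallery would hold one (or several) wavelet feature vectors per enrolled subject, and a distance threshold would reject impostors.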
Application of wavelet techniques for cancer diagnosis using ultrasound images: A Review.
Sudarshan, Vidya K; Mookiah, Muthu Rama Krishnan; Acharya, U Rajendra; Chandran, Vinod; Molinari, Filippo; Fujita, Hamido; Ng, Kwan Hoong
2016-02-01
Ultrasound is an important and low-cost imaging modality used to study the internal organs of the human body and blood flow through blood vessels. It uses high-frequency sound waves to acquire images of internal organs, and it is used to screen normal, benign and malignant tissues of various organs. Healthy and malignant tissues generate different ultrasound echoes, so ultrasound provides useful information about potential tumor tissues that can be analyzed for diagnostic purposes before therapeutic procedures. Ultrasound images are affected by speckle noise due to an air gap between the transducer probe and the body. The challenge is to design and develop robust image preprocessing, segmentation and feature extraction algorithms to locate the tumor region and to extract subtle information from the isolated tumor region for diagnosis. This information can be revealed using a scale-space technique such as the Discrete Wavelet Transform (DWT). It decomposes an image into images at different scales using low-pass and high-pass filters. These filters help to identify detail or sudden changes in intensity in the image, which are reflected in the wavelet coefficients. Various texture, statistical and image-based features can be extracted from these coefficients. The extracted features are subjected to statistical analysis to identify the significant features for discriminating normal and malignant ultrasound images using supervised classifiers. This paper presents a review of wavelet techniques used for the preprocessing, segmentation and feature extraction of breast, thyroid, ovarian and prostate cancer ultrasound images. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nagy, Szilvia; Pipek, János
2015-12-21
In wavelet-based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to calculate with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer-resolution coefficients in a very economical way. In the present contribution, we determine whether the method can be applied to predict not only the first but also the other, higher-resolution-level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine-resolution limit.
NASA Astrophysics Data System (ADS)
Palaniswamy, Sumithra; Duraisamy, Prakash; Alam, Mohammad Showkat; Yuan, Xiaohui
2012-04-01
Automatic speech processing systems are widely used in everyday life, for example in mobile communication, speech and speaker recognition, and for assisting the hearing impaired. In speech communication systems, the quality and intelligibility of speech is of utmost importance for ease and accuracy of information exchange. To obtain an intelligible speech signal that is also pleasant to listen to, noise reduction is essential. In this paper a new Time Adaptive Discrete Bionic Wavelet Thresholding (TADBWT) scheme is proposed. The proposed technique uses the Daubechies mother wavelet to achieve better enhancement of speech corrupted by additive non-stationary noises which occur in real life, such as street noise and factory noise. Due to the integration of a human auditory system model into the wavelet transform, the bionic wavelet transform (BWT) has great potential for speech enhancement and may open a new path in speech processing. In the proposed technique, the discrete BWT is first applied to noisy speech to derive TADBWT coefficients. Then the adaptive nature of the BWT is captured by introducing a time-varying linear factor which updates the coefficients at each scale over time. This approach has shown better performance than existing algorithms at lower input SNR due to modified soft level-dependent thresholding on time-adaptive coefficients. The objective and subjective test results confirmed the competency of the TADBWT technique. The effectiveness of the proposed technique is also evaluated for a speaker recognition task under noisy environments. The recognition results show that the TADBWT technique yields better performance when compared to alternate methods, specifically at lower input SNR.
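A minimal sketch of wavelet soft thresholding, the operation at the heart of such schemes, using a plain multilevel Haar transform with the level-dependent universal threshold. This is an assumption-laden stand-in: the paper's bionic wavelet transform and time-adaptive factor are not reproduced here.

```python
import numpy as np

def haar_fwd(x):
    """One forward Haar step: approximation and detail coefficients."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_inv(a, d):
    """Invert one Haar step exactly."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(x, levels=3):
    details, a = [], x
    for _ in range(levels):
        a, d = haar_fwd(a)
        # level-dependent universal threshold, noise scale from the MAD of details
        t = np.median(np.abs(d)) / 0.6745 * np.sqrt(2 * np.log(d.size))
        details.append(soft(d, t))
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

rng = np.random.default_rng(1)
n = 256
t_axis = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * 5 * t_axis)
noisy = clean + 0.3 * rng.standard_normal(n)
est = denoise(noisy)
```

The detail bands of a slowly varying signal carry mostly noise, so shrinking them reduces mean squared error while leaving the approximation band intact.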
Wavelets, ridgelets, and curvelets for Poisson noise removal.
Zhang, Bo; Fadili, Jalal M; Starck, Jean-Luc
2008-07-01
In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near Gaussian process with asymptotic constant variance. This new transform, which can be deemed as an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MS-VSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MS-VST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme reconstructs properly the final estimate. A range of examples show the power of this MS-VST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MS-VST approach is competitive relative to many existing denoising methods.
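The classical Anscombe transform that the MS-VST generalizes can be stated in a few lines; for moderate Poisson intensities it maps counts to approximately unit-variance Gaussian values regardless of the mean. The sketch below only illustrates the unfiltered Anscombe case, not the paper's filtered extension.

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson counts.

    For Poisson data with moderate intensity, 2*sqrt(x + 3/8) is
    approximately Gaussian with variance 1, whatever the mean.
    """
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

# empirical check: stabilized variance stays near 1 across intensities
rng = np.random.default_rng(42)
variances = [anscombe(rng.poisson(lam, 200_000)).var() for lam in (10, 30, 100)]
```

After stabilization, standard Gaussian hypothesis tests can be applied to the transformed coefficients, which is exactly the detection framework the abstract describes.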
NASA Astrophysics Data System (ADS)
Wang, Chun-mei; Zhang, Chong-ming; Zou, Jun-zhong; Zhang, Jian
2012-02-01
The diagnosis of several neurological disorders is based on the detection of typical pathological patterns in electroencephalograms (EEGs). This is a time-consuming task requiring significant training and experience. A lot of effort has been devoted to developing automatic detection techniques which might help not only in accelerating this process but also in avoiding the disagreement among readers of the same record. In this work, Neyman-Pearson criteria and a support vector machine (SVM) are applied for detecting an epileptic EEG. Decision making is performed in two stages: feature extraction by computing the wavelet coefficients and the approximate entropy (ApEn) and detection by using Neyman-Pearson criteria and an SVM. Then the detection performance of the proposed method is evaluated. Simulation results demonstrate that the wavelet coefficients and the ApEn are features that represent the EEG signals well. By comparison with Neyman-Pearson criteria, an SVM applied on these features achieved higher detection accuracies.
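One of the two features used here, approximate entropy, can be sketched directly from Pincus's textbook definition; parameters m and r below are common defaults, not necessarily the authors' choices.

```python
import numpy as np

def apen(x, m=2, r=None):
    """Approximate entropy (Pincus): regular signals score low, irregular high."""
    x = np.asarray(x, float)
    n = x.size
    if r is None:
        r = 0.2 * x.std()          # common tolerance: 20% of the signal SD
    def phi(m):
        # all overlapping length-m templates
        emb = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)   # fraction of templates matching each one
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(7)
regular = np.sin(np.linspace(0, 8 * np.pi, 300))   # stand-in for rhythmic EEG
irregular = rng.standard_normal(300)               # stand-in for irregular EEG
```

A periodic trace yields a much lower ApEn than white noise, which is why the feature helps separate EEG regimes.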
A study of renal blood flow regulation using the discrete wavelet transform
NASA Astrophysics Data System (ADS)
Pavlov, Alexey N.; Pavlova, Olga N.; Mosekilde, Erik; Sosnovtseva, Olga V.
2010-02-01
In this paper we provide a way to distinguish features of renal blood flow autoregulation mechanisms in normotensive and hypertensive rats based on the discrete wavelet transform. Using the variability of the wavelet coefficients we show distinctions that occur between the normal and pathological states. A reduction of this variability in hypertension is observed at the microscopic level of the blood flow in the efferent arterioles of single nephrons. This reduction is probably associated with the higher flexibility of a healthy cardiovascular system.
NASA Technical Reports Server (NTRS)
Jawerth, Bjoern; Sweldens, Wim
1993-01-01
We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
A wavelet approach to binary black holes with asynchronous multitasking
NASA Astrophysics Data System (ADS)
Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo
2016-03-01
Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice in combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.
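The core idea, that an interpolating-wavelet detail coefficient is the mismatch between a function value and its prediction from the coarser grid, and that thresholding these details selects collocation points, can be sketched with the simplest (linear) predictor. Production codes use higher-order Deslauriers-Dubuc interpolation; the linear choice here is an illustrative assumption.

```python
import numpy as np

def interpolating_details(f, level):
    """Details of an interpolating wavelet (linear prediction variant):
    value at each candidate midpoint minus its prediction from the coarse
    grid. Large details flag regions the coarse grid under-resolves."""
    coarse = np.linspace(0.0, 1.0, 2**level + 1)
    mid = (coarse[:-1] + coarse[1:]) / 2                 # candidate finer points
    prediction = (f(coarse[:-1]) + f(coarse[1:])) / 2    # linear interpolation
    return mid, f(mid) - prediction

# refine only where the local interpolation error exceeds a tolerance
mid, det = interpolating_details(lambda x: np.tanh(40 * (x - 0.5)), level=5)
keep = np.abs(det) > 1e-2
```

For this steep-front test function the retained points cluster tightly around the front at x = 0.5, which is exactly the adaptive behavior the abstract describes.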
Yan, Jianjun; Shen, Xiaojing; Wang, Yiqin; Li, Fufeng; Xia, Chunming; Guo, Rui; Chen, Chunfeng; Shen, Qingwei
2010-01-01
This study aims at utilising the Wavelet Packet Transform (WPT) and the Support Vector Machine (SVM) algorithm to carry out objective analysis and quantitative research on auscultation in Traditional Chinese Medicine (TCM) diagnosis. First, Wavelet Packet Decomposition (WPD) at level 6 was employed to split the auscultation signals into more elaborate frequency bands. Then statistical analysis was performed on the Wavelet Packet Energy (WPE) features extracted from the WPD coefficients. Furthermore, pattern recognition was used to distinguish the statistical feature values of mixed subjects' sample groups through the SVM. Finally, the experimental results showed that the classification accuracies were at a high level.
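Wavelet packet energy features of this kind can be sketched with a full Haar packet tree: unlike the plain DWT, both the low- and high-pass outputs are split at every level, giving 2^level equal-width frequency bands. The Haar wavelet and function names are illustrative assumptions; the paper's wavelet is not specified here.

```python
import numpy as np

def haar_step(x):
    """Split a band into its low-pass and high-pass halves (orthonormal Haar)."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def packet_energies(x, level):
    """Energies of all 2**level frequency bands of a full wavelet packet tree."""
    bands = [np.asarray(x, float)]
    for _ in range(level):
        bands = [half for b in bands for half in haar_step(b)]
    return np.array([np.sum(b**2) for b in bands])

rng = np.random.default_rng(3)
signal = rng.standard_normal(512)          # stand-in for an auscultation frame
wpe = packet_energies(signal, level=6)     # level 6, as in the study
```

The 64 band energies form the feature vector handed to the classifier; orthonormality guarantees they partition the total signal energy.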
Peak finding using biorthogonal wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, C.Y.
2000-02-01
The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian-like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigation) wavelets when used for peak finding in noisy data. They show that in this instance, their filters perform much better than the FBI wavelets.
Shape-Driven 3D Segmentation Using Spherical Wavelets
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2013-01-01
This paper presents a novel active surface segmentation algorithm using a multiscale shape representation and prior. We define a parametric model of a surface using spherical wavelet functions and learn a prior probability distribution over the wavelet coefficients to model shape variations at different scales and spatial locations in a training set. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior in the segmentation framework. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to the segmentation of brain caudate nucleus, of interest in the study of schizophrenia. Our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm by capturing finer shape details. PMID:17354875
A Novel Analysis Of The Connection Between Indian Monsoon Rainfall And Solar Activity
NASA Astrophysics Data System (ADS)
Bhattacharyya, S.; Narasimha, R.
2005-12-01
The existence of possible correlations between the solar cycle period, as extracted from the yearly means of sunspot numbers, and any periodicities that may be present in the Indian monsoon rainfall has been addressed using wavelet analysis. The wavelet transform coefficient maps of the sunspot-number time series and those of the homogeneous Indian monsoon rainfall annual time series reveal striking similarities, especially around the 11-year period. A novel method to analyse and quantify this similarity by devising statistical schemes is suggested in this paper. The wavelet transform coefficient maxima at the 11-year period for the sunspot numbers and the monsoon rainfall have each been modelled as a point process in time, and a statistical scheme for identifying a trend or dependence between the two processes has been devised. A regression analysis of parameters in these processes reveals a nearly linear trend with small but systematic deviations from the regressed line. Suitable function models for these deviations have been obtained through an unconstrained error minimisation scheme. These models provide an excellent fit to the time series of the given wavelet transform coefficient maxima obtained from actual data. Statistical significance tests on these deviations suggest with 99% confidence that the deviations are sample fluctuations obtained from normal distributions. In fact our earlier studies (see Bhattacharyya and Narasimha, 2005, Geophys. Res. Lett., Vol. 32, No. 5) revealed that average rainfall is higher during periods of greater solar activity for all cases, at confidence levels varying from 75% to 99%, being 95% or greater in 3 out of 7 of them. Analysis using standard wavelet techniques reveals higher power in the 8-16 y band during the higher solar activity period, in 6 of the 7 rainfall time series, at confidence levels exceeding 99.99%.
Furthermore, a comparison between the wavelet cross spectra of solar activity with rainfall and noise (including those simulating the rainfall spectrum and probability distribution) revealed that over the two test-periods respectively of high and low solar activity, the average cross power of the solar activity index with rainfall exceeds that with the noise at z-test confidence levels exceeding 99.99% over period-bands covering the 11.6 y sunspot cycle (see, Bhattacharyya and Narasimha, SORCE 2005 14-16th September, at Durango, Colorado USA). These results provide strong evidence for connections between Indian rainfall and solar activity. The present study reveals in addition the presence of subharmonics of the solar cycle period in the monsoon rainfall time series together with information on their phase relationships.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Espinosa-Paredes, Gilberto; Prieto-Guerrero, Alfonso; Nunez-Carrera, Alejandro
This paper introduces a wavelet-based method to analyze instability events in a boiling water reactor (BWR) during transient phenomena. The methodology to analyze BWR signals includes the following: (a) the short-time Fourier transform (STFT) analysis, (b) decomposition using the continuous wavelet transform (CWT), and (c) application of multiresolution analysis (MRA) using discrete wavelet transform (DWT). STFT analysis permits the study, in time, of the spectral content of analyzed signals. The CWT provides information about ruptures, discontinuities, and fractal behavior. To detect these important features in the signal, a mother wavelet has to be chosen and applied at several scales to obtain optimum results. MRA allows fast implementation of the DWT. Features like important frequencies, discontinuities, and transients can be detected with analysis at different levels of detail coefficients. The STFT was used to provide a comparison between a classic method and the wavelet-based method. The damping ratio, which is an important stability parameter, was calculated as a function of time. The transient behavior can be detected by analyzing the maximum contained in detail coefficients at different levels in the signal decomposition. This method allows analysis of both stationary signals and highly nonstationary signals in the timescale plane. This methodology has been tested with the benchmark power instability event of Laguna Verde nuclear power plant (NPP) Unit 1, which is a BWR-5 NPP.
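The transient-localization step can be sketched in a few lines: an abrupt change in a signal produces one outsized first-level detail coefficient, whose index maps straight back to the time of the event. The Haar detail and the synthetic step signal below are illustrative assumptions, not the BWR benchmark data.

```python
import numpy as np

def haar_detail(x):
    """First-level Haar detail coefficients of a signal."""
    return (x[0::2] - x[1::2]) / np.sqrt(2)

# a stationary oscillation with an abrupt step: a crude stand-in for a transient
n = 1024
t = np.arange(n)
sig = np.sin(2 * np.pi * t / 64.0)
sig[601:] += 2.0                                 # transient at sample 601
d1 = haar_detail(sig)
transient_at = 2 * int(np.argmax(np.abs(d1)))    # map detail index back to time
```

The detail coefficients of the smooth oscillation stay near the local slope, so the step dominates the maximum and pinpoints the event, which is the detection principle the abstract describes.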
NASA Astrophysics Data System (ADS)
de Macedo, Isadora A. S.; da Silva, Carolina B.; de Figueiredo, J. J. S.; Omoboya, Bode
2017-01-01
Wavelet estimation as well as seismic-to-well tie procedures are at the core of every seismic interpretation workflow. In this paper we perform a comparative study of wavelet estimation methods for seismic-to-well tie. Two approaches to wavelet estimation are discussed: a deterministic estimation, based on both seismic and well log data, and a statistical estimation, based on predictive deconvolution and the classical assumptions of the convolutional model, which provides a minimum-phase wavelet. Our algorithms for both wavelet estimation methods introduce a semi-automatic approach to determine the optimum parameters of deterministic and statistical wavelet estimation and, further, to estimate the optimum seismic wavelets by searching for the highest correlation coefficient between the recorded trace and the synthetic trace when the time-depth relationship is accurate. Tests with numerical data lead to some qualitative conclusions, which are likely useful for seismic inversion and interpretation of field data, by comparing deterministic and statistical wavelet estimation in detail, especially for the field data example. The feasibility of this approach is verified on real seismic and well data from the Viking Graben field, North Sea, Norway. Our results also show the influence of washout zones in the well log data on the quality of the well-to-seismic tie.
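The correlation-search idea reduces to: convolve a reflectivity series with each candidate wavelet and keep the candidate whose synthetic trace best correlates with the recorded trace. The Ricker candidates and peak frequencies below are stand-ins for the paper's parameter search, not its actual wavelets.

```python
import numpy as np

def ricker(f, dt=0.002, n=101):
    """Ricker wavelet of peak frequency f (Hz), sampled at dt seconds."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# sparse synthetic reflectivity and a "recorded" trace made with a 30 Hz wavelet
rng = np.random.default_rng(5)
reflectivity = rng.standard_normal(400) * (rng.random(400) < 0.05)
recorded = np.convolve(reflectivity, ricker(30.0), mode="same")
recorded += 0.01 * rng.standard_normal(recorded.size)

def corr(w):
    """Correlation between the synthetic trace for wavelet w and the record."""
    synth = np.convolve(reflectivity, w, mode="same")
    return np.corrcoef(synth, recorded)[0, 1]

# pick the candidate whose synthetic best matches the recorded trace
freqs = [10.0, 20.0, 30.0, 45.0]
best = max(freqs, key=lambda f: corr(ricker(f)))
```

With an accurate time-depth relationship the true wavelet maximizes the correlation coefficient, which is the selection criterion the abstract states.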
Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C
Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while providing the user control over where data loss, and thus reduction in accuracy, in the analysis occurs. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
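The block-by-block policy can be sketched as: keep every coefficient in blocks flagged salient, and only the largest few percent elsewhere. The function name, keep fractions, and random blocks below are illustrative assumptions, not the published storage model.

```python
import numpy as np

def compress_blocks(coeffs, salient, keep_salient=1.0, keep_context=0.05):
    """Keep all wavelet coefficients in salient blocks and only the largest
    fraction elsewhere. Returns the compressed blocks and the overall
    fraction of coefficients retained."""
    out, kept = [], 0
    for block, is_salient in zip(coeffs, salient):
        frac = keep_salient if is_salient else keep_context
        k = max(1, int(frac * block.size))
        thresh = np.sort(np.abs(block))[-k]          # k-th largest magnitude
        c = np.where(np.abs(block) >= thresh, block, 0.0)
        out.append(c)
        kept += np.count_nonzero(c)
    total = sum(b.size for b in coeffs)
    return out, kept / total

rng = np.random.default_rng(9)
blocks = [rng.standard_normal(1000) for _ in range(10)]
salient_mask = [i < 2 for i in range(10)]   # pretend the first two blocks are wake regions
compressed, retained = compress_blocks(blocks, salient_mask)
```

With two lossless blocks and 5 percent retention elsewhere, 24 percent of the coefficients survive, so the data reduction is concentrated exactly where the analyst said accuracy may be sacrificed.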
Person Authentication Using Learned Parameters of Lifting Wavelet Filters
NASA Astrophysics Data System (ADS)
Niijima, Koichi
2006-10-01
This paper proposes a method for identifying persons by the use of lifting wavelet parameters learned by kurtosis minimization. Our learning method uses desirable properties of the kurtosis and of the wavelet coefficients of a facial image. Exploiting these properties, the lifting parameters are trained so as to minimize the kurtosis of the lifting wavelet coefficients computed for the facial image. Since this minimization problem is ill-posed, it is solved with the aid of Tikhonov's regularization method. Our learning algorithm is applied to each of the faces to be identified to generate its feature vector, whose components consist of the learned parameters. The constructed feature vectors are stored together with the corresponding faces in a feature vector database. Person authentication is performed by comparing the feature vector of a query face with those stored in the database. In numerical experiments, the lifting parameters are trained for each of the neutral faces of 132 persons (74 males and 58 females) in the AR face database. Person authentication is executed by using the smile and anger faces of the same persons in the database.
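A lifting step with a free predict weight, the kind of parameter such a method would tune, can be sketched as follows; p = 0.5 recovers the standard linear-prediction wavelet. The periodic boundary handling, the update weight, and the kurtosis estimator are illustrative assumptions, not the author's filters.

```python
import numpy as np

def lifting_fwd(x, p):
    """One lifting step: split into even/odd samples, predict the odds from
    the evens with weight p, then update the evens from the details."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - p * (even + np.roll(even, -1))          # predict (periodic)
    approx = even + 0.25 * (detail + np.roll(detail, 1))   # update
    return approx, detail

def lifting_inv(approx, detail, p):
    """Undo the lifting step exactly by reversing update then predict."""
    even = approx - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + p * (even + np.roll(even, -1))
    x = np.empty(2 * even.size)
    x[0::2], x[1::2] = even, odd
    return x

def kurtosis(c):
    """Excess kurtosis of a coefficient array (the learning objective)."""
    c = c - c.mean()
    return np.mean(c**4) / np.mean(c**2) ** 2 - 3.0

rng = np.random.default_rng(11)
signal = np.cumsum(rng.standard_normal(256))   # smooth stand-in for an image row
a, d = lifting_fwd(signal, p=0.5)
```

Because each lifting step is trivially invertible for any p, the parameter can be trained freely (here, to minimize the kurtosis of d) without losing perfect reconstruction.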
NASA Astrophysics Data System (ADS)
Ji, Yi; Sun, Shanlin; Xie, Hong-Bo
2017-06-01
Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually flattened into a one-dimensional array, causing issues such as the curse of dimensionality and the small sample size problem. In addition, the lack of time-shift invariance of WT coefficients acts as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than on vectors as in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
PET-CT image fusion using random forest and à-trous wavelet transform.
Seal, Ayan; Bhattacharjee, Debotosh; Nasipuri, Mita; Rodríguez-Esparragón, Dionisio; Menasalvas, Ernestina; Gonzalo-Martin, Consuelo
2018-03-01
New image fusion rules for multimodal medical images are proposed in this work. The image fusion rules are defined by the random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method is threefold. First, source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients for forming the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure, along with four existing measures, has been presented, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality and that the new measure is meaningful.
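The à-trous decomposition underlying this fusion scheme can be sketched in 1-D: repeatedly smooth with a B3-spline kernel whose taps are spread apart by 2^j "holes", and record the difference at each scale. The sum of the detail planes and the final residual reconstructs the input exactly, which is what makes per-coefficient fusion followed by simple summation possible. The 1-D reduction and function name are illustrative simplifications of the 2-D transform.

```python
import numpy as np

def atrous(x, levels=3):
    """1-D à-trous (stationary) wavelet decomposition with the B3-spline
    kernel. Returns detail planes plus the final smooth residual; their
    sum reconstructs the input exactly."""
    kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    planes, current = [], np.asarray(x, float)
    for j in range(levels):
        # insert 2**j - 1 zeros ("holes") between the kernel taps
        k = np.zeros(4 * 2**j + 1)
        k[:: 2**j] = kernel
        smooth = np.convolve(np.pad(current, len(k) // 2, mode="wrap"), k, mode="valid")
        planes.append(current - smooth)   # detail plane at scale j
        current = smooth
    planes.append(current)                # smooth residual
    return planes

rng = np.random.default_rng(13)
x = rng.standard_normal(128)
planes = atrous(x)
```

In the fusion setting, corresponding planes of the two modalities are merged coefficient by coefficient (in the paper, by a random forest) and the fused planes are simply summed to invert the transform.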
Sonka, Milan; Abramoff, Michael D.
2013-01-01
In this paper, an MMSE estimator is employed for noise-free 3D OCT data recovery in the 3D complex wavelet domain. Since the distribution proposed for the noise-free data plays a key role in the performance of the MMSE estimator, a prior distribution for the pdf of noise-free 3D complex wavelet coefficients is proposed which is able to model the main statistical properties of wavelets. We model the coefficients with a mixture of two bivariate Gaussian pdfs with local parameters which are able to capture the heavy-tailed property and the inter- and intrascale dependencies of coefficients. In addition, based on the special structure of OCT images, we use an anisotropic windowing procedure for local parameter estimation that results in visual quality improvement. On this basis, several OCT despeckling algorithms are obtained based on using a Gaussian/two-sided Rayleigh noise distribution and a homomorphic/nonhomomorphic model. In order to evaluate the performance of the proposed algorithm, we use 156 selected ROIs from a 650 × 512 × 128 OCT dataset in the presence of wet AMD pathology. Our simulations show that the best MMSE estimator using the local bivariate mixture prior is for the nonhomomorphic model in the presence of Gaussian noise, which results in an improvement of 7.8 ± 1.7 in CNR. PMID:24222760
An hybrid neuro-wavelet approach for long-term prediction of solar wind
NASA Astrophysics Data System (ADS)
Napoli, Christian; Bonanno, Francesco; Capizzi, Giacomo
2011-06-01
Nowadays the interest in space weather and solar wind forecasting is increasing, becoming a problem of major relevance especially for the telecommunication industry, the military, and scientific research. Weather forecasting now reaches the ultimate high ground of the cosmos, where the environment can affect technological instrumentation. Interest therefore arises in the correct prediction of space events, such as ionized turbulence in the ionosphere or impacts from energetic particles in the Van Allen belts, and hence of the intensity and features of the solar wind and the magnetospheric response. The problem of data prediction can be faced using hybrid computational methods such as wavelet decomposition and recurrent neural networks (RNNs). Wavelet analysis was used in order to reduce data redundancies, obtaining a representation that expresses their intrinsic structure. The main advantage of wavelets is the ability to pack the energy of a signal, and in turn the relevant information it carries, into a few significant uncoupled coefficients. Neural networks (NNs) are a promising technique to exploit the complexity of non-linear data correlation. To obtain a correct prediction of the solar wind, an RNN was designed starting from the data series. As reported in the literature, because of the temporal memory of the data, an Adaptive Amplitude Real Time Recurrent Learning algorithm was used for a fully connected RNN with temporal delays. The inputs to the RNN were the coefficients from the biorthogonal wavelet decomposition of the solar wind velocity time series. The experimental data were collected during the NASA WIND mission, a spin-stabilized spacecraft launched in 1994 into a halo orbit around the L1 point. The data are provided by the SWE, a subsystem of the main craft designed to measure the flux of thermal protons and positive ions.
A support vector machine approach for classification of welding defects from ultrasonic signals
NASA Astrophysics Data System (ADS)
Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming
2014-07-01
Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
Wavelet packets for multi- and hyper-spectral imagery
NASA Astrophysics Data System (ADS)
Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.
2010-01-01
State-of-the-art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution, we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform, and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packet coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.
Phase synchronization based on a Dual-Tree Complex Wavelet Transform
NASA Astrophysics Data System (ADS)
Ferreira, Maria Teodora; Domingues, Margarete Oliveira; Macau, Elbert E. N.
2016-11-01
In this work, we show the applicability of our Discrete Complex Wavelet Approach (DCWA) to verify the phenomenon of the phase synchronization transition in two coupled chaotic Lorenz systems. DCWA is based on the phase assignment from complex wavelet coefficients obtained by using a Dual-Tree Complex Wavelet Transform (DT-CWT). We analyzed two coupled chaotic Lorenz systems, aiming to detect the transition from non-phase synchronization to phase synchronization. In addition, we check how well the method detects periods of 2π phase slips. In all experiments, DCWA is compared with classical phase detection methods such as those based on the arctangent and the Hilbert transform, showing a much better performance.
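The classical Hilbert-transform baseline that DCWA is compared against can be sketched directly: extract each signal's instantaneous phase from its analytic signal, then measure phase locking with the mean phase coherence. This sketch is the comparison method only, not the DT-CWT approach itself.

```python
import numpy as np

def inst_phase(x):
    """Instantaneous phase via the analytic signal (FFT-based Hilbert transform)."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0     # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0         # Nyquist bin for even length
    return np.angle(np.fft.ifft(spec * h))

def sync_index(x, y):
    """Mean phase coherence: 1 = perfect phase locking, 0 = no locking."""
    dphi = inst_phase(x) - inst_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 10, 2000, endpoint=False)
locked = sync_index(np.sin(2 * np.pi * 3 * t), np.sin(2 * np.pi * 3 * t + 1.0))
rng = np.random.default_rng(17)
unlocked = sync_index(np.sin(2 * np.pi * 3 * t), rng.standard_normal(t.size))
```

Two oscillators at the same frequency with a fixed phase offset score near 1, while a sine against noise scores near 0, which is the transition the coupled Lorenz experiment tracks.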
Resolution of the 1D regularized Burgers equation using a spatial wavelet approximation
NASA Technical Reports Server (NTRS)
Liandrat, J.; Tchamitchian, PH.
1990-01-01
The Burgers equation with a small viscosity term and initial and periodic boundary conditions is solved using a spatial approximation constructed from an orthonormal basis of wavelets. The algorithm is directly derived from the notions of multiresolution analysis and tree algorithms. Before the numerical algorithm is described, these notions are first recalled. The method uses extensively the localization properties of the wavelets in the physical and Fourier spaces. Moreover, we take advantage of the fact that the involved linear operators have constant coefficients. Finally, the algorithm can be considered as a time-marching version of the tree algorithm. The most important point is that an adaptive version of the algorithm exists: it allows one to significantly reduce the number of degrees of freedom required for a good computation of the solution. Numerical results and a description of the different elements of the algorithm are provided, in combination with mathematical comments on the method and some comparison with more classical numerical algorithms.
NASA Technical Reports Server (NTRS)
Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.
1996-01-01
Wavelet transforms, when combined with quantization and a suitable encoding, can be used to compress images effectively. In order to use them in image library systems, a compact storage scheme for quantized wavelet coefficient data must be developed with support for fast subregion retrieval. We have designed such a scheme, and in this paper we provide experimental studies to demonstrate that it achieves good image compression ratios while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.
Wavelet filter analysis of local atmospheric pressure effects in the long-period tidal bands
NASA Astrophysics Data System (ADS)
Hu, X.-G.; Liu, L. T.; Ducarme, B.; Hsu, H. T.; Sun, H.-P.
2006-11-01
It is well known that local atmospheric pressure variations clearly affect the observation of short-period Earth tides, such as diurnal, semi-diurnal and ter-diurnal tides, but local atmospheric pressure effects on the long-period Earth tides have not been studied in detail. This is because local atmospheric pressure is believed not to be sufficient for an effective pressure correction in the long-period tidal bands, and there are no efficient methods to investigate local atmospheric effects in these bands. The usual tidal analysis software packages, such as ETERNA, Baytap-G and VAV, cannot provide detailed pressure admittances for the long-period tidal bands. We propose a wavelet method to investigate local atmospheric effects on gravity variations in the long-period tidal bands. This method constructs an efficient orthogonal filter bank with Daubechies wavelets of high vanishing moments. The main advantage of the wavelet filter bank is that it has an excellent low-frequency response and efficiently suppresses the instrumental drift of superconducting gravimeters (SGs) without using any mathematical model. Applying the wavelet method to 13 years of continuous gravity observations from SG T003 in Brussels, Belgium, we filtered 12 long-period tidal groups into eight narrow frequency bands. The wavelet method demonstrates that local atmospheric pressure fluctuations are highly correlated with the noise of SG measurements in the period band of 4-40 days, with correlation coefficients higher than 0.95, and that local atmospheric pressure variations are the main error source for the determination of the tidal parameters in these bands. We show the significant improvement in long-period tidal parameters provided by the wavelet method in terms of precision.
Effective implementation of wavelet Galerkin method
NASA Astrophysics Data System (ADS)
Finěk, Václav; Šimunková, Martina
2012-11-01
It was proved by W. Dahmen et al. that an adaptive wavelet scheme is asymptotically optimal for a wide class of elliptic equations. This scheme approximates the solution u by a linear combination of N wavelets, and a benchmark for its performance is the best N-term approximation, which is obtained by retaining the N largest wavelet coefficients of the unknown solution. Moreover, the number of arithmetic operations needed to compute the approximate solution is proportional to N. The most time-consuming part of this scheme is the approximate matrix-vector multiplication. In this contribution, we introduce our implementation of the wavelet Galerkin method for the Poisson equation -Δu = f on a hypercube with homogeneous Dirichlet boundary conditions. In our implementation, we identify the nonzero elements of the stiffness matrix corresponding to the above problem and perform matrix-vector multiplication only with these nonzero elements.
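The nonzero-only multiplication can be sketched on a hypothetical 1-D analogue: a tridiagonal Poisson stiffness matrix stored in compressed sparse row (CSR) form, so that the matrix-vector product touches only the stored entries (this is a generic sketch, not the authors' multidimensional wavelet stiffness matrix):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Multiply a CSR-stored sparse matrix by a vector,
    visiting only the stored nonzero entries."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# 1-D Poisson stiffness matrix (tridiagonal) for -u'' = f on 5 interior
# nodes with mesh width h = 1/6 and homogeneous Dirichlet conditions.
n, h = 5, 1.0 / 6.0
data, indices, indptr = [], [], [0]
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            data.append(2.0 / h if j == i else -1.0 / h)
            indices.append(j)
    indptr.append(len(data))

y = csr_matvec(np.array(data), indices, indptr, np.ones(n))
print(y)  # [6. 0. 0. 0. 6.]
```

The constant vector is annihilated at interior rows (second difference of a constant is zero), while boundary rows feel the Dirichlet condition; the cost scales with the number of nonzeros rather than n^2.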
Shirazinodeh, Alireza; Noubari, Hossein Ahmadi; Rabbani, Hossein; Dehnavi, Alireza Mehri
2015-01-01
Recent studies on wavelet transforms and fractal modeling applied to mammograms for the detection of cancerous tissues indicate that microcalcifications and masses can be utilized for the study of the morphology and diagnosis of cancerous cases. It has been shown that fractal modeling, as applied to a given image, can clearly discern cancerous zones from noncancerous areas. For fractal modeling, the original image is first segmented into appropriate fractal boxes, followed by identifying the fractal dimension of each windowed section using a computationally efficient two-dimensional box-counting algorithm. Furthermore, using appropriate wavelet sub-bands and image reconstruction based on modified wavelet coefficients, it is shown that it is possible to arrive at enhanced features for the detection of cancerous zones. In this paper, we have attempted to benefit from the advantages of both fractals and wavelets by introducing a new algorithm, named F1W2. The original image is first segmented into appropriate fractal boxes, and the fractal dimension of each windowed section is extracted. Then, by applying a maximum-level threshold on the fractal dimension matrix, the best-segmented boxes are selected. In the next step, the candidate cancerous zones are decomposed using the standard orthogonal wavelet transform with the db2 wavelet at three different resolution levels, and after nullifying the wavelet coefficients of the image at the first scale and the low-frequency band of the third scale, the modified reconstructed image is utilized for the detection of breast cancer regions by applying an appropriate threshold. For the detection of cancerous zones, our simulations indicate an accuracy of 90.9% for masses and 88.99% for microcalcifications using the F1W2 method.
For classification of the detected microcalcifications into benign and malignant cases, eight features are identified and utilized in a radial basis function neural network. Our simulation results indicate a classification accuracy of 92% using the F1W2 method.
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Bindhu, V. M.; Adamowski, Jan; Narasimhan, Balaji; Khosa, Rakesh
2017-10-01
An investigation of the scaling characteristics of vegetation and temperature data derived from LANDSAT data was undertaken for a heterogeneous area in Tamil Nadu, India. A wavelet-based multiresolution technique decomposed the data into large-scale mean vegetation and temperature fields and fluctuations in horizontal, diagonal, and vertical directions at hierarchical spatial resolutions. In this approach, the wavelet coefficients were used to investigate whether the normalized difference vegetation index (NDVI) and land surface temperature (LST) fields exhibited self-similar scaling behaviour. In this study, L-moments were used instead of conventional moments to understand scaling behaviour. Using the first six moments of the wavelet coefficients through five levels of dyadic decomposition, the NDVI data were shown to be statistically self-similar, with a slope of approximately -0.45 in each of the horizontal, vertical, and diagonal directions of the image, over scales ranging from 30 to 960 m. The temperature data were also shown to exhibit self-similarity with slopes ranging from -0.25 in the diagonal direction to -0.20 in the vertical direction over the same scales. These findings can help develop appropriate up- and down-scaling schemes of remotely sensed NDVI and LST data for various hydrologic and environmental modelling applications. A sensitivity analysis was also undertaken to understand the effect of mother wavelets on the scaling characteristics of LST and NDVI images.
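The self-similarity test can be sketched in one dimension by regressing the log2-variance of wavelet detail coefficients on decomposition level (a minimal Haar sketch with ordinary moments on synthetic series; the study itself uses L-moments on 2-D LANDSAT fields):

```python
import numpy as np

def detail_log2_variances(x, levels):
    """log2 of the detail-coefficient variance at each Haar decomposition level."""
    a, out = np.asarray(x, float), []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        out.append(np.log2(np.var(d)))
    return np.array(out)

rng = np.random.default_rng(0)
noise = rng.standard_normal(2 ** 14)   # white noise: no scaling
walk = np.cumsum(noise)                # Brownian walk: self-similar, H = 1/2

levels = np.arange(1, 7)
slope_noise = np.polyfit(levels, detail_log2_variances(noise, 6), 1)[0]
slope_walk = np.polyfit(levels, detail_log2_variances(walk, 6), 1)[0]
# For fractional Brownian motion Var(d_j) ~ 2^{j(2H+1)}, so the walk's
# log2-variance grows with level (slope close to 2) while white noise stays flat.
print(round(slope_noise, 2), round(slope_walk, 2))
```

A straight line in this log-variance versus level plot is the 1-D analogue of the statistical self-similarity reported for the NDVI and LST fields.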
NASA Astrophysics Data System (ADS)
Singh, Hukum
2016-06-01
An asymmetric scheme has been proposed for optical double-image encryption in the gyrator wavelet transform (GWT) domain. Grayscale and binary images are encrypted separately using double random phase encoding (DRPE) in the GWT domain. Phase masks based on devil's vortex Fresnel lenses (DVFLs) and random phase masks (RPMs) are jointly used in the spatial as well as the Fourier plane. The images to be encrypted are first gyrator transformed and then single-level discrete wavelet transformed (DWT) to decompose them into the LL, HL, LH and HH matrices of approximation, horizontal, vertical and diagonal coefficients. The resulting DWT coefficients are multiplied by other RPMs, and the results are passed through an inverse discrete wavelet transform (IDWT) to obtain the encrypted images. The images are recovered from their corresponding encrypted images by using the correct parameters of the GWT and DVFL; the digital implementation has been performed using MATLAB 7.6.0 (R2008a). The mother wavelet family, DVFL and gyrator transform orders associated with the GWT are extra keys that increase the difficulty for an attacker, making the scheme more secure than conventional techniques. The efficacy of the proposed scheme is verified by computing the mean squared error (MSE) between the recovered and original images. The sensitivity of the proposed scheme to the encryption parameters and to noise attacks is also verified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maiolo, M., E-mail: massimo.maiolo@zhaw.ch; ZHAW, Institut für Angewandte Simulation, Grüental, CH-8820 Wädenswil; Vancheri, A., E-mail: alberto.vancheri@supsi.ch
In this paper, we apply Multiresolution Analysis (MRA) to develop sparse but accurate representations of the Multiscale Coarse-Graining (MSCG) approximation to the many-body potential of mean force. We rigorously frame the MSCG method within MRA so that all the instruments of this theory become available, together with a multitude of new basis functions, namely the wavelets. The coarse-grained (CG) force field is hierarchically decomposed at different resolution levels, making it possible to choose the most appropriate wavelet family for each physical interaction without requiring a priori knowledge of where the details are localized. The representation of the CG potential in this new efficient orthonormal basis leads to a compression of the signal information into a few large expansion coefficients. The multiresolution property of the wavelet transform makes it possible to isolate and remove the noise from the CG force-field reconstruction by thresholding the basis function coefficients of each frequency band independently. We discuss the implementation of our wavelet-based MSCG approach and demonstrate its accuracy using two different condensed-phase systems, i.e. liquid water and methanol. Simulations of liquid argon have also been performed using a one-to-one mapping between atomistic and CG sites. The latter model allows us to verify the accuracy of the method and to test different choices of wavelet families. Furthermore, the results of the computer simulations show that the efficiency and sparsity of the representation of the CG force field can be traced back to the mathematical properties of the chosen family of wavelets. This result is in agreement with what is known from the theory of multiresolution analysis of signals.
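The compression-into-few-coefficients property can be sketched on a smooth 1-D curve standing in for a coarse-grained interaction profile (a generic Haar sketch, not the MSCG force field itself):

```python
import numpy as np

def haar_coeffs(x, levels):
    """All orthogonal Haar coefficients: details d1..dL plus the final approximation."""
    a, coeffs = np.asarray(x, float), []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        coeffs.append(d)
    coeffs.append(a)
    return np.concatenate(coeffs)

# Smooth bump sampled on 1024 points, an illustrative stand-in for a
# localized interaction profile.
x = np.linspace(0.0, 1.0, 1024)
u = np.exp(-((x - 0.5) ** 2) / (2 * 0.05 ** 2))

c = haar_coeffs(u, 8)
energy = np.sort(c ** 2)[::-1]
top5 = int(0.05 * len(energy))
fraction = energy[:top5].sum() / energy.sum()
print(round(fraction, 4))  # the largest 5% of coefficients carry most of the energy
```

For smooth, localized signals the ordered coefficient energies decay rapidly, which is exactly why thresholding the small coefficients removes noise while barely touching the reconstruction.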
Wavelet methodology to improve single unit isolation in primary motor cortex cells
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A.
2016-01-01
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, Symlet, and Coiflet; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root mean square, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. PMID:25794461
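The threshold-and-reconstruct step can be sketched with a Haar transform, the fixed-form (universal) threshold and the soft thresholding rule (a generic sketch on a synthetic spiky trace; the paper compares several mother wavelets and schemes on real recordings):

```python
import numpy as np

def haar_fwd(x, levels):
    a, details = np.asarray(x, float), []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        details.append(d)
    return a, details

def haar_inv(a, details):
    for d in reversed(details):
        x = np.empty(2 * len(d))
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        a = x
    return a

rng = np.random.default_rng(1)
n = 2048
clean = np.zeros(n)
clean[500:520] = 5.0       # crude stand-ins for spike waveforms
clean[1200:1215] = -4.0
noisy = clean + rng.standard_normal(n)

a, details = haar_fwd(noisy, 5)
sigma = np.median(np.abs(details[0])) / 0.6745    # robust noise estimate
thr = sigma * np.sqrt(2.0 * np.log(n))            # fixed-form (universal) threshold
details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]  # soft rule
denoised = haar_inv(a, details)

print(round(np.mean((noisy - clean) ** 2), 2),
      round(np.mean((denoised - clean) ** 2), 2))
```

The noise floor is estimated from the finest detail band, where the signal contributes only at spike edges, so the median-based estimate is barely biased.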
Decentralized modal identification using sparse blind source separation
NASA Astrophysics Data System (ADS)
Sadhu, A.; Hazra, B.; Narasimhan, S.; Pandey, M. D.
2011-12-01
Popular ambient vibration-based system identification methods process information collected from a dense array of sensors centrally to yield the modal properties. In such methods, the need for a centralized processing unit capable of satisfying large memory and processing demands is unavoidable. With the advent of wireless smart sensor networks, it is now possible to process information locally at the sensor level instead. The information at the individual sensor level can then be concatenated to obtain the global structural characteristics. A novel decentralized algorithm based on wavelet transforms to infer global structural mode information using measurements obtained from a small group of sensors at a time is proposed in this paper. The focus of the paper is on algorithmic development, while the actual hardware and software implementation is not pursued here. The problem of identification is cast within the framework of under-determined blind source separation, invoking transformations of measurements to the time-frequency domain that result in a sparse representation. The partial mode shape coefficients so identified are then combined to yield complete modal information. The transformations are undertaken using the stationary wavelet packet transform (SWPT), yielding a sparse representation in the wavelet domain. Principal component analysis (PCA) is then performed on the resulting wavelet coefficients, yielding the partial mixing matrix coefficients from a few measurement channels at a time. This process is repeated using measurements obtained from multiple sensor groups, and the results so obtained from each group are concatenated to obtain the global modal characteristics of the structure.
Wavelet-based 3-D inversion for frequency-domain airborne EM data
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.
2018-04-01
In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion has inherent multiresolution capability. To enforce the sparsity constraint, we minimize an L1-norm measure in the wavelet domain, which generally yields a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, deliver a more stable inversion process and better local resolution, while lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test the new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared with the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
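The L1-minimization step can be sketched as iteratively reweighted least squares on a toy linear system with a sparse "wavelet-domain" model (a generic sketch; the matrix here is random, not the authors' EM forward operator):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))         # toy forward operator
x_true = np.zeros(20)
x_true[[3, 7, 15]] = [2.0, -1.5, 1.0]     # sparse wavelet-domain model
b = A @ x_true + 0.01 * rng.standard_normal(50)

lam, eps = 1e-3, 1e-8
x = np.linalg.lstsq(A, b, rcond=None)[0]  # start from the least-squares solution
for _ in range(30):
    # Approximate the L1 penalty |x_i| by x_i^2 / (|x_i| + eps), which turns
    # each iteration into a reweighted quadratic (ridge-like) problem.
    W = np.diag(lam / (np.abs(x) + eps))
    x = np.linalg.solve(A.T @ A + W, A.T @ b)

on = np.abs(x[[3, 7, 15]])
off = np.abs(np.delete(x, [3, 7, 15]))
print(round(on.min(), 2), round(off.max(), 4))
```

Small entries receive ever larger weights and are driven toward zero, while the few large coefficients are barely penalized, which is the mechanism behind the sparse wavelet-domain solutions described above.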
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder.
Mumtaz, Wajid; Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad; Malik, Aamir Saeed
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based predictions of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, the acquisition of experimental data involved 34 MDD patients and 30 healthy controls. A feature matrix was constructed from a time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. Because the resultant EEG data matrix had high dimensionality, dimension reduction was performed with a rank-based feature selection method according to a receiver operating characteristic (ROC) criterion. The most significant features were thus identified and utilized during the training and testing of a classification model, a logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with the other time-frequency approaches, the WT analysis showed the highest classification accuracy (accuracy = 87.5%, sensitivity = 95%, specificity = 80%). In conclusion, significant wavelet coefficients extracted from frontal and temporal pretreatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients.
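The rank-based selection step can be sketched by scoring each feature with the area under its ROC curve and keeping the top-ranked ones (a minimal sketch on synthetic data with hypothetical class labels, not the authors' EEG features):

```python
import numpy as np

def auc(score, label):
    """Area under the ROC curve via the Mann-Whitney statistic."""
    pos, neg = score[label == 1], score[label == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(3)
n = 200
label = (np.arange(n) % 2 == 0).astype(int)  # toy responder / non-responder labels
X = rng.standard_normal((n, 5))
X[:, 0] += 1.5 * label                       # only feature 0 is informative

scores = np.array([auc(X[:, j], label) for j in range(5)])
ranking = np.argsort(-np.abs(scores - 0.5))  # rank by discriminative power
print(ranking[0], np.round(scores, 2))
```

Scoring each feature independently scales linearly with dimensionality, which makes this kind of filter practical on a high-dimensional wavelet feature matrix before a classifier is trained.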
Nazarzadeh, Kimia; Arjunan, Sridhar P; Kumar, Dinesh K; Das, Debi Prasad
2016-08-01
In this study, we analyzed accelerometer data recorded during gait analysis of Parkinson's disease patients to detect freezing of gait (FOG) episodes. The proposed method filters the recordings to reduce noise in the leg-movement signals and computes wavelet coefficients to detect FOG events. A publicly available FOG database was used, and the technique was evaluated using receiver operating characteristic (ROC) analysis. Results show that the wavelet feature discriminates FOG events from background activity better than the existing technique.
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the direction of maximum correlation. The experimental results are comparable, and for some images with low correlation superior, to existing coders.
NASA Astrophysics Data System (ADS)
Peresunko, A. P.; Zavadovskya, I. G.
2004-06-01
The paper studies the prognostic possibilities of determining the orientation structure of the endometrial stroma in the normal state and in hyperplasia. The laser diagnostics of the endometrial state is based on the optical changes of laser radiation as it passes through a histological sample, followed by investigation of its wavelet coefficients.
Wavelet assessment of cerebrospinal compensatory reserve and cerebrovascular pressure reactivity
NASA Astrophysics Data System (ADS)
Latka, M.; Turalska, M.; Kolodziej, W.; Latka, D.; West, B.
2006-03-01
We employ complex continuous wavelet transforms to develop a consistent mathematical framework capable of quantifying both cerebrospinal compensatory reserve and cerebrovascular pressure reactivity. The wavelet gain, defined as the frequency-dependent ratio of time-averaged wavelet coefficients of intracranial pressure (ICP) and arterial blood pressure (ABP) fluctuations, characterizes the dampening of spontaneous arterial blood pressure oscillations. This gain is introduced as a novel measure of cerebrospinal compensatory reserve. For a group of 10 patients who died as a result of head trauma (Glasgow Outcome Scale GOS=1), the average gain of 0.45, calculated at 0.05 Hz, significantly exceeds that of 16 patients with favorable outcome (GOS=2), whose average gain is 0.24 (p=4x10^-5). We also study the dynamics of the instantaneous phase difference between the fluctuations of the ABP and ICP time series. The time-averaged synchronization index, which depends upon frequency, yields information about the stability of the phase difference and is used as a cerebrovascular pressure-reactivity index. The average phase difference for GOS=1 is close to zero, in sharp contrast to the mean value of 30° for patients with GOS=2. We hypothesize that in patients who died, the impairment of cerebral autoregulation is followed by the breakdown of residual pressure reactivity.
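The gain-and-phase computation can be sketched with a complex Morlet wavelet at a single analysis frequency (a synthetic sketch assuming toy sinusoidal ABP/ICP surrogates, not patient data):

```python
import numpy as np

fs, f0 = 10.0, 0.05                 # sample rate (Hz) and analysis frequency (Hz)
t = np.arange(0, 2000.0, 1.0 / fs)

abp = np.sin(2 * np.pi * f0 * t)                      # toy ABP oscillation
icp = 0.4 * np.sin(2 * np.pi * f0 * t + np.pi / 6)    # dampened, phase-shifted ICP

# Complex Morlet wavelet with roughly 6 cycles at f0.
sigma = 6.0 / (2.0 * np.pi * f0)
tw = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
psi = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw**2 / (2 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))

W_abp = np.convolve(abp, psi, mode="same")
W_icp = np.convolve(icp, psi, mode="same")

edge = len(tw)                       # discard edge-contaminated coefficients
gain = np.mean(np.abs(W_icp[edge:-edge])) / np.mean(np.abs(W_abp[edge:-edge]))
phase = np.angle(np.mean(W_icp[edge:-edge] * np.conj(W_abp[edge:-edge])))
print(round(gain, 2), round(np.degrees(phase), 1))
```

Because the wavelet is complex, both the amplitude ratio (the gain) and the instantaneous phase difference fall out of the same pair of coefficient series.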
Design and evaluation of sparse quantization index modulation watermarking schemes
NASA Astrophysics Data System (ADS)
Cornelis, Bruno; Barbarien, Joeri; Dooms, Ann; Munteanu, Adrian; Cornelis, Jan; Schelkens, Peter
2008-08-01
In the past decade the use of digital data has increased significantly. The advantages of digital data are, amongst others, easy editing, fast, cheap and cross-platform distribution and compact storage. The most crucial disadvantages are the unauthorized copying and copyright issues, by which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike analog information, the reproduction in the digital case is simple and robust. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, which is the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images, based on selective quantization of the coefficients of a wavelet transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield a slightly improved performance over block-based grouping schemes. Additionally, the impact of the deployment of error correction codes on the most promising configurations is examined. The utilization of BCH-codes (Bose, Ray-Chaudhuri, Hocquenghem) results in an improved robustness as long as the capacity of the error codes is not exceeded (cliff-effect).
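Scalar QIM can be sketched in a few lines: each selected coefficient is quantized onto one of two interleaved lattices depending on the watermark bit, and detection picks the nearer lattice (a generic sketch without the wavelet grouping or BCH coding evaluated in the article):

```python
import numpy as np

DELTA = 1.0  # quantization step: larger means more robustness but more distortion

def qim_embed(c, bits):
    """Quantize each coefficient onto the lattice selected by its bit."""
    offset = np.asarray(bits) * DELTA / 2.0
    return DELTA * np.round((c - offset) / DELTA) + offset

def qim_detect(c):
    """Choose the bit whose lattice lies nearer to the received coefficient."""
    d0 = np.abs(c - DELTA * np.round(c / DELTA))
    d1 = np.abs(c - (DELTA * np.round((c - DELTA / 2) / DELTA) + DELTA / 2))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(4)
coeffs = rng.standard_normal(64) * 5.0          # stand-in "wavelet coefficients"
bits = rng.integers(0, 2, size=64)

marked = qim_embed(coeffs, bits)
attacked = marked + rng.uniform(-DELTA / 5, DELTA / 5, size=64)  # mild noise attack
recovered = qim_detect(attacked)
print((recovered == bits).mean())  # 1.0, since the noise stays below Delta/4
```

Recovery is exact as long as the perturbation stays below a quarter of the step size, which is the robustness margin that grouping schemes and error-correcting codes are then used to extend.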
Search prefilters to assist in library searching of infrared spectra of automotive clear coats.
Lavine, Barry K; Fasasi, Ayuba; Mirjankar, Nikhil; White, Collin; Sandercock, Mark
2015-01-01
Clear coat searches of the infrared (IR) spectral library of the paint data query (PDQ) forensic database often generate an unusable number of hits that span multiple manufacturers, assembly plants, and years. To improve the accuracy of the hit list, pattern recognition methods have been used to develop search prefilters (i.e., principal component models) that differentiate between similar but non-identical IR spectra of clear coats on the basis of manufacturer (e.g., General Motors, Ford, Chrysler) or assembly plant. A two-step procedure was employed to develop these search prefilters. First, the discrete wavelet transform was used to decompose each IR spectrum into wavelet coefficients to enhance subtle but significant features in the spectral data. Second, a genetic algorithm for IR spectral pattern recognition was employed to identify wavelet coefficients characteristic of the manufacturer or assembly plant of the vehicle. Even in challenging trials where the paint samples evaluated were all from the same manufacturer (General Motors) within a limited production year range (2000-2006), the respective assembly plant of the vehicle was correctly identified. Search prefilters to identify assembly plants were successfully validated using 10 blind samples provided by the Royal Canadian Mounted Police (RCMP) as part of a study to populate PDQ to current production years, whereas the search prefilter to discriminate among automobile manufacturers was successfully validated using IR spectra obtained directly from the PDQ database.
Decision support system for diabetic retinopathy using discrete wavelet transform.
Noronha, K; Acharya, U R; Nayak, K P; Kamath, S; Bhandary, S V
2013-03-01
Prolonged duration of diabetes may affect the tiny blood vessels of the retina, causing diabetic retinopathy. Routine eye screening of patients with diabetes helps to detect diabetic retinopathy at an early stage, but it is laborious and time-consuming for doctors to go through many fundus images continuously. A decision support system for diabetic retinopathy detection can therefore reduce the burden on ophthalmologists. In this work, we have used the discrete wavelet transform and a support vector machine classifier for automated detection of the normal and diabetic retinopathy classes. The wavelet-based decomposition was performed up to the second level, and eight energy features were extracted: two energy features from the approximation coefficients of the two levels and six energy values from the details in three orientations (horizontal, vertical and diagonal). These features were fed to the support vector machine classifier with various kernel functions (linear, radial basis function, polynomial of orders 2 and 3) to evaluate the highest classification accuracy. We obtained the highest average classification accuracy, sensitivity and specificity of more than 99% with the support vector machine classifier (polynomial kernel of order 3) using three discrete wavelet transform features. We have also proposed an integrated index, called the Diabetic Retinopathy Risk Index, which uses clinically significant wavelet energy features to identify the normal and diabetic retinopathy classes with just one number. We believe that this index can be used as an adjunct tool by doctors during eye screening to cross-check their diagnosis.
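The orientation-energy features can be sketched with a single-level 2-D Haar transform (a minimal sketch on a synthetic pattern; the paper uses two decomposition levels on fundus images):

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform: approximation plus detail subbands
    capturing horizontal, vertical and diagonal intensity variation."""
    a = img.astype(float)
    L = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)   # low-pass across columns
    H = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)   # high-pass across columns
    approx = (L[0::2] + L[1::2]) / np.sqrt(2.0)
    vert = (L[0::2] - L[1::2]) / np.sqrt(2.0)      # variation down the rows
    horiz = (H[0::2] + H[1::2]) / np.sqrt(2.0)     # variation across the columns
    diag = (H[0::2] - H[1::2]) / np.sqrt(2.0)
    return approx, horiz, vert, diag

# Vertical stripes: intensity varies only in the horizontal direction,
# so the energy should concentrate in the horizontal-detail subband.
img = np.tile([1.0, -1.0], (64, 32))
approx, horiz, vert, diag = haar2d(img)
energies = [np.sum(b ** 2) for b in (approx, horiz, vert, diag)]
print([round(e, 1) for e in energies])
```

These per-subband energies are exactly the kind of compact feature vector that can then be fed to a classifier such as an SVM.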
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, A.B.; Clothiaux, E.
Because of Earth's gravitational field, its atmosphere is strongly anisotropic with respect to the vertical; the effect of the Earth's rotation on synoptic wind patterns also causes a more subtle form of anisotropy in the horizontal plane. The authors survey various approaches to statistically robust anisotropy from a wavelet perspective and present a new one adapted to strongly non-isotropic fields that are sampled on a rectangular grid with a large aspect ratio. This novel technique uses an anisotropic version of Multi-Resolution Analysis (MRA) in image analysis; the authors form a tensor product of the standard dyadic Haar basis, where the dividing ratio is λ_z = 2, and a nonstandard triadic counterpart, where the dividing ratio is λ_x = 3. The natural support of the field is therefore 2^n pixels (vertically) by 3^n pixels (horizontally), where n is the number of levels in the MRA. The natural triadic basis includes the French top-hat wavelet, which resonates with bumps in the field, whereas the Haar wavelet responds to ramps or steps. The complete 2D basis has one scaling function and five wavelets. The resulting anisotropic MRA is designed for application to the liquid water content (LWC) field in boundary-layer clouds, as the prevailing wind advects them past a vertically pointing mm-radar system. Spatial correlations are notoriously long-range in cloud structure, and the authors use the wavelet coefficients from the new MRA to characterize these correlations in a multifractal analysis scheme. In the present study, the MRA is used (in synthesis mode) to generate fields that mimic cloud structure quite realistically, although only a few parameters are used to control the randomness of the LWC's wavelet coefficients.
Investigation of using wavelet analysis for classifying pattern of cyclic voltammetry signals
NASA Astrophysics Data System (ADS)
Jityen, Arthit; Juagwon, Teerasak; Jaisuthi, Rawat; Osotchan, Tanakorn
2017-09-01
Wavelet analysis is an excellent technique for data processing based on linear algebra, since it can perform local analysis and examine a localized area of a large signal. In this work, wavelet analysis of cyclic waveforms was investigated in order to find distinguishing features in the cyclic data, with the resulting wavelet coefficients proposed as cyclic feature parameters. The cyclic voltammograms (CVs) of different electrodes consisting of carbon nanotube (CNT) and several types of metal phthalocyanine (MPc) powders, including CoPc, FePc, ZnPc and MnPc, were used as sets of cyclic data for various types of coffee. The mixture powder was embedded in a hollow Teflon rod and used as working electrodes. The electrochemical response of the fabricated electrodes in Robusta, blend coffee I, blend coffee II, chocolate malt and cocoa at the same concentrations was measured at a scan rate of 0.05 V/s from -1.5 to 1.5 V with respect to an Ag/AgCl electrode for five scanning loops. The CVs of the blended CNT electrode with some MPc electrodes indicated an ionic interaction, which may result from catalytic oxidation of saccharides and/or polyphenols on the sensor surface. The major information in the CV response can be extracted by using several mother wavelet families, namely Daubechies (db1 to db3), Coiflet (coif1), biorthogonal (bior1.1) and Symlet (sym2), and the wavelet coefficients of each data group can then be discriminated by principal component analysis (PCA). The PCA results indicated clearly separated groups, with PC1 and PC2 together accounting for more than 62.37% of the total contribution.
Wavelet-based automatic determination of the P- and S-wave arrivals
NASA Astrophysics Data System (ADS)
Bogiatzis, P.; Ishii, M.
2013-12-01
The detection of P- and S-wave arrivals is important for a variety of seismological applications, including earthquake detection and characterization, and seismic tomography problems such as imaging of hydrocarbon reservoirs. For many years, dedicated human analysts manually selected the arrival times of P and S waves. However, with the rapid expansion of seismic instrumentation, automatic techniques that can process a large number of seismic traces are becoming essential in tomographic applications and for earthquake early-warning systems. In this work, we present a pair of algorithms for efficient picking of P and S onset times. The algorithms are based on the continuous wavelet transform of the seismic waveform, which allows examination of a signal in both the time and frequency domains. Unlike the Fourier transform, the basis functions are localized in time and frequency; therefore, wavelet decomposition is suitable for the analysis of non-stationary signals. For detecting the P-wave arrival, the wavelet coefficients are calculated using the vertical component of the seismogram, and the onset time of the wave is identified. In the case of the S-wave arrival, we take advantage of the polarization of the shear waves and cross-examine the wavelet coefficients from the two horizontal components. In addition to the onset times, the automatic picking program provides estimates of uncertainty, which are important for subsequent applications. The algorithms are tested with synthetic data that are generated to include sudden changes in amplitude, frequency, and phase. The performance of the wavelet approach is further evaluated using real data by comparing the automatic picks with manual picks. Our results suggest that the proposed algorithms provide robust measurements that are comparable to manual picks for both P- and S-wave arrivals.
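The onset-picking idea can be sketched at a single scale: compute the magnitude of complex wavelet coefficients at a characteristic frequency and take the first exceedance of a threshold set from the pre-event noise (a simplified single-scale sketch on a synthetic trace, not the authors' full multi-scale picker):

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n, onset = 100.0, 6000, 3000             # Hz, samples, true arrival index
trace = 0.1 * rng.standard_normal(n)
tt = np.arange(n - onset) / fs
trace[onset:] += np.sin(2 * np.pi * 5.0 * tt) * np.exp(-tt / 3.0)  # 5 Hz arrival

# Complex Morlet coefficients at the arrival's characteristic frequency.
sigma = 6.0 / (2.0 * np.pi * 5.0)
tw = np.arange(-4 * sigma, 4 * sigma, 1.0 / fs)
psi = np.exp(2j * np.pi * 5.0 * tw) * np.exp(-tw**2 / (2 * sigma**2))
env = np.abs(np.convolve(trace, psi, mode="same"))

# Threshold from the pre-event noise floor, then take the first exceedance.
noise = env[:2000]
thr = noise.mean() + 8.0 * noise.std()
pick = int(np.argmax(env > thr))
print(pick)  # close to the true onset at sample 3000
```

The pick lands slightly before the true onset because the wavelet's support smears energy backwards in time; the spread of exceedance times across scales is one simple route to the uncertainty estimate mentioned above.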
Modal identification of structures by a novel approach based on FDD-wavelet method
NASA Astrophysics Data System (ADS)
Tarinejad, Reza; Damadipour, Majid
2014-02-01
An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced methods of Operational Modal Analysis (OMA), Frequency Domain Decomposition (FDD) and the Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for the identification of dynamic properties. With this technique, the autocorrelation of the averaged correlation functions is used instead of the original signals. The integration of the FDD and CWT methods overcomes their individual deficiencies and takes advantage of their unique capabilities. The FDD method accurately estimates the natural frequencies and mode shapes of structures in the frequency domain, while the CWT method works in the time-frequency domain, decomposing a signal at different frequencies and determining the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This approach enables accurate estimation of damping ratios from weak (ambient noise) or strong (earthquake) vibrations and from long- or short-duration records. For this purpose, the modified Morlet wavelet, which has two free parameters, is used. The optimum values of these two parameters are obtained by a technique that minimizes the entropy of the wavelet coefficient matrix. The capabilities of the novel FDD-wavelet method in the system identification of various dynamic systems with regular or irregular distributions of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.
NASA Astrophysics Data System (ADS)
Polotti, Pietro; Evangelista, Gianpaolo
2001-12-01
Voiced musical sounds have nonzero energy in sidebands of the frequency partials. Our work is based on the assumption, often experimentally verified, that the energy distribution of the sidebands is shaped as powers of the inverse of the distance from the closest partial. The power spectrum of these pseudo-periodic processes is modeled by means of a superposition of modulated 1/f components, that is, by a pseudo-periodic 1/f-like process. Due to the fundamental self-similar character of the wavelet transform, 1/f processes can be fruitfully analyzed and synthesized by means of wavelets. We obtain a set of very loosely correlated coefficients at each scale level that can be well approximated by white noise in the synthesis process. Our computational scheme is based on an orthogonal multiband filter bank and a dyadic wavelet transform per channel. The channels are tuned to the left and right sidebands of the harmonics so that the sidebands are mutually independent. The structure computes the expansion coefficients of a new orthogonal and complete set of harmonic-band wavelets. The main point of our scheme is that only two parameters per harmonic are needed to model the stochastic fluctuations of sounds away from purely periodic behavior.
NASA Astrophysics Data System (ADS)
Wang, Jianhua; Yang, Yanxi
2018-05-01
We present a new wavelet ridge extraction method employing a novel cost function in two-dimensional wavelet transform profilometry (2-D WTP). First, the maximum of the two-dimensional wavelet transform coefficient modulus is extracted, and the local extrema exceeding 90% of that maximum are also retained; together these constitute the wavelet ridge candidates. Then, the gradient of the rotation factor is introduced into Abid's cost function, and a logarithmic Logistic model is used to adjust the cost function weights so as to obtain more reasonable value estimates. Finally, dynamic programming is used to find the optimal wavelet ridge accurately, and the wrapped phase is obtained by extracting the phase along the ridge. The advantage is that fringe patterns with low signal-to-noise ratio can be demodulated accurately, with improved noise immunity. Moreover, only one fringe pattern needs to be projected onto the measured object, so dynamic three-dimensional (3-D) measurement in harsh environments can be realized. Computer simulations and experimental results show that, for fringe patterns corrupted by noise, the proposed algorithm improves 3-D surface recovery accuracy. In addition, the demodulation phase accuracies of the Morlet, Fan, and Cauchy mother wavelets are compared.
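The dynamic-programming step of ridge extraction can be sketched in isolation: given a time-scale modulus map, choose one scale per time instant so as to maximise total modulus while penalising jumps in scale. The following numpy sketch uses a synthetic modulus map with a known ridge and a simple jump penalty; the cost function here is a generic placeholder, not the paper's rotation-factor/Logistic cost.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scales, n_times = 40, 120
true_ridge = np.linspace(8, 30, n_times)       # known ridge path (scale index)

# Synthetic CWT modulus: a Gaussian bump around the true ridge, plus noise.
s = np.arange(n_scales)[:, None]
M = np.exp(-0.5 * ((s - true_ridge[None, :]) / 2.0) ** 2) \
    + 0.2 * rng.random((n_scales, n_times))

# Dynamic programming: maximise modulus, penalise scale jumps (weight lam assumed).
lam = 0.1
cost = np.empty((n_scales, n_times))
back = np.zeros((n_scales, n_times), dtype=int)
cost[:, 0] = -M[:, 0]
for tt in range(1, n_times):
    for ss in range(n_scales):
        trans = cost[:, tt - 1] + lam * np.abs(np.arange(n_scales) - ss)
        back[ss, tt] = int(np.argmin(trans))
        cost[ss, tt] = trans[back[ss, tt]] - M[ss, tt]

# Backtrack the globally optimal ridge.
ridge = np.zeros(n_times, dtype=int)
ridge[-1] = int(np.argmin(cost[:, -1]))
for tt in range(n_times - 1, 0, -1):
    ridge[tt - 1] = back[ridge[tt], tt]

err = float(np.mean(np.abs(ridge - true_ridge)))
```

The smoothness penalty is what lets the ridge survive local noise maxima, which is the role the improved cost-function weighting plays in the paper.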
Experimental study on the crack detection with optimized spatial wavelet analysis and windowing
NASA Astrophysics Data System (ADS)
Ghanbari Mardasi, Amir; Wu, Nan; Wu, Christine
2018-05-01
In this paper, highly sensitive crack detection is experimentally realized on a beam under deflection by optimizing spatial wavelet analysis. A crack in the beam structure induces a small perturbation (a slope singularity) in the deflection profile. Spatial wavelet transformation acts as a magnifier, amplifying the small perturbation signal at the crack location so that the damage can be detected and localized. The profile of a deflected aluminum cantilever beam is obtained for both intact and cracked beams by a high-resolution laser profile sensor, and the Gabor wavelet transformation is applied to the difference between the intact and cracked data sets. To improve detection sensitivity, the scale factor of the spatial wavelet transformation and the number of transformation repetitions are optimized. Furthermore, to detect possible cracks close to the measurement boundaries, the wavelet transformation edge effect, which induces large wavelet coefficient values around the measurement boundaries, is efficiently reduced by introducing different windowing functions. The results show that a small crack with a depth of less than 10% of the beam height can be localized with a clear perturbation. Moreover, the perturbation caused by a crack 0.85 mm away from one end of the measurement range, which would otherwise be covered by the wavelet transform edge effect, emerges when proper window functions are applied.
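The mechanism described above — a slope singularity in the deflection residual showing up as a localized spatial-wavelet peak, with a window suppressing edge effects — can be sketched as follows. This is an illustrative numpy toy (Mexican-hat wavelet instead of the paper's Gabor wavelet; crack location, depth effect, and window are assumed values).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 500)                 # normalized beam coordinate
xc = 0.35                                      # assumed crack location
intact = x**2                                  # smooth baseline deflection shape
cracked = intact + 0.002 * np.maximum(0.0, x - xc)   # slope singularity at the crack

residual = cracked - intact                    # subtraction of the two profiles

# Mexican-hat (second-derivative-of-Gaussian) spatial wavelet, width a assumed.
a = 0.02
tau = np.arange(-0.1, 0.1, x[1] - x[0])
psi = (1 - (tau / a) ** 2) * np.exp(-(tau / a) ** 2 / 2)

coeff = np.convolve(residual, psi, mode="same")

# Window the coefficients to suppress the large edge-effect values.
windowed = coeff * np.hanning(len(coeff))

crack_est = x[np.argmax(np.abs(windowed))]     # wavelet peak localizes the crack
```

Because the residual's second derivative is an impulse at the crack, the wavelet response peaks there; the Hann window plays the role of the paper's windowing functions near the measurement boundaries.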
Admissible Diffusion Wavelets and Their Applications in Space-Frequency Processing.
Hou, Tingbo; Qin, Hong
2013-01-01
As signal processing tools, diffusion wavelets and biorthogonal diffusion wavelets have been propelled by recent research in mathematics. They employ diffusion as a smoothing and scaling process to empower multiscale analysis. However, their applications in graphics and visualization are overshadowed by non-admissible wavelets and their expensive computation. In this paper, our motivation is to broaden the application scope to space-frequency processing of shape geometry and scalar fields. We propose admissible diffusion wavelets (ADW) on meshed surfaces and point clouds. The ADW are constructed in a bottom-up manner, starting from a local operator at a high frequency and dilating by its dyadic powers toward low frequencies. By relaxing orthogonality and enforcing normalization, the wavelets are locally supported and admissible, facilitating data analysis and geometry processing. We define a novel rapid reconstruction, which recovers the signal from multiple bands of high frequencies and a low-frequency base in full resolution. It enables operations localized in both space and frequency by manipulating wavelet coefficients through space-frequency filters. This paper aims to build a common theoretical foundation for a host of applications, including saliency visualization, multiscale feature extraction, spectral geometry processing, etc.
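The dyadic-power construction admits a compact sanity check: with a diffusion operator T, the bands (T^(2^j) − T^(2^(j+1)))f together with the highest band (I − T)f and the low-frequency base T^(2^J)f telescope back to f exactly. A minimal numpy sketch on a path-graph Laplacian (a stand-in for the paper's mesh/point-cloud operator; sizes and the 0.25 step are assumptions):

```python
import numpy as np

n = 16
# Path-graph Laplacian as the local high-frequency operator.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1                    # Neumann-type boundary
T = np.eye(n) - 0.25 * L                   # diffusion operator, spectrum in [0, 1]

rng = np.random.default_rng(0)
f = rng.standard_normal(n)

# Bottom-up dyadic powers: T^1, T^2, T^4, T^8.
powers = [np.linalg.matrix_power(T, 2 ** j) for j in range(4)]
bands = [f - powers[0] @ f]                               # highest-frequency band
bands += [(powers[j] - powers[j + 1]) @ f for j in range(3)]
base = powers[3] @ f                                      # low-frequency residue

# Rapid reconstruction: the telescoping sum of bands plus the base is exact.
recon = base + sum(bands)
```

The exactness of `recon` is purely algebraic (the sum telescopes to the identity), which is what makes full-resolution reconstruction from the band decomposition cheap.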
S2LET: A code to perform fast wavelet analysis on the sphere
NASA Astrophysics Data System (ADS)
Leistedt, B.; McEwen, J. D.; Vandergheynst, P.; Wiaux, Y.
2013-10-01
We describe S2LET, a fast and robust implementation of the scale-discretised wavelet transform on the sphere. Wavelets are constructed through a tiling of the harmonic line and can be used to probe spatially localised, scale-dependent features of signals on the sphere. The reconstruction of a signal from its wavelet coefficients is made exact here through the use of a sampling theorem on the sphere. Moreover, a multiresolution algorithm is presented to capture all information of each wavelet scale in the minimal number of samples on the sphere. In addition, S2LET supports the HEALPix pixelisation scheme, in which case the transform is not exact but nevertheless achieves good numerical accuracy. The core routines of S2LET are written in C and have interfaces in Matlab, IDL and Java. Real signals can be written to and read from FITS files and plotted as Mollweide projections. The S2LET code is made publicly available, is extensively documented, and ships with several examples in the four languages supported. At present the code is restricted to axisymmetric wavelets but will be extended to directional, steerable wavelets in a future release.
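The "tiling of the harmonic line" means choosing kernels κ_j(ℓ) whose squares form a partition of unity over multipole ℓ, so that the analysis/synthesis round trip is exact. The sketch below builds a Meyer-style raised-cosine dyadic tiling in numpy and checks the partition-of-unity property; these are not S2LET's exact scale-discretised kernels, just the same structural idea.

```python
import numpy as np

ell = np.arange(1, 257, dtype=float)       # harmonic multipoles up to 2^8
x = np.log2(ell)
J = 8                                      # bands cover ell in [1, 2^J]

def rise(u):
    """Rising half of a smooth transition on [0, 1]."""
    return np.sin(0.5 * np.pi * np.clip(u, 0.0, 1.0))

def fall(u):
    """Falling half of a smooth transition on [0, 1]."""
    return np.cos(0.5 * np.pi * np.clip(u, 0.0, 1.0))

# kappa_j rises on [2^(j-1), 2^j] and falls on [2^j, 2^(j+1)];
# the clips make each factor constant outside its transition octave.
kappas = [fall(x)]                                   # scaling kernel, falls on [1, 2]
for j in range(1, J):
    kappas.append(rise(x - (j - 1)) * fall(x - j))
kappas.append(rise(x - (J - 1)))                     # top band, rises on [2^(J-1), 2^J]

total = sum(k**2 for k in kappas)                    # should be identically 1
```

Since adjacent kernels overlap as sin²+cos² on every octave, `total` equals 1 at every ℓ, which is the admissibility condition that makes reconstruction from wavelet coefficients exact.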
Acoustic emission detection for mass fractions of materials based on wavelet packet technology.
Wang, Xianghong; Xiang, Jianjun; Hu, Hongwei; Xie, Wei; Li, Xiongbing
2015-07-01
Materials are often damaged during the measurement of mass fractions by traditional methods. In this study, acoustic emission (AE) technology combined with wavelet packet analysis is used to evaluate the mass fractions of microcrystalline graphite/polyvinyl alcohol (PVA) composites. Attenuation characteristics of AE signals across composites with different mass fractions are investigated. The AE signals are decomposed by wavelet packet technology to obtain the relationships between the energy and amplitude attenuation coefficients of the feature wavelet packets and the mass fractions, and the relationship is validated on a further sample. A larger proportion of microcrystalline graphite corresponds to higher attenuation of energy and amplitude. The attenuation characteristics of the feature wavelet packets in the frequency range from 125 kHz to 171.85 kHz are more suitable for the detection of mass fractions than those of the original AE signals: the error in the mass fraction of microcrystalline graphite calculated from the feature wavelet packet (1.8%) is lower than that from the original signal (3.9%). Therefore, AE detection based on wavelet packet analysis is an ideal NDT method for evaluating the mass fractions of composite materials. Copyright © 2015 Elsevier B.V. All rights reserved.
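A wavelet packet decomposition differs from a plain DWT in that both the approximation and the detail branches are split at every level, giving uniform frequency bands whose energies can be compared, as done above for the feature packets. A minimal numpy sketch with an orthonormal Haar packet (two levels, four bands; the Haar choice and signal are illustrative):

```python
import numpy as np

def haar_pair(s):
    """One orthonormal Haar split: approximation and detail halves."""
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return a, d

rng = np.random.default_rng(3)
sig = rng.standard_normal(256)             # stand-in for an AE waveform

# Level 1 splits the signal; level 2 splits BOTH branches (packet, not DWT).
a, d = haar_pair(sig)
bands = [*haar_pair(a), *haar_pair(d)]     # aa, ad, da, dd sub-bands

# Band energies: the quantities compared against mass fraction in the paper.
energies = [float(np.sum(b**2)) for b in bands]
```

Because the transform is orthonormal, the band energies sum exactly to the signal energy, so relative band energy is a well-defined attenuation feature.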
Dunea, Daniel; Pohoata, Alin; Iordache, Stefania
2015-07-01
The paper presents the screening of various feedforward artificial neural networks (FANN) and wavelet-feedforward neural networks (WFANN) applied to time series of ground-level ozone (O3), nitrogen dioxide (NO2), and particulate matter (PM10 and PM2.5 fractions) recorded at four monitoring stations located in various urban areas of Romania, in order to identify common configurations with optimal generalization performance. Two distinct model runs were performed: processing of hourly-recorded time series of airborne pollutants during cold months (O3, NO2, and PM10), when residential heating increases local emissions, and processing of 24-h daily averaged concentrations (PM2.5) recorded between 2009 and 2012. Dataset variability was assessed using statistical analysis. The time series were passed through various FANNs. Each time series was also decomposed into four time-scale components using three-level wavelets; the components were passed through FANNs and recomposed into a single time series. The agreement between observed and modelled output was evaluated based on statistical significance (the r coefficient and the correlation between errors and data). A Daubechies db3 wavelet with a Rprop FANN (6-4-1) gave positive results for the O3 time series, improving on the exclusive use of the FANN for hourly-recorded time series. NO2 was difficult to model due to the specificity of its time series, but wavelet integration improved FANN performance. The Daubechies db3 wavelet did not improve the FANN outputs for the PM10 time series. Both models (FANN/WFANN) overestimated the forecasted PM2.5 values in the last quarter of the time series; a potential improvement could be the integration of a smoothing algorithm to adjust the PM2.5 model outputs.
Wavelet methodology to improve single unit isolation in primary motor cortex cells.
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2015-05-15
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed-form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three statistical measures: mean-squared error, root-mean-square, and signal-to-noise ratio. The clustering quality was evaluated using two statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single-unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing best. Copyright © 2015. Published by Elsevier B.V.
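The thresholding rules compared above differ in how they treat coefficients that survive the threshold: soft thresholding shrinks them toward zero, hard thresholding keeps them intact. A minimal numpy sketch using the fixed-form ("universal") threshold σ·sqrt(2·ln N) on a sparse spike-plus-noise signal (the sparse signal and noise level are illustrative; real pipelines threshold wavelet coefficients of the recording):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1024
clean = np.zeros(n)
clean[100], clean[400], clean[700] = 8.0, -6.0, 7.0   # sparse "spike" coefficients
sigma = 1.0
noisy = clean + sigma * rng.standard_normal(n)

thr = sigma * np.sqrt(2 * np.log(n))       # fixed-form (universal) threshold

def soft(c, t):
    """Soft rule: zero below t, shrink survivors by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hard(c, t):
    """Hard rule: zero below t, keep survivors unchanged."""
    return np.where(np.abs(c) > t, c, 0.0)

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_soft = float(np.mean((soft(noisy, thr) - clean) ** 2))
mse_hard = float(np.mean((hard(noisy, thr) - clean) ** 2))
```

Both rules remove almost all of the pure-noise coefficients; soft thresholding biases the surviving spikes while hard thresholding keeps their noise, which is the trade-off behind the SURE and minimax threshold choices studied in the paper.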
Wavelet based detection of manatee vocalizations
NASA Astrophysics Data System (ADS)
Gur, Berke M.; Niezrecki, Christopher
2005-04-01
The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.
A splitting algorithm for the wavelet transform of cubic splines on a nonuniform grid
NASA Astrophysics Data System (ADS)
Sulaimanov, Z. M.; Shumilov, B. M.
2017-10-01
For cubic splines with nonuniform nodes, splitting with respect to the even and odd nodes is used to obtain a wavelet expansion algorithm in the form of the solution of a tridiagonal system of linear algebraic equations for the coefficients. Hand computations are used to investigate the application of this algorithm to numerical differentiation. The results are illustrated by solving a prediction problem.
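Tridiagonal systems of the kind that arise from spline coefficient equations are solved in O(n) by the Thomas algorithm (forward elimination plus back substitution). A self-contained numpy sketch, checked against a dense solve; the diagonally dominant test system (diagonal 4, off-diagonals 1) is illustrative of spline-type matrices, not the paper's exact system:

```python
import numpy as np

def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm: O(n) forward sweep + back substitution."""
    n = len(diag)
    c = np.zeros(n)                        # modified upper coefficients
    d = np.zeros(n)                        # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / m
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / m
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Spline-like diagonally dominant test system.
rng = np.random.default_rng(9)
n = 8
lower = np.ones(n - 1)
upper = np.ones(n - 1)
diag = 4.0 * np.ones(n)
rhs = rng.standard_normal(n)
x = solve_tridiagonal(lower, diag, upper, rhs)

A = np.diag(diag) + np.diag(lower, -1) + np.diag(upper, 1)
```

Diagonal dominance (|4| > 1 + 1) guarantees the elimination is stable without pivoting, which is why spline systems are a natural fit for this solver.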
NASA Astrophysics Data System (ADS)
Kougioumtzoglou, Ioannis A.; dos Santos, Ketson R. M.; Comerford, Liam
2017-09-01
Various system identification techniques exist in the literature that can handle non-stationary measured time-histories, or cases of incomplete data, or address systems following a fractional calculus modeling. However, there are not many (if any) techniques that can address all three aforementioned challenges simultaneously in a consistent manner. In this paper, a novel multiple-input/single-output (MISO) system identification technique is developed for parameter identification of nonlinear and time-variant oscillators with fractional derivative terms subject to incomplete non-stationary data. The technique utilizes a representation of the nonlinear restoring forces as a set of parallel linear sub-systems. In this regard, the oscillator is transformed into an equivalent MISO system in the wavelet domain. Next, a recently developed L1-norm minimization procedure based on compressive sensing theory is applied for determining the wavelet coefficients of the available incomplete non-stationary input-output (excitation-response) data. Finally, these wavelet coefficients are utilized to determine appropriately defined time- and frequency-dependent wavelet based frequency response functions and related oscillator parameters. Several linear and nonlinear time-variant systems with fractional derivative elements are used as numerical examples to demonstrate the reliability of the technique even in cases of noise corrupted and incomplete data.
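The compressive-sensing step described above — recovering wavelet coefficients from incomplete measurements by sparsity-promoting minimization — can be sketched with the iterative soft-thresholding algorithm (ISTA), a standard solver for the L1-regularized least-squares problem. This is a generic illustration, not the authors' specific procedure; the matrix, sparsity pattern, and λ are assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 40, 64                                    # fewer measurements than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)     # measurement/dictionary matrix
x_true = np.zeros(n)
x_true[5], x_true[40] = 2.0, -1.5                # sparse coefficient vector
y = A @ x_true                                   # incomplete (underdetermined) data

# ISTA for min 0.5*||y - A x||^2 + lam*||x||_1.
lam = 0.02
Lc = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(800):
    g = x + A.T @ (y - A @ x) / Lc               # gradient step on the data term
    x = np.sign(g) * np.maximum(np.abs(g) - lam / Lc, 0.0)   # soft-threshold (prox)

support = np.argsort(np.abs(x))[-2:]             # indices of the two largest entries
```

Despite m < n, the L1 penalty recovers the true sparse support, which is the property the paper exploits to work with incomplete non-stationary records.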
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network
NASA Astrophysics Data System (ADS)
Ong, Jia Jan; Ang, L.-M.; Seng, K. P.
This paper presents the practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A WVSN consists of visual nodes that capture video and transmit it to the base station without processing. Limited network bandwidth restricts the implementation of real-time video streaming from remote visual nodes over wireless communication. Three layers of DWT filters are implemented to process the captured image from the camera. Once all the wavelet coefficients are produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base station, reducing the amount of power required in transmission. When necessary, transmitting all the wavelet coefficients reproduces the full detail of the image, similar to the image captured at the visual nodes. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA, and a wireless ZigBee® network based on the Ember EM250 chip.
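The lifting scheme mentioned in the title factors the DWT into in-place predict and update steps, which is why it maps well onto small FPGAs. A minimal numpy sketch of one Haar lifting level (the paper implements three layers on 2-D images; this 1-D Haar version just illustrates the predict/update structure):

```python
import numpy as np

def haar_lift_forward(sig):
    even = sig[0::2].astype(float)
    odd = sig[1::2].astype(float)
    d = odd - even                 # predict: detail = odd minus its even neighbour
    s = even + d / 2.0             # update: s becomes the pairwise average
    return s, d

def haar_lift_inverse(s, d):
    even = s - d / 2.0             # undo update
    odd = d + even                 # undo predict
    out = np.empty(2 * len(s))
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(6)
scanline = rng.random(64)          # stand-in for one row of a captured image
s, d = haar_lift_forward(scanline)
recon = haar_lift_inverse(s, d)
```

Transmitting only `s` (the pairwise averages) gives the half-resolution approximation the base station can display cheaply; sending `d` as well restores the capture losslessly, mirroring the low-band-only power saving described above.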
A wavelet-based technique to predict treatment outcome for Major Depressive Disorder
Xia, Likun; Mohd Yasin, Mohd Azhar; Azhar Ali, Syed Saad
2017-01-01
Treatment management for Major Depressive Disorder (MDD) has been challenging. However, electroencephalogram (EEG)-based prediction of antidepressant treatment outcome may help during antidepressant selection and ultimately improve the quality of life for MDD patients. In this study, a machine learning (ML) method involving pretreatment EEG data was proposed to perform such predictions for Selective Serotonin Reuptake Inhibitors (SSRIs). For this purpose, experimental data were acquired from 34 MDD patients and 30 healthy controls. A feature matrix was constructed from a time-frequency decomposition of the EEG data based on wavelet transform (WT) analysis, termed the EEG data matrix. Because the resultant EEG data matrix had high dimensionality, dimension reduction was performed with a rank-based feature selection method according to a receiver operating characteristic (ROC) criterion. The most significant features were identified and further utilized during the training and testing of a classification model, a logistic regression (LR) classifier. Finally, the LR model was validated with 100 iterations of 10-fold cross-validation (10-CV). The classification results were compared with short-time Fourier transform (STFT) analysis and empirical mode decomposition (EMD). The wavelet features extracted from frontal and temporal EEG data were found statistically significant. In comparison with the other time-frequency approaches, the WT analysis showed the highest classification performance: accuracy = 87.5%, sensitivity = 95%, and specificity = 80%. In conclusion, significant wavelet coefficients extracted from frontal and temporal pre-treatment EEG data involving the delta and theta frequency bands may predict antidepressant treatment outcome for MDD patients. PMID:28152063
Ngan, Shing-Chung; Hu, Xiaoping; Khong, Pek-Lan
2011-03-01
We propose a method for preprocessing event-related functional magnetic resonance imaging (fMRI) data that can lead to enhancement of template-free activation detection. The method is based on using a figure of merit to guide the wavelet shrinkage of a given fMRI data set. Several previous studies have demonstrated that in the root-mean-square error setting, wavelet shrinkage can improve the signal-to-noise ratio of fMRI time courses. However, preprocessing fMRI data in the root-mean-square error setting does not necessarily lead to enhancement of template-free activation detection. Motivated by this observation, in this paper, we move to the detection setting and investigate the possibility of using wavelet shrinkage to enhance template-free activation detection of fMRI data. The main ingredients of our method are (i) forward wavelet transform of the voxel time courses, (ii) shrinking the resulting wavelet coefficients as directed by an appropriate figure of merit, (iii) inverse wavelet transform of the shrunk data, and (iv) submitting these preprocessed time courses to a given activation detection algorithm. Two figures of merit are developed in the paper, and two other figures of merit adapted from the literature are described. Receiver-operating characteristic analyses with simulated fMRI data showed quantitative evidence that data preprocessing as guided by the figures of merit developed in the paper can yield improved detectability of the template-free measures. We also demonstrate the application of our methodology on an experimental fMRI data set. The proposed method is useful for enhancing template-free activation detection in event-related fMRI data. It is of significant interest to extend the present framework to produce comprehensive, adaptive and fully automated preprocessing of fMRI data optimally suited for subsequent data analysis steps. Copyright © 2010 Elsevier B.V. All rights reserved.
Design of tree structured matched wavelet for HRV signals of menstrual cycle.
Rawal, Kirti; Saini, B S; Saini, Indu
2016-07-01
An algorithm is presented for designing a new class of wavelets matched to the heart rate variability (HRV) signals of the menstrual cycle. The proposed wavelets are used to find HRV variations between phases of the menstrual cycle. The method finds the signal-matching characteristics by minimising the shape feature error using the least mean square (LMS) method. The proposed filter banks are used for the decomposition of the HRV signal, and the tree structure method is used for reconstructing the original signal. In this approach, the decomposed sub-bands are selected based upon their energy: instead of using all sub-bands for reconstruction, only sub-bands with high energy content are used. Thus, a smaller number of sub-bands is required for reconstruction of the original signal, which shows the effectiveness of the newly created filter coefficients. Results show that the proposed wavelets differentiate HRV variations between phases of the menstrual cycle more accurately than standard wavelets.
Paul, Sabyasachi; Sarkar, P K
2013-04-01
The use of wavelet transformation in stationary signal processing has been demonstrated for denoising measured spectra and characterising radionuclides in in vivo monitoring analysis, where difficulties arise due to the very low activity levels to be estimated in biological systems. The large statistical fluctuations often make the identification of characteristic gammas from radionuclides highly uncertain, particularly when interferences from progenies are also present. A new wavelet-based noise filtering methodology has been developed for better detection of gamma peaks in noisy data. This sequential, iterative filtering method uses the wavelet multi-resolution approach for noise rejection and an inverse transform after soft thresholding of the generated coefficients. Analyses of in vivo monitoring data of (235)U and (238)U were carried out using this method without disturbing the peak position and amplitude, while achieving a 3-fold improvement in the signal-to-noise ratio compared with the original measured spectrum. When compared with other data-filtering techniques, the wavelet-based method shows the best results.
An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Campbell, William J.; Cromp, Robert F.; Zukor, Dorothy (Technical Monitor)
2001-01-01
With the increasing importance of multiple platform/multiple remote sensing missions, fast and automatic integration of digital data from disparate sources has become critical to the success of these endeavors. Our work utilizes maxima of wavelet coefficients to form the basic features of a correlation-based automatic registration algorithm. Our wavelet-based registration algorithm is tested successfully with data from the National Oceanic and Atmospheric Administration (NOAA) Advanced Very High Resolution Radiometer (AVHRR) and the Landsat/Thematic Mapper(TM), which differ by translation and/or rotation. By the choice of high-frequency wavelet features, this method is similar to an edge-based correlation method, but by exploiting the multi-resolution nature of a wavelet decomposition, our method achieves higher computational speeds for comparable accuracies. This algorithm has been implemented on a Single Instruction Multiple Data (SIMD) massively parallel computer, the MasPar MP-2, as well as on the CrayT3D, the Cray T3E and a Beowulf cluster of Pentium workstations.
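The core of a correlation-based registration step is simple: extract feature maps from both images, cross-correlate them, and read the translation off the correlation peak. The numpy sketch below does this in 1-D with a "keep the largest-magnitude coefficients" feature map standing in for the wavelet-maxima features (the real method operates on 2-D wavelet decompositions and also handles rotation):

```python
import numpy as np

rng = np.random.default_rng(11)
ref = rng.standard_normal(256)                 # reference "image" (1-D stand-in)
shift_true = 17
tgt = np.roll(ref, shift_true)                 # sensed image: shifted copy

def feature_map(sig):
    """Keep only the top 10% of coefficients by magnitude (maxima features)."""
    thr = np.quantile(np.abs(sig), 0.9)
    return np.where(np.abs(sig) >= thr, sig, 0.0)

f_ref, f_tgt = feature_map(ref), feature_map(tgt)

# Circular cross-correlation via FFT; the peak location gives the translation.
corr = np.fft.ifft(np.fft.fft(f_tgt) * np.conj(np.fft.fft(f_ref))).real
shift_est = int(np.argmax(corr))
```

Sparsifying to maxima before correlating is what buys the speed-up over dense edge correlation: the correlation is driven by a handful of strong, well-localized features.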
Multiscale wavelet representations for mammographic feature analysis
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-12-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale-space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
Wavelet transformation to determine impedance spectra of lithium-ion rechargeable battery
NASA Astrophysics Data System (ADS)
Hoshi, Yoshinao; Yakabe, Natsuki; Isobe, Koichiro; Saito, Toshiki; Shitanda, Isao; Itagaki, Masayuki
2016-05-01
A new analytical method is proposed to determine the electrochemical impedance of lithium-ion rechargeable batteries (LIRB) from time domain data by wavelet transformation (WT). The WT is a waveform analysis method that can transform data in the time domain to the frequency domain while retaining time information. In this transformation, the frequency domain data are obtained by the convolution integral of a mother wavelet and original time domain data. A complex Morlet mother wavelet (CMMW) is used to obtain the complex number data in the frequency domain. The CMMW is expressed by combining a Gaussian function and sinusoidal term. The theory to select a set of suitable conditions for variables and constants related to the CMMW, i.e., band, scale, and time parameters, is established by determining impedance spectra from wavelet coefficients using input voltage to the equivalent circuit and the output current. The impedance spectrum of LIRB determined by WT agrees well with that measured using a frequency response analyzer.
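The key step above — obtaining complex-valued frequency-domain data from time-domain voltage and current by convolving with a complex Morlet mother wavelet — can be sketched directly. In this toy example a known impedance is prescribed and recovered from the ratio of the two wavelet coefficients; the signal frequency, wavelet width, and impedance value are all illustrative assumptions, not the paper's equivalent circuit.

```python
import numpy as np

fs, f0 = 2000.0, 10.0
t = np.arange(0, 2.0, 1 / fs)
w0 = 2 * np.pi * f0

Z_true = 100.0 * np.exp(-1j * np.pi / 4)       # assumed impedance at f0 (capacitive)
v = np.sin(w0 * t)                              # input voltage
i = np.abs(1 / Z_true) * np.sin(w0 * t - np.angle(Z_true))   # response current

# Complex Morlet mother wavelet: Gaussian window times a complex exponential.
sigma = 0.2
tau = np.arange(-1.0, 1.0, 1 / fs)
psi = np.exp(1j * w0 * tau) * np.exp(-tau**2 / (2 * sigma**2))

Wv = np.convolve(v, psi, "same")                # wavelet coefficients of voltage
Wi = np.convolve(i, psi, "same")                # wavelet coefficients of current

k = len(t) // 2                                 # time index away from the edges
Z_est = Wv[k] / Wi[k]                           # impedance at f0 from the coefficient ratio
```

Because both signals are filtered by the same kernel at the same time index, any common gain or phase convention cancels in the ratio, and `Z_est` recovers both the magnitude and the phase of the prescribed impedance.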
Zhao, Jie; Hua, Mei
2004-06-01
The aim was to develop a wavelet noise canceller that removes muscle electricity and power-line hum over a wide frequency range. Because the QRS complex has higher-frequency components while the T and P waves have lower-frequency components, a biorthogonal wavelet was selected to decompose the original signals. An interference-free ECG signal was formed by reconstruction from the modified wavelet coefficients. Using the canceller, muscle electricity and power-line interference between 49 Hz and 61 Hz were eliminated from the ECG signals. The canceller works well in removing muscle electricity and the fundamental and harmonic frequencies of power-line hum; it is also insensitive to changes in the power-line frequency, with the same procedure handling both 50 Hz and 60 Hz hum.
Oltean, Gabriel; Ivanciu, Laura-Nicoleta
2016-01-01
The design and verification of complex electronic systems, especially analog and mixed-signal ones, are extremely time-consuming tasks if only circuit-level simulations are involved. A significant amount of time can be saved if a cost-effective solution is used for the extensive analysis of the system under all conceivable conditions. This paper proposes a data-driven method to build fast-to-evaluate yet accurate metamodels capable of generating not-yet-simulated waveforms as a function of different combinations of the system parameters. The necessary data are obtained by early-stage simulation of an electronic control system from the automotive industry. The metamodel development is based on three key elements: a wavelet transform for waveform characterization, genetic algorithm optimization to detect the optimal wavelet transform and identify the most relevant decomposition coefficients, and an artificial neural network to derive the relevant wavelet transform coefficients for any new parameter combination. The resulting metamodels for three different waveform families are fully reliable. They satisfy the required key points: high accuracy (a maximum mean squared error of 7.1×10-5 for the unity-based normalized waveforms), efficiency (fully affordable computational effort for metamodel build-up: at most 18 minutes on a general-purpose computer), and simplicity (less than 1 second to run the metamodel, with the user providing only the parameter combination). The metamodels can be used for very efficient generation of new waveforms, for any possible combination of dependent parameters, offering the possibility to explore the entire design space.
A wide range of possibilities becomes achievable for the user, such as: all design corners can be analyzed, possible worst-case situations can be investigated, extreme values of waveforms can be discovered, sensitivity analyses can be performed (the influence of each parameter on the output waveform).
PMID:26745370
Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan
2016-01-01
Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of related diseases. A previous study showed that the average wavelet method provides the most accurate estimates of this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that cover a range of physiologically conceivable fractal signals. Five candidate mother wavelets were chosen for their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet without the burden of considering the signal length. PMID:27960102
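The mother-wavelet comparison above ranks candidates by their mean square error against known scaling exponents. As an illustrative sketch (the function name and sample values are hypothetical, not from the paper), the metric itself is simply:

```python
def mean_square_error(estimate, truth):
    """Mean square error between estimated and reference values."""
    return sum((e - t) ** 2 for e, t in zip(estimate, truth)) / len(truth)

# A wavelet whose scaling-exponent estimates sit closer to the true values
# scores a lower MSE and ranks higher as a candidate.
print(mean_square_error([0.9, 1.1, 1.0], [1.0, 1.0, 1.0]))
```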
Energy-Based Wavelet De-Noising of Hydrologic Time Series
Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu
2014-01-01
De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task because of the limitations of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from a Monte-Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in the series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which the WTD method cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operated. The results also indicate the influence of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by either the proposed method or WTD; however, such a series exhibits purely random rather than autocorrelated behavior, so de-noising is no longer needed. PMID:25360533
A de-noising method using the improved wavelet threshold function based on noise variance estimation
NASA Astrophysics Data System (ADS)
Liu, Hui; Wang, Weida; Xiang, Changle; Han, Lijin; Nie, Haizhao
2018-01-01
The precise and efficient noise variance estimation is very important for the processing of all kinds of signals while using the wavelet transform to analyze signals and extract signal features. In view of the problem that the accuracy of traditional noise variance estimation is greatly affected by the fluctuation of noise values, this study puts forward the strategy of using the two-state Gaussian mixture model to classify the high-frequency wavelet coefficients in the minimum scale, which takes both the efficiency and accuracy into account. According to the noise variance estimation, a novel improved wavelet threshold function is proposed by combining the advantages of hard and soft threshold functions, and on the basis of the noise variance estimation algorithm and the improved wavelet threshold function, the research puts forth a novel wavelet threshold de-noising method. The method is tested and validated using random signals and bench test data of an electro-mechanical transmission system. The test results indicate that the wavelet threshold de-noising method based on the noise variance estimation shows preferable performance in processing the testing signals of the electro-mechanical transmission system: it can effectively eliminate the interference of transient signals including voltage, current, and oil pressure and maintain the dynamic characteristics of the signals favorably.
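The improved threshold function described above combines the advantages of the classical hard and soft threshold functions. The abstract does not give its formula, but the two functions it builds on, together with the common Donoho-Johnstone universal threshold, can be sketched as follows (an illustrative sketch, not the authors' method):

```python
import math

def hard_threshold(w, t):
    """Hard thresholding: keep coefficients whose magnitude exceeds t, zero the rest."""
    return w if abs(w) > t else 0.0

def soft_threshold(w, t):
    """Soft thresholding: zero small coefficients, shrink survivors toward zero by t."""
    if abs(w) <= t:
        return 0.0
    return math.copysign(abs(w) - t, w)

def universal_threshold(noise_sigma, n):
    """Donoho-Johnstone universal threshold: sigma * sqrt(2 ln n)."""
    return noise_sigma * math.sqrt(2.0 * math.log(n))

coeffs = [0.1, -2.5, 0.4, 3.0, -0.2]
t = 1.0
print([hard_threshold(w, t) for w in coeffs])  # [0.0, -2.5, 0.0, 3.0, 0.0]
print([soft_threshold(w, t) for w in coeffs])  # [0.0, -1.5, 0.0, 2.0, 0.0]
```

Hard thresholding preserves the amplitude of large coefficients but is discontinuous at the threshold; soft thresholding is continuous but biases large coefficients. Hybrid functions such as the one proposed in the paper aim to trade off these two behaviors.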
NASA Astrophysics Data System (ADS)
Soltani Bozchalooi, Iman; Liang, Ming
2018-04-01
A discussion paper entitled "On the distribution of the modulus of Gabor wavelet coefficients and the upper bound of the dimensionless smoothness index in the case of additive Gaussian noises: revisited" by Dong Wang, Qiang Zhou, Kwok-Leung Tsui has been brought to our attention recently. This discussion paper (hereafter called Wang et al. paper) is based on arguments that are fundamentally incorrect and which we rebut within this commentary. However, as the flaws in the arguments proposed by Wang et al. are clear, we will keep this rebuttal as brief as possible.
3D Mueller-matrix mapping of biological optically anisotropic networks
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Ushenko, V. O.; Bodnar, G. B.; Zhytaryuk, V. G.; Prydiy, O. G.; Koval, G.; Lukashevich, I.; Vanchuliak, O.
2018-01-01
The paper consists of two parts. The first part presents the theoretical basics of the azimuthally invariant Mueller-matrix description of the optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of the linear and circular birefringence of skeletal muscle tissue are provided, and the statistical moments characterizing the distributions of the wavelet-coefficient amplitudes of the MMI at different scanning scales are determined. The second part presents a statistical analysis of the distributions of the wavelet-coefficient amplitudes of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction and of ischemic heart disease. Objective criteria for differentiating the cause of death are established.
NASA Astrophysics Data System (ADS)
Sakhnovskiy, M. Yu.; Ushenko, Yu. O.; Ushenko, V. O.; Besaha, R. N.; Pavlyukovich, N.; Pavlyukovich, O.
2018-01-01
The paper consists of two parts. The first part presents the theoretical basics of the azimuthally invariant Mueller-matrix description of the optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the Mueller-matrix invariants (MMI) of the linear and circular birefringence of skeletal muscle tissue are provided, and the statistical moments characterizing the distributions of the wavelet-coefficient amplitudes of the MMI at different scanning scales are determined. The second part presents a statistical analysis of the distributions of the wavelet-coefficient amplitudes of the linear-birefringence distributions of myocardium tissue from subjects who died of infarction and of ischemic heart disease. Objective criteria for differentiating the cause of death are established.
Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A
2015-10-01
Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to the sampled data of the microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal at relatively low sampling rates, we have applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with recovery in the absence of the wavelet transform, especially at low and medium sampling rates. For sampling rates ranging from 0.2 to 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately a factor of 3; between 0.5 and 1, the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage phases of MTs for computing the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that combining compressed sensing with the peak detection technique and the wavelet transform reduces the recovery errors for these parameters across sampling rates. Copyright © 2015 Elsevier Ltd. All rights reserved.
Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union
NASA Astrophysics Data System (ADS)
Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.
2015-09-01
How can the stability of a state be quantitatively determined, and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether a state is in a stable or chaotic condition and predicts its future condition. The central hypothesis, which we test, is that the growth and collapse of states is reflected in the changes of their territories, populations and budgets. The model was applied to the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches the historical events well. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and the wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted the time series of the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
NASA Astrophysics Data System (ADS)
Chapman, Emma; Abdalla, Filipe B.; Bobin, J.; Starck, J.-L.; Harker, Geraint; Jelić, Vibor; Labropoulos, Panagiotis; Zaroubi, Saleem; Brentjens, Michiel A.; de Bruyn, A. G.; Koopmans, L. V. E.
2013-02-01
The accurate and precise removal of 21-cm foregrounds from Epoch of Reionization (EoR) redshifted 21-cm emission data is essential if we are to gain insight into an unexplored cosmological era. We apply a non-parametric technique, Generalized Morphological Component Analysis (gmca), to simulated Low Frequency Array (LOFAR)-EoR data and show that it has the ability to clean the foregrounds with high accuracy. We recover the 21-cm 1D, 2D and 3D power spectra with high accuracy across an impressive range of frequencies and scales. We show that gmca preserves the 21-cm phase information, especially when the smallest spatial scale data is discarded. While it has been shown that LOFAR-EoR image recovery is theoretically possible using image smoothing, we add that wavelet decomposition is an efficient way of recovering 21-cm signal maps to the same or greater order of accuracy with more flexibility. By comparing the gmca output residual maps (equal to the noise, 21-cm signal and any foreground fitting errors) with the 21-cm maps at one frequency and discarding the smaller wavelet scale information, we find a correlation coefficient of 0.689, compared to 0.588 for the equivalently smoothed image. Considering only the pixels in a central patch covering 50 per cent of the total map area, these coefficients improve to 0.905 and 0.605, respectively, and we conclude that wavelet decomposition is a significantly more powerful method to denoise reconstructed 21-cm maps than smoothing.
Towards discrete wavelet transform-based human activity recognition
NASA Astrophysics Data System (ADS)
Khare, Manish; Jeon, Moongu
2017-06-01
Providing accurate recognition of human activities is a challenging problem for visual surveillance applications. In this paper, we present a simple and efficient algorithm for human activity recognition based on a wavelet transform. We adopt discrete wavelet transform (DWT) coefficients as a feature of human objects to obtain advantages of its multiresolution approach. The proposed method is tested on multiple levels of DWT. Experiments are carried out on different standard action datasets including KTH and i3D Post. The proposed method is compared with other state-of-the-art methods in terms of different quantitative performance measures. The proposed method is found to have better recognition accuracy in comparison to the state-of-the-art methods.
Wavelet-Based Motion Artifact Removal for Electrodermal Activity
Chen, Weixuan; Jaques, Natasha; Taylor, Sara; Sano, Akane; Fedor, Szymon; Picard, Rosalind W.
2017-01-01
Electrodermal activity (EDA) recording is a powerful, widely used tool for monitoring psychological or physiological arousal. However, analysis of EDA is hampered by its sensitivity to motion artifacts. We propose a method for removing motion artifacts from EDA, measured as skin conductance (SC), using a stationary wavelet transform (SWT). We modeled the wavelet coefficients as a Gaussian mixture distribution corresponding to the underlying skin conductance level (SCL) and skin conductance responses (SCRs). The goodness-of-fit of the model was validated on ambulatory SC data. We evaluated the proposed method in comparison with three previous approaches. Our method achieved a greater reduction of artifacts while retaining motion-artifact-free data. PMID:26737714
Adaptive Multilinear Tensor Product Wavelets
Weiss, Kenneth; Lindstrom, Peter
2015-08-12
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. In conclusion, we focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
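The piecewise multilinear interpolation the abstract above builds on reduces, in 2D, to bilinear interpolation of the four corner values of each cell. A minimal sketch (the function name and corner values are illustrative):

```python
def bilerp(f00, f10, f01, f11, u, v):
    """Bilinear interpolation of corner values over the unit cell.

    (u, v) lies in [0, 1]^2; f10 is the value at corner (1, 0), etc.
    """
    return (f00 * (1 - u) * (1 - v)
            + f10 * u * (1 - v)
            + f01 * (1 - u) * v
            + f11 * u * v)

# Corner values are reproduced exactly; the cell center averages all four.
print(bilerp(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))  # 1.5
```

Trilinear interpolation over octree cells in 3D is the same construction with eight corners and a third parameter.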
Time-localized wavelet multiple regression and correlation
NASA Astrophysics Data System (ADS)
Fernández-Macho, Javier
2018-02-01
This paper extends wavelet methodology to handle comovement dynamics of multivariate time series via moving weighted regression on wavelet coefficients. The concept of wavelet local multiple correlation is used to produce one single set of multiscale correlations along time, in contrast with the large number of wavelet correlation maps that need to be compared when using standard pairwise wavelet correlations with rolling windows. Also, the spectral properties of weight functions are investigated and it is argued that some common time windows, such as the usual rectangular rolling window, are not satisfactory on these grounds. The method is illustrated with a multiscale analysis of the comovements of Eurozone stock markets during this century. It is shown how the evolution of the correlation structure in these markets has been far from homogeneous both along time and across timescales featuring an acute divide across timescales at about the quarterly scale. At longer scales, evidence from the long-term correlation structure can be interpreted as stable perfect integration among Euro stock markets. On the other hand, at intramonth and intraweek scales, the short-term correlation structure has been clearly evolving along time, experiencing a sharp increase during financial crises which may be interpreted as evidence of financial 'contagion'.
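For contrast with the paper's wavelet local multiple correlation, the standard rolling-window pairwise correlation it improves upon can be sketched as follows (names and data are illustrative, not from the paper):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def rolling_correlation(x, y, window):
    """Pearson correlation of x and y over a rolling rectangular window."""
    return [pearson(x[i:i + window], y[i:i + window])
            for i in range(len(x) - window + 1)]

x = [1, 2, 3, 4, 5, 6]
y = [2, 4, 6, 5, 4, 3]  # comoves with x early on, then moves against it
print(rolling_correlation(x, y, 3))  # [1.0, 0.5, -1.0, -1.0]
```

In the paper's multiscale setting, this windowed regression is applied to wavelet coefficients at each timescale, and the rectangular window is replaced by smoother weight functions with better spectral properties.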
Noise Reduction in Breath Sound Files Using Wavelet Transform Based Filter
NASA Astrophysics Data System (ADS)
Syahputra, M. F.; Situmeang, S. I. G.; Rahmat, R. F.; Budiarto, R.
2017-04-01
The development of science and technology in the field of healthcare increasingly provides convenience in diagnosing respiratory system problems. Recording breath sounds is one example of these developments. Breath sounds are recorded using a digital stethoscope and then stored in an audio file. These breath sounds are analyzed by health practitioners to diagnose the symptoms of disease or illness. However, the recorded breath sounds are not free from interference signals. Therefore, a noise filter or signal interference reduction system is required so that the breath sound component containing the information signal can be clarified. In this study, we designed a wavelet transform based filter, using a Daubechies wavelet with four wavelet transform coefficients. In tests on ten types of breath sound data, the largest SNRdB, 74.3685 dB, was obtained for the bronchial breath sound.
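The SNRdB figure quoted above is conventionally computed as ten times the base-10 logarithm of the signal-to-noise power ratio. A minimal sketch (the sample values are illustrative, not the study's data):

```python
import math

def snr_db(signal, residual_noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    p_sig = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in residual_noise) / len(residual_noise)
    return 10.0 * math.log10(p_sig / p_noise)

clean = [1.0, -1.0, 1.0, -1.0]      # unit-power signal
noise = [0.01, -0.01, 0.01, -0.01]  # residual noise, power 1e-4
print(round(snr_db(clean, noise), 1))  # 40.0
```

A higher SNRdB after filtering indicates that more of the interference has been removed relative to the retained breath-sound energy.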
Analysis of two dimensional signals via curvelet transform
NASA Astrophysics Data System (ADS)
Lech, W.; Wójcik, W.; Kotyra, A.; Popiel, P.; Duk, M.
2007-04-01
This paper describes the application of the curvelet transform to the analysis of interferometric images. Compared with the two-dimensional wavelet transform, the curvelet transform has higher time-frequency resolution. The article includes numerical experiments executed on a random interferometric image. In nonlinear approximation, the curvelet transform yields a matrix with a smaller number of coefficients than the wavelet transform. Additionally, denoising simulations show that the curvelet transform can be a very good tool for removing noise from images.
Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2014-07-01
Feature selection is a very important aspect of machine learning. It entails the search for an optimal subset of a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as the filter method. As the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNN model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
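A minimal sketch of the multilevel DWT decomposition step described above, using the Haar wavelet for brevity (the abstract does not specify the wavelet family; the helper names are illustrative). Each level splits the current approximation into approximation and detail coefficient sets:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_step(signal):
    """One level of the Haar DWT (signal length must be even)."""
    approx = [(signal[i] + signal[i + 1]) / SQRT2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / SQRT2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_decompose(signal, levels):
    """Multilevel decomposition: returns [cA_n, cD_n, ..., cD_1]."""
    details = []
    approx = list(signal)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return [approx] + details[::-1]

bands = haar_decompose([4.0, 2.0, 5.0, 5.0], 2)
print(bands)  # [cA2, cD2, cD1] = [[8.0], [-2.0], [~1.414, 0.0]]
```

Because the Haar basis is orthonormal, the total energy of the coefficient sets equals that of the input, which is what makes per-subband statistics computed on these sets meaningful as classifier features.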
Filtering of the Radon transform to enhance linear signal features via wavelet pyramid decomposition
NASA Astrophysics Data System (ADS)
Meckley, John R.
1995-09-01
The information content in many signal processing applications can be reduced to a set of linear features in a 2D signal transform. Examples include the narrowband lines in a spectrogram, ship wakes in a synthetic aperture radar image, and blood vessels in a medical computer-aided tomography scan. The line integrals that generate the values of the projections of the Radon transform can be characterized as a bank of matched filters for linear features. This localization of energy in the Radon transform for linear features can be exploited to enhance these features and to reduce noise by filtering the Radon transform with a filter explicitly designed to pass only linear features, and then reconstructing a new 2D signal by inverting the new filtered Radon transform (i.e., via filtered backprojection). Previously used methods for filtering the Radon transform include Fourier based filtering (a 2D elliptical Gaussian linear filter) and a nonlinear filter ((Radon xfrm)**y with y >= 2.0). Both of these techniques suffer from the mismatch of the filter response to the true functional form of the Radon transform of a line. The Radon transform of a line is not a point but is a function of the Radon variables (rho, theta) and the total line energy. This mismatch leads to artifacts in the reconstructed image and a reduction in achievable processing gain. The Radon transform for a line is computed as a function of angle and offset (rho, theta) and the line length. The 2D wavelet coefficients are then compared for the Haar wavelets and the Daubechies wavelets. These filter responses are used as frequency filters for the Radon transform. The filtering is performed on the wavelet pyramid decomposition of the Radon transform by detecting the most likely positions of lines in the transform and then by convolving the local area with the appropriate response and zeroing the pyramid coefficients outside of the response area. 
The response area is defined to contain 95% of the total wavelet coefficient energy. The detection algorithm provides an estimate of the line offset, orientation, and length that is then used to index the appropriate filter shape. Additional wavelet pyramid decomposition is performed in areas of high energy to refine the line position estimate. After filtering, the new Radon transform is generated by inverting the wavelet pyramid. The Radon transform is then inverted by filtered backprojection to produce the final 2D signal estimate with the enhanced linear features. The wavelet-based method is compared to both the Fourier and the nonlinear filtering with examples of sparse and dense shapes in imaging, acoustics and medical tomography with test images of noisy concentric lines, a real spectrogram of a blow fish (a very nonstationary spectrum), and the Shepp Logan Computer Tomography phantom image. Both qualitative and derived quantitative measures demonstrate the improvement of wavelet-based filtering. Additional research is suggested based on these results. Open questions include what level(s) to use for detection and filtering because multiple-level representations exist. The lower levels are smoother at reduced spatial resolution, while the higher levels provide better response to edges. Several examples are discussed based on analytical and phenomenological arguments.
Røislien, Jo; Winje, Brita
2013-09-20
Clinical studies frequently include repeated measurements of individuals, often for long periods. We present a methodology for extracting common temporal features across a set of individual time series observations. In particular, the methodology explores extreme observations within the time series, such as spikes, as a possible common temporal phenomenon. Wavelet basis functions are attractive in this sense, as they are localized in both time and frequency domains simultaneously, allowing for localized feature extraction from a time-varying signal. We apply wavelet basis function decomposition of individual time series, with corresponding wavelet shrinkage to remove noise. We then extract common temporal features using linear principal component analysis on the wavelet coefficients, before inverse transformation back to the time domain for clinical interpretation. We demonstrate the methodology on a subset of a large fetal activity study aiming to identify temporal patterns in fetal movement (FM) count data in order to explore formal FM counting as a screening tool for identifying fetal compromise and thus preventing adverse birth outcomes. Copyright © 2013 John Wiley & Sons, Ltd.
Nagarajan, R; Hariharan, M; Satiyan, M
2012-08-01
Developing tools to assist physically disabled and immobilized people through facial expression recognition is a challenging area of research and has attracted many researchers recently. In this paper, luminance-sticker-based facial expression recognition is proposed. Recognition of facial expression is carried out by employing the Discrete Wavelet Transform (DWT) as a feature extraction method. Different wavelet families with their different orders (db1 to db20, Coif1 to Coif5 and Sym2 to Sym8) are utilized to investigate their performance in recognizing facial expression and to evaluate their computational time. The standard deviation is computed for the coefficients of the first level of wavelet decomposition for every order of wavelet family. This standard deviation is used to form a set of feature vectors for classification. In this study, conventional validation and cross-validation are performed to evaluate the efficiency of the suggested feature vectors. Three different classifiers, namely Artificial Neural Network (ANN), k-Nearest Neighbor (kNN) and Linear Discriminant Analysis (LDA), are used to classify a set of eight facial expressions. The experimental results demonstrate that the proposed method gives very promising classification accuracies.
Capizzi, Giacomo; Napoli, Christian; Bonanno, Francesco
2012-11-01
Solar radiation prediction is an important challenge for the electrical engineer because it is used to estimate the power developed by commercial photovoltaic modules. This paper deals with the problem of solar radiation prediction based on observed meteorological data. A 2-day forecast is obtained by using novel wavelet recurrent neural networks (WRNNs). In fact, these WRNNS are used to exploit the correlation between solar radiation and timescale-related variations of wind speed, humidity, and temperature. The input to the selected WRNN is provided by timescale-related bands of wavelet coefficients obtained from meteorological time series. The experimental setup available at the University of Catania, Italy, provided this information. The novelty of this approach is that the proposed WRNN performs the prediction in the wavelet domain and, in addition, also performs the inverse wavelet transform, giving the predicted signal as output. The obtained simulation results show a very low root-mean-square error compared to the results of the solar radiation prediction approaches obtained by hybrid neural networks reported in the recent literature.
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Zavorine, Ilya
1999-01-01
A wavelet-based image registration approach has previously been proposed by the authors. In this work, wavelet coefficient maxima obtained from an orthogonal wavelet decomposition using Daubechies filters were utilized to register images in a multi-resolution fashion. Tested on several remote sensing datasets, this method gave very encouraging results. Despite the lack of translation-invariance of these filters, we showed that when using cross-correlation as a feature matching technique, features of size larger than twice the size of the filters are correctly registered by using the low-frequency subbands of the Daubechies wavelet decomposition. Nevertheless, high-frequency subbands are still sensitive to translation effects. In this work, we are considering a rotation- and translation-invariant representation developed by E. Simoncelli and integrate it in our image registration scheme. The two types of filters, Daubechies and Simoncelli filters, are then being compared from a registration point of view, utilizing synthetic data as well as data from the Landsat/ Thematic Mapper (TM) and from the NOAA Advanced Very High Resolution Radiometer (AVHRR).
Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications
2005-04-01
coefficient sets describing inverse transforms and matched forward/ inverse transform pairs that consistently outperform wavelets for image compression and reconstruction applications under conditions subject to quantization error.
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used as a method to get the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. Discrete wavelet transform and dual-tree complex wavelet transform are used as the sparsifying basis for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model fit is done in the wavelet domain and estimation of the probability density function (pdf) parameters by expectation maximization leads us to the proper selection of the coefficients of the fused image. Using the proposed method compared with the fusion scheme without employing the projected Landweber (PL) scheme and the other existing CS-based fusion approaches, it is observed that with fewer samples itself, the proposed method outperforms other approaches.
Non-parametric transient classification using adaptive wavelets
NASA Astrophysics Data System (ADS)
Varughese, Melvin M.; von Sachs, Rainer; Stephanou, Michael; Bassett, Bruce A.
2015-11-01
Classifying transients based on multiband light curves is a challenging but crucial problem in the era of Gaia and the Large Synoptic Survey Telescope, since the sheer volume of transients will make spectroscopic classification unfeasible. We present a non-parametric classifier that predicts the transient's class given training data. It implements two novel components: the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients - as well as the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data in addition to the potential non-representativity of the training set. The classifier is simple to implement while a major advantage of the BAGIDIS wavelets is that they are translation invariant. Hence, BAGIDIS does not need the light curves to be aligned to extract features. Further, BAGIDIS is non-parametric so it can be used effectively in blind searches for new objects. We demonstrate the effectiveness of our classifier against the Supernova Photometric Classification Challenge to correctly classify supernova light curves as Type Ia or non-Ia. We train our classifier on the spectroscopically confirmed subsample (which is not representative) and show that it works well for supernovae with observed light-curve time spans greater than 100 d (roughly 55 per cent of the data set). For such data, we obtain a Ia efficiency of 80.5 per cent and a purity of 82.4 per cent, yielding a highly competitive challenge score of 0.49. This indicates that our `model-blind' approach may be particularly suitable for the general classification of astronomical transients in the era of large synoptic sky surveys.
Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anant, K.S.
1997-06-01
In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P as well as the S phase using only information from single station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The `pick` time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results.
The methods developed in this thesis (the feature identification methods of Chapter 3, the compression methods of Chapter 4, and the wavelet design methods of Chapter 5) are general enough to be easily applied to other transient signals.
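The matched-filter idea of Chapter 3 (correlate the signal with the analyzing wavelet and combine responses across scales) can be sketched as follows. This is an illustrative reconstruction, not the dissertation's actual picker; the Ricker (Mexican-hat) wavelet and the cross-scale product rule are assumptions.

```python
import numpy as np

def ricker(n, a):
    # Mexican-hat wavelet of length n and width parameter a
    t = np.arange(n) - (n - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def pick_arrival(sig, scales=(4, 8, 16)):
    # matched filter at several scales; a real transient lines up across
    # scales, so the product of responses sharpens its location in time
    prod = np.ones(len(sig))
    for a in scales:
        w = ricker(int(10 * a) | 1, a)            # odd-length kernel
        prod *= np.abs(np.convolve(sig, w, mode='same'))
    return int(np.argmax(prod))

# synthetic "seismogram": noise plus one wavelet-shaped arrival at n = 300
rng = np.random.default_rng(1)
sig = 0.05 * rng.normal(size=1000)
sig[300 - 40:300 + 41] += ricker(81, 8)
pick = pick_arrival(sig)
```

Processing across scales is what gives this picker its noise robustness: isolated noise spikes rarely produce aligned responses at all scales.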
Reversible wavelet filter banks with side informationless spatially adaptive low-pass filters
NASA Astrophysics Data System (ADS)
Abhayaratne, Charith
2011-07-01
Wavelet transforms that have an adaptive low-pass filter are useful in applications that require the signal singularities, sharp transitions, and image edges to be left intact in the low-pass signal. In scalable image coding, the spatial resolution scalability is achieved by reconstructing the low-pass signal subband, which corresponds to the desired resolution level, and discarding other high-frequency wavelet subbands. In such applications, it is vital to have low-pass subbands that are not affected by smoothing artifacts associated with low-pass filtering. We present the mathematical framework for achieving 1-D wavelet transforms that have a spatially adaptive low-pass filter (SALP) using the prediction-first lifting scheme. The adaptivity decisions are computed using the wavelet coefficients, and no bookkeeping is required for the perfect reconstruction. Then, 2-D wavelet transforms that have a spatially adaptive low-pass filter are designed by extending the 1-D SALP framework. Because the 2-D polyphase decompositions are used in this case, the 2-D adaptivity decisions are made nonseparable as opposed to the separable 2-D realization using 1-D transforms. We present examples using the 2-D 5/3 wavelet transform and their lossless image coding and scalable decoding performances in terms of quality and resolution scalability. The proposed 2-D-SALP scheme results in better performance compared to the existing adaptive update lifting schemes.
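A 1-D sketch of the reversible 5/3 lifting transform referred to above (JPEG2000-style integer lifting). Periodic extension via `np.roll` is a simplification of the boundary handling, and no adaptivity is included here.

```python
import numpy as np

def lift53_fwd(x):
    # reversible LeGall 5/3 lifting: split, predict, update (even length)
    s, d = x[0::2].astype(int), x[1::2].astype(int)
    d = d - (s + np.roll(s, -1)) // 2        # predict odd samples from even
    s = s + (d + np.roll(d, 1) + 2) // 4     # update to preserve the mean
    return s, d

def lift53_inv(s, d):
    # undo the lifting steps in reverse order: exact reconstruction
    s = s - (d + np.roll(d, 1) + 2) // 4
    d = d + (s + np.roll(s, -1)) // 2
    x = np.empty(2 * len(s), dtype=int)
    x[0::2], x[1::2] = s, d
    return x
```

Reversibility holds regardless of the rounding inside each step, which is why lifting is the natural vehicle for the adaptive low-pass filters described above: a decision computed from the coefficients can change the predict/update rule without breaking perfect reconstruction or requiring bookkeeping.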
Jiang, Gang; Quan, Hong; Wang, Cheng; Gong, Qiyong
2012-12-01
In this paper, a new method combining a translation-invariant (TI) and a wavelet-threshold (WT) algorithm to distinguish weak and overlapping signals in proton magnetic resonance spectroscopy (1H-MRS) is presented. First, the 1H-MRS spectrum signal is transformed into the wavelet domain and its wavelet coefficients are obtained. Then, the TI method and WT method are applied to detect the weak signals overlapped by the strong ones. Analysis of simulated data shows that both the frequency and amplitude information of weak signals can be obtained accurately by the algorithm and, combined with signal fitting, the area under weak signal peaks can be calculated quantitatively.
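Translation-invariant thresholding can be sketched by cycle spinning: shift the signal, threshold in the wavelet domain, unshift, and average. A single-level Haar transform is used here for brevity; the paper's actual wavelet and threshold rule may differ.

```python
import numpy as np

def haar1(x):
    # one level of the orthonormal 1-D Haar transform
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def ihaar1(a, d):
    x = np.empty(2 * len(a))
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def ti_denoise(x, t, shifts=8):
    # cycle spinning: shift, threshold details in the Haar domain,
    # unshift, and average over all shifts
    out = np.zeros_like(x, dtype=float)
    for k in range(shifts):
        y = np.roll(x, k)
        a, d = haar1(y)
        out += np.roll(ihaar1(a, soft(d, t)), -k)
    return out / shifts
```

Averaging over shifts removes the pseudo-Gibbs artifacts that a fixed (decimated) wavelet grid produces near sharp peaks, which is what makes the TI variant attractive for overlapping spectral lines.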
Wavelet analysis of polarization maps of polycrystalline biological fluids networks
NASA Astrophysics Data System (ADS)
Ushenko, Y. A.
2011-12-01
An optical model of human joint synovial fluid is proposed. The statistical (statistical moments), correlation (autocorrelation function) and self-similar (log-log dependencies of the power spectrum) structure of two-dimensional polarization distributions (polarization maps) of synovial fluid has been analyzed. It has been shown that differentiating polarization maps of joint synovial fluid samples in different physiological states requires scale-discriminative analysis. To mark out the small-scale domain structure of synovial fluid polarization maps, wavelet analysis has been used. A set of parameters has been determined that characterizes the statistical, correlation and self-similar structure of the wavelet coefficient distributions at different scales of polarization domains, for diagnostics and differentiation of polycrystalline network transformations connected with pathological processes.
Synchrony in broadband fluctuation and the 2008 financial crisis.
Lin, Der Chyan
2013-01-01
We propose phase-like characteristics in scale-free broadband processes and consider fluctuation synchrony based on the temporal signature of significant amplitude fluctuation. Using the wavelet transform, successful captures of similar fluctuation patterns between such broadband processes are demonstrated. The application to the financial data leading up to the 2008 financial crisis reveals a transition towards a qualitatively different dynamical regime with many equity prices in fluctuation synchrony. Further analysis suggests an underlying scale-free "price fluctuation network" with a large clustering coefficient.
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital-photography-oriented 2D compression techniques such as JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in the high-frequency subbands. This work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image.
The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
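The bit-plane (embedded) coding idea underlying SPIHT-family coders can be sketched without the tree structures: transmit magnitude bit planes from most to least significant, so truncating the stream still yields a coarser but valid reconstruction. This bare-bones version omits the significance trees and the arithmetic coder.

```python
import numpy as np

def bitplane_encode(coeffs, nplanes):
    # emit magnitude bit planes from MSB down, plus signs: a bare-bones
    # stand-in for SPIHT's sorting/refinement passes, with no tree coding
    mags = np.abs(coeffs).astype(int)
    planes, T = [], 1 << (nplanes - 1)
    while T >= 1:
        planes.append((mags & T) > 0)
        T >>= 1
    return np.sign(coeffs).astype(int), planes

def bitplane_decode(signs, planes):
    # rebuild magnitudes plane by plane; fewer planes -> coarser values
    mags = np.zeros(len(signs), dtype=int)
    for p in planes:
        mags = (mags << 1) | p.astype(int)
    return signs * mags
```

Decoding only the first k planes returns each magnitude floored to its top k bits, which is exactly the embedded, progressive property that bit-plane-ordered magnitude coding provides.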
Applications of wavelets in morphometric analysis of medical images
NASA Astrophysics Data System (ADS)
Davatzikos, Christos; Tao, Xiaodong; Shen, Dinggang
2003-11-01
Morphometric analysis of medical images is playing an increasingly important role in understanding brain structure and function, as well as in understanding the way in which these change during development, aging and pathology. This paper presents three wavelet-based methods with related applications in morphometric analysis of magnetic resonance (MR) brain images. The first method handles cases where very limited datasets are available for the training of statistical shape models in the deformable segmentation. The method is capable of capturing a larger range of shape variability than the standard active shape models (ASMs) can, by using the elegant spatial-frequency decomposition of the shape contours provided by wavelet transforms. The second method addresses the difficulty of finding correspondences in anatomical images, which is a key step in shape analysis and deformable registration. The detection of anatomical correspondences is completed by using wavelet-based attribute vectors as morphological signatures of voxels. The third method uses wavelets to characterize the morphological measurements obtained from all voxels in a brain image, and the entire set of wavelet coefficients is further used to build a brain classifier. Since the classification scheme operates in a very-high-dimensional space, it can determine subtle population differences with complex spatial patterns. Experimental results are provided to demonstrate the performance of the proposed methods.
Neural network and wavelets in prediction of cosmic ray variability: The North Africa as study case
NASA Astrophysics Data System (ADS)
Zarrouk, Neïla; Bennaceur, Raouf
2010-04-01
Since the Earth is permanently bombarded with energetic cosmic-ray particles, the cosmic-ray flux has been monitored by ground-based neutron monitors for decades. In this work we investigate the decompositions and reconstructions provided by the Morlet wavelet technique using time series of cosmic-ray variability, and then use this wavelet analysis to build an input database for a neural network system that predicts the decomposition coefficients and all related parameters at other points. These predictions are used in a recomposition step that yields the curves describing relative cosmic-ray intensities at any point on the Earth for which no cosmic-ray intensity measurements are available. Although neural networks combined with wavelets are not frequently used for cosmic-ray time series, they seem very suitable and are a good choice for obtaining these results. We have derived a useful tool for obtaining the decomposition coefficients and the main periods for each point on the Earth, and we now have a kind of virtual neutron monitor for locations in North African countries: Morocco, Algeria, Tunisia and Libya, as well as Cairo. We recover the well-known 11-year cycle T1, and also reveal the variation of the T2 and especially T3 cycles, which seem to be induced by particular terrestrial phenomena.
Fusing Image Data for Calculating Position of an Object
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang; Liebersbach, Robert; Trebi-Ollenu, Ashitey
2007-01-01
A computer program has been written for use in maintaining the calibration, with respect to the positions of imaged objects, of a stereoscopic pair of cameras on each of the Mars Explorer Rovers Spirit and Opportunity. The program identifies and locates a known object in the images. The object in question is part of a Moessbauer spectrometer located at the tip of a robot arm, the kinematics of which are known. In the program, the images are processed through a module that extracts edges, combines the edges into line segments, and then derives ellipse centroids from the line segments. The images are also processed by a feature-extraction algorithm that performs a wavelet analysis, then performs a pattern-recognition operation in the wavelet-coefficient space to determine matches to a texture feature measure derived from the horizontal, vertical, and diagonal coefficients. The centroids from the ellipse finder and the wavelet feature matcher are then fused to determine co-location. In the event that a match is found, the centroid (or centroids if multiple matches are present) is reported. If no match is found, the process reports the results of the analyses for further examination by human experts.
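The wavelet texture measure built from horizontal, vertical, and diagonal detail coefficients can be sketched as follows. A single-level Haar decomposition and a normalized-energy signature with a simple distance rule are assumptions; the flight software's actual feature measure is not public.

```python
import numpy as np

def texture_feature(img):
    # single-level Haar detail energies as a 3-component texture signature
    a = img[0::2, :] + img[1::2, :]          # row sums (lowpass along rows)
    dv = img[0::2, :] - img[1::2, :]         # row differences (highpass)
    b1 = (a[:, 0::2] - a[:, 1::2]) / 2       # varies across columns (vertical stripes)
    b2 = (dv[:, 0::2] + dv[:, 1::2]) / 2     # varies across rows (horizontal stripes)
    b3 = (dv[:, 0::2] - dv[:, 1::2]) / 2     # diagonal variation
    e = np.array([np.mean(b1 ** 2), np.mean(b2 ** 2), np.mean(b3 ** 2)])
    return e / (e.sum() + 1e-12)             # normalize to a unit-sum signature

def match(f1, f2, tol=0.1):
    # declare a texture match when the signatures are close in L1 distance
    return bool(np.abs(f1 - f2).sum() < tol)
```

Because the signature is normalized, it is insensitive to overall brightness, and it responds only to the orientation distribution of fine detail, which is the property a texture matcher needs.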
Liu, Zhiwen; He, Zhengjia; Guo, Wei; Tang, Zhangchun
2016-03-01
In order to extract fault features of large-scale power equipment from strong background noise, a hybrid fault diagnosis method based on second generation wavelet de-noising (SGWD) and the local mean decomposition (LMD) is proposed in this paper. In this method, a de-noising algorithm of the second generation wavelet transform (SGWT) using neighboring coefficients is employed as a pretreatment to remove noise from rotating machinery vibration signals, by virtue of its good effect in enhancing the signal-to-noise ratio (SNR). Then, the LMD method is used to decompose the de-noised signals into several product functions (PFs). The PF corresponding to the faulty feature signal is selected according to the correlation coefficient criterion. Finally, the frequency spectrum is analyzed by applying the FFT to the selected PF. The proposed method is applied to analyze the vibration signals collected from an experimental gearbox and a real locomotive rolling bearing. The results demonstrate that the proposed method achieves better performance, such as a higher SNR and faster convergence, than the normal LMD method. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Just Noticeable Distortion Model and Its Application in Color Image Watermarking
NASA Astrophysics Data System (ADS)
Liu, Kuo-Cheng
In this paper, a perceptually adaptive watermarking scheme for color images is proposed in order to achieve robustness and transparency. A new just noticeable distortion (JND) estimator for color images is first designed in the wavelet domain. The key issue of the JND model is to effectively integrate visual masking effects. The estimator is an extension of the perceptual model used in image coding for grayscale images. In addition to the visual masking effects computed coefficient by coefficient from the luminance content and texture of grayscale images, the cross-masking effect arising from the interaction between luminance and chrominance components, and the effect of the variance within the local region of the target coefficient, are investigated so that the visibility threshold for the human visual system (HVS) can be evaluated. In a locally adaptive fashion based on the wavelet decomposition, the estimator applies to all subbands of the luminance and chrominance components of color images and is used to measure the visibility of wavelet quantization errors. The subband JND profiles are then incorporated into the proposed color image watermarking scheme. Performance in terms of robustness and transparency of the watermarking scheme is obtained by means of the proposed approach to embed the maximum strength watermark while maintaining the perceptually lossless quality of the watermarked color image. Simulation results show that the proposed scheme, inserting watermarks into both luminance and chrominance components, is more robust than the existing scheme while retaining watermark transparency.
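The embed-at-the-JND-limit principle can be sketched as follows. All names here are hypothetical: a ±1 watermark sequence is scaled just below each coefficient's visibility threshold, which is the general mechanism such schemes use, though the paper's actual embedding rule is more elaborate.

```python
import numpy as np

def embed(coeffs, jnd, w, alpha=0.9):
    # scale the +/-1 watermark just below each coefficient's JND, so the
    # distortion stays under the visibility threshold (alpha < 1)
    return coeffs + alpha * jnd * w

def detect(coeffs, marked, jnd, alpha=0.9):
    # informed (non-blind) recovery of the +/-1 sequence
    return (marked - coeffs) / (alpha * jnd)
```

Robustness and transparency trade off through `alpha`: larger values give a stronger, more attack-resistant mark, but `alpha` must stay below 1 for the change at each coefficient to remain below its JND.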
NASA Astrophysics Data System (ADS)
Campo, D.; Quintero, O. L.; Bastidas, M.
2016-04-01
We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (the discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, presenting a high accuracy rate in the classification of emotional states without the need for other classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.
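A sketch of this kind of subband feature extraction. A multi-level Haar DWT stands in for the Daubechies family, and log-energy plus entropy per subband are representative of, not identical to, the paper's feature set.

```python
import numpy as np

def wavedec_haar(x, level):
    # multi-level 1-D Haar decomposition (length divisible by 2**level);
    # returns detail bands fine-to-coarse, then the final approximation
    coeffs, a = [], x.astype(float)
    for _ in range(level):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

def band_features(x, level=4):
    # per-subband log-energy and normalized energy entropy: the kind of
    # statistics fed to an ANN classifier of emotional states
    feats = []
    for c in wavedec_haar(x, level):
        e = c ** 2
        p = e / (e.sum() + 1e-12)
        feats += [np.log(e.sum() + 1e-12), -(p * np.log(p + 1e-12)).sum()]
    return np.array(feats)
```

The feature vector is short (two numbers per subband), which keeps the downstream classifier small while still summarizing how signal energy is distributed across frequency bands.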
Paul, Rimi; Sengupta, Anindita
2017-11-01
A new controller based on discrete wavelet packet transform (DWPT) for liquid level system (LLS) has been presented here. This controller generates control signal using node coefficients of the error signal which interprets many implicit phenomena such as process dynamics, measurement noise and effect of external disturbances. Through simulation results on LLS problem, this controller is shown to perform faster than both the discrete wavelet transform based controller and conventional proportional integral controller. Also, it is more efficient in terms of its ability to provide better noise rejection. To overcome the wind up phenomenon by considering the saturation due to presence of actuator, anti-wind up technique is applied to the conventional PI controller and compared to the wavelet packet transform based controller. In this case also, packet controller is found better than the other ones. This similar work has been extended for analogous first order RC plant as well as second order plant also. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
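The packet-domain noise rejection can be sketched as follows: split an error-signal window into its four level-2 wavelet-packet nodes, rebuild it from the low-frequency nodes only, and let a conventional PI law act on the cleaned error. This is a simplification; the controller described above works directly with the node coefficients.

```python
import numpy as np

def haar_step(x):
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def packet_level2(x):
    # full 2-level wavelet-packet split: nodes ordered aa, ad, da, dd
    a, d = haar_step(x)
    return [*haar_step(a), *haar_step(d)]

def denoised_error(err_window, keep=2):
    # reconstruct from the `keep` lowest-frequency packet nodes only:
    # a crude version of the controller's noise rejection
    nodes = packet_level2(err_window)
    for i in range(keep, 4):
        nodes[i] = np.zeros_like(nodes[i])
    aa, ad, da, dd = nodes
    a = np.empty(2 * len(aa)); a[0::2] = (aa + ad) / np.sqrt(2); a[1::2] = (aa - ad) / np.sqrt(2)
    d = np.empty(2 * len(da)); d[0::2] = (da + dd) / np.sqrt(2); d[1::2] = (da - dd) / np.sqrt(2)
    x = np.empty(2 * len(a)); x[0::2] = (a + d) / np.sqrt(2); x[1::2] = (a - d) / np.sqrt(2)
    return x

def pi_control(errors, kp=1.0, ki=0.1):
    # PI law applied to the packet-denoised error window
    e = denoised_error(errors)
    return kp * e[-1] + ki * e.sum()
```

Because the Haar packet split is orthonormal, zeroing nodes can only remove energy, so the cleaned error is never noisier than the raw one.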
A new method of Quickbird own image fusion
NASA Astrophysics Data System (ADS)
Han, Ying; Jiang, Hong; Zhang, Xiuying
2009-10-01
With the rapid development of remote sensing technology, the means of acquiring remote sensing data have become increasingly abundant, so the same area can yield a large number of multi-temporal image sequences of different resolutions. At present, the main fusion methods are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm, wavelet transforms, and so on. The IHS transform introduces serious spectral distortion; the Mallat algorithm omits the low-frequency information of the high-spatial-resolution images, and its fusion results show obvious blocking effects. Wavelet multi-scale decomposition achieves good results for different sizes, directions, details and edges, but different fusion rules and algorithms produce different effects. This article takes Quickbird own-image fusion as an example, comparing fusion based on wavelet transform with HVS against fusion based on wavelet transform with IHS. The results show that the former performs better. This paper uses the correlation coefficient, the relative average spectral error index and other common indices to evaluate image quality.
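Since the choice of fusion rule largely determines the result, the standard coefficient-level rule can be sketched independent of the transform. Assume both images have already been decomposed into one approximation band and a list of detail bands; band names and shapes here are illustrative.

```python
import numpy as np

def fuse_bands(bandsA, bandsB):
    # standard wavelet fusion rule: average the approximation band,
    # keep the larger-magnitude coefficient in every detail band
    (aA, detA), (aB, detB) = bandsA, bandsB
    fused_a = (aA + aB) / 2
    fused_det = [np.where(np.abs(dA) >= np.abs(dB), dA, dB)
                 for dA, dB in zip(detA, detB)]
    return fused_a, fused_det
```

Averaging the approximation preserves the overall radiometry of both inputs, while the max-absolute rule on details keeps whichever image is sharper at each location and scale.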
Steerable dyadic wavelet transform and interval wavelets for enhancement of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Koren, Iztok; Yang, Wuhai; Taylor, Fred J.
1995-04-01
This paper describes two approaches for accomplishing interactive feature analysis by overcomplete multiresolution representations. We show quantitatively that transform coefficients, modified by an adaptive non-linear operator, can make unseen or barely seen mammographic features more conspicuous without requiring additional radiation. Our results are compared with traditional image enhancement techniques by measuring the local contrast of known mammographic features. We design a filter bank representing a steerable dyadic wavelet transform that can be used for multiresolution analysis along arbitrary orientations. Digital mammograms are enhanced by orientation analysis performed by a steerable dyadic wavelet transform. Arbitrary regions of interest (ROI) are enhanced by Deslauriers-Dubuc interpolation representations on an interval. We demonstrate that our methods can provide radiologists with an interactive capability to support localized processing of selected (suspicious) areas (lesions). Features extracted from multiscale representations can provide an adaptive mechanism for accomplishing local contrast enhancement. Improving the visualization of breast pathology can improve the chances of early detection while requiring less time to evaluate mammograms for most patients.
Sensitivity evaluation of dynamic speckle activity measurements using clustering methods.
Etchepareborda, Pablo; Federico, Alejandro; Kaufmann, Guillermo H
2010-07-01
We evaluate and compare the use of competitive neural networks, self-organizing maps, the expectation-maximization algorithm, K-means, and fuzzy C-means techniques as partitional clustering methods, when the sensitivity of the activity measurement of dynamic speckle images needs to be improved. The temporal history of the acquired intensity generated by each pixel is analyzed in a wavelet decomposition framework, and it is shown that the mean energy of its corresponding wavelet coefficients provides a suitable feature space for clustering purposes. The sensitivity obtained by using the evaluated clustering techniques is also compared with the well-known methods of Konishi-Fujii, weighted generalized differences, and wavelet entropy. The performance of the partitional clustering approach is evaluated using simulated dynamic speckle patterns and also experimental data.
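The clustering stage can be sketched with a tiny K-means on per-pixel features (numpy only; the scalar feature here is a stand-in for the mean energy of each pixel's wavelet coefficients described above).

```python
import numpy as np

def kmeans(feats, k=2, iters=20, seed=0):
    # plain Lloyd's K-means on an (n_pixels, n_features) array
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its members
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(0)
    return labels
```

With well-separated activity levels, the two clusters recover the active/inactive partition directly; the fuzzy and EM variants compared in the paper differ mainly in using soft memberships.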
Multiscale 3-D shape representation and segmentation using spherical wavelets.
Nain, Delphine; Haker, Steven; Bobick, Aaron; Tannenbaum, Allen
2007-04-01
This paper presents a novel multiscale shape representation and segmentation algorithm based on the spherical wavelet transform. This work is motivated by the need to compactly and accurately encode variations at multiple scales in the shape representation in order to drive the segmentation and shape analysis of deep brain structures, such as the caudate nucleus or the hippocampus. Our proposed shape representation can be optimized to compactly encode shape variations in a population at the needed scale and spatial locations, enabling the construction of more descriptive, nonglobal, nonuniform shape probability priors to be included in the segmentation and shape analysis framework. In particular, this representation addresses the shortcomings of techniques that learn a global shape prior at a single scale of analysis and cannot represent fine, local variations in a population of shapes in the presence of a limited dataset. Specifically, our technique defines a multiscale parametric model of surfaces belonging to the same population using a compact set of spherical wavelets targeted to that population. We further refine the shape representation by separating into groups wavelet coefficients that describe independent global and/or local biological variations in the population, using spectral graph partitioning. We then learn a prior probability distribution induced over each group to explicitly encode these variations at different scales and spatial locations. Based on this representation, we derive a parametric active surface evolution using the multiscale prior coefficients as parameters for our optimization procedure to naturally include the prior for segmentation. Additionally, the optimization method can be applied in a coarse-to-fine manner. We apply our algorithm to two different brain structures, the caudate nucleus and the hippocampus, of interest in the study of schizophrenia. 
We show: 1) a reconstruction task of a test set to validate the expressiveness of our multiscale prior and 2) a segmentation task. In the reconstruction task, our results show that for a given training set size, our algorithm significantly improves the approximation of shapes in a testing set over the Point Distribution Model, which tends to oversmooth data. In the segmentation task, our validation shows our algorithm is computationally efficient and outperforms the Active Shape Model algorithm, by capturing finer shape details.
S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation
2014-01-01
Background Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure and because of the fast growth of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data. Efficient transmission and/or storage of S-EMG signals is an active research issue, and it is the aim of this work. Methods This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform to perform spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign a shorter digital word length to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shape methods were implemented and compared: decreasing exponential spectral shape, decreasing linear spectral shape, decreasing square-root spectral shape and rotated hyperbolic tangent spectral shape. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented. In addition, comparisons with other encoders proposed in the scientific literature are shown. Conclusions Applying a decreasing bit-allocation shape to the quantized wavelet coefficients, combined with arithmetic coding, results in an efficient procedure.
The performance comparisons of the proposed S-EMG data compression algorithm with the established techniques found in scientific literature have shown promising results. PMID:24571620
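As a rough illustration of the decreasing spectral-shape idea, the sketch below assigns shorter word lengths to higher-frequency subbands. The shape parameters (b_max, decay, b_min) and the two shapes shown are illustrative assumptions, not the paper's fitted models:

```python
import math

def exponential_allocation(n_bands, b_max=16, decay=0.35, b_min=2):
    # Decreasing exponential spectral shape: higher-frequency (higher-index)
    # subbands receive shorter digital word lengths.
    return [max(b_min, round(b_max * math.exp(-decay * k))) for k in range(n_bands)]

def linear_allocation(n_bands, b_max=16, b_min=2):
    # Decreasing linear spectral shape over the same subband ordering.
    step = (b_max - b_min) / (n_bands - 1)
    return [round(b_max - step * k) for k in range(n_bands)]

bits = exponential_allocation(8)   # e.g. [16, 11, 8, 6, 4, 3, 2, 2]
```

The quantized coefficients of each band would then be written with the corresponding word length before entropy coding.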
Shao, Yu; Chang, Chip-Hong
2007-08-01
We present a new speech enhancement scheme for a single-microphone system to meet the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. A psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method to reduce the residual noise and improve the intelligibility of speech. The proposed method is a generalized time-frequency subtraction algorithm, which advantageously exploits the wavelet multirate signal representation to preserve the critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the proposed method. An unvoiced speech enhancement algorithm is also integrated into the system to improve the intelligibility of speech. Through rigorous objective and subjective evaluations, it is shown that the proposed speech enhancement system is capable of reducing noise with little speech degradation in adverse noise environments and the overall performance is superior to several competitive methods.
Characterization and Simulation of Gunfire with Wavelets
Smallwood, David O.
1999-01-01
Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
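The single-round simulation idea can be sketched as follows, with an orthonormal Haar transform standing in for the paper's unspecified wavelet and per-level Gaussian statistics estimated from a toy "measured" record:

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar analysis transform.
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_idwt(a, d):
    # Matching synthesis transform (perfect reconstruction).
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def simulate_round(record, levels=4, seed=None):
    # Analyze the measured single-round record, then synthesize Gaussian
    # coefficients with the same per-level mean and standard deviation.
    rng = np.random.default_rng(seed)
    details, a = [], np.asarray(record, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    sim = rng.normal(a.mean(), a.std(), a.size)
    for d in reversed(details):
        sim = haar_idwt(sim, rng.normal(d.mean(), d.std(), d.size))
    return sim

measured = np.sin(np.linspace(0, 20, 256)) * np.exp(-np.linspace(0, 5, 256))
simulated = simulate_round(measured, seed=0)
```

A many-round realization would concatenate such simulated transients; the zero-memory nonlinear correction of the probability density is omitted here.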
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is display visual resolution in pixels/degree, and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
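The level-to-frequency mapping and a parabolic (in log-frequency) threshold model can be sketched as below. The constants a, k and f0 are placeholder values, not the published fits, and the orientation and color-channel factors are omitted:

```python
import math

def wavelet_frequency(r, level):
    # Spatial frequency of level-lambda basis functions: f = r * 2**(-lambda),
    # where r is the display visual resolution in pixels/degree.
    return r * 2.0 ** (-level)

def detection_threshold(f, a=0.495, k=0.466, f0=0.401):
    # Parabola in log spatial frequency: thresholds rise rapidly with f.
    # The constants here are illustrative placeholders, not the published fits.
    return 10.0 ** (math.log10(a) + k * (math.log10(f) - math.log10(f0)) ** 2)

# A "perceptually lossless" quantization step can be set proportional to
# the threshold of each level's basis functions.
steps = {lam: 2.0 * detection_threshold(wavelet_frequency(32.0, lam))
         for lam in range(1, 6)}
```

Coarser levels (lower spatial frequency) tolerate smaller steps, matching the observation that thresholds increase with wavelet spatial frequency.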
Novel wavelet threshold denoising method in axle press-fit zone ultrasonic detection
NASA Astrophysics Data System (ADS)
Peng, Chaoyong; Gao, Xiaorong; Peng, Jianping; Wang, Ai
2017-02-01
Axles are important parts of railway locomotives and vehicles. Periodic ultrasonic inspection of axles can effectively detect and monitor axle fatigue cracks. However, in the axle press-fit zone, the complex interface contact condition reduces the signal-to-noise ratio (SNR), so the probability of false positives and false negatives increases. In this work, a novel wavelet threshold function is created to remove noise and suppress press-fit interface echoes in ultrasonic axle defect detection. The novel wavelet threshold function with two variables is designed to ensure the precision of the optimum searching process. Based on the positive correlation between the correlation coefficient and SNR, and on the experimental observation that the defect echo and the press-fit interface echo have different axle-circumferential correlation characteristics, a discrete optimum search for the two undetermined variables in the novel wavelet threshold function is conducted. The performance of the proposed method is assessed by comparing it with traditional threshold methods on real data. The statistical results for the amplitude and the peak SNR of defect echoes show that the proposed wavelet threshold denoising method not only maintains the amplitude of defect echoes but also achieves a higher peak SNR.
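The paper's exact two-variable threshold function is not reproduced here; the sketch below shows one plausible two-parameter family that interpolates between hard and soft thresholding, with t the threshold and alpha the shape variable:

```python
import numpy as np

def two_param_threshold(w, t, alpha):
    # Illustrative two-variable threshold function: as alpha -> 0 it
    # approaches hard thresholding (coefficients kept intact), and as
    # alpha grows it approaches soft thresholding (shrink by t).
    w = np.asarray(w, float)
    shrink = np.sign(w) * (np.abs(w) - t * np.exp(-np.abs(np.abs(w) - t) / (alpha * t)))
    return np.where(np.abs(w) > t, shrink, 0.0)

w = np.array([5.0, 0.5, -3.0])
soft_like = two_param_threshold(w, 1.0, 1e9)    # ~ [4.0, 0.0, -2.0]
hard_like = two_param_threshold(w, 1.0, 1e-12)  # ~ [5.0, 0.0, -3.0]
```

A discrete grid search over (t, alpha), scored by the correlation-based criterion described in the abstract, would then pick the operating point.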
Response of Autonomic Nervous System to Body Positions:
NASA Astrophysics Data System (ADS)
Xu, Aiguo; Gonnella, G.; Federici, A.; Stramaglia, S.; Simone, F.; Zenzola, A.; Santostasi, R.
Two mathematical methods, the Fourier and wavelet transforms, were used to study the short-term cardiovascular control system. Time series lasting 6 minutes, extracted from the electrocardiogram and arterial blood pressure, were analyzed in the supine position (SUP), during the first (HD1) and second (HD2) parts of a 90° head-down tilt, and during recovery (REC). The wavelet transform was performed using the Haar function of period T=2^j (j=1,2,...,6) to obtain the wavelet coefficients. Power spectral components were analyzed within three bands, VLF (0.003-0.04), LF (0.04-0.15) and HF (0.15-0.4), with the frequency unit cycle/interval. The wavelet transform demonstrated a higher discrimination among all analyzed periods than the Fourier transform. For the Fourier analysis, the LF of the R-R intervals and the VLF of the systolic blood pressure show the most evident differences between body positions. For the wavelet analysis, the systolic blood pressures show much more evident differences than the R-R intervals. This study suggests a difference in the response of the vessels and the heart to different body positions. The partial dissociation between the VLF and LF results is a physiologically relevant finding of this work.
NASA Astrophysics Data System (ADS)
Wu, Yunnan; Luo, Lin; Li, Jin; Zhang, Ya-Qin
2000-05-01
The concentric mosaics offer a quick solution to the construction and navigation of a virtual environment. To reduce the vast data amount of the concentric mosaics, a compression scheme based on 3D wavelet transform has been proposed in a previous paper. In this work, we investigate the efficient implementation of the renderer. It is preferable not to expand the compressed bitstream as a whole, so that the memory consumption of the renderer can be reduced. Instead, only the data necessary to render the current view are accessed and decoded. The progressive inverse wavelet synthesis (PIWS) algorithm is proposed to provide the random data access and to reduce the calculation for the data access requests to a minimum. A mixed cache is used in PIWS, where the entropy decoded wavelet coefficient, intermediate result of lifting and fully synthesized pixel are all stored at the same memory unit because of the in-place calculation property of the lifting implementation. PIWS operates with a finite state machine, where each memory unit is attached with a state to indicate what type of content is currently stored. The computational saving achieved by PIWS is demonstrated with extensive experiment results.
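The in-place property of lifting that the mixed cache relies on can be illustrated with an (unnormalized) Haar lifting step, where predict and update overwrite the same memory units that held the input samples:

```python
import numpy as np

def haar_lift_inplace(x):
    # Predict then update, overwriting the input array: odd slots end up
    # holding details, even slots the (unnormalized) approximation.
    x[1::2] -= x[0::2]          # predict: d = odd - even
    x[0::2] += x[1::2] / 2.0    # update:  a = even + d/2 (pair mean)
    return x

def haar_unlift_inplace(x):
    # Inverse lifting: undo the steps in reverse order, also in place.
    x[0::2] -= x[1::2] / 2.0
    x[1::2] += x[0::2]
    return x

x = np.array([2.0, 4.0, 6.0, 8.0])
y = haar_lift_inplace(x.copy())     # approximations 3, 7 and details 2, 2
```

Because every intermediate value lives in the slot of an input sample, one memory unit per pixel suffices, which is exactly what lets PIWS tag each unit with a state in its finite state machine.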
Feng, Zhao; Ling, Jie; Ming, Min; Xiao, Xiao-Hui
2017-08-01
For precision motion, high bandwidth and flexible tracking are the two key issues for significant performance improvement. Iterative learning control (ILC) is an effective feedforward control method only for systems that operate strictly repetitively. Although projection ILC can track varying references, its performance is still limited by the fixed-bandwidth Q-filter, especially for the triangular-wave tracking commonly used in a piezo nanopositioner. In this paper, a wavelet transform-based linear time-varying (LTV) Q-filter design for projection ILC is proposed to compensate for high-frequency errors and improve the ability to track varying references simultaneously. The LTV Q-filter is designed based on the modulus maxima of the wavelet detail coefficients, which determine the high-frequency locations of each iteration while avoiding cross-terms and manual segmentation. The proposed approach was verified on a piezo nanopositioner. Experimental results indicate that the proposed approach can locate the high-frequency regions accurately and achieves the best performance under varying references compared with traditional frequency-domain ILC and projection ILC with a fixed-bandwidth Q-filter, which validates that by implementing the LTV filter in projection ILC, high-bandwidth and flexible tracking can be achieved simultaneously.
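Locating high-frequency error regions via the modulus maxima of wavelet detail coefficients might be sketched as follows; the single-level Haar detail, the 0.5 relative threshold and the synthetic error signal are all illustrative assumptions:

```python
import numpy as np

def haar_detail(x):
    # Single-level Haar detail coefficients of the tracking-error signal.
    x = np.asarray(x, float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def modulus_maxima(d, frac=0.5):
    # Indices where |d| is a local maximum exceeding frac * max(|d|).
    a = np.abs(d)
    return [i for i in range(1, len(a) - 1)
            if a[i] >= a[i - 1] and a[i] > a[i + 1] and a[i] > frac * a.max()]

# Smooth tracking error with a localized high-frequency burst.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
err = np.sin(2 * np.pi * t)
err[100:110] += 0.5 * np.sin(2 * np.pi * 40 * t[100:110])
locs = modulus_maxima(haar_detail(err))   # detail indices near samples 100-110
```

The time-varying Q-filter would then widen its bandwidth only around the flagged locations of each iteration.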
High-performance wavelet engine
NASA Astrophysics Data System (ADS)
Taylor, Fred J.; Mellot, Jonathon D.; Strom, Erik; Koren, Iztok; Lewis, Michael P.
1993-11-01
Wavelet processing has shown great promise for a variety of image and signal processing applications. Wavelets are also among the most computationally expensive techniques in signal processing. It is demonstrated that a wavelet engine constructed with residue number system arithmetic elements offers significant advantages over commercially available wavelet accelerators based upon conventional arithmetic elements. Analysis is presented predicting the dynamic range requirements of the reported residue number system based wavelet accelerator.
NASA Astrophysics Data System (ADS)
Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.
2014-12-01
Recently, Zhang et al. (2014, Pure and Applied Geophysics) developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code is based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves for the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by different wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution so that the model has higher spatial resolution in zones of good data coverage. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface-wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface-wave dispersion data for 3-D variations of shear-wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface-wave traveltimes and ray paths between sources and receivers.
We will test the new joint inversion code at the SAFOD site to compare its performance with that of the previous code. We will also select another fault zone, such as the San Jacinto Fault Zone, to better image its structure.
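A sparsity-constrained inversion in a transform domain is commonly solved with iterative soft thresholding; the toy sketch below (random linear operator, synthetic sparse coefficients) illustrates the principle, not the authors' tomographic implementation:

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, d, lam, n_iter):
    # ISTA for min 0.5*||A c - d||^2 + lam*||c||_1, where c plays the
    # role of the (sparse) wavelet coefficients of the model.
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        c = soft(c - A.T @ (A @ c - d) / L, lam / L)
    return c

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 40))           # stand-in for the sensitivity matrix
c_true = np.zeros(40)
c_true[[3, 17, 30]] = [2.0, -1.5, 1.0]  # sparse "model" coefficients
c_hat = ista(A, A @ c_true, lam=0.05, n_iter=500)
```

In the seismic setting, A would combine the body-wave and surface-wave sensitivity kernels composed with the inverse wavelet transform.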
Elgendi, Mohamed; Kumar, Shine; Guo, Long; Rutledge, Jennifer; Coe, James Y.; Zemp, Roger; Schuurmans, Dale; Adatia, Ian
2015-01-01
Background Automatic detection of the 1st (S1) and 2nd (S2) heart sounds is difficult, and existing algorithms are imprecise. We sought to develop a wavelet-based algorithm for the detection of S1 and S2 in children with and without pulmonary arterial hypertension (PAH). Method Heart sounds were recorded at the second left intercostal space and the cardiac apex with a digital stethoscope simultaneously with pulmonary arterial pressure (PAP). We developed a Daubechies wavelet algorithm for the automatic detection of S1 and S2 using the wavelet coefficient ‘D 6’ based on power spectral analysis. We compared our algorithm with four other Daubechies wavelet-based algorithms published by Liang, Kumar, Wang, and Zhong. We annotated S1 and S2 from an audiovisual examination of the phonocardiographic tracing by two trained cardiologists and the observation that in all subjects systole was shorter than diastole. Results We studied 22 subjects (9 males and 13 females, median age 6 years, range 0.25–19). Eleven subjects had a mean PAP < 25 mmHg. Eleven subjects had PAH with a mean PAP ≥ 25 mmHg. All subjects had a pulmonary artery wedge pressure ≤ 15 mmHg. The sensitivity (SE) and positive predictivity (+P) of our algorithm were 70% and 68%, respectively. In comparison, the SE and +P of Liang were 59% and 42%, Kumar 19% and 12%, Wang 50% and 45%, and Zhong 43% and 53%, respectively. Our algorithm demonstrated robustness and outperformed the other methods up to a signal-to-noise ratio (SNR) of 10 dB. For all algorithms, detection errors arose from low-amplitude peaks, fast heart rates, low signal-to-noise ratio, and fixed thresholds. Conclusion Our algorithm for the detection of S1 and S2 improves the performance of existing Daubechies-based algorithms and justifies the use of the wavelet coefficient ‘D 6’ through power spectral analysis. Also, the robustness despite ambient noise may improve real world clinical performance. PMID:26629704
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiang, N. B.; Kong, D. F., E-mail: nanbin@ynao.ac.cn
The Physikalisch Meteorologisches Observatorium Davos total solar irradiance (TSI), Active Cavity Radiometer Irradiance Monitoring TSI, and Royal Meteorological Institute of Belgium TSI are three typical TSI composites. Magnetic Plage Strength Index (MPSI) and Mount Wilson Sunspot Index (MWSI) should indicate the weak and strong magnetic field activity on the solar full disk, respectively. Cross-correlation (CC) analysis of MWSI with three TSI composites shows that TSI should be weakly correlated with MWSI, and not be in phase with MWSI at timescales of solar cycles. The wavelet coherence (WTC) and partial wavelet coherence (PWC) of TSI with MWSI indicate that the inter-solar-cycle variation of TSI is also not related to solar strong magnetic field activity, which is represented by MWSI. However, CC analysis of MPSI with three TSI composites indicates that TSI should be moderately correlated and accurately in phase with MPSI at timescales of solar cycles, and that the statistical significance test indicates that the correlation coefficient of three TSI composites with MPSI is statistically significantly higher than that of three TSI composites with MWSI. Furthermore, the cross wavelet transform (XWT) and WTC of TSI with MPSI show that the TSI is highly related and actually in phase with MPSI at a timescale of a solar cycle as well. Consequently, the CC analysis, XWT, and WTC indicate that the solar weak magnetic activity on the full disk, which is represented by MPSI, dominates the inter-solar-cycle variation of TSI.
Intelligent Gearbox Diagnosis Methods Based on SVM, Wavelet Lifting and RBR
Gao, Lixin; Ren, Zhiqiang; Tang, Wenliang; Wang, Huaqing; Chen, Peng
2010-01-01
In intelligent gearbox diagnosis it is difficult to obtain the desired information and a large enough sample size to study; therefore, we propose the application of several methods for gearbox fault diagnosis, including wavelet lifting, a support vector machine (SVM) and rule-based reasoning (RBR). In a complex field environment, it is less likely for machines to have the same fault; moreover, the fault features can also vary. Therefore, an SVM can be used for the initial diagnosis. First, gearbox vibration signals were processed with wavelet packet decomposition, and the signal energy coefficients of each frequency band were extracted and used as input feature vectors in the SVM for normal and faulty pattern recognition. Second, precision analysis using wavelet lifting could successfully filter out the noisy signals while maintaining the impulse characteristics of the fault, thus effectively extracting the fault frequency of the machine. Lastly, the knowledge base was built from the field rules summarized by experts to identify the detailed fault type. Results have shown that the SVM is a powerful tool for gearbox fault pattern recognition when the sample size is small, the wavelet lifting scheme can effectively extract fault features, and rule-based reasoning can be used to identify the detailed fault type. Therefore, a method that combines SVM, wavelet lifting and rule-based reasoning ensures effective gearbox fault diagnosis. PMID:22399894
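The wavelet-packet energy features used as SVM inputs can be sketched as below, with a two-level Haar packet standing in for the paper's unspecified mother wavelet and synthetic signals in place of gearbox vibrations:

```python
import numpy as np

def haar_split(x):
    # One Haar analysis step: lowpass and highpass halves of a band.
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def wp_energy_features(x, levels=2):
    # Full wavelet-packet tree: split every band at every level, then
    # return the normalized energy of each leaf band as the feature vector.
    bands = [np.asarray(x, float)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_split(b)]
    e = np.array([np.sum(b ** 2) for b in bands])
    return e / e.sum()

t = np.arange(256) / 256.0
normal = np.sin(2 * np.pi * 4 * t)                    # low-frequency content
faulty = normal + 0.8 * np.sin(2 * np.pi * 100 * t)   # added high-frequency band
feat_normal = wp_energy_features(normal)
feat_faulty = wp_energy_features(faulty)
```

An SVM trained on such vectors separates the classes by where the band energy concentrates; the rule base then refines the label to a detailed fault type.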
A Graph Theory Practice on Transformed Image: A Random Image Steganography
Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan
2013-01-01
The modern information age is enriched with advanced network communication expertise but at the same time encounters countless security issues when dealing with secret and/or private information. The storage and transmission of secret information have become highly essential and have led to a deluge of research in this field. In this paper, an effort has been made to combine a graceful graph with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through the IWT, followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego-image is obtained by applying the inverse IWT. This method provides a maximum peak signal-to-noise ratio (PSNR) of 44 dB for 266646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value, high embedding capacity in the cover image due to the adaptive embedding scheme, and high robustness against blind attacks through the graph-theoretic random selection of coefficients. PMID:24453857
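A minimal IWT embedding sketch: integer-to-integer Haar lifting (one common IWT choice; the paper's transform and graph-based coefficient selection are not reproduced) with payload bits hidden in the LSBs of detail coefficients:

```python
import numpy as np

def int_haar_fwd(x):
    # Integer-to-integer Haar via lifting, so the inverse is exact.
    x = np.asarray(x, np.int64)
    d = x[1::2] - x[0::2]
    a = x[0::2] + (d >> 1)      # arithmetic shift = floor division by 2
    return a, d

def int_haar_inv(a, d):
    x = np.empty(2 * a.size, np.int64)
    x[0::2] = a - (d >> 1)
    x[1::2] = x[0::2] + d
    return x

def embed_bits(d, bits):
    # Hide payload bits in the LSBs of the first len(bits) detail coefficients.
    d = d.copy()
    d[:len(bits)] = (d[:len(bits)] & ~1) | np.asarray(bits, np.int64)
    return d

cover = np.array([52, 55, 61, 59, 70, 64, 80, 84])
a, d = int_haar_fwd(cover)
stego = int_haar_inv(a, embed_bits(d, [1, 0, 1, 1]))
```

The receiver runs the forward IWT on the stego signal and reads the LSBs back; in the paper the coefficient positions would come from a graceful-graph traversal rather than the first-k choice used here.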
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios
Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform acquiring the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The quantification of the speckle suppression performance in the selected set of US images was carried out via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica's methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica's methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists' qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures.
Conclusions: A new wavelet-based EFCM clustering model was introduced toward noise reduction and detail preservation. The proposed method improves the overall US image quality, which in turn could affect the decision-making on whether additional imaging and/or intervention is needed.
ECG feature extraction and disease diagnosis.
Bhyri, Channappa; Hamde, S T; Waghmare, L M
2011-01-01
An important factor to consider when using findings on electrocardiograms for clinical decision making is that the waveforms are influenced by normal physiological and technical factors as well as by pathophysiological factors. In this paper, we propose a method for feature extraction and heart disease diagnosis using the wavelet transform (WT) technique and LabVIEW (Laboratory Virtual Instrument Engineering Workbench). LabVIEW signal processing tools are used to denoise the signal before applying the developed algorithm for feature extraction. First, we developed an algorithm for R-peak detection using the Haar wavelet. After 4th-level decomposition of the ECG signal, the detail coefficients are squared and the standard deviation of the squared detail coefficients is used as the threshold for detection of R-peaks. Second, we used the Daubechies (db6) wavelet for the low-resolution signals. After cross-checking the R-peak locations in the 4th-level low-resolution signal of the Daubechies wavelet, P waves and T waves are detected. Other features of diagnostic importance, mainly heart rate, R-wave width, Q-wave width, T-wave amplitude and duration, ST segment and frontal plane axis, are also extracted and a scoring pattern is applied for the purpose of heart disease diagnosis. In this study, detection of tachycardia, bradycardia, left ventricular hypertrophy, right ventricular hypertrophy and myocardial infarction has been considered. In this work, the CSE ECG database, which contains 5000 samples recorded at a sampling frequency of 500 Hz, and the ECG database created by the S.G.G.S. Institute of Engineering and Technology, Nanded (Maharashtra), have been used.
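The R-peak detection step might be sketched as follows, using a Haar decomposition, squaring the 4th-level detail and thresholding by its standard deviation; the synthetic ECG and the index mapping back to the original sampling grid are illustrative assumptions:

```python
import numpy as np

def haar_level_detail(x, level):
    # Repeatedly take Haar approximations, then the detail at `level`.
    a = np.asarray(x, float)
    for _ in range(level - 1):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return (a[0::2] - a[1::2]) / np.sqrt(2.0)

def detect_r_peaks(ecg, level=4):
    # Square the 4th-level detail and threshold by its standard deviation,
    # then map detail indices back to the original sampling grid.
    d2 = haar_level_detail(ecg, level) ** 2
    hits = np.where(d2 > d2.std())[0]
    return hits * 2 ** level

fs = 512
t = np.arange(4 * fs) / fs
ecg = 0.1 * np.sin(2 * np.pi * t)                 # baseline wander
for beat in (0.5, 1.5, 2.5, 3.5):                 # synthetic R spikes
    ecg[int(beat * fs)] += 1.5
peaks = detect_r_peaks(ecg)
```

Real ECGs would need the denoising stage first, and each coarse hit would be refined to the exact R sample in the original signal.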
Detection of Cardiac Abnormalities from Multilead ECG using Multiscale Phase Alternation Features.
Tripathy, R K; Dandapat, S
2016-06-01
The cardiac activities such as the depolarization and the relaxation of atria and ventricles are observed in the electrocardiogram (ECG). Changes in the morphological features of the ECG are symptoms of particular heart pathologies. It is a cumbersome task for medical experts to visually identify any subtle changes in the morphological features during 24 hours of ECG recording. Therefore, automated analysis of the ECG signal is needed for accurate detection of cardiac abnormalities. In this paper, a novel method for automated detection of cardiac abnormalities from multilead ECG is proposed. The method uses multiscale phase alternation (PA) features of multilead ECG and two classifiers, k-nearest neighbor (KNN) and fuzzy KNN, for classification of bundle branch block (BBB), myocardial infarction (MI), heart muscle defect (HMD) and healthy control (HC). The dual-tree complex wavelet transform (DTCWT) is used to decompose the ECG signal of each lead into complex wavelet coefficients at different scales. The phase of the complex wavelet coefficients is computed and the PA values at each wavelet scale are used as features for detection and classification of cardiac abnormalities. A publicly available multilead ECG database (PTB database) is used for testing the proposed method. The experimental results show that the proposed multiscale PA features and the fuzzy KNN classifier give better performance for detection of cardiac abnormalities, with sensitivity values of 78.12%, 80.90% and 94.31% for the BBB, HMD and MI classes, respectively. The sensitivity value of the proposed method for the MI class is compared with state-of-the-art techniques for multilead ECG.
NASA Astrophysics Data System (ADS)
Wang, Bingjie; Pi, Shaohua; Sun, Qi; Jia, Bo
2015-05-01
An improved classification algorithm based on multiscale wavelet packet Shannon entropy is proposed. Decomposition coefficients at all levels are obtained to build the initial Shannon entropy feature vector. After subtracting the Shannon entropy map of the background signal, the components with the strongest discriminating power in the initial feature vector are selected to rebuild the Shannon entropy feature vector, which is fed to a radial basis function (RBF) neural network for classification. Four types of man-made vibrational intrusion signals were recorded with a modified Sagnac interferometer. The performance of the improved classification algorithm has been evaluated in classification experiments with the RBF neural network under different diffusion coefficients. An 85% classification accuracy rate is achieved, which is higher than that of the other common algorithms. The classification results show that this improved algorithm can be used to classify vibrational intrusion signals in an automatic real-time monitoring system.
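One reading of the per-level Shannon entropy feature is sketched below (Haar packets, entropy of the normalized band-energy distribution at each depth); the exact feature construction in the paper may differ:

```python
import numpy as np

def haar_split(x):
    # One Haar analysis step: lowpass and highpass halves of a band.
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def wp_shannon_entropy(x, levels=3):
    # Shannon entropy of the normalized band-energy distribution at each
    # wavelet-packet depth; flat spectra (noise) score high, tonal ones low.
    bands, feats = [np.asarray(x, float)], []
    for _ in range(levels):
        bands = [half for b in bands for half in haar_split(b)]
        p = np.array([np.sum(b ** 2) for b in bands])
        p = p / p.sum()
        p = p[p > 0]
        feats.append(float(-np.sum(p * np.log(p))))
    return np.array(feats)

rng = np.random.default_rng(1)
background = rng.normal(size=512)                          # broadband background
intrusion = np.sin(2 * np.pi * 8 * np.arange(512) / 512)   # narrowband event
ent_bg = wp_shannon_entropy(background)
ent_in = wp_shannon_entropy(intrusion)
```

Subtracting the background entropy map and keeping the most discriminative components would then yield the reduced vector passed to the RBF network.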
NASA Astrophysics Data System (ADS)
Ebrahimi Orimi, H.; Esmaeili, M.; Refahi Oskouei, A.; Mirhadizadehd, S. A.; Tse, P. W.
2017-10-01
Condition monitoring of rotary devices such as helical gears is an issue of great significance in industrial projects. This paper introduces a feature extraction method for gear fault diagnosis using the wavelet packet transform, owing to its higher frequency resolution. In this investigation, the mother wavelet Daubechies 10 (Db-10) was applied to calculate the coefficient entropy of each of the 32 frequency bands at the 5th level as features. The peak values of the signal entropies were selected as the applicable features in order to improve frequency-band differentiation and reduce the dimension of the feature vectors. Feature extraction is followed by a fusion network in which four multi-layer perceptron networks with different structures are trained to classify the recorded signals (healthy/faulty). The robustness of the fusion network outputs is greater than that of the individual perceptron networks. The results provided by the fusion network indicate classification accuracies of 98.88% and 97.95% for the healthy and faulty classes, respectively.
Zhang, Lingli; Zeng, Li; Guo, Yumeng
2018-01-01
Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are often incomplete, leading to a limited-angle reconstruction problem in which image quality suffers from slope artifacts. The objective of this study is first to investigate the distorted regions of reconstructed images affected by slope artifacts and then to present a new iterative reconstruction method for the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image to compensate for the distorted edges. Specifically, it uses l0 regularization and wavelet tight framelets to suppress the slope artifacts and promote sparsity. The new method comprises four steps: (1) enforce data fidelity using SART; (2) compensate for the slope artifacts caused by the missing projection data using the prior image and modified nonlocal means (PNLM); (3) apply l0 regularization to suppress the slope artifacts and promote the sparsity of the wavelet coefficients of the transformed image via iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical experiments showed that l0W-PNLM was superior in suppressing the slope artifacts while preserving feature edges compared with commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize distorted edges in the reconstructed images. Quantitative assessments also showed that the new method achieved the highest image quality among the compared algorithms.
This study demonstrated that the presented l0W-PNLM yields higher image quality owing to several characteristics: (1) it utilizes the structural similarity between the reconstructed image and the prior image to correct the edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain first and higher derivatives in several directions and at several levels; and (3) it takes advantage of l0 regularization to promote the sparsity of the wavelet coefficients, which is effective for suppressing the slope artifacts. The new method therefore addresses the limited-angle CT reconstruction problem effectively and has practical significance.
ECG compression using Slantlet and lifting wavelet transform with and without normalisation
NASA Astrophysics Data System (ADS)
Aggarwal, Vibha; Singh Patterh, Manjeet
2013-05-01
This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear and nonlinear transforms. The transformed coefficients (TC) are then thresholded using the bisection algorithm so as to match the predefined user-specified percentage root mean square difference (UPRD) within a tolerance. A binary look-up table is then built to store the position map of zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding, and the look-up table is encoded by Huffman coding. The results show that the LWT gives the best results of the transforms evaluated in this article, so it is used to evaluate the effect of normalisation before thresholding. In the normalised case, the TC are divided by ? (where ? is the number of samples) to reduce their range, and the normalised coefficients (NC) are then thresholded; thereafter the procedure is the same as without normalisation. The results show that the compression ratio (CR) with LWT and normalisation is improved compared with that without normalisation.
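The bisection search for a threshold meeting the user-specified PRD can be sketched as below. For an orthonormal transform the distortion can be computed directly in the coefficient domain; the decaying-coefficient test vector and the 5% target are illustrative assumptions, not values from the article:

```python
import numpy as np

def prd_for_threshold(c, t):
    # for an orthonormal transform, distortion equals the energy of the
    # coefficients that get zeroed, so PRD can be computed in-domain
    kept = np.where(np.abs(c) >= t, c, 0.0)
    return 100.0 * np.sqrt(np.sum((c - kept)**2) / np.sum(c**2))

def bisect_threshold(c, target_prd, tol=0.05, max_iter=60):
    # PRD grows monotonically with the threshold, so bisection applies
    lo, hi = 0.0, float(np.max(np.abs(c)))
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        p = prd_for_threshold(c, mid)
        if abs(p - target_prd) <= tol:
            return mid
        if p > target_prd:        # too much distortion: lower the threshold
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(1)
coeffs = rng.standard_normal(1024) * np.exp(-np.arange(1024) / 200.0)
t = bisect_threshold(coeffs, target_prd=5.0)
prd = prd_for_threshold(coeffs, t)
print(round(prd, 2))              # close to the 5% target
```

Because PRD is a monotone step function of the threshold, bisection converges to within one coefficient's energy of the target.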
Seismic Data Analysis through Multi-Class Classification.
NASA Astrophysics Data System (ADS)
Anderson, P.; Kappedal, R. D.; Magana-Zook, S. A.
2017-12-01
In this research, we conducted twenty experiments of varying time and frequency bands on 5000 seismic signals with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We used a multi-class approach by clustering the data through various techniques. Dimensionality reduction was examined through the use of wavelet transforms with the coiflet mother wavelet and various numbers of coefficients to explore possible computational-time versus accuracy dependencies. Three and four classes were generated from the clustering techniques and examined, with the three-class approach producing the most accurate and realistic results.
End-point detection in potentiometric titration by continuous wavelet transform.
Jakubowska, Małgorzata; Baś, Bogusław; Kubiak, Władysław W
2009-10-15
The aim of this work was the construction of a new wavelet function and verification that a continuous wavelet transform with a specially defined, dedicated mother wavelet is a useful tool for precise detection of the end-point in a potentiometric titration. The proposed algorithm does not require any initial information about the nature or type of analyte or the shape of the titration curve. Signal imperfections, as well as random noise or spikes, have no influence on the operation of the procedure. The new algorithm was optimized using simulated curves, and experimental data were then considered. For well-shaped and noise-free titration data, the proposed method gives the same accuracy and precision as commonly used algorithms; for noisy or badly shaped curves, the presented approach works well (relative error mainly below 2% and coefficients of variability below 5%) while traditional procedures fail. The proposed algorithm may therefore be useful in interpreting experimental data and in automating typical titration analysis, especially when random noise interferes with the analytical signal.
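One way to realize such end-point detection can be sketched under the assumption of a derivative-of-Gaussian mother wavelet (the article's dedicated wavelet is not reproduced here): the extremum of the CWT response coincides with the inflection of the sigmoid titration curve, i.e. the end-point.

```python
import numpy as np

def gauss_deriv_wavelet(scale, width=5):
    # first derivative of a Gaussian, sampled at the given scale
    t = np.arange(-width * scale, width * scale + 1) / scale
    return -t * np.exp(-t**2 / 2)

def cwt_endpoint(volume, potential, scales=(4, 8, 16), width=5):
    # accumulate scale-normalized CWT magnitude; the end-point (inflection
    # of the titration curve) gives the dominant response at every scale
    resp = np.zeros_like(potential)
    for s in scales:
        c = np.convolve(potential, gauss_deriv_wavelet(s, width), mode='same')
        resp += np.abs(c) / np.max(np.abs(c))
    m = width * max(scales)           # skip edges where the kernel is truncated
    return volume[m + np.argmax(resp[m:-m])]

v = np.linspace(0.0, 20.0, 401)       # titrant volume (hypothetical units)
rng = np.random.default_rng(2)
emf = 300.0 / (1 + np.exp(-4.0 * (v - 12.5))) + rng.normal(0, 2, v.size)
ep = cwt_endpoint(v, emf)
print(round(ep, 1))                   # close to the true end-point at 12.5
```

Averaging the normalized response over several scales is what gives the method its robustness to random noise and spikes.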
NASA Astrophysics Data System (ADS)
Lahmiri, Salim; Boukadoum, Mounir
2015-08-01
We present a new ensemble system for stock market return prediction in which the continuous wavelet transform (CWT) is used to analyze return series, and backpropagation neural networks (BPNNs) process the CWT-based coefficients, determine the optimal ensemble weights, and provide the final forecasts. Particle swarm optimization (PSO) is used to find the optimal weights and biases for each BPNN. To capture symmetry and asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on three Asian stock markets: Hang Seng, KOSPI, and Taiwan stock market data. Three statistical metrics were used to evaluate forecasting accuracy: mean absolute error (MAE), root mean squared error (RMSE), and mean absolute deviation (MAD). Experimental results showed that the proposed ensemble system outperformed the individual CWT-ANN models, each using a different wavelet function, as well as the conventional autoregressive moving average process. The proposed ensemble system is thus suitable for capturing symmetry and asymmetry in financial data fluctuations for better prediction accuracy.
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients, which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining it with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur with pre-trained VQ codebooks, and it eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. By using VWC in conjunction with FSVQ and lattice VQ, the formulation of a high-quality, very low bit rate coding system is therefore proposed. A coding system using a simple FSVQ scheme, in which the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented; the groupings in this tree-like structure run from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-IV testing evaluations are used to evaluate this coding method.
Detection of P300 waves in single trials by the wavelet transform (WT).
Demiralp, T; Ademoglu, A; Schürmann, M; Başar-Eroglu, C; Başar, E
1999-01-01
The P300 response is conventionally obtained by averaging the responses to the task-relevant (target) stimuli of the oddball paradigm. However, it is well known that cognitive ERP components show a high variability due to changes of cognitive state during an experimental session. With simple tasks such changes may not be demonstrable by the conventional method of averaging the sweeps chosen according to task-relevance. Therefore, the present work employed a response-based classification procedure to choose the trials containing the P300 component from the whole set of sweeps of an auditory oddball paradigm. For this purpose, the most significant response property reflecting the P300 wave was identified by using the wavelet transform (WT). The application of a 5 octave quadratic B-spline-WT on single sweeps yielded discrete coefficients in each octave with an appropriate time resolution for each frequency range. The main feature indicating a P300 response was the positivity of the 4th delta (0.5-4 Hz) coefficient (310-430 ms) after stimulus onset. The average of selected single sweeps from the whole set of data according to this criterion yielded more enhanced P300 waves compared with the average of the target responses, and the average of the remaining sweeps showed a significantly smaller positivity in the P300 latency range compared with the average of the non-target responses. The combination of sweeps classified according to the task-based and response-based criteria differed significantly. This suggests an influence of changes in cognitive state on the presence of the P300 wave which cannot be assessed by task performance alone. Copyright 1999 Academic Press.
Sudarshan, Vidya K; Acharya, U Rajendra; Oh, Shu Lih; Adam, Muhammad; Tan, Jen Hong; Chua, Chua Kuang; Chua, Kok Poo; Tan, Ru San
2017-04-01
Identification of alarming features in the electrocardiogram (ECG) signal is extremely significant for the prediction of congestive heart failure (CHF). ECG signal analysis carried out using computer-aided techniques can speed up the diagnosis process and aid in the proper management of CHF patients. In this work, a dual-tree complex wavelet transform (DTCWT)-based methodology is therefore proposed for automated discrimination of CHF ECG signals from normal ones. In the experiment, we performed a DTCWT on ECG segments of 2 s duration up to six levels to obtain the coefficients. From these DTCWT coefficients, statistical features are extracted and ranked using Bhattacharyya, entropy, minimum redundancy maximum relevance (mRMR), receiver-operating characteristic (ROC), Wilcoxon, t-test and reliefF methods. Ranked features are fed to k-nearest neighbor (KNN) and decision tree (DT) classifiers for automated differentiation of CHF and normal ECG signals. We achieved 99.86% accuracy, 99.78% sensitivity and 99.94% specificity in the identification of CHF-affected ECG signals using 45 features. The proposed method detects CHF patients accurately using only 2 s of ECG signal, providing sufficient time for clinicians to investigate the severity of CHF and treatments further. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chiariotti, P.; Martarelli, M.; Revel, G. M.
2017-12-01
A novel non-destructive testing procedure for delamination detection is presented, based on the simultaneous time and spatial sampling provided by Continuous Scanning Laser Doppler Vibrometry (CSLDV) and the feature extraction capability of multi-level wavelet-based processing. The processing procedure consists of a multi-step approach. Once the optimal mother wavelet is selected from the mother-wavelet space as the one maximizing the energy-to-Shannon-entropy ratio criterion, a pruning operation is performed to identify the best combination of nodes within the full binary tree given by Wavelet Packet Decomposition (WPD). The pruning algorithm combines, in a two-step way, a measure of the randomness of the point-pattern distribution on the damage map space with an analysis of the energy concentration of the wavelet coefficients on the nodes provided by the first pruning step. Combining the point-pattern distributions provided by each node of the ensemble node set from the pruning algorithm yields a Damage Reliability Index associated with the final damage map. The effectiveness of the whole approach is proven on both simulated and real test cases. A sensitivity analysis of the influence of noise on the CSLDV signal provided to the algorithm is also discussed, showing that the processing is sufficiently robust to measurement noise. The method is promising: damage is well identified on different materials and for different damage-structure combinations.
Wavelet pressure reactivity index: A validation study.
Liu, Xiuyun; Czosnyka, Marek; Donnelly, Joseph; Cardim, Danilo; Cabeleira, Manuel; Hutchinson, Peter J; Hu, Xiao; Smielewski, Peter; Brady, Ken
2018-04-17
The brain is vulnerable to damage from too little or too much blood flow. A physiological mechanism called cerebral autoregulation (CA) exists to maintain stable blood flow even when cerebral perfusion pressure (CPP) is changing. A robust method for assessing CA is not yet available, and there are still problems with the traditional measure, the pressure reactivity index (PRx). We introduce a new wavelet transform method (wPRx) to assess CA using data from two sets of controlled hypotension experiments in piglets: one with artificially manipulated arterial blood pressure (ABP) oscillations, the other with spontaneous ABP waves. A significant linear relationship was found between wPRx and PRx in both groups, with wPRx giving a more stable result for the spontaneous waves. Although both methods showed similar accuracy in distinguishing intact and impaired CA, wPRx tended to perform better than PRx, though not significantly. We thus present a novel method to monitor cerebral autoregulation using the wavelet transform (WT), validated against PRx in two piglet experiments with controlled hypotension. The first experiment (n = 12) involved controlled haemorrhage with artificial stationary ABP and intracranial pressure (ICP) oscillations induced by sinusoidal slow changes in positive end-expiratory pressure ('PEEP group'). The second experiment (n = 17) involved venous balloon inflation during spontaneous, non-stationary ABP and ICP oscillations ('non-PEEP group'). The wavelet transform phase shift (WTP) between ABP and ICP was calculated in the frequency range 0.0067-0.05 Hz. Wavelet semblance, the cosine of WTP, was used to make the values comparable to PRx, and the new index was termed the wavelet pressure reactivity index (wPRx). The traditional PRx, the running correlation coefficient between ABP and ICP, was also calculated.
The results showed a significant linear relationship between wPRx and PRx in the PEEP group (R = 0.88) and the non-PEEP group (R = 0.56). In the non-PEEP group, wPRx showed better performance than PRx in distinguishing CPP above and below the lower limit of autoregulation (LLA). When CPP decreased below the LLA, wPRx increased from 0.43 ± 0.28 to 0.69 ± 0.12 (p = 0.003) while PRx increased from 0.07 ± 0.21 to 0.27 ± 0.37 (p = 0.04). Moreover, wPRx gave a more stable result than PRx (SD of PRx was 0.40 ± 0.07, SD of wPRx was 0.28 ± 0.11, p = 0.001). Assessment of CA using the wavelet-derived phase shift between ABP and ICP is feasible. This article is protected by copyright. All rights reserved.
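The core of wPRx, a wavelet phase shift between ABP and ICP folded into a semblance, can be sketched with a complex Morlet atom. The 0.02 Hz analysis frequency, the 6-cycle atom, and the synthetic sinusoidal signals are illustrative assumptions: in-phase ICP mimics passive (impaired) transmission, while a 90-degree shift mimics active regulation.

```python
import numpy as np

def morlet(freq, fs, n_cycles=6):
    # complex Morlet atom centred at the analysis frequency
    dur = n_cycles / freq
    t = np.arange(-dur / 2, dur / 2, 1 / fs)
    sigma = n_cycles / (2 * np.pi * freq)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))

def wprx(abp, icp, fs, freq=0.02):
    # wavelet phase shift between ABP and ICP, folded into a semblance
    w = morlet(freq, fs)
    ca = np.convolve(abp, w, mode='same')
    ci = np.convolve(icp, w, mode='same')
    phase_shift = np.angle(ca * np.conj(ci))
    return float(np.mean(np.cos(phase_shift)))   # in [-1, 1], like PRx

fs = 1.0                                  # 1 Hz sampling suffices for slow waves
t = np.arange(0, 3600, 1 / fs)            # one hour of data
abp = np.sin(2 * np.pi * 0.02 * t)
icp_active = np.sin(2 * np.pi * 0.02 * t + np.pi / 2)   # 90 deg shift: intact CA
icp_passive = np.sin(2 * np.pi * 0.02 * t)              # in phase: impaired CA
print(round(wprx(abp, icp_passive, fs), 2))   # 1.0 (fully passive transmission)
print(round(wprx(abp, icp_active, fs), 2))    # near 0 for intact regulation
```

Values near +1 indicate passive pressure transmission (impaired CA), while values near 0 or below indicate active regulation, mirroring the interpretation of PRx.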
Acharya, Rajendra; Tan, Peck Ha; Subramaniam, Tavintharan; Tamura, Toshiyo; Chua, Kuang Chua; Goh, Seach Chyr Ernest; Lim, Choo Min; Goh, Shu Yi Diana; Chung, Kang Rui Conrad; Law, Chelsea
2008-02-01
Diabetes is a disorder of metabolism, the way our bodies use digested food for growth and energy; the most common form is Type 2 diabetes. Abnormal plantar pressures are considered to play a major role in the pathologies of neuropathic ulcers in the diabetic foot. The purpose of this study was to examine the plantar pressure distribution in normal subjects and in Type 2 diabetic subjects with and without neuropathy. Foot scans were obtained using the F-scan (Tekscan, USA) pressure measurement system. Discrete wavelet coefficients were evaluated from the foot images: parameters were extracted using the discrete wavelet transform (DWT) and presented to a Gaussian mixture model (GMM) and a four-layer feed-forward neural network for classification. We demonstrated a sensitivity of 100% and a specificity of more than 85% for the classifiers.
Wavelet denoising of multiframe optical coherence tomography data
Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2012-01-01
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or on denoising the final averaged image. Instead it uses wavelet decompositions of the single frames for local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, measured as a full-width-at-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing for retinal structure segmentation, to allow better differentiation between real tissue information and unwanted speckle noise.
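The idea of weighting detail coefficients by cross-frame agreement before averaging can be sketched in one dimension with a Haar transform. The weighting rule (coefficient mean measured against cross-frame scatter), the step-edge phantom, and the noise level are all assumptions for illustration; the paper's 2D estimator is more elaborate.

```python
import numpy as np

def haar_analysis(x):
    # one-level Haar analysis
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_synthesis(a, d):
    # exact inverse of haar_analysis
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def multiframe_denoise(frames):
    # analyse each frame, then shrink detail coefficients where the frames
    # disagree (likely speckle) before averaging and reconstructing
    aa, dd = zip(*[haar_analysis(f) for f in frames])
    a_mean = np.mean(aa, axis=0)
    d = np.asarray(dd)
    mu, sd = d.mean(axis=0), d.std(axis=0)
    weight = np.abs(mu) / (np.abs(mu) + sd + 1e-12)  # structure vs noise
    return haar_synthesis(a_mean, weight * mu)

rng = np.random.default_rng(3)
clean = np.concatenate([np.zeros(64), np.full(64, 4.0)])  # one sharp edge
frames = [clean + 0.5 * rng.standard_normal(128) for _ in range(8)]
den = multiframe_denoise(frames)
plain = np.mean(frames, axis=0)
err_plain = np.sqrt(np.mean((plain - clean)**2))
err_wav = np.sqrt(np.mean((den - clean)**2))
print(err_wav < err_plain)   # True: shrinking noise-dominated details helps
```

Because the weight is always at most 1, the weighted average can never add more detail-band error than plain frame averaging, which is why the wavelet-domain result is at least as good here.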
Wavelet Based Protection Scheme for Multi Terminal Transmission System with PV and Wind Generation
NASA Astrophysics Data System (ADS)
Manju Sree, Y.; Goli, Ravi kumar; Ramaiah, V.
2017-08-01
A hybrid generation system is a part of a large power system in which a number of sources, usually attached to power electronic converters, and clustered loads can operate independently of the main power system. Designing the protection scheme against faults is crucial, since traditional overcurrent protection faces considerable problems from the fault-current levels in this mode of operation. This paper adopts a new approach for the detection and discrimination of faults for multi-terminal transmission line protection in the presence of hybrid generation. A transient-current-based protection scheme is developed with the discrete wavelet transform. Fault indices of all phase currents at all terminals are obtained by analyzing the detail coefficients of the current signals using the bior1.5 mother wavelet. The scheme is tested for different fault types and is found effective for detection and discrimination of faults over a range of fault inception angles and fault impedances.
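A minimal version of such a detail-coefficient fault index can be sketched as follows. Haar is used instead of the paper's bior1.5 to keep the sketch dependency-free, and the 1 kHz decaying transient is a synthetic stand-in for a fault record; both are assumptions, not the authors' setup.

```python
import numpy as np

def detail_coeffs(x):
    # level-1 detail of a Haar DWT (the paper uses bior1.5; Haar is an
    # assumption that avoids external wavelet libraries)
    return (x[0::2] - x[1::2]) / np.sqrt(2)

def fault_index(current):
    # fault index: energy of the high-frequency detail coefficients
    return float(np.sum(detail_coeffs(current) ** 2))

fs = 10_000
t = np.arange(0, 0.1, 1 / fs)                    # 100 ms window
normal = 100 * np.sin(2 * np.pi * 50 * t)        # healthy 50 Hz phase current
fault = normal.copy()
tf = t[500:] - t[500]                            # transient starts at 50 ms
fault[500:] += 150 * np.exp(-200 * tf) * np.sin(2 * np.pi * 1000 * tf)

idx_normal, idx_fault = fault_index(normal), fault_index(fault)
print(idx_fault > 5 * idx_normal)   # True: the faulted phase stands out clearly
```

A detection rule would then compare each phase's index against a threshold calibrated on healthy operation, which is essentially what the per-terminal fault indices above enable.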
Pattern recognition by wavelet transforms using macro fibre composites transducers
NASA Astrophysics Data System (ADS)
Ruiz de la Hermosa González-Carrato, Raúl; García Márquez, Fausto Pedro; Dimlaye, Vichaar; Ruiz-Hernández, Diego
2014-10-01
This paper presents a novel pattern recognition approach for non-destructive testing of pipes based on macro fibre composite transducers. A fault detection and diagnosis (FDD) method is employed to extract relevant information from ultrasound signals by a wavelet decomposition technique. The wavelet transform is a powerful tool that reveals particular characteristics such as trends or breakdown points. The FDD developed for the case study provides information about the temperatures on the surfaces of the pipe, making it possible to monitor faults associated with cracks, leaks or corrosion. Such faults may not be noticeable while temperatures are free of sudden changes, but they can cause structural problems in the medium and long term. The case study is completed by a statistical method based on the coefficient of determination, whose main purpose is to predict future behaviour in order to set alarm levels as part of a structural health monitoring system.
Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L
2008-01-01
This paper presents a system to support medical diagnosis and the detection of abnormal lesions by processing capsule endoscopy images. Endoscopic images possess rich information expressed by texture, which can be efficiently extracted from the medium scales of the wavelet transform. The set of features proposed in this paper to code textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second-order textural measures, and an optimum subset of them is proposed. Third- and fourth-order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a classifier based on a radial basis function procedure for characterizing the image regions along the video frames. The whole methodology was applied to real data comprising 6 full endoscopic exams and reached 95% specificity and 93% sensitivity.
Hierarchical Diagnosis of Vocal Fold Disorders
NASA Astrophysics Data System (ADS)
Nikkhah-Bahrami, Mansour; Ahmadi-Noubari, Hossein; Seyed Aghazadeh, Babak; Khadivi Heris, Hossein
This paper explores the use of a hierarchical structure for the diagnosis of vocal fold disorders. The hierarchical structure is initially used to train different second-level classifiers. At the first level, normal and pathological signals are distinguished; next, pathological signals are classified into neurogenic and organic vocal fold disorders; at the final level, vocal fold nodules are distinguished from polyps within the organic disorders category. For feature selection at each level of the hierarchy, the reconstructed signal in each wavelet packet decomposition sub-band over 5 levels of decomposition with the db10 mother wavelet is used to extract the nonlinear features of self-similarity and approximate entropy. Wavelet packet coefficients are also used to measure energy and Shannon entropy features in the different spectral sub-bands. The Davies-Bouldin criterion is employed to find the most discriminant features. Finally, support vector machines are adopted as classifiers at each level of the hierarchy, resulting in a diagnostic accuracy of 92%.
Twofold processing for denoising ultrasound medical images.
Kishore, P V V; Kumar, K V V; Kumar, D Anil; Prasad, M V D; Goutham, E N D; Rahul, R; Krishna, C B S Vamsi; Sandeep, Y
2015-01-01
Medical ultrasound (US) imaging non-invasively pictures the inside of the human body for disease diagnostics, but speckle noise degrades the visual quality of US images. A twofold processing algorithm is proposed in this work to reduce this multiplicative speckle noise. The first fold applies block-based thresholding, both hard (BHT) and soft (BST), to wavelet-domain coefficients with non-overlapping block sizes of 8, 16, 32 and 64. This first fold reduces speckle effectively but also blurs the object of interest. The second fold restores object boundaries and texture with adaptive wavelet fusion: the degraded object in the block-thresholded US image is restored through wavelet-coefficient fusion of the object in the original US image and the block-thresholded US image. Fusion rules and wavelet decomposition levels are made adaptive for each block using gradient histograms with a normalized differential mean (NDF) to introduce the highest level of contrast between the denoised pixels and the object pixels in the resultant image. The proposed twofold methods are thus named adaptive NDF block fusion with hard and soft thresholding (ANBF-HT and ANBF-ST). The results indicate a marked visual quality improvement with the proposed twofold processing, where the first fold removes noise and the second fold restores object properties. Peak signal-to-noise ratio (PSNR), normalized cross-correlation coefficient (NCC), edge strength (ES), image quality index (IQI) and structural similarity index (SSIM) measure the quantitative quality of the twofold processing technique. The proposed method is validated by comparison with anisotropic diffusion (AD), total variation filtering (TVF) and empirical mode decomposition (EMD) for the enhancement of US images. The US images were provided by the AMMA hospital radiology labs at Vijayawada, India.
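Block-based hard and soft thresholding in the wavelet domain, the first fold described above, can be sketched as below. The per-block universal threshold with a median-based noise estimate is a common textbook choice, assumed here rather than taken from the paper:

```python
import numpy as np

def hard_threshold(c, t):
    # hard: keep or kill (BHT)
    return np.where(np.abs(c) >= t, c, 0.0)

def soft_threshold(c, t):
    # soft: shrink toward zero (BST)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def blockwise_threshold(coeffs, block=8, mode='soft'):
    # threshold each non-overlapping block with its own universal threshold,
    # using a median-based noise estimate local to the block
    out = np.empty_like(coeffs)
    f = hard_threshold if mode == 'hard' else soft_threshold
    for i in range(0, coeffs.size, block):
        blk = coeffs[i:i + block]
        sigma = np.median(np.abs(blk)) / 0.6745
        t = sigma * np.sqrt(2.0 * np.log(blk.size))
        out[i:i + block] = f(blk, t)
    return out

rng = np.random.default_rng(4)
c = rng.standard_normal(64)
c[10] = 8.0                       # one strong "object" coefficient in noise
h = blockwise_threshold(c, mode='hard')
s = blockwise_threshold(c, mode='soft')
print(h[10], round(s[10], 2))     # hard keeps 8.0 intact, soft shrinks it
```

The hard variant preserves the amplitude of strong coefficients while the soft variant biases them toward zero, which is exactly the trade-off between the BHT and BST branches of the first fold.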
NASA Astrophysics Data System (ADS)
Li, Duan; Li, Xiaoli; Hagihira, Satoshi; Sleigh, Jamie W.
2011-10-01
Bicoherence quantifies the degree of quadratic phase coupling among different frequency components within a signal. Previous studies, using Fourier-based methods of bicoherence calculation (FBIC), have demonstrated that electroencephalographic bicoherence can be related to the end-tidal concentration of inhaled anesthetic drugs. However, FBIC methods require excessively long sections of the electroencephalogram; this problem might be overcome by wavelet-based methods. In this study, we compare FBIC and a recently developed wavelet bicoherence (WBIC) method as tools to quantify the effect of isoflurane on the electroencephalogram. We analyzed a set of previously published electroencephalographic data obtained from 29 patients who underwent elective abdominal surgery under isoflurane general anesthesia combined with epidural anesthesia. Nine potential indices of the electroencephalographic anesthetic effect were obtained from the WBIC and FBIC techniques. The relationship between each index and the end-tidal concentration of isoflurane was evaluated using correlation coefficients (r), the inter-individual variation (CV) of index values, the coefficient of determination (R2) of the PKPD models and the prediction probability (PK). The WBIC-based indices tracked anesthetic effects better than the traditional FBIC-based ones. The DiagBic_En index (derived from the Shannon entropy of the diagonal bicoherence values) performed best [r = 0.79 (0.66-0.92), CV = 0.08 (0.05-0.12), R2 = 0.80 (0.75-0.85), PK = 0.79 (0.75-0.83)]. Short data segments of ~10-30 s were sufficient to reliably calculate the WBIC indices. Wavelet-based bicoherence thus has advantages over traditional Fourier-based bicoherence in analyzing volatile anesthetic effects on the electroencephalogram.
Abbasi Tarighat, Maryam
2016-02-01
Simultaneous spectrophotometric determination of a mixture of overlapped complexes of Fe(3+), Mn(2+), Cu(2+) and Zn(2+) ions with 2-(3-hydroxy-1-phenyl-but-2-enylideneamino)pyridine-3-ol (HPEP) by an orthogonal projection approach-feed-forward neural network (OPA-FFNN) and a continuous wavelet transform-feed-forward neural network (CWT-FFNN) is discussed. Complexes with HPEP were formed under varying reagent concentration, pH and color-formation time to ensure completion of the complexation reactions; at 5.0 × 10(-4) mol L(-1) HPEP, pH 9.5 and 10 min after mixing, the complexation reactions were complete. The spectral data were analyzed using partial response plots, and the identified non-linearity was modeled using an FFNN. The number of OPA-FFNN and CWT-FFNN inputs was reduced using the dissimilarity pure spectra of OPA and selected wavelet coefficients. Once the pure dissimilarity plots and optimal wavelet coefficients were selected, different ANN models were employed to compute the final calibration models. The performance of the two approaches was tested with regard to root mean square error of prediction (RMSE %) values using synthetic solutions. Under the working conditions, the proposed methods were successfully applied to the simultaneous determination of the metal ions in different vegetable and foodstuff samples, showing that OPA-FFNN and CWT-FFNN are effective for the simultaneous determination of Fe(3+), Mn(2+), Cu(2+) and Zn(2+) concentrations. The concentrations of the metal ions in the samples were also determined by flame atomic absorption spectrometry (FAAS), and the amounts obtained by the proposed methods were in good agreement with those obtained by FAAS. Copyright © 2015 Elsevier Ltd. All rights reserved.
Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth
NASA Astrophysics Data System (ADS)
Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana
2017-10-01
In this paper, generalized Haar wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. An important role in the solution of this problem is played by classes of functions introduced and described in detail by I.M. Sobol for studying multidimensional quadrature formulas; these classes contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of the generalized Haar wavelet series of a function from this class with approximate coefficients is proved. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth provides imagery of medium and high spatial resolution from spacecraft and supports hyperspectral measurements, with spacecraft carrying tens or hundreds of spectral channels. To process the images, discrete orthogonal transforms, namely wavelet transforms, were used. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and subsequently to process the satellite images through discrete orthogonal transformations, in particular generalized Haar wavelet transforms. General methods of research: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images. Scientific novelty: the processing of archival satellite images, in particular signal filtering, was investigated from the point of view of an ill-posed problem, and the regularization parameters for discrete orthogonal transformations were determined.
Skill of Generalized Additive Model to Detect PM2.5 Health ...
Summary. Measures of health outcomes are collinear with meteorology and air quality, making analysis of connections between human health and air quality difficult. The purpose of this analysis was to determine the time scales and periods shared by the variables of interest (and, by implication, the scales and periods that are not shared). Hospital admissions, meteorology (temperature and relative humidity), and air quality (PM2.5 and daily maximum ozone) for New York City during the period 2000-2006 were decomposed into temporal scales ranging from 2 days to greater than two years using a complex wavelet transform. Health effects were modeled as functions of the wavelet components of meteorology and air quality using the generalized additive model (GAM) framework. This simulation study showed that GAM is extremely successful at extracting and estimating a health effect embedded in a dataset. It also showed that, if the objective is to estimate the health signal rather than to fully explain it, a simple GAM model with a single confounder (calendar time) whose smooth representation includes a sufficient number of constraints is as good as a more complex model. Introduction. In the context of wavelet regression, confounding occurs when two or more independent variables interact with the dependent variable at the same frequency. Confounding also acts on a variety of time scales, changing the PM2.5 coefficient (magnitude and sign) and its significance.
Single Trial EEG Patterns for the Prediction of Individual Differences in Fluid Intelligence.
Qazi, Emad-Ul-Haq; Hussain, Muhammad; Aboalsamh, Hatim; Malik, Aamir Saeed; Amin, Hafeez Ullah; Bamatraf, Saeed
2016-01-01
Assessing a person's intelligence level is required in many situations, such as career counseling and clinical applications. EEG evoked potentials in an oddball task and fluid intelligence scores are correlated because both reflect cognitive processing and attention. A system for predicting an individual's fluid intelligence level from single-trial Electroencephalography (EEG) signals has been proposed. For this purpose, we employed 2D and 3D contents, with 34 subjects for each, divided into low-ability (LA) and high-ability (HA) groups using Raven's Advanced Progressive Matrices (RAPM) test. Using a visual oddball cognitive task, the neural activity of each group was measured and analyzed over three midline electrodes (Fz, Cz, and Pz). To predict whether an individual belongs to the LA or HA group, features were extracted using wavelet decomposition of EEG signals recorded in the visual oddball task, and a support vector machine (SVM) was used as a classifier. Two different types of Haar wavelet transform based features were extracted from the band (0.3 to 30 Hz) of EEG signals. Statistical wavelet features and wavelet coefficient features from the frequency bands 0.0-1.875 Hz (delta low) and 1.875-3.75 Hz (delta high) yielded prediction accuracies of 100% and 98%, respectively, for both 2D and 3D contents. The analysis of these frequency bands showed a clear difference between the LA and HA groups. Further, the discriminative values of the features were validated using statistical significance tests and inter-class and intra-class variation analysis. A statistical test also showed that there was no effect of 2D versus 3D content on the assessment of fluid intelligence level. Comparisons with state-of-the-art techniques showed the superiority of the proposed system.
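The feature-extraction stage described above can be illustrated with a small numpy-only sketch: a multilevel Haar decomposition followed by simple statistical features (mean, standard deviation, energy) per subband. This is an illustrative sketch only, not the authors' pipeline — the decomposition depth and feature set here are assumptions, and the band filtering and SVM stages are omitted.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the orthonormal Haar DWT: approximation and detail."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                           # pad odd-length input
        s = np.append(s, s[-1])
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # low-pass (approximation)
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # high-pass (detail)
    return a, d

def wavelet_features(signal, levels=4):
    """Statistical features (mean, std, energy) of each wavelet subband."""
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats += [d.mean(), d.std(), np.sum(d ** 2)]
    feats += [a.mean(), a.std(), np.sum(a ** 2)]   # final approximation band
    return np.array(feats)
```

Because the transform is orthonormal, total energy is preserved across a decomposition step, which makes the per-band energies directly comparable.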
Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain
Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan
2014-01-01
Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled Contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress the pseudo-Gibbs phenomena. The source medical images are initially transformed by NSCT, followed by fusing the low- and high-frequency components. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently determine the frequency coefficients from the clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual tree complex wavelet transform (DTCWT) based image fusion methods and other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method can obtain more effective and accurate fusion results of multimodal medical images than other algorithms. Further, the applicability of the proposed method has been demonstrated by carrying out a clinical example on images of a woman affected by a recurrent tumor. PMID:25214889
NASA Technical Reports Server (NTRS)
Trejo, Leonard J.; Shensa, Mark J.; Remington, Roger W. (Technical Monitor)
1998-01-01
This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance.
NASA Technical Reports Server (NTRS)
Trejo, L. J.; Shensa, M. J.
1999-01-01
This report describes the development and evaluation of mathematical models for predicting human performance from discrete wavelet transforms (DWT) of event-related potentials (ERP) elicited by task-relevant stimuli. The DWT was compared to principal components analysis (PCA) for representation of ERPs in linear regression and neural network models developed to predict a composite measure of human signal detection performance. Linear regression models based on coefficients of the decimated DWT predicted signal detection performance with half as many free parameters as comparable models based on PCA scores. In addition, the DWT-based models were more resistant to model degradation due to over-fitting than PCA-based models. Feed-forward neural networks were trained using the backpropagation algorithm to predict signal detection performance based on raw ERPs, PCA scores, or high-power coefficients of the DWT. Neural networks based on high-power DWT coefficients trained with fewer iterations, generalized to new data better, and were more resistant to overfitting than networks based on raw ERPs. Networks based on PCA scores did not generalize to new data as well as either the DWT network or the raw ERP network. The results show that wavelet expansions represent the ERP efficiently and extract behaviorally important features for use in linear regression or neural network models of human performance. The efficiency of the DWT is discussed in terms of its decorrelation and energy compaction properties. In addition, the DWT models provided evidence that a pattern of low-frequency activity (1 to 3.5 Hz) occurring at specific times and scalp locations is a reliable correlate of human signal detection performance. Copyright 1999 Academic Press.
Chitchian, Shahab; Fiddy, Michael; Fried, Nathaniel M
2008-01-01
Preservation of the cavernous nerves during prostate cancer surgery is critical to maintaining sexual function after surgery. Optical coherence tomography (OCT) of the prostate nerves has recently been studied for potential use in nerve-sparing prostate surgery. In this study, the discrete wavelet transform and complex dual-tree wavelet transform are implemented for wavelet shrinkage denoising in OCT images of the rat prostate. Applying the complex dual-tree wavelet transform provides improved results for speckle noise reduction in the OCT prostate image. Image quality metrics of the cavernous nerves and the signal-to-noise ratio (SNR) were improved significantly using this complex wavelet denoising technique.
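Wavelet shrinkage denoising of the kind evaluated here follows a standard decompose / threshold / reconstruct pattern. Below is a minimal 1-D sketch with a Haar filter bank and the universal (VisuShrink-style) soft threshold; it is an assumption-laden simplification — the paper works on 2-D OCT images with a complex dual-tree transform, neither of which is reproduced.

```python
import numpy as np

def haar_fwd(x):
    """One orthonormal Haar analysis step."""
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_inv(a, d):
    """Inverse of haar_fwd: interleave reconstructed even/odd samples."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(signal, levels=3):
    """Wavelet shrinkage: decompose, soft-threshold details, reconstruct."""
    a, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        a, d = haar_fwd(a)
        details.append(d)
    # noise level estimated from the finest detail band (median/0.6745),
    # then the universal threshold sigma * sqrt(2 log N)
    sigma = np.median(np.abs(details[0])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(signal)))
    for d in reversed(details):              # coarsest level first
        a = haar_inv(a, soft(d, t))
    return a
```

The signal length must be divisible by 2**levels; production code would handle padding and, as in the paper, a shift-invariant transform to suppress blocking artifacts.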
Constructing storyboards based on hierarchical clustering analysis
NASA Astrophysics Data System (ADS)
Hasebe, Satoshi; Sami, Mustafa M.; Muramatsu, Shogo; Kikuchi, Hisakazu
2005-07-01
There are growing needs for quick preview of video contents, both to improve the accessibility of video archives and to reduce network traffic. In this paper, a storyboard that contains a user-specified number of keyframes is produced from a given video sequence. It is based on hierarchical cluster analysis of feature vectors that are derived from wavelet coefficients of video frames. Consistent reuse of the extracted feature vectors is the key to avoiding repeated computationally-intensive parsing of the same video sequence. Experimental results suggest that a significant reduction in computational time is gained by this strategy.
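The clustering step can be illustrated with a small numpy-only sketch: greedy agglomerative merging of frame feature vectors down to a user-specified number of clusters, then one representative frame (keyframe) per cluster. The centroid linkage and Euclidean metric are assumptions, and the wavelet-based feature extraction the paper uses is omitted.

```python
import numpy as np

def keyframes(features, k):
    """Agglomerative clustering of frame feature vectors (rows of
    `features`) down to k clusters; returns one representative frame
    index per cluster (the frame nearest each cluster centroid)."""
    features = np.asarray(features, dtype=float)
    clusters = [[i] for i in range(len(features))]
    cents = [features[i].copy() for i in range(len(features))]
    while len(clusters) > k:
        # find and merge the two closest clusters (centroid distance)
        best, pair = np.inf, None
        for i in range(len(cents)):
            for j in range(i + 1, len(cents)):
                dist = np.linalg.norm(cents[i] - cents[j])
                if dist < best:
                    best, pair = dist, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)
        cents[i] = features[clusters[i]].mean(axis=0)
        cents.pop(j)
    reps = []
    for members, c in zip(clusters, cents):
        m = np.asarray(members)
        reps.append(int(m[np.argmin(np.linalg.norm(features[m] - c, axis=1))]))
    return sorted(reps)
```

The O(n^3) pairwise search is fine for the few hundred shots of a typical sequence; a production version would maintain a priority queue of pairwise distances.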
Face recognition by applying wavelet subband representation and kernel associative memory.
Zhang, Bai-Ling; Zhang, Haihong; Ge, Shuzhi Sam
2004-01-01
In this paper, we propose an efficient face recognition scheme which has two features: 1) representation of face images by two-dimensional (2-D) wavelet subband coefficients and 2) recognition by a modular, personalised classification method based on kernel associative memory models. Compared to PCA projections and low resolution "thumb-nail" image representations, wavelet subband coefficients can efficiently capture substantial facial features while keeping computational complexity low. As there are usually very limited samples, we constructed an associative memory (AM) model for each person and proposed to improve the performance of AM models by kernel methods. Specifically, we first applied kernel transforms to each possible pair of training face samples and then mapped the high-dimensional feature space back to the input space. Our scheme of using modular autoassociative memory for face recognition is inspired by the same motivation as using autoencoders for optical character recognition (OCR), for which the advantages have been proven. With associative memory, all the prototypical faces of one particular person are used to reconstruct themselves, and the reconstruction error for a probe face image is used to decide whether the probe face is from the corresponding person. We carried out extensive experiments on three standard face recognition datasets, the FERET data, the XM2VTS data, and the ORL data. Detailed comparisons with earlier published results are provided, and our proposed scheme offers better recognition accuracy on all of the face datasets.
NASA Astrophysics Data System (ADS)
Ibáñez, Flor; Baltazar, Arturo; Mijarez, Rito; Aranda, Jorge
2015-03-01
Multiwire cables are widely used in important civil structures. Since they are exposed to several dynamic and static loads, their structural health can be compromised. The cables can also be subjected to mechanical contact, tension and energy propagation, in addition to changes in size and material within their wires. Due to the critical role played by multiwire cables, it is necessary to develop a non-destructive health monitoring method to maintain their structure and proper performance. Ultrasonic inspection using guided waves is a promising non-destructive damage monitoring technique for rods, single wires and multiwire cables. The propagated guided waves are composed of an infinite number of vibrational modes, making their analysis difficult. In this work, an entropy-based method to identify small changes in non-stationary signals is proposed. A system to capture and post-process acoustic signals is implemented. The Discrete Wavelet Transform (DWT) is computed in order to obtain the reconstructed wavelet coefficients of the signals and to analyze the energy at different scales. The feasibility of using the concept of entropy evolution of non-stationary signals to detect damage in multiwire cables is evaluated. The results show that there is a high correlation between the entropy value and the damage level of the cable. The proposed method has low sensitivity to noise and reduces the computational complexity found in a typical time-frequency analysis.
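An entropy measure over wavelet-scale energies, of the general kind used here, can be sketched as follows: decompose with a Haar filter bank, compute the energy per scale, and take Shannon's entropy of the relative energy distribution. The Haar wavelet and this particular entropy definition are assumptions; the paper's exact formulation is not reproduced.

```python
import numpy as np

def haar_step(x):
    """One orthonormal Haar analysis step (approximation, detail)."""
    return ((x[0::2] + x[1::2]) / np.sqrt(2.0),
            (x[0::2] - x[1::2]) / np.sqrt(2.0))

def wavelet_entropy(signal, levels=5):
    """Shannon entropy (nats) of the relative wavelet energy per scale.
    Low for signals concentrated at one scale; high when energy is
    spread across scales, as broadband damage transients tend to be."""
    a = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))          # residual approximation band
    p = np.array(energies) / np.sum(energies)
    p = p[p > 0]                             # 0 * log 0 := 0
    return float(-np.sum(p * np.log(p)))
```

A smooth baseline signal concentrates its energy in the coarse approximation band (entropy near zero), while a broadband signal spreads it across all scales.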
Multiscale computations with a wavelet-adaptive algorithm
NASA Astrophysics Data System (ADS)
Rastigejev, Yevgenii Anatolyevich
A wavelet-based adaptive multiresolution algorithm for the numerical solution of multiscale problems governed by partial differential equations is introduced. The main features of the method include fast algorithms for the calculation of wavelet coefficients and approximation of derivatives on nonuniform stencils. The connection between the wavelet order and the size of the stencil is established. The algorithm is based on mathematically well-established wavelet theory. This allows us to provide error estimates of the solution, which are used in conjunction with an appropriate threshold criterion to adapt the collocation grid. Efficient data structures for grid representation, as well as related computational algorithms to support the grid rearrangement procedure, are developed. The algorithm is applied to the simulation of phenomena described by the Navier-Stokes equations. First, we undertake the study of the ignition and subsequent viscous detonation of a H2 : O2 : Ar mixture in a one-dimensional shock tube. Subsequently, we apply the algorithm to solve the two- and three-dimensional benchmark problem of incompressible flow in a lid-driven cavity at large Reynolds numbers. For these cases we show that solutions of comparable accuracy to the benchmarks are obtained with more than an order of magnitude reduction in degrees of freedom. The simulations show the striking ability of the algorithm to adapt to a solution having different scales at different spatial locations so as to produce accurate results at a relatively low computational cost.
Discrete wavelet transform: a tool in smoothing kinematic data.
Ismail, A R; Asfour, S S
1999-03-01
Motion analysis systems typically introduce noise to the displacement data recorded. Butterworth digital filters have been used to smooth the displacement data in order to obtain smoothed velocities and accelerations. However, this technique does not yield satisfactory results, especially when dealing with complex kinematic motions that occupy the low- and high-frequency bands. The use of the discrete wavelet transform, as an alternative to digital filters, is presented in this paper. The transform passes the original signal through two complementary low- and high-pass FIR filters and decomposes the signal into an approximation function and a detail function. Further decomposition transforms the signal into a hierarchical set of orthogonal approximation and detail functions. A reverse process is employed to perfectly reconstruct the signal (inverse transform) from its approximation and detail functions. The discrete wavelet transform was applied to the displacement data recorded by Pezzack et al., 1977. The smoothed displacement data were twice differentiated and compared to Pezzack et al.'s acceleration data in order to choose the most appropriate filter coefficients and decomposition level, on the basis of maximizing the percentage of retained energy (PRE) and minimizing the root mean square error (RMSE). The Daubechies wavelet of the fourth order (Db4) at the second decomposition level showed better results than both the biorthogonal and Coiflet wavelets (PRE = 97.5%, RMSE = 4.7 rad s-2). The Db4 wavelet was then used to smooth complex displacement data obtained from a noisy mathematically generated function. Results clearly indicate the superiority of this new smoothing approach over traditional filters.
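The smoothing idea above — reconstruct from the approximation function only, discarding the detail functions — has a particularly compact equivalent for the Haar wavelet: keeping only the level-L Haar approximation replaces each block of 2**L samples by its mean. The paper's choice is a Db4 wavelet at level 2, which this dependency-free Haar sketch does not reproduce; it only illustrates the approximation-only reconstruction and the subsequent double differentiation.

```python
import numpy as np

def dwt_smooth(signal, level=2):
    """Haar-DWT smoothing: keep only the approximation at `level`.
    For Haar this is exactly a blockwise mean over 2**level samples."""
    x = np.asarray(signal, dtype=float)
    b = 2 ** level
    assert len(x) % b == 0, "length must be a multiple of 2**level"
    return np.repeat(x.reshape(-1, b).mean(axis=1), b)

def acceleration(displacement, dt):
    """Acceleration from smoothed displacement via central differences."""
    return np.gradient(np.gradient(displacement, dt), dt)
```

Smoothing first, then differentiating twice, avoids the severe noise amplification of differentiating raw displacement data.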
Novel Spectral Representations and Sparsity-Driven Algorithms for Shape Modeling and Analysis
NASA Astrophysics Data System (ADS)
Zhong, Ming
In this dissertation, we focus on extending classical spectral shape analysis by incorporating spectral graph wavelets and sparsity-seeking algorithms. Defined with the graph Laplacian eigenbasis, the spectral graph wavelets are localized both in the vertex domain and graph spectral domain, and thus are very effective in describing local geometry. With a rich dictionary of elementary vectors and forcing certain sparsity constraints, a real life signal can often be well approximated by a very sparse coefficient representation. The many successful applications of sparse signal representation in computer vision and image processing inspire us to explore the idea of employing sparse modeling techniques with dictionary of spectral basis to solve various shape modeling problems. Conventional spectral mesh compression uses the eigenfunctions of mesh Laplacian as shape bases, which are highly inefficient in representing local geometry. To ameliorate, we advocate an innovative approach to 3D mesh compression using spectral graph wavelets as dictionary to encode mesh geometry. The spectral graph wavelets are locally defined at individual vertices and can better capture local shape information than Laplacian eigenbasis. The multi-scale SGWs form a redundant dictionary as shape basis, so we formulate the compression of 3D shape as a sparse approximation problem that can be readily handled by greedy pursuit algorithms. Surface inpainting refers to the completion or recovery of missing shape geometry based on the shape information that is currently available. We devise a new surface inpainting algorithm founded upon the theory and techniques of sparse signal recovery. Instead of estimating the missing geometry directly, our novel method is to find this low-dimensional representation which describes the entire original shape. 
More specifically, we find that, for many shapes, the vertex coordinate function can be well approximated by a very sparse coefficient representation with respect to the dictionary comprising its Laplacian eigenbasis, and it is then possible to recover this sparse representation from partial measurements of the original shape. Taking advantage of the sparsity cue, we advocate a novel variational approach for surface inpainting, integrating data fidelity constraints on the shape domain with coefficient sparsity constraints on the transformed domain. Because of the powerful properties of Laplacian eigenbasis, the inpainting results of our method tend to be globally coherent with the remaining shape. Informative and discriminative feature descriptors are vital in qualitative and quantitative shape analysis for a large variety of graphics applications. We advocate novel strategies to define generalized, user-specified features on shapes. Our new region descriptors are primarily built upon the coefficients of spectral graph wavelets that are both multi-scale and multi-level in nature, consisting of both local and global information. Based on our novel spectral feature descriptor, we developed a user-specified feature detection framework and a tensor-based shape matching algorithm. Through various experiments, we demonstrate the competitive performance of our proposed methods and the great potential of spectral basis and sparsity-driven methods for shape modeling.
Solid harmonic wavelet scattering for predictions of molecule properties
NASA Astrophysics Data System (ADS)
Eickenberg, Michael; Exarchakis, Georgios; Hirn, Matthew; Mallat, Stéphane; Thiry, Louis
2018-06-01
We present a machine learning algorithm for the prediction of molecule properties inspired by ideas from density functional theory (DFT). Using Gaussian-type orbital functions, we create surrogate electronic densities of the molecule from which we compute invariant "solid harmonic scattering coefficients" that account for different types of interactions at different scales. Multilinear regressions of various physical properties of molecules are computed from these invariant coefficients. Numerical experiments show that these regressions have near state-of-the-art performance, even with relatively few training examples. Predictions over small sets of scattering coefficients can reach a DFT precision while being interpretable.
NASA Astrophysics Data System (ADS)
Chen, Guoxiong; Cheng, Qiuming
2016-02-01
Multi-resolution and scale-invariance have been increasingly recognized as two closely related intrinsic properties of geofields such as geochemical and geophysical anomalies, and they are commonly investigated using multiscale- and scaling-analysis methods. In this paper, the wavelet-based multiscale decomposition (WMD) method is proposed to investigate the multiscale nature of geochemical patterns from large scale to small scale. In the light of the wavelet transformation of fractal measures, we demonstrate that the wavelet approximation operator provides a generalization of the box-counting method for scaling analysis of geochemical patterns. Specifically, the approximation coefficient acts as the generalized density value in density-area fractal modeling of singular geochemical distributions. Accordingly, we present a novel local singularity analysis (LSA) using the WMD algorithm, which extends conventional moving averaging to a kernel-based operator for implementing LSA. Finally, the novel LSA was validated using a case study dealing with geochemical data (Fe2O3) in stream sediments for mineral exploration in Inner Mongolia, China. In comparison with LSA implemented using the moving averaging method, the novel LSA using WMD better identified weak geochemical anomalies associated with mineralization in the covered area.
Wavelet-based localization of oscillatory sources from magnetoencephalography data.
Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C
2014-08-01
Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features of physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem to the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy.
Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan
2016-07-27
This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. The first is an original use of the normalized absolute function value (NABS), calculated from the wavelet coefficients derived at different decomposition levels, to identify textures in which the defect can be isolated by eliminating the texture pattern at the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction; unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, this provides a lower decomposition level, avoiding excessive degradation of the image and allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important for achieving optimum performance in defect detection. As a consequence, several different thresholding algorithms, depending on the type of texture, are proposed.
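The two ingredients of the selection step — a multilevel 2-D wavelet decomposition and Shannon's entropy computed over detail subimages — can be sketched as follows. This uses an unnormalized 2-D Haar step and an entropy of the squared-coefficient distribution; the paper's exact EADL criterion and NABS feature are not reproduced, so the "pick the highest-entropy level" rule here is an illustrative assumption.

```python
import numpy as np

def haar2(img):
    """One level of a 2-D Haar transform: returns LL and (LH, HL, HH)."""
    r = np.asarray(img, dtype=float)
    ra = (r[:, 0::2] + r[:, 1::2]) / 2.0     # row averages
    rd = (r[:, 0::2] - r[:, 1::2]) / 2.0     # row differences
    ll = (ra[0::2] + ra[1::2]) / 2.0
    lh = (ra[0::2] - ra[1::2]) / 2.0
    hl = (rd[0::2] + rd[1::2]) / 2.0
    hh = (rd[0::2] - rd[1::2]) / 2.0
    return ll, (lh, hl, hh)

def shannon_entropy(sub):
    """Entropy (bits) of the normalized squared-coefficient distribution."""
    e = np.asarray(sub, dtype=float).ravel() ** 2
    p = e / e.sum() if e.sum() > 0 else e
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_level(img, max_level=4):
    """Pick the decomposition level whose detail subimages carry the
    highest total entropy (illustrative selection rule)."""
    best_level, best_h, ll = 1, -1.0, img
    for lev in range(1, max_level + 1):
        ll, details = haar2(ll)
        h = sum(shannon_entropy(d) for d in details)
        if h > best_h:
            best_h, best_level = h, lev
    return best_level
```

The image dimensions must be divisible by 2**max_level; a uniform subimage with n coefficients attains the maximum entropy log2(n).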
NASA Astrophysics Data System (ADS)
Loris, Ignace; Simons, Frederik J.; Daubechies, Ingrid; Nolet, Guust; Fornasier, Massimo; Vetter, Philip; Judd, Stephen; Voronin, Sergey; Vonesch, Cédric; Charléty, Jean
2010-05-01
Global seismic wavespeed models are routinely parameterized in terms of spherical harmonics, networks of tetrahedral nodes, rectangular voxels, or spherical splines. Up to now, Earth model parametrizations by wavelets on the three-dimensional ball remain uncommon. Here we propose such a procedure with the following three goals in mind: (1) The multiresolution character of a wavelet basis allows the models to be represented with an effective spatial resolution that varies as a function of position within the Earth. (2) This property can be used to great advantage in the regularization of seismic inversion schemes by seeking the most sparse solution vector, in wavelet space, through iterative minimization of a combination of the ℓ2 norm (to fit the data) and the ℓ1 norm (to promote sparsity in wavelet space). (3) With the continuing increase in high-quality seismic data, our focus is also on numerical efficiency and the ability to use parallel computing in reconstructing the model. In this presentation we propose a new wavelet basis to take advantage of these three properties. To form the numerical grid we begin with a surface tessellation known as the 'cubed sphere', a construction popular in fluid dynamics and computational seismology, coupled with a semi-regular radial subdivision that honors the major seismic discontinuities between the core-mantle boundary and the surface. This mapping first divides the volume of the mantle into six portions. In each 'chunk', two angular and one radial variable are used for parametrization. In the new variables, standard 'cartesian' algorithms can more easily be used to perform the wavelet transform (or other common transforms). Edges between chunks are handled by special boundary filters. 
We highlight the benefits of this construction and use it to analyze the information present in several published seismic compressional-wavespeed models of the mantle, paying special attention to the statistics of wavelet and scaling coefficients across scales. We also focus on the likely gains of future inversions of finite-frequency seismic data using a sparsity promoting penalty in combination with our new wavelet approach.
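The sparsity-promoting step described above — minimizing a combination of an ℓ2 data-fit term and an ℓ1 penalty on the wavelet coefficients — is classically handled by iterative soft thresholding (ISTA). Below is a generic dense-matrix sketch; the actual inversion would use a seismic sensitivity operator and the cubed-sphere wavelet transform, both of which are assumed away here.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Step size 1/L with L the Lipschitz constant of the gradient."""
    L = np.linalg.norm(A, 2) ** 2            # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```

With a small penalty and enough measurements, the iteration recovers a sparse model from an underdetermined system, which is exactly the rationale for promoting sparsity in wavelet space.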
NASA Astrophysics Data System (ADS)
Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian
2018-01-01
In order to promote the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images combining the discrete stationary wavelet transform (DSWT), the discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is useful for image feature extraction. Some frequently-used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
Muthusamy, Hariharan; Polat, Kemal; Yaacob, Sazali
2015-01-01
In recent years, many research works have been published using speech-related features for speech emotion recognition; however, recent studies show that there is a strong correlation between emotional states and glottal features. In this work, Mel-frequency cepstral coefficients (MFCCs), linear predictive cepstral coefficients (LPCCs), perceptual linear predictive (PLP) features, gammatone filter outputs, timbral texture features, stationary wavelet transform based timbral texture features, and relative wavelet packet energy and entropy features were extracted from the emotional speech (ES) signals and their glottal waveforms (GW). Particle swarm optimization based clustering (PSOC) and wrapper based particle swarm optimization (WPSO) were proposed to enhance the discerning ability of the features and to select the discriminating features, respectively. Three different emotional speech databases were utilized to evaluate the proposed method. An extreme learning machine (ELM) was employed to classify the different types of emotions. Different experiments were conducted, and the results show that the proposed method significantly improves speech emotion recognition performance compared to previous works published in the literature. PMID:25799141
YÜCEL, MERYEM A.; SELB, JULIETTE; COOPER, ROBERT J.; BOAS, DAVID A.
2014-01-01
As near-infrared spectroscopy (NIRS) broadens its application area to different age and disease groups, motion artifacts in the NIRS signal due to subject movement are becoming an important challenge. Motion artifacts generally produce signal fluctuations that are larger than physiological NIRS signals, so it is crucial to correct for them before obtaining an estimate of stimulus-evoked hemodynamic responses. There are various methods for correction, such as principal component analysis (PCA), wavelet-based filtering and spline interpolation. Here, we introduce a new approach to motion artifact correction, targeted principal component analysis (tPCA), which incorporates a PCA filter only on the segments of data identified as motion artifacts. It is expected that this will overcome the issues of filtering desired signals that plague standard PCA filtering of entire data sets. We compared the new approach with the most effective motion artifact correction algorithms on a set of data acquired simultaneously with a collodion-fixed probe (low motion artifact content) and a standard Velcro probe (high motion artifact content). Our results show that tPCA gives statistically better results in recovering the hemodynamic response function (HRF) than wavelet-based filtering and spline interpolation for the Velcro probe. It results in a significant reduction in mean-squared error (MSE) and a significant enhancement in Pearson’s correlation coefficient to the true HRF. The collodion-fixed fiber probe with no motion correction performed better than the Velcro probe corrected for motion artifacts in terms of MSE and Pearson’s correlation coefficient. Thus, if the experimental study permits, the use of a collodion-fixed fiber probe may be desirable. If the use of a collodion-fixed probe is not feasible, then we suggest the use of tPCA in the processing of motion artifact contaminated data. PMID:25360181
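The targeted-PCA idea — run the PCA filter only on the samples flagged as motion artifact, leaving the rest of the recording untouched — can be sketched in a few lines for channel-by-time data using an SVD. The artifact-detection step, and the choice of how many components to remove, are assumptions left outside this sketch.

```python
import numpy as np

def tpca_correct(data, artifact_mask, n_remove=1):
    """Targeted PCA: remove the top `n_remove` principal components
    from the artifact-flagged time samples only.
    data: channels x time array; artifact_mask: boolean, length = time."""
    corrected = np.asarray(data, dtype=float).copy()
    seg = corrected[:, artifact_mask]
    mean = seg.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(seg - mean, full_matrices=False)
    s[:n_remove] = 0.0                       # drop dominant components
    corrected[:, artifact_mask] = u @ np.diag(s) @ vt + mean
    return corrected
```

Because the filter touches only the flagged segment, physiological signal outside the artifact windows passes through bit-for-bit, which is the motivation for tPCA over whole-record PCA.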
NASA Astrophysics Data System (ADS)
Eppelbaum, Lev
2015-04-01
Geophysical methods are prompt, non-invasive and low-cost tools for quantitative delineation of buried archaeological targets. However, taking into account the complexity of geological-archaeological media, some unfavourable environments and the known ambiguity of geophysical data analysis, examination by a single geophysical method might be insufficient (Khesin and Eppelbaum, 1997). Besides this, it is well known that the majority of inverse-problem solutions in geophysics are ill-posed (e.g., Zhdanov, 2002), which means, according to Hadamard (1902), that the solution does not exist, is not unique, or is not a continuous function of the observed geophysical data (small perturbations in the observations cause arbitrary errors in the solution). This fact has wide implications for informational, probabilistic and wavelet methodologies in archaeological geophysics (Eppelbaum, 2014a). The goal of modern geophysical data examination is to detect the geophysical signatures of buried targets in noisy areas via the analysis of some physical parameters with a minimal number of false alarms and missed detections (Eppelbaum et al., 2011; Eppelbaum, 2014b). The proposed wavelet approach to recognition of archaeological targets (AT) by the examination of geophysical method integration consists of advanced processing of each geophysical method and nonconventional integration of the different geophysical methods with one another. The recently developed technique of diffusion clustering, combined with the abovementioned wavelet methods, was utilized to integrate the geophysical data and detect existing irregularities. The approach is based on wavelet packet techniques applied to the geophysical images (or graphs) as functions of the coordinates. Practically all geophysical methods (magnetic, gravity, seismic, GPR, ERT, self-potential, etc.) may be utilized for such an analysis.
At the first stage of the proposed investigation, a few tens of typical physical-archaeological models (PAMs) (e.g., Eppelbaum et al., 2010; Eppelbaum, 2011) of the targets under study for the specific area (region) are developed. These PAMs are composed on the basis of the known archaeological and geological data, the results of previous archaeogeophysical investigations, and 3D modeling of geophysical data. It should be underlined that the PAMs must differ (by depth, size, shape and physical properties of the AT, as well as by peculiarities of the host archaeological-geological media). The PAMs must also include noise components of different orders (corresponding to the archaeogeophysical conditions of the area under study). The same models are also computed without the AT. Introducing complex PAMs (for example, situated in the vicinity of electric power lines, objects of infrastructure, etc. (Eppelbaum et al., 2001)) will reflect a real class of AT occurring in such conditions, which are unfavorable for geophysical searching. Anomalous effects from such complex PAMs will significantly disturb the geophysical anomalies from the AT and impede the employment of the wavelet methodology. At the same time, the 'self-learning' procedure embedded in this methodology will further help to recognize the AT even in cases of unfavorable S/N ratio. Modern developments in wavelet theory and data mining are utilized for the analysis of the integrated data. The wavelet approach is applied for the derivation of enhanced (e.g., coherence portraits) and combined images of geophysical fields. Modern methodologies based on matching pursuit with wavelet packet dictionaries enable the extraction of desired signals even from strongly noised data (Averbuch et al., 2014). Researchers usually face the problem of extracting essential features from available data contaminated by random noise and by a non-relevant background (Averbuch et al., 2014).
If the essential structure of a signal consists of several sine waves, then we may represent it via a trigonometric basis (Fourier analysis). In this case one can compare the signal with a set of sinusoids and extract the consistent ones. An indicator of the presence of a wave in a signal f(t) is the Fourier coefficient ∫ f(t) sin(ωt) dt. Wavelet analysis provides a rich library of available waveforms and fast, computationally efficient procedures for representing signals and selecting relevant waveforms. The basic assumption justifying an application of wavelet analysis is that the essential structure of the analyzed signal consists of a small number of various waveforms. The best way to reveal this structure is to represent the signal by a set of basic elements containing waveforms coherent with the signal. For structures of the signal coherent with the basis, large coefficients are attributed to a few basic waveforms, whereas we expect small coefficients for the noise and for structures incoherent with all basic waveforms. Wavelets are a family of functions ranging from functions of arbitrary smoothness to fractal ones. The wavelet procedure involves two aspects. The first is decomposition, i.e., breaking up a signal to obtain the wavelet coefficients; the second is reconstruction, which consists of reassembling the signal from its coefficients. There are many modifications of wavelet analysis (WA). Note, first of all, the so-called continuous WA, in which the signal f(t) is tested for the presence of waveforms ψ((t-b)/a). Here, a is the scaling parameter (dilation), and b determines the location of the wavelet ψ((t-b)/a) in the signal f(t). The integral (W_ψ f)(b,a) = ∫ f(t) ψ((t-b)/a) dt is the continuous wavelet transform. When the parameters a, b in ψ((t-b)/a) take discrete values, we have the discrete wavelet transform. A general scheme of the wavelet decomposition tree is shown, for instance, in (Averbuch et al., 2014; Eppelbaum et al., 2014). The signal is compared with the testing signal on each scale.
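The comparison of a signal with dilated and translated waveforms can be made concrete with a small numerical sketch of the continuous wavelet transform, here with a Mexican-hat (Ricker) wavelet and a Riemann-sum approximation of the integral; the grid, scale and shift values are illustrative assumptions, and the 1/sqrt(a) normalization is omitted since a = 1.

```python
import math

def mexican_hat(t):
    # Mexican-hat (Ricker) wavelet: second derivative of a Gaussian, up to a constant.
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def cwt_coeff(f, b, a, ts, dt):
    # Riemann-sum approximation of (W_psi f)(b, a) = integral f(t) psi((t - b)/a) dt.
    return sum(f(t) * mexican_hat((t - b) / a) for t in ts) * dt

dt = 0.01
ts = [i * dt for i in range(-500, 501)]   # t in [-5, 5]
signal = mexican_hat                       # a signal containing one Ricker waveform at t = 0

matched = cwt_coeff(signal, b=0.0, a=1.0, ts=ts, dt=dt)   # wavelet aligned with the waveform
offset = cwt_coeff(signal, b=3.0, a=1.0, ts=ts, dt=dt)    # wavelet shifted away from it
```

The coefficient is largest where the wavelet is coherent with the signal (b = 0) and small where it is not, which is the selection mechanism the text describes.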
Wavelet coefficients are estimated, which enable reconstruction of the first approximation of the signal and its details. At the next level, the wavelet transform is applied to the approximation. Then we can present A1 as A2 + D2, etc. So, if S is the signal, A the approximation and D the details, then S = A1 + D1 = A2 + D2 + D1 = A3 + D3 + D2 + D1. The wavelet packet transform is applied both to the low-pass results (approximations) and to the high-pass results (details). For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters (Eppelbaum et al., 2012). This set of parameters serves as a signature of the image and is utilized for discriminating images (a) containing AT from images (b) not containing AT (let us designate the latter as N). The constructed algorithm consists of the following main phases: (a) collection of the database, (b) characterization of the geophysical images, and (c) dimensionality reduction. Then, each image is characterized by the histogram of the coherency directions (Alperovich et al., 2013). As a result of the previous steps we obtain two sets of signature vectors for the geophysical images: AT and N. The obtained 3D set of data representatives can be used as a reference set for the classification of newly arriving geophysical data. The obtained data sets are reduced by embedding the feature vectors into 3D Euclidean space using the so-called diffusion map. This map enables us to reveal the internal structure of the datasets AT and N and to distinctly separate them. For this, a matrix of the diffusion distances for the combined feature matrix F = FN ∪ FC of size 60 x C is constructed (Coifman and Lafon, 2006; Averbuch et al., 2010). Then, each row of the matrices FN and FC is projected onto the first three eigenvectors of the matrix D(F). As a result, each data curve is represented by a 3D point in the Euclidean space formed by the eigenvectors of D(F).
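The multilevel identity S = A1 + D1 = A2 + D2 + D1 stated above can be checked numerically with the Haar filter pair, the simplest choice; the filter and the toy signal are illustrative assumptions, since the abstract does not specify a wavelet.

```python
def analyze(s):
    # One Haar level: approximation and detail coefficients at half length.
    cA = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
    cD = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
    return cA, cD

def synthesize(cA, cD):
    # Inverse step: s[2i] = cA[i] + cD[i], s[2i+1] = cA[i] - cD[i].
    out = []
    for a, d in zip(cA, cD):
        out += [a + d, a - d]
    return out

zeros = lambda xs: [0.0] * len(xs)

S = [4.0, 2.0, 5.0, 7.0, 1.0, 1.0, 3.0, 5.0]
cA1, cD1 = analyze(S)
cA2, cD2 = analyze(cA1)

# Bring each piece back to full signal length:
A1 = synthesize(cA1, zeros(cA1))
D1 = synthesize(zeros(cD1), cD1)
A2 = synthesize(synthesize(cA2, zeros(cA2)), zeros(cA1))
D2 = synthesize(synthesize(zeros(cD2), cD2), zeros(cA1))
```

Because the synthesis step is linear, A1 + D1 reproduces S exactly, and splitting A1 further gives A2 + D2 = A1, hence S = A2 + D2 + D1.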
The Euclidean distances between these 3D points reflect the similarity of the data curves. Scatter plots of the projections of the data curves onto the diffusion eigenvectors are then composed. Finally, we observe that, as a result of the above operations, we have embedded the original data into a 3-dimensional space in which data related to the AT subsurface are well separated from the N data. This 3D set of data representatives can be used as a reference set for the classification of newly arriving data. Geophysically, it means a reliable division of the studied areas into those containing AT and those not containing them (N). Testing this methodology for the delineation of archaeological cavities by magnetic and gravity data analysis demonstrated the effectiveness of the approach. References Alperovich, L., Eppelbaum, L., Zheludev, V., Dumoulin, J., Soldovieri, F., Proto, M., Bavusi, M. and Loperte, A., 2013. A new combined wavelet methodology applied to GPR and ERT data in the Montagnole experiment (French Alps). Journal of Geophysics and Engineering, 10, No. 2, 025017, 1-17. Averbuch, A., Hochman, K., Rabin, N., Schclar, A. and Zheludev, V., 2010. A diffusion framework for detection of moving vehicles. Digital Signal Processing, 20, No. 1, 111-122. Averbuch, A.Z., Neittaanmäki, P. and Zheludev, V.A., 2014. Spline and Spline Wavelet Methods with Applications to Signal and Image Processing. Volume I: Periodic Splines. Springer. Coifman, R.R. and Lafon, S., 2006. Diffusion maps. Applied and Computational Harmonic Analysis, Special issue on Diffusion Maps and Wavelets, 21, No. 7, 5-30. Eppelbaum, L.V., 2011. Study of magnetic anomalies over archaeological targets in urban conditions. Physics and Chemistry of the Earth, 36, No. 16, 1318-1330. Eppelbaum, L.V., 2014a. Geophysical observations at archaeological sites: Estimating informational content. Archaeological Prospection, 21, No. 2, 25-38. Eppelbaum, L.V., 2014b. Four Color Theorem and Applied Geophysics.
Applied Mathematics, 5, 358-366. Eppelbaum, L.V., Alperovich, L., Zheludev, V. and Pechersky, A., 2011. Application of informational and wavelet approaches for integrated processing of geophysical data in complex environments. Proceed. of the 2011 SAGEEP Conference, Charleston, South Carolina, USA, 24, 24-60. Eppelbaum, L.V., Khesin, B.E. and Itkis, S.E., 2001. Prompt magnetic investigations of archaeological remains in areas of infrastructure development: Israeli experience. Archaeological Prospection, 8, No.3, 163-185. Eppelbaum, L.V., Khesin, B.E. and Itkis, S.E., 2010. Archaeological geophysics in arid environments: Examples from Israel. Journal of Arid Environments, 74, No. 7, 849-860. Eppelbaum, L.V., Zheludev, V. and Averbuch, A., 2014. Diffusion maps as a powerful tool for integrated geophysical field analysis to detecting hidden karst terranes. Izv. Acad. Sci. Azerb. Rep., Ser.: Earth Sciences, No. 1-2, 36-46. Hadamard, J., 1902. Sur les problèmes aux dérivées partielles et leur signification physique. Princeton University Bulletin, 13, 49-52. Khesin, B.E. and Eppelbaum, L.V., 1997. The number of geophysical methods required for target classification: quantitative estimation. Geoinformatics, 8, No.1, 31-39. Zhdanov, M.S., 2002. Geophysical Inverse Theory and Regularization Problems. Methods in Geochemistry and Geophysics, Vol. 36. Elsevier, Amsterdam.
NASA Astrophysics Data System (ADS)
Ng, J.; Kingsbury, N. G.
2004-02-01
This book provides an overview of the theory and practice of continuous and discrete wavelet transforms. The book is divided into seven chapters: the first three are introductory, describing the various forms of the wavelet transform and their computation, while the remaining chapters are devoted to applications in fluids, engineering, medicine and miscellaneous areas. Each chapter is well introduced, with suitable examples to demonstrate key concepts. Illustrations are included where appropriate, thus adding a visual dimension to the text. A noteworthy feature is the inclusion, at the end of each chapter, of a list of further resources from the academic literature which the interested reader can consult. The first chapter is purely an introduction to the text. The treatment of wavelet transforms begins in the second chapter, with the definition of what a wavelet is. The chapter continues by defining the continuous wavelet transform and its inverse and a description of how it may be used to interrogate signals. The continuous wavelet transform is then compared to the short-time Fourier transform. Energy and power spectra with respect to scale are also discussed and linked to their frequency counterparts. Towards the end of the chapter, the two-dimensional continuous wavelet transform is introduced. Examples of how the continuous wavelet transform is computed using the Mexican hat and Morlet wavelets are provided throughout. The third chapter introduces the discrete wavelet transform, with its distinction from the discretized continuous wavelet transform having been made clear at the end of the second chapter. In the first half of the chapter, the logarithmic discretization of the wavelet function is described, leading to a discussion of dyadic grid scaling, frames, orthogonal and orthonormal bases, scaling functions and multiresolution representation.
The fast wavelet transform is introduced and its computation is illustrated with an example using the Haar wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transform and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike the previous chapters, this chapter and the subsequent ones are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics.
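The "coefficient manipulation for signal denoising" mentioned above can be illustrated with a single Haar level and soft thresholding of the detail coefficients; the toy signal, threshold value and filter choice are illustrative assumptions, not taken from the book.

```python
def haar(s):
    # One Haar analysis level: pairwise averages and half-differences.
    cA = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
    cD = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
    return cA, cD

def ihaar(cA, cD):
    # Inverse step: s[2i] = cA[i] + cD[i], s[2i+1] = cA[i] - cD[i].
    out = []
    for a, d in zip(cA, cD):
        out += [a + d, a - d]
    return out

def soft(x, t):
    # Soft threshold: kill coefficients below t, shrink the rest toward zero.
    return 0.0 if abs(x) <= t else (abs(x) - t) * (1.0 if x > 0 else -1.0)

noisy = [1.0, 1.2, 0.9, 1.1, 5.0, 5.1, 4.9, 5.2]  # a step edge plus small noise
cA, cD = haar(noisy)
denoised = ihaar(cA, [soft(d, 0.15) for d in cD])
```

Small detail coefficients (noise) are suppressed while the large-scale structure carried by the approximation, including the step edge, survives.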
The treatment of wavelet methods in fractals focuses on stochastic fractals, with a short section on multifractals. The treatment of finance touches on the use of wavelets by other authors in studying stock prices, commodity behaviour, market dynamics and foreign exchange rates. The treatment of geophysics covers what was omitted from the fourth chapter, namely seismology, well logging, topographic feature analysis and the analysis of climatic data. The text concludes with an assortment of other application areas which could only be mentioned in passing. Unlike most other publications on the subject, this book does not treat wavelet transforms in a mathematically rigorous manner but rather aims to explain the mechanics of the wavelet transform in a way that is easy to understand. Consequently, it serves as an excellent overview of the subject rather than as a reference text. Keeping the mathematics to a minimum and omitting cumbersome and detailed proofs, the book is best suited to those who are new to wavelets or who want an intuitive understanding of the subject. Such an audience may include graduate students in engineering as well as professionals and researchers in engineering and the applied sciences.
Statistical characterization of portal images and noise from portal imaging systems.
González-López, Antonio; Morales-Sánchez, Juan; Verdú-Monedero, Rafael; Larrey-Ruiz, Jorge
2013-06-01
In this paper, we consider the statistical characteristics of so-called portal images, which are acquired prior to radiotherapy treatment, as well as the noise present in portal imaging systems, in order to analyze whether the noise and image features well known in other image modalities, such as natural images, can also be found in the portal imaging modality. The study is carried out in the spatial image domain, in the Fourier domain, and finally in the wavelet domain. The probability density of the noise in the spatial image domain, the power spectral densities of the image and noise, and the marginal, joint, and conditional statistical distributions of the wavelet coefficients are estimated. Moreover, the statistical dependencies between noise and signal are investigated. The obtained results are compared with practical and useful references, such as the characteristics of natural images and white noise. Finally, we discuss the implications of the obtained results for several noise reduction methods that operate in the wavelet domain.
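One marginal statistic that separates natural-image-like wavelet coefficients (sparse, heavy-tailed) from white noise (near-Gaussian) is kurtosis. A minimal sketch on Haar detail coefficients follows; the two toy rows are invented for illustration and are not portal-image data.

```python
def haar_details(row):
    # First-level Haar detail coefficients of one image row.
    return [(row[i] - row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]

def kurtosis(xs):
    # Fourth standardized moment: 3 for a Gaussian, larger for heavy-tailed data.
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / (m2 * m2)

edge_row = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 5.0,
            5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0]  # one sharp edge: sparse details
busy_row = [1.0, -1.0, -1.0, 1.0] * 4                 # dense oscillation: non-sparse details

k_edge = kurtosis(haar_details(edge_row))
k_busy = kurtosis(haar_details(busy_row))
```

The sparse-edge row concentrates its energy in one large coefficient and so has kurtosis well above the Gaussian value of 3, while the dense oscillation yields a flat coefficient distribution with kurtosis below 3.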
Shift-invariant discrete wavelet transform analysis for retinal image classification.
Khademi, April; Krishnan, Sridhar
2007-12-01
This work involves retinal image classification, for which a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and a sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which includes translation-, scale- and semi-rotation-invariant features. Additionally, this technique is database independent, since the features were specifically tuned to the pathologies of the human eye.
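Leave-one-out evaluation with a small database can be sketched generically: each sample is held out in turn, the classifier is fit on the rest, and the held-out sample is classified. Here a nearest-class-mean rule stands in for LDA on a one-dimensional toy feature; the feature values and labels are invented for illustration.

```python
def loo_accuracy(features, labels):
    # Leave-one-out: hold out each sample, fit class means on the rest, classify it.
    correct = 0
    for i in range(len(features)):
        groups = {}
        for j, (f, l) in enumerate(zip(features, labels)):
            if j != i:
                groups.setdefault(l, []).append(f)
        pred = min(groups, key=lambda l: abs(features[i] - sum(groups[l]) / len(groups[l])))
        correct += (pred == labels[i])
    return correct / len(features)

# Toy texture-energy feature: normal images cluster low, abnormal images high.
feats = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
labels = ["normal", "normal", "normal", "abnormal", "abnormal", "abnormal"]
acc = loo_accuracy(feats, labels)
```

With well-separated clusters every held-out sample is classified correctly, which is why leave-one-out makes efficient use of a small database without optimistic bias.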
NASA Astrophysics Data System (ADS)
Suciati, Nanik; Herumurti, Darlis; Wijaya, Arya Yudhi
2017-02-01
Batik is one of Indonesia's traditional cloths. A motif or pattern drawn on a piece of batik fabric has a specific name and philosophy. Although batik cloths are widely used in everyday life, only a few people understand their motifs and philosophy. This research is intended to develop a batik motif recognition system which can be used to identify the motif of a batik image automatically. First, a batik image is decomposed into sub-images using the wavelet transform. Six texture descriptors, i.e. max probability, correlation, contrast, uniformity, homogeneity and entropy, are extracted from the gray-level co-occurrence matrix of each sub-image. The texture features are then matched to the template features using the Canberra distance. The experiment is performed on the Batik Dataset consisting of 1088 batik images grouped into seven motifs. The best recognition rate, 92.1%, is achieved using a feature extraction process with 5-level wavelet decomposition and a 4-directional gray-level co-occurrence matrix.
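The descriptor-and-matching step can be sketched as follows: a handful of Haralick-style descriptors are computed from a normalized co-occurrence matrix, and template matching uses the Canberra distance. The tiny 2x2 matrices below are invented for illustration; correlation is omitted from the sketch for brevity.

```python
import math

def glcm_features(P):
    # P: normalized gray-level co-occurrence matrix (entries sum to 1).
    n = len(P)
    cells = [(i, j, P[i][j]) for i in range(n) for j in range(n)]
    return [
        max(p for _, _, p in cells),                          # max probability
        sum(p * (i - j) ** 2 for i, j, p in cells),           # contrast
        sum(p * p for _, _, p in cells),                      # uniformity (energy)
        sum(p / (1 + abs(i - j)) for i, j, p in cells),       # homogeneity
        -sum(p * math.log(p) for _, _, p in cells if p > 0),  # entropy
    ]

def canberra(u, v):
    # Canberra distance between two feature vectors.
    return sum(abs(a - b) / (abs(a) + abs(b)) for a, b in zip(u, v) if abs(a) + abs(b) > 0)

smooth = [[0.5, 0.0], [0.0, 0.5]]  # mass on the diagonal: low contrast
rough = [[0.0, 0.5], [0.5, 0.0]]   # mass off the diagonal: high contrast

query = glcm_features(smooth)
templates = {"smooth": glcm_features(smooth), "rough": glcm_features(rough)}
best = min(templates, key=lambda name: canberra(query, templates[name]))
```

The query is assigned to the template with the smallest Canberra distance; the per-component normalization makes the distance insensitive to descriptors living on very different scales.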
NASA Astrophysics Data System (ADS)
Bunget, Gheorghe; Tilmon, Brevin; Yee, Andrew; Stewart, Dylan; Rogers, James; Webster, Matthew; Farinholt, Kevin; Friedersdorf, Fritz; Pepi, Marc; Ghoshal, Anindya
2018-04-01
Widespread damage in aging aircraft is becoming an increasing concern as both civil and military fleet operators are extending the service lifetime of their aircraft. Metallic components undergoing variable cyclic loadings eventually fatigue and form dislocations as precursors to ultimate failure. In order to characterize the progression of fatigue damage precursors (DP), the acoustic nonlinearity parameter is measured as the primary indicator. However, using proven standard ultrasonic technology for nonlinear measurements presents limitations for settings outside of the laboratory environment. This paper presents an approach for ultrasonic inspection through automated immersion scanning of hot section engine components where mature ultrasonic technology is used during periodic inspections. Nonlinear ultrasonic measurements were analyzed using wavelet analysis to extract multiple harmonics from the received signals. Measurements indicated strong correlations of nonlinearity coefficients and levels of fatigue in aluminum and Ni-based superalloys. This novel wavelet cross-correlation (WCC) algorithm is a potential technique to scan for fatigue damage precursors and identify critical locations for remaining life prediction.
Mahrooghy, Majid; Ashraf, Ahmed B; Daye, Dania; Mies, Carolyn; Feldman, Michael; Rosen, Mark; Kontos, Despina
2013-01-01
Breast tumors are heterogeneous lesions. Intra-tumor heterogeneity presents a major challenge for cancer diagnosis and treatment. Few studies have worked on capturing tumor heterogeneity from imaging. Most studies to date consider aggregate measures for tumor characterization. In this work we capture tumor heterogeneity by partitioning tumor pixels into subregions and extracting heterogeneity wavelet kinetic (HetWave) features from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to obtain the spatiotemporal patterns of the wavelet coefficients and contrast agent uptake from each partition. Using a genetic algorithm for feature selection, and a logistic regression classifier with leave one-out cross validation, we tested our proposed HetWave features for the task of classifying breast cancer recurrence risk. The classifier based on our features gave an ROC AUC of 0.78, outperforming previously proposed kinetic, texture, and spatial enhancement variance features which give AUCs of 0.69, 0.64, and 0.65, respectively.
Lavine, Barry K; White, Collin G; Ding, Tao
2018-03-01
Pattern recognition techniques have been applied to the infrared (IR) spectral libraries of the Paint Data Query (PDQ) database to differentiate between nonidentical but similar IR spectra of automotive paints. To tackle the problem of library searching, search prefilters were developed to identify the vehicle make from IR spectra of the clear coat, surfacer-primer, and e-coat layers. To develop these search prefilters with the appropriate degree of accuracy, IR spectra from the PDQ database were preprocessed using the discrete wavelet transform to enhance subtle but significant features in the IR spectral data. Wavelet coefficients characteristic of vehicle make were identified using a genetic algorithm for pattern recognition and feature selection. Search prefilters to identify automotive manufacturer through IR spectra obtained from a paint chip recovered at a crime scene were developed using 1596 original manufacturer's paint systems spanning six makes (General Motors, Chrysler, Ford, Honda, Nissan, and Toyota) within a limited production year range (2000-2006). Search prefilters for vehicle manufacturer that were developed as part of this study were successfully validated using IR spectra obtained directly from the PDQ database. Information obtained from these search prefilters can serve to quantify the discrimination power of original automotive paint encountered in casework and further efforts to succinctly communicate trace evidential significance to the courts.
NASA Astrophysics Data System (ADS)
Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany
2016-07-01
Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, as well as the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficient and concentration matrices, and validation was performed by both cross validation and external validation sets. Both methods were successfully applied for the determination of the studied drugs in pharmaceutical formulations.
NASA Astrophysics Data System (ADS)
Shen, Zhengwei; Cheng, Lishuang
2017-09-01
Total variation (TV)-based image deblurring methods can produce staircase artifacts in the homogeneous regions of the latent images recovered from degraded images, while wavelet/frame-based image deblurring methods lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame- and TV-based image deblurring model. In this model, the wavelet/frame- and TV-based methods may complement each other, which is verified by theoretical analysis and experimental results. To further improve the quality of the latent images, a nonconvex penalty function is used for the regularization terms of the model, which may induce a sparser solution and more accurately estimate the relatively large gradient or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter for the nonconvex penalty function, each subproblem obtained by splitting the proposed model with the alternating direction method of multipliers algorithm can be guaranteed to be a convex optimization problem; hence, each subproblem can converge to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems, where the designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art models and methods.
Peng, Hong; Hu, Bin; Shi, Qiuxia; Ratcliffe, Martyn; Zhao, Qinglin; Qi, Yanbing; Gao, Guoping
2013-05-01
A new model to remove ocular artifacts (OA) from electroencephalograms (EEGs) is presented. The model is based on discrete wavelet transformation (DWT) and adaptive noise cancellation (ANC). Using simulated and measured data, the accuracy of the model is compared with that of other existing methods based on stationary wavelet transforms and with our previous work based on the wavelet packet transform and independent component analysis. A particularly novel feature of the new model is the use of DWTs to construct an OA reference signal, using the three lowest-frequency wavelet coefficients of the EEGs. The results show that the new model demonstrates improved performance with respect to the recovery of true EEG signals and also has better tracking performance. Because the new model requires only single-channel sources, it is well suited for use in portable environments, where constraints on acceptable wearable sensor attachments usually dictate single-channel devices. The model is also applied and evaluated against data recorded within the EU FP7 project Online Predictive Tools for Intervention in Mental Illness (OPTIMI). The results show that the proposed model is effective in removing OAs and meets the requirements of portable systems used for patient monitoring, as typified by the OPTIMI project.
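The ANC stage can be sketched with a least-mean-squares (LMS) adaptive filter: an FIR filter driven by the reference signal tracks the artifact in the primary channel, and the prediction error serves as the cleaned EEG. The abstract does not state which adaptive algorithm is used, so LMS here is an illustrative stand-in, with invented signals.

```python
import math

def lms_cancel(primary, reference, mu=0.01, order=4):
    # Adaptive noise cancellation: the FIR filter on the reference tracks the artifact;
    # the error e = primary - estimate is returned as the cleaned signal.
    w = [0.0] * order
    cleaned = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        est = sum(wi * xi for wi, xi in zip(w, x))
        e = primary[n] - est
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]  # LMS weight update
        cleaned.append(e)
    return cleaned

N = 2000
reference = [math.sin(0.3 * n) for n in range(N)]  # the constructed artifact reference
primary = [0.8 * r for r in reference]             # artifact leaking into the recorded channel
cleaned = lms_cancel(primary, reference)

early = sum(abs(e) for e in cleaned[:200])
late = sum(abs(e) for e in cleaned[-200:])
```

As the weights converge, the residual shrinks toward the artifact-free signal (zero in this toy case), and only the reference channel is needed, matching the single-channel constraint.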
[A new peak detection algorithm of Raman spectra].
Jiang, Cheng-Zhi; Sun, Qiang; Liu, Ying; Liang, Jing-Qiu; An, Yan; Liu, Bing
2014-01-01
The authors propose a new Raman peak recognition method named the bi-scale correlation algorithm. The algorithm uses the combination of the correlation coefficient and the local signal-to-noise ratio at two scales to achieve Raman peak identification. We compared the performance of the proposed algorithm with that of the traditional continuous wavelet transform method in MATLAB, and then tested the algorithm on real Raman spectra. The results show that the average time for identifying a Raman spectrum is 0.51 s with the proposed algorithm, versus 0.71 s with the continuous wavelet transform. When the signal-to-noise ratio of a Raman peak is greater than or equal to 6 (modern Raman spectrometers feature an excellent signal-to-noise ratio), the recognition accuracy of the algorithm is higher than 99%, while it is less than 84% with the continuous wavelet transform method. The mean and standard deviation of the peak-position identification error of the algorithm are both less than those of the continuous wavelet transform method. Simulation analysis and experimental verification show that the new algorithm possesses the following advantages: no need for human intervention, no need for de-noising or background-removal operations, higher recognition speed and higher recognition accuracy. The proposed algorithm is well suited to Raman peak identification.
NASA Astrophysics Data System (ADS)
Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant
2014-03-01
In this paper, we propose a real-time human-versus-animal classification technique using a pyro-electric sensor array and hidden Markov models (HMMs). The technique starts with variational energy functional level set segmentation to separate the object from the background. After segmentation, we convert the segmented object to a signal by considering column-wise pixel values and then finding the wavelet coefficients of the signal. HMMs are trained to statistically model the wavelet features of individuals through an expectation-maximization learning process. Human-versus-animal classifications are made by evaluating a set of new wavelet feature data against the trained HMMs using the maximum-likelihood criterion. Human and animal data acquired using a pyro-electric sensor in different terrains are used for performance evaluation of the algorithms. The computationally efficient SURF-feature-based approach that we developed in our previous research fails because of the distorted images produced when the object runs very fast or when the temperature difference between target and background is not sufficient to accurately profile the object. We show that wavelet-based HMMs work well for handling some of the distorted profiles in the data set. Further, the HMM approach achieves an improved classification rate over the SURF algorithm with almost the same computational time.
NASA Astrophysics Data System (ADS)
Ong, Swee Khai; Lim, Wee Keong; Soo, Wooi King
2013-04-01
A trademark, a distinctive symbol, is used to distinguish products or services provided by a particular person, group or organization from other similar entries. As a trademark represents the reputation and credit standing of its owner, it is important to differentiate one trademark from another. Many methods have been proposed to identify, classify and retrieve trademarks. However, most methods require a feature database and sample sets for training prior to the recognition and retrieval process. In this paper, a new feature on wavelet coefficients, the localized wavelet energy, is introduced to extract features of trademarks. With this, unsupervised content-based symmetrical trademark image retrieval is proposed without a database or a prior training set. The feature analysis integrates the proposed localized wavelet energy with a quadtree-decomposed regional symmetrical vector. The proposed framework removes the dependence on a query database and on human participation during the retrieval process. In this paper, trademarks of soccer game sponsors are the intended trademark category; video frames from soccer telecasts are extracted and processed for this study. Reasonably good localization and retrieval results are achieved on certain categories of trademarks.
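A rough illustration of a localized wavelet energy feature follows: a single-level 2-D Haar transform followed by summing squared detail coefficients over non-overlapping blocks, so each block yields one energy value. The single-level Haar transform and the block size are assumptions for illustration; the paper's exact feature definition may differ.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def localized_wavelet_energy(img, block=8):
    """Sum of squared detail coefficients over non-overlapping blocks:
    one localized energy value per block of the detail sub-bands."""
    _, lh, hl, hh = haar2d(img.astype(float))
    detail = lh**2 + hl**2 + hh**2
    h, w = detail.shape
    h -= h % block; w -= w % block
    e = detail[:h, :w].reshape(h // block, block, w // block, block)
    return e.sum(axis=(1, 3))
```

Flat regions produce zero energy while edges and texture light up the corresponding blocks, which is why such a map can localize a logo without any training database.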
Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali
2005-09-01
To present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and their poor signal-to-noise ratio (SNR) that limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristics (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features. (c) 2005 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Hohil, Myron E.; Desai, Sachi V.; Bass, Henry E.; Chambers, Jim
2005-03-01
Feature extraction methods based on the discrete wavelet transform and multiresolution analysis are used to develop a robust classification algorithm that reliably discriminates between conventional and simulated chemical/biological artillery rounds via acoustic signals produced during detonation. Distinct characteristics arise within the different airburst signatures because high explosive warheads emphasize concussive and shrapnel effects, while chemical/biological warheads are designed to disperse their contents over large areas, therefore employing a slower burning, less intense explosive to mix and spread their contents. The ensuing blast waves are readily characterized by variations in the corresponding peak pressure and rise time of the blast, differences in the ratio of positive pressure amplitude to the negative amplitude, and variations in the overall duration of the resulting waveform. Unique attributes can also be identified that depend upon the properties of the gun tube, projectile speed at the muzzle, and the explosive burn rates of the warhead. In this work, the discrete wavelet transform is used to extract the predominant components of these characteristics from air burst signatures at ranges exceeding 2 km. Highly reliable discrimination is achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of wavelet coefficients and higher-frequency details found within different levels of the multiresolution decomposition.
Delakis, Ioannis; Hammad, Omer; Kitney, Richard I
2007-07-07
Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.
Comparative Study of Speckle Filtering Methods in PolSAR Radar Images
NASA Astrophysics Data System (ADS)
Boutarfa, S.; Bouchemakh, L.; Smara, Y.
2015-04-01
Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature, corrupts both the amplitude and phase images, complicates data interpretation, degrades segmentation performance and reduces the detectability of targets. Hence the need to preprocess the images with suitable filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The developed filters are: the refined Lee filter, based on minimum mean square error (MMSE) estimation; the improved Sigma filter with detection of strong scatterers, based on the coherency matrix, which detects the different scatterers in order to preserve the polarization signature and maintain structures necessary for image interpretation; filtering by the stationary wavelet transform (SWT), using multi-scale edge detection and the SSC (sum of squared coefficients) technique for improving the wavelet coefficients; and the Turbo filter, a combination of two complementary filters, the refined Lee filter and the SWT wavelet filter, in which one filter can boost the results of the other. The originality of our work lies in the application of these methods to several types of images (amplitude, intensity and complex, from satellite and airborne radars) and in the optimization of wavelet filtering by adding a parameter to the threshold calculation. 
This parameter controls the filtering effect and achieves a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges, and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.
Zhang, Yan; Zou, Hong-Yan; Shi, Pei; Yang, Qin; Tang, Li-Juan; Jiang, Jian-Hui; Wu, Hai-Long; Yu, Ru-Qin
2016-01-01
Determination of benzo[a]pyrene (BaP) in cigarette smoke can be very important for tobacco quality control and for assessing its harm to human health. In this study, mid-infrared spectroscopy (MIR) coupled with a chemometric algorithm (DPSO-WPT-PLS) based on the wavelet packet transform (WPT), the discrete particle swarm optimization algorithm (DPSO) and partial least squares regression (PLS) was used to quantify the harmful ingredient benzo[a]pyrene in cigarette mainstream smoke, with promising results. Furthermore, the proposed method performed better than several other chemometric models, i.e., PLS, radial basis function-based PLS (RBF-PLS), PLS with stepwise regression variable selection (Stepwise-PLS), and WPT-PLS with informative wavelet coefficients selected by a correlation coefficient test (rtest-WPT-PLS). It can be expected that the proposed strategy could become an effective new technique for rapid quantitative analysis of the harmful ingredient BaP in cigarette mainstream smoke. Copyright © 2015 Elsevier B.V. All rights reserved.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit-plane encoder (BPE). Finally, the BPE is deeply coupled with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741
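The bit-plane encoding step can be illustrated with a minimal sketch: coefficient magnitudes are split into binary planes sent most-significant-bit first, so truncating the stream at any plane yields a progressively coarser quantization. This is a generic bit-plane decomposition for illustration, not the actual CCSDS BPE (which adds significance coding, sign handling per significance pass, and entropy coding).

```python
import numpy as np

def bit_planes(coeffs, nplanes=None):
    """Decompose integer coefficient magnitudes into bit planes, most
    significant first; signs are kept separately, as a BPE typically
    sends each sign once per coefficient."""
    mags = np.abs(coeffs).astype(np.int64)
    signs = np.sign(coeffs)
    if nplanes is None:
        nplanes = max(int(mags.max()).bit_length(), 1)
    planes = [((mags >> p) & 1).astype(np.uint8)
              for p in range(nplanes - 1, -1, -1)]
    return planes, signs

def from_bit_planes(planes, signs):
    """Inverse: reassemble magnitudes MSB-first and reapply signs."""
    mags = np.zeros(planes[0].shape, dtype=np.int64)
    for p in planes:
        mags = (mags << 1) | p
    return mags * signs
```

Decoding only the first few planes reconstructs every coefficient to within the power of two of the first omitted plane, which is what makes bit-plane streams embedded.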
NASA Astrophysics Data System (ADS)
Dong, Min; Dong, Chenghui; Guo, Miao; Wang, Zhe; Mu, Xiaomin
2018-04-01
Multiresolution-based methods, such as wavelet and Contourlet transforms, are commonly used for image fusion. This work presents a new image fusion framework utilizing the area-based standard deviation in the dual-tree Contourlet transform domain. Firstly, the pre-registered source images are decomposed with the dual-tree Contourlet transform to obtain low-pass and high-pass coefficients. Then, the low-pass bands are fused with a weighted average based on the area standard deviation rather than the simple "averaging" rule, while the high-pass bands are merged with the "max-absolute" fusion rule. Finally, the modified low-pass and high-pass coefficients are used to reconstruct the final fused image. The major advantage of the proposed fusion method over conventional fusion is the approximate shift invariance and multidirectional selectivity of the dual-tree Contourlet transform. The proposed method is compared with wavelet- and Contourlet-based methods and other state-of-the-art methods on commonly used multi-focus images. Experiments demonstrate that the proposed fusion framework is feasible and effective, and that it performs better in both subjective and objective evaluation.
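The two fusion rules described above can be sketched in numpy: low-pass bands are blended with weights proportional to the local (area) standard deviation, and high-pass bands keep whichever coefficient has the larger magnitude. The window size and the exact weighting scheme are illustrative assumptions, not necessarily the paper's parameters.

```python
import numpy as np

def area_std(band, win=3):
    """Local standard deviation in a win x win neighbourhood."""
    pad = win // 2
    p = np.pad(band, pad, mode='reflect')
    out = np.empty_like(band, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            out[i, j] = p[i:i + win, j:j + win].std()
    return out

def fuse_lowpass(a, b, win=3, eps=1e-12):
    """Weighted average of two low-pass bands, weights from area std,
    so the locally more 'active' source dominates."""
    wa, wb = area_std(a, win), area_std(b, win)
    return (wa * a + wb * b) / (wa + wb + eps)

def fuse_highpass(a, b):
    """Max-absolute rule: keep the coefficient with larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

Compared with plain averaging, the std-weighted rule avoids washing out a sharp region of one source with a blurred region of the other, which matters for multi-focus inputs.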
Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography.
Zaki, Farzana; Wang, Yahui; Su, Hao; Yuan, Xin; Liu, Xuan
2017-05-01
Optical coherence tomography (OCT) is based on coherence detection of interferometric signals and hence inevitably suffers from speckle noise. To remove speckle noise in OCT images, wavelet domain thresholding has demonstrated significant advantages in suppressing noise magnitude while preserving image sharpness. However, speckle noise in OCT images has different characteristics at different spatial scales, which has not been considered in previous applications of wavelet domain thresholding. In this study, we demonstrate a noise adaptive wavelet thresholding (NAWT) algorithm that exploits the differences in noise characteristics between wavelet sub-bands. The algorithm is simple, fast and effective, and is closely related to the physical origin of speckle noise in OCT images. Our results demonstrate that NAWT outperforms conventional wavelet thresholding.
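The core idea, thresholding each sub-band with its own noise estimate rather than one global value, can be sketched as follows. The MAD noise estimator and the factor k are generic wavelet-denoising choices used here for illustration, not necessarily the estimator NAWT derives from OCT speckle physics.

```python
import numpy as np

def soft(x, t):
    """Soft-threshold: shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adaptive_threshold(subbands, k=3.0):
    """Threshold each detail sub-band with its own noise estimate
    (robust MAD estimator), instead of one global threshold."""
    out = {}
    for name, band in subbands.items():
        sigma = np.median(np.abs(band)) / 0.6745  # per-band noise level
        out[name] = soft(band, k * sigma)
    return out
```

Because each sub-band gets its own sigma, a scale where speckle is strong is thresholded aggressively while a scale carrying mostly structure is barely touched.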
Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification
NASA Astrophysics Data System (ADS)
Sharif, I.; Khare, S.
2014-11-01
With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data challenges current analysis techniques, and conventional classification methods may not be useful without dimension-reduction pre-processing, so dimension reduction has become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition is useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction gives better class separation and yields better or comparable classification accuracy. In the context of the dimensionality-reduction algorithm, Daubechies wavelets give better classification performance than the Haar wavelet, while taking more computation time. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.
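Haar-based spectral reduction amounts to repeated pairwise averaging of a pixel's band values, halving the number of spectral features per level. A minimal sketch, assuming orthonormal Haar scaling and simply dropping a trailing band when the length is odd:

```python
import numpy as np

def haar_reduce(spectrum, levels=2):
    """Repeatedly replace a spectral signature by its Haar approximation
    (pairwise averages), halving the number of bands per level."""
    s = np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        if len(s) % 2:                 # drop a trailing band if odd length
            s = s[:-1]
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)  # orthonormal Haar scaling
    return s
```

A Daubechies variant would replace the two-tap average with a longer filter (e.g. four taps for db2), which is what captures polynomial trends at the cost of extra computation.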
NASA Astrophysics Data System (ADS)
Zhang, Y.; Paulson, K. V.
For audio-frequency magnetotelluric surveys where the signals are lightning-stroke transients, the conventional Fourier transform method often fails to produce a high quality impedance tensor. An alternative approach is to use the wavelet transform method which is capable of localizing target information simultaneously in both the temporal and frequency domains. Unlike Fourier analysis that yields an average amplitude and phase, the wavelet transform produces an instantaneous estimate of the amplitude and phase of a signal. In this paper a complex well-localized wavelet, the Morlet wavelet, has been used to transform and analyze audio-frequency magnetotelluric data. With the Morlet wavelet, the magnetotelluric impedance tensor can be computed directly in the wavelet transform domain. The lightning-stroke transients are easily identified on the dilation-translation plane. Choosing those wavelet transform values where the signals are located, a higher signal-to-noise ratio estimation of the impedance tensor can be obtained. In a test using real data, the wavelet transform showed a significant improvement in the signal-to-noise ratio over the conventional Fourier transform.
Research on fusion algorithm of polarization image in tetrolet domain
NASA Astrophysics Data System (ADS)
Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing
2015-12-01
Tetrolets are Haar-type wavelets whose supports are tetrominoes, shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. Firstly, the polarization-magnitude image and the polarization-angle image are decomposed into low-frequency and high-frequency coefficients at multiple scales and in multiple directions using the tetrolet transform. For the low-frequency coefficients, the average fusion method is used. For the directional high-frequency coefficients, which capture the differing edge distributions of the high-frequency sub-band images, the better coefficients are selected for fusion by a regional spectrum entropy algorithm. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and that the fused image has better subjective visual quality.
Investigation of geomagnetic induced current at high latitude during the storm-time variation
NASA Astrophysics Data System (ADS)
Falayi, E. O.; Ogunmodimu, O.; Bolaji, O. S.; Ayanda, J. D.; Ojoniyi, O. S.
2017-06-01
During geomagnetic disturbances, geomagnetically induced currents (GICs) are driven by the geoelectric field in the conductive Earth. In this paper, we studied the variability of GICs, the time derivative of the geomagnetic field (dB/dt), the geomagnetic indices SYM-H (symmetric disturbance field in H), AU (eastward electrojet) and AL (westward electrojet), and interplanetary parameters such as solar wind speed (v) and the interplanetary magnetic field (Bz) during the geomagnetic storms of 31 March 2001, 21 October 2001, 6 November 2001, 29 October 2003, 31 October 2003 and 9 November 2004, all storms with high solar wind speed due to coronal mass ejections. A wavelet-spectrum approach was employed to analyze the GIC time series over time scales of one to twenty-four hours. Power was concentrated at 14-24 h on 31 March 2001, at 17-24 h on 21 October 2001 and at 1-7 h on 6 November 2001; two peaks were observed, at 5-8 h and 21-24 h, on 29 October 2003; and power was concentrated at 1-3 h on 31 October 2003 and at 18-22 h on 9 November 2004. The bootstrap method was used to obtain regression correlations between the time derivative of the geomagnetic field (dB/dt) and the observed geomagnetically induced currents on 31 March 2001, 21 October 2001, 6 November 2001, 29 October 2003, 31 October 2003 and 9 November 2004, which show distributed clusters of correlation coefficients at around r = -0.567, -0.717, -0.477, -0.419, -0.210 and -0.488, respectively. We observed that high-energy wavelet coefficients correlate well with the bootstrap correlation, while low-energy wavelet coefficients give low bootstrap correlations. Geomagnetic storms thus have a clear influence on GICs and on the geomagnetic field derivative (dB/dt), which may be ascribed to coronal mass ejections and fast solar wind produced by particle acceleration processes in the solar atmosphere.
Determination of optical absorption coefficient with focusing photoacoustic imaging.
Li, Zhifang; Li, Hui; Zeng, Zhiping; Xie, Wenming; Chen, Wei R
2012-06-01
Absorption coefficient of biological tissue is an important factor for photothermal therapy and photoacoustic imaging. However, its determination remains a challenge. In this paper, we propose a method using focusing photoacoustic imaging technique to quantify the target optical absorption coefficient. It utilizes the ratio of the amplitude of the peak signal from the top boundary of the target to that from the bottom boundary based on wavelet transform. This method is self-calibrating. Factors, such as absolute optical fluence, ultrasound parameters, and Grüneisen parameter, can be canceled by dividing the amplitudes of the two peaks. To demonstrate this method, we quantified the optical absorption coefficient of a target with various concentrations of an absorbing dye. This method is particularly useful to provide accurate absorption coefficient for predicting the outcomes of photothermal interaction for cancer treatment with absorption enhancement.
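Under a simple Beer-Lambert assumption, the self-calibrating ratio described above reduces to one line: the bottom-boundary peak is attenuated by exp(-mu_a * d) relative to the top one, so common factors (absolute fluence, ultrasound parameters, Grüneisen parameter) cancel in the ratio. This is an illustrative simplification of the paper's wavelet-based peak-ratio method, and the variable names are hypothetical.

```python
import numpy as np

def absorption_coefficient(a_top, a_bottom, thickness_cm):
    """Estimate mu_a (1/cm) from the photoacoustic peak amplitudes at the
    top and bottom boundaries of a target of known thickness, assuming
    light decays as exp(-mu_a * depth) through the absorber."""
    return np.log(a_top / a_bottom) / thickness_cm
```

Because only the ratio of the two peaks enters, the estimate is insensitive to any factor that scales both peaks equally, which is the sense in which the method is self-calibrating.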
Method and system for determining precursors of health abnormalities from processing medical records
None, None
2013-06-25
Medical reports are converted to document vectors in computing apparatus and sampled by applying a maximum variation sampling function including a fitness function to the document vectors to reduce a number of medical records being processed and to increase the diversity of the medical records being processed. Linguistic phrases are extracted from the medical records and converted to s-grams. A Haar wavelet function is applied to the s-grams over the preselected time interval; and the coefficient results of the Haar wavelet function are examined for patterns representing the likelihood of health abnormalities. This confirms certain s-grams as precursors of the health abnormality and a parameter can be calculated in relation to the occurrence of such a health abnormality.
NASA Astrophysics Data System (ADS)
Wang, Zhuozheng; Deller, J. R.; Fleet, Blair D.
2016-01-01
Acquired digital images are often corrupted by a lack of camera focus, faulty illumination, or missing data. An algorithm is presented for fusion of multiple corrupted images of a scene using the lifting wavelet transform. The method employs adaptive fusion arithmetic based on matrix completion and self-adaptive regional variance estimation. Characteristics of the wavelet coefficients are used to adaptively select fusion rules. Robust principal component analysis is applied to low-frequency image components, and regional variance estimation is applied to high-frequency components. Experiments reveal that the method is effective for multifocus, visible-light, and infrared image fusion. Compared with traditional algorithms, the new algorithm not only increases the amount of preserved information and clarity but also improves robustness.
Paul, R R; Mukherjee, A; Dutta, P K; Banerjee, S; Pal, M; Chatterjee, J; Chaudhuri, K; Mukkerjee, K
2005-01-01
Aim: To describe a novel neural network based oral precancer (oral submucous fibrosis; OSF) stage detection method. Method: The wavelet coefficients of transmission electron microscopy images of collagen fibres from normal oral submucosa and OSF tissues were used to choose the feature vector which, in turn, was used to train the artificial neural network. Results: The trained network was able to classify normal and oral precancer stages (less advanced and advanced) after obtaining the image as an input. Conclusions: The results obtained from this proposed technique were promising and suggest that with further optimisation this method could be used to detect and stage OSF, and could be adapted for other conditions. PMID:16126873
Anastasiadou, Maria N; Christodoulakis, Manolis; Papathanasiou, Eleftherios S; Papacostas, Savvas S; Mitsis, Georgios D
2017-09-01
This paper proposes supervised and unsupervised algorithms for automatic muscle artifact detection and removal from long-term EEG recordings, which combine canonical correlation analysis (CCA) and wavelets with random forests (RF). The proposed algorithms first perform CCA and continuous wavelet transform of the canonical components to generate a number of features which include component autocorrelation values and wavelet coefficient magnitude values. A subset of the most important features is subsequently selected using RF and labelled observations (supervised case) or synthetic data constructed from the original observations (unsupervised case). The proposed algorithms are evaluated using realistic simulation data as well as 30-min epochs of non-invasive EEG recordings obtained from ten patients with epilepsy. We assessed the performance of the proposed algorithms using classification performance and goodness-of-fit values for noisy and noise-free signal windows. In the simulation study, where the ground truth was known, the proposed algorithms yielded almost perfect performance. In the case of experimental data, where expert marking was performed, the results suggest that both the supervised and unsupervised algorithm versions were able to remove artifacts without affecting noise-free channels considerably, outperforming standard CCA, independent component analysis (ICA) and Lagged Auto-Mutual Information Clustering (LAMIC). The proposed algorithms achieved excellent performance for both simulation and experimental data. Importantly, for the first time to our knowledge, we were able to perform entirely unsupervised artifact removal, i.e. without using already marked noisy data segments, achieving performance that is comparable to the supervised case. 
Overall, the results suggest that the proposed algorithms yield significant future potential for improving EEG signal quality in research or clinical settings without the need for marking by expert neurophysiologists, EMG signal recording and user visual inspection. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin
2013-01-01
Background: The time evolution and complex interactions of many nonlinear systems, such as those in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales, described by a power law in the frequency spectrum S(f) = 1/f^β. The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. New method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: Our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with existing methods: Class-dependent methods proved to be unsuitable for physiological time series, and detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509
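The averaged wavelet coefficient idea can be sketched as follows: average the magnitude of the detail coefficients at each decomposition level, then fit the log-log slope against scale. For a 1/f^β process, detail variance grows as 2^(jβ), so the slope of log2(mean |d_j|) versus level j is approximately β/2. The Haar filter and the number of levels below are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def averaged_wavelet_coefficients(x, levels=6):
    """Mean absolute orthonormal Haar detail coefficient at each level."""
    s = np.asarray(x, dtype=float)
    means = []
    for _ in range(levels):
        if len(s) % 2:
            s = s[:-1]
        d = (s[0::2] - s[1::2]) / np.sqrt(2.0)   # detail coefficients
        s = (s[0::2] + s[1::2]) / np.sqrt(2.0)   # approximation
        means.append(np.mean(np.abs(d)))
    return np.array(means)

def spectral_slope(x, levels=6):
    """Least-squares slope of log2(mean |detail|) vs level j; maps to
    roughly beta/2 for a 1/f^beta process."""
    m = averaged_wavelet_coefficients(x, levels)
    j = np.arange(1, levels + 1)
    return np.polyfit(j, np.log2(m), 1)[0]
```

As sanity checks, white noise (β = 0) gives a slope near 0, and a random walk (β = 2) gives a slope near 1; averaging over many coefficients per level is what keeps the estimation variance low relative to methods like detrended fluctuation analysis.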
Spatial and temporal stability of temperature in the first-level basins of China during 1951-2013
NASA Astrophysics Data System (ADS)
Cheng, Yuting; Li, Peng; Xu, Guoce; Li, Zhanbin; Cheng, Shengdong; Wang, Bin; Zhao, Binhua
2018-05-01
In recent years, global warming has attracted great attention around the world. Temperature change is not only involved in global climate change but also closely linked to economic development, the ecological environment, and agricultural production. In this study, based on temperature data recorded by 756 meteorological stations in China during 1951-2013, the spatial and temporal stability characteristics of annual temperature in China and its first-level basins were investigated using the rank correlation coefficient method, the relative difference method, rescaled range (R/S) analysis, and wavelet transforms. The results showed that during 1951-2013, the spatial variation of annual temperature belonged to moderate variability in the national level. Among the first-level basins, the largest variation coefficient was 114% in the Songhuajiang basin and the smallest variation coefficient was 10% in the Huaihe basin. During 1951-2013, the spatial distribution pattern of annual temperature presented extremely strong spatial and temporal stability characteristics in the national level. The variation range of Spearman's rank correlation coefficient was 0.97-0.99, and the spatial distribution pattern of annual temperature showed an increasing trend. In the national level, the Liaohe basin, the rivers in the southwestern region, the Haihe basin, the Yellow River basin, the Yangtze River basin, the Huaihe basin, the rivers in the southeastern region, and the Pearl River basin all had representative meteorological stations for annual temperature. In the Songhuajiang basin and the rivers in the northwestern region, there was no representative meteorological station. R/S analysis, the Mann-Kendall test, and the Morlet wavelet analysis of annual temperature showed that the best representative meteorological station could reflect the variation trend and the main periodic changes of annual temperature in the region. 
Therefore, strong temporal stability characteristics exist for annual temperature in China and its first-level basins. It is thus feasible to estimate the annual average temperature from the annual temperature recorded at the representative meteorological station in the region, which is of great significance for assessing average temperature changes quickly and forecasting future change tendencies in the region.
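R/S analysis, one of the methods named above, estimates the Hurst exponent from how the rescaled range grows with window size. A minimal pure-Python sketch of the standard procedure (illustrative only; the study's exact implementation is not given in the abstract):

```python
import math

def rescaled_range(segment):
    """R/S of one segment: range of cumulative mean-deviations over std."""
    n = len(segment)
    mean = sum(segment) / n
    dev, cum = 0.0, []
    for x in segment:
        dev += x - mean
        cum.append(dev)
    r = max(cum) - min(cum)                                  # range R
    s = math.sqrt(sum((x - mean) ** 2 for x in segment) / n)  # std S
    return r / s if s > 0 else 0.0

def hurst_exponent(series, min_chunk=8):
    """Slope of log(mean R/S) vs log(window size) over dyadic windows."""
    n = len(series)
    xs, ys = [], []
    size = min_chunk
    while size <= n // 2:
        rs_vals = [rescaled_range(series[i:i + size])
                   for i in range(0, n - size + 1, size)]
        xs.append(math.log(size))
        ys.append(math.log(sum(rs_vals) / len(rs_vals)))
        size *= 2
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

An exponent near 0.5 indicates no long-range persistence; values above 0.5 (as typically found for temperature series) indicate a persistent trend.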
NASA Astrophysics Data System (ADS)
Klausner, Virginia; Domingues, Margarete Oliveira; Mendes, Odim; da Costa, Aracy Mendes; Papa, Andres Reinaldo Rodriguez; Gonzalez, Arian Ojeda
2016-11-01
Coronal mass ejections are the primary cause of the highly disturbed conditions observed in the magnetosphere. Momentum and energy from the solar wind are transferred to the Earth's magnetosphere mainly via magnetic reconnection, which produces open field lines connecting the Earth's magnetic field to the solar wind. Magnetospheric currents are coupled to the ionosphere through field-aligned currents. This particular characteristic of the magnetosphere-ionosphere interconnection is discussed here on the basis of the energy transfer from high latitudes (auroral currents) to low latitudes (ring current). The objective of this work is to examine how the conditions during a magnetic storm can affect the global space and time configuration of the ring current, and how these processes can affect the region of the South Atlantic Magnetic Anomaly. The H- or X-components of the Earth's magnetic field were examined using a set of six magnetometers approximately aligned around the geographic longitudes of about 10°, 140° and 295°, from latitudes of 70°N to 70°S, and aligned throughout the equatorial region, for the event of October 18-22, 1998. The investigation of simultaneous observations measured at different locations makes it possible to determine the effects of the magnetosphere-ionosphere coupling and to establish some relationships among them. This work also compares the responses of the aligned magnetic observatories to the responses in the South Atlantic Magnetic Anomaly region. The major contribution of this paper is related to the applied methodology of the discrete wavelet transform. The wavelet coefficients are used as a filter to extract the high-frequency information from the analyzed magnetograms. They also better represent information about the injections of energy and, consequently, the disturbances of the geomagnetic field measured on the ground. As a result, we present a better way to visualize the correlation between the X- or H-components.
In the latitude range from ∼40°S to ∼60°N, the wavelet signatures do not show remarkable differences, except for the amplitudes of the wavelet coefficients. The sequence of transient field variations detected at auroral latitudes is probably associated with occurrences of substorms, while at lower latitudes, these variations are associated with the enhancement of the ring current.
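The filtering role that the abstract attributes to wavelet coefficients can be illustrated with a one-level Haar transform, whose detail coefficients act as the high-pass output. This is a hedged sketch, not the authors' code (their choice of wavelet and decomposition depth is not stated here):

```python
def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / 2 ** 0.5)  # low-pass: smooth background field
        detail.append((a - b) / 2 ** 0.5)  # high-pass: transient variations
    return approx, detail

def transient_amplitude(signal, levels=3):
    """Per-level sum of |detail| -- a crude proxy for disturbance activity."""
    out, current = [], list(signal)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        out.append(sum(abs(d) for d in detail))
    return out
```

A quiet (constant) magnetogram yields zero detail amplitude at every level; a sudden field variation produces nonzero detail coefficients localized in time.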
Martini, Romeo; Bagno, Andrea
2018-04-14
Wavelet analysis has been applied to laser Doppler fluxmetry to assess the frequency spectrum of flowmotion and thereby study microvascular function waves. Although the application of wavelet analysis has allowed a detailed evaluation of microvascular function, its use does not seem to have become widespread over the last two decades. Aiming to improve the diffusion of this methodology, we herein present a systematic review of the literature on the application of wavelet analysis to the laser Doppler fluxmetry signal. A computerized search was performed on the PubMed and Scopus databases from January 1990 to December 2017. The search terms were "wavelet analysis", "wavelet transform analysis", and "Morlet wavelet transform", along with the terms "laser Doppler", "laserdoppler" and/or "flowmetry" or "fluxmetry". One hundred and eighteen studies were found. After scrutiny, 97 studies reporting data on humans were selected. Fifty-three studies, a 54.0% (95% CI 44.2-63.6%) pooled rate, were performed on 892 healthy subjects, and 44 studies, a 45.9% (95% CI 36.3-55.7%) pooled rate, were performed on 1679 patients. No significant difference was found between the two groups (p = 0.81). On average, the number of studies published each year was 4.8 (95% CI 3.4-6.2). The production of studies increased significantly from 1998 to 2017 (p = 0.0006), but only the studies on patients showed a significant increasing trend over the years (p = 0.0003), unlike the studies on healthy subjects (p = 0.09). In conclusion, this review highlights that, despite being a promising and interesting methodology for the study of microcirculatory function, wavelet analysis remains neglected.
Image Retrieval using Integrated Features of Binary Wavelet Transform
NASA Astrophysics Data System (ADS)
Agarwal, Megha; Maheshwari, R. P.
2011-12-01
In this paper a new approach for image retrieval is proposed based on the binary wavelet transform. The approach computes features by integrating histogram and correlogram features extracted from binary wavelet subbands. Experiments are performed to evaluate and compare the performance of the proposed method with the published literature. It is verified that the average precision and average recall of the proposed method (69.19%, 41.78%) are significantly improved compared to the optimal quantized wavelet correlogram (OQWC) [6] (64.3%, 38.00%) and the Gabor wavelet correlogram (GWC) [10] (64.1%, 40.6%). All the experiments are performed on the Corel 1000 natural image database [20].
Determination of total polyphenol index in wines employing a voltammetric electronic tongue.
Cetó, Xavier; Gutiérrez, Juan Manuel; Gutiérrez, Manuel; Céspedes, Francisco; Capdevila, Josefina; Mínguez, Santiago; Jiménez-Jorquera, Cecilia; del Valle, Manel
2012-06-30
This work reports the application of a voltammetric electronic tongue system (ET) made from an array of modified graphite-epoxy composites plus a gold microelectrode in the qualitative and quantitative analysis of polyphenols found in wine. Wine samples were analyzed using cyclic voltammetry without any sample pretreatment. The obtained responses were preprocessed employing discrete wavelet transform (DWT) in order to compress and extract significant features from the voltammetric signals, and the obtained approximation coefficients fed a multivariate calibration method (artificial neural network-ANN-or partial least squares-PLS-) which accomplished the quantification of total polyphenol content. External test subset samples results were compared with the ones obtained with the Folin-Ciocalteu (FC) method and UV absorbance polyphenol index (I(280)) as reference values, with highly significant correlation coefficients of 0.979 and 0.963 in the range from 50 to 2400 mg L(-1) gallic acid equivalents, respectively. In a separate experiment, qualitative discrimination of different polyphenols found in wine was also assessed by principal component analysis (PCA). Copyright © 2012 Elsevier B.V. All rights reserved.
Embedded wavelet-based face recognition under variable position
NASA Astrophysics Data System (ADS)
Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi
2015-02-01
For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, electronic device unlocking and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, face database size can be divided by a factor 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the execution mean time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).
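The factor-64 reduction follows directly from keeping only the approximation band: each 2D decomposition level stores 4x fewer coefficients, so K = 3 levels give 4^3 = 64. A minimal sketch using the orthonormal 2D Haar LL band (illustrative; the paper's wavelet is not specified in this abstract):

```python
def haar_approx_2d(img, levels):
    """Keep only the 2D Haar approximation (LL) band at each level;
    every level shrinks the stored data by a factor of 4."""
    for _ in range(levels):
        img = [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1]
                 + img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 2.0
                for c in range(len(img[0]) // 2)]
               for r in range(len(img) // 2)]
    return img
```

For an 8x8 input and 3 levels, 64 pixels collapse to a single coefficient, matching the 2^(2K) = 64 storage reduction quoted above.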
NASA Astrophysics Data System (ADS)
Wang, Dong; Ding, Hao; Singh, Vijay P.; Shang, Xiaosan; Liu, Dengfeng; Wang, Yuankun; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing
2015-05-01
For scientific and sustainable management of water resources, hydrologic and meteorologic data series often need to be extended. This paper proposes a hybrid approach, named WA-CM (wavelet analysis-cloud model), for data series extension. Wavelet analysis has time-frequency localization features, known as a "mathematics microscope," which can decompose and reconstruct hydrologic and meteorologic series by wavelet transform. The cloud model is a mathematical representation of fuzziness and randomness and has strong robustness for uncertain data. The WA-CM approach first employs the wavelet transform to decompose the measured nonstationary series and then uses the cloud model to develop an extension model for each decomposition layer series. The final extension is obtained by summing the results of the extension of each layer. Two kinds of meteorologic and hydrologic data sets with different characteristics and different influence of human activity from six (three pairs of) representative stations are used to illustrate the WA-CM approach. The approach is also compared with four other methods: the conventional correlation extension method, the Kendall-Theil robust line method, the artificial neural network method (back propagation, multilayer perceptron, and radial basis function), and the single cloud model method. To evaluate the model performance completely and thoroughly, five measures are used: relative error, mean relative error, standard deviation of relative error, root mean square error, and the Theil inequality coefficient. Results show that the WA-CM approach is effective, feasible, and accurate and is found to be better than the other four methods compared. The theory employed and the approach developed here can be applied to the extension of data in other areas as well.
NASA Astrophysics Data System (ADS)
Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja
2008-03-01
Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block based codecs because of the inherent spatial independency of the code blocks. This independency implies that the decoding order of the blocks is unimportant as long as the position for each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far less complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain, and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images since only a small fraction of the codestream is required to be transmitted and analyzed.
Hegazy, Maha A; Lotfy, Hayam M; Mowaka, Shereen; Mohamed, Ekram Hany
2016-07-05
Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study on the efficiency of continuous wavelet transform (CWT) as a signal processing tool in univariate regression and a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS) was conducted. These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). In contrast, the univariate CWT failed to simultaneously determine the quaternary mixture components; it was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF, and PAR and of CAF, PAR, and PAP. During the calculations of CWT, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design, and the absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficient and concentration matrices, and validation was performed by both cross validation and external validation sets. Both methods were successfully applied for determination of the studied drugs in pharmaceutical formulations. Copyright © 2016 Elsevier B.V. All rights reserved.
Poisson denoising on the sphere: application to the Fermi gamma ray space telescope
NASA Astrophysics Data System (ADS)
Schmitt, J.; Starck, J. L.; Casandjian, J. M.; Fadili, J.; Grenier, I.
2010-07-01
The Large Area Telescope (LAT), the main instrument of the Fermi Gamma-ray Space Telescope, detects high energy gamma rays with energies from 20 MeV to more than 300 GeV. The two main scientific objectives, the study of the Milky Way diffuse background and the detection of point sources, are complicated by the lack of photons. That is why we need a powerful Poisson noise removal method on the sphere which is efficient on low count Poisson data. This paper presents a new multiscale decomposition on the sphere for data with Poisson noise, called the multi-scale variance stabilizing transform on the sphere (MS-VSTS). This method is based on a variance stabilizing transform (VST), a transform which aims to stabilize a Poisson data set such that each stabilized sample has a quasi-constant variance. In addition, for the VST used in the method, the transformed data are asymptotically Gaussian. MS-VSTS consists of decomposing the data on a sparse multi-scale dictionary (such as wavelets or curvelets), and then applying a VST on the coefficients in order to get almost Gaussian stabilized coefficients. In this work, we use the isotropic undecimated wavelet transform (IUWT) and the curvelet transform as spherical multi-scale transforms. Then, binary hypothesis testing is carried out to detect significant coefficients, and the denoised image is reconstructed with an iterative algorithm based on hybrid steepest descent (HSD). To detect point sources, we have to extract the Galactic diffuse background: an extension of the method to background separation is then proposed. Conversely, to study the Milky Way diffuse background, we remove point sources with a binary mask. The gaps have to be interpolated: an extension to inpainting is then proposed. The method, applied on simulated Fermi LAT data, proves to be adaptive, fast and easy to implement.
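The idea of a variance stabilizing transform can be illustrated with the classical Anscombe transform, a simpler relative of the MS-VSTS described above (this sketch is illustrative and is not the paper's transform):

```python
import math
import random

def anscombe(k):
    """Anscombe transform: maps Poisson counts to roughly unit-variance,
    roughly Gaussian values."""
    return 2.0 * math.sqrt(k + 3.0 / 8.0)

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def stabilized_variance(lam, n=4000, seed=1):
    """Empirical variance of Anscombe-transformed Poisson(lam) samples."""
    rng = random.Random(seed)
    vals = [anscombe(poisson(lam, rng)) for _ in range(n)]
    m = sum(vals) / n
    return sum((v - m) ** 2 for v in vals) / n
```

After the transform, the variance stays close to 1 regardless of the Poisson mean, which is exactly the property that lets Gaussian-based thresholding be applied to the stabilized coefficients.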
Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B
2013-03-01
Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size leads to the minimization of the unwanted spreading of coefficient values around overlapping image singularities. This usually complicates the feature selection process and may lead to the introduction of reconstruction errors in the fused image. Moreover, we will show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework, which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.
Dessouky, Mohamed M; Elrashidy, Mohamed A; Taha, Taha E; Abdelkader, Hatem M
2016-05-01
The different discrete transform techniques such as the discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and the MFCC technique. A linear support vector machine has been used as a classifier in this article. Experimental results conclude that the proposed CAD system using the MFCC technique for AD recognition greatly improves system performance with a small number of significant extracted features, as compared with the CAD systems based on DCT, DST, DWT, and hybrid combinations of the different transform techniques. © The Author(s) 2015.
Hyperspectral image compressing using wavelet-based method
NASA Astrophysics Data System (ADS)
Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng
2017-10-01
Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands. Therefore, each object present in the image can be identified from its spectral response. However, such imaging brings a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three main steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
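The first step, decomposing the band set into subspaces from cross-band correlations, can be sketched as grouping spectrally adjacent bands wherever their correlation stays high. The threshold and grouping rule below are assumptions for illustration, not the paper's exact algorithm:

```python
def band_subspaces(bands, threshold=0.9):
    """Group adjacent spectral bands into subspaces; start a new subspace
    whenever the adjacent-band correlation falls below `threshold`.
    `bands` is a list of flattened band images (lists of floats)."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        if sx == 0 or sy == 0:
            return 0.0
        return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

    groups, current = [], [0]
    for i in range(1, len(bands)):
        if corr(bands[i - 1], bands[i]) >= threshold:
            current.append(i)   # band i stays in the current subspace
        else:
            groups.append(current)
            current = [i]       # correlation dropped: open a new subspace
    groups.append(current)
    return groups
```

Each returned group of band indices would then be compressed independently by the wavelet stage.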
NASA Astrophysics Data System (ADS)
Chen, Bo; Wen, Zengping; Wang, Fang
2017-04-01
Using near-fault strong motions recorded at the KATNP station in the city center of Kathmandu during the Mw 7.8 Nepal earthquake, the velocity-pulse and non-stationary characteristics of the strong motions are shown, and the causes and potential effects on earthquake damage of the intense non-stationary characteristics of near-fault velocity-pulse strong motions are studied. The observed strong ground motions of the main shock were collected at the KATNP station, located 76 km southeast of the epicenter along the forward direction of the fault rupture, in an inter-montane basin of the Himalaya. A large velocity pulse is observed: its period reaches 6.6 s, and the peak ground velocity of the pulse-like ground motion is 120 cm/s. Compared with the median spectral acceleration of the NGA prediction equation, a significant long-period amplification effect due to the velocity pulse is detected at periods beyond 3.2 s. Wavelet analysis shows that the two horizontal components of ground motion exhibit an intense concentration of energy in a short time range of 25-38 s and a period range of 4-8 s. The maximum wavelet coefficient of the horizontal components is 2455, about four times that of the vertical component. In this study, large velocity pulses are identified from the two orthogonal components using the wavelet method. The intense non-stationary characteristics of amplitude and frequency content are mainly caused by site conditions and the fault rupture mechanism, an observation that will help in damage evaluation and local seismic design.
Synthesis of vibroarthrographic signals in knee osteoarthritis diagnosis training.
Shieh, Chin-Shiuh; Tseng, Chin-Dar; Chang, Li-Yun; Lin, Wei-Chun; Wu, Li-Fu; Wang, Hung-Yu; Chao, Pei-Ju; Chiu, Chien-Liang; Lee, Tsair-Fwu
2016-07-19
Vibroarthrographic (VAG) signals are useful indicators of knee osteoarthritis (OA) status. The objective was to build a template database of knee crepitus sounds. Trainees can practice with the template database to shorten the time needed to train for the diagnosis of OA. A knee sound signal was obtained using an innovative stethoscope device with a goniometer. Each knee sound signal was recorded with a Kellgren-Lawrence (KL) grade. The sound signal was segmented according to the goniometer data. The signal was Fourier transformed on the correlated frequency segment. An inverse Fourier transform was performed to obtain the time-domain signal. A Haar wavelet transform was then applied. The median and mean of the wavelet coefficients were used to inverse-transform the synthesized signal in each KL category. The quality of the synthesized signal was assessed by a clinician. The sample signals were evaluated using the two algorithms (median and mean). The accuracy rate of the median coefficient algorithm (93%) was better than that of the mean coefficient algorithm (88%) in cross-validation by a clinician using the synthesized VAG signals. The artificial signals we synthesized have the potential to support a learning system for medical students, trainees, and paramedical personnel in the diagnosis of OA. Therefore, our method provides a feasible way to evaluate crepitus sounds that may assist in the diagnosis of knee OA.
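The core of the synthesis pipeline, taking the median of per-signal Haar coefficients and inverse-transforming the result, can be sketched as follows (one decomposition level only; the paper's decomposition depth and segmentation steps are omitted, so this is a simplified illustration):

```python
def haar_forward(x):
    """One-level orthonormal Haar analysis."""
    a = [(x[i] + x[i + 1]) / 2 ** 0.5 for i in range(0, len(x) - 1, 2)]
    d = [(x[i] - x[i + 1]) / 2 ** 0.5 for i in range(0, len(x) - 1, 2)]
    return a, d

def haar_inverse(a, d):
    """One-level orthonormal Haar synthesis (exact inverse of the above)."""
    out = []
    for ai, di in zip(a, d):
        out.append((ai + di) / 2 ** 0.5)
        out.append((ai - di) / 2 ** 0.5)
    return out

def median(vals):
    s = sorted(vals)
    m = len(s) // 2
    return s[m] if len(s) % 2 else (s[m - 1] + s[m]) / 2.0

def synthesize(signals):
    """Template signal: coefficient-wise median across same-grade signals,
    then inverse Haar transform."""
    coeffs = [haar_forward(s) for s in signals]
    a_med = [median([c[0][i] for c in coeffs]) for i in range(len(coeffs[0][0]))]
    d_med = [median([c[1][i] for c in coeffs]) for i in range(len(coeffs[0][1]))]
    return haar_inverse(a_med, d_med)
```

When all input signals in a KL category are identical, the synthesized template reproduces them exactly; with varied inputs, the median suppresses outlier recordings, which matches its better cross-validation accuracy reported above.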
NASA Astrophysics Data System (ADS)
Baccar, D.; Söffker, D.
2017-11-01
Acoustic Emission (AE) is a suitable method to monitor the health of composite structures in real time. However, AE-based failure mode identification and classification remain complex to apply because AE waves are generally released simultaneously from all AE-emitting damage sources. Hence, the use of advanced signal processing techniques in combination with pattern recognition approaches is required. In this paper, AE signals generated from laminated carbon fiber reinforced polymer (CFRP) subjected to an indentation test are examined and analyzed. A new pattern recognition approach, involving a number of processing steps that can be implemented in real time, is developed. Unlike common classification approaches, here only continuous wavelet transform (CWT) coefficients are extracted as relevant features. First, the CWT is applied to the AE signals. Then, dimensionality reduction using Principal Component Analysis (PCA) is carried out on the coefficient matrices. The PCA-based feature distribution is analyzed using Kernel Density Estimation (KDE), allowing the determination of a specific pattern for each fault-specific AE signal. Moreover, the waveform and frequency content of the AE signals are examined in depth and compared with fundamental assumptions reported in this field. A correlation between the identified patterns and failure modes is achieved. The introduced method improves damage classification and can be used as a non-destructive evaluation tool.
NASA Astrophysics Data System (ADS)
Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao
2013-12-01
A popular approach for medical image reconstruction has been through sparsity regularization, assuming the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system due to its capability for sparsely approximating piecewise-smooth functions, such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to the structures of the targeted images have demonstrated their superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs an adaptive wavelet tight frame that is task specific, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. The proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves the reconstructed CT image quality over the traditional tight frame method.
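Solvers for such l1-regularized problems repeatedly apply a shrinkage (soft-thresholding) step to the frame coefficients, which is the closed-form proximal map of the l1 penalty. A minimal sketch of that single step (the paper's full iterative scheme and frame construction are not reproduced):

```python
def soft_threshold(c, t):
    """Proximal operator of t*|.|: shrinks a coefficient toward zero."""
    if c > t:
        return c - t
    if c < -t:
        return c + t
    return 0.0

def shrink_coefficients(coeffs, t):
    """Apply the shrinkage to a whole frame-coefficient vector: large
    (structure-bearing) coefficients survive, small ones are zeroed."""
    return [soft_threshold(c, t) for c in coeffs]
```

In an iterative solver (e.g. ISTA-type schemes), this step alternates with a data-fidelity gradient step until convergence.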
Yahia, K; Cardoso, A J M; Ghoggal, A; Zouzou, S E
2014-03-01
Fast Fourier transform (FFT) analysis has been successfully used for fault diagnosis in induction machines. However, this method does not always provide good results for the cases of load torque, speed and voltage variation, which lead to a variation of the motor slip and the consequent FFT problems that appear due to the non-stationary nature of the involved signals. In this paper, the discrete wavelet transform (DWT) of the apparent-power signal for airgap-eccentricity fault detection in three-phase induction motors is presented in order to overcome the above FFT problems. The proposed method is based on the decomposition of the apparent-power signal, from which wavelet approximation and detail coefficients are extracted. The energy evaluation of a known bandwidth permits the definition of a fault severity factor (FSF). Simulation as well as experimental results are provided to illustrate the effectiveness and accuracy of the proposed method, even in the case of load torque variations. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
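The FSF idea, the energy of a fault-related wavelet band relative to the total signal energy, can be sketched with a Haar decomposition (the paper's wavelet family, band choice, and normalization may differ; this is an illustrative assumption):

```python
def haar_details(signal, level):
    """Detail coefficients of the requested Haar decomposition level."""
    a = list(signal)
    for _ in range(level):
        d = [(a[i] - a[i + 1]) / 2 ** 0.5 for i in range(0, len(a) - 1, 2)]
        a = [(a[i] + a[i + 1]) / 2 ** 0.5 for i in range(0, len(a) - 1, 2)]
    return d

def fault_severity_factor(signal, level):
    """Energy of one detail band divided by total signal energy: near 0
    for a healthy (smooth) signature, approaching 1 when the band
    dominates."""
    d = haar_details(signal, level)
    return sum(x * x for x in d) / sum(x * x for x in signal)
```

A monitoring scheme would track this ratio over time and flag the fault when it exceeds a calibrated threshold.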
Extended AIC model based on high order moments and its application in the financial market
NASA Astrophysics Data System (ADS)
Mao, Xuegeng; Shang, Pengjian
2018-07-01
In this paper, an extended method of the traditional Akaike Information Criterion (AIC) is proposed to detect the volatility of time series by combining it with higher order moments, such as skewness and kurtosis. Since measures considering higher order moments are powerful in many aspects, the properties of asymmetry and flatness can be observed. Furthermore, in order to reduce the effect of noise and other incoherent features, we combine the extended AIC algorithm with multiscale wavelet analysis, in which the newly extended AIC algorithm is applied to wavelet coefficients at several scales and the time series are reconstructed by wavelet transform. After that, we create AIC planes to derive the relationship among AIC values using variance, skewness and kurtosis respectively. We test this technique on the financial market, aiming to analyze the trend and volatility of the closing prices of stock indices and classify them. We also adopt multiscale analysis to measure the complexity of time series over a range of scales. Empirical results show that singularities of time series in the stock market can be detected via the extended AIC algorithm.
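For reference, the classic variance-based AIC for a least-squares fit, which the extended criterion generalizes by substituting higher order moments (the extension itself is not reproduced here; this is only the standard Gaussian-residual form):

```python
import math

def aic_gaussian(residuals, k):
    """Classic AIC for a least-squares fit with k free parameters:
    n * ln(RSS / n) + 2k (constant terms dropped). Lower is better."""
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * k
```

Comparing candidate models, the one with smaller residuals wins unless its extra parameters outweigh the fit improvement via the 2k penalty.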
DOE Office of Scientific and Technical Information (OSTI.GOV)
Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B
2011-01-01
We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part which is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.
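The CVE thresholding step can be sketched with the universal (Donoho-style) threshold, splitting the wavelet coefficients into a coherent and an incoherent part. The actual CVE iterates the noise-variance estimate, which this sketch omits:

```python
import math

def cve_split(coeffs, noise_var):
    """Split wavelet coefficients at the universal threshold
    sqrt(2 * var * ln N): the coherent part keeps the large coefficients
    (organized vortices), the incoherent part keeps the rest (random
    background)."""
    n = len(coeffs)
    t = math.sqrt(2.0 * noise_var * math.log(n))
    coherent = [c if abs(c) > t else 0.0 for c in coeffs]
    incoherent = [c if abs(c) <= t else 0.0 for c in coeffs]
    return coherent, incoherent
```

Inverse-transforming each part separately yields the coherent-vortex field and the structureless residual whose statistics the paper compares.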
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and 10-m SPOT panchromatic imagery illustrate the reduction in artifacts due to the SIDWT based fusion.
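The point-wise maximum selection rule combines detail coefficients by magnitude; a common companion choice, assumed here since the abstract does not state it, averages the approximation band:

```python
def fuse_details(a, b):
    """Point-wise maximum-magnitude selection: at each position keep the
    detail coefficient (edge/texture response) with the larger magnitude."""
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]

def fuse_approx(a, b):
    """Approximation (low-pass) coefficients are commonly averaged instead,
    preserving the overall radiometry of both inputs."""
    return [(x + y) / 2.0 for x, y in zip(a, b)]
```

Applying these rules per subband and inverse-transforming yields the fused image; with a shift-invariant transform such as the SIDWT, the selection is less sensitive to small misregistrations.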
Classification of EMG signals using PSO optimized SVM for diagnosis of neuromuscular disorders.
Subasi, Abdulhamit
2013-06-01
Support vector machine (SVM) is an extensively used machine learning method with many biomedical signal classification applications. In this study, a novel PSO-SVM model is proposed that hybridizes particle swarm optimization (PSO) and SVM to improve EMG signal classification accuracy. The optimization mechanism involves kernel parameter setting in the SVM training procedure, which significantly influences the classification accuracy. The experiments were conducted on EMG signals to classify them as normal, neurogenic or myopathic. In the proposed method the EMG signals were decomposed into frequency sub-bands using the discrete wavelet transform (DWT), and a set of statistical features was extracted from these sub-bands to represent the distribution of wavelet coefficients. The obtained results clearly validate the superiority of the SVM method compared to conventional machine learning methods, and suggest that further significant enhancements in terms of classification accuracy can be achieved by the proposed PSO-SVM classification system. The PSO-SVM yielded an overall accuracy of 97.41% on 1200 EMG signals selected from 27 subject records, against 96.75%, 95.17% and 94.08% for the SVM, k-NN and RBF classifiers, respectively. PSO-SVM is developed as an efficient tool so that various SVMs can be used conveniently as the core of PSO-SVM for diagnosis of neuromuscular disorders. Copyright © 2013 Elsevier Ltd. All rights reserved.
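Extracting statistical features from DWT sub-bands can be sketched as below. The exact feature set used in the paper may differ; treat these particular statistics (mean absolute value, standard deviation, power) as an illustrative choice:

```python
import math

def subband_stats(coeffs):
    """Summary statistics describing the distribution of wavelet
    coefficients in one sub-band: mean of |c|, standard deviation,
    and mean power.  One such vector per sub-band is concatenated
    to form the classifier's feature vector."""
    n = len(coeffs)
    mean_abs = sum(abs(c) for c in coeffs) / n
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n
    power = sum(c * c for c in coeffs) / n
    return [mean_abs, math.sqrt(var), power]
```

Concatenating such low-dimensional summaries, rather than the raw coefficients, is what keeps the SVM input compact.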
Wavelet entropy of BOLD time series: An application to Rolandic epilepsy.
Gupta, Lalit; Jansen, Jacobus F A; Hofman, Paul A M; Besseling, René M H; de Louw, Anton J A; Aldenkamp, Albert P; Backes, Walter H
2017-12-01
To assess the wavelet entropy for the characterization of intrinsic aberrant temporal irregularities in the time series of resting-state blood-oxygen-level-dependent (BOLD) signal fluctuations. Further, to evaluate the temporal irregularities (disorder/order) on a voxel-by-voxel basis in the brains of children with Rolandic epilepsy. The BOLD time series was decomposed using the discrete wavelet transform and the wavelet entropy was calculated. Using a model time series consisting of multiple harmonics and nonstationary components, the wavelet entropy was compared with Shannon and spectral (Fourier-based) entropy. As an application, the wavelet entropy in 22 children with Rolandic epilepsy was compared to 22 age-matched healthy controls. The images were obtained by performing resting-state functional magnetic resonance imaging (fMRI) using a 3T system, an 8-element receive-only head coil, and a T2*-weighted echo planar imaging pulse sequence. The wavelet entropy was also compared to spectral entropy, regional homogeneity, and Shannon entropy. Wavelet entropy was found to identify the nonstationary components of the model time series. In Rolandic epilepsy patients, a significantly elevated wavelet entropy was observed relative to controls for the whole cerebrum (P = 0.03). Spectral entropy (P = 0.41), regional homogeneity (P = 0.52), and Shannon entropy (P = 0.32) did not reveal significant differences. The wavelet entropy measure appeared more sensitive in detecting abnormalities in cerebral fluctuations represented by nonstationary effects in the BOLD time series than more conventional measures. This effect was observed in the model time series as well as in Rolandic epilepsy. These observations suggest that the brains of children with Rolandic epilepsy exhibit stronger nonstationary temporal signal fluctuations than controls. Level of Evidence: 2. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2017;46:1728-1737. © 2017 International Society for Magnetic Resonance in Medicine.
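A common definition of wavelet entropy is the Shannon entropy of the relative energy distribution over the wavelet sub-bands; the sketch below uses that definition, though the paper's exact normalization may differ:

```python
import math

def wavelet_entropy(subbands):
    """Shannon entropy of the relative sub-band energy distribution.
    Low when one scale dominates (ordered signal), high when energy is
    spread across scales (disordered/irregular signal)."""
    energies = [sum(c * c for c in band) for band in subbands]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```

For a time series whose energy sits entirely in one sub-band the entropy is 0; for energy spread evenly over N sub-bands it is log N, the maximum.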
Applications of Wavelet Transform and Fuzzy Neural Network on Power Quality Recognition
NASA Astrophysics Data System (ADS)
Liao, Chiung-Chou; Yang, Hong-Tzer; Lin, Ying-Chun
2008-10-01
The wavelet transform coefficients (WTCs) contain plenty of the information needed for transient event identification of power quality (PQ) events. However, adopting WTCs directly has the drawbacks of requiring longer processing time and too much memory in the recognition system. To solve the abovementioned recognition problems and to effectively reduce the number of features representing power transients, spectrum energies of WTCs at different scales are calculated via Parseval's theorem. Through the proposed approach, features of the original power signals can be preserved and are not influenced by the occurrence points of PQ events. Fuzzy neural classification systems are then used for signal recognition and fuzzy rule construction. The success rates achieved in recognizing PQ events from noise-ridden signals demonstrate the feasibility of the approach in power system applications.
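The per-scale energy feature can be sketched with an orthogonal Haar DWT, for which Parseval's theorem holds exactly. This is a minimal illustration of the idea, not the paper's implementation:

```python
import math

def haar_level(x):
    """One level of the orthogonal Haar DWT (len(x) must be even)."""
    s = 1.0 / math.sqrt(2.0)
    a = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    d = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return a, d

def scale_energy_features(signal, levels):
    """Energy of the detail coefficients at each scale, plus the final
    approximation energy.  By Parseval's theorem these sum to the signal
    energy, and they summarize the transient independently of where the
    event occurs in the analysis window."""
    feats = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_level(a)
        feats.append(sum(c * c for c in d))
    feats.append(sum(c * c for c in a))
    return feats
```

The feature vector has `levels + 1` entries regardless of signal length, which is exactly the dimensionality reduction the abstract describes.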
Hexagonal wavelet processing of digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Schuler, Sergio; Huda, Walter; Honeyman-Buck, Janice C.; Steinbach, Barbara G.
1993-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms and used to enhance features of importance to mammography within a continuum of scale-space. We present a method of contrast enhancement based on an overcomplete, non-separable multiscale representation: the hexagonal wavelet transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by local and global non-linear operators. Multiscale edges identified within distinct levels of transform space provide local support for enhancement. We demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
NASA Astrophysics Data System (ADS)
Schaefli, B.; Maraun, D.; Holschneider, M.
2007-12-01
Extreme hydrological events are often triggered by exceptional co-variations of the relevant hydrometeorological processes and in particular by exceptional co-oscillations at various temporal scales. Wavelet and cross-wavelet spectral analysis offer promising time-scale-resolved methods to detect and analyze such exceptional co-oscillations. This paper presents the state-of-the-art methods of wavelet spectral analysis, discusses related subtleties, potential pitfalls and recently developed solutions to overcome them, and shows how wavelet spectral analysis, if combined with a rigorous significance test, can lead to reliable new insights into hydrometeorological processes for real-world applications. The presented methods are applied to detect potentially flood-triggering situations in a high Alpine catchment for which a recent re-estimation of design floods encountered significant problems simulating the observed high flows. For this case study, wavelet spectral analysis of precipitation, temperature and discharge offers a powerful tool to help detect potentially flood-producing meteorological situations and to distinguish between different types of floods with respect to the prevailing critical hydrometeorological conditions. This opens new perspectives for the analysis of model performances focusing on the occurrence and non-occurrence of different types of high flow events. Based on the obtained results, the paper summarizes important recommendations for future applications of wavelet spectral analysis in hydrology.
Ebrahimi, Farideh; Mikaeili, Mohammad; Estrada, Edson; Nazeran, Homer
2008-01-01
An alarming number of people worldwide suffer from sleep disorders. A number of biomedical signals, such as EEG, EMG, ECG and EOG, are used in sleep labs for the diagnosis and treatment of sleep-related disorders. The usual method for sleep stage classification is visual inspection by a sleep specialist, a very time-consuming and laborious exercise. Automatic sleep stage classification can facilitate this process. The definition of sleep stages and the sleep literature show that EEG signals are similar in Stage 1 of non-rapid eye movement (NREM) sleep and rapid eye movement (REM) sleep. Therefore, in this work an attempt was made to classify four sleep stages, consisting of Awake, Stage 1 + REM, Stage 2 and Slow Wave Stage, based on the EEG signal alone. Wavelet packet coefficients and artificial neural networks were deployed for this purpose. Seven all-night recordings from the Physionet database were used in the study. The results demonstrated that these four sleep stages could be automatically discriminated from each other with a specificity of 94.4 +/- 4.5%, a sensitivity of 84.2 +/- 3.9% and an accuracy of 93.0 +/- 4.0%.
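The wavelet packet decomposition used here differs from the plain DWT in that both the approximation and the detail branch are split at every level. A minimal sketch with Haar filters (the study's actual wavelet basis and depth are not specified here):

```python
import math

def haar_pair(x):
    """Split one node into its low-pass and high-pass halves."""
    s = 1.0 / math.sqrt(2.0)
    a = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    d = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return a, d

def wavelet_packet(x, levels):
    """Full wavelet-packet tree: every node (not just the approximation)
    is split at each level, giving a uniform tiling of the frequency
    axis.  len(x) must be divisible by 2**levels.  Returns the
    2**levels leaf sub-bands."""
    nodes = [list(x)]
    for _ in range(levels):
        nodes = [half for node in nodes for half in haar_pair(node)]
    return nodes
```

Statistics of each leaf's coefficients can then feed a neural network, as in the study; note the transform conserves total energy since each split is orthogonal.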
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that codes the source information into multiple descriptions and transmits them over different channels in a packet network or an error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zerotree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error resilience. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We use this correlation, together with a potentially error-corrupted description, as side information in the decoding, which formulates MDC decoding as a Wyner-Ziv decoding problem. If part of the descriptions is lost but their correlation information is still available, the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Secondly, within each description, single-bitstream wavelet zerotree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not they are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with a multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to the parent-child relationship and then code them separately with the SPIHT algorithm to form multiple bitstreams. This decomposition reduces error propagation and therefore improves the error-correcting capability of the Wyner-Ziv decoder.
Experimental results show that the proposed scheme not only exhibits an excellent error resilient performance but also demonstrates graceful degradation over the packet loss rate.
The effects of preferred and non-preferred running strike patterns on tissue vibration properties.
Enders, Hendrik; von Tscharner, Vinzenz; Nigg, Benno M
2014-03-01
To characterize soft tissue vibrations during running with a preferred and a non-preferred strike pattern in shoes and barefoot. Cross-sectional study. Participants ran at 3.5 m s(-1) on a treadmill in shoes and barefoot using a rearfoot and a forefoot strike for each footwear condition. The preferred strike patterns for the subjects were a rearfoot strike and a forefoot strike for shod and barefoot running, respectively. Vibrations were recorded with an accelerometer overlying the belly of the medial gastrocnemius. Thirteen non-linearly scaled wavelets were used for the analysis. Damping was calculated as the overall decay of power in the acceleration signal post ground contact. A higher damping coefficient indicates higher damping capacities of the soft tissue. The shod rearfoot strike showed a 93% lower damping coefficient than the shod forefoot strike (p<0.001). A lower damping coefficient indicates less damping of the vibrations. The barefoot forefoot strike showed a trend toward a lower damping coefficient compared to a barefoot rearfoot strike. Running barefoot with a forefoot strike resulted in a significantly lower damping coefficient than a forefoot strike when wearing shoes (p<0.001). The shod rearfoot strike showed lower damping compared to a barefoot rearfoot strike (p<0.001). While rearfoot striking showed lower vibration frequencies in shod and barefoot running, it did not consistently result in lower damping coefficients. This study showed that the use of a preferred movement resulted in lower damping coefficients of running related soft tissue vibrations. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Convertino, Matteo; Mangoubi, Rami S.; Linkov, Igor; Lowry, Nathan C.; Desai, Mukund
2012-01-01
Background The quantification of species-richness and species-turnover is essential to effective monitoring of ecosystems. Wetland ecosystems are particularly in need of such monitoring due to their sensitivity to rainfall, water management and other external factors that affect hydrology, soil, and species patterns. A key challenge for environmental scientists is determining the linkage between natural and human stressors, and the effect of that linkage at the species level in space and time. We propose pixel intensity based Shannon entropy for estimating species-richness, and introduce a method based on statistical wavelet multiresolution texture analysis to quantitatively assess interseasonal and interannual species turnover. Methodology/Principal Findings We model satellite images of regions of interest as textures. We define a texture in an image as a spatial domain where the variations in pixel intensity across the image are both stochastic and multiscale. To compare two textures quantitatively, we first obtain a multiresolution wavelet decomposition of each. Either an appropriate probability density function (pdf) model for the coefficients at each subband is selected, and its parameters estimated, or, a non-parametric approach using histograms is adopted. We choose the former, where the wavelet coefficients of the multiresolution decomposition at each subband are modeled as samples from the generalized Gaussian pdf. We then obtain the joint pdf for the coefficients for all subbands, assuming independence across subbands; an approximation that simplifies the computational burden significantly without sacrificing the ability to statistically distinguish textures. We measure the difference between two textures' representative pdf's via the Kullback-Leibler divergence (KL). Species turnover, or diversity, is estimated using both this KL divergence and the difference in Shannon entropy. 
Additionally, we predict species richness, or diversity, based on the Shannon entropy of pixel intensity. To test our approach, we specifically use the green band of Landsat images for a water conservation area in the Florida Everglades. We validate our predictions against data of species occurrences over a twenty-eight-year period for both wet and dry seasons. Our method correctly predicts 73% of species richness. For species turnover, the newly proposed KL divergence prediction performance is near 100% accurate. This represents a significant improvement over the more conventional Shannon entropy difference, which provides 85% accuracy. Furthermore, we find that changes in soil and water patterns, as measured by fluctuations of the Shannon entropy for the red and blue bands respectively, are positively correlated with changes in vegetation. The fluctuations are smaller in the wet season when compared to the dry season. Conclusions/Significance Texture-based statistical multiresolution image analysis is a promising method for quantifying interseasonal differences and, consequently, the degree to which vegetation, soil, and water patterns vary. The proposed automated method for quantifying species richness and turnover can also provide analysis at higher spatial and temporal resolution than is currently obtainable from expensive monitoring campaigns, thus enabling more prompt, more cost-effective inference and decision-making support regarding anomalous variations in biodiversity. Additionally, a matrix-based visualization of the statistical multiresolution analysis is presented to facilitate both insight and quick recognition of anomalous data. PMID:23115629
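The two quantities compared throughout this abstract, the Kullback-Leibler divergence between coefficient distributions and the Shannon entropy, can be sketched in their discrete (histogram-based) form, which the paper names as its non-parametric alternative to the generalized Gaussian model:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence KL(p || q) between two
    normalized histograms (e.g. of wavelet coefficients from two
    textures).  eps guards against empty bins in q."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (nats)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

KL is zero only for identical distributions and grows as the two textures' coefficient statistics diverge, which is what makes it usable as a species-turnover proxy.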
Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A
2010-07-01
Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
Rezaee, Kh; Haddadnia, J
2013-09-01
Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and have always posed a major challenge to radiologists and physicians. This paper proposes a new algorithm that draws on discrete wavelet transform and adaptive K-means techniques to transform the medical images, estimate the tumor, and detect breast cancer tumors in mammograms at early stages. It also allows rapid processing of the input data. In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximation coefficients of the scaling components are constructed. Then, the different parts of the image are classified in a continuous spectrum. In the next step, using the adaptive K-means algorithm for initialization and a smart choice of the number of clusters, the appropriate threshold is selected. Finally, the suspicious cancerous mass is separated by implementing image processing techniques. We received 120 mammographic images in LJPEG format, scanned in gray-scale at a 50-micron pixel size with 3% noise and 20% INU, from clinical data taken from two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level, with an average accuracy of 92.32% and a sensitivity of 90.24%. Also, the Kappa coefficient was approximately 0.85, indicating suitably reliable system performance. The exact positioning of the cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The system's low PPV and high NPV provide a guarantee of its performance, and both clinical specialists and patients can trust its output.
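The adaptive threshold selection can be illustrated with two-class 1D k-means on pixel intensities: iterate cluster means and take the midpoint between them as the threshold separating a bright mass from the background. This is a simplified sketch (the paper's initialization and cluster-count heuristics are more elaborate):

```python
def adaptive_threshold(values, iters=50):
    """Two-class 1D k-means on intensities.  Returns the midpoint
    between the two converged cluster centres, usable as an adaptive
    segmentation threshold."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        low = [v for v in values if v <= t]
        high = [v for v in values if v > t]
        if not low or not high:
            break
        lo = sum(low) / len(low)    # mean of the dark cluster
        hi = sum(high) / len(high)  # mean of the bright cluster
    return 0.5 * (lo + hi)
```

Pixels above the returned threshold would be kept as the candidate mass; the wavelet approximation image, rather than the raw image, is what gets thresholded in the paper's pipeline.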
NASA Astrophysics Data System (ADS)
Damayanti, A.; Werdiningsih, I.
2018-03-01
The brain is the organ that coordinates all the activities that occur in our bodies, and even small abnormalities in the brain can affect bodily activity. A brain tumor is a mass formed as a result of abnormal, uncontrolled cell growth in the brain. MRI is a non-invasive medical test that is useful to doctors in diagnosing and treating medical conditions. Accurate classification of brain tumors supports the right decisions and correct treatment during the treatment process. In this study, classification was performed to determine the type of brain tumor disease, namely Alzheimer's, Glioma, Carcinoma and normal, using energy coefficients and ANFIS. The stages in the classification of MR brain images are feature extraction, feature reduction, and classification. The result of feature extraction is an approximation vector for each wavelet decomposition level. Feature reduction reduces the feature set using the energy coefficients of the approximation vector; with an energy coefficient of 100 per feature, the reduced feature vector is 1 x 52 pixels. This vector is the input to classification using ANFIS with Fuzzy C-Means and FLVQ clustering and LM back-propagation. The recognition rates obtained for MR brain images using ANFIS-FLVQ, ANFIS, and LM back-propagation were all 100%.
Content-based multiple bitstream image transmission over noisy channels.
Cao, Lei; Chen, Chang Wen
2002-01-01
In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced tight framelet transform shows promise as a regularizer for SPECT image reconstruction using the PAPA method.
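An ℓ1-norm regularizer typically enters proximal algorithms of this kind through its proximal operator, soft-thresholding of the transform coefficients. A minimal sketch of that operator (the shrinkage step; how PAPA embeds it is beyond this fragment):

```python
def soft_threshold(coeffs, lam):
    """Proximal operator of lam * ||c||_1: shrink each coefficient
    toward zero by lam, zeroing those with |c| <= lam.  This is the
    step an l1 framelet/DCT regularizer contributes per iteration."""
    return [max(abs(c) - lam, 0.0) * (1 if c >= 0 else -1) for c in coeffs]
```

Large coefficients (structure) survive nearly unchanged while small ones (noise) are set exactly to zero, which is the mechanism behind the improved noise suppression reported above.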
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Auger, Ludovic
2003-01-01
A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. The wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90, 97 and 99% and the computational cost of covariance propagation by 80, 93 and 96%, respectively. The difference in both the error covariance and the tracer field between the truncated and full systems over this period was found to be non-growing in the first case, and growing relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.
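The truncation idea, keeping only a fraction of the wavelet coefficients and zeroing the rest, can be sketched in one function. This is a generic magnitude-based truncation; the paper's scheme is more structured (it truncates zonally), so treat this only as an illustration of how coefficient truncation shrinks the representation:

```python
def truncate_coeffs(coeffs, keep_fraction):
    """Keep only the largest-magnitude fraction of coefficients and
    zero the rest, reducing storage and propagation cost at the price
    of a controlled representation error."""
    k = max(1, int(len(coeffs) * keep_fraction))
    order = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    kept = set(order[:k])
    return [c if i in kept else 0.0 for i, c in enumerate(coeffs)]
```

Because wavelets compact smooth, anisotropic covariance structure into few coefficients, aggressive truncation (90-99% here) loses little accuracy.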
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2002-01-01
Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
NASA Astrophysics Data System (ADS)
Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin
2017-04-01
An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
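Of the quality metrics listed, PSNR is the simplest to make concrete; a minimal reference implementation over flattened pixel lists (SSIM and the spectral metrics are omitted here):

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstructed/fused image (flattened pixel sequences); higher is
    better, infinite for identical images."""
    n = len(reference)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / n
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)
```

PSNR rewards pixel-wise fidelity only; that is why the abstract pairs it with structural (SSIM) and spectral (SAM, ERGAS) measures when judging fusion quality.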
An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform
NASA Astrophysics Data System (ADS)
Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra
2011-06-01
The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are conducted independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach that combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method is decomposed into two stages, geometric encoding and topologic encoding, and the proposed approach inserts a signature between them. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The coarse mesh is marked using a robust mesh watermarking scheme; inserting the signature into the coarse mesh yields high robustness to several attacks. Finally, topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. Experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.
Wavelet optimization for content-based image retrieval in medical databases.
Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C
2010-04-01
We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow up and one for screening mammography, as well as a general purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system.
Characterization of palmprints by wavelet signatures via directional context modeling.
Zhang, Lei; Zhang, David
2004-06-01
The palmprint is one of the most reliable physiological characteristics that can be used to distinguish between individuals. Current palmprint-based systems are more user friendly, more cost effective, and require fewer data signatures than traditional fingerprint-based identification systems. The principal lines and wrinkles captured in a low-resolution palmprint image provide more than enough information to uniquely identify an individual. This paper presents a palmprint identification scheme that characterizes a palmprint using a set of statistical signatures. The palmprint is first transformed into the wavelet domain, and the directional context of each wavelet subband is defined and computed in order to collect the predominant coefficients of its principal lines and wrinkles. A set of statistical signatures, which includes gravity center, density, spatial dispersivity and energy, is then defined to characterize the palmprint with the selected directional context values. A classification and identification scheme based on these signatures is subsequently developed. This scheme exploits the features of principal lines and prominent wrinkles sufficiently and achieves satisfactory results. Compared with the line-segments-matching or interesting-points-matching based palmprint verification schemes, the proposed scheme uses a much smaller amount of data signatures. It also provides a convenient classification strategy and more accurate identification.
[A wavelet-transform-based method for the automatic detection of late-type stars].
Liu, Zhong-tian; Zhao, Rui-zhen; Zhao, Yong-heng; Wu, Fu-chao
2005-07-01
The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature so far. The present work explores possible ways to deal with this issue. Here, by "late-type stars" we mean stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on late-type star spectra, the frequency spectrum of the transformed coefficients on the 5th scale consistently shows a unimodal distribution, with its energy largely concentrated in a small neighborhood around the unique peak. For the spectra of other celestial bodies, by contrast, the corresponding frequency spectrum is multimodal and its energy is dispersed. Based on this finding, the authors present a wavelet-transform-based automatic late-type star detection method. Extensive experiments show the proposed method to be practical and robust.
NASA Astrophysics Data System (ADS)
T.; Gan, Y.
2009-04-01
First, wavelet analysis was used to analyze the variability of the winter (November-January) rainfall (1974-2006) of Taiwan and the seasonal sea surface temperature (SST) in selected domains of the Pacific Ocean. From the scale-averaged wavelet power (SAWP) computed for the seasonal rainfall and seasonal SST, these data appear to exhibit interannual oscillations with a 2-4-year period. Correlations between the rainfall and SST SAWP were then estimated. Next, the SST in selected sectors of the western Pacific Ocean (around 5°N-30°N, 120°E-150°E) was used as a predictor of the winter rainfall of Taiwan at one season's lead time, using an Artificial Neural Network calibrated by a Genetic Algorithm (ANN-GA). The ANN-GA was first calibrated using the 1974-1998 data and independently validated using the 1999-2005 data. In terms of summary statistics such as the correlation coefficient, root-mean-square error (RMSE), and Hanssen-Kuipers (HK) scores, the seasonal predictions for northern and western Taiwan are generally good at both the calibration and validation stages, but less so at some stations in southeastern Taiwan and the Central Mountain Range.
Wavelet Analyses of F/A-18 Aeroelastic and Aeroservoelastic Flight Test Data
NASA Technical Reports Server (NTRS)
Brenner, Martin J.
1997-01-01
Time-frequency signal representations combined with subspace identification methods were used to analyze aeroelastic flight data from the F/A-18 Systems Research Aircraft (SRA) and aeroservoelastic data from the F/A-18 High Alpha Research Vehicle (HARV). The F/A-18 SRA data were produced from a wingtip excitation system that generated linear frequency chirps and logarithmic sweeps. HARV data were acquired from digital Schroeder-phased and sinc pulse excitation signals to actuator commands. Nondilated continuous Morlet wavelets implemented as a filter bank were chosen for the time-frequency analysis to eliminate phase distortion as it occurs with sliding window discrete Fourier transform techniques. Wavelet coefficients were filtered to reduce effects of noise and nonlinear distortions identically in all inputs and outputs. Cleaned reconstructed time domain signals were used to compute improved transfer functions. Time and frequency domain subspace identification methods were applied to enhanced reconstructed time domain data and improved transfer functions, respectively. Time domain subspace performed poorly, even with the enhanced data, compared with frequency domain techniques. A frequency domain subspace method is shown to produce better results with the data processed using the Morlet time-frequency technique.
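The nondilated Morlet filter-bank idea above can be sketched as an FFT-domain band-pass bank: one complex output per analysis frequency, with no sliding window and hence none of the windowed-DFT phase distortion the abstract mentions. This is an illustrative sketch (the flight-test processing chain is far richer); the center-frequency parameter w0 = 6 is an assumed conventional value:

```python
import numpy as np

def morlet_filter_bank(x, fs, freqs, w0=6.0):
    """Continuous Morlet wavelets applied as an FFT-domain filter
    bank: one complex band-pass output per analysis frequency."""
    x = np.asarray(x, dtype=float)
    X = np.fft.fft(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(x.size, d=1.0 / fs)
    rows = []
    for f in freqs:
        s = w0 / (2.0 * np.pi * f)            # scale tuned to f (Hz)
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2)
        psi_hat = psi_hat * (omega > 0)       # analytic (one-sided) wavelet
        rows.append(np.fft.ifft(X * psi_hat))
    return np.array(rows)
```

For a pure 10 Hz tone, the 10 Hz channel carries nearly all of the output energy, which is what makes thresholding coefficients per channel a usable noise filter before reconstructing transfer functions.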
Damage detection on sudden stiffness reduction based on discrete wavelet transform.
Chen, Bo; Chen, Zhi-wei; Wang, Gan-jun; Xie, Wei-ping
2014-01-01
A sudden stiffness reduction in a structure may cause a signal discontinuity in the acceleration responses close to the damage location at the damage time instant. Motivated by this, damage detection based on sudden stiffness reduction of building structures is investigated in this study. The signal discontinuity in the structural acceleration responses of an example building is extracted using the discrete wavelet transform. It is proved that the variation of the first-level detail coefficients of the wavelet transform at the damage instant is linearly proportional to the magnitude of the stiffness reduction. A new damage index is proposed and implemented to detect the damage time instant, location, and severity of a structure due to a sudden change of structural stiffness. Numerical simulation using a five-story shear building under different types of excitation is carried out to assess the effectiveness and reliability of the proposed damage index at different damage levels. The sensitivity of the damage index to the intensity and frequency range of measurement noise is also investigated. The observations demonstrate that the proposed damage index can accurately identify sudden damage events if the noise intensity is limited.
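The proportionality claim can be checked on a toy signal: first-level detail coefficients spike exactly at a step discontinuity, and doubling the step doubles the spike. Haar is used here as an assumed stand-in, since the abstract does not name its wavelet basis:

```python
import numpy as np

def haar_level1_detail(x):
    """First-level Haar detail coefficients: half-differences of
    adjacent samples, which spike at a signal discontinuity."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / 2.0

# Simulated acceleration with a sudden jump (a stand-in for the
# response discontinuity caused by a sudden stiffness change)
n = 200
acc = np.zeros(n)
acc[101:] += 1.0                      # discontinuity at sample 101
d1 = haar_level1_detail(acc)
d2 = haar_level1_detail(2.0 * acc)    # twice the stiffness drop
```

The largest detail coefficient sits at the pair straddling sample 101, and its magnitude scales linearly with the jump size, mirroring the linearity result stated above.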
Scaling properties of foreign exchange volatility
NASA Astrophysics Data System (ADS)
Gençay, Ramazan; Selçuk, Faruk; Whitcher, Brandon
2001-01-01
In this paper, we investigate the scaling properties of foreign exchange volatility. Our methodology is based on a wavelet multi-scaling approach which decomposes the variance of a time series and the covariance between two time series on a scale by scale basis through the application of a discrete wavelet transformation. It is shown that foreign exchange rate volatilities follow different scaling laws at different horizons. Particularly, there is a smaller degree of persistence in intra-day volatility as compared to volatility at one day and higher scales. Therefore, a common practice in the risk management industry to convert risk measures calculated at shorter horizons into longer horizons through a global scaling parameter may not be appropriate. This paper also demonstrates that correlation between the foreign exchange volatilities is the lowest at the intra-day scales but exhibits a gradual increase up to a daily scale. The correlation coefficient stabilizes at scales one day and higher. Therefore, the benefit of currency diversification is the greatest at the intra-day scales and diminishes gradually at higher scales (lower frequencies). The wavelet cross-correlation analysis also indicates that the association between two volatilities is stronger at lower frequencies.
Estimating cognitive workload using wavelet entropy-based features during an arithmetic task.
Zarjam, Pega; Epps, Julien; Chen, Fang; Lovell, Nigel H
2013-12-01
Electroencephalography (EEG) has shown promise as an indicator of cognitive workload; however, precise workload estimation is an ongoing research challenge. In this investigation, seven levels of workload were induced using an arithmetic task, and the entropy of wavelet coefficients extracted from EEG signals is shown to distinguish all seven levels. For a subject-independent multi-channel classification scheme, the entropy features achieved high accuracy, up to 98% for channels from the frontal lobes, in the delta frequency band. This suggests that a smaller number of EEG channels in only one frequency band can be deployed for an effective EEG-based workload classification system. Together with analysis based on phase locking between channels, these results consistently suggest increased synchronization of neural responses for higher load levels.
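A wavelet-entropy feature of the kind described can be sketched in two steps: a multilevel decomposition of the channel signal, then the Shannon entropy of a coefficient histogram per band. This is a generic sketch with assumed choices (orthonormal Haar, 16 histogram bins), not the paper's exact feature extraction:

```python
import numpy as np

def haar_wavedec(x, levels):
    """Orthonormal multilevel Haar decomposition: returns the final
    approximation plus the detail coefficients of each level."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        details.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a, details

def shannon_entropy(c, bins=16):
    """Shannon entropy (bits) of a coefficient amplitude histogram."""
    counts, _ = np.histogram(c, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

A flat band yields zero entropy, while noisy detail coefficients spread over many bins and yield high entropy; per-band entropies then form the feature vector fed to the classifier.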
Coherent multiscale image processing using dual-tree quaternion wavelets.
Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G
2008-07-01
The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
Multispectral image sharpening using wavelet transform techniques and spatial correlation of edges
Lemeshewsky, George P.; Schowengerdt, Robert A.
2000-01-01
Several reported image fusion or sharpening techniques are based on the discrete wavelet transform (DWT). The technique described here uses a pixel-based maximum selection rule to combine respective transform coefficients of lower spatial resolution near-infrared (NIR) and higher spatial resolution panchromatic (pan) imagery to produce a sharpened NIR image. Sharpening assumes a radiometric correlation between the spectral band images. However, there can be poor correlation, including edge contrast reversals (e.g., at soil-vegetation boundaries), between the fused images and, consequently, degraded performance. To improve sharpening, a local area-based correlation technique originally reported for edge comparison with image pyramid fusion is modified for application with the DWT process. Further improvements are obtained by using redundant, shift-invariant implementation of the DWT. Example images demonstrate the improvements in NIR image sharpening with higher resolution pan imagery.
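The pixel-based maximum-selection rule mentioned above reduces to a coefficient-wise choice of whichever source has the larger magnitude. A minimal sketch of that rule alone (the edge-correlation refinement and shift-invariant DWT are not modeled here):

```python
import numpy as np

def fuse_max_abs(coeff_a, coeff_b):
    """Pixel-based maximum-selection rule: at each position keep the
    coefficient with the larger magnitude, preserving the stronger
    edge response of either source band."""
    a = np.asarray(coeff_a, dtype=float)
    b = np.asarray(coeff_b, dtype=float)
    return np.where(np.abs(a) >= np.abs(b), a, b)
```

Applied to the detail subbands of the NIR and pan decompositions, this keeps the sharper (pan) edges wherever they dominate; the correlation check described in the abstract then guards against contrast-reversed edges being injected.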
NASA Technical Reports Server (NTRS)
Guarnieri, Fernando L.; Tsurutani, Bruce T.; Hajra, Rajkumar; Echer, Ezequiel; Gonzalez, Walter D.; Mannucci, Anthony J.
2014-01-01
High-speed solar wind streams cause geomagnetic activity at Earth. In this study we applied a wavelet-based interactive filtering and reconstruction technique to the solar wind magnetic field components and the AE index series, allowing us to investigate the relationship between the two. The IMF Bz component was found to be the solar wind parameter most significantly controlling AE activity. Assuming that magnetic reconnection associated with southward-directed Bz is the main mechanism transferring energy into the magnetosphere, we adjusted parameters to forecast the AE index. The adjusted routine is able to forecast AE based only on Bz measured at the L1 Lagrangian point, giving a prediction approximately 30-70 minutes in advance of the actual geomagnetic activity. The correlation coefficient between the observed AE data and the forecast series reached values higher than 0.90, and in some cases the forecast reproduced particular features of the signal very well. The high correlation values and the high efficacy of the forecast can be taken as confirmation that reconnection is the main physical mechanism responsible for energy transfer during HILDCAAs. The study also shows that the low-frequency components of IMF Bz are the most important for AE prediction.
Automatic quantification framework to detect cracks in teeth
Shah, Hina; Hernandez, Pablo; Budin, Francois; Chittajallu, Deepak; Vimort, Jean-Baptiste; Walters, Rick; Mol, André; Khan, Asma; Paniagua, Beatriz
2018-01-01
Studies show that cracked teeth are the third most common cause of tooth loss in industrialized countries. If detected early and accurately, patients can retain their teeth for a longer time. Most cracks are not detected early because of intermittent symptoms and a lack of good diagnostic tools. Currently used imaging modalities like Cone Beam Computed Tomography (CBCT) and intraoral radiography often have low sensitivity and do not show cracks clearly. This paper introduces a novel method that can detect, quantify, and localize cracks automatically in high-resolution CBCT (hr-CBCT) scans of teeth using steerable wavelets and learning methods. These initial results were created using hr-CBCT scans of a set of healthy teeth and of teeth with simulated longitudinal cracks. The cracks were simulated using multiple orientations. The crack detection was trained on the most significant wavelet coefficients at each scale using a bagged classifier of Support Vector Machines. Our results show high discriminative specificity and sensitivity of this method. The framework aims to be automatic, reproducible, and open-source. Future work will focus on the clinical validation of the proposed techniques on different types of cracks ex vivo. We believe that this work will ultimately lead to improved tracking and detection of cracks, allowing for longer-lasting healthy teeth.
NASA Astrophysics Data System (ADS)
Cheng, L.; Du, J.
2015-12-01
The Xiang River, a main tributary of the Yangtze River, has been subject to frequent severe floods in the last twenty years. Climate change, including abrupt shifts and fluctuations in precipitation, is an important factor influencing hydrological extremes. In addition, human activities are widely recognized as another factor leading to high flood risk. Given the effects of climate change and human interventions on the hydrological cycle, several questions need to be addressed. Are floods in the Xiang River basin getting worse? Does extreme streamflow show an increasing tendency? If so, is it because extreme rainfall events have a predominant effect on floods? To answer these questions, this article detects trends in extreme precipitation and discharge using the Mann-Kendall test. The continuous wavelet transform was employed to identify the consistency of changes in extreme precipitation and discharge, and Pearson correlation analysis was applied to investigate to what degree variations in extreme discharge can be explained by climate change. The results indicate that slightly upward trends can be detected in both extreme rainfall and discharge in the upper region of the Xiang River basin. For most of the middle and lower basin, extreme rainfall shows significant positive trends, but extreme discharge displays slightly upward trends with no significance at the 90% confidence level. Wavelet transform analysis illustrates that highly similar patterns of signal change can be seen between extreme precipitation and discharge in the upper section of the basin, while changes in extreme precipitation in the middle and lower reaches do not always coincide with extreme streamflow. The correlation coefficients of the wavelet transforms of the precipitation and discharge signals in most of the basin pass the significance test.
The conclusion may be drawn that floods in recent years are not getting worse in the Xiang River basin. The similar signal patterns and the positive correlation between extreme discharge and precipitation indicate that the variability of extreme precipitation has an important effect on extreme flood discharge, although the intensity of human impacts in the lower section of the Xiang River basin has increased markedly.
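The trend-detection step above rests on the Mann-Kendall S statistic, which can be sketched in a few lines. This is only the core statistic; the full test used in such studies also needs the variance of S (with tie corrections) to assess significance:

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: the sum of signs of all pairwise
    (later minus earlier) differences. S > 0 indicates an upward
    tendency; a significance test would compare S to its variance."""
    x = np.asarray(x, dtype=float)
    s = 0
    for i in range(x.size - 1):
        s += int(np.sign(x[i + 1:] - x[i]).sum())
    return s
```

A strictly increasing series of length n gives the maximum S = n(n-1)/2, a strictly decreasing one the minimum, and a trendless constant series gives zero.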
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
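The uncompressed-versus-lossy comparison methodology can be sketched with the simplest wavelet configuration: keep only the k largest-magnitude coefficients, reconstruct, and measure the error. This one-level Haar sketch is an assumption for illustration; the study's actual configurations (basis, levels, coefficient budget) are the variables being evaluated:

```python
import numpy as np

def haar_fwd(x):
    """One-level orthonormal Haar analysis: [approximation | detail]."""
    x = np.asarray(x, dtype=float)
    return np.concatenate([(x[0::2] + x[1::2]) / np.sqrt(2.0),
                           (x[0::2] - x[1::2]) / np.sqrt(2.0)])

def haar_inv(c):
    """Inverse of haar_fwd (exact, since the transform is orthonormal)."""
    h = c.size // 2
    a, d = c[:h], c[h:]
    x = np.empty(c.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def lossy_roundtrip(x, keep):
    """Zero all but the `keep` largest-magnitude Haar coefficients,
    then reconstruct - a minimal model of lossy wavelet compression."""
    c = haar_fwd(x)
    if keep < c.size:
        c[np.argsort(np.abs(c))[:c.size - keep]] = 0.0
    return haar_inv(c)
```

Repeating an analysis on `x` and on `lossy_roundtrip(x, keep)` at several budgets exposes exactly the accuracy-versus-storage tradeoff the abstract quantifies.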
Chen, Hsiao-Ping; Liao, Hui-Ju; Huang, Chih-Min; Wang, Shau-Chun; Yu, Sung-Nien
2010-04-23
This paper employs a chemometric technique that modifies the noise spectrum of a liquid chromatography-tandem mass spectrometry (LC-MS/MS) chromatogram between two consecutive wavelet-based low-pass filtering procedures to improve peak signal-to-noise (S/N) ratio enhancement. Although similar techniques using other sets of low-pass procedures, such as matched filters, have been published, the procedures developed in this work avoid the peak-broadening disadvantages inherent in matched filters. In addition, unlike Fourier-transform-based low-pass filters, wavelet-based filters efficiently reject noise in the chromatograms directly in the time domain without distorting the original signals. The low-pass filtering procedures sequentially convolve the original chromatograms with each set of low-pass filters to produce the approximation coefficients, representing the low-frequency wavelets, of the first five resolution levels; tedious trials of setting threshold values to properly shrink each wavelet are therefore no longer required. The noise modification technique multiplies one wavelet-based low-pass filtered LC-MS/MS chromatogram by an artificial chromatogram with added thermal noise before the second wavelet-based low-pass filter is applied. Because a low-pass filter cannot eliminate frequency components below its cut-off frequency, consecutive low-pass filtering alone cannot achieve more efficient peak S/N ratio improvement for LC-MS/MS chromatograms. In contrast, when the low-pass filtered LC-MS/MS chromatogram is conditioned with this multiplication before the second low-pass filter, much better improvement is achieved: the noise spectrum of the filtered chromatogram, which originally contains frequency components below the filter cut-off, is broadened by the multiplication, and as the modified noise spectrum shifts toward higher frequencies, the second low-pass filter provides better filtering efficiency and higher peak S/N ratios. For real LC-MS/MS chromatograms, two consecutive wavelet-based low-pass filters typically yield less than a 6-fold peak S/N improvement, no better than a single wavelet-based low-pass filtering step; with the noise spectrum modified between the two filters, much larger enhancements, typically 25-fold to 40-fold, are accomplished. Linear standard curves using the filtered LC-MS/MS signals are validated, and the filtered signals are reproducible. More accurate determinations of very low concentration samples (S/N ratio about 7-9) are obtained using the filtered signals than using the original signals.
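The core building block above, a wavelet low-pass filter obtained by keeping only approximation coefficients, can be sketched as follows. Haar is an assumed stand-in for the unnamed wavelet family, and the noise-spectrum multiplication step is not modeled here:

```python
import numpy as np

def haar_lowpass(x, levels):
    """Wavelet low-pass filter: multilevel orthonormal Haar analysis,
    discard every detail band, synthesize from the approximation only.
    Equivalent to replacing each block of 2**levels samples by its mean."""
    a = np.asarray(x, dtype=float)
    lengths = []
    for _ in range(levels):           # analysis, keeping approximations
        lengths.append(a.size)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    for n in reversed(lengths):       # synthesis with zeroed details
        y = np.empty(n)
        y[0::2] = a / np.sqrt(2.0)
        y[1::2] = a / np.sqrt(2.0)
        a = y
    return a
```

On white noise the output standard deviation drops by roughly 2^(levels/2), which is the baseline-noise suppression that the spectrum-modification trick then improves upon.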
2011-01-01
Background Epilepsy is a common neurological disorder characterized by recurrent electrophysiological activities, known as seizures. Without appropriate detection strategies, these seizure episodes can dramatically affect the quality of life of those afflicted. The rationale of this study is to develop an unsupervised algorithm for the detection of seizure states so that it may be implemented along with potential intervention strategies. Methods A hidden Markov model (HMM) was developed to interpret the state transitions of in vitro rat hippocampal slice local field potentials (LFPs) during seizure episodes. It can be used to estimate the probability of state transitions and the corresponding characteristics of each state. Wavelet features were clustered and used to differentiate the electrophysiological characteristics at each corresponding HMM state. Using an unsupervised training method, the HMM and the clustering parameters were obtained simultaneously. The HMM states were then assigned to the electrophysiological data using an expert-guided technique. Minimum redundancy maximum relevance (mRMR) analysis and the Akaike Information Criterion (AICc) were applied to reduce the effect of over-fitting. The sensitivity, specificity and optimality index of chronic seizure detection were compared for various HMM topologies. The ability to distinguish early and late tonic firing patterns prior to chronic seizures was also evaluated. Results Significant improvement in state detection performance was achieved when additional wavelet coefficient rate-of-change information was used as features. The final HMM topology obtained using mRMR and AICc was able to detect non-ictal (interictal), early and late tonic firing, chronic seizures and postictal activities. A mean sensitivity of 95.7%, mean specificity of 98.9% and optimality index of 0.995 in the detection of chronic seizures were achieved. 
The detection of early and late tonic firing was validated with experimental intracellular electrical recordings of seizures. Conclusions The HMM implementation of a seizure dynamics detector is an improvement over existing approaches using visual detection and complexity measures. The subjectivity involved in partitioning the observed data prior to training can be eliminated. It can also decipher the probabilities of seizure state transitions using the magnitude and rate-of-change wavelet information of the LFPs.
NASA Astrophysics Data System (ADS)
Li, Lei; Yu, Long; Yang, Kecheng; Li, Wei; Li, Kai; Xia, Min
2018-04-01
The multiangle dynamic light scattering (MDLS) technique can estimate particle size distributions (PSDs) better than single-angle dynamic light scattering. However, determining the inversion range, angular weighting coefficients, and scattering angle combination is difficult but fundamental to the reconstruction of both unimodal and multimodal distributions. In this paper, we propose a self-adapting regularization method called the wavelet iterative recursion nonnegative Tikhonov-Phillips-Twomey (WIRNNT-PT) algorithm. This algorithm combines a wavelet multiscale strategy with an appropriate inversion method and self-adaptively optimizes several key choices for estimating the PSD from MDLS measurements: the weighting coefficients, the inversion range, and the selection of the optimal inversion method from two regularization algorithms. In addition, the angular dependence of MDLS for estimating the PSDs of polymeric latexes is thoroughly analyzed: the dependence of the results on the number and range of measurement angles was examined in depth to identify the optimal scattering angle combination. Numerical simulations and experimental results for unimodal and multimodal distributions demonstrate both the validity of the WIRNNT-PT algorithm and the angular dependence of MDLS, and show that the proposed algorithm with a six-angle analysis in the 30-130° range can be satisfactorily applied to retrieve PSDs from MDLS measurements.
Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben
2017-09-12
One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments while keeping the other compression parameters fixed. It adopts the discrete wavelet transform (DWT) to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values, less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results satisfy the needs of portable telecardiology systems, such as limited payload size and low power consumption.
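The RLE stage can be sketched in its basic form: quantized DWT detail coefficients contain long zero runs, which collapse into (value, count) pairs. The "modified" variant in the paper is not specified here; this is plain run-length encoding for illustration:

```python
def run_length_encode(symbols):
    """Run-length encoding of a quantized coefficient stream: emit
    (value, count) pairs, which compacts the long zero runs left by
    thresholded DWT detail bands."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1] = (s, runs[-1][1] + 1)
        else:
            runs.append((s, 1))
    return runs
```

A stream like `[0, 0, 0, 0, 3, 0, 0, -1]` becomes four pairs instead of eight symbols; the compression ratio grows with the sparsity that wavelet thresholding induces.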
Graph wavelet alignment kernels for drug virtual screening.
Smalter, Aaron; Huan, Jun; Lushington, Gerald
2009-06-01
In this paper, we introduce a novel statistical modeling technique for target property prediction, with applications to virtual screening and drug design. In our method, we use graphs to model chemical structures and apply a wavelet analysis of graphs to summarize features capturing local graph topology. We design a novel graph kernel function that uses these topology features to build predictive models for chemicals via a Support Vector Machine classifier. We call the new graph kernel a graph wavelet-alignment kernel. We have evaluated the efficacy of the wavelet-alignment kernel using a set of chemical structure-activity prediction benchmarks. Our results indicate that the kernel function yields performance profiles comparable to, and sometimes exceeding, those of existing state-of-the-art chemical classification approaches. In addition, our results show that the use of wavelet functions significantly decreases the computational cost of graph kernel computation, with a more than tenfold speedup.
Scalets, wavelets and (complex) turning point quantization
NASA Astrophysics Data System (ADS)
Handy, C. R.; Brooks, H. A.
2001-05-01
Despite the many successes of wavelet analysis in image and signal processing, the incorporation of continuous wavelet transform theory within quantum mechanics has lacked a compelling, first principles, motivating analytical framework, until now. For arbitrary one-dimensional rational fraction Hamiltonians, we develop a simple, unified formalism, which clearly underscores the complementary, and mutually interdependent, role played by moment quantization theory (i.e. via scalets, as defined herein) and wavelets. This analysis involves no approximation of the Hamiltonian within the (equivalent) wavelet space, and emphasizes the importance of (complex) multiple turning point contributions in the quantization process. We apply the method to three illustrative examples. These include the (double-well) quartic anharmonic oscillator potential problem, V(x) = Z2x2 + gx4, the quartic potential, V(x) = x4, and the very interesting and significant non-Hermitian potential V(x) = -(ix)3, recently studied by Bender and Boettcher.
Denoising time-domain induced polarisation data using wavelet techniques
NASA Astrophysics Data System (ADS)
Deo, Ravin N.; Cull, James P.
2016-05-01
Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving the signal-to-noise ratio in such environments is analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet-based denoising techniques to the processing of raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that the distortions arising from conventional filtering can be largely avoided through wavelet-based denoising. With recent advances in full-waveform acquisition and analysis, incorporating wavelet denoising can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.
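A standard wavelet-denoising recipe of the kind applied here is soft-thresholding of detail coefficients with the universal threshold. This sketch uses assumed choices (Haar basis, four levels, median-based noise estimate), not the study's actual processing parameters:

```python
import numpy as np

def haar_denoise(x, levels=4):
    """Wavelet shrinkage: multilevel Haar analysis, soft-threshold the
    detail bands with the universal threshold sigma * sqrt(2 ln n),
    then synthesize."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        details.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    sigma = np.median(np.abs(details[0])) / 0.6745      # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(x.size))
    for d in reversed(details):
        d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft threshold
        y = np.empty(2 * a.size)
        y[0::2] = (a + d) / np.sqrt(2.0)
        y[1::2] = (a - d) / np.sqrt(2.0)
        a = y
    return a
```

Unlike a low-pass filter, this suppresses noise without smearing sharp transitions, which is the distortion advantage the abstract reports over conventional filtering of TDIP decay curves.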
Modal parameter identification of a CMUT membrane using response data only
NASA Astrophysics Data System (ADS)
Lardiès, Joseph; Bourbon, Gilles; Moal, Patrice Le; Kacem, Najib; Walter, Vincent; Le, Thien-Phu
2018-03-01
Capacitive micromachined ultrasonic transducers (CMUTs) are microelectromechanical systems used for the generation of ultrasounds. The fundamental element of the transducer is a clamped thin metallized membrane that vibrates under voltage variations. To control such oscillations and to optimize its dynamic response it is necessary to know the modal parameters of the membrane such as resonance frequency, damping and stiffness coefficients. The purpose of this work is to identify these parameters using only the time data obtained from the membrane center displacement. Dynamic measurements are conducted in time domain and we use two methods to identify the modal parameters: a subspace method based on an innovation model of the state-space representation and the continuous wavelet transform method based on the use of the ridge of the wavelet transform of the displacement. Experimental results are presented showing the effectiveness of these two procedures in modal parameter identification.
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.
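Cole's finite-dimensional representation is not reproduced here; the toy sketch below merely illustrates the reconstruction problem itself, recovering a bounded signal consistent with an incomplete set of (here, Haar) DWT coefficients by alternating projections. Every detail below is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def reconstruct(obs_a, obs_d, known, lo, hi, iters=200):
    """Alternating projections: clip to the a priori amplitude bounds,
    then re-impose the observed subset of DWT coefficients."""
    x = np.full(2 * len(obs_a), 0.5 * (lo + hi))
    for _ in range(iters):
        a, d = haar_dwt(np.clip(x, lo, hi))
        a[:] = obs_a               # approximation band fully observed
        d[known] = obs_d[known]    # only some detail coefficients observed
        x = haar_idwt(a, d)
    return x

# Observe the approximation band and half of the details of a test signal.
true = 0.8 * np.sin(2.0 * np.pi * np.arange(64) / 64.0)
obs_a, obs_d = haar_dwt(true)
known = np.arange(32) % 2 == 0
est = reconstruct(obs_a, obs_d, known, lo=-1.0, hi=1.0)
a_est, d_est = haar_dwt(est)
```

When the bound constraints are inactive, the unobserved details remain at zero, i.e. the estimate coincides with the zero-fill minimum-norm solution.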
Predicted Deepwater Bathymetry from Satellite Altimetry: Non-Fourier Transform Alternatives
NASA Astrophysics Data System (ADS)
Salazar, M.; Elmore, P. A.
2017-12-01
Robert Parker (1972) demonstrated the effectiveness of Fourier Transforms (FT) to compute gravitational potential anomalies caused by uneven, non-uniform layers of material. This important calculation relates the gravitational potential anomaly to sea-floor topography. As outlined by Sandwell and Smith (1997), a six-step procedure utilizing the FT then demonstrated how satellite altimetry measurements of marine geoid height are inverted into seafloor topography. However, FTs are not local in space and produce the Gibbs phenomenon around discontinuities. Seafloor features exhibit spatial locality, and features such as seamounts and ridges often have sharp inclines. Initial tests comparing the windowed FT to wavelets in reconstructing step and saw-tooth functions showed that wavelets achieved lower RMS error with fewer coefficients. This investigation thus examined the feasibility of utilizing sparser basis functions such as the Mexican hat wavelet, which is local in space, to first calculate the gravitational potential and then relate it to sea-floor topography.
Switching algorithm for maglev train double-modular redundant positioning sensors.
He, Ning; Long, Zhiqiang; Xue, Song
2012-01-01
High-resolution positioning for maglev trains is implemented by detecting the tooth-slot structure of the long stator installed along the rail, but there are large joint gaps between long stator sections. When a positioning sensor is below a large joint gap, its positioning signal is invalidated, thus double-modular redundant positioning sensors are introduced into the system. This paper studies switching algorithms for these redundant positioning sensors. At first, adaptive prediction is applied to the sensor signals. The prediction errors are used to trigger sensor switching. In order to enhance the reliability of the switching algorithm, wavelet analysis is introduced to suppress measuring disturbances without weakening the signal characteristics reflecting the stator joint gap based on the correlation between the wavelet coefficients of adjacent scales. The time delay characteristics of the method are analyzed to guide the algorithm simplification. Finally, the effectiveness of the simplified switching algorithm is verified through experiments. PMID:23112657
Remote-sensing image encryption in hybrid domains
NASA Astrophysics Data System (ADS)
Zhang, Xiaoqiang; Zhu, Guiliang; Ma, Shilong
2012-04-01
Remote-sensing technology plays an important role in military and industrial fields. Remote-sensing images are the main means of acquiring information from satellites and often contain confidential information. To securely transmit and store remote-sensing images, we propose a new image encryption algorithm in hybrid domains. This algorithm makes full use of the advantages of image encryption in both the spatial domain and the transform domain. First, the low-pass subband coefficients of the image's DWT (discrete wavelet transform) decomposition are sorted by a PWLCM system in the transform domain. Second, the image after IDWT (inverse discrete wavelet transform) reconstruction is diffused with a 2D (two-dimensional) Logistic map and an XOR operation in the spatial domain. The experimental results and algorithm analyses show that the new algorithm possesses a large key space and can resist brute-force, statistical and differential attacks. Meanwhile, the proposed algorithm offers sufficient encryption efficiency to satisfy practical requirements.
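The authors' scheme sorts DWT low-pass coefficients with a PWLCM and diffuses with a 2D Logistic map; the simplified sketch below keeps only the generic two-stage pattern, chaotic-sorting confusion plus chaotic XOR diffusion, using a 1D logistic map directly on pixel bytes (a deliberate simplification, not the proposed algorithm):

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Iterate the 1D logistic map x <- r*x*(1-x); values stay in (0, 1)."""
    out = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def encrypt(img, key=0.3456):
    flat = img.ravel()
    chaos = logistic_sequence(key, flat.size)
    perm = np.argsort(chaos)                   # confusion: chaotic sorting permutation
    ks = (chaos * 256.0).astype(np.uint8)      # diffusion keystream
    return (flat[perm] ^ ks).reshape(img.shape)

def decrypt(enc, key=0.3456):
    flat = enc.ravel()
    chaos = logistic_sequence(key, flat.size)
    perm = np.argsort(chaos)
    ks = (chaos * 256.0).astype(np.uint8)
    plain_perm = flat ^ ks                     # undo diffusion
    out = np.empty_like(plain_perm)
    out[perm] = plain_perm                     # undo permutation
    return out.reshape(enc.shape)

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
enc = encrypt(img)
dec = decrypt(enc)
```

Both stages are keyed by the initial condition of the map, so the round trip recovers the plaintext exactly while a wrong key yields noise.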
NASA Astrophysics Data System (ADS)
Liu, Genyou; Duan, Pengshuo; Hao, Xiaoguang; Hu, Xiaogang
2015-04-01
Previous studies indicated that most of the interannual variation in Length-Of-Day (LOD) can be explained by the joint effects of the ENSO (El Niño-Southern Oscillation) and QBO (Quasi-Biennial Oscillation) phenomena in the atmosphere. Owing to the limits of the methods used, those results could not give the time-frequency coherence spectrum between ENSO and LOD, could not indicate in which specific periods weak coherence occurred, and thus could not identify a reliable cause. This paper uses a Daubechies wavelet with 10 vanishing moments to analyze the monthly LOD time series from 1962 to 2011. Based on cross-wavelet and wavelet coherence methods, we analyze the time-frequency correlations between the ENSO and LOD series (1962-2011) on 1.3-10.7-year scales, and we have extracted and reconstructed the LOD signals on these scales. The result shows obvious weak coherence on both biennial and 5-8-year scales after 1982 relative to before 1982. According to previous works, the biennial weak coherence is due to the QBO, but the weak coherence on 5-8-year scales cannot be interpreted by the effects of ENSO and QBO. In this study, geomagnetic field signals (characterized by the Aa index) are introduced, and we further extract and reconstruct the LOD, ENSO and Aa signals in the 5-8.0-year band using wavelet packet analysis. Through analyzing the standardized series of the three signals, we found a linear time-frequency relation among the original observation series: LOD(t,f) = αENSO(t,f) + βAa(t,f). This study indicates that the LOD signals on 5.3-8.0-year scales can be expressed as a linear combination of the ENSO and Aa signals. Especially after 1982, the contributions of ENSO and Aa to LOD reach about 0.95 ms and 1.0 ms, respectively. The results also imply that there is an obvious geomagnetic field signal in the interannual variations of LOD. Furthermore, after applying the geomagnetic field signal correction, the Pearson correlation coefficient between LOD and ENSO increases from 0.51 to 0.98. Consequently, we conclude that the weak coherence after 1982 on 5.3-8.0-year scales between LOD and ENSO is mainly due to the disturbance of the Aa signal, and that the observed LOD series is the result of the interaction between the ENSO and geomagnetic field signals.
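At a fixed wavelet scale, a relation of the form LOD(t,f) = αENSO(t,f) + βAa(t,f) reduces to an ordinary least-squares fit. The sketch below demonstrates this, and the correlation improvement after removing the geomagnetic contribution, on synthetic stand-ins for the band-passed series (all waveforms and coefficients are illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600                                              # monthly samples, 1962-2011
enso = np.sin(2.0 * np.pi * np.arange(n) / 48.0)     # stand-in band-passed ENSO index
aa = np.cos(2.0 * np.pi * np.arange(n) / 78.0)       # stand-in band-passed Aa index
lod = 0.95 * enso + 1.0 * aa + 0.05 * rng.standard_normal(n)

# Fit LOD = alpha*ENSO + beta*Aa by least squares.
design = np.column_stack([enso, aa])
(alpha, beta), *_ = np.linalg.lstsq(design, lod, rcond=None)

def corr(u, v):
    """Pearson correlation coefficient."""
    return float(np.corrcoef(u, v)[0, 1])

r_raw = corr(lod, enso)                   # correlation before correction
r_corrected = corr(lod - beta * aa, enso) # after removing the Aa contribution
```

Subtracting the fitted geomagnetic term raises the LOD-ENSO correlation markedly, mirroring the 0.51-to-0.98 improvement reported in the abstract.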
NASA Astrophysics Data System (ADS)
Polanco-Martínez, J. M.; Fernández-Macho, J.; Neumann, M. B.; Faria, S. H.
2018-01-01
This paper presents an analysis of the EU peripheral (so-called PIIGS) stock market indices and the S&P Europe 350 index (SPEURO), as a European benchmark market, over the pre-crisis (2004-2007) and crisis (2008-2011) periods. We computed a rolling-window wavelet correlation for the market returns and applied a non-linear Granger causality test to the wavelet decomposition coefficients of these stock market returns. Our results show that the correlation is stronger for the crisis than for the pre-crisis period. The stock market indices from Portugal, Italy and Spain were more interconnected among themselves during the crisis than with the SPEURO. The stock market of Portugal is the most sensitive and vulnerable PIIGS member, whereas the stock market of Greece tended to move away from the European benchmark market from the onset of the 2008 financial crisis until 2011. The non-linear causality test indicates that in the first three wavelet scales (intraweek, weekly and fortnightly) the number of uni-directional and bi-directional causalities is greater during the crisis than in the pre-crisis period, because of financial contagion. Furthermore, the causality analysis shows that the direction of the Granger cause-effect for the pre-crisis and crisis periods is not invariant across the considered time-scales, and that the causality directions among the studied stock markets do not seem to have a preferential direction. These results are relevant to better understanding the behaviour of vulnerable stock markets, especially for investors and policymakers.
NASA Astrophysics Data System (ADS)
Bozchalooi, I. Soltani; Liang, Ming
2008-05-01
The vibration signal measured from a bearing contains vital information for prognostics and health assessment purposes. However, when bearings are installed as part of a complex mechanical system, the measured signal is often heavily clouded by various noises due to the compounded effect of interference from other machine elements and background noise present in the measuring device. As such, reliable condition monitoring would not be possible without proper de-noising. This is particularly true for incipient bearing faults with very weak signature signals. A new de-noising scheme is proposed in this paper to enhance the vibration signals acquired from faulty bearings. This de-noising scheme features a spectral subtraction to trim down the in-band noise prior to wavelet filtering. The Gabor wavelet is used in the wavelet transform, and its parameters, i.e., the scale and shape factor, are selected in separate steps. The proper scale is found based on a novel resonance estimation algorithm. This algorithm makes use of information derived from the variable shaft rotational speed, though such variation is highly undesirable in fault detection since it complicates the process substantially. The shape factor value is then selected by minimizing a smoothness index. This index is defined as the ratio of the geometric mean to the arithmetic mean of the wavelet coefficient moduli. De-noising results are presented for simulated signals and experimental data acquired from both normal and faulty bearings with defective outer race, inner race, and rolling element.
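The smoothness index used to select the Gabor shape factor, the ratio of the geometric to the arithmetic mean of the wavelet coefficient moduli, is easy to sketch (a generic implementation, not the authors' code):

```python
import numpy as np

def smoothness_index(coeffs, eps=1e-12):
    """Geometric mean over arithmetic mean of |coefficients|.
    eps guards the logarithm against exact zeros."""
    m = np.abs(np.asarray(coeffs, dtype=float)) + eps
    geometric = np.exp(np.mean(np.log(m)))
    return float(geometric / np.mean(m))

flat = smoothness_index(np.ones(64))     # featureless envelope -> close to 1
spiky = np.zeros(64); spiky[10] = 1.0
impulsive = smoothness_index(spiky)      # impulsive, fault-like envelope -> near 0
```

By the AM-GM inequality the index lies in (0, 1]; impulsive (fault-like) coefficient sequences yield small values, which is why minimizing it favours shape factors that concentrate fault energy.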
NASA Astrophysics Data System (ADS)
Ahmed, Rounaq; Srinivasa Pai, P.; Sriram, N. S.; Bhat, Vasudeva
2018-02-01
Vibration analysis has been used extensively in the recent past for gear fault diagnosis. The extracted vibration signals are usually contaminated with noise, which may lead to wrong interpretation of results. Denoising the extracted vibration signals aids fault diagnosis by giving meaningful results. The Wavelet Transform (WT) increases the signal-to-noise ratio (SNR), reduces the root mean square error (RMSE) and is effective for denoising gear vibration signals. The extracted signals have to be denoised with a properly selected denoising scheme in order to prevent loss of signal information along with the noise. An approach has been made in this work to show the effectiveness of Principal Component Analysis (PCA) for denoising gear vibration signals. In this regard, three selected wavelet-based denoising schemes, namely PCA, Empirical Mode Decomposition (EMD) and neighbouring coefficients (NeighCoeff, NC), have been compared with Adaptive Threshold (AT), an extensively used wavelet-based denoising scheme for gear vibration signals. The vibration signals acquired from a customized gear test rig were denoised by the above-mentioned four denoising schemes. The fault identification capability, as well as the SNR, kurtosis and RMSE, of the four denoising schemes have been compared. Features extracted from the denoised signals were used to train and test artificial neural network (ANN) models. The performance of the four denoising schemes was evaluated based on the performance of the ANN models, and the best denoising scheme was identified from the classification accuracy results. PCA proved effective in all these regards and was the best denoising scheme.
Liao, Ke; Zhu, Min; Ding, Lei
2013-08-01
The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built with three EEG/MEG software packages, and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to the other two available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies.
An Efficient Image Compressor for Charge Coupled Devices Camera
Li, Jin; Xing, Fei; You, Zheng
2014-01-01
Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, projecting CCD images onto the DWT basis produces a large number of large-amplitude high-frequency coefficients, because these images contain a large amount of complex texture and contour information, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressed sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform with a pair of bases is applied to the DWT coefficients. The pair comprises a DCT base and a Hadamard base, which are used at high and low bit rates, respectively. The best posttransform is selected by an l_p-norm-based approach. The posttransform is considered as the sparse representation stage of CS, and the posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, its performance is comparable to that of JPEG2000 at low bit rates, and it does not have the excessive implementation complexity of JPEG2000. PMID:25114977
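A hedged sketch of the base-selection step: build orthonormal DCT and Hadamard bases and keep the one whose coefficients have the smaller l_p norm, i.e. the sparser representation (p = 1 here; the paper's exact selection rule and block handling may differ):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (m + 0.5) * k / n) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def hadamard_matrix(n):
    """Orthonormal Sylvester-Hadamard matrix; n must be a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)

def best_posttransform(block, p=1.0):
    """Pick the base whose coefficients have the smaller l_p norm."""
    cands = {"dct": dct_matrix(len(block)), "hadamard": hadamard_matrix(len(block))}
    norms = {name: np.sum(np.abs(m @ block) ** p) for name, m in cands.items()}
    return min(norms, key=norms.get)

smooth = np.cos(np.pi * (np.arange(16) + 0.5) * 2.0 / 16.0)  # a single DCT atom
walsh = np.sign(hadamard_matrix(16)[5])                      # a single Walsh atom
choice_smooth = best_posttransform(smooth)
choice_walsh = best_posttransform(walsh)
```

Since both bases are orthonormal, the l2 energy is identical in each; the l1 norm therefore acts as a pure sparsity discriminator between them.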
Vergence variability: a key to understanding oculomotor adaptability?
Petrock, Annie Marie; Reisman, S; Alvarez, T
2006-01-01
Vergence eye movements were recorded from three different populations: healthy young subjects (ages 18-35 years), adaptive presbyopes and non-adaptive presbyopes (the presbyopic groups were aged above 45 years) to determine how the variability of the eye movements differs across the populations. The variability was determined using Shannon entropy calculations of wavelet transform coefficients, yielding a non-linear analysis of vergence movement variability. The data were then fed through a k-means clustering algorithm to classify each subject, with no a priori knowledge of the true subject classification. The results indicate a highly significant difference in the total entropy values between the three groups, indicating a difference in the level of information content, and thus hypothetically in oculomotor adaptability, between the three groups. Further, the frequency distribution of the entropy varied across groups.
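The core measurement, Shannon entropy of the wavelet coefficient energy distribution, can be sketched generically (the study's wavelet choice, windowing and k-means clustering step are not reproduced):

```python
import numpy as np

def wavelet_entropy(coeffs):
    """Shannon entropy (bits) of the normalized coefficient energies."""
    e = np.asarray(coeffs, dtype=float) ** 2
    p = e / e.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

# Energy spread evenly over 64 coefficients -> maximal entropy of 6 bits.
uniform = wavelet_entropy(np.ones(64))
# All energy in one coefficient -> zero entropy.
spike = np.zeros(64); spike[3] = 2.0
concentrated = wavelet_entropy(spike)
```

Higher entropy means the movement energy is spread over many time-frequency components, which the study interprets as greater information content and, hypothetically, greater oculomotor adaptability.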
Performance of the Wavelet Decomposition on Massively Parallel Architectures
NASA Technical Reports Server (NTRS)
El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)
2001-01-01
Traditionally, Fourier transforms have been utilized for performing signal analysis and representation. Although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, windowed Fourier transforms and then wavelet transforms were introduced, and it has been proven that wavelets give better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than windowed Fourier transforms. Because of these properties, and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high-performance parallel systems and meet the requirements of scientific and multimedia applications. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems in relation to the processing demands of the wavelet decomposition of digital images.
NASA Technical Reports Server (NTRS)
Turso, James; Lawrence, Charles; Litt, Jonathan
2004-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
NASA Technical Reports Server (NTRS)
Turso, James A.; Lawrence, Charles; Litt, Jonathan S.
2007-01-01
The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite-element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.
NASA Astrophysics Data System (ADS)
Ferrer, Román; Jammazi, Rania; Bolós, Vicente J.; Benítez, Rafael
2018-02-01
This paper examines the interactions between the main U.S. financial stress indices and several measures of economic activity in the time-frequency domain using a number of continuous cross-wavelet tools, including the usual wavelet squared coherence and phase difference as well as two new summary wavelet-based measures. The empirical results show that the relationship between financial stress and the U.S. real economy varies considerably over time and depending on the time horizon considered. A significant adverse effect of financial stress on U.S. economic activity is observed since the onset of the subprime mortgage crisis in the summer of 2007, indicating that the impact of financial market stress on the real economy is particularly severe during periods of major financial turmoil. Furthermore, the significant linkage between financial stress and the economic environment is mostly concentrated at time horizons from one to four years, demonstrating that the effect of financial stress on economic activity is especially visible in the long-run.
Shu, Qiaosheng; Liu, Zuoxin; Si, Bingcheng
2008-01-01
Understanding the correlation between soil hydraulic parameters and soil physical properties is a prerequisite for the prediction of soil hydraulic properties from soil physical properties. The objective of this study was to examine the scale- and location-dependent correlation between two water retention parameters (alpha and n) in the van Genuchten (1980) function and soil physical properties (sand content, bulk density [Bd], and organic carbon content) using wavelet techniques. Soil samples were collected from a transect from Fuxin, China. Soil water retention curves were measured, and the van Genuchten parameters were obtained through curve fitting. Wavelet coherency analysis was used to elucidate the location- and scale-dependent relationships between these parameters and soil physical properties. Results showed that the wavelet coherence between alpha and sand content was significantly different from red noise at small scales (8-20 m) and over distances from 30 to 470 m. Their wavelet phase spectrum was predominantly out of phase, indicating a negative correlation between these two variables. A strong negative correlation between alpha and Bd existed mainly at medium scales (30-80 m). However, parameter n had a strong positive correlation only with Bd, at scales between 20 and 80 m. Neither of the two retention parameters had significant wavelet coherency with organic carbon content. These results suggest that location-dependent scale analyses are necessary to improve the prediction of soil water retention characteristics.
Sriraam, N.
2012-01-01
Developments of new classes of efficient compression algorithms, software systems, and hardware for data intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media and hence there is a need for an efficient compression system. This paper presents a new and efficient high performance lossless EEG compression using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, Lempel-Ziv-arithmetic encoder. Also a new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with less encoding time thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238
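The abstract does not state which integer wavelet transform is used; the classic S-transform (integer Haar, built from lifting steps) illustrates why such transforms suit lossless coding, since the coefficients are integers and inversion is exact:

```python
import numpy as np

def int_haar_forward(x):
    """Integer-to-integer Haar (S-transform) via lifting:
    predict the odd sample from the even one, then update."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even               # detail (prediction residue)
    a = even + (d >> 1)          # approximation (update; >> 1 floors by 2)
    return a, d

def int_haar_inverse(a, d):
    """Exact inverse: undo the update, then the prediction."""
    even = a - (d >> 1)
    odd = d + even
    x = np.empty(2 * len(a), dtype=np.int64)
    x[0::2] = even
    x[1::2] = odd
    return x

rng = np.random.default_rng(3)
eeg = rng.integers(-2048, 2048, size=1024)   # stand-in 12-bit EEG samples
a, d = int_haar_forward(eeg)
restored = int_haar_inverse(a, d)
```

The integer residues d are what a predictor-plus-entropy-coder pipeline (as in the paper) would compress; the predictor and Lempel-Ziv-arithmetic stages are omitted here.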
NASA Astrophysics Data System (ADS)
Sumarna; Astono, J.; Purwanto, A.; Agustika, D. K.
2018-04-01
A phonocardiograph (PCG) system consisting of an electronic stethoscope, condenser microphone, microphone preamplifier, and battery has been developed. The PCG system is used to detect heart abnormalities. Although PCG is not widely used because many factors affect its performance, in this research we try to reduce the factors affecting its consistency. To find out whether the system is repeatable and reliable, it first has to be characterized. This research aims to determine whether the PCG system can provide the same results for measurements of the same patient. The system is characterized by analyzing whether it can recognize the S1 and S2 heart sounds of the same person. From the recordings, S1 and S2 are transformed using a level-1 discrete wavelet transform with the Haar mother wavelet, and features are extracted using the data range of the approximation coefficients. The results were analyzed with a backpropagation neural network pattern-recognition system. Part of the data was used for training and part as test data. From the results of the pattern recognition system, the system's accuracy in recognizing S1 reaches 87.5%, while for S2 it reaches only 67%.
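The described feature pipeline (level-1 Haar DWT, then the data range of the approximation coefficients per heart-sound segment) is compact enough to sketch; the segment values below are illustrative, and the backpropagation classifier stage is omitted:

```python
import numpy as np

def haar_level1(x):
    """Level-1 DWT with the Haar mother wavelet."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def range_feature(segment):
    """Feature per heart-sound segment: data range (max - min)
    of the level-1 approximation coefficients."""
    a, _ = haar_level1(segment)
    return float(a.max() - a.min())

f_flat = range_feature(np.ones(8))                      # constant segment -> 0
f_step = range_feature(np.array([0.0, 0.0, 1.0, 1.0]))  # one step
```

One scalar per segment keeps the neural network input small, at the cost of discarding the detail-band information.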
Li, Xiang; Yang, Zhibo; Chen, Xuefeng
2014-01-01
The active structural health monitoring (SHM) approach for the complex composite laminate structures of wind turbine blades (WTBs) addresses the important and complicated problem of signal noise. After illustrating the wind energy industry's development perspectives and its crucial requirement for SHM, an improved redundant second generation wavelet transform (IRSGWT) pre-processing algorithm based on neighboring coefficients is introduced for denoising weak signals. The method avoids the drawbacks of conventional wavelet methods, which lose information in transforms, and the shortcomings of redundant second generation wavelet (RSGWT) denoising, which can lead to error propagation. For large-scale WTB composites, how to minimize the number of sensors while ensuring accuracy is also a key issue. A sparse sensor array optimization of composites for WTB applications is proposed that can reduce the number of transducers required. Compared to a full sixteen-transducer array, the optimized eight-transducer configuration displays better accuracy in identifying the correct position of simulated damage (a mass load) on composite laminates with anisotropic characteristics than a non-optimized array. It can help to guarantee more flexible and qualified monitoring of the areas that more frequently suffer damage. The proposed methods are verified experimentally on specimens of carbon fiber reinforced resin composite laminates. PMID:24763210
Wavelet investigation of preferential concentration in particle-laden turbulence
NASA Astrophysics Data System (ADS)
Bassenne, Maxime; Urzay, Javier; Schneider, Kai; Moin, Parviz
2017-11-01
Direct numerical simulations of particle-laden homogeneous-isotropic turbulence are employed in conjunction with wavelet multi-resolution analyses to study preferential concentration in both physical and spectral spaces. Spatially-localized energy spectra for velocity, vorticity and particle-number density are computed, along with their spatial fluctuations that enable the quantification of scale-dependent probability density functions, intermittency and inter-phase conditional statistics. The main result is that particles are found in regions of lower turbulence spectral energy than the corresponding mean. This suggests that modeling the subgrid-scale turbulence intermittency is required for capturing the small-scale statistics of preferential concentration in large-eddy simulations. Additionally, a method is defined that decomposes a particle number-density field into the sum of a coherent and an incoherent components. The coherent component representing the clusters can be sparsely described by at most 1.6% of the total number of wavelet coefficients. An application of the method, motivated by radiative-heat-transfer simulations, is illustrated in the form of a grid-adaptation algorithm that results in non-uniform meshes refined around particle clusters. It leads to a reduction of the number of control volumes by one to two orders of magnitude. PSAAP-II Center at Stanford (Grant DE-NA0002373).
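The coherent/incoherent decomposition can be caricatured by hard-thresholding: keep the largest-magnitude ~1.6% of wavelet coefficients as the coherent (cluster) part and assign the remainder to the incoherent part. The field below is synthetic and the threshold rule is an assumption, not the authors' exact criterion:

```python
import numpy as np

def coherent_split(coeffs, keep_fraction=0.016):
    """Split a coefficient field into a coherent part (the k largest
    magnitudes) and the incoherent remainder; the parts sum to the input."""
    c = np.asarray(coeffs, dtype=float)
    k = max(1, int(round(keep_fraction * c.size)))
    thr = np.sort(np.abs(c.ravel()))[-k]             # k-th largest magnitude
    coherent = np.where(np.abs(c) >= thr, c, 0.0)
    return coherent, c - coherent

# Stand-in number-density coefficients: a few strong "cluster" modes
# on top of a weak incoherent background.
rng = np.random.default_rng(5)
c = 0.01 * rng.standard_normal(1000)
c[:16] = 10.0
coherent, incoherent = coherent_split(c, keep_fraction=0.016)
energy_fraction = float(np.sum(coherent**2) / np.sum(c**2))
```

When clusters dominate, a tiny fraction of coefficients carries nearly all of the energy, which is what makes the sparse description (and the cluster-adapted mesh refinement) pay off.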
Koley, Ebha; Verma, Khushaboo; Ghosh, Subhojit
2015-01-01
Restrictions on right of way and increasing power demand have boosted the development of six-phase transmission. It offers a viable alternative for transmitting more power without major modification of the existing structure of the three-phase double-circuit transmission system. Despite these advantages, the low acceptance of the six-phase system is attributed to the unavailability of a proper protection scheme. The complexity arising from the large number of possible faults in six-phase lines makes protection quite challenging. The proposed work presents a hybrid wavelet transform and modular artificial neural network based fault detector, classifier and locator for six-phase lines using single-end data only. The standard deviations of the approximation coefficients of the voltage and current signals, obtained using the discrete wavelet transform, are applied as input to the modular artificial neural network for fault classification and location. The proposed scheme has been tested for all 120 types of shunt faults with variation in location, fault resistance and fault inception angle. The variation in power system parameters, viz. short circuit capacity of the source and its X/R ratio, voltage, frequency and CT saturation, has also been investigated. The results confirm the effectiveness and reliability of the proposed protection scheme, which makes it suitable for real-time implementation.
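The feature-extraction step, the standard deviation of DWT approximation coefficients fed to the classifier, can be sketched as follows. This is a hedged illustration rather than the authors' code: a level-1 Haar transform stands in for their DWT, and the signals, sampling rate and "fault" are synthetic.

```python
import numpy as np

def approx_coeffs(signal):
    """Level-1 Haar approximation coefficients (low-frequency content)."""
    s = signal[: signal.size - signal.size % 2]
    return (s[0::2] + s[1::2]) / np.sqrt(2.0)

def feature_vector(signals):
    """Std of approximation coefficients for each phase signal: ANN input."""
    return np.array([np.std(approx_coeffs(s)) for s in signals])

t = np.linspace(0.0, 0.1, 1000)           # 0.1 s window, hypothetical 10 kHz sampling
healthy = np.sin(2 * np.pi * 50 * t)      # 50 Hz phase quantity
faulted = healthy.copy()
faulted[500:] *= 3.0                      # crude fault model: amplitude jump mid-window
features = feature_vector([healthy, faulted])
```

The faulted phase yields a larger standard deviation, which is the kind of separability the modular ANN exploits for classification.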
An image adaptive, wavelet-based watermarking of digital images
NASA Astrophysics Data System (ADS)
Agreste, Santa; Andaloro, Guido; Prestipino, Daniela; Puccio, Luigia
2007-12-01
In digital management, multimedia content and data can easily be used in an illegal way: copied, modified and redistributed. Copyright protection, intellectual and material rights protection for authors, owners, buyers and distributors, and the authenticity of content are crucial factors in solving an urgent and real problem. In such a scenario, digital watermarking techniques are emerging as a valid solution. In this paper, we describe an algorithm, called WM2.0, for an invisible watermark: private, strong, wavelet-based and developed for the protection and authentication of digital images. The use of the discrete wavelet transform (DWT) is motivated by its good time-frequency localization and its good match with the characteristics of the human visual system. These two combined elements are important in building an invisible and robust watermark. WM2.0 works on a dual scheme: watermark embedding and watermark detection. The watermark is embedded into the high-frequency DWT components of a specific sub-image, and it is calculated from the image features and statistical properties. Watermark detection applies a re-synchronization between the original and watermarked images. The correlation between the watermarked DWT coefficients and the watermark signal is calculated according to the Neyman-Pearson statistical criterion. Experimentation on a large set of different images has shown the watermark to be resistant against geometric, filtering and StirMark attacks with a low false-alarm rate.
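A correlation detector with a Neyman-Pearson style threshold, as used in the detection stage, might look like the sketch below. The embedding strength `alpha`, the i.i.d. Gaussian coefficient model and the 1% false-alarm target are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4096
coeffs = rng.standard_normal(n)              # stand-in high-frequency DWT coefficients
watermark = rng.choice([-1.0, 1.0], size=n)  # bipolar pseudo-random watermark
alpha = 0.2                                  # assumed embedding strength

marked = coeffs + alpha * watermark          # additive embedding

def detect(c, w, threshold):
    """Correlation detector: sample correlation against a threshold chosen
    (Neyman-Pearson style) to fix the false-alarm probability."""
    rho = np.mean(c * w)
    return rho, rho > threshold

# Under H0 (no mark), rho ~ N(0, var(c)/n); 2.33 sigma gives ~1% false alarms
threshold = 2.33 * np.std(coeffs) / np.sqrt(n)
rho_marked, found = detect(marked, watermark, threshold)
rho_clean, false_alarm = detect(coeffs, watermark, threshold)
```

The detector's correlation on the marked coefficients exceeds that on unmarked coefficients by exactly `alpha` (in expectation and, here, deterministically, since the same noise term appears in both).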
Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images
NASA Astrophysics Data System (ADS)
Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.
2017-10-01
Supervised classification handles a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven to be beneficial for the interpretation of the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to be a technique that enhances the structures in the image. This paper proposes a multi-component denoising approach to increase the classification accuracy when a classification method is applied. It is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before the classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by thresholding. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, as well as the cost of the whole classification chain, is high, but it is reduced to real-time behavior for some applications through computation on NVIDIA multi-GPU platforms.
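The core denoising chain (2D DWT, coefficient thresholding, inverse 2D DWT) can be illustrated with a single-level separable Haar transform in plain numpy. The hard threshold of three robust noise standard deviations is an assumed rule; the paper does not specify its wavelet or threshold, and no GPU code is shown here.

```python
import numpy as np

SQ2 = np.sqrt(2.0)

def dwt2(a):
    """One-level separable 2D Haar DWT (rows, then columns)."""
    lo, hi = (a[:, 0::2] + a[:, 1::2]) / SQ2, (a[:, 0::2] - a[:, 1::2]) / SQ2
    ll, lh = (lo[0::2] + lo[1::2]) / SQ2, (lo[0::2] - lo[1::2]) / SQ2
    hl, hh = (hi[0::2] + hi[1::2]) / SQ2, (hi[0::2] - hi[1::2]) / SQ2
    return ll, lh, hl, hh

def idwt2(ll, lh, hl, hh):
    """Inverse of dwt2 (columns, then rows)."""
    lo, hi = np.empty((2 * ll.shape[0], ll.shape[1])), np.empty((2 * ll.shape[0], ll.shape[1]))
    lo[0::2], lo[1::2] = (ll + lh) / SQ2, (ll - lh) / SQ2
    hi[0::2], hi[1::2] = (hl + hh) / SQ2, (hl - hh) / SQ2
    a = np.empty((lo.shape[0], 2 * lo.shape[1]))
    a[:, 0::2], a[:, 1::2] = (lo + hi) / SQ2, (lo - hi) / SQ2
    return a

rng = np.random.default_rng(0)
clean = 10.0 * np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # smooth component
noisy = clean + 0.5 * rng.standard_normal((64, 64))

ll, lh, hl, hh = dwt2(noisy)
sigma = np.median(np.abs(hh)) / 0.6745       # robust noise estimate from the HH band
for band in (lh, hl, hh):
    band[np.abs(band) < 3.0 * sigma] = 0.0   # hard threshold on the detail bands
denoised = idwt2(ll, lh, hl, hh)
```

Thresholding only the detail bands removes most of the noise energy while leaving the smooth structure, carried by the approximation band, intact.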
NASA Astrophysics Data System (ADS)
Wu, T. Y.; Lin, S. F.
2013-10-01
Automatic extraction of suspected lesions is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. The suspected lesion tissues in low-contrast tissue regions can be localized with feature-based segmentation (FBS) based on local texture features, and can be delineated accurately by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is introduced. The derived wavelet coefficients are applied to derive the local features for FBS and to generate enhanced energy maps for ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also applied as the initial condition for the fine delineation computation. Consequently, the suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT data sets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results exceeds 93%.
Improved grid-noise removal in single-frame digital moiré 3D shape measurement
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Kofman, Jonathan
2016-11-01
A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the fast Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, to enable single-frame moiré-based measurement. Experiments using FFT, Discrete Wavelet Transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, minimizing artifacts, and preserving the moiré pattern without blurring and degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and the lowest roughness index for all test objects, indicating the best grid-noise removal in comparison to the other techniques.
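A simplified, single-stage illustration of the FFT part of the idea, suppressing the spectral peaks of periodic grid noise while protecting low-frequency moiré content, is sketched below. The actual SWT-FFT method operates on wavelet subbands; the protected radius, peak threshold, and synthetic images here are invented for the demonstration.

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n]
pattern = np.sin(2 * np.pi * 2 * x / n) * np.cos(2 * np.pi * 2 * y / n)  # moire-like content
grid = 0.8 * np.sin(2 * np.pi * 16 * x / n)   # straight-line grid noise (higher frequency)
image = pattern + grid

F = np.fft.fft2(image)
mag = np.abs(F)
fy = np.minimum(np.arange(n), n - np.arange(n))   # wrapped frequency index per axis
dist = np.hypot(fy[:, None], fy[None, :])
keep_radius = 6                                   # protect low-frequency moire content
outside = dist > keep_radius
# Notch filter: zero strong isolated peaks outside the protected radius
peak = mag > 20 * np.median(mag[outside])
F[outside & peak] = 0.0
cleaned = np.fft.ifft2(F).real
```

The grid's energy is concentrated in a few high-frequency bins, so notching those bins removes it almost entirely while the moiré pattern, which lives inside the protected radius, is untouched.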
Hyperspectral imaging with wavelet transform for classification of colon tissue biopsy samples
NASA Astrophysics Data System (ADS)
Masood, Khalid
2008-08-01
Automatic classification of medical images is a part of our computerised medical imaging programme to support pathologists in their diagnosis. Hyperspectral imaging has found applications in medical imagery, and its usage in the analysis of biopsy images is increasing significantly. In this paper, we present a histopathological analysis for the classification of colon biopsy samples into benign and malignant classes. The proposed study is based on a comparison between 3D spectral/spatial analysis and 2D spatial analysis. Wavelet textural features in the wavelet domain are used in both approaches for the classification of colon biopsy samples. Experimental results indicate that the incorporation of wavelet textural features with a support vector machine, in 2D spatial analysis, achieves the best classification accuracy.
Recent advances in wavelet technology
NASA Technical Reports Server (NTRS)
Wells, R. O., Jr.
1994-01-01
Wavelet research has been developing rapidly over the past five years, with significant activity at numerous universities in the academic world. In the industrial world, there have been developments at Aware, Inc., Lockheed, Martin Marietta, TRW, Kodak, Exxon, and many others. The government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many others. The literature of the past five years includes a recent book indexing citations on this subject over the past decade; it contains over 1,000 references and abstracts.
Necessary and sufficient condition for the realization of the complex wavelet
NASA Astrophysics Data System (ADS)
Keita, Alpha; Qing, Qianqin; Wang, Nengchao
1997-04-01
Wavelet theory is a new signal analysis theory developed in recent years, and its appearance has attracted experts in many different fields to study it in depth. The wavelet transform is a new kind of time-frequency analysis method, with localization realizable in either the time domain or the frequency domain. It has many desirable characteristics that other time-frequency analysis methods, such as the Gabor transform or the Wigner-Ville distribution, lack: orthogonality, direction selectivity, variable time-frequency resolution, adjustable local support, sparse representation of data, and so on. All of these make the wavelet transform a very important new tool and method in the field of signal analysis. Because the computation of complex wavelets is very difficult, real wavelet functions are used in applications. In this paper, we present a necessary and sufficient condition under which a real wavelet function can be obtained from a complex wavelet function. This theorem has significant theoretical value. The paper builds its technique on the Hartley transformation. Hartley, who proposed it, was a signal engineering expert; his transformation was overlooked for about 40 years, because the state of production and technology at the time could not demonstrate its superiority. Only at the end of the 1970s and the early 1980s, after the development of fast Fourier transform algorithms and their hardware implementation, did transforms whose forward and inverse forms are identical come to be taken seriously. The W transformation, proposed by Zhongde Wang, pushed forward the study of the Hartley transformation and its fast algorithms.
NASA Astrophysics Data System (ADS)
Joshi, Nitin; Gupta, Divya; Suryavanshi, Shakti; Adamowski, Jan; Madramootoo, Chandra A.
2016-12-01
In this study, seasonal trends as well as dominant and significant periods of variability of drought variables were analyzed for 30 rainfall subdivisions in India over 141 years (1871-2012). The standardized precipitation index (SPI) was used as a meteorological drought indicator, and various drought variables (monsoon SPI, non-monsoon SPI, yearly SPI, annual drought duration, annual drought severity and annual drought peak) were analyzed. The discrete wavelet transform was used in conjunction with the Mann-Kendall test to analyze trends and dominant periodicities associated with the drought variables. Furthermore, the continuous wavelet transform (CWT) based global wavelet spectrum was used to analyze significant periods of variability associated with the drought variables. From the trend analysis, we observed that over the second half of the 20th century, drought occurrences increased significantly in subdivisions of Northeast and Central India. Both short-term (2-8 years) and decadal (16-32 years) periodicities were found to influence the trends in the drought variables. However, CWT analysis indicated that the dominant periodic components were not significant for most of the geographical subdivisions. Although inter-annual and inter-decadal periodic components play an important role, they may not completely explain the variability associated with the drought variables across the country.
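The Mann-Kendall step can be sketched as follows. This is a textbook implementation (without the tie correction) applied to a synthetic SPI-like series, not the authors' code; the trend slope and noise level are invented.

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and the Z score.
    Ties are ignored for simplicity (the full test corrects their variance)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = 0.0
    for k in range(n - 1):
        s += np.sum(np.sign(x[k + 1:] - x[k]))   # pairwise sign comparisons
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, z

rng = np.random.default_rng(1)
years = np.arange(60)
spi = -0.02 * years + 0.3 * rng.standard_normal(60)  # synthetic drying trend + noise
s, z = mann_kendall(spi)
```

A Z score below -1.96 flags a significant decreasing (drying) trend at the 5% level, which is the kind of result reported for the Northeast and Central India subdivisions.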
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been shown to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further encoded using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
NASA Astrophysics Data System (ADS)
Mohyud Din, S. T.; Zubair, T.; Usman, M.; Hamid, M.; Rafiq, M.; Mohsin, S.
2018-04-01
This study is devoted to analyzing the influence of a variable diffusion coefficient and variable thermal conductivity on heat and mass transfer in Casson fluid flow. The behavior of the concentration and temperature profiles in the presence of Joule heating and viscous dissipation is also studied. The dimensionless conservation laws with suitable BCs are solved via the Modified Gegenbauer Wavelets Method (MGWM). It has been observed that increases in the Casson fluid parameter (β) and the parameter ɛ enhance the Nusselt number. Moreover, the Nusselt number of a Newtonian fluid is less than that of the Casson fluid. Mass transport can be increased more by a solute with a variable diffusion coefficient than by a solute with a constant diffusion coefficient. A detailed analysis of the results is appropriately highlighted. The obtained results, error estimates, and convergence analysis reconfirm the credibility of the proposed algorithm. It is concluded that MGWM is an appropriate tool to tackle nonlinear physical models and hence may be extended to other nonlinear problems of diversified physical nature.
Du, Pan; Kibbe, Warren A; Lin, Simon M
2006-09-01
A major problem for current peak detection algorithms is that noise in mass spectrometry (MS) spectra gives rise to a high rate of false positives. The false positive rate is especially problematic in detecting peaks with low amplitudes. Usually, various baseline correction algorithms and smoothing methods are applied before attempting peak detection. This approach is very sensitive to the amount of smoothing and the aggressiveness of the baseline correction, which make peak detection results inconsistent between runs, instrumentation and analysis methods. Most peak detection algorithms simply identify peaks based on amplitude, ignoring the additional information present in the shape of the peaks in a spectrum. In our experience, 'true' peaks have characteristic shapes, and a shape-matching function that yields a 'goodness of fit' coefficient should give a more robust peak identification method. Based on these observations, a continuous wavelet transform (CWT)-based peak detection algorithm has been devised that identifies peaks with different scales and amplitudes. Transforming the spectrum into wavelet space simplifies the pattern-matching problem and, in addition, provides a powerful technique for identifying and separating the signal from spike noise and colored noise. This transformation, with the additional information provided by the 2D CWT coefficients, can greatly enhance the effective signal-to-noise ratio. Furthermore, with this technique no baseline removal or peak smoothing preprocessing steps are required before peak detection, which improves the robustness of peak detection under a variety of conditions. The algorithm was evaluated on SELDI-TOF spectra with known polypeptide positions, and comparisons with two other popular algorithms were performed. The results show that the CWT-based algorithm can identify both strong and weak peaks while keeping the false positive rate low.
The algorithm is implemented in R and will be included as an open source module in the Bioconductor project.
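A toy version of CWT-based peak picking with the Mexican hat (Ricker) wavelet is sketched below. It captures the key idea, that true peaks respond strongly across several scales while one-sample spikes do not, but the ridge-tracking and noise-estimation details of the published algorithm (and its R implementation) are not reproduced; the widths, thresholds, and synthetic spectrum are invented.

```python
import numpy as np

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet, unit L2 norm, sampled at `points` with width `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-(t / a) ** 2 / 2)

def cwt_peaks(spectrum, widths, min_coef=3.0):
    """Naive CWT peak picker: a point is a peak if its coefficient is a local
    maximum and stands above min_coef at EVERY tested scale."""
    rows = [np.convolve(spectrum, ricker(10 * w + 1, w), mode="same") for w in widths]
    score = np.array(rows).min(axis=0)        # must be strong at all scales
    return [i for i in range(1, len(spectrum) - 1)
            if score[i] > min_coef and score[i] >= score[i - 1] and score[i] >= score[i + 1]]

rng = np.random.default_rng(7)
spectrum = 0.2 * rng.standard_normal(500)     # noise floor
mz = np.arange(500)
for center in (100, 300):
    spectrum += 8.0 * np.exp(-0.5 * ((mz - center) / 4.0) ** 2)  # two true peaks
spectrum[250] += 6.0                          # a one-sample spike (not a real peak)
found = cwt_peaks(spectrum, widths=(2, 4, 8))
```

The spike at 250 has a large amplitude but a weak response at the coarsest scale, so taking the minimum coefficient across scales rejects it while both broad peaks survive.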
Rezaee, Kh.; Haddadnia, J.
2013-01-01
Background: Breast cancer is currently one of the leading causes of death among women worldwide. The diagnosis and separation of cancerous tumors in mammographic images require accuracy, experience and time, and have always posed a major challenge to radiologists and physicians. Objective: This paper proposes a new algorithm which draws on discrete wavelet transform and adaptive K-means techniques to transform the medical images, perform tumor estimation, and detect breast cancer tumors in mammograms at early stages. It also allows rapid processing of the input data. Method: In the first step, after designing a filter, the discrete wavelet transform is applied to the input images and the approximation coefficients of the scaling components are constructed. Then, the different parts of the image are classified in a continuous spectrum. In the next step, the appropriate threshold is selected using an adaptive K-means algorithm with smart initialization and choice of the number of clusters. Finally, the suspicious cancerous mass is separated by applying image processing techniques. Results: We received 120 mammographic images in LJPEG format, scanned in gray-scale at 50-micron resolution with 3% noise and 20% INU, from clinical data taken from two medical databases (mini-MIAS and DDSM). The proposed algorithm detected tumors at an acceptable level, with an average accuracy of 92.32% and sensitivity of 90.24%. The Kappa coefficient was approximately 0.85, which demonstrates the reliability of the system's performance. Conclusion: The exact positioning of the cancerous tumors allows the radiologist to determine the stage of disease progression and suggest an appropriate treatment in accordance with the tumor growth. The low PPV and high NPV of the system warrant its reliability, and both clinical specialists and patients can trust its output. PMID:25505753
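The threshold-selection idea (cluster pixel intensities with K-means, then split between cluster centers) can be sketched on synthetic intensity data. Plain Lloyd iteration stands in for the paper's adaptive initialization and cluster-number selection, and the intensity distributions are invented.

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on intensities (a stand-in for the paper's
    adaptive K-means); returns sorted cluster centers."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(values, size=k, replace=False))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return np.sort(centers)

rng = np.random.default_rng(3)
background = rng.normal(0.2, 0.05, size=5000)   # darker normal tissue (hypothetical)
mass = rng.normal(0.8, 0.05, size=500)          # brighter suspicious mass (hypothetical)
pixels = np.concatenate([background, mass])

centers = kmeans_1d(pixels, k=2)
threshold = centers.mean()                      # midpoint between the two cluster centers
segmented = pixels > threshold                  # pixels flagged as suspicious mass
```

With two well-separated intensity modes, the midpoint between the converged centers lands between the modes and isolates the bright mass pixels.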
Using sparse regularization for multi-resolution tomography of the ionosphere
NASA Astrophysics Data System (ADS)
Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.
2015-10-01
Computerized ionospheric tomography (CIT) is a technique for reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements. It is usually formulated as an inverse problem. In this experiment, the measurements are considered to come from the phase of the GPS signal and are therefore affected by bias, so the STEC cannot be considered in absolute terms but rather in relative terms. Measurements are collected from receivers that are not evenly distributed in space, and together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes are continuously changing in time and space. This can limit the accuracy with which CIT resolves the structures and processes that describe the ionosphere. Some inversion techniques are based on ℓ2 minimization algorithms (i.e. Tikhonov regularization), and such a standard approach, using spherical harmonics, is implemented here as a reference against which to compare the new method. A new approach is proposed for CIT that aims to permit sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on the ℓ1 minimization technique and on wavelet basis functions, chosen for their compact representation properties. The ℓ1 minimization is selected because it can optimize the result under an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, which is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization in estimating the coefficients. This is particularly true for an uneven observation geometry, and especially for multi-resolution CIT.
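The ℓ1-versus-ℓ2 comparison can be illustrated with iterative soft thresholding (ISTA) on a toy underdetermined problem. The matrix here is random rather than a real geometry matrix of wavelet basis functions, and the sparsity level and regularization weights are invented; it only demonstrates why ℓ1 recovers sparse coefficients where ℓ2 smears them.

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Iterative soft thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(5)
n_obs, n_coef = 80, 200                      # fewer (uneven) observations than unknowns
A = rng.standard_normal((n_obs, n_coef)) / np.sqrt(n_obs)
x_true = np.zeros(n_coef)
x_true[[10, 50, 120]] = [3.0, -2.0, 4.0]     # sparse "wavelet" coefficients
y = A @ x_true + 0.01 * rng.standard_normal(n_obs)

x_l1 = ista(A, y, lam=0.05)                                      # sparse (l1) solution
x_l2 = A.T @ np.linalg.solve(A @ A.T + 0.05 * np.eye(n_obs), y)  # Tikhonov (l2) solution
```

The ℓ2 solution spreads energy over all 200 coefficients, while ISTA drives all but the true support to zero.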
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall in poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimation of the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions. These rainfall estimates were all able to simulate streamflow superior to the results of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions.
The range and variance of rainfall time series able to simulate streamflow superior to that of a traditional calibration approach demonstrate equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
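The dimensionality-reduction step, representing a rainfall series by its low-order approximation coefficients, can be sketched with a multilevel Haar DWT. The wavelet choice, decomposition depth, and synthetic storm series are assumptions for illustration; the study tested several wavelets and depths.

```python
import numpy as np

def haar_analysis(x, levels):
    """Multilevel orthonormal Haar DWT: approximation at `levels` plus details."""
    details, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        details.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return a, details

def haar_synthesis(a, details):
    """Invert haar_analysis."""
    for d in reversed(details):
        x = np.empty(2 * a.size)
        x[0::2] = (a + d) / np.sqrt(2.0)
        x[1::2] = (a - d) / np.sqrt(2.0)
        a = x
    return a

rain = np.zeros(256)                      # 256 rainfall time steps (synthetic)
rain[40:60] = 4.0                         # a large storm event
rain[180:200] = 2.0                       # a smaller event

approx, det = haar_analysis(rain, levels=3)
# Estimate only the 32 approximation coefficients instead of 256 rainfall values;
# reconstruct with the details zeroed to see the reduced-dimension series.
reduced = haar_synthesis(approx, [np.zeros_like(d) for d in det])
```

The inversion then searches a 32-dimensional coefficient space rather than 256 rainfall values, while the reconstruction still conserves total rainfall depth and the timing of the main events.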
NASA Astrophysics Data System (ADS)
Vaudor, Lise; Piegay, Herve; Wawrzyniak, Vincent; Spitoni, Marie
2016-04-01
The form and functioning of a geomorphic system result from processes operating at various spatial and temporal scales. Longitudinal channel characteristics thus exhibit complex patterns which vary according to the scale of study, might be periodic or segmented, and are generally blurred by noise. Describing the intricate, multiscale structure of such signals, and identifying at which scales the patterns are dominant and over which sub-reach, could help determine at which scales they should be investigated, and provide insights into the main controlling factors. Wavelet transforms aim at describing data at multiple scales (either in time or space), and are now exploited in geophysics for the analysis of nonstationary series of data. They provide a consistent, non-arbitrary, and multiscale description of a signal's variations and help explore potential causalities. Nevertheless, their use in fluvial geomorphology, notably to study longitudinal patterns, is hindered by a lack of user-friendly tools to help understand, implement, and interpret them. We have developed a free application, The Wavelet ToolKat, designed to facilitate the use of wavelet transforms on temporal or spatial series. We illustrate its usefulness describing longitudinal channel curvature and slope of three freely meandering rivers in the Amazon basin (the Purus, Juruá and Madre de Dios rivers), using topographic data generated from NASA's Shuttle Radar Topography Mission (SRTM) in 2000. Three types of wavelet transforms are used, with different purposes. Continuous Wavelet Transforms are used to identify in a non-arbitrary way the dominant scales and locations at which channel curvature and slope vary. Cross-wavelet transforms, and wavelet coherence and phase are used to identify scales and locations exhibiting significant channel curvature and slope co-variations. 
Maximal Overlap Discrete Wavelet Transforms decompose data into their variations at a series of scales and are used to provide smoothed descriptions of the series at the scales deemed relevant.
Wavelet analysis of MR functional data from the cerebellum
NASA Astrophysics Data System (ADS)
Romero Sánchez, Karen; Vásquez Reyes, Marcos A.; González Gómez, Dulce I.; Hidalgo Tobón, Silvia; Hernández López, Javier M.; Dies Suarez, Pilar; Barragán Pérez, Eduardo; De Celis Alonso, Benito
2014-11-01
The main goal of this project was to create a computer algorithm, based on wavelet analysis of BOLD signals, to automatically diagnose ADHD using information from resting state MR experiments. Male right-handed volunteers (children aged between 7 and 11 years) were studied and compared with age-matched controls. Wavelet analysis, a mathematical tool used to decompose time series into elementary constituents and detect hidden information, was applied to the BOLD signal obtained from the cerebellum 8 region of all our volunteers. Statistically significant differences (p<0.02) in the values of the a parameters of the wavelet analysis were found between groups. This difference might help in the future to distinguish healthy subjects from ADHD patients and therefore to diagnose ADHD.
NASA Astrophysics Data System (ADS)
Oygur, Tunc; Unal, Gazanfer
Shocks, jumps, booms and busts are typical large-fluctuation markers which appear in crises. Although the literature offers many different models and leading indicators for determining the structure of a crisis, they vary according to crisis type. In this paper, we investigate the structure of the dynamic correlation of stock return, interest rate, exchange rate and trade balance differences during crisis periods in Turkey between October 1990 and March 2015, applying wavelet coherency methodologies to determine the nature of the crises. The time period includes Turkey's currency and banking crises, the US sub-prime mortgage crisis and the European sovereign debt crisis, which occurred in 1994, 2001, 2008 and 2009, respectively. Empirical results show that stock return, interest rate, exchange rate and trade balance differences are significantly linked during the financial crises in Turkey. The cross wavelet power, wavelet coherency, multiple wavelet coherency and quadruple wavelet coherency methodologies have been used to examine the structure of the dynamic correlation. Moreover, as a consequence of the quadruple and multiple wavelet coherence, strongly correlated large scales indicate linear behavior, and hence VARMA (vector autoregressive moving average) models give better fitting and forecasting performance. In addition, increasing the dimension of the model at strongly correlated scales leads to more accurate results than scalar counterparts.
NASA Astrophysics Data System (ADS)
Ushenko, Yu. A.; Wanchuliak, O. Y.
2013-06-01
An optical model of the polycrystalline networks of myocardium protein fibrils is presented. A technique for determining the coordinate distribution of the polarization azimuth at points of laser images of myocardium histological sections is suggested. The interrelations between the statistical parameters (statistical moments of the 1st-4th orders) that characterize the distributions of wavelet coefficients of polarization maps of myocardium layers and the causes of death are investigated and presented.
Lu, Jia-hui; Zhang, Yi-bo; Zhang, Zhuo-yong; Meng, Qing-fan; Guo, Wei-liang; Teng, Li-rong
2008-06-01
A calibration model (WT-RBFNN) combining the wavelet transform (WT) and a radial basis function neural network (RBFNN) was proposed for the simultaneous and rapid determination of rifampicin and isoniazid in tablets by near infrared reflectance spectroscopy (NIRS). The approximation coefficients were used as input data to the RBFNN. The network parameters, including the number of hidden layer neurons and the spread constant (SC), were investigated. The WT-RBFNN model compressed the original spectral data, removed noise and background interference, and reduced randomness, so its prediction capabilities were well optimized. The root mean square errors of prediction (RMSEP) for the determination of rifampicin and isoniazid obtained from the optimum WT-RBFNN model are 0.00639 and 0.00587, and the root mean square errors of cross-validation (RMSECV) are 0.00604 and 0.00457, respectively, which are superior to those obtained with the optimum RBFNN and PLS models. The regression coefficients (R) between the NIRS-predicted values and the RP-HPLC values for rifampicin and isoniazid are 0.99522 and 0.99392, respectively, and the relative error is lower than 2.300%. It was verified that the WT-RBFNN model is a suitable approach for dealing with NIRS data. The proposed WT-RBFNN model is convenient, rapid, and pollution-free for the determination of rifampicin and isoniazid in tablets.
NASA Astrophysics Data System (ADS)
Tegowski, J.; Zajfert, G.
2014-12-01
Carbon Capture & Storage (CCS) efficiently prevents the release of anthropogenic CO2 into the atmosphere. We investigate a potential site in the Polish sector of the Baltic Sea (B3 field site), consisting of a depleted oil and gas reservoir. An area of ca. 30 x 8 km was surveyed along 138 acoustic transects, carried out from R/V St. Barbara in 2012, combining a multibeam echosounder, sidescan sonar and sub-bottom profiler. Preparation of CCS sites requires accurate knowledge of the subsurface structure of the seafloor, in particular deposit compactness. Gas leaks in the water column were monitored, along with the structure of the upper sediment layers. Our analyses show that the shallow sub-seabed is layered, and quantify the spatial distribution of gas diffusion chimneys and seabed effusion craters. Remote detection of gas-containing surface sediments can be rather complex if bubbles are not emitted directly into the overlying water, where they would be detectable acoustically. The heterogeneity of gassy sediments makes conventional bottom sampling methods inefficient. Therefore, we propose a new approach to the identification, mapping, and monitoring of potentially gassy surface sediments, based on wavelet analysis of the echo signal envelopes of a chirp sub-bottom profiler (EdgeTech SB-0512). Each echo envelope was subjected to a wavelet transformation, whose coefficients were used to calculate wavelet energies. The set of echo envelope parameters was input to fuzzy logic and c-means algorithms. The resulting classification highlights seafloor areas with different subsurface morphological features, which can indicate gassy sediments. This work has been conducted under EC FP7-CP-IP project No. 265847: Sub-seabed CO2 Storage: Impact on Marine Ecosystems (ECO2).
Relating soil pore geometry to soil water content dynamics decomposed at multiple frequencies
NASA Astrophysics Data System (ADS)
Qin, Mingming; Gimenez, Daniel; Cooper, Miguel
2016-04-01
Soil structure is a critical factor determining the response of soil water content to meteorological inputs such as precipitation. Wavelet analysis can be used to filter a signal into several wavelet components, each characterizing a given frequency. The purpose of this research was to investigate relationships between the geometry of soil pore systems and the various wavelet components derived from soil water content dynamics. The two study sites investigated were located in the state of São Paulo, Brazil. Each site comprised five soil profiles. The first site was situated along a 300-meter transect with about 10% slope in a tropical semi-deciduous forest, while the second spanned 230 meters across a Brazilian savanna with a slope of about 6%. For each profile, two to four Water Content Reflectometer CS615 (Campbell Scientific, Inc.) probes were installed according to horizonation at depths varying between 0.1 m and 2.3 m. Bulk soil, three soil cores, and one undisturbed soil block were sampled from selected horizons for determining particle size distributions, water retention curves, and pore geometry, respectively. Pore shape and size were determined from binary images obtained from resin-impregnated blocks and used to characterize pore geometry. Soil water contents were recorded at a 20-minute interval over a 4-month period. The Mexican hat wavelet was used to decompose soil water content measurements into wavelet components. The responses of wavelet components to wetting and drying cycles were characterized by the median height of the peaks in each wavelet component and were correlated with particular pore shapes and sizes.
For instance, large elongated and irregular pores, largely responsible for the transmission of water, were significantly correlated with wavelet components at high frequencies (40 minutes to 48 hours), while rounded pores, typically associated with water retention, were only significantly correlated with lower frequency ranges (48 hours to two months). These results will be further discussed in the context of the location of the soil horizons within the toposequence.
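A hedged sketch of the decomposition step: filtering the water-content series with a Ricker (Mexican hat) wavelet at one scale yields one frequency component, whose local maxima can then be summarized by their median height. The scale, toy series, and strict-peak definition are assumptions for illustration only, not the study's processing chain.

```python
import math
import statistics

def ricker(t, scale):
    # Mexican hat (Ricker) mother wavelet sampled at time offset t for one scale.
    x = t / scale
    return (1.0 - x * x) * math.exp(-x * x / 2.0)

def wavelet_component(series, scale):
    # Convolve the series with a Ricker wavelet of a single (integer) scale:
    # one frequency band of the decomposed soil-water-content signal.
    half = 3 * scale  # wavelet support of roughly +/- 3 scales
    kernel = [ricker(k, scale) for k in range(-half, half + 1)]
    out = []
    for i in range(len(series)):
        acc = 0.0
        for k, w in zip(range(-half, half + 1), kernel):
            j = i + k
            if 0 <= j < len(series):
                acc += w * series[j]
        out.append(acc)
    return out

def median_peak_height(component):
    # Median height of strict local maxima: the summary statistic that the
    # study correlates with pore shape and size classes.
    peaks = [component[i] for i in range(1, len(component) - 1)
             if component[i] > component[i - 1] and component[i] > component[i + 1]]
    return statistics.median(peaks) if peaks else 0.0

# Toy series with two wetting pulses.
series = [0.0] * 10 + [1.0, 2.0, 3.0, 2.0, 1.0] + [0.0] * 10 \
       + [1.0, 2.0, 3.0, 2.0, 1.0] + [0.0] * 10
component = wavelet_component(series, scale=2)
print(median_peak_height(component) > 0)  # True
```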
Dynamic Neural State Identification in Deep Brain Local Field Potentials of Neuropathic Pain.
Luo, Huichun; Huang, Yongzhi; Du, Xueying; Zhang, Yunpeng; Green, Alexander L; Aziz, Tipu Z; Wang, Shouyan
2018-01-01
In neuropathic pain, the neurophysiological and neuropathological function of the ventro-posterolateral nucleus of the thalamus (VPL) and the periventricular gray/periaqueductal gray area (PVAG) involves multiple frequency oscillations. Moreover, oscillations related to pain perception and modulation change dynamically over time. Fluctuations in these neural oscillations reflect the dynamic neural states of the nucleus. In this study, an approach to classifying the synchronization level was developed to dynamically identify the neural states. An oscillation extraction model based on the windowed wavelet packet transform was designed to characterize the activity level of oscillations. The wavelet packet coefficients sparsely represented the activity level of theta and alpha oscillations in local field potentials (LFPs). Then, a state discrimination model was designed to calculate an adaptive threshold to determine the activity level of oscillations. Finally, the neural state was represented by the activity levels of both theta and alpha oscillations. The relationship between neural states and pain relief was further evaluated. The state identification approach achieved sensitivity and specificity beyond 80% on simulated signals. Neural states of the PVAG and VPL were dynamically identified from LFPs of neuropathic pain patients. The occurrence of neural states based on theta and alpha oscillations was correlated with the degree of pain relief by deep brain stimulation. In the PVAG LFPs, the occurrence of the state with high-level theta oscillations independent of alpha and of the state with low-level alpha and high-level theta oscillations was significantly correlated with pain relief by deep brain stimulation. This study provides a reliable approach to identifying dynamic neural states in LFPs with a low signal-to-noise ratio by using sparse representation based on the wavelet packet transform.
Furthermore, it may advance closed-loop deep brain stimulation based on neural states integrating multiple neural oscillations.
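The state labeling described above can be sketched as follows. The paper's state discrimination model computes an adaptive threshold; here the mean of the observed activity levels stands in as a placeholder, so the threshold rule (and the toy activity values) are assumptions.

```python
import statistics

def adaptive_threshold(activity):
    # Placeholder adaptive cut between "low" and "high" activity: the mean of
    # the observed activity levels (an assumption; the paper's rule differs).
    return statistics.mean(activity)

def neural_states(theta_activity, alpha_activity):
    # Label each time window with a joint (theta, alpha) high/low state.
    t_thr = adaptive_threshold(theta_activity)
    a_thr = adaptive_threshold(alpha_activity)
    return [("high" if t > t_thr else "low", "high" if a > a_thr else "low")
            for t, a in zip(theta_activity, alpha_activity)]

theta = [0.1, 0.2, 0.9, 1.1, 0.15, 1.0]   # per-window theta activity levels
alpha = [0.8, 0.9, 0.1, 0.2, 0.85, 0.15]  # per-window alpha activity levels
states = neural_states(theta, alpha)
print(states[2])  # ('high', 'low')
```

Because the threshold is recomputed from the data themselves, the same rule adapts across recordings with different baseline activity.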
NASA Astrophysics Data System (ADS)
Benitez Buelga, Javier; Rodriguez-Sinobas, Leonor; Sanchez, Raul; Gil, Maria; Tarquis, Ana M.
2014-05-01
Soils can be seen as the result of spatial variation operating over several scales. This observation points to 'variability' as a key soil attribute that should be studied. Soil variability has often been considered to be composed of 'functional' (explained) variations plus random fluctuations or noise. However, the distinction between these two components is scale dependent because increasing the scale of observation almost always reveals structure in the noise. Geostatistical methods and, more recently, multifractal and wavelet techniques, among others coming from complexity science, have been used to characterize the scaling and heterogeneity of soil properties. The multifractal formalism, first proposed by Mandelbrot (1982), is suitable for variables with self-similar distribution on a spatial domain (Kravchenko et al., 2002). Multifractal analysis can provide insight into the spatial variability of crop or soil parameters (Vereecken et al., 2007). This technique has been used to characterize the scaling property of a variable measured along a transect as a mass distribution of a statistical measure on a spatial domain of the studied field (Zeleke and Si, 2004). To do this, it divides the transect into a number of self-similar segments. It identifies the differences among the subsets by using a wide range of statistical moments. Wavelets were developed in the 1980s for signal processing, and later introduced to soil science by Lark and Webster (1999). The wavelet transform decomposes a series, whether a time series (Whitcher, 1998; Percival and Walden, 2000) or, as in our case, a series of measurements made along a transect, into components (wavelet coefficients) that describe local variation in the series at different scale (or frequency) intervals, giving up only some resolution in space (Lark et al., 2003, 2004). Wavelet coefficients can be used to estimate scale-specific components of variation and correlation.
This allows us to see which scales contribute most to signal variation, or at which scales signals are most correlated, giving insight into the dominant processes. An alternative to both of the above methods has been described recently. Relative entropy and increments in relative entropy have been applied to soil images (Bird et al., 2006) and to soil transect data (Tarquis et al., 2008) to study scale effects that are localized in scale, providing information complementary to the scale dependencies found across a range of scales. We use them in this work to describe the spatial scaling properties of a set of field water content data measured in a corn field, in a plot of 500 m2 with a spatial resolution of 25 cm. These measurements are based on an optical cable (BruggSteal) buried in a zig-zag deployment at 30 cm depth. References Bird, N., M.C. Díaz, A. Saa, and A.M. Tarquis. 2006. A review of fractal and multifractal analysis of soil pore-scale images. J. Hydrol. 322:211-219. Kravchenko, A.N., R. Omonode, G.A. Bollero, and D.G. Bullock. 2002. Quantitative mapping of soil drainage classes using topographical data and soil electrical conductivity. Soil Sci. Soc. Am. J. 66:235-243. Lark, R.M., A.E. Milne, T.M. Addiscott, K.W.T. Goulding, C.P. Webster, and S. O'Flaherty. 2004. Scale- and location-dependent correlation of nitrous oxide emissions with soil properties: An analysis using wavelets. Eur. J. Soil Sci. 55:611-627. Lark, R.M., S.R. Kaffka, and D.L. Corwin. 2003. Multiresolution analysis of data on electrical conductivity of soil using wavelets. J. Hydrol. 272:276-290. Lark, R.M., and R. Webster. 1999. Analysis and elucidation of soil variation using wavelets. Eur. J. Soil Sci. 50(2):185-206. Mandelbrot, B.B. 1982. The fractal geometry of nature. W.H. Freeman, New York. Percival, D.B., and A.T. Walden. 2000. Wavelet methods for time series analysis. Cambridge Univ. Press, Cambridge, UK.
Tarquis, A.M., N.R. Bird, A.P. Whitmore, M.C. Cartagena, and Y. Pachepsky. 2008. Multiscale analysis of soil transect data. Vadose Zone J. 7: 563-569. Vereecken, H., R. Kasteel, J. Vanderborght, and T. Harter. 2007. Upscaling hydraulic properties and soil water flow processes in heterogeneous soils: A review. Vadose Zone J. 6:1-28. Whitcher, B.J. 1998. Assessing nonstationary time series using wavelets. Ph.D. diss. Univ. of Washington, Seattle (Diss. Abstr. 9907961). Zeleke, T.B., and B.C. Si. 2004. Scaling properties of topographic indices and crop yield: Multifractal and joint multifractal approaches. Agron J., 96:1082-1090.
A robust color image watermarking algorithm against rotation attacks
NASA Astrophysics Data System (ADS)
Han, Shao-cheng; Yang, Jin-feng; Wang, Rui; Jia, Gui-min
2018-01-01
A robust digital watermarking algorithm based on the quaternion wavelet transform (QWT) and the discrete cosine transform (DCT) is proposed for copyright protection of color images. The luminance component Y of a host color image in YIQ space is decomposed by the QWT, and the coefficients of the four low-frequency subbands are then transformed by the DCT. An original binary watermark, scrambled by the Arnold map and an iterated sine chaotic system, is embedded into the mid-frequency DCT coefficients of the subbands. In order to improve the performance of the proposed algorithm against rotation attacks, a rotation detection scheme is implemented before watermark extraction. The experimental results demonstrate that the proposed watermarking scheme shows strong robustness not only against common image processing attacks but also against arbitrary rotation attacks.
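The Arnold map scrambling step is simple to illustrate: each pixel (x, y) of an N x N watermark moves to ((x + y) mod N, (x + 2y) mod N), and since the map is a permutation it is periodic, so repeated application eventually restores the image. This sketch covers only the scrambling stage, not the QWT/DCT embedding.

```python
def arnold_scramble(image, iterations=1):
    # Arnold cat map on an N x N image: pixel (x, y) moves to
    # ((x + y) mod N, (x + 2y) mod N). The map is a permutation, hence periodic.
    n = len(image)
    for _ in range(iterations):
        out = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                out[(x + 2 * y) % n][(x + y) % n] = image[y][x]
        image = out
    return image

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print(arnold_scramble(img, 1) != img)  # True: one round scrambles the pixels
print(arnold_scramble(img, 3) == img)  # True: the map has period 3 for N = 4
```

The iteration count acts as a key: without knowing how many rounds were applied, an attacker cannot trivially unscramble the extracted watermark.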
Cloud-scale genomic signals processing classification analysis for gene expression microarray data.
Harvey, Benjamin; Soo-Yeon Ji
2014-01-01
As microarray data available to scientists continue to increase in size and complexity, it has become overwhelmingly important to find multiple ways to draw inference, through analysis of DNA/mRNA sequence data, that are useful to scientists. Though there have been many attempts to elucidate the issue of bringing forth biological inference by means of wavelet preprocessing and classification, there has not been a research effort that focuses on a cloud-scale classification analysis of microarray data using wavelet thresholding in a Cloud environment to identify significantly expressed features. This paper proposes a novel methodology that uses wavelet-based denoising to initialize a threshold for the determination of significantly expressed genes for classification. Additionally, this research was implemented within a cloud-based distributed processing environment. Cloud computing and wavelet thresholding were used for the classification of 14 tumor classes from the Global Cancer Map (GCM). The results proved to be more accurate than using a predefined p-value for differential expression classification. This novel methodology analyzed wavelet-based threshold features of gene expression in a Cloud environment and classified the expression of samples by analyzing gene patterns that inform us of biological processes, enabling researchers to face the present and forthcoming challenges that may arise in the analysis of large microarray datasets in functional genomics.
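Wavelet-based denoising with a data-driven threshold, as used here to initialize the significance cut, can be sketched with the classical Donoho-Johnstone recipe: estimate the noise level from the finest detail coefficients, then soft-threshold. The Haar transform and universal threshold below are standard stand-ins, not necessarily the paper's exact choices.

```python
import math

def haar_dwt(signal):
    # One level of the Haar DWT: (approximation, detail) coefficients.
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2) for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def universal_threshold(detail, n):
    # Donoho-Johnstone universal threshold sigma * sqrt(2 ln n), with sigma
    # estimated from the median absolute value of the finest detail coefficients.
    mad = sorted(abs(d) for d in detail)[len(detail) // 2]
    sigma = mad / 0.6745
    return sigma * math.sqrt(2.0 * math.log(n))

def soft_threshold(coeffs, thr):
    # Shrink coefficients toward zero; anything below the threshold vanishes.
    return [math.copysign(max(abs(c) - thr, 0.0), c) for c in coeffs]

# Only the coefficient well above the threshold survives (shrunk by thr).
print(soft_threshold([0.1, -0.05, 2.0, 0.02], 0.5))  # [0.0, -0.0, 1.5, 0.0]
```

Coefficients that survive the threshold mark candidate significantly expressed features; the rest are treated as noise.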
Barbosa, Daniel J C; Ramos, Jaime; Lima, Carlos S
2008-01-01
Capsule endoscopy is an important tool to diagnose tumor lesions in the small bowel. Capsule endoscopic images possess vital information expressed by color and texture. This paper presents an approach based on textural analysis of the different color channels, using the wavelet transform to select the bands with the most significant texture information. A new image is then synthesized from the selected wavelet bands through the inverse wavelet transform. The features of each image are based on second-order textural information, and they are used in a classification scheme using a multilayer perceptron neural network. The proposed methodology has been applied to real data taken from capsule endoscopic exams and reached 98.7% sensitivity and 96.6% specificity. These results support the feasibility of the proposed algorithm.
Coarse-to-fine wavelet-based airport detection
NASA Astrophysics Data System (ADS)
Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun
2015-10-01
Airport detection in optical remote sensing images has attracted great interest in military reconnaissance and traffic control applications. However, most popular techniques for airport detection from optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, detection results are often affected by imaging conditions, such as weather and imaging distortion; 2) optical images contain comprehensive information about targets, so it is difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) high resolution results in large data volumes, which limits real-time processing. Most previous works focus on solving only one of these problems and thus cannot balance performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework that addresses all three issues using wavelet coefficients. The framework includes two stages: 1) efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited to classify and coarsely locate airport candidate regions; then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms existing algorithms in terms of detection accuracy and processing efficiency.
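A minimal sketch of the wavelet-based texture features in stage 1: a one-level 2-D Haar transform splits an image block into subbands, and the energy of each detail subband gives a small descriptor an SVM could classify. The subband-energy feature is an assumed simplification of the paper's multi-scale representation.

```python
def haar2d(block):
    # One-level 2-D Haar transform of a 2N x 2N block: LL, LH, HL, HH subbands.
    n = len(block) // 2
    ll = [[0.0] * n for _ in range(n)]
    lh = [[0.0] * n for _ in range(n)]
    hl = [[0.0] * n for _ in range(n)]
    hh = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a, b = block[2 * i][2 * j], block[2 * i][2 * j + 1]
            c, d = block[2 * i + 1][2 * j], block[2 * i + 1][2 * j + 1]
            ll[i][j] = (a + b + c + d) / 2.0
            lh[i][j] = (a - b + c - d) / 2.0  # variation across columns
            hl[i][j] = (a + b - c - d) / 2.0  # variation across rows
            hh[i][j] = (a - b - c + d) / 2.0  # diagonal variation
    return ll, lh, hl, hh

def texture_features(block):
    # Mean energy of each detail subband: a 3-D texture descriptor for an SVM.
    _, lh, hl, hh = haar2d(block)
    def energy(sb):
        return sum(v * v for row in sb for v in row) / (len(sb) * len(sb))
    return [energy(lh), energy(hl), energy(hh)]

vertical_stripes = [[1.0, 0.0, 1.0, 0.0]] * 4
print(texture_features(vertical_stripes))  # [1.0, 0.0, 0.0]
```

Oriented structures such as runways concentrate energy in one detail subband, which is why these features are discriminative for candidate regions.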
Wavelets solution of MHD 3-D fluid flow in the presence of slip and thermal radiation effects
NASA Astrophysics Data System (ADS)
Usman, M.; Zubair, T.; Hamid, M.; Haq, Rizwan Ul; Wang, Wei
2018-02-01
This article analyzes the effects of magnetic field, slip, and thermal radiation on generalized three-dimensional flow, heat, and mass transfer in a channel with a lower stretching wall. We assume two different stretching rates in the lateral directions for the lower surface of the wall, while the upper wall of the channel is subjected to constant injection. Moreover, the influence of thermal slip on the temperature profile, besides viscous dissipation and Joule heating, is also taken into account. The governing set of partial differential equations for heat transfer and flow is transformed into a nonlinear set of ordinary differential equations (ODEs) using compatible similarity transformations. The resulting nonlinear ODE set is tackled by means of a new wavelet algorithm. The outcomes obtained via the modified Chebyshev wavelet method are compared with the fourth-order Runge-Kutta method. The comparison, error, and convergence analyses show excellent agreement. Additionally, graphical representations for various physical parameters, including the skin friction coefficient, velocity, temperature gradient, and temperature profiles, are plotted and discussed. It is observed that, for a fixed value of the velocity slip parameter, a suitable selection of the stretching ratio parameter can help hasten the heat transfer rate and reduce the viscous drag over the stretching sheet. Finally, the convergence analysis confirms that the proposed method is efficient.
Mahrooghy, Majid; Ashraf, Ahmed B; Daye, Dania; McDonald, Elizabeth S; Rosen, Mark; Mies, Carolyn; Feldman, Michael; Kontos, Despina
2015-06-01
Heterogeneity in cancer can affect response to therapy and patient prognosis. Histologic measures have classically been used to measure heterogeneity, although a reliable noninvasive measurement is needed both to establish baseline risk of recurrence and to monitor response to treatment. Here, we propose using spatiotemporal wavelet kinetic features from dynamic contrast-enhanced magnetic resonance imaging to quantify intratumor heterogeneity in breast cancer. Tumor pixels are first partitioned into homogeneous subregions using pharmacokinetic measures. Heterogeneity wavelet kinetic (HetWave) features are then extracted from these partitions to obtain spatiotemporal patterns of the wavelet coefficients and the contrast agent uptake. The HetWave features are evaluated in terms of their prognostic value using a logistic regression classifier with genetic algorithm wrapper-based feature selection to classify breast cancer recurrence risk as determined by a validated gene expression assay. Receiver operating characteristic analysis and the area under the curve (AUC) are computed to assess classifier performance using leave-one-out cross validation. The HetWave features outperform other commonly used features (AUC = 0.88 for HetWave versus 0.70 for standard features). The combination of HetWave and standard features further increases classifier performance (AUC = 0.94). The rate of the spatial frequency pattern over the pharmacokinetic partitions can provide valuable prognostic information. HetWave could be a powerful feature extraction approach for characterizing tumor heterogeneity, providing valuable prognostic information.
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.
Kim, Jinkwon; Min, Se Dong; Lee, Myoungho
2011-06-27
Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets adapted to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. Principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms, with no intra-subject data shared between the training and evaluation datasets. It also significantly reduces the amount of intervention needed from physicians.
Lu, Zhao; Sun, Jing; Butts, Kenneth
2014-05-01
Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.
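A common translation-invariant wavelet kernel construction, a product of a mother wavelet evaluated on coordinate differences averaged over dyadic scales, gives the flavor of multiscale kernel learning, though it is not the paper's closed-form orthogonal wavelet kernel; the Mexican hat mother wavelet and scale set below are assumptions.

```python
import math

def mexican_hat(u):
    # Mexican hat mother wavelet; psi(0) = 1, so k(x, x) = 1.
    return (1.0 - u * u) * math.exp(-u * u / 2.0)

def wavelet_kernel(x, z, scale=1.0):
    # Translation-invariant wavelet kernel: k(x, z) = prod_j psi((x_j - z_j) / a).
    return math.prod(mexican_hat((xi - zi) / scale) for xi, zi in zip(x, z))

def multiscale_kernel(x, z, scales=(1.0, 2.0, 4.0)):
    # Average the single-scale kernel over dyadic scales: a multiscale composite.
    return sum(wavelet_kernel(x, z, a) for a in scales) / len(scales)

def gram(points, kernel=multiscale_kernel):
    # Kernel (Gram) matrix over a set of points, as used by SVM solvers.
    return [[kernel(p, q) for q in points] for p in points]

G = gram([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0]])
print(all(abs(G[i][i] - 1.0) < 1e-12 for i in range(3)))  # True
```

Mixing several dilations lets a single kernel respond to both slow and fast components of the sampled dynamics, which is the motivation for multiscale kernels in system identification.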
Real-time implementation of second generation of audio multilevel information coding
NASA Astrophysics Data System (ADS)
Ali, Murtaza; Tewfik, Ahmed H.; Viswanathan, V.
1994-03-01
This paper describes a real-time implementation of a novel wavelet-based audio compression method. The method is based on the discrete wavelet transform (DWT) representation of signals. A bit allocation procedure allocates bits to the transform coefficients in an adaptive fashion and has been designed to take advantage of the masking effect in human hearing. The procedure minimizes the number of bits required to represent each frame of the audio signal at a fixed distortion level. The real-time implementation provides almost transparent compression of monophonic CD-quality audio signals (sampled at 44.1 kHz and quantized using 16 bits/sample) at bit rates of 64-78 kbit/s. Our implementation uses two ASPI Elf boards, each built around a TI TMS320C31 DSP chip. The time required for encoding a mono CD signal is about 92 percent of real time, and for decoding about 61 percent.
NASA Technical Reports Server (NTRS)
Auger, Ludovic; Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
A suboptimal Kalman filter system that evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. The truncation is carried out so that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days were carried out with different degrees of truncation, which reduced the covariance by 90, 97, and 99% and the computational cost of covariance propagation by 80, 93, and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields occur in regions with the largest zonal gradients in the tracer field.
Multiplexed wavelet transform technique for detection of microcalcification in digitized mammograms.
Mini, M G; Devassia, V P; Thomas, Tessamma
2004-12-01
Wavelet transform (WT) is a potential tool for the detection of microcalcifications, an early sign of breast cancer. This article describes the implementation and evaluates the performance of two novel WT-based schemes for the automatic detection of clustered microcalcifications in digitized mammograms. Employing a one-dimensional WT technique that utilizes the pseudo-periodicity property of image sequences, the proposed algorithms achieve high detection efficiency and low processing memory requirements. Detection is achieved from the parent-child relationship between the zero-crossings (Marr-Hildreth (M-H) detector) or local extrema (Canny detector) of the WT coefficients at different levels of decomposition. The detected pixels are weighted before the inverse transform is computed, and they are segmented by simple global gray-level thresholding. Both detectors produce 95% detection sensitivity, although the M-H detector yields more false positives. The M-H detector preserves shape information and provides better detection sensitivity for mammograms containing widely distributed calcifications.
A Stationary Wavelet Entropy-Based Clustering Approach Accurately Predicts Gene Expression
Nguyen, Nha; Vo, An; Choi, Inchan
2015-01-01
Studying epigenetic landscapes is important to understand the conditions for gene regulation. Clustering is a useful approach to studying epigenetic landscapes by grouping genes based on their epigenetic conditions. However, classical clustering approaches, which often use a representative value of the signals in a fixed-size window, do not fully use the information written in the epigenetic landscapes. Clustering approaches that maximize the information of the epigenetic signals are necessary for a better understanding of gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of the stationary wavelet transform of epigenetic signals inside enriched regions for gene clustering. Interestingly, the gene expression levels were highly correlated with the entropy levels of the epigenetic signals. Dewer separates genes better than a window-based approach in an assessment using gene expression and achieved a correlation coefficient above 0.9 without using any training procedure. Our results show that the changes in the epigenetic signals are useful for studying gene regulation. PMID:25383910
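The core quantity, the entropy of stationary (undecimated) wavelet coefficients inside a region, can be sketched as follows; the one-level circular Haar detail and base-2 entropy are illustrative assumptions rather than Dewer's exact configuration.

```python
import math

def stationary_haar_detail(signal):
    # One level of an undecimated (stationary) Haar transform: no downsampling,
    # so the detail sequence keeps the input length (circular boundary).
    n = len(signal)
    return [(signal[i] - signal[(i + 1) % n]) / math.sqrt(2) for i in range(n)]

def wavelet_entropy(signal):
    # Shannon entropy of the normalized coefficient energies inside a region.
    energies = [d * d for d in stationary_haar_detail(signal)]
    total = sum(energies)
    if total == 0:
        return 0.0  # a flat region carries no detail energy
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log2(p) for p in probs)

print(wavelet_entropy([1.0] * 8))                 # 0.0: flat signal
print(round(wavelet_entropy([1.0, 0.0] * 4), 6))  # 3.0: energy spread everywhere
```

A flat enriched region scores zero, while signals whose variation is spread across the region score high, which is the property the clustering exploits.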
EEG-Based Computer Aided Diagnosis of Autism Spectrum Disorder Using Wavelet, Entropy, and ANN
AlSharabi, Khalil; Ibrahim, Sutrisno; Alsuwailem, Abdullah
2017-01-01
Autism spectrum disorder (ASD) is a type of neurodevelopmental disorder with core impairments in social relationships, communication, imagination, or flexibility of thought, and a restricted repertoire of activities and interests. In this work, a new computer aided diagnosis (CAD) of autism based on electroencephalography (EEG) signal analysis is investigated. The proposed method is based on the discrete wavelet transform (DWT), entropy (En), and an artificial neural network (ANN). DWT is used to decompose EEG signals into approximation and detail coefficients to obtain the EEG subbands. The feature vector is constructed by computing Shannon entropy values from each EEG subband. The ANN classifies the corresponding EEG signal as normal or autistic based on the extracted features. The experimental results show the effectiveness of the proposed method for assisting autism diagnosis. A receiver operating characteristic (ROC) curve metric is used to quantify the performance of the proposed method. The proposed method obtained promising results when tested on a real dataset provided by King Abdulaziz Hospital, Jeddah, Saudi Arabia. PMID:28484720
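The DWT-plus-Shannon-entropy feature pipeline can be sketched as below: decompose an EEG epoch into subbands and compute one entropy value per subband, yielding the vector an ANN would classify. The Haar filter, three levels, and synthetic epoch are assumptions; the paper's wavelet family and depth may differ.

```python
import math

def haar_multilevel(signal, levels):
    # Decompose into `levels` detail subbands plus the final approximation.
    subbands, approx = [], list(signal)
    for _ in range(levels):
        detail = [(approx[i] - approx[i + 1]) / math.sqrt(2) for i in range(0, len(approx) - 1, 2)]
        approx = [(approx[i] + approx[i + 1]) / math.sqrt(2) for i in range(0, len(approx) - 1, 2)]
        subbands.append(detail)
    subbands.append(approx)
    return subbands

def shannon_entropy(coeffs):
    # Entropy of the normalized coefficient energies within one subband.
    total = sum(c * c for c in coeffs)
    if total == 0:
        return 0.0
    probs = [c * c / total for c in coeffs if c != 0]
    return -sum(p * math.log2(p) for p in probs)

def eeg_features(epoch, levels=3):
    # One entropy value per subband: the feature vector handed to the ANN.
    return [shannon_entropy(sb) for sb in haar_multilevel(epoch, levels)]

epoch = [math.sin(0.4 * i) + 0.1 * ((-1) ** i) for i in range(16)]
features = eeg_features(epoch, levels=3)
print(len(features))  # 4: three detail subbands plus the approximation
```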
Anomaly Detection of Electromyographic Signals.
Ijaz, Ahsan; Choi, Jongeun
2018-04-01
In this paper, we provide a robust framework to detect anomalous electromyographic (EMG) signals and identify contamination types. As a first step of feature selection, an optimally selected Lawton wavelet transform is applied. Robust principal component analysis (rPCA) is then performed on these wavelet coefficients to obtain features in a lower dimension. The rPCA-based features are used for constructing a self-organizing map (SOM). Finally, hierarchical clustering is applied on the SOM, which separates anomalous signals residing in the smaller clusters and breaks them into logical units for contamination identification. The proposed methodology is tested using synthetic and real-world EMG signals. The synthetic EMG signals are generated using a heteroscedastic process mimicking the desired experimental setups, and anomalies are introduced into a subset of them. These are followed by real EMG signals with synthetic anomalies. Finally, a heterogeneous real-world data set with known quality issues is used in an unsupervised setting. The framework provides a recall of 90% (± 3.3) and a precision of 99% (± 0.4).
Adaptive multiscale processing for contrast enhancement
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu; Fan, Jian; Huda, Walter; Honeyman, Janice C.; Steinbach, Barbara G.
1993-07-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) the dyadic wavelet transform and (2) the φ-transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic, and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
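The coefficient-weighting idea can be sketched in one dimension: decompose, apply a non-linear gain that boosts weak detail coefficients more than strong ones, and reconstruct. The Haar filters, gain law, and threshold below are illustrative assumptions, not the paper's dyadic-wavelet or φ-transform machinery.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_fwd(x):
    """One-level orthonormal Haar analysis: approximation and detail."""
    a = (x[0::2] + x[1::2]) / SQRT2
    d = (x[0::2] - x[1::2]) / SQRT2
    return a, d

def haar_inv(a, d):
    """Exact inverse of haar_fwd."""
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / SQRT2
    x[1::2] = (a - d) / SQRT2
    return x

def enhance(x, gain=3.0, t=0.5):
    """Amplify low-amplitude detail coefficients (weak edges) strongly,
    strong coefficients progressively less -- a 1-D stand-in for the
    multiscale non-linear contrast gain."""
    a, d = haar_fwd(np.asarray(x, dtype=float))
    g = np.where(np.abs(d) < t,
                 gain,
                 1.0 + (gain - 1.0) * t / np.maximum(np.abs(d), t))
    return haar_inv(a, g * d)

rng = np.random.default_rng(1)
profile = np.cumsum(rng.standard_normal(128) * 0.1)  # synthetic scan line
identity = haar_inv(*haar_fwd(profile))              # gain = 1 path
enhanced = enhance(profile, gain=3.0)
```

With gain = 1 the transform is perfectly invertible; with gain > 1 the detail energy, and hence local contrast, increases.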
NASA Astrophysics Data System (ADS)
Gupta, Mousumi; Chatterjee, Somenath
2018-04-01
Surface texture is an important issue in characterizing the nature (crests and troughs) of surfaces, and atomic force microscopy (AFM) imaging is a key tool for surface topography. At the nanoscale, however, both the nature (i.e., deflections or cracks) and the quantification (i.e., height or depth) of deposited layers are essential information for materials scientists. In this paper, a gradient-based K-means algorithm is used to differentiate the layered surfaces based on the color contrast of the as-obtained AFM images. Wavelet decomposition is then applied to the same images to extract information about deflections or cracks on the material surfaces. Z-axis depth analysis of the wavelet coefficients provides information about cracks present in the material. Using the above method, the corresponding surface information for the material is obtained. In addition, a Gaussian filter is applied to remove unwanted lines that occur during AFM scanning. A few known samples are taken as input, and the validity of the above approaches is demonstrated.
Element analysis: a wavelet-based method for analysing time-localized events in noisy time series.
Lilly, Jonathan M
2017-04-01
A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized 'events'. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event's 'region of influence' within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry.
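A toy version of this estimation-from-maxima idea, using an ordinary Morlet wavelet as a stand-in for the generalized Morse family and a synthetic isolated event; the noise model, significance test, and region-of-influence criterion of the paper are omitted.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform with an analytic Morlet wavelet
    (a stand-in here for the generalized Morse wavelets)."""
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-int(4 * s), int(4 * s) + 1)
        psi = (np.pi ** -0.25) * np.exp(1j * w0 * t / s) \
              * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        out[i] = np.convolve(x, psi, mode="same")
    return out

# Synthetic record: one isolated, time-localized event centred at t = 128.
n = 256
t = np.arange(n)
x = np.exp(-0.5 * ((t - 128) / 10.0) ** 2) * np.cos(2 * np.pi * (t - 128) / 16.0)

scales = np.arange(4, 29)
W = morlet_cwt(x, scales)

# The transform maximum locates the event on the time/scale plane.
i_scale, i_time = np.unravel_index(np.argmax(np.abs(W)), W.shape)
```

The pair (i_time, scales[i_scale]) is the kind of isolated point on the time/scale plane from which element analysis reconstructs the signal.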
Solar-Terrestrial Signal Record in Tree Ring Width Time Series from Brazil
NASA Astrophysics Data System (ADS)
Rigozo, Nivaor Rodolfo; Lisi, Cláudio Sergio; Filho, Mário Tomazello; Prestes, Alan; Nordemann, Daniel Jean Roger; de Souza Echer, Mariza Pereira; Echer, Ezequiel; da Silva, Heitor Evangelista; Rigozo, Valderez F.
2012-12-01
This work investigates the behavior of the sunspot number and the Southern Oscillation Index (SOI) signal recorded in tree ring time series for three different locations in Brazil: Humaitá in Amazonas State, Porto Ferreira in São Paulo State, and Passo Fundo in Rio Grande do Sul State, using wavelet and cross-wavelet analysis techniques. The wavelet spectra of the tree ring time series showed periodicities of 11 and 22 years, possibly related to solar activity, and periods of 2-8 years, possibly related to El Niño events. The cross-wavelet spectra for all tree ring time series from Brazil present a significant response to the 11-year solar cycle in the interval from 1921 until after 1981. These tree ring time series also respond to the second harmonic of the solar cycle (5.5 years), but in different time intervals. The cross-wavelet maps further showed that the relationship between the SOI and the tree ring time series is most intense for oscillations in the 4-8 year range.
Wavefront Reconstruction and Mirror Surface Optimizationfor Adaptive Optics
2014-06-01
TERMS: Wavefront reconstruction, adaptive optics, wavelets, atmospheric turbulence, branch points, mirror surface optimization, space telescope, segmented... [abstract fragments] ...contribution adapts the proposed algorithm to work when branch points are present from significant atmospheric turbulence. An analysis of vector spaces... estimate the distortion of the collected light caused by the atmosphere and corrected by adaptive optics. A generalized orthogonal wavelet wavefront...
The relationship between thunderstorm and solar activity for Brazil from 1951 to 2009
NASA Astrophysics Data System (ADS)
Pinto Neto, Osmar; Pinto, Iara R. C. A.; Pinto, Osmar
2013-06-01
The goal of this article is to investigate the influence of solar activity on thunderstorm activity in Brazil. For this purpose, thunder-day data from seven cities in Brazil from 1951 to 2009 are analyzed with the wavelet method for the first time. To identify the 11-year solar cycle in thunder-day data, a new quantity is defined. It is named TD1 and represents the power at the 1-year period in a wavelet spectrum of monthly thunder-day data. The wavelet analysis of TD1 values shows the 11-year periodicity more clearly than when the analysis is applied directly to annual thunder-day data, as has normally been done in the literature; the new quantity thus enhances the capability to identify the 11-year periodicity in thunderstorm data. Wavelet analysis of TD1 indicates that six out of seven cities investigated exhibit periodicities near 11 years, three of them significant at a 1% significance level (p<0.01). Furthermore, wavelet coherence analysis demonstrates that the 11-year periodicity of TD1 and solar activity are correlated with an anti-phase behavior, three of the cities (the same ones with periodicities at the 1% significance level) being significant at a 5% significance level (p<0.05). The results are compared with those obtained from the same data set using annual thunder-day data, and with previous results obtained for other regions, and possible mechanisms to explain them are discussed. The existence of periodicities around 11 years in six out of seven cities and their anti-phase behavior with respect to the 11-year solar cycle suggest a global mechanism, probably related to a solar magnetic shielding effect acting on galactic cosmic rays, as an explanation for the relationship between thunderstorms and solar activity, although more studies are necessary to clarify its physical origin.
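A minimal sketch of a TD1-like quantity: the wavelet power of a monthly series at the 1-year (12-month) period, computed here with a Morlet wavelet on hypothetical data. The scale-period conversion follows the standard Morlet relation; the paper's exact normalization may differ.

```python
import numpy as np

def morlet_power(x, period, w0=6.0):
    """Mean wavelet power of a monthly series at a single Fourier period
    (in samples), using a Morlet wavelet."""
    s = period * (w0 + np.sqrt(2.0 + w0**2)) / (4.0 * np.pi)  # scale for this period
    t = np.arange(-int(4 * s), int(4 * s) + 1)
    psi = (np.pi ** -0.25) * np.exp(1j * w0 * t / s) \
          * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
    W = np.convolve(x, psi, mode="same")
    edge = int(2 * s)                       # trim edge-affected coefficients
    return float(np.mean(np.abs(W[edge:-edge]) ** 2))

# Hypothetical monthly thunder-day anomalies, 59 years long (708 months).
months = np.arange(708)
annual = np.sin(2 * np.pi * months / 12.0)       # strong annual cycle
seasonal4 = np.sin(2 * np.pi * months / 4.0)     # 4-month oscillation

td1_annual = morlet_power(annual, period=12)     # TD1-like: power at 1 year
td1_other = morlet_power(seasonal4, period=12)
```

A series dominated by the annual cycle yields a large value at the 12-month scale, while other periodicities contribute little there; tracking this value year by year gives a TD1-like time series.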
Wavelet and receiver operating characteristic analysis of heart rate variability
NASA Astrophysics Data System (ADS)
McCaffery, G.; Griffith, T. M.; Naka, K.; Frennaux, M. P.; Matthai, C. C.
2002-02-01
Multiresolution wavelet analysis has been used to study the heart rate variability in two classes of patients with different pathological conditions. The scale-dependent measure of Thurner et al. was found to be statistically significant in discriminating patients suffering from hypertrophic cardiomyopathy from a control set of normal subjects. We have performed receiver operating characteristic (ROC) analysis and found the ROC area to be a useful measure by which to quantify the significance of the discrimination, as well as to describe the severity of heart dysfunction.
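The two ingredients, a scale-dependent wavelet measure and the ROC area, can be sketched as follows. The Haar wavelet, decomposition level, group sizes, and synthetic interbeat statistics are illustrative assumptions only.

```python
import numpy as np

def scale_measure(series, level=4):
    """Standard deviation of Haar wavelet detail coefficients at the given
    level (in the spirit of the Thurner et al. scale-dependent measure)."""
    a = np.asarray(series, dtype=float)
    for _ in range(level):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return float(np.std(d))

def roc_area(pos, neg):
    """ROC area = probability that a positive case scores above a negative."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic interbeat-interval series: patients with reduced variability.
rng = np.random.default_rng(2)
controls = [scale_measure(0.8 + 0.05 * rng.standard_normal(1024)) for _ in range(20)]
patients = [scale_measure(0.8 + 0.02 * rng.standard_normal(1024)) for _ in range(20)]

auc = roc_area(controls, patients)   # area near 1: near-perfect discrimination
```

An ROC area of 0.5 means chance-level discrimination; values near 1 indicate the wavelet measure separates the two groups almost perfectly.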
The atmospheric parameters of FGK stars using wavelet analysis of CORALIE spectra
NASA Astrophysics Data System (ADS)
Gill, S.; Maxted, P. F. L.; Smalley, B.
2018-05-01
Context. Atmospheric properties of F-, G- and K-type stars can be measured by spectral model fitting or with the analysis of equivalent width (EW) measurements. These methods require data with good signal-to-noise ratios (S/Ns) and reliable continuum normalisation. This is particularly challenging for the spectra we have obtained with the CORALIE échelle spectrograph for FGK stars with transiting M-dwarf companions. The spectra tend to have low S/Ns, which makes it difficult to analyse them using existing methods. Aims: Our aim is to create a reliable automated spectral analysis routine to determine Teff, [Fe/H], and V sini from the CORALIE spectra of FGK stars. Methods: We use wavelet decomposition to distinguish between noise, continuum trends, and stellar spectral features in the CORALIE spectra. A subset of wavelet coefficients from the target spectrum is compared to those from a grid of models in a Bayesian framework to determine the posterior probability distributions of the atmospheric parameters. Results: Tests on synthetic spectra show that our method converges on the best-fitting atmospheric parameters. We also test the wavelet method on 20 FGK exoplanet host stars for which higher-quality data have been independently analysed using EW measurements. We find that we can determine Teff to a precision of 85 K, [Fe/H] to a precision of 0.06 dex, and V sini to a precision of 1.35 km s-1 for stars with V sini ≥ 5 km s-1. We find an offset in metallicity of ≈ -0.18 dex relative to the EW fitting method. We can determine log g to a precision of 0.13 dex but find systematic trends with Teff; measurements of log g are only reliable enough to confirm dwarf-like surface gravity (log g ≈ 4.5). Conclusions: The wavelet method can be used to determine Teff, [Fe/H], and V sini for FGK stars from CORALIE échelle spectra. Measurements of log g are unreliable but can confirm dwarf-like surface gravity.
We find that our method is self consistent, and robust for spectra with S/N ⪆ 40.
Motion compensation via redundant-wavelet multihypothesis.
Fowler, James E; Cui, Suxia; Wang, Yonghui
2006-10-01
Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.
Wavelet based approach for posture transition estimation using a waist worn accelerometer.
Bidargaddi, Niranjan; Klingbeil, Lasse; Sarela, Antti; Boyle, Justin; Cheung, Vivian; Yelland, Catherine; Karunanithi, Mohanraj; Gray, Len
2007-01-01
The ability to rise from a chair is considered important for achieving functional independence and quality of life. This sit-to-stand task is also a good indicator for assessing the condition of patients with chronic diseases. We developed a wavelet-based algorithm for detecting and calculating the durations of sit-to-stand and stand-to-sit transitions from the signal vector magnitude of the measured acceleration signal. The algorithm was tested on waist-worn accelerometer data collected from young subjects as well as geriatric patients. The tests demonstrate that both transitions can be detected by applying the wavelet transform to the signal vector magnitude. Wavelet analysis produces an estimate of the transition pattern that can be used to calculate the transition duration, which in turn gives clinically significant information on the patient's condition. The method can be applied in a real-life ambulatory monitoring system for assessing the condition of a patient living at home.
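A simplified sketch of the detection idea: smooth the signal vector magnitude with a coarse Haar approximation, then read the transition duration from a threshold crossing. The sampling rate, threshold, decomposition level, and synthetic trace are assumptions, not the paper's tuned algorithm.

```python
import numpy as np

def detect_transition(svm, fs=50.0, level=3, thresh=0.1):
    """Detect a posture transition in a signal-vector-magnitude trace:
    compute the Haar approximation at `level` (coarse trend), then find
    where it departs from the 1 g baseline. Returns (start_s, duration_s)."""
    a = np.asarray(svm, dtype=float)
    for _ in range(level):
        a = (a[0::2] + a[1::2]) / 2.0          # unnormalized Haar smoothing
    idx = np.flatnonzero(np.abs(a - 1.0) > thresh)
    if idx.size == 0:
        return None, 0.0
    step = 2 ** level                          # samples per coarse coefficient
    start = idx[0] * step
    duration = (idx[-1] - idx[0] + 1) * step
    return start / fs, duration / fs

# Synthetic 1 g trace with a bump simulating a sit-to-stand around samples 195-265.
t = np.arange(512)
svm = 1.0 + 0.5 * np.exp(-0.5 * ((t - 230) / 20.0) ** 2)

start_s, duration_s = detect_transition(svm, fs=50.0)
```

On this synthetic trace the detected duration is about 1.4 s, consistent with the width of the simulated transition.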
Mayer, Markus A.; Boretsky, Adam R.; van Kuijk, Frederik J.; Motamedi, Massoud
2012-01-01
Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in significantly less acquisition time, roughly an order of magnitude less than the averaging method requires. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained. PMID:23117804
Chitchian, Shahab; Mayer, Markus A; Boretsky, Adam R; van Kuijk, Frederik J; Motamedi, Massoud
2012-11-01
Image enhancement of retinal structures in optical coherence tomography (OCT) scans through denoising has the potential to aid in the diagnosis of several eye diseases. In this paper, a locally adaptive denoising algorithm using the double-density dual-tree complex wavelet transform, a combination of the double-density wavelet transform and the dual-tree complex wavelet transform, is applied to reduce speckle noise in OCT images of the retina. The algorithm overcomes the main limitation of the commonly used multiple-frame averaging technique, namely the limited number of frames that can be recorded due to eye movements, by providing comparable image quality in significantly less acquisition time, roughly an order of magnitude less than the averaging method requires. In addition, improvements in image quality metrics and a 5 dB increase in the signal-to-noise ratio are attained.
Sangeetha, S; Sujatha, C M; Manamalli, D
2014-01-01
In this work, the anisotropy of the compressive and tensile strength regions of femur trabecular bone is analysed using quaternion wavelet transforms (QWT). Normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include the compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms, and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Anisotropy is also calculated from the transformed images and analysed. Results show that the anisotropy values derived from the second and third phase components of the quaternion wavelet transform are distinct for normal and abnormal samples, with high statistical significance for both the compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples.
NASA Astrophysics Data System (ADS)
Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David
2015-04-01
In the present context of global change, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, and provide future scenarios of water resources. For a better understanding of hydrological changes, it is of crucial importance to determine how, and to what extent, the trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach combining large-scale/local-scale correlation, empirical statistical downscaling, and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed and of the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant across time-scales (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrates discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) from a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step.
This approach consists in (1) decomposing both signals (the SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time-scale, and (3) summing up all scale-dependent models to obtain a final reconstruction of the predictand. The results revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with floods and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting. In terms of methodology, further work may concern a fully comprehensive sensitivity analysis of the modeling with respect to the parameters of the multiresolution approach (the families of scaling and wavelet functions used, the number of coefficients/degree of smoothness, etc.).
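Steps 1-3 can be sketched with a one-level Haar multiresolution split and per-scale least squares. The synthetic predictor/predictand pair below is constructed so that the scale-blind regression fails while the scale-wise model succeeds; it is an illustration of the principle, not the paper's SLP/streamflow setup.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_mra(x):
    """One-level Haar multiresolution analysis: split x into an
    approximation component and a detail component that sum to x."""
    a = (x[0::2] + x[1::2]) / SQRT2
    d = (x[0::2] - x[1::2]) / SQRT2
    approx, detail = np.empty_like(x), np.empty_like(x)
    approx[0::2] = a / SQRT2
    approx[1::2] = a / SQRT2
    detail[0::2] = d / SQRT2
    detail[1::2] = -d / SQRT2
    return approx, detail

def fit(u, v):
    """Least-squares slope of v on u (no intercept, for the sketch)."""
    return float(u @ v / (u @ u))

rng = np.random.default_rng(3)
n = 64
x_slow = np.repeat(rng.standard_normal(n // 2), 2)               # smooth part
x_fast = np.repeat(rng.standard_normal(n // 2), 2) * np.tile([1.0, -1.0], n // 2)
x = x_slow + x_fast                                              # large-scale predictor
y = 2.0 * x_slow - 1.0 * x_fast                                  # scale-dependent response

# One downscaling model per time-scale, then sum the scale-wise predictions.
xa, xd = haar_mra(x)
ya, yd = haar_mra(y)
y_hat = fit(xa, ya) * xa + fit(xd, yd) * xd

y_single = fit(x, y) * x                                         # scale-blind baseline
err_multi = np.mean((y - y_hat) ** 2)
err_single = np.mean((y - y_single) ** 2)
```

Because the predictor-predictand relationship has a different sign at each scale, the single scale-blind regression cannot fit it, whereas the sum of per-scale models reconstructs the response essentially exactly.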
Frantzidis, Christos A; Vivas, Ana B; Tsolaki, Anthoula; Klados, Manousos A; Tsolaki, Magda; Bamidis, Panagiotis D
2014-01-01
Previous neuroscientific findings have linked Alzheimer's disease (AD) with less efficient information processing and brain network disorganization. However, pathological alterations of the brain networks during the preclinical phase of amnestic mild cognitive impairment (aMCI) remain largely unknown. The present study aimed at comparing patterns of functional disorganization in aMCI relative to mild dementia (MD). Participants consisted of 23 cognitively healthy adults, 17 aMCI patients, and 24 mild AD patients who underwent electroencephalographic (EEG) data acquisition during a resting-state condition. Synchronization analysis through the orthogonal discrete wavelet transform (ODWT) and directional brain network analysis were applied to the EEG data. This computational model was performed for networks that have the same number of edges (N = 500, 600, 700, 800 edges) across all participants and groups (fixed density values). All groups exhibited a small-world (SW) brain architecture. However, we found a significant reduction in the SW brain architecture in both aMCI and MD patients relative to the group of healthy controls. This functional disorganization was also correlated with the participants' general cognitive status. The deterioration of the network's organization was caused mainly by deficient local information processing, as quantified by the mean clustering coefficient. Functional hubs were identified through the normalized betweenness centrality metric. Analysis of the local characteristics showed relative hub preservation, even with statistically significantly reduced strength. Compensatory phenomena were also evident through the formation of additional hubs in left frontal and parietal regions. Our results indicate a decline in functional network organization even during the prodromal phase. Degeneration is evident even in the preclinical phase and coexists with transient network reorganization due to compensation.
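The mean clustering coefficient used above to quantify local information processing can be computed directly from an adjacency matrix. A minimal numpy sketch (not the study's ODWT/network pipeline):

```python
import numpy as np

def clustering_coefficients(A):
    """Per-node clustering coefficient of an undirected, unweighted graph:
    number of closed triangles over the number of possible triangles."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                          # node degrees
    triangles = np.diag(A @ A @ A) / 2.0       # closed triangles per node
    possible = k * (k - 1) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        c = np.where(possible > 0, triangles / possible, 0.0)
    return c

# Complete graph on 4 nodes: every neighbourhood is fully connected.
K4 = np.ones((4, 4)) - np.eye(4)
# Path graph 0-1-2-3: no triangles at all.
P4 = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)

c_k4 = clustering_coefficients(K4)   # all ones
c_p4 = clustering_coefficients(P4)   # all zeros
```

Averaging c over nodes gives the mean clustering coefficient; a drop in this average relative to controls is the kind of deficient local processing the study reports.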
Waveform Fingerprinting for Efficient Seismic Signal Detection
NASA Astrophysics Data System (ADS)
Yoon, C. E.; O'Reilly, O. J.; Beroza, G. C.
2013-12-01
Cross-correlating an earthquake waveform template with continuous waveform data has proven a powerful approach for detecting events missing from earthquake catalogs. If templates do not exist, it is possible to divide the waveform data into short overlapping time windows, then identify window pairs with similar waveforms. Applying these approaches to earthquake monitoring in seismic networks has tremendous potential to improve the completeness of earthquake catalogs, but because effort scales quadratically with time, it rapidly becomes computationally infeasible. We develop a fingerprinting technique to identify similar waveforms, using only a few compact features of the original data. The concept is similar to human fingerprints, which utilize key diagnostic features to identify people uniquely. Analogous audio-fingerprinting approaches have accurately and efficiently found similar audio clips within large databases; example applications include identifying songs and finding copyrighted content within YouTube videos. In order to fingerprint waveforms, we compute a spectrogram of the time series, and segment it into multiple overlapping windows (spectral images). For each spectral image, we apply a wavelet transform, and retain only the sign of the maximum magnitude wavelet coefficients. This procedure retains just the large-scale structure of the data, providing both robustness to noise and significant dimensionality reduction. Each fingerprint is a high-dimensional, sparse, binary data object that can be stored in a database without significant storage costs. Similar fingerprints within the database are efficiently searched using locality-sensitive hashing. We test this technique on waveform data from the Northern California Seismic Network that contains events not detected in the catalog. 
We show that this algorithm successfully identifies similar waveforms and detects uncataloged low magnitude events in addition to cataloged events, while running to completion faster than a comparison waveform autocorrelation code.
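The fingerprint construction (spectral image, then wavelet transform, then signs of the top-magnitude coefficients) can be sketched as follows. The window sizes, k, and the single-level Haar transform are illustrative choices, and the locality-sensitive-hashing stage is omitted.

```python
import numpy as np

def spectral_image(x, nperseg=64, hop=32, nframes=64):
    """Magnitude spectrogram patch (freq x time) of a 1-D waveform."""
    frames = np.stack([x[i * hop:i * hop + nperseg] for i in range(nframes)])
    window = np.hanning(nperseg)
    return np.abs(np.fft.rfft(frames * window, axis=1)).T[:nperseg // 2]

def haar2d(M):
    """Single-level 2-D Haar transform, subbands concatenated and flattened."""
    a = (M[0::2] + M[1::2]) / np.sqrt(2.0)     # along rows
    d = (M[0::2] - M[1::2]) / np.sqrt(2.0)
    out = []
    for B in (a, d):
        out.append((B[:, 0::2] + B[:, 1::2]) / np.sqrt(2.0))  # along columns
        out.append((B[:, 0::2] - B[:, 1::2]) / np.sqrt(2.0))
    return np.concatenate([b.ravel() for b in out])

def fingerprint(x, k=200):
    """Keep only the sign of the k largest-magnitude wavelet coefficients
    of the spectral image; everything else is set to zero."""
    c = haar2d(spectral_image(x))
    fp = np.zeros_like(c)
    top = np.argpartition(np.abs(c), -k)[-k:]
    fp[top] = np.sign(c[top])
    return fp

rng = np.random.default_rng(4)
sig_a = rng.standard_normal(4096)              # stand-ins for waveform windows
sig_b = rng.standard_normal(4096)

fp_a, fp_b = fingerprint(sig_a), fingerprint(sig_b)
```

Each fingerprint is a sparse sign vector: identical waveforms give identical fingerprints, while unrelated waveforms overlap little, which is what makes hashing-based similarity search effective.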
NASA Astrophysics Data System (ADS)
Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai
2014-06-01
Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper firstly investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and there being fewer movement artifacts in performing tongue movements compared to swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-classes distances versus within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subjectwise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrated the feasibility of detecting MI-SW from MI-Ton models.
The norms and variances of the Gabor, Morlet and general harmonic wavelet functions
NASA Astrophysics Data System (ADS)
Simonovski, I.; Boltežar, M.
2003-07-01
This paper deals with certain properties of the continuous wavelet transform and wavelet functions. The norms and the spreads in time and frequency of the common Gabor and Morlet wavelet functions are presented. It is shown that the norm of the Morlet wavelet function does not satisfy the normalization condition and that the normalized Morlet wavelet function is identical to the Gabor wavelet function with the parameter σ=1. The general harmonic wavelet function is developed using frequency modulation of the Hanning and Hamming window functions. Several properties of the general harmonic wavelet function are also presented and compared to the Gabor wavelet function. The time and frequency spreads of the general harmonic wavelet function are only slightly higher than the time and frequency spreads of the Gabor wavelet function. However, the general harmonic wavelet function is simpler to use than the Gabor wavelet function. In addition, the general harmonic wavelet function can be constructed in such a way that the zero average condition is truly satisfied. The average value of the Gabor wavelet function can approach zero but cannot reach it. When calculating the continuous wavelet transform, errors occur at the start- and end-time indexes. This is called the edge effect and is caused by the fact that the wavelet transform is calculated from a signal of finite length. In this paper, we propose a method that uses signal mirroring to reduce the errors caused by the edge effect. The success of the proposed method is demonstrated using a simulated signal.
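The mirroring idea can be demonstrated numerically: extend the finite signal by reflection before convolving with the wavelet kernel, and compare the boundary coefficients against those of an ideally extended signal. The Ricker kernel and the cosine test signal below are assumptions chosen so that the "true" coefficients are available, not the paper's general harmonic wavelet.

```python
import numpy as np

def ricker(width, half):
    """Real-valued Ricker (Mexican hat) kernel on [-half, half]."""
    t = np.arange(-half, half + 1, dtype=float)
    return (1.0 - (t / width) ** 2) * np.exp(-0.5 * (t / width) ** 2)

P = 16                     # padding length = kernel half-width
kernel = ricker(4.0, P)
n = 129
x = np.cos(2 * np.pi * np.arange(n) / 16.0)        # finite-length signal

# Ground truth: coefficients of the ideally extended signal, cropped to n.
t_full = np.arange(-P, n + P)
full = np.cos(2 * np.pi * t_full / 16.0)
c_true = np.convolve(full, kernel, mode="valid")

# Zero padding vs. mirror (reflect) padding of the finite segment.
zero_ext = np.concatenate([np.zeros(P), x, np.zeros(P)])
mirror_ext = np.concatenate([x[1:P + 1][::-1], x, x[-P - 1:-1][::-1]])
c_zero = np.convolve(zero_ext, kernel, mode="valid")
c_mirror = np.convolve(mirror_ext, kernel, mode="valid")

edge_err_zero = np.max(np.abs(c_zero[:P] - c_true[:P]))
edge_err_mirror = np.max(np.abs(c_mirror[:P] - c_true[:P]))
```

For this signal, whose endpoints fall on extrema, the reflected extension coincides with the true continuation, so the mirror-padded boundary coefficients are essentially exact while zero padding produces large edge errors.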
Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette
Huang, Wenzhu; Zhang, Wentao; Li, Fang
2013-01-01
This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis. Autocorrelation is used to extract the location coefficient from periodic AE signals, and the wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to account for two different types of AE source in localization. PMID:24141266
JPEG XS, a new standard for visually lossless low-latency lightweight image compression
NASA Astrophysics Data System (ADS)
Descampe, Antonin; Keinert, Joachim; Richter, Thomas; Fößel, Siegfried; Rouvroy, Gaël.
2017-09-01
JPEG XS is an upcoming standard from the JPEG Committee (formally known as ISO/IEC SC29 WG1). It aims to provide an interoperable visually lossless low-latency lightweight codec for a wide range of applications including mezzanine compression in broadcast and Pro-AV markets. This requires optimal support of a wide range of implementation technologies such as FPGAs, CPUs and GPUs. Targeted use cases are professional video links, IP transport, Ethernet transport, real-time video storage, video memory buffers, and omnidirectional video capture and rendering. In addition to the evaluation of the visual transparency of the selected technologies, a detailed analysis of the hardware and software complexity as well as the latency has been done to make sure that the new codec meets the requirements of the above-mentioned use cases. In particular, the end-to-end latency has been constrained to a maximum of 32 lines. Concerning the hardware complexity, neither encoder nor decoder should require more than 50% of an FPGA similar to Xilinx Artix 7 or 25% of an FPGA similar to Altera Cyclone 5. This process resulted in a coding scheme made of an optional color transform, a wavelet transform, the entropy coding of the highest magnitude level of groups of coefficients, and the raw inclusion of the truncated wavelet coefficients. This paper presents the details and status of the standardization process, a technical description of the future standard, and the latest performance evaluation results.
Quantitative analysis of bayberry juice acidity based on visible and near-infrared spectroscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao Yongni; He Yong; Mao Jingyuan
Visible and near-infrared (Vis/NIR) reflectance spectroscopy has been investigated for its ability to nondestructively detect acidity in bayberry juice. What we believe to be a new, better mathematical model is put forward, which we have named principal component analysis-stepwise regression analysis-backpropagation neural network (PCA-SRA-BPNN), to build a correlation between the spectral reflectivity data and the acidity of bayberry juice. In this model, the optimum network parameters, such as the number of input nodes, hidden nodes, learning rate, and momentum, are chosen by the value of the root-mean-square (rms) error. The results show that its prediction statistical parameters are a correlation coefficient (r) of 0.9451 and a root-mean-square error of prediction (RMSEP) of 0.1168. Partial least-squares (PLS) regression is also established for comparison with this model. Beforehand, the influences of various spectral pretreatments (standard normal variate, multiplicative scatter correction, Savitzky-Golay first derivative, and wavelet packet transform) are compared. The PLS approach with wavelet-packet-transform-preprocessed spectra is found to provide the best results, with a correlation coefficient (r) of 0.9061 and an RMSEP of 0.1564. Hence, these two models are both suitable for analyzing data from Vis/NIR spectroscopy and for solving the problem of acidity prediction in bayberry juice. This provides basic research towards ultimately realizing online measurement of the juice's internal quality with this Vis/NIR spectroscopy technique.
Sparse regularization for force identification using dictionaries
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity-promoting convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin-plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine dictionary can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both impact and harmonic forces in these cases.
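The l1-minimization step can be illustrated with a simpler relative of SpaRSA, iterative soft-thresholding (ISTA), on a synthetic two-impulse identification problem; the matrix `H` and all parameters below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def ista(H, y, lam=0.1, step=None, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||H x - y||^2 + lam*||x||_1.
    A simpler relative of SpaRSA; the columns of H form the dictionary."""
    if step is None:
        step = 1.0 / np.linalg.norm(H, 2) ** 2   # 1 / largest eigenvalue of H^T H
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = x - step * H.T @ (H @ x - y)         # gradient step on the l2 term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

# Hypothetical transfer matrix and a two-impulse force history.
rng = np.random.default_rng(1)
H = rng.normal(size=(100, 50)) / 10.0
f_true = np.zeros(50)
f_true[[7, 31]] = [2.0, -1.5]
y = H @ f_true + rng.normal(scale=0.01, size=100)
f_hat = ista(H, y, lam=0.05)
```

The soft-thresholding step is what drives most coefficients exactly to zero, so the number of active basis functions is determined by the data rather than fixed in advance.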
Directed differential connectivity graph of interictal epileptiform discharges
Amini, Ladan; Jutten, Christian; Achard, Sophie; David, Olivier; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gh. Ali; Kahane, Philippe; Minotti, Lorella; Vercueil, Laurent
2011-01-01
In this paper, we study temporal couplings between interictal events of spatially remote regions in order to localize the leading epileptic regions from intracerebral electroencephalogram (iEEG). We aim to assess whether quantitative epileptic graph analysis during the interictal period may be helpful to predict the seizure onset zone of ictal iEEG. Using the wavelet transform, cross-correlation coefficients, and multiple hypothesis testing, we propose a differential connectivity graph (DCG) to represent the connections that change significantly between epileptic and non-epileptic states as defined by the interictal events. Post-processing steps based on mutual information and multi-objective optimization are proposed to localize the leading epileptic regions through the DCG. The suggested approach is applied to iEEG recordings of five patients suffering from focal epilepsy. Quantitative comparisons of the proposed epileptic regions with ictal onset zones detected by visual inspection and using electrically stimulated seizures reveal good performance of the present method. PMID:21156385
Accelerated computer generated holography using sparse bases in the STFT domain.
Blinder, David; Schelkens, Peter
2018-01-22
Computer-generated holography at high resolutions is a computationally intensive task. Efficient algorithms are needed to generate holograms at acceptable speeds, especially for real-time and interactive applications such as holographic displays. We propose a novel technique to generate holograms using a sparse basis representation in short-time Fourier space combined with a wavefront-recording plane placed in the middle of the 3D object. By computing the point spread functions in the transform domain, we update only a small subset of the precomputed largest-magnitude coefficients to significantly accelerate the algorithm over conventional look-up table methods. We implement the algorithm on a GPU and report a speedup factor of over 30. We show that this transform is superior to wavelet-based approaches, and demonstrate quantitative and qualitative improvements over the state-of-the-art WASABI method; we report accuracy gains of 2 dB PSNR as well as improved view preservation.
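The sparse-coefficient idea, keeping only the largest-magnitude transform coefficients and discarding the rest, can be sketched in isolation; the real method applies this to precomputed STFT-domain point spread functions, whereas the snippet below simply thresholds an arbitrary coefficient array.

```python
import numpy as np

def sparsify(coeffs, k):
    """Keep only the k largest-magnitude transform coefficients and zero
    the rest -- the sparse-update idea behind the accelerated CGH method
    (the real algorithm updates precomputed STFT-domain PSFs)."""
    flat = coeffs.ravel().copy()
    idx = np.argpartition(np.abs(flat), -k)[:-k]  # indices of all but the top k
    flat[idx] = 0.0
    return flat.reshape(coeffs.shape)

rng = np.random.default_rng(4)
C = rng.normal(size=(8, 8))   # stand-in for a transform-domain PSF
S = sparsify(C, 5)
```

Because only `k` coefficients per point spread function are touched, the per-point update cost becomes O(k) instead of O(N), which is the source of the reported speedup.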
NASA Astrophysics Data System (ADS)
González Gómez, Dulce I.; Moreno Barbosa, E.; Martínez Hernández, Mario Iván; Ramos Méndez, José; Hidalgo Tobón, Silvia; Dies Suarez, Pilar; Barragán Pérez, Eduardo; De Celis Alonso, Benito
2014-11-01
The main goal of this project was to create a computer algorithm, based on wavelet analysis of regional homogeneity images obtained during resting-state studies, that would ideally diagnose ADHD automatically. Because the cerebellum is an area known to be affected by ADHD, this study specifically analysed this region. Male right-handed volunteers (children aged between 7 and 11 years) were studied and compared with age-matched controls. Statistically significant differences (p<0.0015) between the groups were found in the values of the absolute integrated wavelet spectrum. This difference might help in the future to distinguish healthy subjects from ADHD patients and therefore to diagnose ADHD. Even though the results were statistically significant, the small sample size limits the applicability of the method as presented here, and further work with larger samples and freely available datasets must be done.
NASA Astrophysics Data System (ADS)
van den Berg, J. C.
2004-03-01
A guided tour J. C. van den Berg; 1. Wavelet analysis, a new tool in physics J.-P. Antoine; 2. The 2-D wavelet transform, physical applications J.-P. Antoine; 3. Wavelets and astrophysical applications A. Bijaoui; 4. Turbulence analysis, modelling and computing using wavelets M. Farge, N. K.-R. Kevlahan, V. Perrier and K. Schneider; 5. Wavelets and detection of coherent structures in fluid turbulence L. Hudgins and J. H. Kaspersen; 6. Wavelets, non-linearity and turbulence in fusion plasmas B. Ph. van Milligen; 7. Transfers and fluxes of wind kinetic energy between orthogonal wavelet components during atmospheric blocking A. Fournier; 8. Wavelets in atomic physics and in solid state physics J.-P. Antoine, Ph. Antoine and B. Piraux; 9. The thermodynamics of fractals revisited with wavelets A. Arneodo, E. Bacry and J. F. Muzy; 10. Wavelets in medicine and physiology P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, M. G. Rosenblum and H. E. Stanley; 11. Wavelet dimension and time evolution Ch.-A. Guérin and M. Holschneider.
NASA Astrophysics Data System (ADS)
van den Berg, J. C.
1999-08-01
A guided tour J. C. van den Berg; 1. Wavelet analysis, a new tool in physics J.-P. Antoine; 2. The 2-D wavelet transform, physical applications J.-P. Antoine; 3. Wavelets and astrophysical applications A. Bijaoui; 4. Turbulence analysis, modelling and computing using wavelets M. Farge, N. K.-R. Kevlahan, V. Perrier and K. Schneider; 5. Wavelets and detection of coherent structures in fluid turbulence L. Hudgins and J. H. Kaspersen; 6. Wavelets, non-linearity and turbulence in fusion plasmas B. Ph. van Milligen; 7. Transfers and fluxes of wind kinetic energy between orthogonal wavelet components during atmospheric blocking A. Fournier; 8. Wavelets in atomic physics and in solid state physics J.-P. Antoine, Ph. Antoine and B. Piraux; 9. The thermodynamics of fractals revisited with wavelets A. Arneodo, E. Bacry and J. F. Muzy; 10. Wavelets in medicine and physiology P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, M. G. Rosenblum and H. E. Stanley; 11. Wavelet dimension and time evolution Ch.-A. Guérin and M. Holschneider.
Mouse EEG spike detection based on the adapted continuous wavelet transform
NASA Astrophysics Data System (ADS)
Tieng, Quang M.; Kharatishvili, Irina; Chen, Min; Reutens, David C.
2016-04-01
Objective. Electroencephalography (EEG) is an important tool in the diagnosis of epilepsy. Interictal spikes on EEG are used to monitor the development of epilepsy and the effects of drug therapy. EEG recordings are generally long and the data voluminous. Thus developing a sensitive and reliable automated algorithm for analyzing EEG data is necessary. Approach. A new algorithm for detecting and classifying interictal spikes in mouse EEG recordings is proposed, based on the adapted continuous wavelet transform (CWT). The construction of the adapted mother wavelet is founded on a template obtained from a sample comprising the first few minutes of an EEG data set. Main Result. The algorithm was tested with EEG data from a mouse model of epilepsy and experimental results showed that the algorithm could distinguish EEG spikes from other transient waveforms with a high degree of sensitivity and specificity. Significance. Differing from existing approaches, the proposed approach combines wavelet denoising, to isolate transient signals, with adapted CWT-based template matching, to detect true interictal spikes. Using the adapted wavelet constructed from a predefined template, the adapted CWT is calculated on small EEG segments to fit dynamical changes in the EEG recording.
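The template-matching core of such a detector can be sketched without the wavelet machinery: a spike template taken from the start of a recording is slid along the signal and scored by normalized cross-correlation. This is a simplified stand-in for the adapted-CWT detector, with all signals, shapes and thresholds synthetic.

```python
import numpy as np

def detect_spikes(signal, template, thresh=0.8):
    """Template-matching stand-in for the adapted-CWT spike detector:
    normalized cross-correlation against a spike template (taken from the
    start of the recording), thresholded to flag candidate spike onsets."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    n = len(t)
    scores = np.zeros(len(signal) - n + 1)
    for i in range(len(scores)):
        w = signal[i:i + n] - signal[i:i + n].mean()
        nw = np.linalg.norm(w)
        scores[i] = (w @ t) / nw if nw > 0 else 0.0
    return np.flatnonzero(scores > thresh)

# Synthetic EEG with two embedded spikes matching the template shape.
rng = np.random.default_rng(2)
template = np.exp(-np.linspace(-3, 3, 21) ** 2)   # bell-shaped spike
eeg = rng.normal(scale=0.05, size=500)
eeg[100:121] += template
eeg[350:371] += template
hits = detect_spikes(eeg, template)
```

Building the template adaptively from the first minutes of each data set, as the paper does, is what lets the same pipeline track recordings whose spike morphology differs between animals.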
Utilizing Wavelet Analysis to assess hydrograph change in northwestern North America
NASA Astrophysics Data System (ADS)
Tang, W.; Carey, S. K.
2017-12-01
Historical streamflow data in the mountainous regions of northwestern North America suggest that changes in flows are driven by warming temperatures, declining snowpack and glacier extent, and large-scale teleconnections. However, few sites have robust long-term records for statistical analysis, and previous research has focused on high- and low-flow indices along with trend analysis using the Mann-Kendall test and similar approaches. Furthermore, there has been less emphasis on ascertaining the drivers of change in the shape of the streamflow hydrograph compared with traditional flow metrics. In this work, we utilize wavelet analysis to evaluate changes in hydrograph characteristics for snowmelt-driven rivers in northwestern North America across a range of scales. Results suggest that wavelets can be used to detect a lengthening and advancement of freshet with a corresponding decline in peak flows. Furthermore, the gradual transition of flows from nival to pluvial regimes in more southerly catchments is evident in the wavelet spectral power through time. A challenge for this method of change detection is evaluating the statistical significance of changes in wavelet spectra as they relate to hydrograph form; ongoing work seeks to link these patterns to driving weather and climate along with larger-scale teleconnections.
Element analysis: a wavelet-based method for analysing time-localized events in noisy time series
2017-01-01
A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized ‘events’. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event’s ‘region of influence’ within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry. PMID:28484325
Enhancing seismic P phase arrival picking based on wavelet denoising and kurtosis picker
NASA Astrophysics Data System (ADS)
Shang, Xueyi; Li, Xibing; Weng, Lei
2018-01-01
P phase arrival picking of weak signals is still challenging in seismology. A wavelet denoising step is proposed to enhance seismic P phase arrival picking, and a kurtosis picker is applied to the wavelet-denoised signal to identify the P phase arrival; the combined scheme is termed the WD-K picker. Unlike traditional wavelet-based pickers built on a single wavelet component or on certain main wavelet components, the WD-K picker takes full advantage of the reconstruction from the main detail wavelet components and the approximation component, thereby considering more wavelet components and presenting a clearer P phase arrival feature. The WD-K picker has been evaluated on 500 microseismic signals recorded in the Yongshaba mine, China. Comparison between the WD-K picks and manual picks shows the good picking accuracy of the WD-K picker. Furthermore, the WD-K picking performance has been compared with that of the main-detail-component-combining kurtosis (WDC-K) picker, the single-wavelet-component kurtosis (SW-K) picker, and the certain-main-wavelet-component maximum kurtosis (MMW-K) picker. The comparison demonstrates that the WD-K picker has better picking accuracy than the other three wavelet- and kurtosis-based pickers, showing the enhanced ability conferred by wavelet denoising.
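The kurtosis stage of such a picker can be sketched as a sliding-window kurtosis whose largest positive jump marks the arrival; the wavelet denoising that the paper applies first is omitted here, and the trace, window length and onset are synthetic assumptions.

```python
import numpy as np

def kurtosis_pick(x, win=50):
    """Sliding-window kurtosis picker (the kurtosis stage of a WD-K-style
    scheme; wavelet denoising of x would precede this step). The pick is
    placed at the largest positive jump of the kurtosis characteristic,
    where impulsive arrival energy first enters the window."""
    n = len(x)
    k = np.zeros(n - win)
    for i in range(win, n):
        w = x[i - win:i]
        s = w.std()
        k[i - win] = np.mean(((w - w.mean()) / s) ** 4) - 3.0 if s > 0 else 0.0
    jump = np.diff(k)
    return win + 1 + int(np.argmax(jump))

# Synthetic trace: Gaussian noise followed by an emergent oscillatory arrival.
rng = np.random.default_rng(3)
x = rng.normal(scale=0.1, size=400)
onset = 250
x[onset:] += np.sin(0.3 * np.arange(150) + 1.0) * np.linspace(1.0, 0.2, 150)
pick = kurtosis_pick(x)
```

Kurtosis works here because stationary noise is near-Gaussian (excess kurtosis about zero), while the first arrival samples appear as heavy-tailed outliers within the window.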
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudgins, L.H.
After a brief review of the elementary properties of Fourier transforms, the Wavelet Transform is defined in Part I. Basic results are given for admissible wavelets. The Multiresolution Analysis, or MRA (a mathematical structure which unifies a large class of wavelets with Quadrature Mirror Filters), is then introduced. Some fundamental aspects of wavelet design are then explored. The Discrete Wavelet Transform is discussed and, in the context of an MRA, is seen to supply a Fast Wavelet Transform which competes with the Fast Fourier Transform for efficiency. In Part II, the Wavelet Transform is developed in terms of the scale number variable s instead of the scale length variable a, where a = 1/s. Basic results such as the admissibility condition, conservation of energy, and the reconstruction theorem are proven in this context. After reviewing some motivation for the usual Fourier power spectrum, a definition is given for the wavelet power spectrum. This "spectral density" is then interpreted in the context of spectral estimation theory. Parseval's theorem for wavelets then leads naturally to the Wavelet Cross Spectrum, Wavelet Cospectrum, and Wavelet Quadrature Spectrum. Wavelet Transforms are then applied in Part III to the analysis of atmospheric turbulence. Data collected over the ocean are examined in the wavelet transform domain for underlying structure. A brief overview of atmospheric turbulence is provided. Then the overall method of applying Wavelet Transform techniques to time series data is described. A trace study is included, showing some of the aspects of choosing the computational algorithm and selecting a specific analyzing wavelet. A model for generating synthetic turbulence data is developed, and is seen to yield useful results in comparisons with real data for structural transitions. Results from the theory of wavelet spectral estimation and wavelet cross-transforms are applied to studying the momentum transport and the heat flux.
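The wavelet power spectrum described in Part II can be sketched as the time-averaged squared magnitude of a continuous wavelet transform at each scale; the Morlet wavelet, its parameters, and the discretization below are illustrative choices, not the author's exact construction.

```python
import numpy as np

def morlet(n, s, w0=6.0):
    """Complex Morlet wavelet sampled at n points for scale s
    (L2-style 1/sqrt(s) normalization)."""
    t = np.arange(-n // 2, n // 2)
    return np.exp(1j * w0 * t / s) * np.exp(-(t / s) ** 2 / 2) / np.sqrt(s)

def wavelet_power(x, scales):
    """Wavelet power spectrum: mean |CWT|^2 over time at each scale,
    the wavelet analogue of the Fourier power spectrum."""
    power = []
    for s in scales:
        w = morlet(min(len(x), int(10 * s)), s)
        # correlation with the wavelet = convolution with its reversed conjugate
        conv = np.convolve(x, np.conj(w)[::-1], mode='same')
        power.append(np.mean(np.abs(conv) ** 2))
    return np.array(power)

t = np.arange(1024)
x = np.sin(2 * np.pi * t / 64)            # pure period-64 oscillation
scales = np.array([4, 8, 16, 32, 64])
p = wavelet_power(x, scales)
```

For w0 = 6 the Morlet centre frequency is close to 1/s, so a period-64 sinusoid concentrates its power near scale 64, the wavelet counterpart of a spectral line.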
Robust, Adaptive Functional Regression in Functional Mixed Model Framework.
Zhu, Hongxiao; Brown, Philip J; Morris, Jeffrey S
2011-09-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets.
Robust, Adaptive Functional Regression in Functional Mixed Model Framework
Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.
2012-01-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. 
It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
In this paper we propose a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique, in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams and employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy between frames along the temporal direction using motion-compensated temporal filtering; thus high coding performance and flexible scalability can be provided by this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences show that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
Multiscale analysis of the gradient of linear polarization
NASA Astrophysics Data System (ADS)
Robitaille, J.-F.; Scaife, A. M. M.
2015-07-01
We propose a new multiscale method to calculate the amplitude of the gradient of the linear polarization vector, |∇P|, using a wavelet-based formalism. We demonstrate this method on a field of the Canadian Galactic Plane Survey and show that the filamentary structure typically seen in |∇P| maps depends strongly on the instrumental resolution. Our analysis reveals that different networks of filaments are present on different angular scales. The wavelet formalism allows us to calculate the power spectrum of the fluctuations seen in |∇P| and to determine the scaling behaviour of this quantity. The power spectrum is found to follow a power law with γ ≈ 2.1. We identify a small drop in power between scales of 80 ≲ l ≲ 300 arcmin, which corresponds well to the overlap in the u-v plane between the Effelsberg 100-m telescope and the Dominion Radio Astrophysical Observatory 26-m telescope data. We suggest that this drop is due to undersampling present in the 26-m telescope data. In addition, the wavelet coefficient distributions show higher skewness on smaller scales than at larger scales. The spatial distribution of the outliers in the tails of these distributions forms a coherent subset of filaments correlated across multiple scales, which trace the sharpest changes in the polarization vector P within the field. We suggest that these structures may be associated with highly compressive shocks in the medium. The power spectrum of the field excluding these outliers shows a steeper power law with γ ≈ 2.5.
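The quantity |∇P| itself is straightforward to compute from Stokes Q and U maps as the root sum of squares of their spatial derivatives; the wavelet-based multiscale analysis in the paper generalizes this single-scale form. The synthetic maps below are illustrative.

```python
import numpy as np

def grad_P(Q, U):
    """Amplitude of the gradient of the linear polarization vector
    P = (Q, U): |grad P| = sqrt((dQ/dx)^2 + (dQ/dy)^2 + (dU/dx)^2 + (dU/dy)^2)."""
    dQy, dQx = np.gradient(Q)   # np.gradient returns axis-0 (y), axis-1 (x)
    dUy, dUx = np.gradient(U)
    return np.sqrt(dQx**2 + dQy**2 + dUx**2 + dUy**2)

# A synthetic field with a sharp jump in Q along one column produces a
# vertical filament of high |grad P| there, and zero elsewhere.
Q = np.zeros((32, 32))
Q[:, 16:] = 1.0
U = np.zeros((32, 32))
g = grad_P(Q, U)
```

Sharp changes in either Stokes parameter, such as those at shock fronts, show up as the filamentary ridges of |∇P| discussed in the abstract.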
Damage localization in aluminum plate with compact rectangular phased piezoelectric transducer array
NASA Astrophysics Data System (ADS)
Liu, Zenghua; Sun, Kunming; Song, Guorong; He, Cunfu; Wu, Bin
2016-03-01
In this work, a detection method for damage in plate-like structures is presented, using a compact rectangular phased piezoelectric transducer array of 16 piezoelectric elements. The compact array can not only detect and locate a single defect (a through hole) in the plate, but also identify multiple defects (through holes and a surface defect simulated by an iron pillar glued to the plate). The experiments showed that the compact rectangular phased transducer array could inspect the full area of the plate structure and detect multiple defects simultaneously. The processing algorithm proposed in this paper contains two parts: signal filtering and damage imaging. The former removes noise from the signals using the continuous wavelet transform, which provides a plot of wavelet coefficients from which the narrow-frequency-band signal can easily be extracted. The latter implements damage detection and localization. In order to accurately locate defects and improve the imaging quality, two images were obtained from amplitude and phase information: one with the Total Focusing Method (TFM) and a phase image with the Sign Coherence Factor (SCF). Furthermore, an image compounding technique for the compact rectangular phased piezoelectric transducer array is proposed, in which the compounded image is obtained by combining the TFM image with the SCF image, greatly improving the resolution and contrast of the image.
Yakovenko, I A; Cheremushkin, E A; Kozlov, M K
2015-01-01
Changes in beta-rhythm parameters under working-memory load were investigated in 70 healthy adults by extending the interstimulus interval between the target and triggering stimuli to 16 s, in two series of experiments with a set to facial expression. In the second series, to strengthen the load, an additional cognitive task was introduced in the middle of this interval in the form of Go/NoGo conditioning stimuli (circles of blue or green color). The data were analyzed by means of the continuous wavelet transform based on the complex Morlet mother wavelet in the 1-35 Hz range. Beta-rhythm power was characterized by the mean level and the maxima of the wavelet-transform coefficient (WLC), and by the latencies of the maxima. Introducing the additional cognitive task into the pause between the target and triggering stimuli led to a substantial increase in the absolute values of the mean beta-rhythm WLC level and in the relative sizes of the beta-rhythm WLC maxima. In the series without the conditioning stimulus, subjects with a large number of errors (6 to 40), i.e. a rigid set, were characterized at the set-forming stage by higher mean beta-rhythm WLC levels than subjects with a small number of errors (up to 5), i.e. a plastic set. Introduction of the conditioning stimuli smoothed out these intergroup differences throughout the experiment.
NASA Astrophysics Data System (ADS)
Huang, D.; Wang, G.
2014-12-01
Stochastic simulation of spatially distributed ground-motion time histories is important for performance-based earthquake design of geographically distributed systems. In this study, we develop a novel technique to stochastically simulate regionalized ground-motion time histories using wavelet packet analysis. First, a transient acceleration time history is characterized by the wavelet-packet parameters proposed by Yamamoto and Baker (2013). The wavelet-packet parameters fully characterize ground-motion time histories in terms of energy content, time-frequency-domain characteristics and time-frequency nonstationarity. This study further investigates the spatial cross-correlations of wavelet-packet parameters based on geostatistical analysis of 1500 regionalized ground-motion records from eight well-recorded earthquakes in California, Mexico, Japan and Taiwan. The linear model of coregionalization (LMC) is used to develop a permissible spatial cross-correlation model for each parameter group. The geostatistical analysis of ground-motion data from different regions reveals significant dependence of the LMC structure on regional site conditions, which can be characterized by the correlation range of Vs30 in each region. In general, the spatial correlation and cross-correlation of wavelet-packet parameters are stronger if the site condition is more homogeneous. Using the region-specific spatial cross-correlation model and the cokriging technique, wavelet-packet parameters at unmeasured locations can be optimally estimated, and regionalized ground-motion time histories can be synthesized. Case studies and blind tests demonstrate that the simulated ground motions generally agree well with the actual recorded data when the influence of regional site conditions is considered. The developed method has great potential for use in computational seismic analysis and loss estimation at a regional scale.
Improved medical image fusion based on cascaded PCA and shift invariant wavelet transforms.
Reena Benjamin, J; Jayasree, T
2018-02-01
In the medical field, radiologists need more informative and high-quality medical images to diagnose diseases. Image fusion plays a vital role in the field of biomedical image analysis. It aims to integrate the complementary information from multimodal images, producing a new composite image which is expected to be more informative for visual perception than any of the individual input images. The main objective of this paper is to improve the information content, preserve the edges and enhance the quality of the fused image using cascaded principal component analysis (PCA) and shift-invariant wavelet transforms. A novel image fusion technique based on cascaded PCA and shift-invariant wavelet transforms is proposed in this paper. PCA in the spatial domain extracts relevant information from the large dataset based on eigenvalue decomposition, and the wavelet transform operating in the complex domain with shift-invariant properties brings out more directional and phase details of the image. The maximum fusion rule applied in the dual-tree complex wavelet transform domain enhances the average information and morphological details. The input images of the human brain in two different modalities (MRI and CT) are collected from the whole-brain atlas data distributed by Harvard University. Both MRI and CT images are fused using the cascaded PCA and shift-invariant wavelet transform method. The proposed method is evaluated based on three main key factors, namely structure preservation, edge preservation, and contrast preservation. The experimental results and comparison with other existing fusion methods show the superior performance of the proposed image fusion framework in terms of visual and quantitative evaluations. In this paper, a complex wavelet-based image fusion has been discussed. The experimental results demonstrate that the proposed method enhances the directional features as well as fine edge details. Also, it reduces redundant details, artifacts, and distortions.
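The maximum fusion rule mentioned above can be sketched in a few lines: at each coefficient position, the input with the larger magnitude wins. The small arrays below stand in for dual-tree complex wavelet subbands of the two modalities.

```python
import numpy as np

def max_fusion(c1, c2):
    """Maximum fusion rule: at each position keep the coefficient with
    the larger magnitude. In the paper this is applied per subband in
    the dual-tree complex wavelet domain; plain arrays stand in here."""
    c1, c2 = np.asarray(c1), np.asarray(c2)
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

a = np.array([[0.2, -1.5], [0.0, 0.7]])   # subband from modality 1 (e.g. MRI)
b = np.array([[-0.9, 0.4], [0.3, -0.6]])  # subband from modality 2 (e.g. CT)
fused = max_fusion(a, b)   # -> [[-0.9, -1.5], [0.3, 0.7]]
```

Large-magnitude wavelet coefficients correspond to salient edges and textures, so selecting the larger of the two at each location carries the strongest detail from either modality into the fused image.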
Snellings, André; Sagher, Oren; Anderson, David J; Aldridge, J Wayne
2009-10-01
The authors developed a wavelet-based measure for quantitative assessment of neural background activity during intraoperative neurophysiological recordings so that the boundaries of the subthalamic nucleus (STN) can be more easily localized for electrode implantation. Neural electrophysiological data were recorded in 14 patients (20 tracks and 275 individual recording sites) with dopamine-sensitive idiopathic Parkinson disease during the target localization portion of deep brain stimulator implantation surgery. During intraoperative recording, the STN was identified based on audio and visual monitoring of neural firing patterns, kinesthetic tests, and comparisons between neural behavior and the known characteristics of the target nucleus. The quantitative wavelet-based measure was applied offline using commercially available software to measure the magnitude of the neural background activity, and the results of this analysis were compared with the intraoperative conclusions. Wavelet-derived estimates were also compared with power spectral density measurements. The wavelet-derived background levels were significantly higher in regions encompassed by the clinically estimated boundaries of the STN than in the surrounding regions (STN, 225 +/- 61 microV; ventral to the STN, 112 +/- 32 microV; and dorsal to the STN, 136 +/- 66 microV). In every track, the absolute maximum magnitude was found within the clinically identified STN. The wavelet-derived background levels provided a more consistent index with less variability than measurements with power spectral density. Wavelet-derived background activity can be calculated quickly, does not require spike sorting, and can be used to identify the STN reliably with very little subjective interpretation required. This method may facilitate the rapid intraoperative identification of STN borders.
Snellings, André; Sagher, Oren; Anderson, David J.; Aldridge, J. Wayne
2016-01-01
Object A wavelet-based measure was developed to quantitatively assess neural background activity taken during surgical neurophysiological recordings to localize the boundaries of the subthalamic nucleus during target localization for deep brain stimulator implant surgery. Methods Neural electrophysiological data were recorded from 14 patients (20 tracks, n = 275 individual recording sites) with dopamine-sensitive idiopathic Parkinson’s disease during the target localization portion of deep brain stimulator implant surgery. During intraoperative recording the STN was identified based upon audio and visual monitoring of neural firing patterns, kinesthetic tests, and comparisons between neural behavior and known characteristics of the target nucleus. The quantitative wavelet-based measure was applied off-line using MATLAB software to measure the magnitude of the neural background activity, and the results of this analysis were compared with the intraoperative conclusions. Wavelet-derived estimates were also compared with power spectral density measures. Results The wavelet-derived background levels were significantly higher in regions encompassed by the clinically estimated boundaries of the STN than in surrounding regions (STN: 225 ± 61 μV vs. ventral to STN: 112 ± 32 μV, and dorsal to STN: 136 ± 66 μV). In every track, the absolute maximum magnitude was found within the clinically identified STN. The wavelet-derived background levels provided a more consistent index with less variability than power spectral density. Conclusions The wavelet-derived background activity measure can be calculated quickly, requires no spike sorting, and can be reliably used to identify the STN with very little subjective interpretation required. This method may facilitate rapid intraoperative identification of subthalamic nucleus borders. PMID:19344225
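The abstract does not give the exact formula, but a background-magnitude index in this spirit can be sketched with a one-level Haar detail and a median-based (MAD) magnitude estimate, which largely ignores sparse spike transients and hence needs no spike sorting; the wavelet choice, scaling constant, and synthetic signals are assumptions:

```python
import numpy as np

def haar_detail(x):
    """One-level Haar wavelet detail coefficients of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    x = x[: len(x) // 2 * 2]                  # trim to even length
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def background_level(signal):
    """Robust magnitude of the fine-scale wavelet detail: the median of
    |coefficients| divided by 0.6745 estimates the background amplitude
    while being insensitive to sparse, large spike transients."""
    return np.median(np.abs(haar_detail(signal))) / 0.6745

rng = np.random.default_rng(1)
outside_stn = 10.0 * rng.standard_normal(4096)  # quieter background region
inside_stn = 25.0 * rng.standard_normal(4096)   # elevated background activity
print(background_level(inside_stn) > background_level(outside_stn))  # True
```

Because the index is a median over many detail coefficients, a handful of spikes barely moves it, which is the property that makes spike sorting unnecessary.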
Havla, Lukas; Schneider, Moritz J; Thierfelder, Kolja M; Beyer, Sebastian E; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H; Dietrich, Olaf
2016-02-01
The purpose of this study was to propose and evaluate a new wavelet-based technique for classification of arterial and venous vessels using time-resolved cerebral CT perfusion data sets. Fourteen consecutive patients (mean age 73 yr, range 17-97) with suspected stroke but no pathology in follow-up MRI were included. A CT perfusion scan with 32 dynamic phases was performed during intravenous bolus contrast-agent application. After rigid-body motion correction, a Paul wavelet (order 1) was used to calculate the wavelet power spectrum (WPS) of each attenuation-time course voxelwise. The angiographic intensity A was defined as the maximum of the WPS, located at the coordinates T (time axis) and W (scale/width axis) within the WPS. Using these three parameters (A, T, W) separately as well as combined by (1) Fisher's linear discriminant analysis (FLDA), (2) logistic regression (LogR) analysis, or (3) support vector machine (SVM) analysis, their potential to classify 18 different arterial and venous vessel segments per subject was evaluated. The best vessel classification was obtained using all three parameters A, T, and W [area under the curve (AUC): 0.953 with FLDA and 0.957 with LogR or SVM]. In direct comparison, the wavelet-derived parameters provided performance at least equal to conventional attenuation-time-course parameters. The maximum AUC obtained from the proposed wavelet parameters was slightly (although not statistically significantly) higher than the maximum AUC (0.945) obtained from the conventional parameters. A new method to classify arterial and venous cerebral vessels with high statistical accuracy was introduced based on the time-domain wavelet transform of dynamic CT perfusion data in combination with linear or nonlinear multidimensional classification techniques.
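The (A, T, W) feature extraction can be sketched as follows, using the standard analytic form of the order-1 Paul wavelet and taking the WPS maximum and its time/scale coordinates; the synthetic test signal and all parameter choices are assumptions, not the study's data:

```python
import math
import numpy as np

def paul_wavelet(t, m=1):
    """Paul mother wavelet of order m (complex-valued, analytic)."""
    norm = (2.0 ** m * math.factorial(m)) / math.sqrt(math.pi * m * math.factorial(2 * m))
    return norm * (1j ** m) * (1.0 - 1j * t) ** (-(m + 1))

def wps_features(x, scales, dt=1.0):
    """Wavelet power spectrum of x over the given scales; returns the
    power maximum A and its coordinates T (time) and W (scale)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    t = (np.arange(n) - n // 2) * dt                  # centred wavelet support
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        kernel = paul_wavelet(t / s) / math.sqrt(s)   # dilated, L2-normalised
        # np.correlate conjugates its second argument, giving the CWT integral
        power[i] = np.abs(np.correlate(x, kernel, mode="same") * dt) ** 2
    k_scale, k_time = np.unravel_index(np.argmax(power), power.shape)
    return power[k_scale, k_time], k_time * dt, scales[k_scale]

# Synthetic transient (Gaussian-windowed oscillation centred at sample 300),
# standing in for an attenuation-time course with a bolus-like event.
n = np.arange(512)
signal = np.exp(-((n - 300) / 20.0) ** 2) * np.sin(2 * np.pi * 0.05 * n)
A, T, W = wps_features(signal, scales=np.arange(1.0, 16.0))
```

Because the Paul wavelet is analytic, the power localizes T near the transient's centre, while W tracks its temporal width; these three numbers are then what a classifier such as FLDA, LogR, or an SVM would consume.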
Wavelet transforms with discrete-time continuous-dilation wavelets
NASA Astrophysics Data System (ADS)
Zhao, Wei; Rao, Raghuveer M.
1999-03-01
Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.
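The core idea, a discrete-time wavelet that admits continuous (non-dyadic) dilations, can be illustrated with a Mexican-hat prototype; the prototype and all names below are stand-in assumptions, not the paper's construction:

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) prototype standing in for a wavelet whose
    underlying function is known in closed form."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def dt_wavelet(a, half=32):
    """Discrete-time wavelet at a *continuous* dilation a > 0: sample
    psi(n/a) on the integer grid n, then L2-normalise. The scale grid
    is therefore not restricted to dyadic steps a = 2^j."""
    n = np.arange(-half, half + 1)
    psi = mexican_hat(n / a)
    return psi / np.linalg.norm(psi)

def dtcd_transform(x, dilations):
    """Correlate the digital signal with the wavelet at each dilation."""
    return np.array([np.correlate(x, dt_wavelet(a), mode="same") for a in dilations])

x = np.zeros(128)
x[64] = 1.0                                                  # unit impulse
coeffs = dtcd_transform(x, dilations=[1.0, 1.5, 2.0, 2.75])  # non-dyadic scales
print(coeffs.shape)  # (4, 128)
```

The entire computation stays in discrete time (integer sample grid), yet the dilation parameter varies continuously, which is the scaling-resolution advantage the abstract describes over dyadic or M-channel filter banks.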
Automated Data Processing (ADP) Research and Development,
1995-08-14
individual explosions were 16x16 ft for M1 and 18x18 ft for M2. ...National Laboratory under contract W-7405-ENG-48. 1. OBJECTIVES Our primary objective is to develop efficient and reliable automated event location and ...real seismograms; Figure 1 shows example wavelet coefficients (in the transform domain) and bandpass-filtered versions of a seismogram as a function of