Wavelet and wavelet packet compression of electrocardiograms.
Hilton, M L
1997-05-01
Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
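EZW-style coders like the one described above exploit the observation that when a wavelet coefficient is insignificant at the current threshold, its descendants at finer scales often are too, so the whole subtree can be coded with one symbol. A minimal sketch of that zerotree test, on a hypothetical toy coefficient tree (my illustration, not the paper's implementation):

```python
def is_zerotree_root(tree, node, threshold):
    """True if `node` and all of its descendants are insignificant
    (magnitude below `threshold`), i.e. the subtree codes as one symbol."""
    value, children = tree[node]
    if abs(value) >= threshold:
        return False
    return all(is_zerotree_root(tree, child, threshold) for child in children)

# Toy coefficient tree: node -> (coefficient value, child nodes)
tree = {
    "root": (40, ["a", "b"]),
    "a":    (3,  ["a1", "a2"]),
    "a1":   (1,  []),
    "a2":   (-2, []),
    "b":    (9,  []),
}
```

At threshold 8, the subtree rooted at "a" is a zerotree (all magnitudes below 8), while "b" and "root" are significant.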
Perceptually Lossless Wavelet Compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John
1996-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
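The level-to-frequency relation in the abstract can be written down directly. This sketch (my own illustration, not the authors' code) computes the spatial frequency of each DWT level for a hypothetical display resolution:

```python
def wavelet_spatial_frequency(r, level):
    """Spatial frequency (cycles/degree) of DWT basis functions at a level,
    per f = r * 2**(-L), with r the display resolution in pixels/degree."""
    return r * 2.0 ** (-level)

# Example: a hypothetical display with r = 32 pixels/degree;
# the frequency halves with each additional decomposition level.
freqs = [wavelet_spatial_frequency(32, L) for L in range(1, 5)]
```

Because amplitude thresholds rise rapidly with spatial frequency, coarser levels (lower L, higher frequency is at low L here: level 1 is the finest band) tolerate different quantization step sizes, which is what the perceptually lossless quantization matrix encodes.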
Data compression by wavelet transforms
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1992-01-01
A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.
Satellite image compression using wavelet
NASA Astrophysics Data System (ADS)
Santoso, Alb. Joko; Soesianto, F.; Dwiandiyanto, B. Yudi
2010-02-01
Image data is a combination of information and redundancy: the information is the part of the data that must be protected because it carries the meaning and designation of the data, while the redundancy is the part that can be reduced, compressed, or eliminated. The problem that arises stems from the nature of image data, which consumes a great deal of memory. This paper compares 31 wavelet functions by examining their impact on PSNR, compression ratio, and bits per pixel (bpp), as well as the influence of the decomposition level on PSNR and compression ratio. Based on the tests performed, the Haar wavelet has the advantage that its PSNR is relatively higher than that of the other wavelets, its compression ratio is relatively better than that of the other wavelet types, and its bits per pixel is relatively better than that of the other wavelet types.
LIDAR data compression using wavelets
NASA Astrophysics Data System (ADS)
Pradhan, B.; Mansor, Shattri; Ramli, Abdul Rahman; Mohamed Sharif, Abdul Rashid B.; Sandeep, K.
2005-10-01
The lifting scheme has been found to be a flexible method for constructing scalar wavelets with desirable properties. In this paper, it is extended to LIDAR data compression. A newly developed data compression approach that approximates the LIDAR surface with a series of non-overlapping triangles is presented. A Triangulated Irregular Network (TIN) is the most common form of digital surface model; it consists of elevation values with x, y coordinates that make up triangles. Over the years, however, the TIN representation has become a case in point for many researchers due to its large data size. Compression of TIN is needed for efficient management of large data sets and good surface visualization. The approach covers the following steps. First, using a Delaunay triangulation, an efficient algorithm is developed to generate the TIN, which forms the terrain from an arbitrary set of data. A new interpolation wavelet filter for TIN is applied in two steps, namely splitting and elevation. In the splitting step, a triangle is divided into several sub-triangles, and the elevation step is used to 'modify' the point values (point coordinates for geometry) after the splitting. The data set is then compressed at the desired locations using second-generation wavelets. The quality of the geographical surface representation after applying the proposed technique is compared with the original LIDAR data. The results show that this method can significantly reduce the size of the data set.
Wavelet transform approach to video compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-04-01
In this research, we propose a video compression scheme that uses the boundary-control vectors to represent the motion field and the embedded zerotree wavelet (EZW) to compress the displacement frame difference. When compared to the DCT-based MPEG, the proposed new scheme achieves a better compression performance in terms of the MSE (mean square error) value and visual perception for the same given bit rate.
Novel wavelet coder for color image compression
NASA Astrophysics Data System (ADS)
Wang, Houng-Jyh M.; Kuo, C.-C. Jay
1997-10-01
A new still image compression algorithm based on the multi-threshold wavelet coding (MTWC) technique is proposed in this work. It is an embedded wavelet coder in the sense that its compression ratio can be controlled depending on the bandwidth requirement of image transmission. At low bit rates, MTWC avoids the blocking artifacts of JPEG, resulting in better reconstructed image quality. A subband decision scheme based on rate-distortion theory is developed to enhance image fidelity. Moreover, a new quantization sequence order is introduced based on our analysis of error energy reduction in the significance and refinement maps. Experimental results are given to demonstrate the superior performance of the proposed algorithm: high reconstruction quality for color and gray-level image compression at low computational complexity. Generally speaking, it gives a better rate-distortion tradeoff and runs faster than most existing state-of-the-art wavelet coders.
Image compression algorithm using wavelet transform
NASA Astrophysics Data System (ADS)
Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory
2016-09-01
Within the multi-resolution analysis, the study of the image compression algorithm using the Haar wavelet has been performed. We have studied the dependence of the image quality on the compression ratio. Also, the variation of the compression level of the studied image has been obtained. It is shown that the compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7 - 4.2, depending on the type of images. It is shown that the algorithm used is more convenient and has more advantages than Winrar. The Haar wavelet algorithm has improved the method of signal and image processing.
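A minimal sketch of the kind of Haar-based compression studied above: one transform level followed by discarding small detail coefficients (my illustration; the paper's multi-level setup and quality measurements differ):

```python
def haar_forward(x):
    """One level of the Haar transform: pairwise averages, then differences."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    diff = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg + diff

def haar_inverse(c):
    """Exact inverse of haar_forward."""
    n = len(c) // 2
    out = []
    for a, d in zip(c[:n], c[n:]):
        out += [a + d, a - d]
    return out

def compress(x, threshold):
    """Lossy step: zero out detail coefficients below the threshold.
    Runs of zeros are what the subsequent entropy coder exploits."""
    c = haar_forward(x)
    n = len(c) // 2
    return c[:n] + [d if abs(d) >= threshold else 0.0 for d in c[n:]]
```

With threshold 0 the round trip is exact; raising the threshold trades reconstruction quality for more zeros, i.e. a higher compression ratio.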
Compression of echocardiographic scan line data using wavelet packet transform
NASA Technical Reports Server (NTRS)
Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.
2001-01-01
An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scanline data based on the wavelet packet transform. A compression ratio of at least 94:1 resulted in preserved image quality.
Compression of echocardiographic scan line data using wavelet packet transform
NASA Technical Reports Server (NTRS)
Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.
2001-01-01
An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scanline data based on the wavelet packet transform. A compression ratio of at least 94:1 resulted in preserved image quality.
Compressive sensing exploiting wavelet-domain dependencies for ECG compression
NASA Astrophysics Data System (ADS)
Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.
2012-06-01
Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
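The core of compressive sensing as summarized above is that a sparse signal survives random projection: a few measurements y = Φx still identify which coefficients are active. A toy, deterministic illustration with a fixed ±1 matrix and a single correlation step (a hypothetical matrix of my own; not the paper's model-based CoSaMP):

```python
# Fixed +/-1 measurement matrix: 3 measurements of a length-4 signal
PHI = [
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
]

def measure(x):
    """y = PHI @ x : sub-Nyquist measurements of the signal x."""
    return [sum(p * v for p, v in zip(row, x)) for row in PHI]

def largest_correlation(y):
    """One matching-pursuit-style step: the column of PHI most correlated
    with y estimates the location of the dominant sparse coefficient."""
    cols = list(zip(*PHI))
    proxy = [sum(c * m for c, m in zip(col, y)) for col in cols]
    return max(range(len(proxy)), key=lambda j: abs(proxy[j]))
```

Model-based recovery as in the paper goes further: instead of picking the largest correlations freely, it restricts the candidate support to connected subtrees of the wavelet coefficient tree, which is why fewer measurements suffice.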
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet
Lossless wavelet compression on medical image
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong
2006-09-01
An increasing amount of medical imagery is created directly in digital form. Systems such as clinical Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for compressing the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation of the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression have been proposed in the literature. Recent advances in lossy compression include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order of 50:1 or even more), they do not allow exact reconstruction of the original input data. Lossless compression techniques permit perfect reconstruction of the original image, but the achievable compression ratios are only of the order of 2:1, up to 4:1. In our paper, we use a lifting scheme to generate truly lossless non-linear integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm producing an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream and still reconstruct the image. Therefore, a compression scheme generating an embedded code can
Steady-State and Dynamic Myoelectric Signal Compression Using Embedded Zero-Tree Wavelets
2001-10-25
MES compression. This research investigates static and dynamic MES compression using the embedded zero-tree wavelet (EZW) compression algorithm and ... compression using a modified version of Shapiro's [5] embedded zero-tree wavelet (EZW) compression algorithm. This research investigates static ... and transient MES compression using the EZW compression algorithm and compares its performance to a standard wavelet compression technique. For
Optimization of wavelet decomposition for image compression and feature preservation.
Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T
2003-09-01
A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, -0.14548269) produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can find the compression outcomes and feature preservation characteristics as a function of wavelets. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
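As a sketch of how such a tap-4 low-pass analysis filter is applied (circular convolution followed by downsampling by two; my own illustration, using the coefficients quoted in the abstract):

```python
# Low-pass filter coefficients quoted in the abstract
LOWPASS = [0.32252136, 0.85258927, 1.38458542, -0.14548269]

def analysis_lowpass(x):
    """Circular convolution with the tap-4 low-pass filter,
    downsampled by 2 (the approximation band of one DWT level)."""
    n = len(x)
    return [sum(LOWPASS[k] * x[(i + k) % n] for k in range(4))
            for i in range(0, n, 2)]
```

On a constant signal every output sample equals the signal value times the filter's coefficient sum (about 2.414), which is the DC gain of this analysis filter.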
3-D wavelet compression and progressive inverse wavelet synthesis rendering of concentric mosaic.
Luo, Lin; Wu, Yunnan; Li, Jin; Zhang, Ya-Qin
2002-01-01
Using an array of photo shots, the concentric mosaic offers a quick way to capture and model a realistic three-dimensional (3-D) environment. We compress the concentric mosaic image array with a 3-D wavelet transform and coding scheme. Our compression algorithm and bitstream syntax are designed to ensure that a local view rendering of the environment requires only a partial bitstream, thereby eliminating the need to decompress the entire compressed bitstream before rendering. By exploiting the ladder-like structure of the wavelet lifting scheme, the progressive inverse wavelet synthesis (PIWS) algorithm is proposed to maximally reduce the computational cost of selective data accesses on such wavelet compressed datasets. Experimental results show that the 3-D wavelet coder achieves high-compression performance. With the PIWS algorithm, a 3-D environment can be rendered in real time from a compressed dataset.
Brechet, Laurent; Lucas, Marie-Françoise; Doncarli, Christian; Farina, Dario
2007-12-01
We propose a novel scheme for signal compression based on the discrete wavelet packet transform (DWPT) decomposition. The mother wavelet and the basis of wavelet packets were optimized and the wavelet coefficients were encoded with a modified version of the embedded zerotree algorithm. This signal-dependent compression scheme was designed by a two-step process. The first (internal optimization) was the best basis selection that was performed for a given mother wavelet. For this purpose, three additive cost functions were applied and compared. The second (external optimization) was the selection of the mother wavelet based on the minimal distortion of the decoded signal given a fixed compression ratio. The mother wavelet was parameterized in the multiresolution analysis framework by the scaling filter, which is sufficient to define the entire decomposition in the orthogonal case. The method was tested on two sets of ten electromyographic (EMG) and ten electrocardiographic (ECG) signals that were compressed with compression ratios in the range of 50%-90%. For 90% compression ratio of EMG (ECG) signals, the percent residual difference after compression decreased from (mean +/- SD) 48.6 +/- 9.9% (21.5 +/- 8.4%) with discrete wavelet transform (DWT) using the wavelet leading to poorest performance to 28.4 +/- 3.0% (6.7 +/- 1.9%) with DWPT, with optimal basis selection and wavelet optimization. In conclusion, best basis selection and optimization of the mother wavelet through parameterization led to substantial improvement of performance in signal compression with respect to DWT and random selection of the mother wavelet. The method provides an adaptive approach for optimal signal representation for compression and can thus be applied to any type of biomedical signal.
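The "additive cost function" used in best-basis selection can be illustrated with the classic Coifman-Wickerhauser entropy cost: a basis whose coefficients concentrate the signal energy scores lower. A minimal sketch (one possible cost; not necessarily among the three compared in the paper):

```python
import math

def entropy_cost(coeffs):
    """Coifman-Wickerhauser entropy-type additive cost: lower when the
    signal energy is concentrated in few coefficients."""
    energy = sum(c * c for c in coeffs)
    cost = 0.0
    for c in coeffs:
        p = (c * c) / energy if energy > 0 else 0.0
        if p > 0:
            cost -= p * math.log(p)
    return cost
```

In best-basis selection, a parent node of the wavelet packet tree is kept whenever its cost does not exceed the sum of its children's best costs; additivity is what makes this bottom-up comparison valid.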
Improved Compression of Wavelet-Transformed Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Klimesh, Matthew
2005-01-01
A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
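A minimal sketch of the run-length parse and the order-k exponential-Golomb code described above (the adaptive parameter selection from the article is omitted):

```python
def runlength_parse(seq):
    """Parse a sequence into (zero-run-length, nonzero value) pairs.
    A real coder would also flush any trailing run of zeros."""
    pairs, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs

def exp_golomb(n, k=0):
    """Order-k exponential-Golomb codeword (as a bit string) for n >= 0:
    a unary length prefix of zeros followed by the binary value of n + 2**k."""
    v = n + (1 << k)
    return "0" * (v.bit_length() - 1 - k) + format(v, "b")
```

For k = 0 the codewords are 1, 010, 011, 00100, ...: small run lengths, which dominate after wavelet quantization, get the shortest codes.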
Discrete directional wavelet bases for image compression
NASA Astrophysics Data System (ADS)
Dragotti, Pier L.; Velisavljevic, Vladan; Vetterli, Martin; Beferull-Lozano, Baltasar
2003-06-01
The application of the wavelet transform in image processing is most frequently based on a separable construction. Lines and columns in an image are treated independently and the basis functions are simply products of the corresponding one-dimensional functions. Such a method preserves simplicity in design and computation, but is not capable of properly capturing all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted, which leads to a multi-directional frame. This transform can be applied in many areas such as denoising, non-linear approximation, and compression. The results on non-linear approximation and denoising show very interesting gains compared to the standard two-dimensional analysis.
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
Coresident sensor fusion and compression using the wavelet transform
Yocky, D.A.
1996-03-11
Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach combines spatial/spectral information at multiple scales to create a fused image. This can be done with either an ad hoc or a model-based approach. We compare results from commercial "fusion" software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.
Wavelet-based compression of pathological images for telemedicine applications
NASA Astrophysics Data System (ADS)
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.
Myoelectric signal compression using zero-trees of wavelet coefficients.
Norris, Jason A; Englehart, Kevin B; Lovely, Dennis F
2003-11-01
Recent progress in the diagnostic use of the myoelectric signal for neuromuscular diseases, coupled with increasing interest in telemedicine applications, mandates the need for an effective compression technique. The efficacy of the embedded zero-tree wavelet compression algorithm is examined with respect to some important analysis parameters (the length of the analysis segment and wavelet type) and measurement conditions (muscle type and contraction type). It is shown that compression performance improves with segment length, and that good choices of wavelet type include the Meyer wavelet and the fifth-order biorthogonal wavelet. The effects of different muscle sites and contraction types on compression performance are less conclusive. A comparison of a number of lossy compression techniques has revealed that the EZW algorithm exhibits superior performance to a hard-thresholding wavelet approach, but falls short of adaptive differential pulse code modulation. The bit prioritization capability of the EZW algorithm allows one to specify the compression factor online, making it an appealing technique for streaming data applications, as often encountered in telemedicine.
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
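The role of a context modeler, turning previously coded data into probability estimates for the entropy coder, can be sketched with simple adaptive per-context counts (an illustration of the principle only, not ICER-3D's model):

```python
class ContextModel:
    """Adaptive per-context symbol counts, Laplace-smoothed, yielding the
    probability estimates an entropy coder (e.g. arithmetic) would consume."""

    def __init__(self, symbols):
        self.symbols = list(symbols)
        self.counts = {}

    def _ctx(self, context):
        # Initialize each new context with a count of 1 per symbol
        return self.counts.setdefault(context, {s: 1 for s in self.symbols})

    def prob(self, context, symbol):
        c = self._ctx(context)
        return c[symbol] / sum(c.values())

    def update(self, context, symbol):
        self._ctx(context)[symbol] += 1
```

The context here would be something like "how many neighboring coefficients are already significant"; better-matched contexts give sharper probability estimates and hence fewer bits.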
Improved zerotree coding algorithm for wavelet image compression
NASA Astrophysics Data System (ADS)
Chen, Jun; Li, Yunsong; Wu, Chengke
2000-12-01
A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform, with lower memory requirements and higher compression performance, is presented in this paper. Most state-of-the-art wavelet-based image compression techniques, such as EZW and SPIHT, exploit the dependency between the subbands of a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest-frequency subband. A new listless significance-map coding algorithm based on the minimum zerotree is also proposed, using new flag maps and a scanning order different from the LZC of Wen-Kuo Lin et al. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.
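Zerotree coders of this family scan the coefficients against a halving sequence of thresholds. A sketch of that successive-approximation schedule (illustrative only, not the LMZC implementation):

```python
def threshold_schedule(coeffs):
    """Successive-approximation thresholds T, T/2, ..., 1, where T is the
    largest power of two not exceeding the peak coefficient magnitude."""
    peak = max(abs(c) for c in coeffs)
    t = 1
    while t * 2 <= peak:
        t *= 2
    schedule = []
    while t >= 1:
        schedule.append(t)
        t //= 2
    return schedule
```

Each pass emits a significance map against the current threshold and refines previously significant coefficients, which is what makes the resulting bitstream embedded.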
Three-dimensional compression scheme based on wavelet transform
NASA Astrophysics Data System (ADS)
Yang, Wu; Xu, Hui; Liao, Mengyang
1999-03-01
In this paper, a 3D compression method based on the separable wavelet transform is discussed in detail. The most commonly used digital modalities generate multiple slices in a single examination, which are normally anatomically or physiologically correlated to each other. 3D wavelet compression methods can achieve more efficient compression by exploiting the correlation between slices. The first step is based on a separable 3D wavelet transform. Considering the difference between pixel distances within a slice and those between slices, one biorthogonal Antonini filter bank is applied within 2D slices and a second biorthogonal Villa4 filter bank in the slice direction. Then, the S+P transform is applied to the low-resolution wavelet components, and an optimal quantizer is presented after analysis of the quantization noise. We use an optimal bit allocation algorithm which, instead of eliminating the coefficients of high-resolution components in smooth areas, minimizes the system reconstruction distortion at a given bit rate. Finally, to maintain high coding efficiency and adapt to the different properties of each component, a comprehensive entropy coding method is proposed, in which arithmetic coding is applied to high-resolution components and adaptive Huffman coding to low-resolution components. Our experimental results are evaluated by several image measures, and our 3D wavelet compression scheme proves to be more efficient than 2D wavelet compression.
MR image compression using a wavelet transform coding algorithm.
Angelidis, P A
1994-01-01
We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.
The effects of wavelet compression on Digital Elevation Models (DEMs)
Oimoen, M.J.
2004-01-01
This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6) and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
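The sparsification experiment above can be sketched as follows. This is a hedged illustration only: the paper applies DAUB6 filters to 2-D elevation grids, while this sketch uses a 1-D Haar analysis on a short profile to keep the code compact; the idea of zeroing all but the largest coefficients by magnitude is the same.

```python
# Sketch: multilevel Haar decomposition, zero the smallest coefficients,
# reconstruct, and observe the residual error (a stand-in for the paper's
# DAUB6-on-DEM experiment).

def haar_forward(x):
    """Full multilevel Haar decomposition; len(x) must be a power of 2."""
    x = list(x)
    details = []
    while len(x) > 1:
        approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
        detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
        details = detail + details        # coarser-level details end up first
        x = approx
    return x + details                    # [approximation, details...]

def haar_inverse(c):
    approx, pos = c[:1], 1
    while pos < len(c):
        detail = c[pos:pos + len(approx)]
        approx = [a + s * d for a, d in zip(approx, detail) for s in (1, -1)]
        pos += len(detail)
    return approx

def sparsify(coeffs, keep_fraction):
    """Zero all but the largest keep_fraction of coefficients by magnitude."""
    n_keep = max(1, int(len(coeffs) * keep_fraction))
    cutoff = sorted(abs(c) for c in coeffs)[-n_keep]
    return [c if abs(c) >= cutoff else 0.0 for c in coeffs]

profile = [float(i) for i in range(16)]   # a smooth elevation ramp
coeffs = haar_forward(profile)
reconstructed = haar_inverse(sparsify(coeffs, 0.25))
```

Because the profile is smooth, most detail coefficients are tiny, so zeroing them leaves only a bounded residual in the reconstruction; on real DEMs that residual propagates into derived slope and aspect, which is exactly what the paper measures.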
Wavelets for approximate Fourier transform and data compression
NASA Astrophysics Data System (ADS)
Guo, Haitao
This dissertation has two parts. In the first part, we develop a wavelet-based fast approximate Fourier transform algorithm. The second part is devoted to the development of several wavelet-based data compression techniques for image and seismic data. We propose an algorithm that uses the discrete wavelet transform (DWT) as a tool to compute the discrete Fourier transform (DFT). The classical Cooley-Tukey FFT is shown to be a special case of the proposed algorithm when the wavelets in use are trivial. The main advantage of our algorithm is that the good time and frequency localization of wavelets can be exploited to approximate the Fourier transform for many classes of signals, resulting in much less computation. Thus the new algorithm provides an efficient complexity-versus-accuracy tradeoff. When approximations are allowed, under certain sparsity conditions, the algorithm can achieve linear complexity, i.e. O(N). The proposed algorithm also has built-in noise reduction capability. For waveform and image compression, we propose a novel scheme using the recently developed Burrows-Wheeler transform (BWT). We show that the discrete wavelet transform (DWT) should be used before the Burrows-Wheeler transform to improve the compression performance for many natural signals and images. We demonstrate that the simple concatenation of DWT and BWT coding performs comparably to embedded zerotree wavelet (EZW) compression for images. Various techniques that significantly improve the performance of our compression scheme are also discussed. Phase information is crucial for seismic data processing. However, traditional compression schemes do not pay special attention to preserving the phase of seismic data, resulting in the loss of critical information. We propose a lossy compression method that preserves the phase as much as possible. The method is based on a self-adjusting wavelet transform that adapts to the locations of the significant signal components
Image compression with embedded wavelet coding via vector quantization
NASA Astrophysics Data System (ADS)
Katsavounidis, Ioannis; Kuo, C.-C. Jay
1995-09-01
In this research, we improve Shapiro's EZW algorithm by performing vector quantization (VQ) of the wavelet transform coefficients. The proposed VQ scheme uses different vector dimensions for different wavelet subbands, and also different codebook sizes, so that more bits are assigned to those subbands that have more energy. Another feature is that the vector codebooks used are tree-structured to maintain the embedding property. Finally, the energy of these vectors is used as a prediction parameter between different scales to improve the performance. We investigate the performance of the proposed method together with the 7-9 tap biorthogonal wavelet basis, and look into ways to incorporate lossless compression techniques.
Parkale, Yuvraj V; Nalbalwar, Sanjay L
2016-01-01
Compressed sensing is a novel signal compression technique in which the signal is compressed while it is sensed. The compressed signal is recovered from only a few observations compared to conventional Shannon-Nyquist sampling, thus reducing the storage requirements. In this study, we have proposed 1-D discrete wavelet transform (DWT) based sensing matrices for speech signal compression. The present study investigates the performance of different DWT-based sensing matrices drawn from the Daubechies, Coiflets, Symlets, Battle, Beylkin and Vaidyanathan wavelet families. First, we have proposed the Daubechies wavelet family based sensing matrices. The experimental results indicate that the db10 wavelet based sensing matrix exhibits better performance than the other Daubechies wavelet based sensing matrices. Second, we have proposed the Coiflets wavelet family based sensing matrices. The results show that the coif5 wavelet based sensing matrix exhibits the best performance. Third, we have proposed sensing matrices based on the Symlets wavelet family. The results indicate that the sym9 wavelet based sensing matrix demonstrates lower reconstruction time and lower relative error, and thus performs well compared to the other Symlets wavelet based sensing matrices. Next, we have proposed DWT-based sensing matrices using the Battle, Beylkin and Vaidyanathan wavelet families. The Beylkin wavelet based sensing matrix demonstrates lower reconstruction time and relative error, and thus performs well compared to the Battle and Vaidyanathan wavelet based sensing matrices. Further, an attempt was made to find the best of the proposed DWT-based sensing matrices, and the results reveal that the sym9 wavelet based sensing matrix shows the best performance among all the proposed matrices. Subsequently, the study demonstrates the performance analysis of the sym9 wavelet based sensing matrix and state-of-the-art random
Wavelet/scalar quantization compression standard for fingerprint images
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
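As a rough illustration of the uniform scalar quantization mentioned above, here is a dead-zone uniform quantizer of the general kind applied per subband. This is a hedged sketch, not the WSQ specification itself: the bin width `q` and dead-zone factor `z` are hypothetical parameters, not the FBI's values.

```python
# Sketch of a uniform scalar quantizer with a dead zone around zero.

def quantize(c, q, z=1.2):
    """Map a subband coefficient c to a signed integer bin index."""
    if abs(c) < z * q / 2:
        return 0                          # dead zone: quantized to zero
    sign = 1 if c > 0 else -1
    return sign * (int((abs(c) - z * q / 2) / q) + 1)

def dequantize(index, q, z=1.2):
    """Reconstruct at the midpoint of the selected bin."""
    if index == 0:
        return 0.0
    sign = 1 if index > 0 else -1
    return sign * (z * q / 2 + (abs(index) - 0.5) * q)
```

Outside the dead zone the reconstruction error is bounded by q/2, which is why the per-subband step size acts as a direct quality control in such schemes.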
Wavelet-based image compression using fixed residual value
NASA Astrophysics Data System (ADS)
Muzaffar, Tanzeem; Choi, Tae-Sun
2000-12-01
Wavelet-based compression is gaining popularity due to its promising compaction properties at low bit rates. The zerotree wavelet image coding scheme efficiently exploits the multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to the original EZW coder. In contrast to the four symbols of the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for given data. The subordinate pass of EZW is eliminated and replaced with fixed-residual-value transmission for easy implementation. This modification simplifies the coding technique, speeds up the process, and retains the property of embeddedness.
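For context, the dominant pass of the baseline four-symbol EZW that this coder extends can be sketched as follows. A heap-style array stands in for the subband quadtree, and for brevity this sketch scans every node rather than skipping descendants of zerotree roots as a real coder would.

```python
# Baseline EZW symbols: P (significant positive), N (significant negative),
# Z (isolated zero: insignificant but with a significant descendant),
# T (zerotree root). Children of node i sit at indices 2i+1 and 2i+2.

def has_significant_descendant(c, i, threshold):
    for child in (2 * i + 1, 2 * i + 2):
        if child < len(c) and (abs(c[child]) >= threshold
                               or has_significant_descendant(c, child, threshold)):
            return True
    return False

def dominant_pass(c, threshold):
    """Assign one of the four baseline symbols to every coefficient."""
    symbols = []
    for i, v in enumerate(c):
        if abs(v) >= threshold:
            symbols.append('P' if v > 0 else 'N')
        elif has_significant_descendant(c, i, threshold):
            symbols.append('Z')
        else:
            symbols.append('T')
    return symbols
```

The eight-symbol variant of the paper refines exactly this symbol alphabet so that common patterns cost fewer bits.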
Ives, R.W.; Kiser, C.; Magotra, N.
1998-10-27
While many synthetic aperture radar (SAR) applications use only detected imagery, dramatic improvements in resolution and the employment of algorithms requiring complex-valued SAR imagery suggest the need for compression of complex data. Here, we investigate the benefits of using complex-valued wavelets on complex SAR imagery in the embedded zerotree wavelet compression algorithm, compared to applying real-valued wavelets separately to the real and imaginary components. This compression is applied at low ratios (4:1-12:1) for high-fidelity output. The complex spatial correlation metric is used to numerically evaluate quality. Numerical results are tabulated, and original and decompressed imagery are presented, along with correlation maps, to allow visual comparison.
Low-complexity wavelet filter design for image compression
NASA Technical Reports Server (NTRS)
Majani, E.
1994-01-01
Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.
Efficient wavelet compression for images of arbitrary size
NASA Astrophysics Data System (ADS)
Murao, Kohei
1996-10-01
Wavelet compression for images of arbitrary size is discussed. So far, wavelet compression has dealt with images of restricted size, such as 2^n × 2^m. I propose practical and efficient methods of wavelet transform for images of arbitrary size, i.e. a method of extension to F · 2^m and a method of extension to even numbers at each decomposition. I applied them to 'Mona Lisa' with a size of 137 × 180. The two methods showed almost the same calculation time for both encoding and decoding. The encoding times were 0.83 s and 0.79 s, and the decoding times were 0.60 s and 0.57 s, respectively. The difference in bit rates was attributed to the difference in the interpolation of the edge data of the image.
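The "extension to even numbers at each decomposition" idea can be sketched as a per-level padding rule. The mirror-the-edge choice below is one hypothetical interpolation of the edge data, not necessarily the one used in the paper.

```python
def extend_to_even(row):
    """Mirror the last sample when the length is odd, so the next
    decomposition level sees an even number of samples."""
    return row + row[-1:] if len(row) % 2 else row

def decomposition_sizes(n, levels):
    """Trace the padded size seen at each decomposition level."""
    sizes = []
    for _ in range(levels):
        if n % 2:
            n += 1                        # one extra (mirrored) sample
        sizes.append(n)
        n //= 2
    return sizes
```

For the 137-sample direction of the 137 × 180 image, four levels see sizes 138, 70, 36 and 18, so only a handful of padding samples are ever introduced.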
Oriented wavelet transform for image compression and denoising.
Chappelier, Vivien; Guillemot, Christine
2006-10-01
In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.
Edge-preserving image compression using adaptive lifting wavelet transform
NASA Astrophysics Data System (ADS)
Zhang, Libao; Qiu, Bingchang
2015-07-01
In this paper, a novel 2-D adaptive lifting wavelet transform is presented. The proposed algorithm is designed to further reduce the high-frequency energy of the wavelet transform, improve image compression efficiency, and preserve the edges and textures of original images more effectively. A new optional direction set, covering the surrounding integer pixels and sub-pixels, is designed; hence, our algorithm adapts far better to the orientation features in local image blocks. To obtain computational efficiency and good coding performance, the complete process of the 2-D adaptive lifting wavelet transform is introduced and implemented. Compared with the traditional lifting-based wavelet transform, adaptive directional lifting, and the direction-adaptive discrete wavelet transform, the new structure reduces the high-frequency wavelet coefficients more effectively, and the texture structures of the reconstructed images are more refined and clear than those of the other methods. The peak signal-to-noise ratio and the subjective quality of the reconstructed images are significantly improved.
Three-dimensional image compression with integer wavelet transforms.
Bilgin, A; Zweig, G; Marcellin, M W
2000-04-10
A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.
Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization
NASA Astrophysics Data System (ADS)
Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.
2004-02-01
Compression of ultrasonic images, which are always corrupted by noise, can cause over-smoothing or considerable distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT) that can also suppress the noise without losing much flaw-relevant information is presented in this work. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple scheme named DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as dominated by noise, dominated by signal, or affected by both. Better denoising can be realized by selectively thresholding the DWCs. In 'local quantization', different quantization strategies are applied to the DWCs according to their classification and the local image properties. This allocates the bit rate to the DWCs more efficiently and thus achieves a higher compression rate. Meanwhile, the decompressed image shows noise suppression with flaw characteristics preserved.
Medical image compression algorithm based on wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin
2005-02-01
With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision and vast data volume. An optimized compression algorithm can alleviate restrictions on transmission speed and data storage. This paper describes the characteristics of the human vision system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then brings forward an optimized compression algorithm based on wavelet zerotree coding. After the image is smoothed, it is decomposed with Haar filters. Then the wavelet coefficients are quantized adaptively. Therefore, we can maximize compression efficiency and achieve better subjective visual image quality. This algorithm can be applied to image transmission in telemedicine. In the end, we examined the feasibility of this algorithm with an image transmission experiment over a network.
Wavelet-based Image Compression using Subband Threshold
NASA Astrophysics Data System (ADS)
Muzaffar, Tanzeem; Choi, Tae-Sun
2002-11-01
Wavelet-based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on a modification of the original EZW coding. In this lossy technique, we try to discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight in every level. This minimum-weight subband in each level, which contributes least to image reconstruction, undergoes a thresholding process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to observe the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.
Object-based wavelet compression using coefficient selection
NASA Astrophysics Data System (ADS)
Zhao, Lifeng; Kassim, Ashraf A.
1998-12-01
In this paper, we present a novel approach to code image regions of arbitrary shapes. The proposed algorithm combines a coefficient selection scheme with traditional wavelet compression for coding arbitrary regions and uses a shape adaptive embedded zerotree wavelet coding (SA-EZW) to quantize the selected coefficients. Since the shape information is implicitly encoded by the SA-EZW, our decoder can reconstruct the arbitrary region without separate shape coding. This makes the algorithm simple to implement and avoids the problem of contour coding. Our algorithm also provides a sufficient framework to address content-based scalability and improved coding efficiency as described by MPEG-4.
Wavelet-based pavement image compression and noise reduction
NASA Astrophysics Data System (ADS)
Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen
2005-08-01
For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.
Improved successive refinement for wavelet-based embedded image compression
NASA Astrophysics Data System (ADS)
Creusere, Charles D.
1999-10-01
In this paper we consider a new form of successive coefficient refinement which can be used in conjunction with embedded compression algorithms like Shapiro's EZW (Embedded Zerotree Wavelet) and Said & Pearlman's SPIHT (Set Partitioning in Hierarchical Trees). Using the conventional refinement process, the approximation of a coefficient that was earlier determined to be significant is refined by transmitting one of two symbols--an `up' symbol if the actual coefficient value is in the top half of the current uncertainty interval or a `down' symbol if it is in the bottom half. In the modified scheme developed here, we transmit one of three symbols instead--`up', `down', or `exact'. The new `exact' symbol tells the decoder that its current approximation of a wavelet coefficient is `exact' to the level of precision desired. By applying this scheme in earlier work to lossless embedded compression (also called lossy/lossless compression), we achieved significant reductions in encoder and decoder execution times with no adverse impact on compression efficiency. These excellent results for lossless systems have inspired us to adapt this refinement approach to lossy embedded compression. Unfortunately, the results we have achieved thus far for lossy compression are not as good.
Adaptive wavelet transform algorithm for lossy image compression
NASA Astrophysics Data System (ADS)
Pogrebnyak, Oleksiy B.; Ramirez, Pablo M.; Acevedo Mosqueda, Marco Antonio
2004-11-01
A new algorithm for a locally adaptive wavelet transform based on a modified lifting scheme is presented. It adapts the wavelet high-pass filter at the prediction stage to the local image data activity. The proposed algorithm uses the generalized framework of the lifting scheme, which permits one to easily obtain different wavelet filter coefficients in the case of (~N, N) lifting. By changing the wavelet filter order and the control parameters, one can obtain the desired filter frequency response. It is proposed to perform hard switching between different wavelet lifting filter outputs according to the local data activity estimate. The proposed adaptive transform possesses good energy compaction. The designed algorithm was tested on different images. The obtained simulation results show that the visual and quantitative quality of the restored images is high. The distortions are smaller in the vicinity of high-spatial-activity details compared to the non-adaptive transform, which introduces ringing artifacts. The designed algorithm can be used for lossy image compression and in noise suppression applications.
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
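The mean-subtraction method can be sketched as below. The subband is represented as a hypothetical list of spatial planes (row-major 2-D lists), one plane per spectral slice of a spatially low-pass subband; the data layout is an assumption made only for illustration.

```python
def mean_subtract(subband):
    """Subtract each spatial plane's mean; return the zero-mean planes plus
    the means, which must be kept so the decoder can add them back."""
    planes, means = [], []
    for plane in subband:
        vals = [v for row in plane for v in row]
        m = sum(vals) / len(vals)
        means.append(m)
        planes.append([[v - m for v in row] for row in plane])
    return planes, means

def sign_magnitude(x):
    """(sign bit, magnitude) pair, the form used before encoding."""
    return (0 if x >= 0 else 1, abs(x))
```

After this step each spatial plane is zero-mean, which is the property that makes it well suited to coders designed for 2-D image subbands.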
Compression of digital hologram for three-dimensional object using Wavelet-Bandelets transform.
Bang, Le Thanh; Ali, Zulfiqar; Quang, Pham Duc; Park, Jae-Hyeung; Kim, Nam
2011-04-25
In transformation-based compression algorithms for digital holograms of three-dimensional objects, the balance between compression ratio and normalized root mean square (NRMS) error is always the core of algorithm development. The wavelet transform method efficiently achieves a high compression ratio, but its NRMS error is also high. In order to solve this issue, we propose a hologram compression method using the Wavelet-Bandelets transform. Our simulation and experimental results show that the Wavelet-Bandelets method has a higher compression ratio than wavelet methods and all the other methods investigated in this paper, while still maintaining a low NRMS error.
2D image compression using concurrent wavelet transform
NASA Astrophysics Data System (ADS)
Talukder, Kamrul Hasan; Harada, Koichi
2011-10-01
In recent years the wavelet transform (WT) has been widely used for image compression. As the WT is a sequential process, much time is required to transform the data. Here a new approach is presented in which the transformation process is executed concurrently. As a result the procedure runs faster and the transformation time is reduced. Multiple threads are used for the row and column transformations, and the communication among threads is managed effectively. Thus, the transformation time has been reduced significantly. The proposed system provides a better compression ratio and PSNR value with lower time complexity.
Integer wavelet transform for embedded lossy to lossless image compression.
Reichel, J; Menegaz, G; Nadenau, M J; Kunt, M
2001-01-01
The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One of the possible implementations of the DWT is the lifting scheme (LS). Because perfect reconstruction is granted by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. This is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise. The noise is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input. It accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
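A minimal integer lifting scheme (the reversible LeGall 5/3 wavelet, used here only as a familiar stand-in example of an IWT) makes the rounding operations visible: each floor division below is a rounding step of the kind modeled as additive noise above. Periodic boundary handling via Python's index wrapping is a simplification of this sketch.

```python
def lift_53_forward(x):
    """Forward 5/3 integer lifting; len(x) must be even.
    Returns (approximation, detail) integer lists."""
    n = len(x)
    h = n // 2
    # Prediction step: each floor division is a rounding (noise) source.
    d = [x[2 * i + 1] - (x[2 * i] + x[(2 * i + 2) % n]) // 2 for i in range(h)]
    # Update step: another rounding source.
    a = [x[2 * i] + (d[i - 1] + d[i] + 2) // 4 for i in range(h)]
    return a, d

def lift_53_inverse(a, d):
    """Exactly inverts the forward transform (perfect reconstruction)."""
    h = len(a)
    n = 2 * h
    x = [0] * n
    for i in range(h):                    # undo the update step
        x[2 * i] = a[i] - (d[i - 1] + d[i] + 2) // 4
    for i in range(h):                    # undo the prediction step
        x[2 * i + 1] = d[i] + (x[2 * i] + x[(2 * i + 2) % n]) // 2
    return x
```

Each lifting step is individually invertible even with rounding, which is why lossless decoding is exact; the rounding only matters, as the paper models, when coefficients are quantized for lossy decoding.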
Electroencephalographic compression based on modulated filter banks and wavelet transform.
Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2011-01-01
Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals: decomposition by modulated filter banks and by the wavelet packet transform, seeking the best compression, the best quality, and the most efficient real-time implementation. Because of the specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming at higher quality. The results show that the filter-bank compressor performs better than the transform methods, and that quantization adapted to the dynamic range significantly enhances quality.
Kalyanpur, A; Neklesa, V P; Taylor, C R; Daftary, A R; Brink, J A
2000-12-01
To determine acceptable levels of JPEG (Joint Photographic Experts Group) and wavelet compression for teleradiologic transmission of body computed tomographic (CT) images, a digital test pattern (Society of Motion Picture and Television Engineers, 512 x 512 matrix) was transmitted after JPEG or wavelet compression by using point-to-point and Web-based teleradiology, respectively. Lossless, 10:1 lossy, and 20:1 lossy ratios were tested. Images were evaluated for high- and low-contrast resolution, sensitivity to small signal differences, and misregistration artifacts. Three independent observers who were blinded to the compression scheme evaluated these image quality measures in 20 clinical cases with similar levels of compression. High-contrast resolution was not diminished with any tested level of JPEG or wavelet compression. With JPEG compression, low-contrast resolution was not lost with 10:1 lossy compression but was lost at 3% modulation with 20:1 lossy compression. With wavelet compression, there was loss of 1% modulation with 10:1 lossy compression and loss of 5% modulation with 20:1 lossy compression. Sensitivity to small signal differences (5% and 95% of the maximal signal) diminished only with 20:1 lossy wavelet compression. With 10:1 lossy compression, misregistration artifacts were mild and were equivalent with JPEG and wavelet compression. Qualitative clinical evaluation supported these findings. Lossy 10:1 compression is suitable for on-call electronic transmission of body CT images as long as the original images are subsequently reviewed.
Invertible update-then-predict integer lifting wavelet for lossless image compression
NASA Astrophysics Data System (ADS)
Chen, Dong; Li, Yanjuan; Zhang, Haiying; Gao, Wenpeng
2017-01-01
This paper presents a new wavelet family for lossless image compression obtained by re-factoring the channel representation of the update-then-predict lifting wavelet, introduced by Claypoole, Davis, Sweldens and Baraniuk, into lifting steps. We name the new wavelet family invertible update-then-predict integer lifting wavelets (IUPILWs for short). To build IUPILWs, we investigate several central issues: normalization, invertibility, integer structure, and scaling lifting. The channel representation of the previous update-then-predict lifting wavelet with normalization is given and its invertibility is discussed first. To guarantee invertibility, we re-factor the channel representation into lifting steps. The integer structure and scaling lifting of the invertible update-then-predict wavelet are then given and the IUPILWs are built. Experiments show that, compared with the integer lifting structures of the 5/3 wavelet, the 9/7 wavelet, and iDTT, IUPILWs yield lower bit rates for lossless image compression.
Adaptive wavelet transform algorithm for image compression applications
NASA Astrophysics Data System (ADS)
Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo
2003-11-01
A new algorithm for a locally adaptive wavelet transform is presented. The algorithm implements the integer-to-integer lifting scheme and adapts the wavelet function at the prediction stage to the local image data activity. The proposed algorithm is based on the generalized lifting framework, which makes it easy to obtain different wavelet coefficients in the case of (N~, N) lifting. We propose hard switching between the (2,4) and (4,4) lifting filter outputs according to an estimate of the local data activity: when the activity is high, i.e., in the vicinity of edges, the (4,4) lifting is performed; otherwise, in the plain areas, the (2,4) decomposition coefficients are calculated. The calculations are rather simple, which permits implementation of the designed algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect restoration of the processed data and good energy compaction. The designed algorithm was tested on different images and can be used for lossless image/signal compression.
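The switching idea above can be sketched in a few lines. The activity estimate below uses only the even samples, so a decoder can reproduce the same per-sample filter choice without side information; the threshold value, the particular filter taps, and the boundary handling are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of hard switching between a 2-tap and a 4-tap prediction filter based
# on local data activity, in the spirit of the (2,4)/(4,4) scheme above. The
# decision is computed from the even samples only, so the inverse transform
# can mirror it exactly. Threshold and taps are illustrative assumptions.
import numpy as np

def _p2(s, i):                       # short 2-tap predictor (neighbor average)
    return 0.5 * (s[i] + s[min(i + 1, len(s) - 1)])

def _p4(s, i):                       # longer 4-tap predictor
    n = len(s)
    return (-s[max(i - 1, 0)] + 9 * s[i]
            + 9 * s[min(i + 1, n - 1)] - s[min(i + 2, n - 1)]) / 16.0

def predict(s, i, thresh):
    activity = abs(s[min(i + 1, len(s) - 1)] - s[i])   # local edge estimate
    return _p4(s, i) if activity > thresh else _p2(s, i)

def forward(x, thresh=8.0):
    s = np.asarray(x[0::2], dtype=float)
    d = np.asarray(x[1::2], dtype=float)
    d = np.array([d[i] - predict(s, i, thresh) for i in range(len(d))])
    return s, d

def inverse(s, d, thresh=8.0):
    d = np.array([d[i] + predict(s, i, thresh) for i in range(len(d))])
    x = np.empty(2 * len(d))
    x[0::2], x[1::2] = s, d
    return x

x = [10, 11, 10, 12, 90, 95, 92, 96]     # flat region followed by an "edge"
s, d = forward(x)
print(np.allclose(inverse(s, d), x))     # True: perfect reconstruction
```

Because the even samples are left untouched by the prediction step, the same activity estimate is available on both sides, which is what makes adaptation possible "without any overhead cost".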
Wavelet Compression of Satellite-Transmitted Digital Mammograms
NASA Technical Reports Server (NTRS)
Zheng, Yuan F.
2001-01-01
Breast cancer is one of the major causes of cancer death in women in the United States. The most effective way to treat breast cancer is to detect it at an early stage by screening patients periodically. Conventional film-screening mammography uses X-ray films, which are effective in detecting early abnormalities of the breast. Direct digital mammography has the potential to improve image quality and to take advantage of convenient storage, efficient transmission, powerful computer-aided diagnosis, etc. One effective alternative to direct digital imaging is secondary digitization of X-ray films. This technique may not provide as high an image quality as the direct digital approach, but it definitely has the other advantages inherent to digital images. One of them is the use of satellite transmission for transferring digital mammograms between a remote image-acquisition site and a central image-reading site. This technique can benefit a large population of women who reside in remote areas where major screening and diagnosing facilities are not available. The NASA-Lewis Research Center (LeRC), in collaboration with the Cleveland Clinic Foundation (CCF), has begun a pilot study to investigate the application of the Advanced Communications Technology Satellite (ACTS) network to telemammography. The bandwidth of the T1 transmission is limited (1.544 Mbps) while a mammographic image is huge, so transmitting a single mammogram takes a long time. For example, a mammogram of 4k by 4k pixels with 16 bits per pixel needs more than 4 minutes to transmit, and the four images of a typical screening exam would take more than 16 minutes. This is too long for convenient screening. Consequently, compression is necessary to make satellite transmission of mammographic images practical. The Wavelet Research Group of the Department of Electrical Engineering at The Ohio State University (OSU) participated in the LeRC-CCF collaboration by
Bradley, J.N.; Brislawn, C.M.
1992-04-11
This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.
Application of adaptive wavelet transforms via lifting in image data compression
NASA Astrophysics Data System (ADS)
Ye, Shujiang; Zhang, Ye; Liu, Baisen
2008-10-01
An adaptive wavelet transform via lifting is proposed, in which the update filter is selected according to the signal's character. Perfect reconstruction is possible without any overhead cost. To ensure the system's stability, the update step is placed before the prediction step in the adaptive lifting scheme. The adaptive wavelet transform via lifting benefits image compression because of its high stability, the small high-frequency coefficients it produces, and its perfect reconstruction. Using the adaptive lifting transform together with SPIHT, image compression is realized in this paper, and the results are satisfactory.
Noisy face recognition using compression-based joint wavelet-transform correlator
NASA Astrophysics Data System (ADS)
Widjaja, Joewono
2012-03-01
A new method for noisy face recognition that incorporates a wavelet filter into a compression-based joint transform correlator (JTC) is proposed. Simulation results show that the proposed method has an advantage over the conventional compression-based JTC: regardless of the contrast and noise level of the target, the wavelet filter can raise the recognition performance above that of the classical JTC, provided the compressed references have high contrast.
Medical image compression with embedded-wavelet transform
NASA Astrophysics Data System (ADS)
Cheng, Po-Yuen; Lin, Freddie S.; Jannson, Tomasz
1997-10-01
The need for effective medical image compression and transmission techniques continues to grow because of the huge volume of radiological images captured each year. The limited bandwidth and efficiency of current networking systems cannot meet this need. In response, Physical Optics Corporation devised an efficient medical image management system to significantly reduce the storage space and transmission bandwidth required for digitized medical images. The major functions of this system are: (1) compressing medical imagery, using a visual-lossless coder, to reduce the storage space required; (2) transmitting image data progressively, to use the transmission bandwidth efficiently; and (3) indexing medical imagery according to image characteristics, to enable automatic content-based retrieval. A novel scalable wavelet-based image coder was developed to implement the system. In addition to its high compression, this approach is scalable in both image size and quality. The system provides dramatic solutions to many medical image handling problems. One application is the efficient storage and fast transmission of medical images over picture archiving and communication systems. In addition to reducing costs, the potential impact on improving the quality and responsiveness of health care delivery in the US is significant.
Remotely sensed image compression based on wavelet transform
NASA Technical Reports Server (NTRS)
Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.
1995-01-01
In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal-to-noise ratio) and classification capability.
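The run-length stage described above pays off because scanning (e.g. along a Hilbert curve) turns the many near-zero transform coefficients into long runs. A minimal run-length coder sketch (generic, not the authors' exact bitstream format):

```python
# Minimal run-length encoder/decoder of the kind applied after Hilbert-curve
# scanning, when long runs of identical (typically zero) coefficient values
# appear. Generic sketch, not the authors' exact format; the Huffman stage
# that follows in the paper is omitted here.

def rle_encode(seq):
    """Collapse seq into (value, run_length) pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

def rle_decode(pairs):
    return [v for v, n in pairs for _ in range(n)]

scanned = [0, 0, 0, 5, 0, 0, -3, -3, 0, 0, 0, 0]
pairs = rle_encode(scanned)
print(pairs)                          # [(0, 3), (5, 1), (0, 2), (-3, 2), (0, 4)]
print(rle_decode(pairs) == scanned)   # True: lossless round trip
```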
Lossless image compression with projection-based and adaptive reversible integer wavelet transforms.
Deever, Aaron T; Hemami, Sheila S
2003-01-01
Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients. It yields optimal fixed prediction steps for lifting-based wavelet transforms and unifies many wavelet-based lossless image compression results found in the literature. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance. It also provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
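The projection idea of predicting one coefficient as a linear combination of others can be illustrated with a plain least-squares fit. The synthetic signal and two-neighbor context below are assumptions for illustration, a toy stand-in for the paper's transform-domain contexts rather than its actual derivation.

```python
# Toy illustration of predicting a coefficient as a linear combination of
# neighboring coefficients via least squares, in the spirit of the projection
# technique (synthetic data and a two-neighbor causal context are assumptions,
# not the paper's actual transform or context model).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 256)
coeffs = np.sin(t) + 0.1 * rng.standard_normal(256)   # correlated "coefficients"

# Context: the two causal neighbors of each coefficient.
S = np.column_stack([coeffs[:-2], coeffs[1:-1]])
target = coeffs[2:]

# Optimal fixed prediction weights = orthogonal projection onto span(S).
w, *_ = np.linalg.lstsq(S, target, rcond=None)
residual = target - S @ w

# The projection can only shrink the energy left to encode (w = 0 is always
# an admissible choice, so the fitted residual is never worse).
print(np.sum(residual ** 2) <= np.sum(target ** 2))   # True
```

Lower residual energy loosely tracks the lower first-order entropy the paper targets; the paper additionally constrains the prediction to keep the lifting transform reversible.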
Property study of integer wavelet transform lossless compression coding based on lifting scheme
NASA Astrophysics Data System (ADS)
Xie, Cheng Jun; Yan, Su; Xiang, Yang
2006-01-01
This paper studies integer wavelet transforms combined with SPIHT and arithmetic coding for lossless image compression, and their improvement. The experimental results show that, once the vanishing-moment order of the low-pass filter is fixed, the improvement in compression is not evident, provided the integer wavelet transform is invertible and the energy compaction property increases monotonically with the transform scale. For the same wavelet basis, the vanishing-moment order of the low-pass filter is more important than that of the high-pass filter in improving image compression. Lifting-based integer wavelet transform lossless compression coding is unrelated to the entropy of the image; the compression effect depends on the energy compaction property of the image transform.
Landin, Cristina Juarez; Reyes, Magally Martinez; Martin, Anabelem Soberanes; Rosas, Rosa Maria Valdovinos; Ramirez, Jose Luis Sanchez; Ponomaryov, Volodymyr; Soto, Maria Dolores Torres
2011-01-01
An analysis of different wavelets, including novel wavelet families based on atomic functions, is presented, especially for ultrasound (US) and mammography (MG) image compression. In this way we are able to determine which type of wavelet filter works better for compressing such images. Key properties (frequency response, approximation order, projection cosine, and Riesz bounds) were determined and compared for the classic W9/7 wavelet used in standard JPEG2000, Daubechies8, and Symlet8, as well as for the complex Kravchenko-Rvachev wavelets ψ(t) based on the atomic functions up(t), fup(2)(t), and eup(t). The comparison shows significantly better performance of the novel wavelets, as justified by experiments and by the study of the key properties.
MAXAD distortion minimization for wavelet compression of remote sensing data
NASA Astrophysics Data System (ADS)
Alecu, Alin; Munteanu, Adrian; Schelkens, Peter; Cornelis, Jan P.; Dewitte, Steven
2001-12-01
In the context of compressing high-resolution multi-spectral satellite image data consisting of radiances and top-of-the-atmosphere fluxes, it is vital that image calibration characteristics (luminance, radiance) be preserved within certain limits under lossy image compression. Although existing compression schemes (SPIHT, JPEG2000, SQP) give good results as far as minimizing the global PSNR error is concerned, they fail to guarantee a maximum local error. We therefore introduce a new image compression scheme that guarantees a MAXAD distortion, defined as the maximum absolute difference between original and reconstructed pixel values. In terms of the Lagrangian optimization problem, this translates into minimizing the rate for a given MAXAD distortion. Our approach thus uses the l-infinite distortion measure, applied to the lifting-scheme implementation of the 9-7 floating-point Cohen-Daubechies-Feauveau (CDF) filter. Scalar quantizers, optimal in the D-R sense, are derived for every subband by solving a global optimization problem that guarantees a user-defined MAXAD. The optimization problem has been defined and solved for the 9-7 filter, and we show that our approach is valid and may be applied to any finite wavelet filter synthesized via lifting. The experimental assessment of our codec shows that our technique provides excellent results in applications such as remote sensing, in which reconstructing image calibration characteristics within a tolerable local error (MAXAD) is perceived as crucial compared to obtaining an acceptable global error (PSNR), as is the case for existing quantizer design techniques.
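Unlike PSNR, the MAXAD criterion bounds the worst single-pixel error and can be checked directly. A toy sketch, assuming integer pixel data and a plain uniform quantizer (the paper instead derives per-subband optimal quantizers in the wavelet domain):

```python
# Sketch of the MAXAD (l-infinite) criterion: a uniform mid-tread quantizer
# with odd step 2*T + 1 applied to integer data guarantees MAXAD <= T. This
# only illustrates the kind of guarantee the paper's subband-optimized
# quantizers provide; it is not the paper's design.
import numpy as np

def maxad(orig, recon):
    """Maximum absolute difference between original and reconstructed pixels."""
    return int(np.max(np.abs(orig.astype(np.int64) - recon.astype(np.int64))))

T = 3                                  # user-defined local error tolerance
step = 2 * T + 1
rng = np.random.default_rng(1)
orig = rng.integers(0, 256, size=(8, 8))
recon = (np.round(orig / step) * step).astype(np.int64)

# Rounding to the nearest multiple of an odd step leaves an integer error
# of at most floor(step / 2) = T on every pixel.
print(maxad(orig, recon) <= T)         # True: worst-case error is bounded
```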
All-optical image processing and compression based on Haar wavelet transform.
Parca, Giorgia; Teixeira, Pedro; Teixeira, Antonio
2013-04-20
Fast data processing and compression methods based on the wavelet transform are fundamental tools in real-time 2D data/image analysis, enabling high-definition applications and redundant-data reduction. The need for information processing at high data rates motivates efforts to exploit the speed and parallelism of light for data analysis and compression. Among the several schemes for optical wavelet transform implementation, the Haar transform offers simple design and fast computation, and it can easily be implemented by optical planar interferometry. We present an all-optical scheme based on an asymmetric-coupler network for fast image processing and compression in the optical domain. The implementation of the Haar wavelet transform through a 3D passive structure is supported by theoretical formulation and simulation results. The design and optimization of the asymmetrical-coupler 3D network are reported, and the Haar wavelet transform, including compression, was achieved, demonstrating the feasibility of our approach.
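The Haar transform's appeal, pairwise sums and differences only, is easy to see in a digital stand-in for the optical scheme. One analysis/synthesis level of an unnormalized 2D Haar transform (an illustration of the underlying math, not the optical implementation):

```python
# One analysis/synthesis level of an (unnormalized) 2D Haar transform:
# pairwise averages and differences along rows, then along columns. This is a
# digital stand-in for the optical scheme above; compression follows by
# discarding or coarsely quantizing the small detail (difference) bands.
import numpy as np

def haar2d(block):
    b = np.asarray(block, dtype=float)
    lo = (b[:, 0::2] + b[:, 1::2]) / 2          # row averages
    hi = (b[:, 0::2] - b[:, 1::2]) / 2          # row differences
    r = np.hstack([lo, hi])
    lo2 = (r[0::2, :] + r[1::2, :]) / 2         # column averages
    hi2 = (r[0::2, :] - r[1::2, :]) / 2         # column differences
    return np.vstack([lo2, hi2])                # [[LL, HL], [LH, HH]] layout

def ihaar2d(c):
    c = np.asarray(c, dtype=float)
    h, w = c.shape[0] // 2, c.shape[1] // 2
    r = np.empty_like(c)
    r[0::2, :] = c[:h, :] + c[h:, :]            # undo column step
    r[1::2, :] = c[:h, :] - c[h:, :]
    out = np.empty_like(c)
    out[:, 0::2] = r[:, :w] + r[:, w:]          # undo row step
    out[:, 1::2] = r[:, :w] - r[:, w:]
    return out

img = np.array([[4, 4, 8, 8], [4, 4, 8, 8], [2, 2, 6, 6], [2, 2, 6, 6]])
c = haar2d(img)
print(np.allclose(ihaar2d(c), img))             # True: perfect reconstruction
```

Because every output is a sum or difference of exactly two inputs, the transform maps naturally onto the 2x2 couplers of the planar interferometric network described in the abstract.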
An improved image compression algorithm using binary space partition scheme and geometric wavelets.
Chopra, Garima; Pal, A K
2011-01-01
Geometric wavelet is a recent development in the field of multivariate nonlinear piecewise polynomials approximation. The present study improves the geometric wavelet (GW) image coding method by using the slope intercept representation of the straight line in the binary space partition scheme. The performance of the proposed algorithm is compared with the wavelet transform-based compression methods such as the embedded zerotree wavelet (EZW), the set partitioning in hierarchical trees (SPIHT) and the embedded block coding with optimized truncation (EBCOT), and other recently developed "sparse geometric representation" based compression algorithms. The proposed image compression algorithm outperforms the EZW, the Bandelets and the GW algorithm. The presented algorithm reports a gain of 0.22 dB over the GW method at the compression ratio of 64 for the Cameraman test image.
NASA Astrophysics Data System (ADS)
DeVore, Ronald A.; Lucier, Bradley J.
The subject of `wavelets' is expanding at such a tremendous rate that it is impossible to give, within these few pages, a complete introduction to all aspects of its theory. We hope, however, to allow the reader to become sufficiently acquainted with the subject to understand, in part, the enthusiasm of its proponents toward its potential application to various numerical problems. Furthermore, we hope that our exposition can guide the reader who wishes to make more serious excursions into the subject. Our viewpoint is biased by our experience in approximation theory and data compression; we warn the reader that there are other viewpoints that are either not represented here or discussed only briefly. For example, orthogonal wavelets were developed primarily in the context of signal processing, an application upon which we touch only indirectly. However, there are several good expositions (e.g. Daubechies (1990) and Rioul and Vetterli (1991)) of this application. A discussion of wavelet decompositions in the context of Littlewood-Paley theory can be found in the monograph of Frazier et al. (1991). We shall also not attempt to give a complete discussion of the history of wavelets. Historical accounts can be found in the book of Meyer (1990) and the introduction of the article of Daubechies (1990). We shall try to give sufficient historical commentary in the course of our presentation to provide some feeling for the subject's development.
The wavelet/scalar quantization compression standard for digital fingerprint images
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
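The central mechanism named in the standard's title, uniform scalar quantization of subband coefficients, can be sketched briefly. The step sizes below are arbitrary illustrations, not the FBI standard's actual bin widths or its adaptive step selection:

```python
# Sketch of uniform scalar quantization of wavelet subband coefficients, the
# core mechanism of a wavelet/scalar quantization scheme. Step sizes here are
# illustrative only, not the standard's actual bin widths.
import numpy as np

def quantize(c, step):
    """Map coefficients to integer symbols (these are what gets entropy coded)."""
    return np.round(np.asarray(c, dtype=float) / step).astype(int)

def dequantize(q, step):
    """Reconstruct coefficient values at the bin centers."""
    return q * step

coeffs = np.array([12.7, -3.2, 0.4, 45.0, -18.9])
step = 4.0                                   # coarser step => fewer symbols
q = quantize(coeffs, step)
rec = dequantize(q, step)

# Uniform quantization bounds every coefficient error by half a step.
print(np.max(np.abs(rec - coeffs)) <= step / 2)   # True
```

Choosing a larger step for visually unimportant subbands is what trades fidelity for the roughly 20:1 compression ratios quoted above.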
An efficient coding algorithm for the compression of ECG signals using the wavelet transform.
Rajoub, Bashar A
2002-04-01
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) using a variable-length code based on run-length encoding to compress the significance map and 2) using a direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
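The two steps above, picking a threshold from a desired energy packing efficiency and building the binary significance map, can be sketched as follows. The synthetic coefficient vector and the 99% target are illustrative assumptions, not the paper's data or group structure:

```python
# Sketch of thresholding by a desired energy packing efficiency (EPE) and of
# generating the binary significance map. Synthetic coefficients and the 99%
# EPE target are illustrative assumptions.
import numpy as np

def epe_threshold(coeffs, epe):
    """Smallest magnitude threshold retaining at least `epe` of the energy."""
    mags = np.sort(np.abs(coeffs))[::-1]          # magnitudes, descending
    cum = np.cumsum(mags ** 2)
    k = np.searchsorted(cum, epe * cum[-1])       # first index reaching target
    return mags[k]

rng = np.random.default_rng(2)
coeffs = rng.laplace(scale=1.0, size=512)         # sparse-ish, like DWT details
t = epe_threshold(coeffs, 0.99)
sig_map = (np.abs(coeffs) >= t).astype(np.uint8)  # 1 = significant, 0 = not

kept = np.sum(coeffs[sig_map == 1] ** 2) / np.sum(coeffs ** 2)
print(kept >= 0.99)                               # True: energy target met
```

The significance map is highly compressible by the run-length code mentioned above, while only the coefficients flagged `1` need to be transmitted in full precision.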
Wavelet sparse transform optimization in image reconstruction based on compressed sensing
NASA Astrophysics Data System (ADS)
Ziran, Wei; Huachuang, Wang; Jianlin, Zhang
2017-06-01
High image sparsity is very important for improving the accuracy of compressed-sensing image reconstruction, and the wavelet transform can make an image markedly sparse. This paper presents an optimization method for the wavelet sparse transform in compressed-sensing image reconstruction: we design a restraining matrix to optimize the transform. First, the wavelet coefficients are obtained by a wavelet transform of the original signal data; these coefficients tend to decrease gradually in magnitude. The restraining matrix, used as part of the sparsifying transform, suppresses the small coefficients so as to make the wavelet coefficients sparser. When the sampling rate is between 0.15 and 0.45, simulation results show that the quality improvement of the reconstructed image is greatest, with the peak signal-to-noise ratio (PSNR) increased by about 0.5 dB to 1 dB. The improvement in reconstruction accuracy is especially noticeable for fingerprint texture images, which to some extent compensates for the low accuracy of wavelet-based compressed-sensing reconstruction of texture images.
A method of image compression based on lifting wavelet transform and modified SPIHT
NASA Astrophysics Data System (ADS)
Lv, Shiliang; Wang, Xiaoqian; Liu, Jinguo
2016-11-01
In order to improve the efficiency of remote sensing image storage and transmission, we present an image compression method based on the lifting scheme and a modified SPIHT (set partitioning in hierarchical trees), implemented as an FPGA design that improves SPIHT and enhances wavelet-transform image compression. The lifting discrete wavelet transform (DWT) architecture was selected to exploit the correlation among image pixels. In addition, we provide a study of the storage elements required for the wavelet coefficients. We present results for the Lena image using the 3/5 lifting scheme.
Compressed Sensing MR Image Reconstruction Exploiting TGV and Wavelet Sparsity
Du, Huiqian; Han, Yu; Mei, Wenbo
2014-01-01
Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and reference MR images in the pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet-transform and gradient domains. The idea is attractive because it requires neither the estimation of contrast changes nor multiple motion compensations. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). The fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also either decrease the sampling ratio or improve the reconstruction quality. PMID:25371704
Inter-view wavelet compression of light fields with disparity-compensated lifting
NASA Astrophysics Data System (ADS)
Chang, Chuo-Ling; Zhu, Xiaoqing; Ramanathan, Prashant; Girod, Bernd
2003-06-01
We propose a novel approach that uses disparity-compensated lifting for wavelet compression of light fields. Disparity compensation is incorporated into the lifting structure for the transform across the views to solve the irreversibility limitation in previous wavelet coding schemes. With this approach, we obtain the benefits of wavelet coding, such as scalability in all dimensions, as well as superior compression performance. For light fields of an object, shape adaptation is adopted to improve the compression efficiency and visual quality of reconstructed images. In this work we extend the scheme to handle light fields with arbitrary camera arrangements. A view-sequencing algorithm is developed to encode the images. Experimental results show that the proposed scheme outperforms existing light field compression techniques in terms of compression efficiency and visual quality of the reconstructed views.
NASA Astrophysics Data System (ADS)
Angkura, Navin; Aramvith, Supavadee; Siddhichai, Supakorn
2007-09-01
JPEG has been a widely recognized image compression standard for many years. Nevertheless, it faces its own limitations, as compressed image quality degrades significantly at lower bit rates. This limitation is addressed by JPEG2000, which also tends to replace JPEG, especially in storage and retrieval applications. To index and retrieve compressed-domain images from a database efficiently and practically, several image features can be extracted directly in the compressed domain without fully decompressing the JPEG2000 images. JPEG2000 utilizes the wavelet transform, which is widely used to analyze and describe the texture patterns of an image. Another advantage of the wavelet transform is that textures can be analyzed at multiple resolutions, with directional texture information classified into the directional subbands: the HL subband carries horizontal frequency information, the LH subband vertical frequency information, and the HH subband diagonal frequency information. Nevertheless, many wavelet-based image retrieval approaches make poor use of this directional subband information for classifying the directional texture patterns of retrieved images. This paper proposes a novel image retrieval technique in the JPEG2000 compressed domain that uses the image significance map to compute an image context from which an image index is constructed. Experimental results indicate that the proposed method can effectively differentiate and categorize images with different directional texture information. In addition, integrating the proposed features with the wavelet autocorrelogram also improved retrieval performance, measured by ANMRR (Average Normalized Modified Retrieval Rank), compared to other known methods.
A new multi-resolution hybrid wavelet for analysis and image compression
NASA Astrophysics Data System (ADS)
Kekre, Hemant B.; Sarode, Tanuja K.; Vig, Rekha
2015-12-01
Because most current image- and video-related applications require higher image resolutions and higher data rates during transmission, better compression techniques are constantly being sought. This paper proposes a new and unique hybrid wavelet technique for image analysis and compression. The proposed hybrid wavelet combines the properties of existing orthogonal transforms in the most desirable way and also provides multi-resolution analysis. These wavelets have the unique property that they can be generated in various sizes and types by using different component transforms and varying the number of components at each level of resolution. The hybrid wavelets have been applied to standard images such as Lena (512 × 512) and Cameraman (256 × 256), and the resulting peak signal-to-noise ratio (PSNR) values are compared with those obtained using standard existing compression techniques. Considerable improvement in PSNR, as much as 5.95 dB over the standard methods, has been observed, showing that the hybrid wavelet gives better compression. Images of various sizes, such as Scenery (200 × 200), Fruit (375 × 375), and Barbara (112 × 224), have also been compressed using these wavelets to demonstrate their use for different sizes and shapes.
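PSNR, the quality measure quoted in dB throughout these abstracts, has a simple definition worth making concrete. A minimal sketch for 8-bit images (the sample arrays are arbitrary illustrations):

```python
# PSNR (peak signal-to-noise ratio) in a minimal form for 8-bit images.
# Higher is better; identical images give infinite PSNR.
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

orig = np.zeros((8, 8))
recon = np.ones((8, 8))              # every pixel off by 1 => MSE = 1
print(round(psnr(orig, recon), 2))   # 48.13
```

A gain of 5.95 dB, as reported above, corresponds to roughly a 4x reduction in mean squared error at the same bit rate.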
Dong, Xiao-Ling; Sun, Xu-Dong
2013-12-01
The feasibility of determining the reducing sugar content of potato granules by a wavelet compression algorithm combined with near-infrared spectroscopy was explored. The spectra of 250 potato granule samples were recorded with a Fourier transform near-infrared spectrometer in the range of 4000-10000 cm-1. Three parameters (the number of vanishing moments, the number of wavelet coefficients, and the number of principal component factors) were optimized, with optimal values of 10, 100, and 20, respectively. The original spectra of 1501 spectral variables were transformed into 100 wavelet coefficients using a Daubechies (db) wavelet function. Partial least squares (PLS) calibration models were developed from the 1501 spectral variables and from the 100 wavelet coefficients. Sixty-two unknown samples in the prediction set were used to evaluate the performance of the PLS models. By comparison, the optimal result was obtained by wavelet compression combined with the PLS calibration model: the correlation coefficient of prediction and the root mean square error of prediction were 0.98 and 0.181%, respectively. The experimental results show that wavelet compression combined with near-infrared spectroscopy reduces the dimensionality of the spectral data while scarcely losing effective information in the determination of reducing sugar in potato granules; the PLS model is simplified and its predictive ability improved.
Research on application for integer wavelet transform for lossless compression of medical image
NASA Astrophysics Data System (ADS)
Zhou, Zude; Li, Quan; Long, Quan
2003-09-01
This paper proposes an approach that uses the lifting scheme to construct an integer wavelet transform for lossless compression of images. Its application to medical images, a software simulation of the corresponding algorithm, and experimental results are then presented. Experiments show that the method improves the compression ratio and resolution.
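The integer-to-integer property that makes lifting suitable for lossless coding can be illustrated with the simplest lifting example, the integer Haar (S-) transform. This is a generic sketch, not the transform used in the paper:

```python
def haar_int_fwd(x):
    """Integer Haar (S-) transform via lifting on an even-length list.
    Predict step: difference; update step: floor average. Integer in, integer out."""
    s, d = [], []
    for a, b in zip(x[0::2], x[1::2]):
        diff = b - a            # predict: detail coefficient
        avg = a + (diff >> 1)   # update: floor of the pair average
        s.append(avg)
        d.append(diff)
    return s, d

def haar_int_inv(s, d):
    """Exactly invert haar_int_fwd, recovering the original integers."""
    x = []
    for avg, diff in zip(s, d):
        a = avg - (diff >> 1)
        b = diff + a
        x += [a, b]
    return x
```

Because both steps use only integer adds and arithmetic shifts, the round-trip is exact, which is what "lossless" means here: no rounding error is ever introduced.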
Best wavelet packet basis for joint image deblurring-denoising and compression
NASA Astrophysics Data System (ADS)
Dherete, Pierre; Durand, Sylvain; Froment, Jacques; Rouge, Bernard
2003-01-01
We propose a unified mathematical framework to deblur, denoise and compress natural images. Images are decomposed in a wavelet packet basis adapted both to the deblurring filter and to the denoising process. Effective denoising is performed by thresholding small wavelet packet coefficients, while deblurring is obtained by multiplying the coefficients by a deconvolution kernel. The representation is compressed by quantizing the remaining coefficients and coding the values with a context-based entropy coder. We present examples of such treatments on a satellite imaging chain. The results show a significant improvement over performing the treatments separately with an up-to-date compression approach.
A lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work, a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that the proposed scheme achieves an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images obtained with the proposed scheme are evaluated quantitatively and compared with those of the Huffman coding algorithm.
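The hybrid-lossless principle, a lossy coder plus an exactly coded residual, can be sketched generically. Here plain scalar quantization stands in for the wavelet-fractal stage, and the residual is returned raw rather than Huffman-coded:

```python
def lossy_encode(x, step=8):
    """Coarse scalar quantization stands in for the wavelet-fractal coder."""
    return [round(v / step) for v in x]

def lossy_decode(q, step=8):
    return [v * step for v in q]

def hybrid_encode(x, step=8):
    """Lossy code plus residual; the residual would be entropy-coded in practice."""
    q = lossy_encode(x, step)
    residual = [o - r for o, r in zip(x, lossy_decode(q, step))]
    return q, residual

def hybrid_decode(q, residual, step=8):
    """Adding the residual back makes the scheme exactly lossless."""
    return [r + e for r, e in zip(lossy_decode(q, step), residual)]
```

Since reconstruction is bit-exact, the error is zero and PSNR is infinite, which is the sense of "infinite PSNR" in the abstract; the compression gain comes from the residual being small and cheap to entropy-code.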
Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding
NASA Astrophysics Data System (ADS)
Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz
1997-10-01
An efficient image compression technique, intended especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data set, with the threshold for each coefficient evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning and arithmetic coding are applied for efficient lossless data coding. The presented method is less complex than the most effective EZW-based techniques yet achieves comparable compression efficiency: it matches SPIHT for MR image compression, is slightly better for CT images, and is significantly better for US images. Its compression efficiency is thus competitive with the best algorithms published in the literature across diverse classes of medical images.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostically relevant information of the medical image at a high compression ratio. A wavelet transform is first applied to the image. The lowest-frequency subband of wavelet coefficients is compressed losslessly, while each of the high-frequency subbands is compressed with an optimized vector quantization using variable block sizes. In the novel vector quantization method, the local fractal dimension (LFD) is used to analyze the local complexity of each wavelet-coefficient subband, and an optimal quadtree method then partitions each subband into sub-blocks of several sizes. A modified K-means approach based on an energy function is used in the codebook training phase, and vector quantization coding is finally applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000 and a fractal coding approach were chosen as reference algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between compression ratio and visual image quality. PMID:23049544
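The codebook-training step can be illustrated with plain k-means vector quantization of block vectors. Note this sketch uses deterministic initialization and squared-error distortion, not the paper's energy-function modification or LFD/quadtree partitioning:

```python
def kmeans_codebook(vectors, k, iters=20):
    """Train a VQ codebook with plain k-means (the paper uses an
    energy-function-modified variant on quadtree-sized blocks)."""
    codebook = [tuple(v) for v in vectors[:k]]  # deterministic init for this sketch
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:  # assignment step: nearest codeword by squared error
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
            groups[j].append(v)
        for j, g in enumerate(groups):  # update step: centroid of each cell
            if g:
                codebook[j] = tuple(sum(col) / len(g) for col in zip(*g))
    return codebook

def vq_encode(vectors, codebook):
    """Replace each block vector by the index of its nearest codeword."""
    return [min(range(len(codebook)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, codebook[c])))
            for v in vectors]
```

The encoder then transmits only the indices plus the codebook, which is where the compression comes from: each block costs log2(k) bits instead of one value per pixel.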
JPEG2000 vs. full frame wavelet packet compression for smart card medical records.
Azpiroz-Leehan, Joaquín; Lerallut, Jean-François
2006-01-01
This paper compares different compression methods in the context of electronic health records for the newer generation of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high compression ratios (33:1 and 50:1). Results show that the full-frame method outperforms the JPEG2000 standard both qualitatively and quantitatively.
Generalized B-spline subdivision-surface wavelets for geometry compression.
Bertram, Martin; Duchaineau, Mark A; Hamann, Bernd; Joy, Kenneth I
2004-01-01
We present a new construction of lifted biorthogonal wavelets on surfaces of arbitrary two-manifold topology for compression and multiresolution representation. Our method combines three approaches: subdivision surfaces of arbitrary topology, B-spline wavelets, and the lifting scheme for biorthogonal wavelet construction. The simple building blocks of our wavelet transform are local lifting operations performed on polygonal meshes with subdivision hierarchy. Starting with a coarse, irregular polyhedral base mesh, our transform creates a subdivision hierarchy of meshes converging to a smooth limit surface. At every subdivision level, geometric detail can be expanded from wavelet coefficients and added to the surface. We present wavelet constructions for bilinear, bicubic, and biquintic B-Spline subdivision. While the bilinear and bicubic constructions perform well in numerical experiments, the biquintic construction turns out to be unstable. For lossless compression, our transform can be computed in integer arithmetic, mapping integer coordinates of control points to integer wavelet coefficients. Our approach provides a highly efficient and progressive representation for complex geometries of arbitrary topology.
Generalized b-spline subdivision-surface wavelets and lossless compression
Bertram, M; Duchaineau, M A; Hamann, B; Joy, K I
1999-11-24
We present a new construction of wavelets on arbitrary two-manifold topology for geometry compression. The constructed wavelets generalize symmetric tensor product wavelets with associated B-spline scaling functions to irregular polygonal base mesh domains. The wavelets and scaling functions are tensor products almost everywhere, except in the neighborhoods of some extraordinary points (points whose valence is not four) in the base mesh that defines the topology. The compression of arbitrary polygonal meshes representing isosurfaces of scalar-valued trivariate functions is a primary application. The main contribution of this paper is the generalization of lifted symmetric tensor product B-spline wavelets to two-manifold geometries. Surfaces composed of B-spline patches can easily be converted to this scheme. We present a lossless compression method for geometries with or without associated functions like color, texture, or normals. The new wavelet transform is highly efficient and can represent surfaces at any level of resolution with high degrees of continuity, except at a finite number of extraordinary points in the base mesh. In the neighborhoods of these points, detail can be added to the surface to approximate any degree of continuity.
Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.
Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre
2008-12-01
Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zero tree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted on this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform
NASA Astrophysics Data System (ADS)
Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra
2011-06-01
The compression and watermarking of 3D meshes are very important in many application areas, including digital cinematography, virtual reality and CAD design. However, most studies treat 3D watermarking and 3D compression independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach that combines 3D mesh compression with mesh watermarking, based on a wavelet transformation. The compression method used is decomposed into two stages, geometric encoding and topological encoding, and the proposed approach inserts a signature between them. First, the wavelet transformation is applied to the original mesh to obtain two components: the wavelet coefficients and a coarse mesh. Then geometric encoding is performed on these two components, and the coarse mesh is marked using a robust mesh watermarking scheme; embedding in the coarse mesh yields high robustness to several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the signature after compression of the marked mesh, and to transfer protected 3D meshes at minimum size. Experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.
Brennecke, R; Bürgel, U; Rippin, G; Post, F; Rupprecht, H J; Meyer, J
2001-02-01
Lossless or lossy compression of coronary angiogram data can reduce the enormous amounts of data generated by coronary angiographic imaging. The recent International Study of Angiographic Data Compression (ISAC) assessed the clinical viability of lossy Joint Photographic Expert Group (JPEG) compression but was unable to resolve two related questions: (A) the performance of lossless modes of compression in coronary angiography and (B) the performance of newer lossy wavelet algorithms. The present study supplies some of this information. The performance of several lossless image compression methods was measured on the same set of images as used in the ISAC study. To assess the relative image quality of lossy JPEG and wavelet compression, observers ranked the perceived quality of computer-generated coronary angiograms compressed with wavelet compression relative to the same images under JPEG compression. This ranking allowed the matching of compression ratios for wavelet compression with the clinically viable compression ratios for the JPEG method obtained in the ISAC study. The best lossless compression scheme (LOCO-I) offered a mean compression ratio (CR) of 3.80:1. The quality of images compressed with the lossy wavelet-based method at CR = 10:1 and 20:1 was comparable to JPEG compression at CR = 6:1 and 10:1, respectively. The study has shown that lossless compression can exceed the CR of 2:1 usually quoted. For lossy compression, the range of clinically viable compression ratios can probably be extended by 50 to 100% when applying wavelet compression algorithms instead of JPEG compression. These results can motivate a larger clinical study.
ECG signal compression by multi-iteration EZW coding for different wavelets and thresholds.
Tohumoglu, Gülay; Sezgin, K Erbil
2007-02-01
The modified embedded zerotree wavelet (MEZW) compression algorithm for one-dimensional signals is derived from Shapiro's EZW algorithm, originally developed for image compression. The proposed codec proves significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit-rate control and generates a bit stream progressive in quality or rate. The EZW and MEZW algorithms apply chosen threshold values or expressions to decide which transformed coefficients are significant; here two different threshold definitions, percentage and dyadic thresholds, are used and applied to different wavelet types in the biorthogonal and orthogonal classes. The MEZW and EZW results are compared quantitatively in terms of the compression ratio (CR) and the percentage root-mean-square difference (PRD). Experiments are carried out on selected records from the MIT-BIH arrhythmia database and on an original ECG signal. The MEZW algorithm shows a clear advantage over the traditional EZW in the CR achieved for a given PRD, and it gives better results for biorthogonal than for orthogonal wavelets.
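The two figures of merit used here, CR and PRD, have simple definitions that are worth stating explicitly; this is a generic sketch (note that some ECG studies use the baseline-eliminated PRD variant instead):

```python
import math

def prd(x, xr):
    """Percentage root-mean-square difference between an original signal x
    and its reconstruction xr (no baseline removal in this simple form)."""
    num = sum((a - b) ** 2 for a, b in zip(x, xr))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)

def compression_ratio(original_bits, compressed_bits):
    """CR as the ratio of raw size to coded size, e.g. 8.0 means 8:1."""
    return original_bits / compressed_bits
```

A codec comparison like the one above then reduces to plotting PRD against CR and asking which coder sits lower at each ratio.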
ECG compression using non-recursive wavelet transform with quality control
NASA Astrophysics Data System (ADS)
Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching
2016-09-01
While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, a wavelet SQ scheme must select a set of multilevel quantisers for each quantisation process, and because of the many-to-one nature of the mapping it is not conducive to reconstruction error control. To address this problem, this paper presents a single-variable-control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: first a genetic algorithm (GA) for high compression ratio (CR), then quadratic curve fitting for linear distortion control, and finally fuzzy decision-making to minimise the data-dependency effect and select the optimal SQ. Two databases, the Physikalisch-Technische Bundesanstalt (PTB) and Massachusetts Institute of Technology (MIT) arrhythmia databases, are used to evaluate quality-control performance. Experimental results show that the design method guarantees a high-compression-performance SQ scheme with statistically linear distortion. This property can be independent of the training data and facilitates rapid error control.
On Fourier and Wavelets: Representation, Approximation and Compression
2007-11-02
[Fragmentary slide text] Orthonormal bases (e.g. Fourier series, wavelet series); biorthogonal bases; overcomplete systems or frames. Note: no transforms, uncountable ... there is no good local orthogonal Fourier basis. Example of a basis: block-based Fourier series. Note: consequence of the Balian-Low theorem on OFDM ... Replacing (shift, modulation) by (shift, scale): then there exist "good" localized orthonormal bases, the wavelet bases Ψ_{m,n}(t) = 2^{m/2} ...
Alternative common bases and signal compression for wavelets application in chemometrics.
Forina, Michele; Oliveri, Paolo; Casale, Monica
2011-02-01
Representation or compression of data sets in the wavelet space is usually performed so as to retain the maximum variance of the original or pretreated data, as in compression by means of principal components. To represent a number of objects together in the wavelet space, a common basis is required, and this common basis is usually obtained from the variance spectrum or the variance wavelet tree. In this study, the use of alternative common bases is suggested, for both classification and regression problems. In the case of classification or class-modeling, the suggested common bases are based on the spectrum of the Fisher weights (a measure of the between-class to within-class variance ratio) or on the spectrum of the SIMCA discriminant weights. In the case of regression, the suggested common bases are obtained from the correlation spectrum (the correlation coefficients of the predictor variables with a response variable) or from the PLS (partial least squares regression) importance of the predictors (the product of the absolute value of a predictor's regression coefficient in the PLS model and its standard deviation). Other alternative strategies apply Gram-Schmidt supervised orthogonalization to the wavelet coefficients. The results indicate that, in both classification and regression, the information retained after compression in the wavelet space can be more useful than that retained with a common basis obtained from variance.
Wavelet-based low-delay ECG compression algorithm for continuous ECG transmission.
Kim, Byung S; Yoo, Sun K; Lee, Moon H
2006-01-01
The delay performance of compression algorithms is particularly important when time-critical data transmission is required. In this paper, we propose a wavelet-based electrocardiogram (ECG) compression algorithm with low delay for instantaneous, continuous ECG transmission, suitable for telecardiology applications over a wireless network. The proposed algorithm reduces the frame size as much as possible to achieve low delay while maintaining reconstructed signal quality. To attain both low delay and high quality, it employs waveform partitioning, adaptive frame size adjustment, wavelet compression, flexible bit allocation, and header compression. The performance of the proposed algorithm in terms of reconstructed signal quality, processing delay, and error resilience was evaluated using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) and Creighton University Ventricular Tachyarrhythmia (CU) databases and a code-division-multiple-access-based simulation model with mobile channel noise.
[Statistical study of the wavelet-based lossy medical image compression technique].
Puniene, Jūrate; Navickas, Ramūnas; Punys, Vytenis; Jurkevicius, Renaldas
2002-01-01
Medical digital images contain informational redundancy. Both the amount of memory needed for image storage and the transmission time can be reduced if image compression techniques are applied. The techniques divide into two groups: lossless (compression ratio not exceeding about 3:1) and lossy. The compression ratio of lossy techniques depends on the visibility of distortions; it is a variable parameter and can exceed 20:1. A compression study was performed to evaluate compression schemes based on the wavelet transform. The goal was to develop a set of recommendations for acceptable compression ratios for different medical image modalities: ultrasound cardiac images and X-ray angiographic images. The acceptable image quality after compression was evaluated by physicians, and statistical analysis of the evaluation results was used to form the set of recommendations.
Wavelet Approach to Data Analysis, Manipulation, Compression, and Communication
2007-08-07
[Fragmentary report text] ... applications to animation movie production. According to our colleague Tony DeRose of Pixar Animation Studios, recently acquired by Walt Disney ... rendering and animation, as well as wavelet-based digital image restoration. Papers published in peer-reviewed journals include: (18) Coherent line drawing (with H. Kang and S. Lee), ACM SIGGRAPH on Non-photorealistic Animation and Rendering; accepted for publication.
Faster techniques to evolve wavelet coefficients for better fingerprint image compression
NASA Astrophysics Data System (ADS)
Shanavaz, K. T.; Mythili, P.
2013-05-01
In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing the computational speed by 81.35%, the evolved coefficients performed much better than the coefficients reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, wavelets were evolved here with resized, cropped, resized-average and cropped-average images. Comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256 × 256 centre-cropped images took less than one fifth of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvements in average PSNR were also observed for other compression ratios (CR) and for degraded images. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder, and the coefficients also performed well on other fingerprint databases.
Method for low-light-level image compression based on wavelet transform
NASA Astrophysics Data System (ADS)
Sun, Shaoyuan; Zhang, Baomin; Wang, Liping; Bai, Lianfa
2001-10-01
Low-light-level (LLL) image communication has received increasing attention in the night-vision field as image communication has grown in importance, and LLL image compression is the key to LLL image wireless transmission. The LLL image differs from the common visible-light image and has its own special characteristics. For still image compression, we propose in this paper a wavelet-based image compression algorithm suited to LLL images. Because the information in an LLL image is significant, near-lossless compression is required. The LLL image is compressed with an improved EZW (Embedded Zerotree Wavelet) algorithm: the lowest-frequency subband data are encoded with DPCM (Differential Pulse Code Modulation), so that all information in the lowest-frequency subband is kept. Considering the characteristics of the HVS (Human Visual System) and of LLL images, edge contours in the high-frequency subband images are first detected with a template, and the high-frequency subband data are then encoded with the EZW algorithm; two guiding matrices are set to avoid redundant scanning and duplicate encoding of significant wavelet coefficients. Experimental results show that the decoded image quality is good and the encoding time is shorter than that of the original EZW algorithm.
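The DPCM coding of the lowest-frequency subband can be sketched generically as first-order prediction: transmit the first sample, then only the successive differences, which are typically small and cheap to entropy-code:

```python
def dpcm_encode(samples):
    """First-order DPCM: each output is the difference from the previous sample
    (the predictor is simply the last decoded value, initialized to 0)."""
    out = []
    prev = 0
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(diffs):
    """Invert dpcm_encode by accumulating the differences."""
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

# Smooth subband data -> small differences: [100, 2, -1, 4]
print(dpcm_encode([100, 102, 101, 105]))
```

Because the round trip is exact, this stage keeps all the information in the lowest-frequency subband, matching the near-lossless requirement stated above.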
Lamard, Mathieu; Daccache, Wissam; Cazuguel, Guy; Roux, Christian; Cochener, Beatrice
2005-01-01
In this paper we propose a content-based image retrieval method for diagnosis aid in diabetic retinopathy. We characterize images without extracting significant features, using histograms obtained from images compressed with the JPEG-2000 wavelet scheme to build signatures. Retrieval is carried out by calculating signature distances between the query and database images, using a weighted distance between histograms. Retrieval efficiency is given for different standard types of JPEG-2000 wavelets and for different values of the histogram weights. A classified diabetic retinopathy image database was built to allow algorithm testing. On this database, results are promising: retrieval efficiency exceeds 70% for some lesion types.
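The weighted distance between histogram signatures can be sketched as a weighted L1 form; this is a generic illustration, and the actual weighting and bin layout used in the paper may differ:

```python
def weighted_histogram_distance(h1, h2, weights):
    """Weighted L1 distance between two histogram signatures of equal length;
    larger weights make the corresponding bins count more in retrieval."""
    return sum(w * abs(a - b) for w, a, b in zip(weights, h1, h2))

# Bins with weight 0 are effectively ignored in the comparison.
d = weighted_histogram_distance([1, 0, 2], [0, 1, 2], [1.0, 2.0, 0.5])
```

Retrieval then ranks database images by this distance to the query signature, smallest first.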
2007-11-02
[Fragmentary text] ... to be approved in the near future. The main features of JPEG2000 are the use of the wavelet transform and an ROI (region of interest) method. It is expected that ... the wavelet transform is more effective than the Fourier transform for ultrasonic echo signal/image processing. Furthermore, the ROI method seems to be an appropriate ... compression method for medical images. The purpose of this paper is to investigate the effectiveness of the wavelet transform compared with the DCT (JPEG) and ...
Multispectral image compression technology based on dual-tree discrete wavelet transform
NASA Astrophysics Data System (ADS)
Fang, Zhijun; Luo, Guihua; Liu, Zhicheng; Gan, Yun; Lu, Yu
2009-10-01
The paper proposes a combination of the DCT and the dual-tree discrete wavelet transform (DDWT) to address the problems of multi-spectral image data storage and transmission. The proposed method removes spectral redundancy with a 1D DCT and spatial redundancy with a 2D dual-tree discrete wavelet transform, thereby achieving low distortion under high compression together with high-quality reconstruction of the multi-spectral image. In tests against the DCT, Haar and DDWT, the results show that the proposed method eliminates the blocking effect of the wavelet and produces smooth images with strong visual quality: the DDWT yields more prominent reconstruction quality with less noise.
[A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].
Zhao, An; Wu, Baoming
2006-12-01
This paper presents an ECG compression algorithm based on the wavelet transform and region-of-interest (ROI) coding. The algorithm realizes near-lossless coding within the ROI and quality-controllable lossy coding outside it. After mean removal from the original signal, a multi-layer orthogonal discrete wavelet transform is performed; simultaneously, feature extraction locates the ROI in the original signal. The coefficients related to the ROI are treated as important and kept. For the remaining coefficients, the energy loss in the transform domain is calculated according to the target PRDBE (percentage root-mean-square difference with baseline eliminated), and the threshold for coefficients outside the ROI is determined from this allowed energy loss. The important coefficients, i.e. the ROI coefficients and those outside the ROI that exceed the threshold, are passed to a linear quantizer. The map recording the positions of the important coefficients in the original wavelet coefficient vector is compressed with a run-length encoder, and Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT-BIH arrhythmia database are tested, and satisfactory results are obtained in terms of preservation of clinical information, quality and compression ratio.
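The run-length coding of the significance map can be sketched as follows; this is a generic illustration (bit/length pairs), not the authors' implementation:

```python
def rle_encode(bits):
    """Run-length encode a binary significance map as (bit, run_length) pairs.
    Long runs of zeros (insignificant positions) compress very well."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (bit, run_length) pairs back into the original map."""
    out = []
    for b, n in runs:
        out.extend([b] * n)
    return out
```

The resulting run lengths would then be fed to the Huffman stage mentioned above, since short runs of a few common lengths dominate.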
DSP accelerator for the wavelet compression/decompression of high- resolution images
Hunt, M.A.; Gleason, S.S.; Jatko, W.B.
1993-07-23
A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed; spatial/frequency regions are then automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed on a Sun SPARCstation 2 with a 1280 × 1024 8-bit display, 64 Mbyte of random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.
NASA Astrophysics Data System (ADS)
Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.
2013-08-01
We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, which separates low- and high-frequency components; higher-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients to produce a lower-variance signal. The latter is coded with Huffman encoding, yielding an optimal code length in terms of the average number of bits per sample. At the receiver, under the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding and the inverse discrete wavelet transform, yielding the estimated ECG signal. The proposed algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, covering various cardiac anomalies as well as normal ECG signals. The results, evaluated in terms of compression ratio and mean square error, are around 1:8 and 7%, respectively. Beyond the numerical evaluation, visual inspection confirms the high quality of the reconstructed ECG signal, with the different ECG waves recovered correctly.
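The final Huffman stage can be sketched with a generic textbook Huffman coder; the integer symbols below are illustrative placeholders for quantized prediction-filter outputs, not data from the paper:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table (symbol -> bit string) from a symbol sequence.
    Rarer symbols get longer codewords; the code is prefix-free."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate single-symbol alphabet
        return {next(iter(freq)): "0"}
    # Heap entries: (count, unique tiebreak, partial code table).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)  # two least frequent subtrees
        n2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]

def huffman_encode(symbols, table):
    return "".join(table[s] for s in symbols)
```

The lower-variance signal produced by the predictive filter concentrates probability mass on a few symbols, which is exactly when Huffman coding approaches its optimal average bits per sample.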
Optical image compression based on adaptive directional prediction discrete wavelet transform
NASA Astrophysics Data System (ADS)
Zhang, Libao; Qiu, Bingchang
2013-11-01
The traditional lifting wavelet transform cannot effectively reconstruct the nonhorizontal and nonvertical high-frequency information of an image. In this paper, we present a new image compression method based on adaptive directional prediction discrete wavelet transform (ADP-DWT). We first design a directional prediction model to obtain the optimal transform direction of the lifting wavelet. Then, we execute the directional lifting transform along the optimal transform direction. The edge and texture energy can be reduced in the nonhorizontal and nonvertical directions of the high-frequency sub-bands. Finally, the wavelet coefficients are coded with the set partitioning in hierarchical trees (SPIHT) algorithm. The new method holds the advantages of both adaptive directional lifting (ADL) and direction-adaptive discrete wavelet transform (DA-DWT), and the computational complexity is far lower than that in these methods. For the images containing regular and fine textures or edges, the coding preformance of ADP-DWT is better than that of ADL and DA-DWT.
Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin
1994-01-01
The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
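The embedding property this abstract relies on (any prefix of the bitstream decodes to a coarser version of the same image) comes from successive-approximation, bit-plane refinement. A minimal sketch of that refinement idea alone, omitting the zerotree significance map entirely; names are illustrative:

```python
def bitplane_refine(coeffs, planes):
    """Refine integer coefficients one bit-plane at a time.

    Returns the emitted bit stream and the reconstruction after the
    final plane. Truncating the stream at any plane boundary yields a
    coarser (lower-rate) reconstruction, which is the essence of an
    embedded code and of precise bit-rate control.
    """
    T = 1 << (planes - 1)          # threshold for the most significant plane
    recon = [0] * len(coeffs)
    stream = []
    for _ in range(planes):
        for i, c in enumerate(coeffs):
            bit = 1 if abs(c) - abs(recon[i]) >= T else 0
            stream.append(bit)
            if bit:                # refine toward the true coefficient
                recon[i] += T if c >= 0 else -T
        T >>= 1
    return stream, recon
```

EZW's contribution on top of this is coding the many insignificant coefficients cheaply via zerotrees; the KLT stage in the paper decorrelates the spectral bands before this spatial coding is applied.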
Review of digital fingerprint acquisition systems and wavelet compression
NASA Astrophysics Data System (ADS)
Hopper, Thomas
2003-04-01
Over the last decade many criminal justice agencies have replaced their fingerprint-card-based systems with electronic processing. We examine these new systems and find that image acquisition to support the identification application is consistently a challenge. Image capture and compression are widely dispersed and relatively new technologies within criminal justice information systems. Image quality assurance programs are just beginning to mature.
The wavelet transform and the suppression theory of binocular vision for stereo image compression
Reynolds, W.D. Jr; Kenyon, R.V.
1996-08-01
In this paper a method for the compression of stereo images is presented. The proposed scheme is a frequency domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, in which the subbands convey the necessary frequency domain information.
Comparison of wavelet and Karhunen-Loeve transforms in video compression applications
NASA Astrophysics Data System (ADS)
Musatenko, Yurij S.; Soloveyko, Olexandr M.; Kurashov, Vitalij N.
1999-12-01
In this paper we present a comparison of three advanced techniques for video compression: 3D Embedded Zerotree Wavelet (EZW) coding, the recently suggested Optimal Image Coding using the Karhunen-Loeve (KL) transform (OICKL), and a new video compression algorithm based on the 3D EZW coding scheme that uses the KL transform for frame decorrelation (3D-EZWKL). It is shown that the OICKL technique provides the best performance, and that combining the KL transform with the 3D-EZW coding scheme gives better results than the 3D-EZW algorithm alone.
Wavelet-based ECG compression by bit-field preserving and running length encoding.
Chan, Hsiao-Lung; Siao, You-Chen; Chen, Szi-Wen; Yu, Shih-Fan
2008-04-01
Efficient electrocardiogram (ECG) compression can reduce the payload of real-time ECG transmission as well as reduce the amount of data storage in long-term ECG recording. In this paper an ECG compression/decompression architecture based on the bit-field preserving (BFP) and running length encoding (RLE)/decoding schemes incorporated with the discrete wavelet transform (DWT) is proposed. Compared to complex and repetitive manipulations in the set partitioning in hierarchical tree (SPIHT) coding and the vector quantization (VQ), the proposed algorithm has advantages of simple manipulations and a feedforward structure that would be suitable to implement on very-large-scale integrated circuits and general microcontrollers.
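The two feedforward stages named in this abstract are simple to sketch. This is a hypothetical illustration of the general ideas, not the authors' implementation: bit-field preserving keeps only the high-order bit-field of each coefficient (a power-of-two quantizer), and run-length encoding then exploits the resulting runs of identical values.

```python
def bfp(coeff, drop_bits):
    """Bit-field preserving: keep the high-order bit-field of the
    magnitude, zeroing the low-order bits, with the sign restored."""
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) >> drop_bits) << drop_bits)

def rle_encode(values):
    """Run-length encode a sequence as (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

Both operations are single passes with no iterative search, which is what makes the scheme attractive for VLSI and microcontroller implementation compared with SPIHT's repeated set-partitioning passes.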
An Evaluation of the Effects of Wavelet Coefficient Quantization in Transform Based EEG Compression
Higgins, Garry; McGinley, Brian; Jones, Edward; Glavin, Martin
2016-01-01
In recent years, there has been a growing interest in the compression of electroencephalographic (EEG) signals for telemedical and ambulatory EEG applications. Data compression is an important factor in these applications as a means of reducing the amount of data required for transmission. Allowing for a carefully controlled level of loss in the compression method can provide significant gains in data compression. Quantization is an easy to implement method of data reduction that requires little power expenditure. However, it is a relatively simple, noninvertible operation, and reducing the bit-level too far can result in the loss of too much information to reproduce the original signal to an appropriate fidelity. Other lossy compression methods allow for finer control over compression parameters, generally relying on discarding signal components the coder deems insignificant. SPIHT is a state of the art signal compression method based on the Discrete Wavelet Transform (DWT), originally designed for images but highly regarded as a general means of data compression. This paper compares the approaches of compression by changing the quantization level of the DWT coefficients in SPIHT, with the standard thresholding method used in SPIHT, to evaluate the effects of each on EEG signals. The combination of increasing quantization and the use of SPIHT as an entropy encoder has been shown to provide significantly improved results over using the standard SPIHT algorithm alone. PMID:23668341
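The two lossy mechanisms compared in this abstract can be sketched side by side. A minimal illustration under assumed names (not the paper's code): uniform quantization of DWT coefficients versus SPIHT-style thresholding that discards insignificant coefficients but keeps the rest exactly.

```python
def uniform_quantize(coeffs, step):
    """Uniform mid-tread quantization: the non-invertible data-reduction step."""
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    return [q * step for q in indices]

def hard_threshold(coeffs, t):
    """Thresholding alternative: drop coefficients the coder deems
    insignificant (magnitude below t), preserve the rest unchanged."""
    return [c if abs(c) >= t else 0.0 for c in coeffs]

def mse(a, b):
    """Mean squared reconstruction error, the basic fidelity measure."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

Quantization spreads a bounded error (at most step/2 per coefficient) over every coefficient, while thresholding concentrates all the error in the discarded small coefficients; the paper's point is that the two interact differently with SPIHT's entropy coding.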
Speech coding and compression using wavelets and lateral inhibitory networks
NASA Astrophysics Data System (ADS)
Ricart, Richard
1990-12-01
The purpose of this thesis is to introduce the concept of lateral inhibition as a generalized technique for compressing time/frequency representations of electromagnetic and acoustical signals, particularly speech. This requires at least a rudimentary treatment of the theory of frames (which generalizes most commonly known time/frequency distributions), the biology of hearing, and digital signal processing. As such, this material, along with the interrelationships of these disparate subjects, is presented in a tutorial style. This may leave the mathematician longing for more rigor, the neurophysiological psychologist longing for more substantive support of the hypotheses presented, and the engineer longing for a reprieve from the theoretical barrage. Despite the problems that arise when trying to appeal to too wide an audience, this thesis should be a cogent analysis of the compression of time/frequency distributions via lateral inhibitory networks.
Adaptive lifting scheme of wavelet transforms for image compression
NASA Astrophysics Data System (ADS)
Wu, Yu; Wang, Guoyin; Nie, Neng
2001-03-01
To meet the demand for adaptive wavelet transforms via lifting, a three-stage lifting scheme (predict-update-adapt) is proposed in this paper as an extension of the common two-stage scheme (predict-update). The second stage updates, and the third performs adaptive prediction, so the scheme is an update-then-predict scheme that can detect jumps in the image from the updated data without requiring any additional side information. The first stage, a preliminary update, is the key to the scheme: its coefficient can be adjusted to adapt to the data and achieve a better result. In the adaptive predicting stage, we use symmetric prediction filters in smooth areas of the image and asymmetric prediction filters at the edges of jumps to reduce prediction errors. These filters are designed directly with a spatial-domain method. The inherent relationships between the coefficients of the first stage and those of the other stages are found and expressed as equations. The design result is thus a class of filters whose coefficients are no longer fixed. Simulation results of image coding with our scheme are good.
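The baseline two-stage predict-update lifting scheme that this paper extends can be shown in its simplest instance, the Haar wavelet realized via lifting. This is a textbook sketch for orientation, not the paper's adaptive scheme; function names are illustrative.

```python
def lift_forward(x):
    """Haar wavelet via lifting: split into even/odd samples,
    predict (d = odd - even), then update (s = even + d/2).
    Requires an even-length input; perfectly invertible."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for o, e in zip(odd, even)]        # predict stage
    s = [e + di / 2 for e, di in zip(even, d)]    # update stage
    return s, d

def lift_inverse(s, d):
    """Undo the lifting steps in reverse order with flipped signs."""
    even = [si - di / 2 for si, di in zip(s, d)]
    odd = [e + di for e, di in zip(even, d)]
    x = []
    for e, o in zip(even, odd):
        x += [e, o]
    return x
```

Because each lifting step is undone by simply reversing its sign and order, any modification of the predict filter (including the data-adaptive choice the paper proposes) keeps the transform invertible by construction.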
Evaluation of color-embedded wavelet image compression techniques
NASA Astrophysics Data System (ADS)
Saenz, Martha; Salama, Paul; Shen, Ke; Delp, Edward J., III
1998-12-01
Color embedded image compression is investigated by means of a set of core experiments that seek to evaluate the advantages of various color transformations, spatial orientation trees and the use of monochrome embedded coding schemes such as EZW and SPIHT. In order to take advantage of the interdependencies of the color components for a given color space, two new spatial orientation trees that relate frequency bands and color components are investigated.
Applications of wavelet-based compression to multidimensional earth science data
Bradley, J.N.; Brislawn, C.M.
1993-02-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
Applications of wavelet-based compression to multidimensional Earth science data
NASA Technical Reports Server (NTRS)
Bradley, Jonathan N.; Brislawn, Christopher M.
1993-01-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
Study on the application of embedded zero-tree wavelet algorithm in still images compression
NASA Astrophysics Data System (ADS)
Zhang, Jing; Lu, Yanhe; Li, Taifu; Lei, Gang
2005-12-01
Through the wavelet transform, the high-frequency bands of an image acquire directional selectivity, which is consistent with the visual characteristics of human eyes. The most important such characteristic is the visual masking (covering) effect. The Embedded Zerotree Wavelet (EZW) coding method applies the same level of coding to a whole image, so important regions (regions of interest) and background regions (regions of indifference) are coded at the same level. Building on a study of the visual masking effect, this paper employs a region-of-interest image compression method, an Embedded Zerotree Wavelet with Regions of Interest (EZW-ROI) algorithm, to encode regions of interest and regions of non-interest separately. In this way, much less important image information is lost; the method makes full use of channel resources and memory space and improves image quality in the regions of interest. Experimental study showed that an image reconstructed with the EZW-ROI algorithm has better visual quality than one reconstructed with EZW at high compression ratios.
Li, F; Sone, S; Takashima, S; Kiyono, K; Yang, Z G; Hasegawa, M; Kawakami, S; Saito, A; Hanamura, K; Asakura, K
2001-03-01
To compare the effect of compression of spiral low-dose CT images by the Joint Photographic Experts Group (JPEG) and wavelet algorithms on detection of small lung cancers. Low-dose spiral CT images of 104 individuals (52 with peripheral lung cancers smaller than 20 mm and 52 control subjects) were used. The original images were compressed using JPEG or wavelet algorithms at a ratio of 10:1 or 20:1. Five radiologists interpreted these images and evaluated the image quality on a high-resolution CRT monitor. Observer performance was studied by receiver operating characteristic (ROC) analysis. There was no significant difference in the detection of cancers measuring 6 to 15 mm in uncompressed images and in those compressed by either of the algorithms, although the quality of images compressed at 20:1 with the wavelet algorithm was somewhat inferior. A lower diagnostic accuracy was noted using images compressed by the JPEG or wavelet algorithms at 20:1 in detecting lung cancers measuring 6 to 10 mm and cancers measuring from 6 to 15 mm with ground-glass opacity. Compression of low-dose CT images at a ratio of 10:1 using JPEG and wavelet algorithms does not compromise the detection rate of small lung cancers.
Ho, B.K.T.; Tsai, M.J.; Wei, J.; Ma, M.; Saipetch, P.
1996-12-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (approximately 20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Moving Picture Experts Group's (MPEG's) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. The method can be generalized to any dynamic image sequence application sensitive to block artifacts.
Cao, Libo; Harrington, Peter de B; Harden, Charles S; McHugh, Vincent M; Thomas, Martin A
2004-02-15
Linear and nonlinear wavelet compression of ion mobility spectrometry (IMS) data are compared and evaluated. IMS provides low detection limits and rapid response for many compounds. Nonlinear wavelet compression of ion mobility spectra reduced the data to 4-5% of its original size, while eliminating artifacts in the reconstructed spectra that occur with linear compression, and the root-mean-square reconstruction error was 0.17-0.20% of the maximum intensity of the uncompressed spectra. Furthermore, nonlinear wavelet compression precisely preserves the peak location (i.e., drift time). Small variations in peak location may occur in the reconstructed spectra that were linearly compressed. A method was developed and evaluated for optimizing the compression. The compression method was evaluated with in-flight data recorded from ion mobility spectrometers mounted in an unmanned aerial vehicle (UAV). Plumes of dimethyl methylphosphonate were disseminated for interrogation by the UAV-mounted IMS system. The daublet 8 wavelet filter exhibited the best performance for these evaluations.
NASA Astrophysics Data System (ADS)
Wei, Shih-Chieh; Huang, Bormin
2004-10-01
Hyperspectral sounder data are used for the retrieval of geophysical parameters that promise better weather prediction. Such data have two characteristics. First, they are huge in size, with 2D spatial coverage and high spectral resolution in the infrared region. Second, they tolerate little noise or error, since retrieving the geophysical parameters involves a mathematically ill-posed problem. Compression for data transfer and archiving should therefore be lossless or near-lossless. Medical data from X-ray computerized tomography (CT) or magnetic resonance imaging (MRI) possess similar characteristics, which motivates applying lossless compression schemes developed for medical data to hyperspectral sounder data. In this paper, we explore the use of a wavelet-based lossless data compression scheme for the 3D hyperspectral data which applies, in sequence, a forward difference scheme, an integer wavelet transform, a Burrows-Wheeler transform, and an arithmetic coder. Compared to previous work, our approach is shown to outperform the CALIC and 3D EZW schemes.
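The first two stages of the pipeline named in this abstract are both exactly invertible in integer arithmetic, which is what makes the overall scheme lossless. A sketch under assumed names (the Burrows-Wheeler and arithmetic-coding stages are omitted), with the integer Haar (S-transform) standing in for whichever integer wavelet the authors used:

```python
def forward_diff(band):
    """Decorrelate along one axis by first differences (lossless)."""
    return [band[0]] + [band[i] - band[i - 1] for i in range(1, len(band))]

def inverse_diff(d):
    out = [d[0]]
    for v in d[1:]:
        out.append(out[-1] + v)
    return out

def int_haar(x):
    """Integer Haar (S-transform): s = floor((a+b)/2), d = a-b.
    Maps integers to integers, hence suitable for lossless coding."""
    s = [(x[i] + x[i + 1]) // 2 for i in range(0, len(x), 2)]
    d = [x[i] - x[i + 1] for i in range(0, len(x), 2)]
    return s, d

def int_haar_inv(s, d):
    out = []
    for si, di in zip(s, d):
        b = si - di // 2   # floor division matches the forward floor
        out.extend([b + di, b])
    return out
```

Both stages concentrate the signal into small-magnitude residuals whose statistics the later BWT and arithmetic-coding stages can exploit.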
A Parallel Adaptive Wavelet Method for the Simulation of Compressible Reacting Flows
NASA Astrophysics Data System (ADS)
Zikoski, Zachary; Paolucci, Samuel
2011-11-01
The Wavelet Adaptive Multiresolution Representation (WAMR) method provides a robust means of controlling spatial grid adaptation: fine grid spacing in regions of a solution requiring high resolution (i.e., near steep gradients, singularities, or near-singularities) and much coarser grid spacing where the solution is slowly varying. The sparse grids produced by the WAMR method exhibit very high compression ratios compared to uniform grids of equivalent resolution. Subsequently, the wide range of spatial scales that often occurs in continuum physics models can be captured efficiently. Furthermore, the wavelet transform provides a direct measure of local error at each grid point, effectively producing automatically verified solutions. The algorithm is parallelized using an MPI-based domain decomposition approach suitable for a wide range of distributed-memory parallel architectures. The method is applied to the solution of the compressible, reactive Navier-Stokes equations and includes multi-component diffusive transport and chemical kinetics models. Results for the method's parallel performance are reported, and its effectiveness on several challenging compressible reacting flow problems is highlighted.
The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
Bradley, J.N.; Brislawn, C.M.; Hopper, T.
1993-01-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
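One of the novel features named in this abstract, symmetric boundary conditions for transforming finite-length signals, is easy to illustrate. A whole-sample mirror extension sketch for orientation only; the WSQ specification defines its own precise extension rules, which this hypothetical helper does not claim to reproduce:

```python
def symmetric_extend(x, n):
    """Extend a finite signal by n mirrored samples on each side,
    e.g. [x2,x1 | x0,x1,x2,x3 | x2,x1] for n = 2 (requires n < len(x)).
    Mirroring avoids the artificial discontinuity that a periodic
    extension would create at the borders, so the subband filters do
    not generate spurious large coefficients near the image edges."""
    left = [x[i] for i in range(n, 0, -1)]
    right = [x[-2 - i] for i in range(n)]
    return left + x + right
```

With linear-phase (symmetric) filters, filtering a symmetrically extended signal keeps the subbands the same length as the input, which is essential for a fixed 64-subband decomposition.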
FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
NASA Astrophysics Data System (ADS)
Bradley, Jonathan N.; Brislawn, Christopher M.; Hopper, Thomas
1993-08-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
Spatial model of lifting scheme in wavelet transforms and image compression
NASA Astrophysics Data System (ADS)
Wu, Yu; Li, Gang; Wang, Guoyin
2002-03-01
Wavelet transforms realized via the lifting scheme are called second-generation wavelet transforms. In some lifting schemes, however, the coefficients are derived mathematically from first-generation wavelets, so the choice of well-performing filters usable in lifting is limited. The spatial structures of such lifting schemes are also simple: the classical lifting scheme, predict-update, is two-stage, and most researchers simply adopt this structure. In addition, in most design results the lifting filters are not only hard to obtain but also fixed. In our former work, we presented a new three-stage lifting scheme, predict-update-adapt, whose filter design results are no longer fixed. In this paper, we continue to study the spatial model of the lifting scheme. A group of general multi-stage lifting schemes is derived and designed. All lifting filters are designed in the spatial domain with appropriately selected mathematical methods. The designed coefficients are flexible and can be adjusted to different data. We give the mathematical design details in this paper. Finally, all the designed lifting models are used in image compression, and satisfactory results are achieved.
Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben
2017-09-12
One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly. Thus, data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping the other compression parameters fixed. This scheme adopts the discrete wavelet transform (DWT) method to decompose the ECG data, the bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values, less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, such as limited payload size and low power consumption.
New image compression algorithm based on improved reversible biorthogonal integer wavelet transform
NASA Astrophysics Data System (ADS)
Zhang, Libao; Yu, Xianchuan
2012-10-01
Low computational complexity and high coding efficiency are the most significant requirements for image compression and transmission. The reversible biorthogonal integer wavelet transform (RB-IWT) achieves low computational complexity through the lifting scheme (LS) and allows both lossy and lossless decoding from a single bitstream. However, RB-IWT degrades the peak signal-to-noise ratio (PSNR) performance of lossy image coding. In this paper, a new IWT-based compression scheme based on an optimal RB-IWT and an improved SPECK is presented. In this new algorithm, the scaling parameter of each subband is chosen to optimize the transform coefficients. During coding, all image coefficients are encoded using a simple, efficient quadtree partitioning method. This scheme is similar to SPECK, but the new method uses a single quadtree partitioning instead of the set partitioning and octave band partitioning of the original SPECK, which reduces coding complexity. Experimental results show that the new algorithm not only has low computational complexity, but also provides lossy-coding PSNR performance comparable to the SPIHT algorithm using RB-IWT filters and better than the SPECK algorithm. Additionally, the new algorithm supports both lossy and lossless compression efficiently from a single bitstream. The presented algorithm is valuable for future remote sensing image compression.
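The quadtree partitioning idea underlying such coders can be sketched briefly. This is a generic significance-map illustration under assumed names, not the paper's improved SPECK: a block that contains no coefficient at or above the threshold is dismissed with a single 0 bit; otherwise a 1 bit is emitted and its four quadrants are tested recursively.

```python
def quad_code(block, T, bits):
    """Recursive quadtree significance coding of a square, power-of-two
    2D block (list of lists) against magnitude threshold T.
    Appends one bit per tested block to `bits`."""
    h, w = len(block), len(block[0])
    sig = any(abs(v) >= T for row in block for v in row)
    bits.append(1 if sig else 0)
    if not sig or (h == 1 and w == 1):
        return                      # whole block dismissed, or single pixel
    h2, w2 = h // 2, w // 2
    quads = [
        [row[:w2] for row in block[:h2]],   # top-left
        [row[w2:] for row in block[:h2]],   # top-right
        [row[:w2] for row in block[h2:]],   # bottom-left
        [row[w2:] for row in block[h2:]],   # bottom-right
    ]
    for q in quads:
        quad_code(q, T, bits)
```

Large insignificant regions cost one bit each, which is where the coding efficiency comes from; repeating the pass with halved thresholds yields an embedded bitstream.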
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based, remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
Wavelet compression of three-dimensional time-lapse biological image data.
Stefansson, H Narfi; Eliceiri, Kevin W; Thomas, Charles F; Ron, Amos; DeVore, Ron; Sharpley, Robert; White, John G
2005-02-01
The use of multifocal-plane, time-lapse recordings of living specimens has allowed investigators to visualize dynamic events both within ensembles of cells and individual cells. Recordings of such four-dimensional (4D) data from digital optical sectioning microscopy produce very large data sets. We describe a wavelet-based data compression algorithm that capitalizes on the inherent redundancies within multidimensional data to achieve higher compression levels than can be obtained from single images. The algorithm will permit remote users to roam through large 4D data sets using communication channels of modest bandwidth at high speed. This will allow animation to be used as a powerful aid to visualizing dynamic changes in three-dimensional structures.
Fast algorithm of byte-to-byte wavelet transform for image compression applications
NASA Astrophysics Data System (ADS)
Pogrebnyak, Oleksiy B.; Sossa Azuela, Juan H.; Ramirez, Pablo M.
2002-11-01
A new fast algorithm for the 2D DWT is presented. The algorithm operates on byte-represented images and performs the transformation with the second-order Cohen-Daubechies-Feauveau wavelet, using the lifting scheme for the calculations. The proposed algorithm is based on a "checkerboard" computation scheme for the non-separable 2D wavelet. The problem of data extension near the image borders is resolved by computing a 1D Haar wavelet in the vicinity of the borders. With the checkerboard splitting, only one detail image is produced at each level of decomposition, which simplifies further analysis for data compression. The calculations are simple and free of floating-point operations, allowing implementation of the designed algorithm in fixed-point DSP processors for fast, near-real-time processing. The proposed algorithm does not achieve perfect reconstruction of the processed data because of the rounding introduced at each level of decomposition/reconstruction to perform operations on byte-represented data. The designed algorithm was tested on different images, with the well-known PSNR as the quantitative quality criterion for the restored images; for visual quality estimation, error maps between the original and restored images were calculated. The simulation results show that the visual and quantitative quality of the restored images degrades as the number of decomposition levels increases, but remains sufficiently high even after six levels. The introduced distortions are concentrated in the vicinity of high-spatial-activity details and are absent in homogeneous regions. The designed algorithm can be used for lossy image compression and in noise suppression applications.
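The lifting idea behind such algorithms can be illustrated with a short sketch. Below is a minimal 1D integer 5/3 (CDF(2,2)) lifting transform with periodic boundaries — an illustrative example only, not the paper's non-separable 2D checkerboard scheme with Haar border handling; all names are mine. Unlike the byte-rounded variant the abstract describes, this integer version is perfectly invertible.

```python
import numpy as np

def lift_53_forward(x):
    """One level of the integer 5/3 (CDF(2,2)) wavelet via lifting.

    Predict step turns odd samples into detail coefficients;
    update step turns even samples into the low-pass approximation.
    Floor-rounding (arithmetic shifts) keeps everything in integers.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict: d[n] = odd[n] - floor((even[n] + even[n+1]) / 2)
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update: s[n] = even[n] + floor((d[n-1] + d[n] + 2) / 4)
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d

def lift_53_inverse(s, d):
    """Undo the lifting steps in reverse order: exact reconstruction."""
    even = s - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because the inverse subtracts exactly what the forward pass added, the round trip is lossless; the loss described in the paper comes only from re-rounding results back to bytes at each level.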
Fingerprint verification by correlation using wavelet compression of preprocessing digital images
NASA Astrophysics Data System (ADS)
Daza, Yaileth Morales; Torres, Cesar O.
2008-04-01
In this paper, we implement a practical digital security system that combines digital correlation and wavelet compression with preprocessing of digital images; we apply this biometric system to fingerprint verification or authentication, which uses the fingerprint as a unique recognition pattern. The system has a biometric sensor in charge of capturing the images and digitizing them. These images are preprocessed using wavelet-based image compression and skeletonized to emphasize the elements of the image considered most important, eliminating undesired signals that appear because of the acquisition method or the conditions under which the image was captured. The images are stored in a database and passed through different algorithms to build a composite filter, which serves as the reference or pattern fingerprint for the comparisons. The comparisons are shown in a graph in which a different amplitude is observed for each fingerprint; this is done with the help of the Fourier transform and correlation operations, which allow the degree of similarity between two images to be quantified. Depending on the results returned by the system when a new fingerprint is compared with the composite filter, the user is granted or denied access.
Kasaei, Shohreh; Deriche, Mohamed; Boashash, Boualem
2002-01-01
A novel compression algorithm for fingerprint images is introduced. Using wavelet packets and lattice vector quantization, a new vector quantization scheme based on an accurate model for the distribution of the wavelet coefficients is presented. The model is based on the generalized Gaussian distribution. We also discuss a new method for determining the largest radius of the lattice used and its scaling factor, for both uniform and piecewise-uniform pyramidal lattices. The proposed algorithms aim at achieving the best rate-distortion function by adapting to the characteristics of the subimages. In the proposed optimization algorithm, no assumptions about the lattice parameters are made, and no training and multi-quantizing are required. We also show that the wedge region problem encountered with sharply distributed random sources is resolved in the proposed algorithm. The proposed algorithms adapt to variability in input images and to specified bit rates. Compared to other available image compression algorithms, the proposed algorithms result in higher quality reconstructed images for identical bit rates.
NASA Astrophysics Data System (ADS)
Liang, Lei; Li, Xinwu; Gao, Xizhang; Guo, Huadong
2015-01-01
The three-dimensional (3-D) structure of forests, especially the vertical structure, is an important parameter of forest ecosystem modeling for monitoring ecological change. Synthetic aperture radar tomography (TomoSAR) provides scene reflectivity estimation of vegetation along elevation coordinates. Due to the advantages of super-resolution imaging and a small number of measurements, distributed compressive sensing (DCS) inversion techniques for polarimetric SAR tomography were successfully developed and applied. This paper addresses the 3-D imaging of forested areas based on the framework of DCS using fully polarimetric (FP) multibaseline SAR interferometric (MB-InSAR) tomography at the P-band. A new DCS-based FP TomoSAR method is proposed: a wavelet-based distributed compressive sensing FP TomoSAR method (FP-WDCS TomoSAR method). The method takes advantage of the joint sparsity between polarimetric channel signals in the wavelet domain to jointly invert the reflectivity profiles in each channel. The method not only allows high accuracy and super-resolution imaging with a low number of acquisitions, but can also obtain the polarization information of the vertical structure of forested areas. The effectiveness of the techniques for polarimetric SAR tomography is demonstrated using FP P-band airborne datasets acquired by the ONERA SETHI airborne system over a test site in Paracou, French Guiana.
Li, Qiyue; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Lai, Zongying; Ye, Jing; Chen, Zhong
2015-06-01
Compressed sensing MRI (CS-MRI) is a promising technology to accelerate magnetic resonance imaging. Both improving the image quality and reducing the computation time are important for this technology. Recently, a patch-based directional wavelet (PBDW) has been applied in CS-MRI to improve edge reconstruction. However, this method is time consuming since it involves extensive computations, including geometric direction estimation and numerous iterations of wavelet transform. To accelerate computations of PBDW, we propose a general parallelization of patch-based processing by taking the advantage of multicore processors. Additionally, two pertinent optimizations, excluding smooth patches and pre-arranged insertion sort, that make use of sparsity in MR images are also proposed. Simulation results demonstrate that the acceleration factor with the parallel architecture of PBDW approaches the number of central processing unit cores, and that pertinent optimizations are also effective to make further accelerations. The proposed approaches allow compressed sensing MRI reconstruction to be accomplished within several seconds. Copyright © 2015 Elsevier Inc. All rights reserved.
Chui, C.K.
1992-01-01
The subject of wavelet analysis has recently drawn a great deal of attention from mathematical scientists in various disciplines. It is creating a common link between mathematicians, physicists, and electrical engineers. This book consists of both monographs and edited volumes on the theory and applications of this rapidly developing subject. Its objective is to meet the needs of academic, industrial, and governmental researchers, as well as to provide instructional material for teaching at both the undergraduate and graduate levels.
Remote sensing image compression method based on lift scheme wavelet transform
NASA Astrophysics Data System (ADS)
Tao, Hongjiu; Tang, Xinjian; Liu, Jian; Tian, Jinwen
2003-06-01
Based on the lifting scheme and the construction theorem for the integer Haar wavelet and biorthogonal wavelets, we propose a new integer wavelet transform construction method built on the lifting scheme, after introducing the construction of specific-demand biorthogonal wavelet transforms from the Haar wavelet and the Lazy wavelet. In this paper, we present the method and algorithm of the lifting scheme, and we also give a mathematical formulation of the method together with experimental results.
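The Haar/Lazy-wavelet building blocks named in the abstract can be sketched concretely: the Lazy wavelet splits a signal into its even and odd polyphase components, and a single predict and update lifting step then yields the integer Haar (S-) transform. A minimal illustrative sketch, not the authors' construction:

```python
import numpy as np

def s_transform_forward(x):
    """Integer Haar (S-transform) built by lifting."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]  # Lazy wavelet: polyphase split
    d = odd - even                # predict step: Haar detail
    s = even + (d >> 1)           # update step: floored integer mean
    return s, d

def s_transform_inverse(s, d):
    """Reverse the lifting steps: exact integer reconstruction."""
    even = s - (d >> 1)
    odd = even + d
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Since each lifting step is undone exactly, the transform maps integers to integers with perfect reconstruction, which is what makes it suitable for lossless image compression.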
Ricke, J; Maass, P; Lopez Hänninen, E; Liebig, T; Amthauer, H; Stroszczynski, C; Schauer, W; Boskamp, T; Wolf, M
1998-08-01
The aim of this study was to evaluate different lossy image compression algorithms in direct comparison. Computed radiographs were reviewed after compression with Wavelet, Fractal, and Joint Photographic Experts Group (JPEG) algorithms. For receiver operating characteristic (ROC) analysis, 54 thoracic computed radiographs (31 showing pulmonary nodules) were compressed at a ratio of 1:60. Five images of a test phantom were coded at 1:13. All images were reviewed on a PC; uncompressed images were reviewed both on the PC and at a radiologic workstation (with image processing). For thorax images, the decrease in diagnostic accuracy was significant with Wavelets, and Fractal performed worse than Wavelets. No ROC curve could be obtained for JPEG due to poor image quality. No diagnostic loss was noted when comparing PC and workstation review. For low-contrast details of the phantom, results of Wavelet compression were equal to uncompressed images, although fewer true positives and more true negatives were noted with Wavelets. Wavelets were superior to JPEG, and JPEG images were superior to Fractal. Workstation review was superior to PC review. Only Wavelets provided accurate review of low-contrast details at a compression of 1:13. Frequency filtering of Wavelets affects contrast even at a low compression ratio. JPEG performed better than Fractal at low compression ratios and worse at high ones.
Duchaineau, M A; Porumbescu, S D; Bertram, M; Hamann, B; Joy, K I
2000-10-06
Currently, large physics simulations produce 3D fields whose individual surfaces, after conventional extraction processes, contain upwards of hundreds of millions of triangles. Detailed interactive viewing of these surfaces requires powerful compression to minimize storage, and fast view-dependent optimization of display triangulations to drive high-performance graphics hardware. In this work we provide an overview of an end-to-end multiresolution dataflow strategy whose goal is to increase efficiencies in practice by several orders of magnitude. Given recent advancements in subdivision-surface wavelet compression and view-dependent optimization, we present algorithms here that provide the ''glue'' that makes this strategy hold together. Shrink-wrapping converts highly detailed unstructured surfaces of arbitrary topology to the semi-structured form needed for wavelet compression. Remapping to triangle bintrees minimizes disturbing ''pops'' during real-time display-triangulation optimization and provides effective selective-transmission compression for out-of-core and remote access to these huge surfaces.
R-D optimized tree-structured compression algorithms with discrete directional wavelet transform
NASA Astrophysics Data System (ADS)
Liu, Hui; Ma, Siliang
2008-09-01
A new image coding method based on the discrete directional wavelet transform (S-WT) and quad-tree decomposition is proposed here. The S-WT is the transform proposed in [V. Velisavljevic, B. Beferull-Lozano, M. Vetterli, P.L. Dragotti, Directionlets: anisotropic multidirectional representation with separable filtering, IEEE Trans. Image Process. 15(7) (2006)]; it is based on lattice theory, and its difference from the standard wavelet transform is that it allows more transform directions. Because the directional property in a small region is generally more regular than in a large block, the input image is divided into many small regions by means of the popular quad-tree segmentation in order to make full use of the multidirectionality and directional vanishing moments (DVM) of the S-WT, with the splitting criterion chosen in the rate-distortion sense. After the optimal quad-tree is obtained, a fast resource bit-allocation algorithm is implemented by means of the embedded property of SPECK, utilizing the model proposed in [M. Rajpoot, Model based optimal bit allocation, in: IEEE Data Compression Conference, 2004, Proceedings, DCC 2004]. Experimental results indicate that our algorithms perform better compared to some state-of-the-art image coders.
Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.
Lai, Zongying; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Ye, Jing; Zhan, Zhifang; Chen, Zhong
2016-01-01
Compressed sensing magnetic resonance imaging has shown great capacity for accelerating magnetic resonance imaging if an image can be sparsely represented. How the image is sparsified seriously affects its reconstruction quality. In the present study, a graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstructions. With this transform, image patches are viewed as vertices and their differences as edges, and the shortest path on the graph minimizes the total difference of all image patches. Using an l1-norm regularized formulation of the problem, solved by an alternating-direction minimization with continuation algorithm, the experimental results demonstrate that the proposed method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction errors on the tested datasets.
NASA Astrophysics Data System (ADS)
Zhang, Xubing; Guan, Zequn; Yu, Xin
2007-11-01
The enormous volume and valuable applications of MODIS (Moderate Resolution Imaging Spectroradiometer) multi-spectral images call for an effective lossless compression method. To this end, optimal linear prediction and band ordering are adopted here to exploit the abundant spectral redundancy, while the lifting wavelet transform and the SPIHT algorithm are used to eliminate the spatial redundancy of MODIS data. The optimal inter-band prediction sequence is specified by band ordering, and except for the first band, only the residual-error images of the other bands need to be encoded after band prediction. To avoid information loss, the optimal linear predictor is improved by rounding the prediction coefficients to integers, and the D5/3 lifting wavelet is used to implement an integer-to-integer wavelet transform of each band image, which also accelerates the compression. Finally, lossless compression methods such as "WinRAR" and "3DSPIHT" are compared with this compression method, and the experimental results show that our method achieves higher compression.
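Inter-band prediction with integer-rounded coefficients can be sketched as follows. This is a hedged illustration only: the fixed-point scale, the no-intercept least-squares form, and all names are my assumptions, not details from the paper. The key point is that because the coefficient is rounded to a fixed-point integer, the decoder can reproduce the prediction exactly and the residual alone suffices for lossless reconstruction.

```python
import numpy as np

SCALE = 256  # fixed-point scale for the rounded coefficient (assumed)

def predict_band(ref, target):
    """Least-squares linear prediction of one band from a reference band,
    with the coefficient rounded to fixed point so the residual is
    exactly reproducible by the decoder."""
    ref64 = ref.astype(np.int64)
    a = np.sum(ref64 * target) / np.sum(ref64 * ref64)  # optimal slope
    a_int = int(round(a * SCALE))                       # integer coefficient
    pred = (ref64 * a_int) // SCALE                     # integer prediction
    residual = target.astype(np.int64) - pred           # coded losslessly
    return a_int, residual

def reconstruct_band(ref, a_int, residual):
    """Decoder side: same integer prediction plus the residual."""
    return (ref.astype(np.int64) * a_int) // SCALE + residual
```

A well-chosen band ordering makes the reference band highly correlated with the target, shrinking the residual entropy before the integer wavelet stage.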
ECG compression using Slantlet and lifting wavelet transform with and without normalisation
NASA Astrophysics Data System (ADS)
Aggarwal, Vibha; Singh Patterh, Manjeet
2013-05-01
This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear and nonlinear transforms. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within the tolerance. A binary look-up table is then built to store the position map of zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding, and the look-up table is encoded by Huffman coding. The results show that the LWT gives the best result among the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the normalised case, the TC are divided by ? (where ? is the number of samples) to reduce their range; the normalised coefficients (NC) are then thresholded, after which the procedure is the same as for coefficients without normalisation. The results show that the compression ratio (CR) with LWT and normalisation is improved compared to that without normalisation.
Comparison of wavelet scalar quantization and JPEG for fingerprint image compression
NASA Astrophysics Data System (ADS)
Kidd, Robert C.
1995-01-01
An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of existing international standard JPEG, it appears likely that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Taken together, these facts suggest a decision different from the one that was made by the FBI with regard to its fingerprint image compression standard. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
Sriraam, N.
2012-01-01
Developments of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, and hence there is a need for an efficient compression system. This paper presents a new and efficient high performance lossless EEG compression using wavelet transform and neural network predictors. The coefficients generated from the EEG signal by integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, a Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with less encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238
NASA Astrophysics Data System (ADS)
Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping
2014-10-01
The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large acoustic-impedance mismatch at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal, so signal-processing techniques are highly valuable in this non-destructive testing setting. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and pulse compression is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different wavelet families (Daubechies, Symlet and Coiflet) and decomposition levels of the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are compared to acquire a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials with ACUT.
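The phase-coded pulse compression step can be sketched in a few lines: cross-correlating the received trace with the transmitted Barker code concentrates the energy of the long coded pulse into a narrow correlation peak (the 13-bit Barker code has a 13:1 peak-to-sidelobe ratio). A minimal sketch, not the authors' implementation:

```python
import numpy as np

# 13-bit Barker code: autocorrelation sidelobes have magnitude <= 1
BARKER_13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1],
                     dtype=float)

def pulse_compress(received, code=BARKER_13):
    """Phase-coded pulse compression: cross-correlate the received
    trace with the transmitted code; the code's position shows up
    as a sharp correlation peak above the noise floor."""
    return np.correlate(received, code, mode='same')
```

Embedding the code in a noisy trace and taking the argmax of the compressed output locates the echo, which is the basis for the SNR improvement reported in the paper.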
Kihara, Y
2001-04-01
To compare observer performance on cathode-ray-tube (CRT) monitors for personal computers with that on conventional radiographs in the detection of small lung nodules. Fifty-eight normal chest radiographs and 58 chest radiographs with a small lung nodule were selected. Ten radiologists examined the original conventional films on a viewbox and digitized (8 bit) uncompressed and compressed images of the same patient on a color CRT monitor with a matrix of 1,600 x 1,200, and rated the presence of lung nodules on a five-level scale of confidence. The methods of compression used in this study were the JPEG and wavelet methods, with compression ratios of 6:1 and 15:1. Results were analyzed by receiver operating characteristic methods. There was no significant difference between film and digitized uncompressed and compressed images obtained by the JPEG and wavelet methods at a compression ratio of 6:1. No statistically significant difference was detected between film and digitized images with wavelet compression at 15:1. However, detection was less accurate on digitized images with JPEG compression at 15:1. Digitized (8 bit) uncompressed and compressed images with a compression ratio of 6:1 are acceptable for the detection of small lung nodules. Digitized compressed images at a compression ratio of 15:1 are also acceptable when the wavelet method is used.
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used as a method to get the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.
NASA Astrophysics Data System (ADS)
Cheng, Kai-jen; Dill, Jeffrey
2013-05-01
In this paper, a lossless-to-lossy transform-based compression method for hyperspectral images, based on the Integer Karhunen-Loève Transform (IKLT) and the Integer Discrete Wavelet Transform (IDWT), is proposed. Integer transforms are used to accomplish reversibility. The IKLT is used as a spectral decorrelator and the 2D-IDWT as a spatial decorrelator. The three-dimensional Binary Embedded Zerotree Wavelet (3D-BEZW) algorithm efficiently encodes the hyperspectral volumetric image by implementing progressive bitplane coding. The signs and magnitudes of transform coefficients are encoded separately: lossy and lossless compression of signs is implemented by the conventional EZW algorithm and arithmetic coding, respectively, while the efficient 3D-BEZW algorithm is applied to code magnitudes. Further compression can be achieved using arithmetic coding. The lossless and lossy compression performance is compared with other state-of-the-art predictive and transform-based image compression methods on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Results show that the 3D-BEZW performance is comparable to predictive algorithms, while its computational cost is comparable to transform-based algorithms.
Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping
2016-01-01
Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS) is proposed. It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
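Component (ii) above, the iterative shrinkage-thresholding algorithm, can be sketched as plain ISTA for l1-regularised least squares; the exponential wavelet transform and random shift of EWISTARS are not modelled here, and the function names are mine.

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, n_iter=200):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    a gradient step on the data term, then soft-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

In CS-MRI, A would combine the undersampled Fourier measurement operator with a sparsifying wavelet transform; the shrinkage step is what enforces sparsity of the reconstruction.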
High performance projectile seal development for non perfect railgun bores
Wolfe, T.R.; Vine, F.E. Le; Riedy, P.E.; Panlasigui, A.; Hawke, R.S.; Susoeff, A.R.
1997-01-01
The sealing of high pressure gas behind an accelerating projectile has been developed over centuries of use in conventional guns and cannons, where the principal concerns were propulsion efficiency and trajectory accuracy and repeatability. The development of guns for use as high-pressure equation-of-state (EOS) research tools increased the importance of better seals to prevent gas leakage from interfering with the experimental targets, and the development of plasma-driven railguns has further increased the need for higher-quality seals to prevent gas and plasma blow-by. This paper summarizes more than a decade of effort to meet these increased requirements. In small bore railguns, the first improvement was prompted by the need to contain the propulsive plasma behind the projectile to avoid the initiation of current-conducting paths in front of the projectile. The second major requirement arose from the development of a railgun to serve as an EOS tool, where it was necessary to maintain an evacuated region in front of the projectile throughout the acceleration process. More recently, the techniques developed for small bore guns have been applied to large bore railguns and electro-thermal chemical guns in order to maximize their propulsion efficiency. Furthermore, large bore railguns are often less rigid and less straight than conventional homogeneous-material guns; hence, techniques to maintain seals in non-perfect, non-homogeneous-material launchers have been developed and are included in this paper.
NASA Astrophysics Data System (ADS)
Paolucci, Samuel; Zikoski, Zachary J.; Grenga, Temistocle
2014-09-01
The Wavelet Adaptive Multiresolution Representation (WAMR) algorithm is parallelized using a domain decomposition approach suitable to a wide range of distributed-memory parallel architectures. The method is applied to the solution of two unsteady, compressible, reactive flow problems and includes detailed diffusive transport and chemical kinetics models. The first problem is a cellular detonation in a hydrogen-oxygen-argon mixture. The second problem corresponds to the ignition and combustion of a hydrogen bubble by a shock wave in air. In both cases, results agree favorably with previous computational results.
NASA Astrophysics Data System (ADS)
Huang, Bormin; Huang, Hung-Lung; Chen, Hao; Ahuja, Alok; Baggett, Kevin; Schmit, Timothy J.; Heymann, Roger W.
2004-02-01
The next-generation NOAA/NESDIS GOES-R hyperspectral sounder, now referred to as the HES (Hyperspectral Environmental Suite), will have hyperspectral resolution (over one thousand channels with spectral widths on the order of 0.5 wavenumber) and high spatial resolution (less than 10 km). Hyperspectral sounder data is a particular class of data requiring high accuracy for useful retrieval of atmospheric temperature and moisture profiles, surface characteristics, cloud properties, and trace gas information. Hence compression of these data sets should ideally be lossless or near lossless. Given the large volume of three-dimensional hyperspectral sounder data that will be generated by the HES instrument, the use of robust data compression techniques will be beneficial to data transfer and archive. In this paper, we study lossless data compression for the HES using 3D integer wavelet transforms via the lifting schemes. The wavelet coefficients are processed with the 3D set partitioning in hierarchical trees (SPIHT) scheme followed by context-based arithmetic coding. SPIHT provides better coding efficiency than Shapiro's original embedded zerotree wavelet (EZW) algorithm. We extend the 3D SPIHT scheme to take on any size of 3D satellite data, each of whose dimensions need not be divisible by 2^N, where N is the number of levels of the wavelet decomposition performed. The compression ratios of various kinds of wavelet transforms are presented along with a comparison with the JPEG2000 codec.
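The integer wavelet transforms via lifting mentioned above can be sketched with the reversible LeGall 5/3 filter, the integer transform used for lossless coding in JPEG2000. This is a one-dimensional, one-level sketch: the boundary handling (simple sample repetition) and the even-length restriction are illustrative simplifications, whereas the paper's 3D scheme additionally handles dimensions not divisible by 2^N.

```python
import numpy as np

def fwd_53(x):
    """One level of the reversible integer 5/3 lifting transform.

    x: even-length integer array. Returns (approx, detail); the
    arithmetic uses floor operations (>> shifts), so the transform
    is exactly invertible. Boundaries repeat the edge sample.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    nbr = np.append(even[1:], even[-1])     # right neighbour of each even sample
    d = odd - ((even + nbr) >> 1)           # predict step
    dl = np.insert(d[:-1], 0, d[0])         # left neighbour of each detail
    s = even + ((dl + d + 2) >> 2)          # update step
    return s, d

def inv_53(s, d):
    """Exact inverse of fwd_53: undo the update, then the predict."""
    dl = np.insert(d[:-1], 0, d[0])
    even = s - ((dl + d + 2) >> 2)
    nbr = np.append(even[1:], even[-1])
    odd = d + ((even + nbr) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is subtracted back out exactly, reconstruction is bit-exact, which is what makes lifting attractive for lossless coding; a 3D transform applies the same step along each axis in turn.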
NASA Astrophysics Data System (ADS)
Huang, Bormin; Huang, Hung-Lung; Chen, Hao; Ahuja, Alok; Baggett, Kevin; Schmit, Timothy J.; Heymann, Roger W.
2003-09-01
Hyperspectral sounder data is a particular class of data that requires high accuracy for useful retrieval of atmospheric temperature and moisture profiles, surface characteristics, cloud properties, and trace gas information. Therefore compression of these data sets should ideally be lossless or near lossless. The next-generation NOAA/NESDIS GOES-R hyperspectral sounder, now referred to as the HES (Hyperspectral Environmental Suite), will have hyperspectral resolution (over one thousand channels with spectral widths on the order of 0.5 wavenumber) and high spatial resolution (less than 10 km). Given the large volume of three-dimensional hyperspectral sounder data that will be generated by the HES instrument, the use of robust data compression techniques will be beneficial to data transfer and archive. In this paper, we study lossless data compression for the HES using 3D integer wavelet transforms via the lifting schemes. The wavelet coefficients are then processed with the 3D embedded zerotree wavelet (EZW) algorithm followed by context-based arithmetic coding. We extend the 3D EZW scheme to take on any size of 3D satellite data, each of whose dimensions need not be divisible by 2^N, where N is the number of levels of the wavelet decomposition performed. The compression ratios of various kinds of wavelet transforms are presented along with a comparison with the JPEG2000 codec.
Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas
2009-07-01
Most commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices operating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed.
Mean square error approximation for wavelet-based semiregular mesh compression.
Payan, Frédéric; Antonini, Marc
2006-01-01
The objective of this paper is to propose an efficient model-based bit allocation process optimizing the performance of a wavelet coder for semiregular meshes. More precisely, this process should compute the best quantizers for the wavelet coefficient subbands that minimize the reconstructed mean square error for one specific target bitrate. In order to design a fast, low-complexity allocation process, we propose an approximation of the reconstructed mean square error relative to the coding of semiregular mesh geometry. This error is expressed directly from the quantization errors of each coefficient subband. For that purpose, we have to take into account the influence of the wavelet filters on the quantized coefficients. Furthermore, we propose a specific approximation for wavelet transforms based on lifting schemes. Experimentally, we show that, in comparison with a "naive" approximation (depending on the subband levels), using the proposed approximation as the distortion criterion during the model-based allocation process improves the performance of a wavelet-based coder for any model, any bitrate, and any lifting scheme.
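The idea of model-based bit allocation across coefficient subbands can be sketched with a greedy marginal-analysis loop under the classical high-rate model D_i = w_i·var_i·2^(-2b_i). The per-subband weights here stand in for the filter-dependent error approximation the paper derives; this is a generic allocator, not the authors' algorithm.

```python
import numpy as np

def allocate_bits(variances, weights, total_bits):
    """Greedy marginal-analysis bit allocation over subbands.

    Assumes the high-rate model D_i = w_i * var_i * 2^(-2*b_i):
    each successive bit goes to the subband whose weighted
    distortion would drop the most (each bit divides it by 4).
    """
    bits = np.zeros(len(variances), dtype=int)
    dist = np.asarray(weights, dtype=float) * np.asarray(variances, dtype=float)
    for _ in range(total_bits):
        i = int(np.argmax(dist - dist / 4.0))   # largest distortion reduction
        bits[i] += 1
        dist[i] /= 4.0
    return bits

# Three subbands with decreasing variance, equal weights, 9 bits total
bits = allocate_bits([100.0, 10.0, 1.0], [1.0, 1.0, 1.0], 9)
```

For equal weights this greedy loop reproduces the classical log-variance allocation rule; making the weights account for the synthesis filters is precisely what the paper's approximation contributes.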
Compression of ECG signals using variable-length classified vector sets and wavelet transforms
NASA Astrophysics Data System (ADS)
Gurkan, Hakan
2012-12-01
In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines the processes of modeling the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS) and residual error coding via wavelet transform. In particular, we form the VL-CSEVS derived from the ECG signals, which exploits the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths obtained by the energy-based segmentation method; they are then made available to both the transmitter and the receiver used in our proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and MIT-BIH Compression Test Database, and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that the proposed algorithm achieves high compression ratios with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, which is supported by the clinical tests that we have carried out.
Kumar, Ranjeet; Kumar, A; Singh, G K
2016-06-01
In the biomedical field, it is necessary to reduce data quantity due to the storage limitations of real-time ambulatory and telemedicine systems. Research into efficient and simple compression techniques with long-term benefits has been underway since the field's beginning. This paper presents an algorithm based on singular value decomposition (SVD) and embedded zerotree wavelet (EZW) techniques for ECG signal compression, which deals with the huge data volumes of ambulatory systems. The proposed method utilizes a low-rank matrix for initial compression of the two-dimensional (2-D) ECG data array using SVD, and then EZW is applied for final compression. Construction of the 2-D array is a key issue for the proposed technique in pre-processing. Here, three different beat-segmentation approaches have been exploited for 2-D array construction, using segmented beat alignment to exploit beat correlation. The proposed algorithm has been tested on MIT-BIH arrhythmia records, and it was found to be very efficient in compressing different types of ECG signal with low signal distortion under different fidelity assessments. The evaluation results illustrate that the proposed algorithm achieved a compression ratio of 24.25:1 with excellent quality of signal reconstruction, with a percentage root-mean-square difference (PRD) of 1.89% for ECG signal Rec. 100, consuming only 162 bps instead of 3960 bps for the uncompressed data. The proposed method is efficient and flexible across different types of ECG signal, and controls the quality of reconstruction. Simulation results clearly illustrate that the proposed method can play a big role in saving memory space in health data centres as well as saving bandwidth in telemedicine-based healthcare systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
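The first stage of the SVD-plus-EZW pipeline, low-rank approximation of the 2-D beat array, can be sketched as follows. The synthetic "beat matrix" (rows as scaled copies of one beat shape, an idealization of aligned beats) and the choice of rank are illustrative assumptions, not the paper's data or parameters.

```python
import numpy as np

def svd_compress(X, rank):
    """Truncated-SVD low-rank approximation of a 2-D data array.

    Returns the rank-`rank` approximation and the residual, which
    a second-stage coder (EZW in the paper) would then compress.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return Xk, X - Xk

# Synthetic beat matrix: each row is a scaled copy of one beat shape,
# so the array is (exactly) rank one -- aligned beats are highly correlated.
rng = np.random.default_rng(1)
beat = np.sin(np.linspace(0.0, 2.0 * np.pi, 128))
scales = 1.0 + 0.05 * rng.standard_normal(64)
X = np.outer(scales, beat)
Xk, resid = svd_compress(X, rank=1)
```

The better the beat alignment, the closer the array is to low rank and the smaller the residual handed to the wavelet stage, which is why the paper's segmentation choices matter.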
The Performance of Wavelets for Data Compression in Selected Military Applications
1990-02-23
[Exhibit-list residue from the report's figures; recoverable titles: Laplacian sidelobe-to-peak ratio vs. radial distance, one graph per (reference image, test patch) pair with seven compressions per graph; Laplacian vs. radial distance; and comparisons of wavelet approximations at various scalings vs. radial distance, two graphs per compression with seven scalings per graph.]
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.;
2015-01-01
Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; no error indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
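The bit-plane-encoding side of a DWT/BPE coder can be illustrated with a minimal magnitude bit-plane codec: coefficients are emitted most-significant plane first, so truncating the stream trades error for rate. This is a bare sketch, not the CCSDS BPE, which additionally applies block structure and entropy coding to the planes.

```python
import numpy as np

def bitplane_encode(coeffs, num_planes):
    """Split integer coefficients into sign flags plus magnitude
    bit-planes, most significant plane first (a bare sketch)."""
    c = np.asarray(coeffs, dtype=np.int64)
    signs = c < 0
    mags = np.abs(c)
    planes = [((mags >> p) & 1).astype(np.uint8)
              for p in range(num_planes - 1, -1, -1)]
    return signs, planes

def bitplane_decode(signs, planes, num_planes):
    """Rebuild coefficients from however many planes were received;
    missing low planes are treated as zero (embedded truncation)."""
    mags = np.zeros(signs.size, dtype=np.int64)
    for i, bits in enumerate(planes):
        mags |= bits.astype(np.int64) << (num_planes - 1 - i)
    return np.where(signs, -mags, mags)

coeffs = np.array([-37, 5, 120, 0])
signs, planes = bitplane_encode(coeffs, num_planes=7)
exact = bitplane_decode(signs, planes, 7)        # all planes: lossless
coarse = bitplane_decode(signs, planes[:3], 7)   # truncated stream
```

Keeping all planes is lossless; dropping the k lowest planes bounds the per-coefficient error by 2^k, which is the embedded rate-distortion trade the study exploits.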
Gravity inversion using wavelet-based compression on parallel hybrid CPU/GPU systems: application to southwest Ghana
NASA Astrophysics Data System (ADS)
Martin, Roland; Monteiller, Vadim; Komatitsch, Dimitri; Perrouty, Stéphane; Jessell, Mark; Bonvalot, Sylvain; Lindsay, Mark
2013-12-01
We solve the 3-D gravity inverse problem using a massively parallel voxel (or finite element) implementation on a hybrid multi-CPU/multi-GPU (graphics processing units/GPUs) cluster. This allows us to obtain information on density distributions in heterogeneous media in an efficient computation time. In a new software package called TOMOFAST3D, the inversion is solved with an iterative least-square or a gradient technique, which minimizes a hybrid L1-/L2-norm-based misfit function. It is drastically accelerated using either Haar or fourth-order Daubechies wavelet compression operators, which are applied to the sensitivity matrix kernels involved in the misfit minimization. The compression process behaves like a pre-conditioning of the huge linear system to be solved, and a reduction of two or three orders of magnitude in computational time can be obtained for a given number of CPU processor cores. The memory storage required is also significantly reduced by a similar factor. Finally, we show how this CPU parallel inversion code can be accelerated further by a factor between 3.5 and 10 using GPU computing. Performance levels are given for an application to Ghana, and physical information obtained after 3-D inversion using a sensitivity matrix with around 5.37 trillion elements is discussed. Using compression, the whole inversion process can last from a few minutes to less than an hour for a given number of processor cores instead of tens of hours for a similar number of processor cores when compression is not used.
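The sensitivity-kernel compression can be illustrated with an orthonormal Haar transform: each kernel row is transformed, small coefficients are dropped, and, because the transform is orthonormal, inner products with similarly transformed model vectors are preserved. The 1/distance kernel shape and the keep-fraction below are illustrative assumptions, not the paper's operators.

```python
import numpy as np

def haar_1d(x):
    """Full orthonormal Haar transform of a length-2^k vector."""
    x = np.asarray(x, dtype=float).copy()
    n = x.size
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)
        x[:n // 2], x[n // 2:n] = a, d
        n //= 2
    return x

def inv_haar_1d(c):
    """Inverse of haar_1d (undo the pyramid from coarsest level up)."""
    c = np.asarray(c, dtype=float).copy()
    n = 1
    while n < c.size:
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2.0)
        c[1:2 * n:2] = (a - d) / np.sqrt(2.0)
        n *= 2
    return c

def compress_kernel(row, keep_frac=0.1):
    """Zero all but the largest-magnitude Haar coefficients of a row."""
    c = haar_1d(row)
    cutoff = np.quantile(np.abs(c), 1.0 - keep_frac)
    c[np.abs(c) < cutoff] = 0.0
    return c

# Illustrative smooth gravity-like kernel (1/distance falloff)
row = 1.0 / (1.0 + np.abs(np.arange(256) - 100.0))
c_sparse = compress_kernel(row)
row_approx = inv_haar_1d(c_sparse)
```

Since the Haar basis is orthonormal, row @ x equals haar(row) @ haar(x); sparse transformed rows therefore give fast approximate matrix-vector products inside the misfit minimization, which is the source of the reported speed and memory gains.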
Application of region selective embedded zerotree wavelet coder in CT image compression.
Li, Guoli; Zhang, Jian; Wang, Qunjing; Hu, Cungang; Deng, Na; Li, Jianping
2005-01-01
Compression is necessary in medical image preservation because of the huge data quantity. Medical images differ from common images because of their own characteristics; for example, part of the information in a CT image is useless, and saving it wastes storage resources. A region-selective EZW coder is proposed, in which only the useful part of the image is selected and compressed; tests on a CT image give good results.
Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression
Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander
2016-01-01
By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation
Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan
2014-01-01
In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new idea of integrating electrocardiogram watermarking and compression approach, which has never been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal to noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation the final results show that our algorithm is robust and feasible. PMID:24566636
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Morioka, Craig A.; Whiting, James S.; Eigler, Neal L.
1995-04-01
Image quality associated with image compression has been either arbitrarily evaluated through visual inspection, loosely defined in terms of subjective criteria such as image sharpness or blockiness, or measured by arbitrary metrics such as the mean square error between the uncompressed and compressed images. The present paper psychophysically evaluated the effect of three different compression algorithms (JPEG, full-frame, and wavelet) on human visual detection of computer-simulated low-contrast lesions embedded in real medical image noise from patient coronary angiograms. Performance in identifying the signal-present location, as measured by the d' index of detectability, decreased for all three algorithms by approximately 30% and 62% for the 16:1 and 30:1 compression ratios, respectively. We evaluated the ability of two previously proposed measures of image quality, mean square error (MSE) and normalized nearest neighbor difference (NNND), to determine the best compression algorithm. The MSE predicted significantly higher image quality for the JPEG algorithm at the 16:1 compression ratio and for both JPEG and full-frame at the 30:1 compression ratio. The NNND predicted significantly higher image quality for the full-frame algorithm at both compression ratios. These findings suggest that these two measures of image quality may lead to erroneous conclusions in evaluations and/or optimizations of image compression algorithms.
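The d' index of detectability used above can be computed, in the equal-variance signal-detection model, as the difference of z-scores of the hit and false-alarm rates. A minimal helper follows; the example rates are made up, and the study's location-identification task would map percent correct to d' through the appropriate M-alternative forced-choice model rather than this basic yes-no formula.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Equal-variance detectability index: d' = z(H) - z(F)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative numbers: symmetric 84% hits / 16% false alarms,
# then the ~30% drop in d' reported at 16:1 compression
d_uncompressed = d_prime(0.84, 0.16)
d_compressed = 0.70 * d_uncompressed
```

This makes the reported degradations concrete: a 30% drop in d' is a drop in a criterion-free sensitivity measure, not merely in percent correct.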
Orthogonal wavelets for image transmission and compression schemes: implementation and results
NASA Astrophysics Data System (ADS)
Ahmadian, Alireza; Bharath, Anil A.
1996-10-01
Diagnostic-quality medical images consume vast amounts of network time, system bandwidth and disk storage in current computer architectures. There are many ways in which the use of system and network resources may be optimized without compromising diagnostic image quality. One of these is the choice of image representation, both for storage and transfer. In this paper, we show how a particularly flexible method of image representation, based on Mallat's algorithm, leads to efficient methods of both lossy image compression and progressive image transmission. We illustrate the application of a progressive transmission scheme to medical images, and provide some examples of image refinement in a multiscale fashion. We show how thumbnail images created by a multiscale orthogonal decomposition can be optimally interpolated, in a minimum-square-error sense, based on a generalized Moore-Penrose inverse operator. In the final part of this paper, we show that the representation can provide a framework for lossy image compression, with signal/noise ratios far superior to those provided by a standard JPEG algorithm. The approach can also accommodate precision-based progressive coding. We show the results of increasing the priority of encoding a selected region of interest in a bit-stream describing a multiresolution image representation.
NASA Astrophysics Data System (ADS)
Wang, Jie; Tian, Yan; Meng, Xiangsheng; Liu, Tong
2017-02-01
The images obtained from space-based vision systems have increasingly high frame frequency and resolution, and the field of view is also growing. Due to the dramatic increase in data scale and the restriction of channel bandwidth between satellite and ground, on-orbit data compression has become the core of on-satellite data processing. This paper analyzes the new-generation static image compression standard JPEG2000 and its key two-dimensional (2D) discrete wavelet transform (DWT) technology. An FPGA (Field Programmable Gate Array) implementation of the 2D integer wavelet transform is then designed. It adopts the spatial combinative lifting algorithm (SCLA), which realizes simultaneous transformation on rows and columns. On this basis, wavelet decomposition is realized for images with a resolution of 6576x4384 (divided into 1024x1024 tiles) on the FPGA platform. The test platform is built in ISE 14.7 simulation software, and the device model is xc5vfx100t. The design has passed FPGA verification. To verify the correctness of the algorithm, the results are compared with those obtained by running MATLAB code. The experimental results show that the design is correct and the resource occupancy rate is low.
1994-07-29
Douglas (MDA). This has been extended to the use of local SVD methods and the use of wavelet packets to provide a controlled sparsening. The goal is to be...possibilities for segmenting, compression and denoising signals and one of us (GVW) is using these wavelets to study edge sets with Prof. B. Jawerth. The
Influence of non perfect impedance boundary on the bistable region in thermoacoustic interactions
NASA Astrophysics Data System (ADS)
Mohan, B.; Mariappan, S.
2017-04-01
We investigate the influence of a non-perfect impedance boundary on the bistable zone in thermoacoustic interactions of a horizontal Rijke tube. A wave-based approach is used to obtain the nonlinear dispersion relation with a frequency-dependent impedance boundary condition. The location and the time delay in the response of the heater are considered as bifurcation parameters to obtain the stability boundaries. In the presence of the non-perfect impedance boundary condition, we find that the extent of the globally unstable regime reduces and the bistable zone significantly increases. The quantitative changes in the stability boundaries and the bistable zone are investigated for different time lags. However, the nature of the bifurcation remains subcritical and unaltered for the range of time delays considered in the present study.
Accurate triplet phase determination in non-perfect crystals--a general phasing procedure.
Morelhão, Sérgio L
2003-09-01
A completely different approach to the problem of physically measuring the invariant triplet phases by three-beam X-ray diffraction is proposed. Instead of simulating the three-beam diffraction process to reproduce the experimental intensity profiles, the proposed approach makes use of a general parametric equation for fitting the profiles and extracting the triplet phase values. The inherent flexibility of the parametric equation allows its applicability to be extended to non-perfect crystals. Exploitation of the natural linear polarization of synchrotron radiation is essential for eliminating systematic errors and to provide accurate triplet phase values. Phasing procedures are suggested and demonstrative examples from simulated data are given.
Investigation of dense dispersion management optical links with non-perfect dispersion maps
NASA Astrophysics Data System (ADS)
Burdin, Vladimir A.; Andreev, Vladimir A.; Dashkov, Mikhail V.; Volkov, Kirill A.
2010-01-01
There is no doubt that dense dispersion-managed soliton (DDMS) systems are the most applicable for short-haul transport. However, in practice it is unrealistic to achieve an ideal match between the designed and installed lengths of dispersion-map fibers, owing, for example, to business constraints or optical-closure placement. Here we present results of numerical simulations of optical pulse propagation, and subsequent bit-error-ratio estimation, in a dense dispersion-managed optical link with non-perfect dispersion maps.
Wavelets on Planar Tesselations
Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.
2000-02-25
We present a new technique for progressive approximation and compression of polygonal objects in images. Our technique uses local parameterizations defined by meshes of convex polygons in the plane. We generalize a tensor product wavelet transform to polygonal domains to perform multiresolution analysis and compression of image regions. The advantage of our technique over conventional wavelet methods is that the domain is an arbitrary tessellation rather than, for example, a uniform rectilinear grid. We expect that this technique has many applications, including image compression, progressive transmission, radiosity, virtual reality, and image morphing.
NASA Astrophysics Data System (ADS)
Salvador, Rubén; Vidal, Alberto; Moreno, Félix; Riesgo, Teresa; Sekanina, Lukáš
2011-05-01
A generic bio-inspired adaptive architecture for image compression suitable for implementation in embedded systems is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm and its typical genetic operators adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. Also, this prototype implementation may serve as an accelerator for the automatic design of evolved transform coefficients which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to accommodate other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
General Dynamical Equations for Dingle's Space-Times Filled with a Charged Non-perfect Fluid
NASA Astrophysics Data System (ADS)
Hasmani, A. H.
2009-12-01
In this paper we have assumed a charged non-perfect fluid as the material content of the space-time. The expression for the "mass function" M(r, y, z, t) is obtained for the general situation, and the contributions from the Ricci tensor in the form of material energy density ρ, pressure anisotropy [(p_2 + p_3)/2 - p_1], electromagnetic field energy ℰ and the conformal Weyl tensor, viz. the energy density of the free gravitational field ε (= -3Ψ_2/4π), are made explicit. This work is an extension of the work obtained earlier by Rao and Hasmani (Math. Today XIIA:71, 1993; New Directions in Relativity and Cosmology, Hadronic Press, Nonantum, 1997) for deriving general dynamical equations for Dingle's space-times described by the most general orthogonal metric, ds^2 = exp(ν)dt^2 - exp(λ)dr^2 - exp(2α)dy^2 - exp(2β)dz^2, where ν, λ, α and β are functions of all four space-time variables r, y, z and t.
Adaptive boxcar/wavelet transform
NASA Astrophysics Data System (ADS)
Sezer, Osman G.; Altunbasak, Yucel
2009-01-01
This paper presents a new adaptive Boxcar/Wavelet transform for image compression. The Boxcar/Wavelet decomposition emphasizes the idea of average-interpolation representation, which uses dyadic averages and their interpolation to explain a special case of biorthogonal wavelet transforms (BWT). This perspective for image compression, together with the lifting scheme, offers the ability to train an optimum 2-D filter set for nonlinear prediction (interpolation) that will adapt to the context around the low-pass wavelet coefficients for reducing energy in the high-pass bands. Moreover, the filters obtained after training are observed to possess directional information with some textural clues that can provide better prediction performance. This work addresses a first step towards obtaining this new set of training-based filters in the context of the Boxcar/Wavelet transform. Initial experimental results show better subjective quality performance compared to popular 9/7-tap and 5/3-tap BWTs, with comparable results in objective quality.
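The average-interpolation idea behind the Boxcar/Wavelet decomposition can be sketched with a non-adaptive prototype: dyadic averages form the low-pass band, and details are prediction errors against a quadratic interpolation of neighbouring averages. The fixed 1/8 prediction weights below are the classical average-interpolating choice, standing in for the trained, context-adaptive 2-D filters the paper proposes.

```python
import numpy as np

def avg_interp_forward(x):
    """One level of an average-interpolation lifting step (a sketch).

    Coarse signal = pairwise means (the dyadic "boxcar" averages);
    each detail measures how far the first sample of a pair deviates
    from a quadratic prediction built from neighbouring averages.
    Edge pairs fall back to the Haar prediction (the average itself).
    """
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / 2.0
    pred = a.copy()
    pred[1:-1] = a[1:-1] + (a[:-2] - a[2:]) / 8.0
    d = x[0::2] - pred
    return a, d

def avg_interp_inverse(a, d):
    """Exact inverse: rebuild the prediction, then both pair members."""
    pred = a.copy()
    pred[1:-1] = a[1:-1] + (a[:-2] - a[2:]) / 8.0
    even = pred + d
    odd = 2.0 * a - even          # the pair mean is preserved exactly
    x = np.empty(2 * a.size)
    x[0::2], x[1::2] = even, odd
    return x
```

For locally quadratic signals the interior details vanish, which is the energy-reduction effect in the high-pass band that training a better (directional, textural) predictor then improves upon.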
NASA Astrophysics Data System (ADS)
Wang, Yu-Ping
2005-08-01
Genetic image analysis is an interdisciplinary area, which combines microscope image processing techniques with the use of biochemical probes for the detection of genetic aberrations responsible for cancers and genetic diseases. Recent years have witnessed parallel and significant progress in both image processing and genetics. On one hand, revolutionary multiscale wavelet techniques have been developed in signal processing and applied mathematics in the last decade, providing sophisticated tools for genetic image analysis. On the other hand, reaping the fruit of genome sequencing, high resolution genetic probes have been developed to facilitate accurate detection of subtle and cryptic genetic aberrations. In the meantime, however, they bring about computational challenges for image analysis. In this paper, we review the fruitful interaction between wavelets and genetic imaging. We show how wavelets offer a perfect tool to address a variety of chromosome image analysis problems. In fact, the same word "subband" has been used in the nomenclature of cytogenetics to describe the multiresolution banding structure of the chromosome, even before its appearance in the wavelet literature. The application of wavelets to chromosome analysis holds great promise in addressing several computational challenges in genetics. A variety of real world examples such as the chromosome image enhancement, compression, registration and classification will be demonstrated. These examples are drawn from fluorescence in situ hybridization (FISH) and microarray (gene chip) imaging experiments, which indicate the impact of wavelets on the diagnosis, treatments and prognosis of cancers and genetic diseases.
Schlossnagle, G.; Restrepo, J.M.; Leaf, G.K.
1993-12-01
The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted against their counterparts which form a basis for L{sup 2}(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate by comparison with Fourier spectral methods the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and several tabulated values are included.
Wavelet Transforms in Parallel Image Processing
1994-01-27
Report topics include object segmentation, texture segmentation, image compression, image halftoning, neural networks, and parallel algorithms for 2-D and 3-D data; vector quantization of wavelet transform coefficients; and adaptive image halftoning based on wavelets, in which the gray information at a pixel, including its gray value and gradient, is represented by wavelet coefficients. (137 pages.)
Wavelets, signal processing and matrix computations
NASA Astrophysics Data System (ADS)
Suter, Bruce W.
1994-09-01
Key scientific results were found in the following four areas: (1) multidimensional Malvar wavelets; (2) time/spatial varying filter banks; (3) vector filter banks and vector-valued wavelets; and (4) multirate time-frequency. These results have opened the following new areas of research: nonseparable multidimensional Malvar wavelets, vector-valued wavelets and vector filter banks, and multirate time-frequency analysis. These results also provide fundamental tools in many Air Force and industrial applications, such as modeling of turbulence, compression of images/video images, etc.
Wavelet theory and its applications
Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.
1996-07-01
This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed- parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.
Optical Wavelet Signals Processing and Multiplexing
NASA Astrophysics Data System (ADS)
Cincotti, Gabriella; Moreolo, Michela Svaluto; Neri, Alessandro
2005-12-01
We present compact integrable architectures to perform the discrete wavelet transform (DWT) and the wavelet packet (WP) decomposition of an optical digital signal, and we show that the combined use of planar lightwave circuit (PLC) technology and multiresolution analysis (MRA) can add flexibility to current multiple-access optical networks. We furnish design guidelines to synthesize wavelet filters as two-port lattice-form planar devices, and we give some examples of optical signal denoising and compression/decompression techniques in the wavelet domain. Finally, we present a fully optical wavelet packet division multiplexing (WPDM) scheme where data signals are waveform-coded onto wavelet atom functions for transmission, and numerically evaluate its performance.
Optical wavelet transform for fingerprint identification
NASA Astrophysics Data System (ADS)
MacDonald, Robert P.; Rogers, Steven K.; Burns, Thomas J.; Fielding, Kenneth H.; Warhola, Gregory T.; Ruck, Dennis W.
1994-03-01
The Federal Bureau of Investigation (FBI) has recently sanctioned a wavelet fingerprint image compression algorithm developed for reducing storage requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical fingerprint identification process. Wavelet filters are created from computer- generated holograms of biorthogonal wavelets, the same wavelets implemented in the FBI algorithm. Using a detour phase holographic technique, a complex binary filter mask is created with both symmetry and linear phase. The wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial Light Modulator and the biorthogonal wavelet filters. A telescopic lens combination scales the transformed fingerprint onto the filters, providing a means of adjusting the biorthogonal wavelet filter dilation continuously. The wavelet transformed fingerprint is then applied to an optical fingerprint identification process. Comparison between normal fingerprints and wavelet transformed fingerprints shows improvement in the optical identification process, in terms of rotational invariance.
Predictive depth coding of wavelet transformed images
NASA Astrophysics Data System (ADS)
Lehtinen, Joonas
1999-10-01
In this paper, a new prediction-based method for lossy wavelet image compression, predictive depth coding, is presented. It compresses a wavelet pyramid decomposition by predicting the number of significant bits in each wavelet coefficient quantized by universal scalar quantization, and then coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients at the coarser scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW, and context-based algorithms. Even though the algorithm is very simple and does not require any extra memory, the compression results are relatively good.
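The bookkeeping at the heart of predictive depth coding, counting significant bits and coding only the prediction error, can be sketched as follows. This is an illustrative simplification: the paper's prediction context is 2-D and spans scales and orientations, while the one-dimensional left-neighbor predictor below is a hypothetical stand-in.

```python
def depth(c):
    """Number of significant bits of a quantized integer coefficient."""
    return abs(int(c)).bit_length()

def predict_depths(row):
    """Predict each coefficient's depth from its left neighbour and return
    the prediction errors (which would then be entropy coded)."""
    errors = []
    prev = 0  # depth of the previous coefficient; 0 before the first
    for c in row:
        d = depth(c)
        errors.append(d - prev)
        prev = d
    return errors
```

Small prediction errors cluster near zero, which is what makes the subsequent arithmetic coding effective.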
Wavelet transform based watermark for digital images.
Xia, X G; Boncelet, C; Arce, G
1998-12-07
In this paper, we introduce a new multiresolution watermarking method for digital images. The method is based on the discrete wavelet transform (DWT). Pseudo-random codes are added to the large coefficients at the high- and middle-frequency bands of the DWT of an image. It is shown that this method is more robust than previously proposed methods to some common image distortions, such as wavelet-transform-based image compression, image rescaling/stretching, and image halftoning. Moreover, the method is hierarchical.
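The embedding rule described above, perturbing only the large coefficients with a pseudo-random sequence, might be sketched as below. The threshold, the strength parameter `alpha`, and the Gaussian pseudo-random code are illustrative assumptions, not the authors' exact scheme.

```python
import random

def embed_watermark(coeffs, threshold, alpha, seed=42):
    """Add a seeded pseudo-random sequence to coefficients whose magnitude
    exceeds `threshold` (a stand-in for the 'large high/middle-band DWT
    coefficients' of the abstract); small coefficients pass unchanged."""
    rng = random.Random(seed)
    marked = []
    for c in coeffs:
        if abs(c) > threshold:
            marked.append(c + alpha * rng.gauss(0, 1) * abs(c))
        else:
            marked.append(c)
    return marked
```

Detection would regenerate the same sequence from the seed and correlate it against the received coefficients.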
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
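The compression idea above, transform, keep only a small fraction of the largest-magnitude wavelet coefficients, and reconstruct, can be illustrated with a single-level Haar transform (a minimal stand-in for the wavelet bases used in the paper):

```python
def haar_forward(x):
    """One level of the Haar DWT: pairwise averages (low) and differences (high)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    dif = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return avg, dif

def haar_inverse(avg, dif):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

def truncate(coeffs, keep):
    """Zero all but the `keep` largest-magnitude coefficients."""
    order = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))
    kept = set(order[:keep])
    return [c if i in kept else 0.0 for i, c in enumerate(coeffs)]
```

A field with one localized feature survives aggressive truncation of the detail band, which is the behavior the abstract reports for the error correlation field.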
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-lambda), where r is display visual resolution in pixels/degree, and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
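The level-to-frequency relation quoted above, f = r * 2^(-lambda), is a one-line computation; a minimal sketch (the function name is ours):

```python
def wavelet_spatial_frequency(r, lam):
    """Spatial frequency in cycles/degree of a level-`lam` DWT band on a
    display with visual resolution `r` pixels/degree: f = r * 2**(-lam)."""
    return r * 2.0 ** (-lam)
```

Each additional decomposition level halves the band's spatial frequency, which per the measured thresholds makes its quantization noise easier to see.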
Wavelet-aided pavement distress image processing
NASA Astrophysics Data System (ADS)
Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen
2003-11-01
A wavelet-based pavement distress detection and evaluation method is proposed. This method consists of two main parts, real-time processing for distress detection and offline processing for distress evaluation. The real-time processing part includes wavelet transform, distress detection and isolation, and image compression and noise reduction. When a pavement image is decomposed into different frequency subbands by wavelet transform, the distresses, which are usually irregular in shape, appear as high-amplitude wavelet coefficients in the high-frequency detail subbands, while the background appears in the low-frequency approximation subband. Two statistical parameters, high-amplitude wavelet coefficient percentage (HAWCP) and high-frequency energy percentage (HFEP), are established and used as criteria for real-time distress detection and distress image isolation. For compression of isolated distress images, a modified EZW (Embedded Zerotrees of Wavelet coding) is developed, which can simultaneously compress the images and reduce the noise. The compressed data are saved to the hard drive for further analysis and evaluation. The offline processing includes distress classification, distress quantification, and reconstruction of the original image for distress segmentation, distress mapping, and maintenance decision-making. The compressed data are first loaded and decoded to obtain wavelet coefficients. The Radon transform is then applied, and the parameters related to the peaks in the Radon domain are used for distress classification. For distress quantification, a norm is defined that can be used as an index for evaluating the severity and extent of the distress. Compared to visual or manual inspection, the proposed method has the advantages of being objective, high-speed, safe, automated, and applicable to different types of pavements and distresses.
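The two detection statistics can be sketched directly from their definitions; the amplitude threshold and the exact subband bookkeeping in the paper are more involved, so treat this as an assumed minimal form:

```python
def hawcp(detail_coeffs, amp_threshold):
    """High-amplitude wavelet coefficient percentage: fraction (in %) of
    detail coefficients whose magnitude exceeds amp_threshold."""
    n = sum(1 for c in detail_coeffs if abs(c) > amp_threshold)
    return 100.0 * n / len(detail_coeffs)

def hfep(detail_coeffs, approx_coeffs):
    """High-frequency energy percentage: detail-band energy as a share (in %)
    of total energy across detail and approximation bands."""
    e_hi = sum(c * c for c in detail_coeffs)
    e_lo = sum(c * c for c in approx_coeffs)
    return 100.0 * e_hi / (e_hi + e_lo)
```

A distress-free image yields low values for both statistics, so frames exceeding chosen cutoffs are flagged and isolated for offline evaluation.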
Construction of compactly supported biorthogonal wavelet based on Human Visual System
NASA Astrophysics Data System (ADS)
Hu, Haiping; Hou, Weidong; Liu, Hong; Mo, Yu L.
2000-11-01
As an important analysis tool, the wavelet transform has seen great development in image compression coding since Daubechies constructed a family of compactly supported orthogonal wavelets and Mallat presented a fast pyramid algorithm for wavelet decomposition and reconstruction. In order to raise the compression ratio and improve the visual quality of reconstruction, it becomes very important to find a wavelet basis that fits the human visual system (HVS). The Marr wavelet, as is known, is not compactly supported, so it is not suitable for implementation in image compression coding. In this paper, a new method is provided to construct a kind of compactly supported biorthogonal wavelet based on the human visual system: we employ a genetic algorithm to construct compactly supported biorthogonal wavelets that approximate the modulation transfer function of the HVS. The newly constructed wavelet is applied to image compression coding in our experiments. The experimental results indicate that the visual quality of reconstruction with the new wavelet is equivalent to that of other compactly supported biorthogonal wavelets at the same bit rate. It has good reconstruction performance, especially for texture image compression coding.
Multiresolution With Super-Compact Wavelets
NASA Technical Reports Server (NTRS)
Lee, Dohyung
2000-01-01
The solution data computed from large-scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demands a corresponding huge penalty in I/O time, rendering time, and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which has a rather simple setting, computational field simulation data needs more careful treatment when applying the multiresolution technique. While image data sits on a regularly spaced grid, simulation data usually resides on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, numerical solutions have smoothness almost everywhere and discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of accuracy.
Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Sharpley, Robert C.
1999-01-01
This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) Wavelet based methodologies for the compression, transmission, decoding, and visualization of three dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) Methodologies for interactive algorithm steering for the acceleration of large scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet based Particle Image Velocity algorithms and reduced order input-output models for nonlinear systems by utilizing wavelet approximations.
Performance analysis of embedded-wavelet coders
NASA Astrophysics Data System (ADS)
Yang, Shih-Hsuan; Liao, Wu-Jie
2005-09-01
We analyze the design issues for the SPIHT (set partitioning in hierarchical trees) coding, one of the best-regarded embedded-wavelet-based algorithms in the literature. Equipped with the multiresolution decomposition, progressive scalar quantization, and adaptive arithmetic coding, SPIHT generates highly compact scalable bitstreams suitable for real-time multimedia applications. The design parameters at each stage of SPIHT greatly influence its performance in terms of compression efficiency and computational complexity. We first evaluate two important classes of wavelet filters, orthogonal and biorthogonal. Orthogonal filters are energy-preserving, while biorthogonal linear-phase filters allow symmetric extension across the boundary. Among the various properties of wavelets pertaining to coding, we investigate the effects of energy compaction, energy conservation, and symmetric extension, respectively. Second, the magnitude of biorthogonal wavelet coefficients may not faithfully reflect their actual significance. We explore a scaling scheme in quantization that minimizes the overall mean squared error. Finally, the contribution of entropy coding is measured.
Implementing a global DEM database on the sphere based on spherical wavelets
NASA Astrophysics Data System (ADS)
Zhao, Di; Zhao, Xuesheng; Shan, Shigang; Yao, Liangjun
2010-11-01
Wavelets have been proven to be an exceedingly powerful and highly efficient tool for fast computational algorithms in the fields of image data analysis and compression. Traditionally, the classical constructed wavelets are often employed to Euclidean infinite domains (such as the real line R and plane R2). In this paper, a spherical wavelet constructed for discrete DEM data based on the sphere is approached. Firstly, the discrete biorthogonal spherical wavelet with custom properties is constructed with the lifting scheme based on wavelet toolbox in Matlab. Then, the decomposition and reconstruction algorithms are proposed for efficient computation and the related wavelet coefficients are obtained. Finally, different precise images are displayed and analyzed at the different percentage of wavelet coefficients. The efficiency of this spherical wavelet algorithm is tested by using the GTOPO30 DEM data and the results show that at the same precision, the spherical wavelet algorithm consumes smaller storage volume. The results are good and acceptable.
Image encoding with triangulation wavelets
NASA Astrophysics Data System (ADS)
Hebert, D. J.; Kim, HyungJun
1995-09-01
We demonstrate some wavelet-based image processing applications of a class of simplicial grids arising in finite element computations and computer graphics. The cells of a triangular grid form the set of leaves of a binary tree and the nodes of a directed graph consisting of a single cycle. The leaf cycle of a uniform grid forms a pattern for pixel image scanning and for coherent computation of coefficients of splines and wavelets. A simple form of image encoding is accomplished with a 1D quadrature mirror filter whose coefficients represent an expansion of the image in terms of 2D Haar wavelets with triangular support. A combination of the leaf cycle and an inherent quadtree structure allows efficient neighbor finding, grid refinement, tree pruning, and storage. Pruning of the simplex tree yields a partially compressed image which requires no decoding, but rather may be rendered as a shaded triangulation. This structure and its generalization to n dimensions form a convenient setting for wavelet analysis and computations based on simplicial grids.
Wavelet Analysis of Space Solar Telescope Images
NASA Astrophysics Data System (ADS)
Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian
2003-12-01
The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel capacity. Therefore, the data must be compressed before transmission. Wavelet analysis is a technique developed over the last 10 years, with great potential for application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.
Visibility of Wavelet Quantization Noise
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)
1995-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
Application specific compression : final report.
Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.
2008-12-01
With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
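The zeroing strategy described above can be sketched in a few lines; the threshold choice is an assumption here (the report ties it to the noise-like high-frequency, low-amplitude portion of the signal):

```python
def denoise_compress(coeffs, threshold):
    """Zero low-amplitude (noise-like) wavelet coefficients. The resulting
    sparse vector has lower entropy, so a subsequent lossless coder achieves
    a higher compression ratio. Returns the coefficients and the percentage
    of coefficients zeroed."""
    out = [c if abs(c) >= threshold else 0.0 for c in coeffs]
    zeroed = sum(1 for c in out if c == 0.0)
    return out, 100.0 * zeroed / len(coeffs)
```

Large target signatures (high-amplitude coefficients) pass through untouched, which is why target detection was unaffected even with most coefficients zeroed.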
Optimization technology of 9/7 wavelet lifting scheme on DSP*
NASA Astrophysics Data System (ADS)
Chen, Zhengzhang; Yang, Xiaoyuan; Yang, Rui
2007-12-01
Nowadays the wavelet transform has become one of the most effective transforms in image processing, especially the biorthogonal 9/7 wavelet filters proposed by Daubechies, which perform well in image compression. This paper studies the implementation and optimization of the 9/7 wavelet lifting scheme on a DSP platform, including carrying out the wavelet lifting steps in fixed-point arithmetic instead of time-consuming floating-point operations, adopting pipelining to improve the iteration procedure, reducing the number of multiplications by simplifying the normalization step of the two-dimensional wavelet transform, and improving the storage format and ordering of wavelet coefficients to reduce memory consumption. Experimental results show that these implementation and optimization techniques can improve the wavelet lifting algorithm's efficiency by more than 30 times, establishing a technical foundation for developing a real-time remote-sensing image compression system in the future.
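For reference, a minimal floating-point sketch of the 9/7 lifting steps that the paper optimizes (the paper's contribution is the fixed-point DSP implementation, not shown here). The lifting coefficients are the standard CDF 9/7 values; simple mirror boundary handling is our assumption:

```python
# Standard CDF 9/7 lifting coefficients (predict/update) and scaling factor.
A, B = -1.586134342, -0.05298011854
G, D = 0.8829110762, 0.4435068522
K = 1.149604398

def _lift(x, coef, odd):
    """In-place lifting step: update odd (or even) samples from neighbors,
    mirroring at the signal boundaries."""
    n = len(x)
    for i in range(1 if odd else 0, n, 2):
        left = x[i - 1] if i - 1 >= 0 else x[i + 1]
        right = x[i + 1] if i + 1 < n else x[i - 1]
        x[i] += coef * (left + right)

def cdf97_forward(signal):
    """One level of the 9/7 DWT on an even-length signal: four lifting
    steps, then split into scaled low-pass and high-pass bands."""
    x = list(signal)
    _lift(x, A, odd=True)   # predict 1
    _lift(x, B, odd=False)  # update 1
    _lift(x, G, odd=True)   # predict 2
    _lift(x, D, odd=False)  # update 2
    low = [x[i] / K for i in range(0, len(x), 2)]
    high = [x[i] * K for i in range(1, len(x), 2)]
    return low, high

def cdf97_inverse(low, high):
    """Exact inverse: undo scaling, then run the lifting steps backwards."""
    x = [0.0] * (2 * len(low))
    for i, c in enumerate(low):
        x[2 * i] = c * K
    for i, c in enumerate(high):
        x[2 * i + 1] = c / K
    _lift(x, -D, odd=False)
    _lift(x, -G, odd=True)
    _lift(x, -B, odd=False)
    _lift(x, -A, odd=True)
    return x
```

Because each lifting step is individually invertible, reconstruction is exact up to floating-point rounding regardless of the boundary rule, as long as forward and inverse use the same rule.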
NASA Astrophysics Data System (ADS)
Jones, B. J. T.
Wavelet analysis has become a major tool in many aspects of data handling, whether it be statistical analysis, noise removal or image reconstruction. Wavelet analysis has worked its way into fields as diverse as economics, medicine, geophysics, music and cosmology.
Transient Detection Using Wavelets.
1995-03-01
The signal and transients are nonstationary. A new technique for the analysis of this type of signal, called the wavelet transform, was applied to artificial and real signals. A brief theoretical comparison between the short-time Fourier transform and the wavelet transform is introduced. A multiresolution analysis approach for implementing the transform was used. Computer code for the discrete wavelet transform was implemented. Different types of wavelets to use as basis functions were evaluated.
Szu, H.; Hsu, C.
1996-12-31
Human sensor systems (HSS) may be approximately described as an adaptive or self-learning version of the wavelet transform (WT), capable of learning suitable mother wavelets from several input-output associative pairs. Such an adaptive WT (AWT) is a redundant combination of mother wavelets used either to represent or to classify inputs.
Subband image encoder using discrete wavelet transform
NASA Astrophysics Data System (ADS)
Seong, Hae Kyung; Rhee, Kang Hyeon
2004-03-01
Digital communication networks such as the Integrated Services Digital Network (ISDN) and digital storage media have developed rapidly. Because of the large amount of image data, compression is the key technique in still image and video processing for transmission and storage. Digital image compression provides solutions for various image applications that require handling large amounts of data. In this paper, the proposed DWT (Discrete Wavelet Transform) filter bank has a simple architecture, yet it is designed so that a user obtains a desired compression rate as the only input parameter. Implemented on an FPGA chip, the designed encoder operates at 12 MHz.
Wavelet encoding and variable resolution progressive transmission
NASA Technical Reports Server (NTRS)
Blanford, Ronald P.
1993-01-01
Progressive transmission is a method of transmitting and displaying imagery in stages of successively improving quality. The subsampled lowpass image representations generated by a wavelet transformation suit this purpose well, but for best results the order of presentation is critical. Candidate data for transmission are best selected using dynamic prioritization criteria generated from image contents and viewer guidance. We show that wavelets are not only suitable but superior when used to encode data for progressive transmission at non-uniform resolutions. This application does not preclude additional compression using quantization of highpass coefficients, which to the contrary results in superior image approximations at low data rates.
NASA Astrophysics Data System (ADS)
van den Berg, J. C.
1999-08-01
A guided tour J. C. van den Berg; 1. Wavelet analysis, a new tool in physics J.-P. Antoine; 2. The 2-D wavelet transform, physical applications J.-P. Antoine; 3. Wavelets and astrophysical applications A. Bijaoui; 4. Turbulence analysis, modelling and computing using wavelets M. Farge, N. K.-R. Kevlahan, V. Perrier and K. Schneider; 5. Wavelets and detection of coherent structures in fluid turbulence L. Hudgins and J. H. Kaspersen; 6. Wavelets, non-linearity and turbulence in fusion plasmas B. Ph. van Milligen; 7. Transfers and fluxes of wind kinetic energy between orthogonal wavelet components during atmospheric blocking A. Fournier; 8. Wavelets in atomic physics and in solid state physics J.-P. Antoine, Ph. Antoine and B. Piraux; 9. The thermodynamics of fractals revisited with wavelets A. Arneodo, E. Bacry and J. F. Muzy; 10. Wavelets in medicine and physiology P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, M. G. Rosenblum and H. E. Stanley; 11. Wavelet dimension and time evolution Ch.-A. Guérin and M. Holschneider.
NASA Astrophysics Data System (ADS)
van den Berg, J. C.
2004-03-01
A guided tour J. C. van den Berg; 1. Wavelet analysis, a new tool in physics J.-P. Antoine; 2. The 2-D wavelet transform, physical applications J.-P. Antoine; 3. Wavelets and astrophysical applications A. Bijaoui; 4. Turbulence analysis, modelling and computing using wavelets M. Farge, N. K.-R. Kevlahan, V. Perrier and K. Schneider; 5. Wavelets and detection of coherent structures in fluid turbulence L. Hudgins and J. H. Kaspersen; 6. Wavelets, non-linearity and turbulence in fusion plasmas B. Ph. van Milligen; 7. Transfers and fluxes of wind kinetic energy between orthogonal wavelet components during atmospheric blocking A. Fournier; 8. Wavelets in atomic physics and in solid state physics J.-P. Antoine, Ph. Antoine and B. Piraux; 9. The thermodynamics of fractals revisited with wavelets A. Arneodo, E. Bacry and J. F. Muzy; 10. Wavelets in medicine and physiology P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, M. G. Rosenblum and H. E. Stanley; 11. Wavelet dimension and time evolution Ch.-A. Guérin and M. Holschneider.
Image coding based on energy-sorted wavelet packets
NASA Astrophysics Data System (ADS)
Kong, Lin-Wen; Lay, Kuen-Tsair
1995-04-01
The discrete wavelet transform performs multiresolution analysis, which effectively decomposes a digital image into components with different degrees of detail. In practice, it is usually implemented in the form of filter banks. If the filter banks are cascaded and both the low-pass and the high-pass components are further decomposed, a wavelet packet is obtained. The coefficients of the wavelet packet effectively represent subimages at different resolution levels. In the energy-sorted wavelet-packet decomposition, all subimages in the packet are sorted according to their energies. The most important subimages, as measured by energy, are preserved and coded. By investigating the histogram of each subimage, it is found that the pixel values are well modelled by the Laplacian distribution. Therefore, Laplacian quantization is applied to quantize the subimages. Experimental results show that the image coding scheme based on wavelet packets achieves a high compression ratio while preserving satisfactory image quality.
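The energy-sorting idea above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: one level of a 2-D Haar packet split on a synthetic array, with subimages ranked by energy (the subband names and the keep-count of two are arbitrary choices for the sketch).

```python
import numpy as np

def haar_packet_2d(block):
    """One level of a 2-D Haar wavelet packet split of an even-sized
    array into four subimages (LL, LH, HL, HH), orthonormally scaled."""
    a = block[0::2, 0::2]; b = block[0::2, 1::2]
    c = block[1::2, 0::2]; d = block[1::2, 1::2]
    return {"LL": (a + b + c + d) / 2.0,
            "LH": (a - b + c - d) / 2.0,
            "HL": (a + b - c - d) / 2.0,
            "HH": (a - b - c + d) / 2.0}

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))
leaves = haar_packet_2d(image)

# Sort the subimages by energy and keep only the most energetic ones.
by_energy = sorted(leaves, key=lambda k: np.sum(leaves[k] ** 2), reverse=True)
kept = by_energy[:2]
```

Because the split is orthonormal, the subband energies sum exactly to the image energy, which is what makes energy a meaningful importance measure here.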
Wavelet transforms and filter banks in digital communications
NASA Astrophysics Data System (ADS)
Lindsey, Alan R.; Medley, Michael J.
1996-03-01
Within the past few years, wavelet transforms and filter banks have received considerable attention in the technical literature, prompting applications in a variety of disciplines including applied mathematics, speech and image processing and compression, medical imaging, geophysics, signal processing, and information theory. More recently, several researchers in the field of communications have developed theoretical foundations for applications of wavelets as well. The objective of this paper is to survey the connections of wavelets and filter banks to communication theory and summarize current research efforts.
A dual Fourier-wavelet domain authentication-identification watermark.
Ahmed, Farid
2007-04-16
A dual Fourier-wavelet domain watermarking technique for authentication and identity verification is proposed. Discrete wavelet transform (DWT) domain spread spectrum is used for embedding identity information (such as a registration number or transaction ID). While a blind detector can detect an ID, it is important to validate it against other ancillary data. To satisfy that requirement, we embed a robust signature, hiding it in a mid-band wavelet subband using a Fourier-domain bit-embedding algorithm. Results are furnished to show the compression tolerance of the method.
Option pricing from wavelet-filtered financial series
NASA Astrophysics Data System (ADS)
de Almeida, V. T. X.; Moriconi, L.
2012-10-01
We perform a wavelet decomposition of high-frequency financial time series into large and small time-scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small-scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet filtering is related to the fact that the non-Gaussian statistical structure of the original financial time series is essentially preserved for expiration times larger than just one trading day.
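The low-pass filtering described above can be sketched with an orthonormal Haar transform. This is a toy illustration on a synthetic random walk, not the authors' FTSE100 pipeline; the number of decomposition levels (8) is an arbitrary choice.

```python
import numpy as np

def haar_dwt(x):
    """One orthonormal Haar analysis step: approximation and detail."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # large-scale component
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # small-scale component
    return s, d

def haar_idwt(s, d):
    """Exact inverse of one Haar analysis step."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(size=1024))   # stand-in for a price series

# Cascade the analysis step; zeroing every detail band on the way back
# keeps only the coarse trend, i.e. a tiny fraction of the coefficients.
s = series
details = []
for _ in range(8):
    s, d = haar_dwt(s)
    details.append(d)

trend = s
for d in reversed(details):
    trend = haar_idwt(trend, np.zeros_like(d))   # discard small scales
```

After 8 levels, only 4 of the 1024 approximation coefficients survive, mirroring the heavy compression the abstract reports.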
Applications of continuous and orthogonal wavelet transforms to MHD and plasma turbulence
NASA Astrophysics Data System (ADS)
Farge, Marie; Schneider, Kai
2016-10-01
Wavelet analysis and compression tools are presented and different applications to the study of MHD and plasma turbulence are illustrated. We use the continuous and the orthogonal wavelet transform to develop several statistical diagnostics based on the wavelet coefficients. We show how to extract coherent structures out of fully developed turbulent flows using wavelet-based denoising, and describe multiscale numerical simulation schemes using wavelets. Several examples of analyzing, compressing and computing one-, two- and three-dimensional turbulent MHD or plasma flows are presented. Details can be found in M. Farge and K. Schneider, Wavelet transforms and their applications to MHD and plasma turbulence: a review. Support by the French Research Federation for Fusion Studies within the framework of the European Fusion Development Agreement (EFDA) is gratefully acknowledged.
Embedded wavelet video coding with error concealment
NASA Astrophysics Data System (ADS)
Chang, Pao-Chi; Chen, Hsiao-Ching; Lu, Ta-Te
2000-04-01
We present an error-concealed embedded wavelet (ECEW) video coding system for transmission over the Internet or wireless networks. This system consists of two types of frames: intra (I) frames and inter, or predicted (P), frames. Inter frames are constructed from the residual frames formed by variable block-size multiresolution motion estimation (MRME). Motion vectors are compressed by arithmetic coding. The image data of intra frames and residual frames are coded by error-resilient embedded zerotree wavelet (ER-EZW) coding. ER-EZW coding partitions the wavelet coefficients into several groups, and each group is coded independently, so the propagation effect of an error is confined to a single group; in plain EZW coding, any single error may render the entire bitstream undecodable. To further reduce the error damage, we use error concealment at the decoding end. In intra frames, erroneous wavelet coefficients are replaced by their neighbors. In inter frames, erroneous blocks of wavelet coefficients are replaced by data from the previous frame. Simulations show that the performance of ECEW is superior to ECEW without error concealment by approximately 7 to 8 dB at an error rate of 10^-3 in intra frames. The improvement is still approximately 2 to 3 dB at a higher error rate of 10^-2 in inter frames.
Filtering, Coding, and Compression with Malvar Wavelets
1993-12-01
The vocal tract is made up of the lips, mouth, and tongue. These cannot change nearly as quickly as the vocal cords can; therefore the vocal tract...fluctuates slowly in the frequency domain and has a spike in the low quefrency region. These spikes are called formant peaks and have a number of uses in...the formant corresponding to the pitch (2). The cepstrum is used to find the formants of the pitch so that this information can be removed from the
2001-03-01
A unique ASIC was designed implementing the Haar wavelet transform for image compression/decompression. ASIC operations include performing the Haar...wavelet transform on a 512 by 512 pixel image, preparing the image for transmission by quantizing and thresholding the transformed data, and...performing the inverse Haar wavelet transform, returning the original image with only minor degradation. The ASIC is based on an existing four-chip FPGA
Simultaneous denoising and compression of multispectral images
NASA Astrophysics Data System (ADS)
Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.
2013-01-01
A new technique for denoising and compression of multispectral satellite images, which removes the effect of noise on the compression process, is presented. One type of multispectral image is considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than traditional compression-only techniques.
Image coding by way of wavelets
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1993-01-01
The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.
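The integer-arithmetic property mentioned above comes from the fact that the Haar transform has an exactly reversible integer variant (the S-transform, obtained by lifting). The sketch below is a minimal illustration of that variant, independent of the paper's bit-allocation scheme.

```python
import numpy as np

def int_haar_fwd(x):
    """Reversible integer Haar (S-transform): floor-average s and
    difference d, computed entirely in integer arithmetic."""
    x = np.asarray(x, dtype=np.int64)
    a, b = x[0::2], x[1::2]
    d = a - b
    s = b + (d >> 1)          # floor average; exactly invertible
    return s, d

def int_haar_inv(s, d):
    """Exact inverse: recovers the original integers bit for bit."""
    b = s - (d >> 1)
    a = d + b
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x
```

Because the same floored quantity `d >> 1` is added on analysis and subtracted on synthesis, no rounding information is lost, which is what makes a pure integer implementation possible.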
Wavelet Analyses and Applications
ERIC Educational Resources Information Center
Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.
2009-01-01
It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
NASA Astrophysics Data System (ADS)
Elsner, Franz; Wandelt, Benjamin D.
2014-01-01
We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
Splitting algorithms for the wavelet transform of first-degree splines on nonuniform grids
NASA Astrophysics Data System (ADS)
Shumilov, B. M.
2016-07-01
For splines of first degree with nonuniform knots, a new type of wavelet with a biased support is proposed. Using splitting with respect to the even and odd knots, a new wavelet decomposition algorithm is proposed in the form of the solution of a tridiagonal system of linear algebraic equations for the wavelet coefficients. The application of the proposed implicit scheme to the point prediction of time series is investigated for the first time. Results of numerical experiments on prediction accuracy and the compression of spline wavelet decompositions are presented.
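The abstract reduces the decomposition to a tridiagonal linear system; such systems are solved in O(n) by the Thomas algorithm. The sketch below is a generic tridiagonal solver, not the paper's specific matrix (here `a` and `c` carry the sub- and super-diagonals, with `a[0]` and `c[-1]` unused).

```python
import numpy as np

def thomas(a, b, c, r):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side r) by forward elimination and
    back substitution in O(n). a[0] and c[-1] are ignored."""
    n = len(b)
    cp = np.empty(n)
    rp = np.empty(n)
    cp[0] = c[0] / b[0]
    rp[0] = r[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]          # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        rp[i] = (r[i] - a[i] * rp[i - 1]) / m
    x = np.empty(n)
    x[-1] = rp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = rp[i] - cp[i] * x[i + 1]
    return x
```

For the diagonally dominant systems typical of spline wavelet decompositions, this direct sweep is both stable and far cheaper than a general dense solve.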
Wavelet filtering for data recovery
NASA Astrophysics Data System (ADS)
Schmidt, W.
2013-09-01
For electrical wave measurements on space instruments, digital filtering and data compression on board can significantly enhance the signal and reduce the amount of data to be transferred to Earth. While the instrument's transfer function is often well known, making the application of an optimized wavelet algorithm feasible, the computational power requirements may be prohibitive, as complex floating-point operations are normally needed. This article presents a simplified approach implemented on low-power 16-bit integer processors, used for plasma wave measurements in the SPEDE instrument on SMART-1 and for the Permittivity Probe measurements of the SESAME/PP instrument on Rosetta's Philae lander on its way to comet 67P/Churyumov-Gerasimenko.
NASA Astrophysics Data System (ADS)
Yan, Jingwen; Chen, Jiazhen
2007-03-01
A new hyperspectral image compression method combining spectral-feature-classification vector quantization (SFCVQ) and embedded zerotree wavelet (EZW) coding, based on the Karhunen-Loeve transform (KLT) and an integer wavelet transform, is presented. In comparison with other methods, this method not only keeps the characteristics of a high compression ratio and easy real-time transmission, but also has the advantage of high computation speed. After lifting-based integer wavelet and SFCVQ coding are introduced, a system for nearly lossless compression of hyperspectral images is designed. KLT is used as a one-dimensional (1D) linear transform to remove spectral redundancy, and SFCVQ coding is applied to enhance the compression ratio. The two-dimensional (2D) integer wavelet transform is adopted for the decorrelation of 2D spatial redundancy, and EZW coding is applied to compress the data in the wavelet domain. Experimental results show that in comparison with the methods of wavelet SFCVQ (WSFCVQ), improved bi-block zerotree coding (IBBZTC) and feature spectral vector quantization (FSVQ), the peak signal-to-noise ratio (PSNR) of this method can be enhanced by over 9 dB, and the total compression performance is greatly improved.
Block-based scalable wavelet image codec
NASA Astrophysics Data System (ADS)
Bao, Yiliang; Kuo, C.-C. Jay
1999-10-01
This paper presents a high-performance block-based wavelet image coder designed to have very low implementation complexity yet rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks, where a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process, and no intermediate buffering is needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image, which gives more flexibility in the implementation. The codec has very good coding performance even when the block size is as small as 16 x 16.
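Bitplane coding of a coefficient block, as used above, can be illustrated without the context modeling. The following is a minimal sketch, not the LCBiD algorithm itself: sign bits plus magnitude bitplanes, most significant plane first, with the plane count chosen by the caller to cover the coefficient range.

```python
import numpy as np

def encode_bitplanes(coeffs, nplanes):
    """Split integer coefficients into sign bits and magnitude
    bitplanes, emitted most significant plane first."""
    mags = np.abs(coeffs)
    signs = (coeffs < 0).astype(np.uint8)
    planes = [((mags >> p) & 1).astype(np.uint8)
              for p in range(nplanes - 1, -1, -1)]
    return signs, planes

def decode_bitplanes(signs, planes):
    """Rebuild the coefficients by shifting the planes back in."""
    mags = np.zeros(signs.shape, dtype=np.int64)
    for bits in planes:
        mags = (mags << 1) | bits
    return np.where(signs == 1, -mags, mags)
```

Truncating the plane list before decoding yields a coarser reconstruction, which is exactly the mechanism behind the SNR scalability the abstract claims.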
Wavelet analysis in neurodynamics
NASA Astrophysics Data System (ADS)
Pavlov, Aleksei N.; Hramov, Aleksandr E.; Koronovskii, Aleksei A.; Sitnikova, Evgenija Yu; Makarov, Valeri A.; Ovchinnikov, Alexey A.
2012-09-01
Results obtained using continuous and discrete wavelet transforms as applied to problems in neurodynamics are reviewed, with the emphasis on the potential of wavelet analysis for decoding signal information from neural systems and networks. The following areas of application are considered: (1) the microscopic dynamics of single cells and intracellular processes, (2) sensory data processing, (3) the group dynamics of neuronal ensembles, and (4) the macrodynamics of rhythmical brain activity (using multichannel EEG recordings). The detection and classification of various oscillatory patterns of brain electrical activity and the development of continuous wavelet-based brain activity monitoring systems are also discussed as possibilities.
Sparse imaging of cortical electrical current densities via wavelet transforms.
Liao, Ke; Zhu, Min; Ding, Lei; Valette, Sébastien; Zhang, Wenbo; Dickens, Deanna
2012-11-07
While the cerebral cortex in the human brain is of functional importance, functions defined on this structure are difficult to analyze spatially due to its highly convoluted irregular geometry. This study developed a novel L1-norm regularization method using a newly proposed multi-resolution face-based wavelet method to estimate cortical electrical activities in electroencephalography (EEG) and magnetoencephalography (MEG) inverse problems. The proposed wavelets were developed based on multi-resolution models built from irregular cortical surface meshes, which were realized in this study too. The multi-resolution wavelet analysis was used to seek sparse representation of cortical current densities in transformed domains, which was expected due to the compressibility of wavelets, and evaluated using Monte Carlo simulations. The EEG/MEG inverse problems were solved with the use of the novel L1-norm regularization method exploring the sparseness in the wavelet domain. The inverse solutions obtained from the new method using MEG data were evaluated by Monte Carlo simulations too. The present results indicated that cortical current densities could be efficiently compressed using the proposed face-based wavelet method, which exhibited better performance than the vertex-based wavelet method. In both simulations and auditory experimental data analysis, the proposed L1-norm regularization method showed better source detection accuracy and less estimation errors than other two classic methods, i.e. weighted minimum norm (wMNE) and cortical low-resolution electromagnetic tomography (cLORETA). This study suggests that the L1-norm regularization method with the use of face-based wavelets is a promising tool for studying functional activations of the human brain.
Global and Local Distortion Inference During Embedded Zerotree Wavelet Decompression
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.
1996-01-01
This paper presents algorithms for inferring global and spatially local estimates of the squared-error distortion measures for the Embedded Zerotree Wavelet (EZW) image compression algorithm. All distortion estimates are obtained at the decoder without significantly compromising EZW's rate-distortion performance. Two methods are given for propagating distortion estimates from the wavelet domain to the spatial domain, thus giving individual estimates of distortion for each pixel of the decompressed image. These local distortion estimates seem to provide only slight improvement in the statistical characterization of EZW compression error relative to the global measure, unless actual squared errors are propagated. However, they provide qualitative information about the asymptotic nature of the error that may be helpful in wavelet filter selection for low bit rate applications.
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin; Clyne, John; Childs, Hank
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
Fitting coding scheme for image wavelet representation
NASA Astrophysics Data System (ADS)
Przelaskowski, Artur
1998-10-01
An efficient coding scheme for image wavelet representations in lossy compression is presented. The spatial-frequency hierarchical structure of the quantized coefficients and their statistics are analyzed to reduce redundancy. We apply a context-based linear magnitude predictor to fit a first-order conditional probability model, used in the arithmetic coding of significant coefficients, to local data characteristics and to eliminate spatial and inter-scale dependencies. Sign information is also encoded, by inter- and intra-band prediction and entropy coding of the prediction errors. The main feature of our algorithm, however, is the way zerotree structures are encoded. An additional zerotree-root symbol is included in the magnitude data stream. Moreover, four neighboring zerotree roots with a significant parent node are included in an extended high-order context model of zerotrees. Such a significant parent is marked as a significant zerotree root, and information about the distribution of these roots is coded separately. The efficiency of the presented coding scheme was tested in a dyadic wavelet decomposition scheme with two quantization procedures: a simple uniform scalar quantizer and a more complex space-frequency quantizer with adaptive data thresholding. The final results seem promising and competitive with the most effective wavelet compression methods.
Multiwavelet-transform-based image compression techniques
NASA Astrophysics Data System (ADS)
Rao, Sathyanarayana S.; Yoon, Sung H.; Shenoy, Deepak
1996-10-01
Multiwavelet transforms are a new class of wavelet transforms that use more than one prototype scaling function and wavelet in the multiresolution analysis/synthesis. The popular Geronimo-Hardin-Massopust multiwavelet basis functions have properties of compact support, orthogonality, and symmetry which cannot be obtained simultaneously in scalar wavelets. The performance of multiwavelets in still image compression is studied using vector quantization of multiwavelet subbands with a multiresolution codebook. The coding gain of multiwavelets is compared with that of other well-known wavelet families using performance measures such as unified coding gain. Implementation aspects of multiwavelet transforms such as pre-filtering/post-filtering and symmetric extension are also considered in the context of image compression.
Wavelet transforms with discrete-time continuous-dilation wavelets
NASA Astrophysics Data System (ADS)
Zhao, Wei; Rao, Raghuveer M.
1999-03-01
Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.
Wavelets and Multifractal Analysis
2004-07-01
See also ADM001750, Wavelets and Multifractal Analysis (WAMA) Workshop held on 19-31 July 2004. Only report-documentation and table-of-contents fragments are available, covering Detrended Fluctuation Analysis [DFA(m)], scale-independent measures, the detrended-fluctuation-analysis power-law exponent (αD) and the wavelet-transform power-law exponent.
The Discrete Wavelet Transform
1991-06-01
Only reference-list and figure-list fragments are available: "Split-Band Coding," Proc. ICASSP, May 1977, pp. 191-195; M. Vetterli, "A Theory of Multirate Filter Banks," IEEE Trans. ASSP, 35, March 1987, pp. 356... One surviving passage notes that both are special cases of a single filter bank structure, the discrete wavelet transform, the behavior of which is governed by one's choice of filters. Figure fragments: 1.1, a wavelet filter bank structure; 2.1, a diagram illustrating the dilation and...
Wavelet despiking of fractographs
NASA Astrophysics Data System (ADS)
Aubry, Jean-Marie; Saito, Naoki
2000-12-01
Fractographs are elevation maps of the fracture zone of some broken material. The technique employed to create these maps often introduces noise composed of positive or negative 'spikes' that must be removed before further analysis. Since the roughness of these maps contains useful information, it must be preserved; consequently, conventional denoising techniques cannot be employed. We use continuous and discrete wavelet transforms of these images, and the properties of wavelet coefficients related to pointwise Hölder regularity, to detect and remove the spikes.
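The despiking idea, detecting spikes as outlier fine-scale wavelet coefficients while leaving ordinary roughness untouched, can be sketched in one dimension. This is a one-level Haar illustration with a robust MAD threshold and local-median patching; the threshold factor `k` and the patch window are arbitrary choices for the sketch, not the authors' method.

```python
import numpy as np

def despike(x, k=8.0):
    """Flag spikes via outlier Haar detail coefficients, then replace
    only the affected sample pairs by a local median; all other
    samples (the genuine roughness) are left untouched."""
    x = np.asarray(x, dtype=float)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)           # fine-scale details
    sigma = np.median(np.abs(d)) / 0.6745 + 1e-12  # robust scale (MAD)
    y = x.copy()
    for i in np.flatnonzero(np.abs(d) > k * sigma):
        lo, hi = max(0, 2 * i - 4), min(len(x), 2 * i + 6)
        y[2 * i:2 * i + 2] = np.median(x[lo:hi])   # patch the spiky pair
    return y

rng = np.random.default_rng(3)
surface = rng.normal(scale=0.1, size=256)   # stand-in roughness profile
spiked = surface.copy()
spiked[100] += 50.0                         # a positive acquisition spike
cleaned = despike(spiked)
```

The median-based threshold is what keeps the roughness intact: it is computed from the bulk of the detail coefficients, so only isolated extreme excursions are touched.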
Entanglement Renormalization and Wavelets.
Evenbly, Glen; White, Steven R
2016-04-08
We establish a precise connection between discrete wavelet transforms and entanglement renormalization, a real-space renormalization group transformation for quantum systems on the lattice, in the context of free particle systems. Specifically, we employ Daubechies wavelets to build approximations to the ground state of the critical Ising model, then demonstrate that these states correspond to instances of the multiscale entanglement renormalization ansatz (MERA), producing the first known analytic MERA for critical systems.
Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram
Anant, K.S.
1997-06-01
In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P and the S phase using only information from single-station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram, which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. The methods developed in this thesis; the
Riesz wavelets and multiresolution structures
NASA Astrophysics Data System (ADS)
Larson, David R.; Tang, Wai-Shing; Weber, Eric
2001-12-01
Multiresolution structures are important in applications, but they are also useful for analyzing properties of associated wavelets. Given a nonorthogonal (multi-)wavelet in a Hilbert space, we construct a core subspace; the dilates of the core subspace then define a ladder of nested subspaces. Two questions are of fundamental importance: 1) when is the core subspace shift invariant; and, if it is, 2) when is the core subspace generated by shifts of a single vector, i.e., when does a scaling vector exist. If the wavelet generates a Riesz basis, then the answer to question 1) is yes if and only if the wavelet is a biorthogonal wavelet. Additionally, if the wavelet generates a tight frame of arbitrary frame constant, then the core subspace is shift invariant. Question 1) is still open in case the wavelet generates a non-tight frame. We also present some known results on question 2) and provide some preliminary improvements. Our analysis arises from investigating the dimension function and the multiplicity function of a wavelet; these two functions agree if the wavelet is orthogonal. Finally, we discuss how these questions are important for considering linear perturbations of wavelets. Utilizing the idea of the local commutant of a unitary system developed by Dai and Larson, we show that nearly all linear perturbations of two orthonormal wavelets form a Riesz wavelet. If in fact these wavelets correspond to a von Neumann algebra in the local commutant of a base wavelet, then the interpolated wavelet is biorthogonal. Moreover, we demonstrate that in this case the interpolated wavelets have a scaling vector if the base wavelet has a scaling vector.
Integrated system for image storage, retrieval, and transmission using wavelet transform
NASA Astrophysics Data System (ADS)
Yu, Dan; Liu, Yawen; Mu, Ray Y.; Yang, Shi-Qiang
1998-12-01
Currently, much work has been done in the area of image storage and retrieval, but overall performance has been far from practical. A highly integrated wavelet-based image management system is proposed in this paper. By integrating wavelet-based solutions for image compression and decompression, content-based retrieval, and progressive transmission, much higher performance can be achieved. The multiresolution nature of the wavelet transform has been proven to be a powerful tool to represent images: the wavelet transform decomposes the image into a set of subimages with different resolutions, from which solutions for three key aspects of image management are derived. First, the content-based image retrieval (CBIR) features of our system include the color, contour, texture, sample, keyword and topic information of images. The first four features can be naturally extracted from the wavelet transform coefficients. By scoring the similarity of users' requests with images in the database, images with higher scores are noted and the user receives feedback. Second, for image compression and decompression, assuming that details at high resolution and in diagonal directions are less visible to the human eye, a good compression ratio can be achieved. In each subimage, the wavelet coefficients are vector quantized (VQ) using the LBG algorithm, which is improved in our approach to accelerate the process. A higher compression ratio can be achieved with DPCM and an entropy coding method applied together. With the YIQ representation, color images can also be effectively compressed. Third, transmitting compressed image data places a very low load on network bandwidth, and progressive transmission is made possible by the multiresolution nature of the wavelet, which makes the system respond faster and the user interface more friendly. The system shows high overall performance by exploiting the excellent features of wavelets and integrating key aspects of image management. An
Liao, Ke; Zhu, Min; Ding, Lei
2013-08-01
The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating the compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages, and their structural compressibility was evaluated and compared using the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure promises to significantly advance EEG/MEG inverse source imaging technologies.
New approach to the design of low complexity 9/7 tap wavelet filters with maximum vanishing moments.
Naik, Ameya Kishor; Holambe, Raghunath Sambhaji
2014-12-01
In this paper, we present a novel approach for the design of 9/7 near-perfect-reconstruction wavelets that are efficient for image compression. These wavelets have maximum vanishing moments for both decomposition and reconstruction filters. Among the existing 9/7 tap wavelet filters, the Cohen-Daubechies-Feauveau (CDF) 9/7 are known to have the largest regularity. However, these wavelets have irrational coefficients, thus requiring infinite-precision implementation. Unlike state-of-the-art designs that compromise vanishing moments to attain low-complexity coefficients, our algorithm ensures both. We start with a spline function of length 5 and select the remaining factors to obtain wavelets with rationalized coefficients. By proper choice of design parameters, it is possible to find very low complexity dyadic wavelets with compact support. We suggest a near half-band criterion to attain a suitable combination of low-pass analysis and synthesis filters. The designed filter bank is found to give a significant hardware advantage as compared with existing filter pairs. Moreover, these low-complexity wavelets have characteristics similar to the standard (CDF 9/7) wavelets. The designed wavelets are tested for their suitability in applications such as image compression. Simulation results show that the designed wavelets give comparable performance on most of the benchmark images. Consequently, they can be used in applications that require fewer computations and less hardware.
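For context, the irrational-coefficient CDF 9/7 transform that these designs approximate is usually implemented with the lifting scheme; a minimal sketch using the standard JPEG2000 lifting constants follows (boundary handling is simplified to edge clamping, which still gives exact reconstruction because forward and inverse use the same rule):

```python
# CDF 9/7 lifting constants as standardized in JPEG2000 Part 1.
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA, K = 0.8829110762, 0.4435068522, 1.149604398

def _mir(a, i):
    """Clamped (edge-replicating) boundary handling for lifting steps."""
    return a[min(max(i, 0), len(a) - 1)]

def cdf97_forward(x):
    """One level of the CDF 9/7 wavelet via lifting (even-length input)."""
    s, d = list(x[0::2]), list(x[1::2])
    n = len(d)
    for i in range(n):                       # predict 1
        d[i] += ALPHA * (s[i] + _mir(s, i + 1))
    for i in range(n):                       # update 1
        s[i] += BETA * (_mir(d, i - 1) + d[i])
    for i in range(n):                       # predict 2
        d[i] += GAMMA * (s[i] + _mir(s, i + 1))
    for i in range(n):                       # update 2
        s[i] += DELTA * (_mir(d, i - 1) + d[i])
    return [v * K for v in s], [v / K for v in d]

def cdf97_inverse(s, d):
    """Exactly undo cdf97_forward by reversing the lifting steps."""
    s, d = [v / K for v in s], [v * K for v in d]
    n = len(d)
    for i in range(n):
        s[i] -= DELTA * (_mir(d, i - 1) + d[i])
    for i in range(n):
        d[i] -= GAMMA * (s[i] + _mir(s, i + 1))
    for i in range(n):
        s[i] -= BETA * (_mir(d, i - 1) + d[i])
    for i in range(n):
        d[i] -= ALPHA * (s[i] + _mir(s, i + 1))
    x = [0.0] * (2 * n)
    x[0::2], x[1::2] = s, d
    return x
```

Rationalized designs such as the one in this paper replace the irrational constants above with low-complexity (e.g. dyadic) values so each lifting step becomes shifts and adds in hardware.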
Significance-linked connected component analysis for wavelet image coding.
Chai, B B; Vass, J; Zhuang, X
1999-01-01
Recent success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 dB-0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.
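The zerotree idea underlying EZW (which SLCCA and MRWD build on) can be illustrated on a simplified coefficient tree; the 1D heap layout below is our stand-in for the 2D parent-child quadtree of subband coefficients:

```python
def dominant_pass(coeffs, threshold):
    """Emit EZW-style dominant-pass symbols over a coefficient tree.

    Simplification: coefficients are stored in heap order, so the children
    of node i are 2*i+1 and 2*i+2 (EZW proper uses a 2D quadtree).
    """
    n = len(coeffs)

    def subtree_insignificant(i):
        # True if node i and all of its descendants are below threshold.
        if i >= n:
            return True
        return (abs(coeffs[i]) < threshold
                and subtree_insignificant(2 * i + 1)
                and subtree_insignificant(2 * i + 2))

    symbols = []

    def visit(i):
        if i >= n:
            return
        c = coeffs[i]
        if abs(c) >= threshold:
            symbols.append('POS' if c >= 0 else 'NEG')
            visit(2 * i + 1); visit(2 * i + 2)
        elif subtree_insignificant(i):
            symbols.append('ZTR')  # zerotree root: whole subtree pruned
        else:
            symbols.append('IZ')   # isolated zero: must still descend
            visit(2 * i + 1); visit(2 * i + 2)

    visit(0)
    return symbols
```

The ZTR symbol is where the coding gain comes from: one symbol stands for an entire insignificant subtree, exploiting the cross-scale dependency of wavelet coefficients.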
Multiresolution stereo algorithm via wavelet representations for autonomous navigation
NASA Astrophysics Data System (ADS)
Shim, Minbo; Kurtz, John J.; Laine, Andrew F.
1999-03-01
Many autonomous vehicle navigation systems have adopted area-based stereo image processing techniques that use correlation measures to construct disparity maps as a basic obstacle detection and avoidance mechanism. Although intra-scale area-based techniques perform well in pyramid processing frameworks, significant performance enhancement and reliability improvement may be achievable using wavelet-based inter-scale correlation measures. This paper presents a novel framework, suitable for unmanned ground vehicles, to recover 3D depth information (disparity maps) from binocular stereo images. We propose a wavelet-based coarse-to-fine incremental scheme to build refined disparity maps from coarse ones, and demonstrate that usable disparity maps can be generated from sparse (compressed) wavelet coefficients. Our approach is motivated by a biological mechanism of the human visual system, where multiresolution is a known feature of perceptual visual processing. Among traditional multiresolution approaches, wavelet analysis provides a mathematically coherent and precise definition of the concept of multiresolution. The variation of resolution enables the transform to identify image signatures of objects in scale space. We use these signatures, embedded in the wavelet transform domain, to construct more detailed disparity maps at finer levels. Inter-scale correlation measures within the framework are used to identify the signature at the next finer level, since wavelet coefficients contain well-characterized evolutionary information.
Wavelet-based coding of ultraspectral sounder data
NASA Astrophysics Data System (ADS)
Garcia-Vilchez, Fernando; Serra-Sagrista, Joan; Auli-Llinas, Francesc
2005-08-01
In this paper we provide a study concerning the suitability of well-known image coding techniques originally devised for lossy compression of still natural images when applied to lossless compression of ultraspectral sounder data. We present here the experimental results of six wavelet-based widespread coding techniques, namely EZW, IC, SPIHT, JPEG2000, SPECK and CCSDS-IDC. Since the considered techniques are 2-dimensional (2D) in nature but the ultraspectral data are 3D, a pre-processing stage is applied to convert the two spatial dimensions into a single spatial dimension. All the wavelet-based techniques are competitive when compared either to the benchmark prediction-based methods for lossless compression, CALIC and JPEG-LS, or to two common compression utilities, GZIP and BZIP2. EZW, SPIHT, SPECK and CCSDS-IDC provide a very similar performance, while IC and JPEG2000 improve the compression factor when compared to the other wavelet-based methods. Nevertheless, they are not competitive when compared to a fast precomputed vector quantizer. The benefits of applying a pre-processing stage, the Bias Adjusted Reordering, prior to the coding process in order to further exploit the spectral and/or spatial correlation when 2D techniques are employed, are also presented.
Wavelet-based zerotree coding of aerospace images
NASA Astrophysics Data System (ADS)
Franques, Victoria T.; Jain, Vijay K.
1996-06-01
This paper presents a wavelet-based image coding method achieving high levels of compression. A multi-resolution subband decomposition system is constructed using quadrature mirror filters. Symmetric extension and windowing of the multi-scaled subbands are incorporated to minimize boundary effects. Next, the Embedded Zerotree Wavelet (EZW) coding algorithm is used as the data compression method. Elimination of the isolated-zero symbol for certain subbands leads to an improved EZW algorithm. Further compression is obtained with an adaptive arithmetic coder. We achieve a PSNR of 26.91 dB at 0.018 bits/pixel, 35.59 dB at 0.149 bits/pixel, and 43.05 dB at 0.892 bits/pixel for the aerospace image, Refuel.
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph
2004-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.
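The progressive bit-plane coding step can be sketched as follows: coefficient magnitudes are sent most-significant plane first, so truncating the stream at any point still yields an approximation (a toy sketch without the entropy coding the recommendation applies on top):

```python
def bitplane_encode(coeffs, planes):
    """Emit magnitude bits MSB-plane first, plus a sign per coefficient."""
    bits = []
    for p in range(planes - 1, -1, -1):     # most significant plane first
        for c in coeffs:
            bits.append((abs(c) >> p) & 1)
    signs = [1 if c >= 0 else 0 for c in coeffs]
    return signs, bits

def bitplane_decode(signs, bits, planes, received):
    """Rebuild coefficients from however many bits actually arrived."""
    n = len(signs)
    mags = [0] * n
    for k, b in enumerate(bits[:received]):
        p = planes - 1 - k // n             # which plane this bit belongs to
        mags[k % n] |= b << p
    return [m if s else -m for m, s in zip(mags, signs)]
```

Receiving only the first few planes gives a coarsely quantized reconstruction, which is exactly the rate/fidelity control the recommendation exposes to the user.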
Preliminary study of sudden commencements propagation using wavelet techniques analysis.
NASA Astrophysics Data System (ADS)
Klausner, Virginia; Stepanov, Rodion; Frick, Peter; Mendes, Odim; Domingues, Margarete
2014-05-01
We analyzed, in an exploratory way, the sudden commencement (SC) phenomena in response to the two interplanetary shocks that occurred on June 6, 2012. The goal of this work is to detect the differences in SC arrival times, in both latitudinal and longitudinal directions, from the ground location of the first magnetospheric compression. We selected 40 ground magnetic observatories distributed around the world to cover the effects of the compression of the magnetosphere over a wide range of latitudes and longitudes. It is well known that the compression of the magnetosphere produces MHD waves that propagate from the source in all directions. As methodology, the discrete wavelet transform (DWT) was used to calculate the SC arrival-time delay between the magnetic observatories. We also calculated the phase shift of their wavelet transforms in order to detect the MHD wave propagation trends. The time delays observed in the wavelet signatures of the magnetic observatories can give valuable information on the main processes responsible for the energy transfer in latitudinal and longitudinal directions. Some peculiarities detected by the wavelet analysis for each event are discussed.
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
NASA Astrophysics Data System (ADS)
Cai, De; Tan, Qiaofeng; Yan, Yingbai; Jin, Guofan; He, Qingsheng
2005-01-01
The iris, one important biometric feature, has unique advantages: it has a complex texture and remains almost unchanged over a person's lifetime. Iris recognition has therefore been widely studied for intelligent personal identification. Most researchers use wavelets as the iris feature extractor, and their systems achieve high accuracy. However, the wavelet transform is time consuming, so the problem is to enhance the useful information while maintaining high processing speed. For this reason we propose an opto-electronic system for iris recognition, exploiting the high parallelism of optics. In this system, we use eigen-images generated from optimally chosen wavelet packets to compress the iris image bank. After optical correlation between the eigen-images and the input, statistical features are extracted. Simulation shows that wavelet packet preprocessing of the input images results in a higher identification rate. This preprocessing can be performed by the optical wavelet packet transform (OWPT), a new optical transform we introduce. To generate approximations of the 2-D wavelet packet basis functions for implementing the OWPT, a mother wavelet with an associated scaling function is utilized. Using the cascade algorithm and a 2-D separable wavelet transform scheme, an optical wavelet packet filter is constructed based on the selected best bases. Inserting this filter improves the recognition performance.
Spectral Data Reduction via Wavelet Decomposition
NASA Technical Reports Server (NTRS)
Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)
2002-01-01
The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground-surface pixel by its corresponding reflected spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition can be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms to preserve high- and low-frequency features during signal decomposition, thereby preserving the peaks and valleys found in typical spectra. Compared to the most widespread dimension-reduction technique, principal component analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as a maximum likelihood method.
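The dimension-reduction idea can be sketched with repeated Haar-style local averaging of a per-pixel spectrum (an unnormalized mean variant of the true Haar approximation coefficients; our simplification):

```python
def haar_reduce(spectrum, levels):
    """Reduce a spectrum's dimensionality by keeping local-average
    (Haar-style approximation) coefficients.

    Each level halves the number of bands while broad peaks and valleys,
    the features that distinguish spectral signatures, survive.
    """
    s = list(spectrum)
    for _ in range(levels):
        s = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s) - 1, 2)]
    return s
```

Unlike PCA, this reduction needs no training covariance matrix: each pixel's spectrum is reduced independently, which is part of why it scales well to large hyperspectral cubes.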
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R.
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
Stationary wavelet transform for under-sampled MRI reconstruction.
Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M
2014-12-01
In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional lp-penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data with particular emphasis on multiple-channel acquisitions.
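The translation invariance that distinguishes the SWT from the decimated DWT can be seen in a one-level undecimated Haar transform (a simplified, circular-boundary stand-in for the wavelets used in the paper):

```python
def swt_haar(x):
    """One level of an undecimated (stationary) Haar transform.

    No decimation: the output keeps the input length, so the transform
    commutes with circular shifts, i.e. it is translation invariant.
    """
    n = len(x)
    approx = [(x[i] + x[(i + 1) % n]) / 2 for i in range(n)]
    detail = [(x[i] - x[(i + 1) % n]) / 2 for i in range(n)]
    return approx, detail
```

Because shifting the input simply shifts the coefficients, thresholding SWT coefficients does not produce the shift-dependent pseudo-Gibbs artifacts that thresholded DWT coefficients do.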
Video coding with lifted wavelet transforms and complementary motion-compensated signals
NASA Astrophysics Data System (ADS)
Flierl, Markus H.; Vandergheynst, Pierre; Girod, Bernd
2004-01-01
This paper investigates video coding with wavelet transforms applied in the temporal direction of a video sequence. The wavelets are implemented with the lifting scheme in order to permit motion compensation between successive pictures. We improve motion compensation in the lifting steps and utilize complementary motion-compensated signals. Similar to superimposed predictive coding with complementary signals, this approach improves compression efficiency. We investigate experimentally and theoretically complementary motion-compensated signals for lifted wavelet transforms. Experimental results with the complementary motion-compensated Haar wavelet and frame-adaptive motion compensation show improvements in coding efficiency of up to 3 dB. The theoretical results demonstrate that the lifted Haar wavelet scheme with complementary motion-compensated signals is able to approach the bound for bit-rate savings of 2 bits per sample and motion-accuracy step when compared to optimum intra-frame coding of the input pictures.
A multiresolution analysis for tensor-product splines using weighted spline wavelets
NASA Astrophysics Data System (ADS)
Kapl, Mario; Jüttler, Bert
2009-09-01
We construct biorthogonal spline wavelets for periodic splines which extend the notion of "lazy" wavelets for linear functions (where the wavelets are simply a subset of the scaling functions) to splines of higher degree. We then use the lifting scheme to improve the approximation properties with respect to a norm induced by a weighted inner product with a piecewise constant weight function. Using the lifted wavelets, we define a multiresolution analysis of tensor-product spline functions and apply it, as a model problem, to the compression of black-and-white images. This demonstrates that the use of a weight function makes it possible to adapt the norm to the specific problem.
Wavelet-based embedded zerotree extension to color coding
NASA Astrophysics Data System (ADS)
Franques, Victoria T.
1998-03-01
Recently, a new image compression algorithm was developed which employs the wavelet transform and a simple binary linear quantization scheme with an embedded coding technique to perform data compaction. This new family of coders, Embedded Zerotree Wavelet (EZW) coders, provides better compression performance than the current JPEG coding standard at low bit rates. Since the EZW coding algorithm emerged, all published coding results related to this technique have been on monochrome images. In this paper the author has enhanced the original coding algorithm to yield a better compression ratio, and has extended the wavelet-based zerotree coding to color images. Color imagery is often represented by several components, such as RGB, in which each component is generally processed separately. With color coding, each component could be compressed individually in the same manner as a monochrome image, therefore requiring a threefold increase in processing time. Most image coding standards instead employ decorrelated components, such as YIQ or Y, Cb, Cr, together with subsampling of the chroma components; that coding technique is employed here. Results of the coding, including reconstructed images and coding performance, are presented.
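The component decorrelation and chroma subsampling described above can be sketched as follows (BT.601 full-range constants; the paper's exact color space and subsampling ratio are assumptions here):

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr; decorrelates the channels."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_420(chroma):
    """Average each 2x2 block of a chroma plane (4:2:0-style subsampling)."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[i][j] + chroma[i][j + 1]
              + chroma[i + 1][j] + chroma[i + 1][j + 1]) / 4
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]
```

Since the eye is less sensitive to chroma detail, the subsampled Cb/Cr planes can be coded at a quarter of the luma resolution, avoiding most of the threefold cost of coding RGB separately.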
Wavelet-based Poisson solver for use in particle-in-cell simulations.
Terzić, Balsa; Pogorelov, Ilya V
2005-06-01
We report on a successful implementation of a wavelet-based Poisson solver for use in three-dimensional particle-in-cell simulations. Our method harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, the existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. We present and discuss preliminary results relating to the application of the new solver to test problems in accelerator physics and astrophysics.
NASA Astrophysics Data System (ADS)
Le, Minh Hung; Liyana-Pathirana, Ranjith
2003-06-01
Unequal error protection (UEP) codes with a wavelet-based algorithm for video compression over wideband code division multiple access (W-CDMA), additive white Gaussian noise (AWGN) and Rayleigh fading channels are analysed. Wavelets have emerged as a powerful method for compressing video sequences. The wavelet transform compression technique has been shown to be well suited to high-quality video applications, producing better quality output for the compressed frames of video. We present a spatially scalable video coding framework for MPEG-2 in which motion correspondences between successive video frames are exploited in the wavelet transform domain. The basic motivation for our coder is that motion fields are typically smooth and can be efficiently captured through a multiresolution framework. Wavelet decomposition is applied to the video frames, and the coefficients at each level are predicted from the coarser level through backward motion compensation. The proposed algorithms of the embedded zerotree wavelet (EZW) coder and the 2-D wavelet packet transform (2-D WPT) are investigated.
Hyperspectral image data compression based on DSP
NASA Astrophysics Data System (ADS)
Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin
2010-11-01
The huge data volume of hyperspectral images challenges their transmission and storage. It is necessary to find an effective method to compress the hyperspectral image. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, the integer wavelet transform and embedded zerotree wavelet (EZW) coding is proposed in this paper. We adopt a high-performance digital signal processor (DSP), the TMS320DM642, to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm running on the DSP is much faster than the same algorithm on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.
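An integer wavelet transform is what allows such a pipeline to be exactly reversible on fixed-point DSP hardware; the reversible integer Haar pair (the S-transform) is the simplest example (an illustrative sketch, not necessarily the transform used in the paper):

```python
def s_transform(x):
    """Forward reversible integer Haar (S-transform) on an even-length list.

    Floor-mean plus difference: both outputs stay integers, so the
    transform is exactly invertible with no rounding loss.
    """
    s = [(a + b) >> 1 for a, b in zip(x[0::2], x[1::2])]  # floor of the mean
    d = [a - b for a, b in zip(x[0::2], x[1::2])]         # difference
    return s, d

def s_inverse(s, d):
    """Invert s_transform exactly, mirroring the floor used in the forward."""
    x = []
    for m, diff in zip(s, d):
        b = m - (diff >> 1)   # diff >> 1 == floor(diff / 2), also for negatives
        a = diff + b
        x += [a, b]
    return x
```

Because every intermediate value is an integer, the EZW stage can code the transformed data losslessly, which is what enables the lossless path mentioned in several of the abstracts above.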
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
Correlative weighted stacking for seismic data in the wavelet domain
Zhang, S.; Xu, Y.; Xia, J.; ,
2004-01-01
Horizontal stacking plays a crucial role in modern seismic data processing, for it not only suppresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function can both produce false events caused by noise. The wavelet transform and high-order statistics are very useful methods in modern signal processing. The multiresolution analysis in wavelet theory can decompose a signal on different scales, and high-order correlation functions can suppress correlated noise, against which the conventional correlation function is ineffective. Based on the theory of the wavelet transform and high-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction using weights calculated from high-order correlation statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
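The weighted-stacking idea can be sketched with weights taken from each trace's zero-lag correlation against the plain mean stack (our simplification; the paper derives its weights from high-order correlation statistics in the wavelet domain):

```python
def weighted_stack(traces):
    """Stack NMO-corrected traces with correlation-derived weights.

    Each trace is weighted by its normalized zero-lag correlation with the
    plain mean stack; anti-correlated (noise-dominated) traces are clipped
    to zero weight so they do not create false events.
    """
    n = len(traces[0])
    mean = [sum(t[i] for t in traces) / len(traces) for i in range(n)]
    weights = []
    for t in traces:
        num = sum(a * b for a, b in zip(t, mean))
        den = (sum(a * a for a in t) * sum(b * b for b in mean)) ** 0.5
        weights.append(max(num / den, 0.0) if den else 0.0)
    wsum = sum(weights) or 1.0
    return [sum(w * t[i] for w, t in zip(weights, traces)) / wsum
            for i in range(n)]
```

A trace that opposes the consensus stack contributes nothing, whereas plain averaging would let it attenuate or distort the event.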
Wavelet Signal Processing for Transient Feature Extraction
1992-03-15
Research was conducted to evaluate the feasibility of applying Wavelets and Wavelet Transform methods to transient signal feature extraction problems... Wavelet transform techniques were developed to extract low dimensional feature data that allowed a simple classification scheme to easily separate
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
Volume holographic wavelet correlation processor
NASA Astrophysics Data System (ADS)
Feng, Wenyi; Yan, Yingbai; Jin, Guofan; Wu, Minxian; He, Qingsheng
2000-09-01
A volume holographic wavelet correlation processor is proposed and constructed for correlation identification. It is based on the theory of wavelet transforms and the mechanism of angle-multiplexing volume holographic associative storage in a photorefractive crystal. High parallelism and discrimination are achieved with the system. Our research shows that cross-talk noise is significantly reduced with wavelet filtering preprocessing. Correlation outputs can be expanded from one dimension in a conventional system to two dimensions in our system. As a result, the parallelism is greatly enhanced. Furthermore, several advantages of wavelet transforms in improving the discrimination capability of the system are described. The conventional correlation between two images is replaced by wavelet correlation between main local features extracted by an appropriate wavelet filter, which provides a sharp peak with low sidelobes. Theoretical analysis and experimental results are both given to support our conclusions. Its preliminary application to human-face recognition is studied.
Wavelet Preprocessing of Acoustic Signals
1991-12-01
This report describes results of using the wavelet transform to preprocess acoustic broadband signals in a system that discriminates between different classes of acoustic bursts. This is motivated by the similarity between the proportional bandwidth filters provided by the wavelet transform and those found in biological hearing systems. The experiment involves comparing the effects of wavelet and FFT preprocessing of acoustic signals on a statistical pattern classifier. The data used was from the DARPA Phase I database, which consists of artificially generated signals with real ocean background. The results show that the wavelet transform provided improved performance when classifying on a frame-by-frame basis.
Compression of gray-scale fingerprint images
NASA Astrophysics Data System (ADS)
Hopper, Thomas
1994-03-01
The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.
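The pipeline this abstract outlines — transform, uniform scalar quantization, zero-run encoding, entropy coding — can be sketched in miniature. This is a hypothetical illustration with made-up coefficients and bin width, not the FBI's actual WSQ quantization tables:

```python
def quantize(coeffs, step):
    """Uniform scalar quantization: map each coefficient to a bin index."""
    return [int(c / step) for c in coeffs]  # truncation toward zero

def zero_run_encode(indices):
    """Replace runs of zero bins with (0, run_length) pairs."""
    out, run = [], 0
    for q in indices:
        if q == 0:
            run += 1
        else:
            if run:
                out.append((0, run))
                run = 0
            out.append(q)
    if run:
        out.append((0, run))
    return out

coeffs = [0.1, -0.2, 3.7, 0.0, 0.05, -4.2, 0.0, 0.0, 0.0, 1.9]
symbols = zero_run_encode(quantize(coeffs, step=1.0))
# Small coefficients fall into the zero bin and collapse into short run
# symbols, which a Huffman coder would then compress further.
print(symbols)  # [(0, 2), 3, (0, 2), -4, (0, 3), 1]
```

Most wavelet coefficients of a fingerprint image are near zero, which is why the zero-run stage pays off before entropy coding.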
Compressive imaging using fast transform coding
NASA Astrophysics Data System (ADS)
Thompson, Andrew; Calderbank, Robert
2016-10-01
We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.
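The 2D Haar relation mentioned above is easy to make concrete. Below is a minimal one-level 2D Haar transform with orthonormal scaling (even-sized input assumed; the image values are made up and not from the paper):

```python
def haar2d_level(img):
    """One level of the orthonormal 2D Haar transform on a 2n x 2n image."""
    h = len(img) // 2
    LL = [[0.0] * h for _ in range(h)]  # coarse average band
    LH = [[0.0] * h for _ in range(h)]  # detail: top rows vs bottom rows
    HL = [[0.0] * h for _ in range(h)]  # detail: left vs right columns
    HH = [[0.0] * h for _ in range(h)]  # diagonal detail
    for i in range(h):
        for j in range(h):
            a, b = img[2 * i][2 * j], img[2 * i][2 * j + 1]
            c, d = img[2 * i + 1][2 * j], img[2 * i + 1][2 * j + 1]
            LL[i][j] = (a + b + c + d) / 2.0  # /2 keeps the transform orthonormal
            LH[i][j] = (a + b - c - d) / 2.0
            HL[i][j] = (a - b + c - d) / 2.0
            HH[i][j] = (a - b - c + d) / 2.0
    return LL, LH, HL, HH

LL, LH, HL, HH = haar2d_level([[1, 2], [3, 4]])
# Orthonormality preserves energy: 1+4+9+16 == 25+4+1+0
```

Because the transform is orthonormal, energy in the measurement domain equals energy in the image domain, which is what lets multi-scale measurements be related back to the image.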
ERIC Educational Resources Information Center
Bookstein, Abraham; Storer, James A.
1992-01-01
Introduces this issue, which contains papers from the 1991 Data Compression Conference, and defines data compression. The two primary functions of data compression are described, i.e., storage and communications; types of data using compression technology are discussed; compression methods are explained; and current areas of research are…
2007-11-02
Daubechies-DeVore (Cohen-Daubechies-Gulleryuz-Orchard): this encoder is optimal on all Besov classes compactly embedded into L2 (EZW, Said-Pearlman).
Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach
NASA Technical Reports Server (NTRS)
Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.
2000-01-01
The objective of this study is to investigate the use of the second generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performances of the bi-orthogonal second generation wavelet transform and the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments are compared in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256^3 and 512^3 DNS of forced homogeneous turbulence (Re_lambda = 168) and 256^3 and 512^3 DNS of decaying homogeneous turbulence (Re_lambda = 55). It is found that bi-orthogonal second generation wavelets can be used for coherent vortex extraction. The results of a priori tests indicate that second generation wavelets have better compression and the residual field is closer to Gaussian. However, it was found that the use of second generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.
Wavelet phase synchronization and chaoticity.
Postnikov, E B
2009-11-01
It has been shown that the so-called "wavelet phase" (or "time-scale") synchronization of chaotic signals is actually synchronization of smoothed functions with reduced chaotic fluctuations. This fact is based on the representation of the wavelet transform with the Morlet wavelet as a solution of the Cauchy problem for a simple diffusion equation with initial condition in a form of harmonic function modulated by a given signal. The topological background of the resulting effect is discussed. It is argued that the wavelet phase synchronization provides information about the synchronization of an averaged motion described by bounding tori instead of the fine-level classical chaotic phase synchronization.
Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.
2012-07-17
The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of the EEG, one of the most important biomedical signals, are carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by the SWT.
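Wavelet denoising of the kind described — decompose, shrink the small detail coefficients, reconstruct — can be sketched as follows. The chapter uses the SWT; for brevity this toy uses a one-level decimated Haar transform, and the signal and threshold are made up:

```python
import math

def haar_analysis(x):
    """One-level orthonormal Haar DWT of an even-length signal."""
    s = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x)//2)]
    return s, d

def haar_synthesis(s, d):
    """Invert the one-level Haar DWT."""
    x = []
    for si, di in zip(s, d):
        x.append((si + di) / math.sqrt(2))
        x.append((si - di) / math.sqrt(2))
    return x

def soft_threshold(d, t):
    """Shrink detail coefficients toward zero; noise-like ones vanish."""
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in d]

# A toy "EEG" trace: a slow trend plus small high-frequency jitter.
trend = [1, 1, 2, 2, 3, 3, 2, 2]
jitter = [0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 0.05, -0.05]
x = [v + n for v, n in zip(trend, jitter)]
s, d = haar_analysis(x)
denoised = haar_synthesis(s, soft_threshold(d, t=0.2))
# The jitter lands in the detail band and is removed; the trend survives.
```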
Atrial fibrillation detection on compressed sensed ECG.
Da Poian, Giulia; Liu, Chengyu; Bernardini, Riccardo; Rinaldo, Roberto; Clifford, Gari D
2017-06-27
Compressive sensing (CS) approaches to electrocardiogram (ECG) analysis provide efficient methods for real time encoding of cardiac activity. In doing so, it is important to assess the downstream effect of the compression on any signal processing and classification algorithms. CS is particularly suitable for low power wearable devices, thanks to its low-complexity digital or hardware implementation that directly acquires a compressed version of the signal through random projections. In this work, we evaluate the impact of CS compression on atrial fibrillation (AF) detection accuracy. We compare schemes with data reconstruction based on wavelet and Gaussian models, followed by a P&T-based identification of beat-to-beat (RR) intervals on the MIT-BIH atrial fibrillation database. A state-of-the-art AF detector is applied to the RR time series and the accuracy of the AF detector is then evaluated under different levels of compression. We also consider a new beat detection procedure which operates directly in the compressed domain, avoiding costly signal reconstruction procedures. We demonstrate that for compression ratios up to 30% the AF detector applied to RR intervals derived from the compressed signal exhibits results comparable to those achieved when employing a standard QRS detector on the raw uncompressed signals, and exhibits only a 2% accuracy drop at a compression ratio of 60%. We also show that the Gaussian-based reconstruction approach is superior in terms of AF detection accuracy, with a negligible drop in performance at compression ratios ⩽75%, compared to a wavelet approach, which exhibited a significant drop in accuracy at compression ratios ⩾65%. The results suggest that CS should be considered a plausible methodology for both efficient real time ECG compression (at moderate compression rates) and for offline analysis (at high compression rates).
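The acquisition step in CS is just a rectangular random projection: the device stores m random measurements of an n-sample segment, m << n. A minimal sketch with toy sizes and a toy sparse signal (nothing from the paper's actual setup):

```python
import random

def random_projection_matrix(m, n, seed=0):
    """Gaussian sensing matrix Phi: y = Phi x acquires the signal already compressed."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0 / m ** 0.5) for _ in range(n)] for _ in range(m)]

def measure(phi, x):
    """Compute the m compressed measurements y = Phi x."""
    return [sum(p * xi for p, xi in zip(row, x)) for row in phi]

n = 100           # samples per "ECG" segment (illustrative size)
m = 30            # 70% compression: only 30 measurements are stored
x = [0.0] * n
x[10], x[40] = 1.0, -0.5   # a sparse toy signal
y = measure(random_projection_matrix(m, n), x)
assert len(y) == m  # downstream detectors work from these m values
```

Reconstruction (wavelet- or Gaussian-model based, as compared in the paper) is the expensive part, which is why the compressed-domain beat detector the authors propose is attractive.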
The berkeley wavelet transform: a biologically inspired orthogonal wavelet transform.
Willmore, Ben; Prenger, Ryan J; Wu, Michael C-K; Gallant, Jack L
2008-06-01
We describe the Berkeley wavelet transform (BWT), a two-dimensional triadic wavelet transform. The BWT comprises four pairs of mother wavelets at four orientations. Within each pair, one wavelet has odd symmetry, and the other has even symmetry. By translation and scaling of the whole set (plus a single constant term), the wavelets form a complete, orthonormal basis in two dimensions. The BWT shares many characteristics with the receptive fields of neurons in mammalian primary visual cortex (V1). Like these receptive fields, BWT wavelets are localized in space, tuned in spatial frequency and orientation, and form a set that is approximately scale invariant. The wavelets also have spatial frequency and orientation bandwidths that are comparable with biological values. Although the classical Gabor wavelet model is a more accurate description of the receptive fields of individual V1 neurons, the BWT has some interesting advantages. It is a complete, orthonormal basis and is therefore inexpensive to compute, manipulate, and invert. These properties make the BWT useful in situations where computational power or experimental data are limited, such as estimation of the spatiotemporal receptive fields of neurons.
An Evolved Wavelet Library Based on Genetic Algorithm
Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.
2014-01-01
As the size of the images being captured increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitation of the transmission channels and preserves the image resolution without considerable loss in image quality. Many conventional image compression algorithms use a wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the process of quantization and thresholding further increases the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent the GA from this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal to noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225
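PSNR, the fitness the GA maximizes, is worth having on hand. This is the generic definition (the 255 peak assumes 8-bit images; the sample values are made up):

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, the usual image fidelity measure."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak * peak / mse)

# A uniform error of one grey level gives 10*log10(255^2) ≈ 48.13 dB.
print(psnr([10, 20, 30], [11, 21, 31]))
```

A 0.31 dB average gain, as reported, corresponds to a roughly 7% reduction in mean squared error.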
Hill, Paul; Achim, Alin; Al-Mualla, Mohammed Ebrahim; Bull, David
2016-04-11
Accurate estimation of the contrast sensitivity of the human visual system is crucial for perceptually based image processing in applications such as compression, fusion and denoising. Conventional Contrast Sensitivity Functions (CSFs) have been obtained using fixed sized Gabor functions. However, the basis functions of multiresolution decompositions such as wavelets often resemble Gabor functions but are of variable size and shape. Therefore it is not appropriate to use conventional contrast sensitivity functions in such cases. We have therefore conducted a set of psychophysical tests in order to obtain the contrast sensitivity function for a range of multiresolution transforms: the Discrete Wavelet Transform (DWT), the Steerable Pyramid, the Dual-Tree Complex Wavelet Transform (DT-CWT) and the Curvelet Transform. These measures were obtained using contrast variation of each transform's basis functions in a 2AFC experiment combined with an adapted version of the QUEST psychometric function method. The results enable future image processing applications that exploit these transforms, such as signal fusion, super-resolution processing, denoising and motion estimation, to be perceptually optimised in a principled fashion. The results are compared to an existing vision model (HDR-VDP2) and are used to show quantitative improvements within a denoising application compared to using conventional CSF values.
Neural network wavelet technology: A frontier of automation
NASA Technical Reports Server (NTRS)
Szu, Harold
1994-01-01
Neural networks are an outgrowth of interdisciplinary studies concerning the brain. These studies are guiding the field of Artificial Intelligence towards the so-called 6th Generation Computer. Enormous amounts of resources have been poured into R&D. Wavelet Transforms (WT) have replaced Fourier Transforms (FT) in wideband transient cases since the discovery of the WT in 1985. The list of successful applications includes the following: earthquake prediction; radar identification; speech recognition; stock market forecasting; FBI fingerprint image compression; and telecommunication ISDN data compression.
NASA Technical Reports Server (NTRS)
Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.
1996-01-01
Wavelet transforms, when combined with quantization and a suitable encoding, can be used to compress images effectively. In order to use them for image library systems, a compact storage scheme for quantized coefficient wavelet data must be developed with a support for fast subregion retrieval. We have designed such a scheme and in this paper we provide experimental studies to demonstrate that it achieves good image compression ratios, while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.
NASA Astrophysics Data System (ADS)
Chevrot, Sébastien; Martin, Roland; Komatitsch, Dimitri
2012-12-01
Wavelets are extremely powerful to compress the information contained in finite-frequency sensitivity kernels and tomographic models. This interesting property opens the perspective of reducing the size of global tomographic inverse problems by one to two orders of magnitude. However, introducing wavelets into global tomographic problems raises the problem of computing fast wavelet transforms in spherical geometry. Using a Cartesian cubed sphere mapping, which grids the surface of the sphere with six blocks or 'chunks', we define a new algorithm to implement fast wavelet transforms with the lifting scheme. This algorithm is simple and flexible, and can handle any family of discrete orthogonal or bi-orthogonal wavelets. Since wavelet coefficients are local in space and scale, aliasing effects resulting from a parametrization with global functions such as spherical harmonics are avoided. The sparsity of tomographic models expanded in wavelet bases implies that it is possible to exploit the power of compressed sensing to retrieve Earth's internal structures optimally. This approach involves minimizing a combination of a ℓ2 norm for data residuals and a ℓ1 norm for model wavelet coefficients, which can be achieved through relatively minor modifications of the algorithms that are currently used to solve the tomographic inverse problem.
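The lifting scheme referred to here factors a wavelet transform into split/predict/update steps that are trivially invertible by running them backwards. A generic 1D sketch with the (unnormalized) Haar predictor follows; the paper's transform operates on cubed-sphere grids, which this toy does not attempt:

```python
def haar_lift_forward(x):
    """One lifting pair for the unnormalized Haar wavelet:
    split -> predict (detail = odd - even) -> update (approx keeps the mean)."""
    even, odd = x[0::2], x[1::2]                         # split
    detail = [o - e for o, e in zip(odd, even)]          # predict
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the steps in reverse order with signs flipped."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [4, 6, 5, 9]
a, d = haar_lift_forward(x)       # a = [5.0, 7.0] (pair means), d = [2, 4]
assert haar_lift_inverse(a, d) == [4, 6, 5, 9]
```

Because each lifting step is invertible by construction, any bi-orthogonal wavelet family can be handled this way, which is the flexibility the abstract emphasizes.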
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
A generalized wavelet extrema representation
Lu, Jian; Lades, M.
1995-10-01
The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in the reconstructed signal.
Wavelet preprocessing of acoustic signals
NASA Astrophysics Data System (ADS)
Huang, W. Y.; Solorzano, M. R.
1991-12-01
This paper describes results of using the wavelet transform to preprocess acoustic broadband signals in a system that discriminates between different classes of acoustic bursts. This is motivated by the similarity between the proportional bandwidth filters provided by the wavelet transform and those found in biological hearing systems. The experiment involves comparing the effects of wavelet and FFT preprocessing of acoustic signals on a statistical pattern classifier. The data used was from the DARPA Phase 1 database, which consists of artificially generated signals with real ocean background. The results show that the wavelet transform did provide improved performance when classifying on a frame-by-frame basis. The DARPA Phase 1 database is well matched to proportional bandwidth filtering; i.e., signal classes that contain high frequencies do tend to have shorter duration in this database. It is also noted that the decreasing background levels at high frequencies compensate for the poor match of the wavelet transform for long duration (high frequency) signals.
Wavelet Analysis of Bioacoustic Scattering and Marine Mammal Vocalizations
2005-09-01
There are two distinct classes of wavelet transforms: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). The discrete wavelet transform is a compact representation of the data and is particularly useful for noise reduction and compression.
Wavelets and spacetime squeeze
NASA Technical Reports Server (NTRS)
Han, D.; Kim, Y. S.; Noz, Marilyn E.
1993-01-01
It is shown that the wavelet is the natural language for the Lorentz covariant description of localized light waves. A model for covariant superposition is constructed for light waves with different frequencies. It is therefore possible to construct a wave function for light waves carrying a covariant probability interpretation. It is shown that the time-energy uncertainty relation (Delta t)(Delta w) ≈ 1 for light waves is a Lorentz-invariant relation. The connection between photons and localized light waves is examined critically.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
Wavelet Packets in Wideband Multiuser Communications
2004-11-01
We have developed doubly orthogonal CDMA user spreading waveforms based on wavelet packets. We have also developed and evaluated a wavelet packet based design addressing intersymbol interference. Compared with the existing DFT based multicarrier CDMA systems, better performance is achieved with the wavelet packet approach.
NASA Astrophysics Data System (ADS)
Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua
1997-04-01
Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. There have been several very competitive wavelet codecs developed, namely, Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is on par with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
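Bit-plane order, used above to feed the arithmetic coder, simply emits every coefficient's most significant bit first, so truncating the stream anywhere yields a coarser but still valid approximation. A minimal sketch with made-up magnitudes and no entropy coding:

```python
def bit_planes(magnitudes, nplanes=4):
    """Emit coefficient magnitudes most-significant bit-plane first, giving
    an embedded (progressively refinable) representation."""
    planes = []
    for p in range(nplanes - 1, -1, -1):
        planes.append([(m >> p) & 1 for m in magnitudes])
    return planes

planes = bit_planes([9, 3, 12, 1])
# planes[0] is the most significant plane: [1, 0, 1, 0]
# Decoding only the first k planes reconstructs each magnitude to within 2^(4-k).
```

Arithmetic coding then exploits the fact that early planes are mostly zero, which is where most of the compression comes from.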
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
ECG data compression using Jacobi polynomials.
Tchiotsop, Daniel; Wolf, Didier; Louis-Dorr, Valérie; Husson, René
2007-01-01
Data compression is a frequent signal processing operation applied to ECG. We present here a method of ECG data compression utilizing Jacobi polynomials. ECG signals are first divided into blocks that match cardiac cycles before being decomposed on Jacobi polynomial bases. A Gauss quadrature mechanism for numerical integration is used to compute the Jacobi transform coefficients. Coefficients of small value are discarded in the reconstruction stage. For experimental purposes, we chose eight families of Jacobi polynomials. Various segmentation approaches were considered. We elaborated an efficient strategy to cancel boundary effects. We obtained interesting results compared with ECG compression by wavelet decomposition methods. Some propositions are suggested to improve the results.
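The transform-then-threshold idea can be sketched with Chebyshev polynomials, the Jacobi family with alpha = beta = -1/2, whose quadrature nodes are available in closed form. This is a toy with a made-up smooth "beat", not the authors' segmentation or their eight families:

```python
import math

def chebyshev_coeffs(f, n):
    """Expansion coefficients of f on [-1, 1] in Chebyshev polynomials,
    computed by n-point Gauss-Chebyshev quadrature."""
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    vals = [f(x) for x in nodes]
    coeffs = []
    for k in range(n):
        c = (2.0 / n) * sum(v * math.cos(math.pi * k * (j + 0.5) / n)
                            for j, v in enumerate(vals))
        coeffs.append(c / 2.0 if k == 0 else c)  # halve the constant term
    return coeffs

def reconstruct(coeffs, x):
    """Evaluate the expansion via T_k(x) = cos(k * acos(x))."""
    return sum(c * math.cos(k * math.acos(x)) for k, c in enumerate(coeffs))

# "Compress" a smooth bump by discarding small coefficients.
f = lambda x: math.exp(-4 * x * x)
c = chebyshev_coeffs(f, 16)
kept = [v if abs(v) > 1e-3 else 0.0 for v in c]   # crude thresholding
err = abs(reconstruct(kept, 0.3) - f(0.3))        # stays small
```

Because the coefficients of a smooth block decay rapidly, most can be dropped with little reconstruction error, which is the same mechanism the paper exploits per cardiac cycle.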
An Introduction to Wavelet Theory and Analysis
Miner, N.E.
1998-10-01
This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-Time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.
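The most common such algorithm is Mallat's pyramid: filter and downsample the approximation band repeatedly, then run the steps backwards to reconstruct. A compact sketch with orthonormal Haar filters (signal length must be divisible by 2^levels; the values are arbitrary):

```python
import math

def dwt_step(x):
    """One pyramid step of the orthonormal Haar DWT."""
    r2 = math.sqrt(2)
    return ([(x[2*i] + x[2*i+1]) / r2 for i in range(len(x)//2)],
            [(x[2*i] - x[2*i+1]) / r2 for i in range(len(x)//2)])

def wavedec(x, levels):
    """Forward pyramid: repeatedly split the approximation band."""
    details = []
    for _ in range(levels):
        x, d = dwt_step(x)
        details.append(d)
    return x, details

def waverec(approx, details):
    """Inverse pyramid: rebuild from coarsest to finest."""
    r2 = math.sqrt(2)
    for d in reversed(details):
        out = []
        for a, di in zip(approx, d):
            out += [(a + di) / r2, (a - di) / r2]
        approx = out
    return approx

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
a, ds = wavedec(x, 3)
y = waverec(a, ds)
assert all(abs(u - v) < 1e-12 for u, v in zip(x, y))  # perfect reconstruction
```

Each level halves the approximation, so the whole decomposition costs O(n), one of the practical advantages over the FFT's O(n log n) that such overviews point out.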
Multimedia data authentication in wavelet domain
NASA Astrophysics Data System (ADS)
Wang, Jinwei; Lian, Shiguo; Liu, Zhongxuan; Zhen, Ren; Dai, Yuewei
2006-04-01
With the wide application of multimedia data, multimedia content protection becomes urgent. Till now, various means have been reported, which can be classified into several types according to their functionalities, such as data encryption, digital watermarking or data authentication. They are used to protect multimedia data's confidentiality, ownership and integrity, respectively. For multimedia data authentication, some approaches have been proposed. In this paper, a wavelet-based multi-feature semi-fragile authentication scheme is proposed. According to the approximation component and the energy relationship between the subbands of the detail component, a global feature and a local feature are both generated. Then, the global and local watermarks are generated from the global and local features, respectively. The watermarks are then embedded into the multimedia data themselves in the wavelet domain. Both the feature extraction and embedding processes are controlled by secret keys to improve the security of the proposed scheme. At the receiver end, the extracted watermark and the one generated from the received image are compared to determine the tampered locations. A new authentication method is designed and it is proved valid in the experiments. This authentication scheme is robust to general compression, sensitive to cutting, pasting or modification, efficient in real-time operation, and secure for practical applications.
Wavelet-based rotationally invariant target classification
NASA Astrophysics Data System (ADS)
Franques, Victoria T.; Kerr, David A.
1997-07-01
In this paper, a novel approach to feature extraction for rotationally invariant object classification is proposed based directly on a discrete wavelet transformation. This form of feature extraction is equivalent to retaining information features while eliminating redundant features from images, which is a critical property when analyzing large, high dimensional images. Usually, researchers have resorted to a data pre-processing method to reduce the size of the feature space prior to classification. The proposed method employs statistical features extracted directly from the wavelet coefficients generated from a three-level subband decomposition system using a set of orthogonal and regular Quadrature Mirror Filters. This algorithm has two desirable properties: (1) It reduces the number of dimensions of the feature space necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; (2) Regardless of the target orientation, the algorithm can perform classification with low error rates. Furthermore, the filters used have performed well in the image compression regime, but they have not been applied to applications in target classification which will be demonstrated in this paper. The results of several classification experiments on variously oriented samples of the visible wavelength targets will be presented.
Dual multichannel optical wavelet transform processor
NASA Astrophysics Data System (ADS)
Feng, Wenyi; Yan, Yingbai; Jin, Guofan; Wu, Minxian; He, Qingsheng
1999-10-01
Based on the theory of volume holographic associative storage in a photorefractive crystal and that of binary optics, a compact dual multichannel optical wavelet transform processor is proposed and constructed. Both wavelet correlation and wavelet transform can be implemented by the system. Multi-pattern channels are achieved by the inherent parallelism of volume holographic storage. Angle multiplexed holograms of wavelet filtered pattern images are recorded in the crystal. Multi-wavelet channels are accomplished by a Dammann grating, which is a binary optical element for spectrum duplication. The grating is adopted to generate a set of channels with different wavelet filters. Wavelet correlation peaks in different wavelet channels are synthesized to improve the recognition accuracy by pixel-by-pixel multiplication. Wavelet transform results in different wavelet channels are stored in the crystal and can be restored for recognition or segmentation. The application of the system to human face recognition is studied.
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but it is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. One reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically vary in a piecewise smooth manner. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
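The observation in this abstract can be sketched on a toy model problem: the inverse of the 1-D Laplacian is dense but smooth, so its two-sided wavelet transform is highly compressible. The sketch below uses Haar wavelets and a simple relative threshold; the specific matrix, threshold, and basis are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix of size n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # averaging rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest-level difference rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian (elliptic model problem)
Ainv = np.linalg.inv(A)                               # dense, but piecewise smooth

W = haar_matrix(n)
C = W @ Ainv @ W.T                                    # two-sided wavelet transform of the inverse
Ct = np.where(np.abs(C) > 1e-2 * np.abs(C).max(), C, 0.0)  # drop small coefficients
M = W.T @ Ct @ W                                      # approximate inverse, sparse in the wavelet basis

sparsity = np.count_nonzero(Ct) / Ct.size
err = np.linalg.norm(M - Ainv) / np.linalg.norm(Ainv)
```

Most coefficients are discarded while the relative error stays small, which is the premise behind using wavelet compression to build the preconditioner.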
Large Scale Isosurface Bicubic Subdivision-Surface Wavelets for Representation and Visualization
Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.
2000-01-05
We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally-skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.
Wavelet Transform Signal Processing Applied to Ultrasonics.
1995-05-01
The wavelet transform is applied to the analysis of ultrasonic waves for improved signal detection and analysis of the signals. In instances where...the mother wavelet is well defined, the wavelet transform has relative insensitivity to noise and does not need windowing. Peak detection of...ultrasonic pulses using the wavelet transform is described, and results show good detection even when large white noise was added. The use of the wavelet
Wavelet networks for face processing.
Krüger, V; Sommer, G
2002-06-01
Wavelet networks (WNs) were introduced in 1992 as a combination of artificial neural radial basis function (RBF) networks and wavelet decomposition. Since then, however, WNs have received only a little attention. We believe that the potential of WNs has been generally underestimated. WNs have the advantage that the wavelet coefficients are directly related to the image data through the wavelet transform. In addition, the parameters of the wavelets in the WNs are subject to optimization, which results in a direct relation between the represented function and the optimized wavelets, leading to considerable data reduction (thus making subsequent algorithms much more efficient) as well as to wavelets that can be used as an optimized filter bank. In our study we analyze some WN properties and highlight their advantages for object representation purposes. We then present a series of results of experiments in which we used WNs for face tracking. We exploit the efficiency that is due to data reduction for face recognition and face-pose estimation by applying the optimized-filter-bank principle of the WNs.
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
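The wavelet shrinkage step this abstract relies on can be illustrated in one dimension. The sketch below uses a plain Haar decomposition and the universal soft threshold, a deliberate simplification of the paper's frequency-adaptive scheme; names and the threshold rule are assumptions for illustration.

```python
import numpy as np

def haar_dwt(x):
    """Full orthonormal 1-D Haar decomposition; per-level details plus final approximation."""
    details = []
    a = np.asarray(x, dtype=float)
    while len(a) > 1:
        details.append((a[0::2] - a[1::2]) / np.sqrt(2.0))
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    return details, a

def haar_idwt(details, a):
    """Invert haar_dwt by undoing one averaging/differencing level at a time."""
    for d in reversed(details):
        up = np.empty(2 * len(d))
        up[0::2] = (a + d) / np.sqrt(2.0)
        up[1::2] = (a - d) / np.sqrt(2.0)
        a = up
    return a

def soft_shrink(x, sigma):
    """Wavelet shrinkage: soft-threshold detail coefficients at the universal threshold."""
    details, a = haar_dwt(x)
    t = sigma * np.sqrt(2.0 * np.log(len(x)))
    details = [np.sign(d) * np.maximum(np.abs(d) - t, 0.0) for d in details]
    return haar_idwt(details, a)
```

With sigma = 0 the round trip is exact (the transform is orthonormal); with the true noise level it suppresses noise while keeping large, signal-bearing coefficients.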
An introduction to wavelet theory and application for the radiological physicist.
Harpen, M D
1998-10-01
The wavelet transform, part of a rapidly advancing new area of mathematics, has become an important technique for image compression, noise suppression, and feature extraction. As a result, the radiological physicist can expect to be confronted with elements of wavelet theory as diagnostic radiology advances into teleradiology, PACS, and computer aided feature extraction and diagnosis. With this in mind we present a primer on wavelet theory geared specifically for the radiological physicist. The mathematical treatment is free of the details of mathematical rigor, which are found in most other treatments of the subject and which are of little interest to physicists, yet is sufficient to convey a reasonably deep working knowledge of wavelet theory.
An overview of the quantum wavelet transform, focused on earth science applications
NASA Astrophysics Data System (ADS)
Shehab, O.; LeMoigne, J.; Lomonaco, S.; Halem, M.
2015-12-01
Registering the images from the MODIS system and the OCO-2 satellite is currently done with classical image registration techniques. One such technique is wavelet transformation. Besides image registration, wavelet transformation is also used in other areas of earth science, for example, processing and compressing signal variation. In this talk, we investigate the applicability of a few quantum wavelet transformation algorithms to perform image registration on the MODIS and OCO-2 data. Most of the known quantum wavelet transformation algorithms are data agnostic. We investigate their applicability in transforming the Flexible Representation for Quantum Images. Similarly, we also investigate the applicability of the algorithms in signal variation analysis. We also investigate the transformation of the models into pseudo-Boolean functions in order to implement them on commercially available quantum annealing computers, such as the D-Wave computer located at NASA Ames.
Spherical 3D isotropic wavelets
NASA Astrophysics Data System (ADS)
Lanusse, F.; Rassat, A.; Starck, J.-L.
2012-04-01
Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html
Peak finding using biorthogonal wavelets
Tan, C.Y.
2000-02-01
The authors show in this paper how to find the peaks in the input data when the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian-like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigation) wavelets when used for peak finding in noisy data. They show that in this instance their filters perform much better than the FBI wavelets.
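The core idea of matching data against Lorentzian-shaped functions can be sketched as a simple correlation detector: correlate the signal with a normalized Lorentzian template and keep strong local maxima of the response. This is a plain matched-filter stand-in, not the paper's biorthogonal filter-bank construction; the threshold and kernel support are illustrative assumptions.

```python
import numpy as np

def lorentzian(x, gamma):
    """Unit-height Lorentzian with half-width gamma."""
    return gamma**2 / (x**2 + gamma**2)

def find_peaks_lorentzian(signal, gamma=3.0, threshold=0.5):
    """Correlate with a Lorentzian template and return strong local-maximum indices."""
    half = int(10 * gamma)
    kern = lorentzian(np.arange(-half, half + 1, dtype=float), gamma)
    kern /= np.linalg.norm(kern)                     # unit-energy template
    resp = np.convolve(signal, kern[::-1], mode="same")
    return [i for i in range(1, len(resp) - 1)
            if resp[i] > resp[i - 1] and resp[i] >= resp[i + 1]
            and resp[i] > threshold * resp.max()]
```

On a synthetic sum of two Lorentzians plus white noise, the detector recovers both peak locations despite the noise, which is the behavior the abstract reports for its wavelet version.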
Tailoring wavelets for chaos control.
Wei, G W; Zhan, Meng; Lai, C-H
2002-12-31
Chaos is a class of ubiquitous phenomena and controlling chaos is of great interest and importance. In this Letter, we introduce wavelet controlled dynamics as a new paradigm of dynamical control. We find that by modifying a tiny fraction of the wavelet subspaces of a coupling matrix, we could dramatically enhance the transverse stability of the synchronous manifold of a chaotic system. Wavelet controlled Hopf bifurcation from chaos is observed. Our approach provides a robust strategy for controlling chaos and other dynamical systems in nature.
Compression of surface myoelectric signals using MP3 encoding.
Chan, Adrian D C
2011-01-01
The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
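The distortion metric this abstract compares against, the percent residual difference (PRD), has a standard form; a minimal sketch of it and of the compression ratio follows (function names are illustrative, not from the paper).

```python
import numpy as np

def percent_residual_difference(original, reconstructed):
    """PRD (%): normalized RMS distortion of a reconstructed signal vs. the original."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of uncompressed to compressed size."""
    return original_bytes / compressed_bytes
```

Lower PRD at the same compression ratio is the sense in which the abstract reports MP3 outperforming EZW and ADPCM coding of myoelectric signals.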
Efficient wavelet-based voice/data discriminator for telephone networks
NASA Astrophysics Data System (ADS)
Quirk, Patrick J.; Tseng, Yi-Chyun; Adhami, Reza R.
1996-06-01
A broad array of applications in the Public Switched Telephone Network (PSTN) requires detailed information about the type of call being carried. This information can be used to enhance service, diagnose transmission impairments, and increase available call capacity. The increase in the data rates of modems and the increased usage of speech compression in the PSTN have rendered existing detection algorithms obsolete. Wavelets, specifically the Discrete Wavelet Transform (DWT), are a relatively new analysis tool in digital signal processing. The DWT has been applied to signal processing problems ranging from speech compression to astrophysics. In this paper, we present a wavelet-based method of categorizing telephony traffic by call type. Calls are categorized as voice or data. Data calls, primarily modem and fax transmissions, are further divided by the International Telecommunication Union-Telephony (ITU-T), formerly CCITT, V-series designations (V.22bis, V.32, V.32bis, and V.34).
Wavelet domain textual coding of Ottoman script images
NASA Astrophysics Data System (ADS)
Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.
1996-02-01
Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman and Arabic. Typically, one has to deal with compound structures consisting of a group of letters; therefore, the matching criterion is defined over those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons described in the paper. In our method the compound-structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transformation, which corresponds to a linear subband decomposition, we also use a nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.
A New Approach for Fingerprint Image Compression
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published standard specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation under a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense theoretically. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.
Birdsong Denoising Using Wavelets
Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal
2016-01-01
Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
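For context on the 1.58 bits/base figure, the naive baseline for DNA is fixed 2-bit coding (four bases, four codes), sketched below. This is only the trivial baseline that repeat-aware schemes such as DNABIT Compress improve upon, not the paper's algorithm.

```python
# Baseline 2-bit packing: A,C,G,T -> 00,01,10,11 (four bases per byte).
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack a DNA string into bytes; returns (data, original length)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        b = 0
        for ch in chunk:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(chunk))   # left-align a final partial byte
        out.append(b)
    return bytes(out), len(seq)

def unpack(data, n):
    """Recover the original string from packed bytes and its length."""
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(b >> shift) & 3])
    return "".join(bases[:n])
```

Any scheme must beat this 2.0 bits/base floor by exploiting repeats, which is exactly where the abstract's unique bit codes for exact and reverse repeats come in.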
Wavelet Analysis of Protein Motion
BENSON, NOAH C.
2014-01-01
As high-throughput molecular dynamics simulations of proteins become more common and the databases housing the results become larger and more prevalent, more sophisticated methods to quickly and accurately mine large numbers of trajectories for relevant information will have to be developed. One such method, which is only recently gaining popularity in molecular biology, is the continuous wavelet transform, which is especially well-suited for time course data such as molecular dynamics simulations. We describe techniques for the calculation and analysis of wavelet transforms of molecular dynamics trajectories in detail and present examples of how these techniques can be useful in data mining. We demonstrate that wavelets are sensitive to structural rearrangements in proteins and that they can be used to quickly detect physically relevant events. Finally, as an example of the use of this approach, we show how wavelet data mining has led to a novel hypothesis related to the mechanism of the protein γδ resolvase. PMID:25484480
A new fractional wavelet transform
NASA Astrophysics Data System (ADS)
Dai, Hongzhe; Zheng, Zhibao; Wang, Wei
2017-03-01
The fractional Fourier transform (FRFT) is a potent tool for analyzing time-varying signals. However, it fails to locate the fractional Fourier domain (FRFD)-frequency contents, which is required in some applications. A novel fractional wavelet transform (FRWT) is proposed to solve this problem. It displays the time and FRFD-frequency information jointly in the time-FRFD-frequency plane. The definition, basic properties, inverse transform, and reproducing kernel of the proposed FRWT are considered. It is shown that an FRWT of the proper order corresponds to the classical wavelet transform (WT). The multiresolution analysis (MRA) associated with the developed FRWT, together with the construction of orthogonal fractional wavelets, is also presented. Three applications are discussed: the analysis of signals with time-varying frequency content, the FRFD spectrum estimation of signals involving noise, and the construction of the fractional Haar wavelet. Simulations verify the validity of the proposed FRWT.
Wavelet entropy of stochastic processes
NASA Astrophysics Data System (ADS)
Zunino, L.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.
2007-06-01
We compare two different definitions of the wavelet entropy associated with stochastic processes. The first is the normalized total wavelet entropy (NTWS) family [S. Blanco, A. Figliola, R.Q. Quiroga, O.A. Rosso, E. Serrano, Time-frequency analysis of electroencephalogram series, III. Wavelet packets and information cost function, Phys. Rev. E 57 (1998) 932-940; O.A. Rosso, S. Blanco, J. Yordanova, V. Kolev, A. Figliola, M. Schürmann, E. Başar, Wavelet entropy: a new tool for analysis of short duration brain electrical signals, J. Neurosci. Methods 105 (2001) 65-75], and the second was introduced by Tavares and Lucena [Physica A 357(1) (2005) 71-78]. In order to understand their advantages and disadvantages, exact results obtained for fractional Gaussian noise (-1 < α < 1) and fractional Brownian motion (1 < α < 3) are assessed. We find that the NTWS family performs better as a characterization method for these stochastic processes.
A wavelet phase filter for emission tomography
Olsen, E.T.; Lin, B.
1995-07-01
The presence of a high level of noise is characteristic of some tomographic imaging techniques such as positron emission tomography (PET). Wavelet methods can smooth out noise while preserving significant features of images. Mallat et al. proposed a wavelet-based denoising scheme exploiting wavelet modulus maxima, but the scheme is sensitive to noise. In this study, the authors explore the properties of the wavelet phase, with a focus on the reconstruction of emission tomography images. Specifically, they show that the wavelet phase of regular Poisson noise under a Haar-type wavelet transform converges in distribution to a random variable uniformly distributed on [0, 2π). They then propose three wavelet-phase-based denoising schemes which exploit this property: edge tracking, local phase variance thresholding, and scale phase variation thresholding. Some numerical results are also presented. The numerical experiments indicate that wavelet phase techniques show promise for wavelet-based denoising methods.
Gearbox Fault Diagnosis Using Adaptive Wavelet Filter
NASA Astrophysics Data System (ADS)
LIN, J.; ZUO, M. J.
2003-11-01
Vibration signals from a gearbox are usually noisy. As a result, it is difficult to find early symptoms of a potential failure in a gearbox. Wavelet transform is a powerful tool to disclose transient information in signals. An adaptive wavelet filter based on Morlet wavelet is introduced in this paper. The parameters in the Morlet wavelet function are optimised based on the kurtosis maximisation principle. The wavelet used is adaptive because the parameters are not fixed. The adaptive wavelet filter is found to be very effective in detection of symptoms from vibration signals of a gearbox with early fatigue tooth crack. Two types of discrete wavelet transform (DWT), the decimated with DB4 wavelet and the undecimated with harmonic wavelet, are also used to analyse the same signals for comparison. No periodic impulses appear on any scale in either DWT decomposition.
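The adaptive filtering idea in this abstract can be sketched as a small parameter search: filter the vibration signal with Morlet wavelets of varying width and frequency, and keep the output with maximum kurtosis (impulsiveness). The grid search, the particular Morlet form, and all names below are illustrative assumptions standing in for the paper's optimization.

```python
import numpy as np

def morlet(t, omega, sigma):
    """Real Morlet-style wavelet: Gaussian envelope times a cosine carrier."""
    return np.exp(-t**2 / (2 * sigma**2)) * np.cos(omega * t)

def kurtosis(x):
    """Normalized fourth moment; large for impulsive (fault-bearing) signals."""
    x = x - np.mean(x)
    return np.mean(x**4) / np.mean(x**2) ** 2

def adaptive_morlet_filter(signal, fs, omegas, sigmas):
    """Grid-search Morlet parameters; return the filtered signal maximizing kurtosis."""
    t = np.arange(-1.0, 1.0, 1.0 / fs)
    best_k, best_y = -np.inf, None
    for omega in omegas:
        for sigma in sigmas:
            w = morlet(t, omega, sigma)
            w /= np.linalg.norm(w)                    # unit-energy filter
            y = np.convolve(signal, w, mode="same")
            k = kurtosis(y)
            if k > best_k:
                best_k, best_y = k, y
    return best_y, best_k
```

Maximizing kurtosis over the wavelet parameters is the same selection principle the abstract describes for exposing early fatigue-crack impulses buried in gearbox noise.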
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics), and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
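Two of the IQ metrics named in this abstract, RMSE and PSNR, have standard closed forms; a minimal sketch for 8-bit imagery follows (SSIM is omitted here as it is considerably more involved).

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference image and its decompressed version."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    return np.sqrt(np.mean((ref - test) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for a perfect reconstruction."""
    e = rmse(ref, test)
    return np.inf if e == 0 else 20.0 * np.log10(peak / e)
```

Tabulating these metrics against the encoder's quality parameter is exactly the data the abstract's regression models are fitted to.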
Optical HAAR Wavelet Transforms using Computer Generated Holography
1992-12-17
This research introduces an optical implementation of the continuous wavelet transform to filter images. The wavelet transform is modeled as a...continuous wavelet transform was performed and that the results compared favorably to digital simulation. Wavelets, Holography, Optical correlators.
Heart Disease Detection Using Wavelets
NASA Astrophysics Data System (ADS)
González S., A.; Acosta P., J. L.; Sandoval M., M.
2004-09-01
We develop a wavelet-based method to obtain a standardized gray-scale chart of both healthy hearts and hearts suffering from left ventricular hypertrophy. The hypothesis that early malfunctioning of the heart can be detected must be tested by comparing the wavelet analysis of the corresponding ECG with the limit cases. Several important parameters must be taken into account, such as age, sex, and electrolytic changes.
Interactive Display of Surfaces Using Subdivision Surfaces and Wavelets
Duchaineau, M A; Bertram, M; Porumbescu, S; Hamann, B; Joy, K I
2001-10-03
Complex surfaces and solids are produced by large-scale modeling and simulation activities in a variety of disciplines. Productive interaction with these simulations requires that these surfaces or solids be viewable at interactive rates, yet many of these surfaces/solids can contain hundreds of millions of polygons/polyhedra. Interactive display of these objects requires compression techniques to minimize storage, and fast view-dependent triangulation techniques to drive the graphics hardware. In this paper, we review recent advances in subdivision-surface wavelet compression and optimization that can be used to provide a framework for both compression and triangulation. These techniques can be used to produce suitable approximations of complex surfaces of arbitrary topology, and can be used to determine suitable triangulations for display. The techniques can be used in a variety of applications in computer graphics, computer animation, and visualization.
A wavelet watermarking algorithm based on a tree structure
NASA Astrophysics Data System (ADS)
Guitart Pla, Oriol; Lin, Eugene T.; Delp, Edward J., III
2004-06-01
We describe a blind watermarking technique for digital images. Our technique constructs an image-dependent watermark in the discrete wavelet transform (DWT) domain and inserts the watermark into the most significant coefficients of the image. The watermarked coefficients are determined by using the hierarchical tree structure induced by the DWT, similar in concept to embedded zerotree wavelet (EZW) compression. If the watermarked image is attacked or manipulated such that the set of significant coefficients is changed, the tree structure allows the correlation-based watermark detector to recover synchronization. Our technique also uses a visually adaptive scheme to insert the watermark so as to minimize watermark perceptibility; this scheme likewise takes advantage of the tree structure. Finally, a template is inserted into the watermark to provide robustness against geometric attacks. The template detection uses the cross-ratio of four collinear points.
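The cross-ratio invariant used for template detection is a standard projective-geometry fact: for four collinear points, the ratio below is unchanged by any homography. A minimal sketch (unsigned-distance form, adequate when the points keep their order along the line):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC)/(AD/BD) of four collinear points; invariant under projective maps."""
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    def dist(p, q):
        return np.linalg.norm(p - q)
    return (dist(a, c) / dist(b, c)) / (dist(a, d) / dist(b, d))
```

This is why the template survives geometric attacks: a perspective distortion of the image moves the four template points but leaves their cross-ratio intact, so the detector can still match them.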
An Attack on Wavelet Tree Shuffling Encryption Schemes
NASA Astrophysics Data System (ADS)
Assegie, Samuel; Salama, Paul; King, Brian
With the ubiquity of the internet and advances in technology, especially digital consumer electronics, demand for online multimedia services is ever increasing. While it is possible to achieve a great reduction in the bandwidth utilization of multimedia data such as images and video through compression, security still remains a great concern. Traditional cryptographic algorithms/systems for data security are often not fast enough to process the vast amounts of data generated by multimedia applications under real-time constraints. Selective encryption is a new scheme for multimedia content protection. It involves encrypting only a portion of the data to reduce computational complexity (the amount of data to encrypt) while preserving a sufficient level of security. To achieve this, many selective encryption schemes have been presented in the literature. One of them is wavelet tree shuffling. In this paper we assess the security of a wavelet tree shuffling encryption scheme.
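The basic mechanism being attacked, shuffling wavelet coefficient trees under a key-derived permutation, can be sketched generically as below. This is an illustrative toy (a key-seeded PRNG permutation, not any specific published scheme), and, as the abstract's attack suggests, permutation alone leaves coefficient values exposed and should not be treated as secure.

```python
import random

def shuffle_blocks(coeffs, key):
    """Selective-'encryption' sketch: permute coefficient blocks with a key-seeded PRNG."""
    rng = random.Random(key)
    perm = list(range(len(coeffs)))
    rng.shuffle(perm)
    return [coeffs[i] for i in perm], perm

def unshuffle_blocks(shuffled, perm):
    """Invert shuffle_blocks given the same permutation."""
    out = [None] * len(shuffled)
    for pos, i in enumerate(perm):
        out[i] = shuffled[pos]
    return out
```

Note that the multiset of coefficients is unchanged by the shuffle; statistics computed from it (energies, histograms) leak information, which is one avenue such cryptanalyses exploit.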
Tilt correction method of text image based on wavelet pyramid
NASA Astrophysics Data System (ADS)
Yu, Mingyang; Zhu, Qiguo
2017-04-01
Text images captured by camera may be tilted and distorted, which is unfavorable for document character recognition. Therefore, a method of text image tilt correction based on a wavelet pyramid is proposed in this paper. The first step is to convert the captured text image to a binary image. After binarization, the image is layered by wavelet transform to achieve noise reduction, enhancement and compression. Afterwards, edges are detected with the Canny operator and straight lines are extracted by the Radon transform. In the final step, the method computes the intersections of the straight lines and obtains the corrected text image from the intersection points via a perspective transformation. Experimental results show this method corrects text images accurately.
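For a binary image, the Radon transform step reduces to summing foreground pixels along lines at each candidate angle; the text's baseline angle is the one whose projection profile is most sharply peaked. A pure-Python sketch of that idea (the sum-of-squared-counts energy criterion and the angle grid are illustrative assumptions, not the paper's exact procedure):

```python
import math

def skew_angle(points, angles_deg):
    """Estimate text skew: project foreground points at each candidate
    angle and pick the angle whose row-profile energy (sum of squared
    bin counts) is largest -- aligned text rows concentrate mass in
    few bins."""
    best_angle, best_energy = 0.0, -1.0
    for deg in angles_deg:
        t = math.radians(deg)
        bins = {}
        for x, y in points:
            r = round(-x * math.sin(t) + y * math.cos(t))
            bins[r] = bins.get(r, 0) + 1
        energy = sum(c * c for c in bins.values())
        if energy > best_energy:
            best_energy, best_angle = energy, deg
    return best_angle

# Two synthetic text "baselines" rotated by 5 degrees.
a = math.radians(5)
pts = [(x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a))
       for x in range(60) for y in (0, 20)]
```

With the grid above, `skew_angle(pts, range(-10, 11))` recovers the 5-degree rotation, which a perspective warp would then undo.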
Multiresolution Distance Volumes for Progressive Surface Compression
Laney, D E; Bertram, M; Duchaineau, M A; Max, N L
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
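An O(n) distance transform of the kind cited can be illustrated in one dimension by the classic two-pass chamfer sweep: a forward pass propagates distances left-to-right, a backward pass right-to-left. A minimal sketch (city-block metric; the paper's method operates on full 3D volumes):

```python
def distance_transform_1d(seed):
    """seed[i] is True where the zero set (the surface) lies.
    Returns the distance of every cell to the nearest seed in O(n)."""
    INF = float('inf')
    d = [0 if s else INF for s in seed]
    for i in range(1, len(d)):            # forward sweep
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(len(d) - 2, -1, -1):   # backward sweep
        d[i] = min(d[i], d[i + 1] + 1)
    return d

# distance_transform_1d([False]*3 + [True] + [False]*3) -> [3, 2, 1, 0, 1, 2, 3]
```

Each cell is visited exactly twice, which is where the linear bound comes from; higher dimensions repeat the sweep along each axis.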
Complex Wavelet Transform-Based Face Recognition
NASA Astrophysics Data System (ADS)
Eleyan, Alaa; Özkaramanli, Hüseyin; Demirel, Hasan
2009-12-01
Complex approximately analytic wavelets provide a local multiscale description of images with good directional selectivity and invariance to shifts and in-plane rotations. Similar to Gabor wavelets, they are insensitive to illumination variations and facial expression changes. The complex wavelet transform is, however, less redundant and computationally efficient. In this paper, we first construct complex approximately analytic wavelets in the single-tree context, which possess Gabor-like characteristics. We, then, investigate the recently developed dual-tree complex wavelet transform (DT-CWT) and the single-tree complex wavelet transform (ST-CWT) for the face recognition problem. Extensive experiments are carried out on standard databases. The resulting complex wavelet-based feature vectors are as discriminating as the Gabor wavelet-derived features and at the same time are of lower dimension when compared with that of Gabor wavelets. In all experiments, on two well-known databases, namely, FERET and ORL databases, complex wavelets equaled or surpassed the performance of Gabor wavelets in recognition rate when equal number of orientations and scales is used. These findings indicate that complex wavelets can provide a successful alternative to Gabor wavelets for face recognition.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding stage is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
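The core idea, that indices adjacent in value carry one unit of uncertainty so nudging an index by one can encode a hidden bit, can be sketched as parity embedding in quantization indices (a simplified illustration of the principle, not the patent's exact key-pair-table mechanism):

```python
def embed_bits(indices, bits):
    """Force the parity of each index to match the hidden bit,
    moving the index by at most one unit (toward zero when positive)."""
    out = list(indices)
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            out[i] += -1 if out[i] > 0 else 1
    return out

def extract_bits(indices, n):
    """Recover the hidden bits from index parities."""
    return [i % 2 for i in indices[:n]]

marked = embed_bits([5, -2, 0, 7, -4], [0, 1, 1, 1, 0])
# extract_bits(marked, 5) recovers [0, 1, 1, 1, 0]
```

Since each index moves by at most one quantization step, the added distortion is of the same order as the quantization error already present.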
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding stage is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
Discrete wavelet transform core for image processing applications
NASA Astrophysics Data System (ADS)
Savakis, Andreas E.; Carbone, Richard
2005-02-01
This paper presents a flexible hardware architecture for performing the Discrete Wavelet Transform (DWT) on a digital image. The proposed architecture uses a variation of the lifting scheme technique and provides advantages that include small memory requirements, fixed-point arithmetic implementation, and a small number of arithmetic computations. The DWT core may be used for image processing operations, such as denoising and image compression. For example, the JPEG2000 still image compression standard uses the Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWTs for lossless and lossy image compression, respectively. Simple wavelet image denoising techniques resulted in improved images of up to 27 dB PSNR. The DWT core is modeled using MATLAB and VHDL. The VHDL model is synthesized to a Xilinx FPGA to demonstrate hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons. The execution time for performing both DWTs is nearly identical at approximately 14 clock cycles per image pixel for one level of DWT decomposition. The hardware area generated for the CDF 5/3 is around 15,000 gates using only 5% of the Xilinx FPGA hardware area, at 2.185 MHz max clock speed and 24 mW power consumption.
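The reversible CDF 5/3 transform used for lossless JPEG2000 coding is exactly two integer lifting steps, a predict followed by an update, which is why its hardware cost is so small. A one-level, one-dimensional sketch in integer arithmetic (even-length input and a simple boundary extension are assumed; Python's `>>` floors, matching the standard's floor convention):

```python
def cdf53_forward(x):
    """One level of the integer (reversible) CDF 5/3 lifting transform."""
    n = len(x)                       # assumed even
    d = []                           # detail (high-pass) coefficients
    for i in range(n // 2):          # predict step
        right = x[2*i + 2] if 2*i + 2 < n else x[2*i]
        d.append(x[2*i + 1] - ((x[2*i] + right) >> 1))
    s = []                           # smooth (low-pass) coefficients
    for i in range(n // 2):          # update step
        left = d[i - 1] if i > 0 else d[0]
        s.append(x[2*i] + ((left + d[i] + 2) >> 2))
    return s, d

def cdf53_inverse(s, d):
    """Exact inverse: undo the update, then undo the predict."""
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        left = d[i - 1] if i > 0 else d[0]
        x[2*i] = s[i] - ((left + d[i] + 2) >> 2)
    for i in range(len(d)):
        right = x[2*i + 2] if 2*i + 2 < n else x[2*i]
        x[2*i + 1] = d[i] + ((x[2*i] + right) >> 1)
    return x
```

Because each lifting step is undone by the same expression with the sign flipped, reconstruction is bit-exact in integer arithmetic, the property that enables lossless coding.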
Wavelet Transforms using VTK-m
Li, Shaomeng; Sewell, Christopher Meyer
2016-09-27
These are a set of slides dealing with wavelet transforms using VTK-m. First, wavelets are discussed and detailed, then VTK-m; the two are then compared on performance and on accuracy, followed by lessons learned, conclusions, and next steps. The lessons learned are as follows. Launching worklets is expensive. The natural logic for a 2D wavelet transform is to repeat the same 1D wavelet transform on every row and then on every column, invoking the 1D wavelet worklet each time (num_rows x num_columns invocations). The VTK-m approach instead creates a worklet for 2D that handles both rows and columns and invokes this new worklet only once: calculation is fast, but the 1D implementations cannot be reused.
A Progressive Image Compression Method Based on EZW Algorithm
NASA Astrophysics Data System (ADS)
Du, Ke; Lu, Jianming; Yahagi, Takashi
A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets) (1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees) (2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding) (3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression, which is achieved via adaptive arithmetic coding, and (5) the degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.
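Concept (2), predicting the absence of significance across scales, is what EZW's dominant pass encodes: each coefficient is labeled significant-positive, significant-negative, zerotree root, or isolated zero relative to the current threshold. A compact sketch over an explicit parent-to-children map (the tiny tree layout here is illustrative):

```python
def ezw_symbol(node, coeffs, children, T):
    """Dominant-pass symbol of `node` against threshold T.
    coeffs: node -> coefficient value; children: node -> child nodes."""
    c = coeffs[node]
    if abs(c) >= T:
        return 'POS' if c >= 0 else 'NEG'
    stack = list(children.get(node, []))
    while stack:                      # scan all descendants
        k = stack.pop()
        if abs(coeffs[k]) >= T:
            return 'IZ'               # isolated zero: a descendant is significant
        stack.extend(children.get(k, []))
    return 'ZTR'                      # the whole subtree is insignificant

coeffs = {0: 3, 1: -40, 2: 2, 3: 1, 4: 5}
children = {0: [1, 2], 2: [3, 4]}
# Against T = 32: node 0 -> 'IZ', node 1 -> 'NEG', node 2 -> 'ZTR'
```

A single 'ZTR' symbol stands in for an entire insignificant subtree, which is where EZW's coding gain over per-coefficient signaling comes from.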
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aims at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical-problem dependent. To minimize the tuning of parameters and physical-problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these
Edge Detection Using a Complex Wavelet
1993-12-01
A complex wavelet of the form Psi(x, y) = C(x + jy) exp(-p(x^2 + y^2)) is used in the continuous wavelet transform to obtain edges from a digital image...and x and y are position variables. The square root of the sum of the squares of the real and imaginary parts of the wavelet transform is used to...radar images and the resulting images are shown. Continuous wavelet transform, digital image.
Momentum flux in breaking wavelets
NASA Astrophysics Data System (ADS)
Csanady, G. T.
1990-08-01
A breaking wavelet is taken to consist of a roller and a trailing turbulent wake, both riding on an irrotational wave. The shear stress force on the separation streamline between the roller and the underlying flow is balanced mainly by the horizontal pressure force on the same streamline. The pressure force acts on the underlying flow and reduces wavelet momentum; the shear force generates the momentum deficit of the wake. In this manner, wavelet momentum is turned into shear flow momentum. In wind-driven wavelets the shear force of the wind aids roller formation: rollers form at a relatively low approach momentum from boundary layer fluid generated by surface shear. Breaking wavelets have been modeled by superimposing the surface disturbance generated by a roller on a sinusoidal wave. The phase relationship of the two components determines how much momentum is extracted from the wave. The models show the characteristic asymmetric, forward leaning shape of breakers. The wave under the roller is shortened, so that the steepness of the breaker is even greater than it would be on account of the roller's presence alone. Ahead of the roller's toe, capillary waves are generated. On short waves these are of easily visible amplitude and serve to identify the presence of a roller.
Texture analysis using Gabor wavelets
NASA Astrophysics Data System (ADS)
Naghdy, Golshah A.; Wang, Jian; Ogunbona, Philip O.
1996-04-01
Receptive field profiles of simple cells in the visual cortex have been shown to resemble even-symmetric or odd-symmetric Gabor filters. Computational models employed in the analysis of textures have been motivated by two-dimensional Gabor functions arranged in a multi-channel architecture. More recently, wavelets have emerged as a powerful tool for non-stationary signal analysis capable of encoding scale-space information efficiently. A multi-resolution implementation in the form of a dyadic decomposition of the signal of interest has been popularized by many researchers. In this paper, a Gabor wavelet configured in a 'rosette' fashion is used as a multi-channel filter-bank feature extractor for texture classification. The 'rosette' spans 360 degrees of orientation and covers frequencies upward from dc. In the proposed algorithm, the texture images are decomposed by the Gabor wavelet configuration and the feature vectors corresponding to the mean of the outputs of the multi-channel filters are extracted. A minimum distance classifier is used in the classification procedure. As a comparison, the Gabor filter has been used to classify the same texture images from the Brodatz album, and the results indicate the superior discriminatory characteristics of the Gabor wavelet. With the test images used, it can be concluded that the Gabor wavelet model is a better approximation of the cortical cell receptive field profiles.
A watermarking algorithm for anti JPEG compression attacks
NASA Astrophysics Data System (ADS)
Han, Baoru; Huang, Guo
2017-07-01
JPEG compression is a compression standard widely used in image processing. A new medical image watermarking algorithm robust against Joint Photographic Experts Group (JPEG) compression is proposed. The original watermark image is scrambled by a Legendre chaotic neural network to enhance watermark security. The watermarking algorithm uses three-dimensional discrete wavelet transform (3D DWT) and three-dimensional discrete cosine transform (3D DCT) properties together with difference hashing to produce the watermark. Experimental results show that the watermarking algorithm has good transparency and robustness against JPEG compression attacks.
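Difference hashing, one ingredient of the scheme above, reduces an image to a short bit string by comparing each pixel with its right-hand neighbor; the hashes of an original and its JPEG-compressed copy then differ in only a few bit positions. A minimal sketch on a pre-resized grayscale grid (the grid dimensions are an illustrative assumption):

```python
def dhash(grid):
    """grid: list of rows of grayscale values (already resized small).
    Emits one bit per horizontally adjacent pair: 1 if left > right."""
    h = 0
    for row in grid:
        for a, b in zip(row, row[1:]):
            h = (h << 1) | (1 if a > b else 0)
    return h

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count('1')

# dhash([[10, 20, 15]]) -> 0b01 == 1   (10 > 20? no; 20 > 15? yes)
```

Because only the ordering of neighboring intensities matters, mild quantization noise from JPEG rarely flips a comparison, which is what makes the hash robust.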
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-03-10
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, known also as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
Block-based wavelet transform coding of mammograms with region-adaptive quantization
NASA Astrophysics Data System (ADS)
Moon, Nam Su; Song, Jun S.; Kwon, Musik; Kim, JongHyo; Lee, ChoongWoong
1998-06-01
To achieve both a high compression ratio and information preservation, it is efficient to combine segmentation with a lossy compression scheme. Microcalcifications in mammograms are among the most significant signs of early-stage breast cancer. Therefore, in coding, detection and segmentation of microcalcifications enable us to preserve them well by allocating more bits to them than to other regions. Segmentation of microcalcifications is performed both in the spatial domain and in the wavelet transform domain. A peak-error-controllable quantization step, designed off-line, is suitable for medical image compression. For region-adaptive quantization, block-based wavelet transform coding is adopted, and different peak-error-constrained quantizers are applied to blocks according to the segmentation result. In terms of preservation of microcalcifications, the proposed coding scheme shows better performance than JPEG.
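A peak-error-constrained quantizer is straightforward: a uniform midtread quantizer with step 2·e_max guarantees that no reconstructed value deviates from the original by more than e_max, which is the property the region-adaptive scheme tunes per block. A minimal sketch:

```python
def quantize(x, emax):
    """Uniform midtread quantizer whose peak error is bounded by emax."""
    step = 2.0 * emax
    return round(x / step)

def dequantize(q, emax):
    return q * 2.0 * emax

# With emax = 2, every value is reconstructed to within +/- 2.
vals = [0.3, 5.7, -9.4, 13.0, 100.2]
recon = [dequantize(quantize(v, 2.0), 2.0) for v in vals]
```

Blocks flagged as containing microcalcifications would simply be given a smaller `emax` (finer step), spending more bits where clinical detail matters.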
Wavelet-Based Multiresolution Analyses of Signals
1992-06-01
classification. Some signals, notably those of a transient nature, are inherently difficult to analyze with these traditional tools. The Discrete Wavelet Transform has...scales. This thesis investigates dyadic discrete wavelet decompositions of signals. A new multiphase wavelet transform is proposed and investigated.
Multichannel ECG data compression based on multiscale principal component analysis.
Sharma, L N; Dandapat, S; Mahanta, Anil
2012-07-01
In this paper, multiscale principal component analysis (MSPCA) is proposed for multichannel electrocardiogram (MECG) data compression. In the wavelet domain, principal component analysis (PCA) of multiscale multivariate matrices of multichannel signals helps reduce dimension and remove redundant information present in the signals. The selection of principal components (PCs) is based on the average fractional energy contribution of the eigenvalues in a data matrix. Multichannel compression is implemented using a uniform quantizer and entropy coding of the PCA coefficients. The compressed signal quality is evaluated quantitatively using percentage root mean square difference (PRD) and wavelet energy-based diagnostic distortion (WEDD) measures. Using a dataset from the CSE multilead measurement library, a multichannel compression ratio of 5.98:1 is found with a PRD value of 2.09% and a lowest WEDD value of 4.19%. Based on the gold-standard subjective quality measure, the lowest mean opinion score error value of 5.56% is found.
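The PRD figure quoted (2.09%) follows the standard definition: the root of the reconstruction-error energy relative to the original signal energy, expressed as a percentage. A minimal sketch:

```python
import math

def prd(original, reconstructed):
    """Percentage root-mean-square difference between two signals."""
    err = sum((a - b) ** 2 for a, b in zip(original, reconstructed))
    ref = sum(a * a for a in original)
    return 100.0 * math.sqrt(err / ref)

# prd([3.0, 4.0], [3.0, 3.0]) -> 20.0   (error energy 1, signal energy 25)
```

Note that PRD is purely an energy measure; that is why the authors pair it with WEDD, which weights distortion by diagnostic relevance.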
Double-density complex wavelet cartoon-texture decomposition
NASA Astrophysics Data System (ADS)
Hewer, Gary A.; Kuo, Wei; Hanson, Grant
2007-09-01
Both the Kingsbury dual-tree and the subsequent Selesnick double-density dual-tree complex wavelet transforms approximate an analytic function. The classification of the phase dependency across scales is largely unexplored, except by Romberg et al. Here we characterize the sub-band dependency of the orientation of phase gradients by applying the Helmholtz principle to bivariate histograms to locate meaningful modes. A further characterization using the Earth Mover's Distance with the fundamental Rudin-Osher-Meyer Banach space decomposition into cartoon and texture elements is presented. Possible applications include image compression and invariant descriptor selection for image matching.
Speech Coding and Compression Using Wavelets and Lateral Inhibitory Networks
1990-12-01
in the form of Eq (20) found by Hodgkin and Huxley [34], which describes the membrane potential V of neurons: dV = (V^p - V)g^p + (V^+ - V)g^+ + (V...non-linear differential equations in the form of the neuronal cell membrane equations first found by Hodgkin and Huxley. Thus, the LIN can be...membrane potential saturation points respectively. In the Hodgkin and Huxley model, these saturation points indicate the equilibrium potential of specific
NASA Astrophysics Data System (ADS)
Lim, Se Hoon
Compressive holography estimates images from incomplete data by using sparsity priors. Compressive holography combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction from prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to hold the sparsity in an incoherent image basis with the support of multiple speckle realizations. High pixel count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of significant field
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2005-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
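Progressive bit-plane coding, the second stage of the adopted algorithm, transmits the magnitudes of transformed coefficients one binary digit at a time, most significant plane first, so truncating the stream at any plane still yields a coarse reconstruction. A minimal sketch of that mechanism (sign handling is simplified, and the actual CCSDS recommendation adds entropy coding on top):

```python
def encode_planes(values, nplanes):
    """Split magnitudes into bit-planes, most significant first."""
    return [[(abs(v) >> p) & 1 for v in values]
            for p in range(nplanes - 1, -1, -1)]

def decode_planes(planes, signs, nplanes):
    """Rebuild values from however many planes were received."""
    mags = [0] * len(signs)
    for j, plane in enumerate(planes):
        p = nplanes - 1 - j
        for i, bit in enumerate(plane):
            mags[i] |= bit << p
    return [m if s >= 0 else -m for m, s in zip(mags, signs)]

vals = [5, -3, 12]
signs = [1, -1, 1]
planes = encode_planes(vals, 4)
# All four planes reconstruct exactly; the first two alone give [4, 0, 12].
```

Cutting the stream after any plane is what lets a user "directly control the compressed data volume or the fidelity," as the abstract puts it.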
Wavelet filtering of chaotic data
NASA Astrophysics Data System (ADS)
Grzesiak, M.
A satisfactory method of removing noise from experimental chaotic data remains an open problem. Normally it is necessary to assume certain properties of the noise and of the dynamics one wants to extract from the time series. A wavelet-based method of denoising time series originating from low-dimensional dynamical systems and polluted by Gaussian white noise is considered. Its efficiency is investigated by comparing the correlation dimension of clean and noisy data generated for some well-known dynamical systems. The wavelet method is contrasted with the singular value decomposition (SVD) and finite impulse response (FIR) filter methods.
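Wavelet denoising of this kind usually amounts to shrinking detail coefficients toward zero: coefficients below a threshold sized to the noise are zeroed, larger ones are shrunk. A minimal sketch of the soft-thresholding rule commonly used for Gaussian white noise (the rule itself is standard; the threshold value here is arbitrary):

```python
def soft_threshold(c, t):
    """Shrink a wavelet coefficient toward zero by t (soft thresholding)."""
    if c > t:
        return c - t
    if c < -t:
        return c + t
    return 0.0

# Small (noise-like) coefficients vanish; large ones survive, shrunken:
coeffs = [0.4, -0.2, 5.0, -3.5, 0.9]
denoised = [soft_threshold(c, 1.0) for c in coeffs]
# -> [0.0, 0.0, 4.0, -2.5, 0.0]
```

The denoised signal is then rebuilt by the inverse wavelet transform from the shrunken detail coefficients plus the untouched coarse approximation.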
Recent advances in wavelet technology
NASA Technical Reports Server (NTRS)
Wells, R. O., Jr.
1994-01-01
Wavelet research has been developing rapidly over the past five years, and in the academic world in particular there has been significant activity at numerous universities. In the industrial world, there have been developments at Aware, Inc., Lockheed, Martin-Marietta, TRW, Kodak, Exxon, and many others. The government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many other agencies. The literature of the past five years includes a book indexing the citations of the past decade on this subject; it contains over 1,000 references and abstracts.
Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.
ERIC Educational Resources Information Center
Culik, Karel II; Kari, Jarkko
1994-01-01
Presents an inference algorithm that produces a weighted finite automaton (WFA) from, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.
Exploiting prior knowledge in compressed sensing wireless ECG systems.
Polanía, Luisa F; Carrillo, Rafael E; Blanco-Velasco, Manuel; Barner, Kenneth E
2015-03-01
Recent results in telecardiology show that compressed sensing (CS) is a promising tool to lower energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of the performance attained by state-of-the-art wavelet-based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support of the wavelet coefficients of consecutive ECG segments. Experimental results utilizing the MIT-BIH Arrhythmia Database show that significant performance gains, in terms of compression rate and reconstruction quality, can be obtained by the proposed algorithms compared to current CS-based methods.
Wavelet-based face verification for constrained platforms
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2005-03-01
Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or the iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power, such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few pre-processing procedures as possible, which are often needed to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4 and 5. We also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficacy and requiring minimal pre-processing steps. The significance of these results lies in their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level 3 LL-subband results in robustness against high-rate compression and noise interference.
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these two techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these two techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
An evaluation of lightweight JPEG2000 encryption with anisotropic wavelet packets
NASA Astrophysics Data System (ADS)
Engel, Dominik; Uhl, Andreas
2007-02-01
In this paper we evaluate a lightweight encryption scheme for JPEG2000 which relies on a secret transform domain constructed with anisotropic wavelet packets. The pseudo-random selection of the bases used for transformation takes compression performance into account, and discards a number of possible bases which lead to poor compression performance. Our main focus in this paper is to answer the important question of how many bases remain to construct the keyspace. In order to determine the trade-off between compression performance and keyspace size, we compare the approach to a method that selects bases from the whole set of anisotropic wavelet packet bases following a pseudo-random uniform distribution. The compression performance of both approaches is compared to get an estimate of the range of compression quality in the set of all bases. We then analytically investigate the number of bases that are discarded for the sake of retaining compression performance in the compression-oriented approach as compared to selection by uniform distribution. Finally, the question of keyspace quality is addressed, i.e. how much similarity between the basis used for analysis and the basis used for synthesis is tolerable from a security point of view and how this affects the lightweight encryption scheme.
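To make the keyspace question concrete, candidate bases can be counted by a simple recursion over the decomposition tree. The sketch below covers the standard isotropic 2-D quadtree case only (the anisotropic family the paper studies is larger and follows a different recursion); the function name `n_bases_iso` is ours:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def n_bases_iso(depth):
    # Number of 2-D isotropic wavelet packet bases of depth <= `depth`:
    # each node of the decomposition tree is either kept as a leaf or
    # split into four subbands, whose subtrees are chosen independently.
    if depth == 0:
        return 1
    return 1 + n_bases_iso(depth - 1) ** 4

assert [n_bases_iso(j) for j in range(4)] == [1, 2, 17, 83522]
```

Already at depth 3 there are 83,522 isotropic bases; the anisotropic family grows much faster, which is what makes it attractive as a keyspace in the first place.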
Denoising and robust nonlinear wavelet analysis
NASA Astrophysics Data System (ADS)
Bruce, Andrew G.; Donoho, David L.; Gao, Hong-Ye; Martin, R. D.
1994-03-01
In a series of papers, Donoho and Johnstone develop a powerful theory based on wavelets for extracting non-smooth signals from noisy data. Several nonlinear smoothing algorithms are presented which provide high performance for removing Gaussian noise from a wide range of spatially inhomogeneous signals. However, like other methods based on the linear wavelet transform, these algorithms are very sensitive to certain types of non-Gaussian noise, such as outliers. In this paper, we develop outlier resistant wavelet transforms. In these transforms, outliers and outlier patches are localized to just a few scales. By using the outlier resistant wavelet transform, we improve upon the Donoho and Johnstone nonlinear signal extraction methods. The outlier resistant wavelet algorithms are included with the 'S+WAVELETS' object-oriented toolkit for wavelet analysis.
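The Donoho-Johnstone baseline that these robust transforms improve upon can be sketched in a few lines: soft-threshold the wavelet detail coefficients with the universal threshold sigma*sqrt(2 log n). The minimal one-level Haar sketch below (function names are ours) shows the idea; it is this linear-transform scheme's sensitivity to outliers that motivates the paper's robust variants:

```python
import numpy as np

def haar_dwt(x):
    # One level of the orthonormal Haar transform (even-length input).
    s = (x[0::2] + x[1::2]) / np.sqrt(2)   # smooth (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail
    return s, d

def haar_idwt(s, d):
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x

def denoise(x, sigma):
    # Soft-threshold the details with the universal (VisuShrink) threshold.
    s, d = haar_dwt(x)
    lam = sigma * np.sqrt(2 * np.log(len(x)))
    d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
    return haar_idwt(s, d)

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 4.0, 0.0], 64)     # spatially inhomogeneous signal
noisy = clean + 0.5 * rng.standard_normal(clean.size)
den = denoise(noisy, sigma=0.5)
assert np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

For Gaussian noise this works well; a single gross outlier, however, contaminates the detail coefficients at every scale that overlaps it, which is exactly the failure mode the outlier-resistant transforms address.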
Wavelets based on Hermite cubic splines
NASA Astrophysics Data System (ADS)
Cvejnová, Daniela; Černá, Dana; Finěk, Václav
2016-06-01
In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines were proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse in the sense that the number of nonzero elements in any column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second-order differential equation with constant coefficients. In this contribution, we briefly review these constructions and propose a new wavelet that leads to improved Riesz constants. The wavelets have four vanishing moments.
NASA Astrophysics Data System (ADS)
Mautz, R.; Ping, J.; Heki, K.; Schaffrin, B.; Shum, C.; Potts, L.
2005-05-01
Wavelet expansion has been demonstrated to be suitable for the representation of spatial functions. Here we propose the so-called B-spline wavelets to represent spatial time-series of GPS-derived global ionosphere maps (GIMs) of the vertical total electron content (TEC) from the Earth’s surface to the mean altitudes of GPS satellites, over Japan. The scalar-valued B-spline wavelets can be defined in a two-dimensional, but not necessarily planar, domain. Generated by a sequence of knots, different degrees of B-splines can be implemented: degree 1 represents the Haar wavelet; degree 2, the linear B-spline wavelet, or degree 4, the cubic B-spline wavelet. A non-uniform version of these wavelets allows us to handle data on a bounded domain without any edge effects. B-splines are easily extended with great computational efficiency to domains of arbitrary dimensions, while preserving their properties. This generalization employs tensor products of B-splines, defined as linear superposition of products of univariate B-splines in different directions. The data and model may be identical at the locations of the data points if the number of wavelet coefficients is equal to the number of grid points. In addition, data compression is made efficient by eliminating the wavelet coefficients with negligible magnitudes, thereby reducing the observational noise. We applied the developed methodology to the representation of the spatial and temporal variations of GIM from an extremely dense GPS network, the GPS Earth Observation Network (GEONET) in Japan. Since the sampling of the TEC is registered regularly in time, we use a two-dimensional B-spline wavelet representation in space and a one-dimensional spline interpolation in time. Over the Japan region, the B-spline wavelet method can overcome the problem of bias for the spherical harmonic model at the boundary, caused by the non-compact support. The hierarchical decomposition not only allows an inexpensive calculation, but also
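The tensor-product construction mentioned above is easy to make concrete. A minimal sketch with linear B-splines on integer knots (the abstract's "degree 2" in its order-based naming; function names are ours): a 2-D basis function is just a product of univariate ones, and the partition-of-unity property carries over to the tensor product:

```python
import numpy as np

def hat(t):
    # Linear B-spline ("hat" function) on integer knots.
    return np.maximum(1.0 - np.abs(t), 0.0)

def tensor_basis(x, y, kx, ky):
    # 2-D basis function as a tensor product of univariate B-splines.
    return hat(x - kx) * hat(y - ky)

# B-splines on a uniform knot sequence form a partition of unity,
# in 1-D and hence also for their tensor products.
xs = np.linspace(0.0, 4.0, 41)
assert np.allclose(sum(hat(xs - k) for k in range(-1, 6)), 1.0)

xx, yy = np.meshgrid(xs, xs)
total = sum(tensor_basis(xx, yy, kx, ky)
            for kx in range(-1, 6) for ky in range(-1, 6))
assert np.allclose(total, 1.0)
```

The same product construction extends to any dimension and any spline degree, which is the computational-efficiency point the abstract makes.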
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance (the maximal overlap estimator) can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
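The stated Haar/Allan connection can be checked numerically. In the sketch below (function names are ours), the maximal-overlap Haar wavelet coefficient at scale tau is half the difference of two adjacent length-tau sample means, so the wavelet variance comes out to exactly half the overlapped Allan variance:

```python
import numpy as np

def haar_modwt_wavelet_variance(x, tau):
    # Maximal-overlap Haar wavelet coefficient at scale tau: half the
    # difference of two adjacent length-tau sample means.
    n = len(x)
    w = np.array([(x[t:t + tau].mean() - x[t + tau:t + 2 * tau].mean()) / 2.0
                  for t in range(n - 2 * tau + 1)])
    return np.mean(w ** 2)

def allan_variance(x, tau):
    # Overlapped Allan variance estimator for equally spaced data.
    ybar = np.array([x[t:t + tau].mean() for t in range(len(x) - tau + 1)])
    d = ybar[tau:] - ybar[:-tau]
    return 0.5 * np.mean(d ** 2)

rng = np.random.default_rng(42)
x = rng.standard_normal(4096)   # white-noise fractional-frequency series
for tau in (1, 2, 4, 8):
    assert np.isclose(allan_variance(x, tau),
                      2.0 * haar_modwt_wavelet_variance(x, tau))
```

The identity holds sample-by-sample, not just in expectation, because each Allan difference is exactly minus twice the corresponding Haar coefficient.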
Wavelet Representations for Digital Mammography.
1994-12-15
Gaussian low-pass filtering. In addition, a digital image editor is described that allows radiologists to interactively indicate on computer screens ... masses, spicules, and microcalcifications. Finally, we report on the development of objective ways to assess the performance of wavelet image
Lossless compression for three-dimensional images
NASA Astrophysics Data System (ADS)
Tang, Xiaoli; Pearlman, William A.
2004-01-01
We investigate and compare the performance of several three-dimensional (3D) embedded wavelet algorithms on lossless 3D image compression. The algorithms are Asymmetric Tree Three-Dimensional Set Partitioning In Hierarchical Trees (AT-3DSPIHT), Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), Three-Dimensional Context-Based Embedded Zerotrees of Wavelet coefficients (3D-CB-EZW), and JPEG2000 Part II for multi-component images. Two kinds of images are investigated in our study: 8-bit CT and MR medical images and 16-bit AVIRIS hyperspectral images. First, the performances obtained with different sizes of coding units are compared; increasing the size of the coding unit improves the performance somewhat. Second, the performances obtained with different integer wavelet transforms are compared for AT-3DSPIHT, 3D-SPECK, and 3D-CB-EZW. None of the considered filters always performs best for all data sets and algorithms. Finally, we compare the different lossless compression algorithms by applying an integer wavelet transform to the entire image volumes. For 8-bit medical image volumes, AT-3DSPIHT performs the best almost all the time, achieving an average 12% decrease in file size compared with JPEG2000 multi-component, the second-best performer. For 16-bit hyperspectral images, AT-3DSPIHT always performs the best, yielding average decreases in file size of 5.8% and 8.9% compared with 3D-SPECK and JPEG2000 multi-component, respectively. Two 2D compression algorithms, JPEG2000 and UNIX zip, are also included for reference, and all 3D algorithms perform much better than the 2D algorithms.
Embedded wavelet-based face recognition under variable position
NASA Astrophysics Data System (ADS)
Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi
2015-02-01
For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV, electronic device unlocking, and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject-position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as Raspberry Pi and SECO boards. For K = 3 and a database of 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a Raspberry Pi (model B).
Lossless Compression on MRI Images Using SWT.
Anusuya, V; Raghavan, V Srinivasa; Kavitha, G
2014-10-01
Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel's information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both the storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. We propose a lossless codec based on an entropy coder. 3D medical images are decomposed into 2D slices and subjected to a 2D stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using the inverse SWT. Finally, the compression ratio (CR) is evaluated to demonstrate the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing in the arithmetic coding stage, as it deals with multiple subslices.
Senthil Kumar, T. K.; Ganesh, E. N.
2013-01-01
In this paper we discuss and analyze the different methods developed to detect the lung nodules that cause lung cancer. After analyzing these methods, a new methodology for detecting lung nodules using a spline wavelet technique is proposed. Continuous modeling of data is often required in medical imaging, and polynomial splines are especially useful for treating image data as a continuum rather than a discrete array of pixels. The multiresolution property of splines makes them prime candidates for constructing wavelet bases. Wavelet tools also let us compress the original CT image by a large factor without any sacrifice in the accuracy of nodule detection. Different algorithms for segmentation/detection of lung nodules from CT images are discussed in this paper. PMID:23675284
Next gen wavelets down-sampling preserving statistics
NASA Astrophysics Data System (ADS)
Szu, Harold; Miao, Lidan; Chanyagon, Pornchai; Cader, Masud
2007-04-01
We extend the 2nd-generation discrete wavelet transform (DWT) of Sweldens to a next-generation (NG) DWT that preserves statistically salient features. The lossless NG DWT accomplishes the data compression of "wellness baseline profiles (WBP)" of the aging population at home. For a medical monitoring system on the home front, we translate military experience to dual use by veterans & civilians alike, with the following three requirements: (i) Data compression: the necessary down-sampling reduces the immense amount of data of an individual WBP from hours to days and to weeks for primary caretakers in terms of moments, e.g. mean value, variance, etc., without the artifacts caused by arbitrary FFT windowing. (ii) Losslessness: our new NG_DWT must preserve the original data sets. (iii) Phase transition: NG_DWT must capture the critical phase transition from wellness toward sickness with simultaneous display of local statistical moments. According to the Nyquist sampling theory, assuming band-limited wellness physiology, we must sample the WBP at least twice per day, since it changes diurnally and seasonally. Since NG_DWT, like the 2nd generation, is lossless, we can reconstruct the original time series for the physicians' second look. This technique can also help stock-market day traders monitor the volatility of multiple portfolios without artificial horizon artifacts.
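As a minimal illustration of the lossless, lifting-based transforms this abstract builds on (a standard integer Haar lifting step, not the authors' NG_DWT itself; function names are ours):

```python
import numpy as np

def lift_haar_fwd(x):
    # Integer-to-integer Haar transform via lifting (Sweldens-style):
    # predict step: d = odd - even; update step: s = even + floor(d / 2).
    even, odd = x[0::2], x[1::2]
    d = odd - even
    s = even + (d >> 1)          # arithmetic shift = floor division by 2
    return s, d

def lift_haar_inv(s, d):
    # Undo the lifting steps in reverse order; exact in integer
    # arithmetic, hence lossless.
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(2 * len(s), dtype=s.dtype)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5, 7, 3, 1, 8, 8, 2, 6])
s, d = lift_haar_fwd(x)
assert np.array_equal(lift_haar_inv(s, d), x)   # perfect reconstruction
```

Because both lifting steps use only integer arithmetic, the inverse undoes them exactly, which is requirement (ii) above; the smooth channel `s` carries the local means ("moments") used for the down-sampled profile.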
On the improved correlative prediction scheme for aliased electrocardiogram (ECG) data compression.
Gao, Xin
2012-01-01
An improved scheme for aliased electrocardiogram (ECG) data compression has been constructed, in which the predictor exploits the correlative characteristics of adjacent QRS waveforms. The twin-R correlation prediction and lifting wavelet transform (LWT) for periodic ECG waves exhibit feasibility and high efficiency, achieving lower distortion rates at a realizable compression ratio (CR); grey predictions via the GM(1, 1) model have been adopted to evaluate the parametric performance of ECG data compression. Simulation results demonstrate the validity of our approach.
NASA Astrophysics Data System (ADS)
Preda, Radu O.; Vizireanu, Dragos Nicolae
2011-01-01
The development of information technology and computer networks facilitates easy duplication, manipulation, and distribution of digital data. Digital watermarking is one of the proposed solutions for effectively safeguarding the rightful ownership of digital images and video. We propose a public digital watermarking technique for video copyright protection in the discrete wavelet transform domain. The scheme uses binary images as watermarks. These are embedded in the detail wavelet coefficients of the middle wavelet subbands. The method is a combination of spread spectrum and quantization-based watermarking. Every bit of the watermark is spread over a number of wavelet coefficients with the use of a secret key by means of quantization. The selected wavelet detail coefficients from different subbands are quantized using an optimal quantization model, based on the characteristics of the human visual system (HVS). Our HVS-based scheme is compared to a non-HVS approach. The resilience of the watermarking algorithm is tested against a series of different spatial, temporal, and compression attacks. To improve the robustness of the algorithm, we use error correction codes and embed the watermark with spatial and temporal redundancy. The proposed method achieves good perceptual quality and high resistance to a large spectrum of attacks.
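The quantization-based half of such a scheme can be illustrated with plain quantization index modulation (QIM), a standard building block for this kind of watermarking (a sketch under our own parameter choices, not the paper's HVS-optimized quantizer):

```python
import numpy as np

def qim_embed(c, bit, delta):
    # Quantization index modulation: snap the coefficient to one of two
    # interleaved lattices (step delta), chosen by the watermark bit.
    offset = delta / 2 if bit else 0.0
    return np.round((c - offset) / delta) * delta + offset

def qim_extract(c, delta):
    # Decode by nearest lattice: whichever re-embedding moves c less.
    d0 = abs(c - qim_embed(c, 0, delta))
    d1 = abs(c - qim_embed(c, 1, delta))
    return int(d1 < d0)

c = 13.7                                  # a wavelet detail coefficient
for bit in (0, 1):
    w = qim_embed(c, bit, delta=4.0)
    assert qim_extract(w, 4.0) == bit     # bit survives embedding
    assert abs(w - c) <= 2.0              # distortion bounded by delta/2
```

In an HVS-driven variant, `delta` would be chosen per coefficient from a visibility model, so the embedding distortion stays below the visual threshold.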
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.
Kim, Jinkwon; Min, Se Dong; Lee, Myoungho
2011-06-27
Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation using dedicated wavelets adapted to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier in the proposed algorithm. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject overlap between the training and evaluation datasets. It also significantly reduces the amount of intervention needed from physicians.
The FBI compression standard for digitized fingerprint images
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
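A dead-zone uniform scalar quantizer of the general kind the standard describes can be sketched as follows (illustrative parameters and function names only; the actual WSQ specification fixes per-subband bin widths and dead zones):

```python
import numpy as np

def deadzone_quantize(c, Q, Z):
    # Uniform scalar quantizer with bin width Q and a central dead zone
    # of width Z: small coefficients map to index 0.
    out = np.zeros(c.shape, dtype=int)
    pos, neg = c > Z / 2, c < -Z / 2
    out[pos] = np.floor((c[pos] - Z / 2) / Q).astype(int) + 1
    out[neg] = -(np.floor((-c[neg] - Z / 2) / Q).astype(int) + 1)
    return out

def dequantize(p, Q, Z):
    # Reconstruct at bin midpoints; dead-zone bins decode to zero.
    out = np.zeros(p.shape)
    out[p > 0] = Z / 2 + (p[p > 0] - 0.5) * Q
    out[p < 0] = -(Z / 2 + (-p[p < 0] - 0.5) * Q)
    return out

c = np.linspace(-5.0, 5.0, 101)           # mock subband coefficients
rec = dequantize(deadzone_quantize(c, Q=1.0, Z=2.0), Q=1.0, Z=2.0)
# Reconstruction error is bounded by max(Z/2, Q/2).
assert np.max(np.abs(rec - c)) <= max(2.0 / 2, 1.0 / 2)
```

The wide dead zone is what zeroes out the many near-zero wavelet coefficients, which the subsequent entropy coder then compresses very cheaply.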
Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.
2012-12-01
We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions in the Vulcan database; 35 sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with about 5% error. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
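The greedy fitting step can be illustrated with plain Orthogonal Matching Pursuit, the simpler relative of the StOMP algorithm the authors use (a sketch on synthetic data; function names are ours):

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily add the column most correlated
    # with the residual, then re-fit all selected columns by least squares.
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        resid = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((60, 128))        # 60 "observations", 128 wavelets
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary atoms
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [2.0, -1.5, 1.0]    # sparse "emission" weights
x_hat = omp(A, A @ x_true, k=3)           # typically recovers the support
assert np.count_nonzero(x_hat) <= 3       # sparsity enforced by design
assert np.linalg.norm(A @ (x_hat - x_true)) < np.linalg.norm(A @ x_true)
```

StOMP differs mainly in selecting many atoms per stage via a threshold rather than one at a time, but the sparsify-while-fitting idea is the same.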
Wavelet transform application in human face recognition
NASA Astrophysics Data System (ADS)
Meng, Qiang; Thompson, Wiley E.; Flachs, Gerald M.; Jordan, Jay B.
1997-07-01
A wavelet transformation is introduced as a new method to extract sideview face features in human face recognition. Utilizing the wavelet transformation, a sideview profile is decomposed into high-frequency and low-frequency components. Signal reconstruction, autocorrelation, and energy distribution are used to determine an optimal decomposition level in the wavelet transformation without losing sideview features. To evaluate the feasibility of the wavelet transformation features in human sideview face recognition, the tie statistic is used to compute the complexity of the wavelet transform features. Using the wavelet transformation, the sideview data size is reduced. The reduced features have almost the same ability as the original sideview face profile data in terms of distinguishing different people. The computational expense is greatly decreased. The results of the experiments are also shown in this paper.
Gabor analysis as contraction of wavelets analysis
NASA Astrophysics Data System (ADS)
Subag, Eyal M.; Baruch, Ehud Moshe; Birman, Joseph L.; Mann, Ady
2017-08-01
We use the method of group contractions to relate wavelet analysis and Gabor analysis. Wavelet analysis is associated with unitary irreducible representations of the affine group while the Gabor analysis is associated with unitary irreducible representations of the Heisenberg group. We obtain unitary irreducible representations of the Heisenberg group as contractions of representations of the extended affine group. Furthermore, we use these contractions to relate the two analyses, namely, we contract coherent states, resolutions of the identity, and tight frames. In order to obtain the standard Gabor frame, we construct a family of time localized wavelet frames that contract to that Gabor frame. Starting from a standard wavelet frame, we construct a family of frequency localized wavelet frames that contract to a nonstandard Gabor frame. In particular, we deform Gabor frames to wavelet frames.
Wavelet analysis of internal gravity waves
NASA Astrophysics Data System (ADS)
Hawkins, J.; Warn-Varnas, A.; Chin-Bing, S.; King, D.; Smolarkiewicsz, P.
2005-05-01
A series of model studies of internal gravity waves (igw) has been conducted for several regions of interest. Dispersion relations have been computed from the results using wavelet analysis as described by Meyers (1993). The wavelet transform is repeatedly applied over time and the components are evaluated with respect to their amplitude and peak position (Torrence and Compo, 1998). In this sense we have been able to compute dispersion relations from model results and from measured data. Qualitative agreement has been obtained in some cases. The results from wavelet analysis must be carefully interpreted because the igw models are fully nonlinear while wavelet analysis is fundamentally a linear technique. Nevertheless, a great deal of information describing igw propagation can be obtained from the wavelet transform. We address the domains over which wavelet analysis techniques can be applied and discuss the limits of their applicability.
Progressive image data compression with adaptive scale-space quantization
NASA Astrophysics Data System (ADS)
Przelaskowski, Artur
1999-12-01
Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization, and coding in a progressive fashion. Profitable coders with an embedded form of code and rate-fixing abilities, like Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, the reconstruction levels, and the quantization scheme in the SPIHT algorithm. Additionally, we present the results of the best filter bank selection. The most efficient biorthogonal filter banks are tested. A significant efficiency improvement of the SPIHT coder was noticed, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and progressive in resolution. As a result, improved compression effectiveness is achieved, close to 1.3 dB over SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, although sometimes the most efficient solution must be found in an iterative way. Final results are competitive with the most efficient wavelet coders.
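The initial-threshold choice being modified here is the standard bitplane rule for embedded coders: start at the largest power of two not exceeding max|c| and halve it each pass, so precision improves with every transmitted bitplane. A minimal sketch (function names are ours):

```python
import numpy as np

def initial_threshold(c):
    # Standard embedded-coder starting threshold: 2^floor(log2(max |c|)).
    return 2.0 ** np.floor(np.log2(np.max(np.abs(c))))

def progressive_reconstruct(c, n_passes):
    # After n bitplane passes the finest threshold is T0 / 2**(n-1);
    # every coefficient is then known to within that step.
    T = initial_threshold(c) / 2 ** (n_passes - 1)
    return np.sign(c) * np.floor(np.abs(c) / T) * T

c = np.array([-31.5, 7.2, 0.4, 100.0, -2.9])   # mock wavelet coefficients
for n in range(1, 6):
    T = initial_threshold(c) / 2 ** (n - 1)
    # Truncating the embedded stream after n passes bounds the error by T.
    assert np.all(np.abs(c - progressive_reconstruct(c, n)) < T)
```

Shifting or adapting this starting threshold, as the paper does, changes which coefficients become significant in the early passes and hence the rate-distortion behavior of the truncated stream.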
Science-based Region-of-Interest Image Compression
NASA Technical Reports Server (NTRS)
Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.
2004-01-01
As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.
2001-10-25
We evaluate a combined discrete wavelet transform (DWT) and wavelet packet algorithm to improve the homogeneity of magnetic resonance imaging when a ... image and uses this information to normalize the image intensity variations. Estimation of the coil sensitivity profile based on the wavelet transform of
Wavelet analysis in two-dimensional tomography
NASA Astrophysics Data System (ADS)
Burkovets, Dimitry N.
2002-02-01
The diagnostic possibilities of wavelet analysis of coherent images of connective tissue are considered for the diagnosis of its pathological changes. The effectiveness of polarization selection in obtaining wavelet-coefficient images is also shown. The wavelet structures characterizing skin psoriasis and bone-tissue osteoporosis have been analyzed. Histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue have been analyzed.
Application and Development of Wavelet Analysis
1992-08-15
found that optics is quite suitable to generate and display both the direct and the inverse wavelet transforms in parallel. Unlike the digital ... toward identifying the suitability of using optics for multichannel signal analysis. Both the Gabor and the wavelet transforms were studied in terms ... inverse wavelet transforms. This is the case for processing both one- and two-dimensional signals. A detailed comparison of the space-bandwidth
Develop, Apply and Evaluate Wavelet Technology.
1992-10-20
Eddington, A. S. (1928), The Nature of the Physical World, Cambridge: Cambridge University Press. [11] Einstein, A. (1955), The Meaning of Relativity ... Albuquerque, NM, 1990. [9] R. A. Gopinath and C. S. Burrus, "Wavelet transforms and filter banks," pp. 603-654 in Wavelets: A Tutorial in Theory and ... Resnikoff, "Multidimensional wavelet bases," Aware Technical Report, Aware, Inc., Cambridge, MA, 1991. [25] S. G. Mallat, "A theory for multiresolution
Wavelet transform of neural spike trains
NASA Astrophysics Data System (ADS)
Kim, Youngtae; Jung, Min Whan; Kim, Yunbok
2000-02-01
Wavelet transform of neural spike trains recorded with a tetrode in the rat primary somatosensory cortex is described. Continuous wavelet transform (CWT) of the spike train clearly shows singularities hidden in the noisy or chaotic spike trains. A multiresolution analysis of the spike train is also carried out using discrete wavelet transform (DWT) for denoising and approximating at different time scales. Results suggest that this multiscale shape analysis can be a useful tool for classifying the spike trains.
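The multiresolution analysis described above can be sketched with a plain orthonormal Haar pyramid. This is illustrative only: the paper's actual wavelet choice and tetrode recordings are not reproduced here, and the toy spike train below is synthetic.

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar DWT: averages and differences."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (coarser time scale)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (fine-scale fluctuations)
    return a, d

def haar_mra(x, levels):
    """Multilevel decomposition: approximations at successively coarser scales."""
    approximations, details = [], []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        approximations.append(a)
        details.append(d)
    return approximations, details

# Toy "spike train": a few spikes buried in Gaussian background activity
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.2, 64)
x[[10, 30, 50]] += 5.0
approx, detail = haar_mra(x, levels=3)
for lvl, a in enumerate(approx, start=1):
    print(f"level {lvl}: {a.size} samples")
```

Because the transform is orthonormal, the total energy of the signal is exactly partitioned between the coarsest approximation and the detail bands, which is what makes scale-by-scale shape analysis meaningful.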
Wavelet Features Based Fingerprint Verification
NASA Astrophysics Data System (ADS)
Bagadi, Shweta U.; Thalange, Asha V.; Jain, Giridhar P.
2010-11-01
In this work, we present an automatic fingerprint identification system based on Level 3 features. Systems based only on minutiae features do not perform well on poor-quality images. In practice, we often encounter extremely dry or wet fingerprint images with cuts, warts, etc. With such fingerprints, minutiae-based systems show poor performance in real-time authentication applications. To alleviate the problem of poor-quality fingerprints, and to improve overall performance of the system, this paper proposes fingerprint verification based on wavelet statistical features and co-occurrence matrix features. The features include mean, standard deviation, energy, entropy, contrast, local homogeneity, cluster shade, cluster prominence, and information measure of correlation. In this method, matching between the input image and the stored template can be done without exhaustive search using the extracted features. The wavelet transform-based approach outperforms the existing minutiae-based method and takes less response time, making it suitable for on-line verification with high accuracy.
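As an illustration of the wavelet statistical features named in the abstract, the sketch below computes mean, standard deviation, energy, and entropy over the four subbands of a one-level 2D Haar transform. The wavelet choice is a hypothetical stand-in (the paper does not specify it), and `fingerprint` here is random data rather than a real image.

```python
import numpy as np

def haar2d_level(img):
    """One 2D Haar level: transform rows then columns -> LL, LH, HL, HH subbands."""
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2.0)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)
    def cols(m):
        return ((m[0::2, :] + m[1::2, :]) / np.sqrt(2.0),
                (m[0::2, :] - m[1::2, :]) / np.sqrt(2.0))
    (ll, lh), (hl, hh) = cols(lo), cols(hi)
    return ll, lh, hl, hh

def subband_features(band):
    """A few of the statistical features named in the abstract, per subband."""
    e = band**2 / np.sum(band**2)            # normalized coefficient energies
    e = e[e > 0]
    return {"mean": float(np.mean(band)),
            "std": float(np.std(band)),
            "energy": float(np.sum(band**2)),
            "entropy": float(-np.sum(e * np.log2(e)))}

rng = np.random.default_rng(5)
fingerprint = rng.random((64, 64))           # stand-in for a fingerprint image
for name, band in zip(("LL", "LH", "HL", "HH"), haar2d_level(fingerprint)):
    print(name, subband_features(band))
```

A feature vector assembled this way is fixed-length regardless of image content, which is why template matching can avoid the exhaustive minutiae search.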
Multidimensional signaling via wavelet packets
NASA Astrophysics Data System (ADS)
Lindsey, Alan R.
1995-04-01
This work presents a generalized signaling strategy for orthogonally multiplexed communication. Wavelet packet modulation (WPM) employs the basis functions from an arbitrary pruning of a full dyadic tree structured filter bank as orthogonal pulse shapes for conventional QAM symbols. The multi-scale modulation (MSM) and M-band wavelet modulation (MWM) schemes which have been recently introduced are handled as special cases, with the added benefit of an entire library of potentially superior sets of basis functions. The figures of merit are derived and it is shown that the power spectral density is equivalent to that for QAM (in fact, QAM is another special case) and hence directly applicable in existing systems employing this standard modulation. Two key advantages of this method are increased flexibility in time-frequency partitioning and an efficient all-digital filter bank implementation, making the WPM scheme more robust to a larger set of interferences (both temporal and sinusoidal) and computationally attractive as well.
Wavelet analysis of epileptic spikes
NASA Astrophysics Data System (ADS)
Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.
2003-05-01
Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been the long standing problem in EEG analysis, especially after long-term monitoring became common in investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
Wavelet-Based Adaptive Denoising of Phonocardiographic Records
2007-11-02
the approximated signal, and d the signal details at the given scale; h and g are biorthogonal filters, corresponding to the selected mother wavelet...dyadic scale can be written as: where is the orthogonal mother wavelet, and: The discrete version of the dyadic wavelet transform can be based on...wavelet with 4 moments equal to zero (Coiflet-2) as the mother wavelet. The two channels were wavelet decomposed up to the 9th order (i = 0, 1 ... 8
Multiresolution Distance Volumes for Progressive Surface Compression
Laney, D; Bertram, M; Duchaineau, M; Max, N
2002-01-14
Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level-of-detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
NASA Astrophysics Data System (ADS)
Lord, Jesse W.; Rast, Mark P.; Mckinlay, Christopher; Clyne, John; Mininni, Pablo D.
2012-02-01
We examine the decomposition of forced Taylor-Green and Arnol'd-Beltrami-Childress (ABC) flows into coherent and incoherent components using an orthonormal wavelet decomposition. We ask whether wavelet coefficient thresholding based on the Donoho-Johnstone criterion can extract a coherent vortex signal while leaving behind Gaussian random noise. We find that no threshold yields a strictly Gaussian incoherent component, and that the most Gaussian incoherent flow is found for data compression lower than that achieved with the fully iterated Donoho-Johnstone threshold. Moreover, even at such low compression, the incoherent component shows clear signs of large-scale spatial correlations that are signatures of the forcings used to drive the flows.
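The Donoho-Johnstone criterion referenced above thresholds wavelet coefficients at the universal level T = σ√(2 ln N), with σ estimated from the median absolute deviation of the finest-scale details. A minimal 1D sketch under those assumptions (single-level Haar, synthetic data; not the authors' 3D flow pipeline or the fully iterated variant):

```python
import numpy as np

def haar_dwt(x):
    """Single-level orthonormal Haar transform, enough to illustrate thresholding."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def donoho_johnstone_threshold(detail, n):
    """Universal threshold T = sigma * sqrt(2 ln n), sigma from the MAD of details."""
    sigma = np.median(np.abs(detail)) / 0.6745   # robust noise-level estimate
    return sigma * np.sqrt(2.0 * np.log(n))

rng = np.random.default_rng(1)
n = 1024
coherent = np.repeat(rng.normal(0.0, 4.0, n // 32), 32)  # blocky "coherent" part
x = coherent + rng.normal(0.0, 0.5, n)                   # plus Gaussian noise

a, d = haar_dwt(x)
T = donoho_johnstone_threshold(d, n)
kept = np.abs(d) > T
print(f"threshold = {T:.3f}, detail coefficients kept: {kept.sum()} / {d.size}")
```

Coefficients below T are discarded as incoherent noise; the fraction kept is the compression the abstract compares against Gaussianity of the remainder.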
A 64-channel neural signal processor/ compressor based on Haar wavelet transform.
Shaeri, Mohammad Ali; Sodagar, Amir M; Abrishami-Moghaddam, Hamid
2011-01-01
A signal processor/compressor dedicated to implantable neural recording microsystems is presented. Signal compression is performed based on Haar wavelet. It is shown in this paper that, compared to other mathematical transforms already used for this purpose, compression of neural signals using this type of wavelet transform can be of almost the same quality, while demanding less circuit complexity and smaller silicon area. Designed in a 0.13-μm standard CMOS process, the 64-channel 8-bit signal processor reported in this paper occupies 113 μm x 110 μm of silicon area. It operates under a 1.8-V supply voltage at a master clock frequency of 3.2 MHz.
Optimization and implementation of the integer wavelet transform for image coding.
Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella
2002-01-01
This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First of all, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The obtained results lead to the IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.
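As a minimal illustration of an integer wavelet transform realized with lifting steps, the sketch below implements the integer Haar (S) transform. The paper's optimized polyphase factorizations use longer filters, but the defining property shown here, a lossless integer-arithmetic round trip, is the same. This is a sketch, not the authors' implementation.

```python
def iwt_haar_forward(x):
    """Integer Haar via lifting: detail d = a - b, then average s = b + floor(d/2)."""
    s, d = [], []
    for i in range(0, len(x), 2):
        a, b = x[i], x[i + 1]
        di = a - b
        si = b + (di >> 1)        # arithmetic shift = floor division, stays integer
        s.append(si)
        d.append(di)
    return s, d

def iwt_haar_inverse(s, d):
    """Undo the lifting steps in reverse order, exactly, with no rounding loss."""
    x = []
    for si, di in zip(s, d):
        b = si - (di >> 1)
        a = di + b
        x.extend([a, b])
    return x

samples = [118, 120, 121, 119, 40, 42, 200, 198]
s, d = iwt_haar_forward(samples)
assert iwt_haar_inverse(s, d) == samples   # lossless round trip
print(s, d)
```

Because each lifting step is individually invertible regardless of the rounding inside it, any such factorization yields a lossless transform; the design problem the paper addresses is choosing the factorization and coefficient precision that also give good lossy performance.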
An Implementation of Wavelet Analysis
1993-07-01
UNCLASSIFIED AD-A273 162. DSTO Surveillance Research Laboratory, High Frequency Radar Division, Australia. Technical Report SRL-0122-TR, "An Implementation of Wavelet Analysis," by C. J. Coleman. Approved for public release.
Wavelet methods in data mining
NASA Astrophysics Data System (ADS)
Manchanda, P.
2012-07-01
Data mining (knowledge discovery in databases) is a comparatively new interdisciplinary field developed by the joint efforts of mathematicians, statisticians, computer scientists and engineers. There are twelve important ingredients of this field, along with their applications to real-world problems. In this chapter, we review the application of wavelet methods to data mining, particularly denoising, dimension reduction, similarity search, feature extraction and prediction. Meteorological data of Saudi Arabia and stock market data of India are considered for illustration.
Multiplexing of volume holographic wavelet correlation processor
NASA Astrophysics Data System (ADS)
Feng, Wenyi; Yan, Yingbai; Jin, Guofan; Wu, Minxian; He, Qingsheng
2000-03-01
Volume holographic associative memory in a photorefractive crystal provides an inherent mechanism to develop a multi-channel correlation identification system with high parallelism. The wavelet transform is introduced to improve the discrimination of the system. We first investigate parameters of the system for parallelism enhancement, and then study multiplexing of the system over input objects and wavelet filters. A general volume holographic wavelet correlation processor has a single input-object channel and a single wavelet-filtering channel; in other words, it can only process one input object with one wavelet filter at a time. Based on the fact that a volume holographic correlator is not a shift-invariant system, multiplexing of input objects is proposed to improve the parallelism of the processor. As a result, several input objects can be recognized simultaneously. Multiplexing of wavelet filters with different wavelet parameters is also achieved by a Dammann grating. Wavelet correlation outputs with different filters are synthesized to improve the recognition accuracy of the processor. Corresponding experimental results in human face recognition are given. The combination of input-object multiplexing and wavelet-filter multiplexing is also described.
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising
NASA Astrophysics Data System (ADS)
Fan, W. J.; Lu, Y.
2006-10-01
Wavelet denoising is studied to improve the extraction of an OAS (optical aperture synthesis) object's Fourier information. Translation-invariant wavelet denoising, based on Donoho's wavelet soft-threshold denoising, is investigated to remove pseudo-Gibbs artifacts from soft-thresholded images. OAS object information extraction based on translation-invariant wavelet denoising is then studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of the object information extracted from the interferogram, and that information extraction with translation-invariant wavelet denoising is better than with plain soft-threshold wavelet denoising.
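Translation-invariant denoising of this kind is commonly implemented by cycle spinning: denoise every circular shift of the signal with soft thresholding, unshift, and average, which suppresses the pseudo-Gibbs oscillations a fixed sampling grid produces near edges. A one-level Haar sketch under those assumptions (synthetic 1D data; not the authors' interferogram pipeline):

```python
import numpy as np

def haar_fwd(x):
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(d, t):
    """Donoho soft threshold: shrink each coefficient toward zero by t."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def denoise(x, t):
    a, d = haar_fwd(x)
    return haar_inv(a, soft(d, t))

def denoise_ti(x, t):
    """Cycle spinning: average denoised circular shifts to restore shift invariance."""
    acc = np.zeros_like(x)
    for k in range(x.size):
        acc += np.roll(denoise(np.roll(x, k), t), -k)
    return acc / x.size

rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(32), np.ones(32)])   # a step edge
noisy = clean + rng.normal(0.0, 0.1, 64)
plain = denoise(noisy, t=0.3)
ti = denoise_ti(noisy, t=0.3)
print(f"RMS error plain: {np.sqrt(np.mean((plain - clean)**2)):.4f}")
print(f"RMS error TI:    {np.sqrt(np.mean((ti - clean)**2)):.4f}")
```

Averaging over shifts trades one transform for N of them; in practice a small subset of shifts, or an undecimated transform, gives most of the benefit at lower cost.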
Fu, C.Y.; Petrich, L.I.
1997-12-30
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT) via the addition of encryption in the SPIHT coding process has no effect on compression performance. A hyper-chaotic system, nonlinear inverse operation, Secure Hash Algorithm-256(SHA-256), and plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
NASA Technical Reports Server (NTRS)
1996-01-01
Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.
Video compression based on enhanced EZW scheme and Karhunen-Loeve transform
NASA Astrophysics Data System (ADS)
Soloveyko, Olexandr M.; Musatenko, Yurij S.; Kurashov, Vitalij N.; Dubikovskiy, Vladislav A.
2000-06-01
The paper presents a new method for video compression based on the enhanced embedded zerotree wavelet (EZW) scheme. Recently, video codecs from the EZW family which use a 3D version of the EZW or SPIHT algorithms have shown better performance than the MPEG-2 compression algorithm. These algorithms have many advantages inherent to wavelet-based schemes and EZW-like coders, the most important being good compression performance and scalability. However, they still allow improvement in several ways. First, as we recently showed, using the Karhunen-Loeve (KL) transform instead of the wavelet transform along the time axis improves the compression ratio. Second, instead of the 3D EZW quantization scheme, we propose using a convenient 2D quantization for every decorrelated frame, adding one symbol, 'Strong Zero Tree', which means that every frame from a chosen set has a zero tree in the same location. The suggested compression algorithm based on the KL transform, the wavelet transform, and a new quantization scheme with strong zerotrees is free from some drawbacks of the plain 3D EZW codec. The presented codec shows 1-6 dB better results compared to the MPEG-2 compression algorithm on video sequences with small and medium motion.
Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes.
Mamaghanian, Hossein; Khaled, Nadia; Atienza, David; Vandergheynst, Pierre
2011-09-01
Wireless body sensor networks (WBSN) hold the promise to be a key enabling information and communications technology for next-generation patient-centric telecardiology or mobile cardiology solutions. Through enabling continuous remote cardiac monitoring, they have the potential to achieve improved personalization and quality of care, increased ability of prevention and early diagnosis, and enhanced patient autonomy, mobility, and safety. However, state-of-the-art WBSN-enabled ECG monitors still fall short of the required functionality, miniaturization, and energy efficiency. Among others, energy efficiency can be improved through embedded ECG compression, in order to reduce airtime over energy-hungry wireless links. In this paper, we quantify the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote. Interestingly, our results show that CS represents a competitive alternative to state-of-the-art digital wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems. More specifically, while expectedly exhibiting inferior compression performance than its DWT-based counterpart for a given reconstructed signal quality, its substantially lower complexity and CPU execution time enables it to ultimately outperform DWT-based ECG compression in terms of overall energy efficiency. CS-based ECG compression is accordingly shown to achieve a 37.1% extension in node lifetime relative to its DWT-based counterpart for "good" reconstruction quality.
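The CS acquisition model referenced above reduces on-node work to a random linear projection y = Φx, deferring reconstruction to the receiver. The sketch below pairs a Gaussian sensing matrix with a basic Orthogonal Matching Pursuit decoder on a synthetic sparse signal; the Shimmer implementation, the actual reconstruction algorithm, and real ECG data are not reproduced here, and the iteration budget `2 * k` is an illustrative choice.

```python
import numpy as np

def omp(Phi, y, budget):
    """Orthogonal Matching Pursuit: greedily pick atoms, least-squares refit each time."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(budget):
        if np.linalg.norm(residual) < 1e-10:     # measurements fully explained
            break
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
n, m, k = 128, 48, 4                             # signal length, measurements, sparsity
x = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x[idx] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)  # k-sparse proxy signal
Phi = rng.normal(0.0, 1.0, (m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x                                      # cheap linear acquisition on the node
x_hat = omp(Phi, y, 2 * k)
print(f"reconstruction error: {np.linalg.norm(x_hat - x):.2e}")
```

The asymmetry is the point of the abstract: the sensor only computes the m x n product (or an even cheaper sparse/binary variant), while the iterative decoding cost lands on the receiver.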
NASA Astrophysics Data System (ADS)
Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.
2017-01-01
Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.
NASA Astrophysics Data System (ADS)
Ng, J.; Kingsbury, N. G.
2004-02-01
wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this (and subsequent) chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. The treatment on wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. The
Methods and limits of digital image compression of retinal images for telemedicine.
Eikelboom, R H; Yogesan, K; Barry, C J; Constable, I J; Tay-Kearney, M L; Jitskaia, L; House, P H
2000-06-01
To investigate image compression of digital retinal images and the effect of various levels of compression on the quality of the images. JPEG (Joint Photographic Experts Group) and Wavelet image compression techniques were applied in five different levels to 11 eyes with subtle retinal abnormalities and to 4 normal eyes. Image quality was assessed by four different methods: calculation of the root mean square (RMS) error between the original and compressed image, determining the level of arteriole branching, identification of retinal abnormalities by experienced observers, and a subjective assessment of overall image quality. To verify the techniques used and findings, a second set of retinal images was assessed by calculation of RMS error and overall image quality. Plots and tabulations of the data as a function of the final image size showed that when the original image size of 1.5 MB was reduced to 29 KB using JPEG compression, there was no serious degradation in quality. The smallest Wavelet compressed images in this study (15 KB) were generally still of acceptable quality. For situations where digital image transmission time and costs should be minimized, Wavelet image compression to 15 KB is recommended, although there is a slight cost of computational time. Where computational time should be minimized, and to remain compatible with other imaging systems, the use of JPEG compression to 29 KB is an excellent alternative.
Wavelet-Galerkin discretization of hyperbolic equations
Restrepo, J.M.; Leaf, G.K.
1994-12-31
The relative merits of the wavelet-Galerkin solution of hyperbolic partial differential equations, typical of geophysical problems, are quantitatively and qualitatively compared to traditional finite difference and Fourier-pseudo-spectral methods. The wavelet-Galerkin solution presented here is found to be a viable alternative to the two conventional techniques.
Vector coding of wavelet-transformed images
NASA Astrophysics Data System (ADS)
Zhou, Jun; Zhi, Cheng; Zhou, Yuanhua
1998-09-01
The wavelet, as a brand new tool in signal processing, has gained broad recognition. Using the wavelet transform, we can obtain octave-divided frequency bands with specific orientations, which combine well with the properties of the Human Visual System. In this paper, we discuss a classified vector quantization method for multiresolution-represented images.
Optical Wavelet Transform for Fingerprint Identification
1993-12-15
requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical... wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial
Wavelet Local Extrema Applied to Image Processing
1992-12-01
The research project had two components. In the first part, we developed a numerical method, based on the wavelet transform, for the solution of...on the orthogonal wavelet transform, that adapts the computational resolution in space and time to the regularity of the solution. This scheme saves
Improvements of embedded zerotree wavelet (EZW) coding
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-04-01
In this research, we investigate several improvements of embedded zerotree wavelet (EZW) coding. Several topics addressed include: the choice of wavelet transforms and boundary conditions, the use of arithmetic coder and arithmetic context and the design of encoding order for effective embedding. The superior performance of our improvements is demonstrated with extensive experimental results.
3D steerable wavelets in practice.
Chenouard, Nicolas; Unser, Michael
2012-11-01
We introduce a systematic and practical design for steerable wavelet frames in 3D. Our steerable wavelets are obtained by applying a 3D version of the generalized Riesz transform to a primary isotropic wavelet frame. The novel transform is self-reversible (tight frame) and its elementary constituents (Riesz wavelets) can be efficiently rotated in any 3D direction by forming appropriate linear combinations. Moreover, the basis functions at a given location can be linearly combined to design custom (and adaptive) steerable wavelets. The features of the proposed method are illustrated with the processing and analysis of 3D biomedical data. In particular, we show how those wavelets can be used to characterize directional patterns and to detect edges by means of a 3D monogenic analysis. We also propose a new inverse-problem formalism along with an optimization algorithm for reconstructing 3D images from a sparse set of wavelet-domain edges. The scheme results in high-quality image reconstructions which demonstrate the feature-reduction ability of the steerable wavelets as well as their potential for solving inverse problems.
Feature preserving compression of high resolution SAR images
NASA Astrophysics Data System (ADS)
Yang, Zhigao; Hu, Fuxiang; Sun, Tao; Qin, Qianqing
2006-10-01
Compression techniques are required to transmit the large amounts of high-resolution synthetic aperture radar (SAR) image data over the available channels. Common image compression methods may lose detail and weak information in the original images, especially in smooth areas and at edges with low contrast. This is known as the "smoothing effect", and it makes it difficult to extract and recognize useful image features such as points and lines. We propose a new SAR image compression algorithm that can reduce the "smoothing effect", based on an adaptive wavelet packet transform and feature-preserving rate allocation. Because images should be modeled as non-stationary information sources, a SAR image is partitioned into overlapped blocks. Each overlapped block is then transformed by an adaptive wavelet packet according to the statistical features of that block. In quantizing and entropy coding the wavelet coefficients, we integrate a feature-preserving technique. Experiments show that image quality at compression ratios up to 16:1 is improved significantly, and more weak information is preserved.
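Adaptive wavelet packet transforms of the kind described above typically choose a per-block decomposition by best-basis search: split a band only if the children's additive cost (e.g., the Coifman-Wickerhauser entropy, normalized by the block's total energy) is lower than the parent's. A 1D Haar sketch under those assumptions, not the paper's SAR-specific method:

```python
import numpy as np

def haar_split(x):
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def cost(c, total):
    """Coifman-Wickerhauser entropy; normalizing by the block's total energy
    keeps the cost additive, so parent vs. children comparisons are valid."""
    p = c**2 / total
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def best_basis(x, depth, total):
    """Recursively split a band only when the children's total cost is lower."""
    if depth == 0 or x.size < 2:
        return [x]
    low, high = haar_split(x)
    if cost(low, total) + cost(high, total) < cost(x, total):
        return (best_basis(low, depth - 1, total)
                + best_basis(high, depth - 1, total))
    return [x]

rng = np.random.default_rng(4)
t = np.arange(256)
block = np.sin(2 * np.pi * t / 8) + 0.05 * rng.normal(size=256)  # narrowband block
leaves = best_basis(block, depth=4, total=np.sum(block**2))
print(f"best basis uses {len(leaves)} bands")
```

Different blocks of an image generally select different trees, which is exactly the per-block adaptivity the abstract relies on.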
Improved satellite image compression and reconstruction via genetic algorithms
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary
2008-10-01
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
Robust object tracking in compressed image sequences
NASA Astrophysics Data System (ADS)
Mujica, Fernando; Murenzi, Romain; Smith, Mark J.; Leduc, Jean-Pierre
1998-10-01
Accurate object tracking is important in defense applications where an interceptor missile must home in on a target and track it through the pursuit until the strike occurs. The expense associated with an interceptor missile can be reduced through a distributed processing arrangement in which the computing platform running the tracking algorithm resides on the ground, and the interceptor need only carry the sensor and communications equipment as part of its electronics complement. In this arrangement, the sensor images are compressed and transmitted to the ground to facilitate real-time downloading of the data over the available bandlimited channels. The tracking algorithm is run on a ground-based computer, and tracking results are transmitted back to the interceptor as soon as they become available. Compression and transmission in this scenario introduce distortion. If severe, these distortions can lead to erroneous tracking results. As a consequence, tracking algorithms employed for this purpose must be robust to compression distortions. In this paper we introduce a robust object tracking algorithm based on the continuous wavelet transform. The algorithm processes image sequence data on a frame-by-frame basis, implicitly taking advantage of temporal history and spatial frame filtering to reduce the impact of compression artifacts. Test results show that tracking performance can be maintained at low transmission bit rates, and that the algorithm can be used reliably in conjunction with many well-known image compression algorithms.
Finite element wavelets with improved quantitative properties
NASA Astrophysics Data System (ADS)
Nguyen, Hoang; Stevenson, Rob
2009-08-01
In [W. Dahmen, R. Stevenson, Element-by-element construction of wavelets satisfying stability and moment conditions, SIAM J. Numer. Anal. 37 (1) (1999) 319-352 (electronic)], finite element wavelets were constructed on polygonal domains or Lipschitz manifolds that are piecewise parametrized by mappings with constant Jacobian determinants. The wavelets could be arranged to have any desired order of cancellation properties, and they generated stable bases for the Sobolev spaces Hs for (or s<=1 on manifolds). Unfortunately, it appears that the quantitative properties of these wavelets are rather disappointing. In this paper, we modify the construction from the above-mentioned work to obtain finite element wavelets which are much better conditioned.
Using wavelets to learn pattern templates
NASA Astrophysics Data System (ADS)
Scott, Clayton D.; Nowak, Robert D.
2002-07-01
Despite the success of wavelet decompositions in other areas of statistical signal and image processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown translation and rotation. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR (Template Learning from Atomic Representations), a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain. We discuss several applications, including template learning, pattern classification, and image registration.
NASA Astrophysics Data System (ADS)
Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda
2016-03-01
Magnetic Resonance Imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images through the body. Conventional MRI data are collected in the spatial-frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) has been proposed for MR imaging to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped, spatially selective radiofrequency (RF) excitation, while keeping the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited by SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for these disadvantages, this paper first introduces an undersampling scheme, named the significance map, for sparse wavelet-encoded k-space to speed up data acquisition and allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously, for further reduction in scan time desirable for medical applications. Simulation and experimental results are presented, showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high quality.
Seamless multiresolution isosurfaces using wavelets
Udeshi, T.; Hudson, R.; Papka, M. E.
2000-04-11
Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The solution combines existing isosurface generation algorithms and wavelet theory to produce a real-time solution to multiple-resolution isosurfaces.
Maiolo, M.; Vancheri, A.; Krause, R.; Danani, A.
2015-11-01
In this paper, we apply Multiresolution Analysis (MRA) to develop sparse but accurate representations for the Multiscale Coarse-Graining (MSCG) approximation to the many-body potential of mean force. We rigorously frame the MSCG method within MRA, so that all the instruments of this theory become available together with a multitude of new basis functions, namely the wavelets. The coarse-grained (CG) force field is hierarchically decomposed at different resolution levels, enabling the most appropriate wavelet family to be chosen for each physical interaction without requiring a priori knowledge of where the details are localized. The representation of the CG potential in this new efficient orthonormal basis leads to a compression of the signal information into a few large expansion coefficients. The multiresolution property of the wavelet transform allows the noise to be isolated and removed from the CG force-field reconstruction by thresholding the basis function coefficients in each frequency band independently. We discuss the implementation of our wavelet-based MSCG approach and demonstrate its accuracy using two different condensed-phase systems, i.e., liquid water and methanol. Simulations of liquid argon have also been performed using a one-to-one mapping between atomistic and CG sites. The latter model allows us to verify the accuracy of the method and to test different choices of wavelet families. Furthermore, the results of the computer simulations show that the efficiency and sparsity of the representation of the CG force field can be traced back to the mathematical properties of the chosen family of wavelets. This result is in agreement with what is known from the theory of multiresolution analysis of signals.
Iterative wavelet thresholding for rapid MRI reconstruction
NASA Astrophysics Data System (ADS)
Kayvanrad, Mohammad H.; McKenzie, Charles A.; Peters, Terry M.
2011-03-01
Following developments in the field of compressed sampling and sparse recovery, one can take advantage of the sparsity of an object, as additional a priori knowledge about the object, to reconstruct it from fewer samples than needed by traditional sampling strategies. Since most magnetic resonance (MR) images are sparse in some domain, in this work we consider the problem of MR reconstruction and how this idea could be applied to accelerate the process of MR image/map acquisition. In particular, based on the Papoulis-Gerchberg algorithm, an iterative thresholding algorithm for the reconstruction of MR images from limited k-space observations is proposed. The proposed method takes advantage of the sparsity of most MR images in the wavelet domain. Initializing with a minimum-energy reconstruction, the object of interest is reconstructed by going through a sequence of thresholding and recovery iterations. Furthermore, MR studies often involve the acquisition of multiple images in time that are highly correlated. This correlation can be used as additional knowledge about the object, beside sparsity, to further reduce the reconstruction time. The performance of the proposed algorithms is experimentally evaluated and compared to other state-of-the-art methods. In particular, we show that the quality of reconstruction is increased compared to total variation (TV) regularization and the conventional Papoulis-Gerchberg algorithm, both in the absence and in the presence of noise. Also, phantom experiments show good accuracy in the reconstruction of relaxation maps from a set of highly undersampled k-space observations.
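The thresholding-plus-recovery loop described above can be sketched in a minimal 1-D form. This is not the authors' algorithm: it assumes a one-level Haar transform in place of a full wavelet decomposition, a fixed soft threshold, and a synthetic sampling mask.

```python
import numpy as np

def haar(x):
    """One-level Haar analysis: approximation and detail bands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def ihaar(a, d):
    """Exact inverse of the one-level Haar analysis."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(c, t):
    """Soft-thresholding operator used in the sparsifying step."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def reconstruct(y, mask, n_iter=100, thresh=0.02):
    """Papoulis-Gerchberg-style iteration: re-impose the measured k-space
    samples, then soft-threshold the wavelet detail coefficients."""
    x = np.real(np.fft.ifft(y))            # minimum-energy (zero-filled) start
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[mask] = y[mask]                  # data consistency on sampled lines
        x = np.real(np.fft.ifft(X))
        a, d = haar(x)
        x = ihaar(a, soft(d, thresh))      # sparsify in the wavelet domain
    return x
```

The two projections (data consistency in k-space, sparsity in the wavelet domain) are applied alternately, which is the basic structure shared by this family of iterative thresholding methods.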
Adaptive three-dimensional motion-compensated wavelet transform for image sequence coding
NASA Astrophysics Data System (ADS)
Leduc, Jean-Pierre
1994-09-01
This paper describes a 3D spatio-temporal coding algorithm for the bit-rate compression of digital image sequences. The coding scheme is based on several specific components, namely a motion representation with a four-parameter affine model, a motion-adapted temporal wavelet decomposition along the motion trajectories, and a signal-adapted spatial wavelet transform. The motion estimation is performed on the basis of four-parameter affine transformation models, also called similitudes, which take into account translations, rotations, and scalings. The temporal wavelet filter bank exploits bi-orthogonal linear-phase dyadic decompositions. The 2D spatial decomposition is based on dyadic signal-adaptive filter banks with either para-unitary or bi-orthogonal bases. The adaptive filtering is carried out according to a performance criterion optimized under constraints, in order to maximize the compression ratio at the expense of graceful degradations of the subjective image quality. The major principle of the present technique is, in the analysis process, to extract and separate the motion contained in the sequences from the spatio-temporal redundancy and, in the compression process, to take into account the rate-distortion function on the basis of spatio-temporal psycho-visual properties so as to achieve the most graceful degradations. To complete the coding scheme, the compression procedure is composed of scalar quantizers, which exploit the spatio-temporal 3D psycho-visual properties of the Human Visual System, and of entropy coders, which finalize the bit-rate compression.
Wavelet Neural Network Using Multiple Wavelet Functions in Target Threat Assessment
Guo, Lihong; Duan, Hong
2013-01-01
Target threat assessment is a key issue in collaborative attack. To improve the accuracy and usefulness of target threat assessment in aerial combat, we propose a variant of wavelet neural networks, the MWFWNN network, to solve threat assessment. Selecting the appropriate wavelet function is difficult when constructing a wavelet neural network. This paper proposes a wavelet mother function selection algorithm based on minimum mean squared error, and then constructs the MWFWNN network using this algorithm. First, a wavelet function library is established; second, a wavelet neural network is constructed with each wavelet mother function in the library, and the wavelet function parameters and network weights are updated according to the relevant modifying formulas. Each constructed wavelet neural network is evaluated on the training set, and the optimal wavelet function with minimum mean squared error is then chosen to build the MWFWNN network. Experimental results show that the mean squared error is 1.23 × 10−3, which is better than WNN, BP, and PSO_SVM. The target threat assessment model based on the MWFWNN has good predictive ability, so it can quickly and accurately complete target threat assessment. PMID:23509436
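The pattern of trying every mother function in a library and keeping the minimum-MSE one can be illustrated with a plain least-squares fit instead of a trained neural network; the three candidate mother functions, the grid, and all names below are assumptions for the sketch, not the paper's library or training procedure.

```python
import numpy as np

# Hypothetical mother-function library (not the paper's).
library = {
    "gaussian":    lambda u: np.exp(-u**2 / 2),
    "mexican_hat": lambda u: (1 - u**2) * np.exp(-u**2 / 2),
    "morlet_real": lambda u: np.cos(5 * u) * np.exp(-u**2 / 2),
}

def design_matrix(psi, t, centers, scale=1.0):
    """Columns are translated/dilated copies of the mother function."""
    return np.stack([psi((t - c) / scale) for c in centers], axis=1)

def select_mother(t, y, centers, scale=1.0):
    """Fit y with each candidate by least squares; keep the minimum-MSE one."""
    best_name, best_mse = None, np.inf
    for name, psi in library.items():
        Phi = design_matrix(psi, t, centers, scale)
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        mse = np.mean((Phi @ w - y) ** 2)
        if mse < best_mse:
            best_name, best_mse = name, mse
    return best_name, best_mse
```

In the paper the per-candidate fit is a full wavelet-network training run; here it is collapsed to a single linear solve so the selection loop itself is easy to see.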
Image compression with embedded multiwavelet coding
NASA Astrophysics Data System (ADS)
Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay
1996-03-01
An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.
Face recognition based on wavelet transform and variance similarity
NASA Astrophysics Data System (ADS)
Zheng, Dezhong; Cui, Fayi
2008-12-01
Image matching for face recognition is studied. Variances of sequences derived from facial images are computed, and the weights used for the computation of similarity are obtained by a transform between the variance and the weight. Weights based on this theoretical derivation have good stability, and the variance similarity calculated with them is highly adaptable, weakening the impact of interferences such as noise and image deformation. The wavelet transform is a very good method for image compression, by which redundancies of the image are removed while the original features of the image are preserved. Because a facial image usually contains a large number of pixels, the wavelet transform is used to extract low-frequency images. Each facial variance similarity is then computed based on the matrix of the low-frequency image, and finally image matching is carried out for face recognition. Experiments show that the proposed method has the advantages of simple implementation, rapid recognition speed, and a high recognition rate.
Optimal Compressed Sensing and Reconstruction of Unstructured Mesh Datasets
Salloum, Maher; Fabian, Nathan D.; Hensinger, David M.; ...
2017-08-09
Exascale computing promises quantities of data too large to efficiently store and transfer across networks in order to analyze and visualize the results. We investigate compressed sensing (CS) as an in situ method to reduce the size of the data as it is being generated during a large-scale simulation. CS works by sampling the data on the computational cluster within an alternative function space, such as wavelet bases, and then reconstructing back to the original space on visualization platforms. While much work has gone into exploring CS on structured datasets, such as image data, we investigate its usefulness for point clouds such as the unstructured mesh datasets often found in finite element simulations. We sample using a technique that exhibits low coherence with tree wavelets found to be suitable for point clouds. We reconstruct using the stagewise orthogonal matching pursuit algorithm, which we improved to facilitate automated use in batch jobs. We analyze the achievable compression ratios and the quality and accuracy of the reconstructed results at each compression ratio. In the considered case studies, we are able to achieve compression ratios of up to two orders of magnitude with reasonable reconstruction accuracy and minimal visual deterioration in the data. Finally, our results suggest that, compared to other compression techniques, CS is attractive in cases where the compression overhead must be minimized and the reconstruction cost is not a significant concern.
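The reconstruction step above uses stagewise orthogonal matching pursuit; as a rough stand-in, here is plain orthogonal matching pursuit in a generic sensing setting. The random Gaussian matrix and synthetic sparse vector are assumptions for illustration, not the paper's tree-wavelet setup.

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: select k atoms one at a time,
    refitting the selected support by least squares after each pick."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # refit all selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

Stagewise OMP differs mainly in selecting several atoms per iteration against a noise-adaptive threshold, which is what makes it practical for the large batch reconstructions described in the abstract.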
Maximally Localized Radial Profiles for Tight Steerable Wavelet Frames.
Pad, Pedram; Uhlmann, Virginie; Unser, Michael
2016-03-22
A crucial component of steerable wavelets is the radial profile of the generating function in the frequency domain. In this work, we present an infinite-dimensional optimization scheme that helps us find the optimal profile for a given criterion over the space of tight frames. We consider two classes of criteria that measure the localization of the wavelet: the first specifies the spatial localization of the wavelet profile, and the second that of the resulting wavelet coefficients. From these metrics and the proposed algorithm, we construct tight wavelet frames that are optimally localized and provide their analytical expression. In particular, one of the considered criteria leads us back to the popular Simoncelli wavelet profile. Finally, investigations of local orientation estimation, image reconstruction from detected contours in the wavelet domain, and denoising indicate that optimizing wavelet localization improves the performance of steerable wavelets, since our new wavelets outperform the traditional ones.
[Lossless compression of hyperspectral image for space-borne application].
Li, Jin; Jin, Long-xu; Li, Guo-ning
2012-08-01
In order to overcome the difficulty of hardware implementation, the low compression ratio, and the time consumption of whole-image hyperspectral lossless compression algorithms based on prediction, transform, vector quantization, and their combinations, a hyperspectral image lossless compression algorithm for space-borne application is proposed in the present paper. First, intra-band prediction using a median predictor is applied only to the first image along the spectral line, and inter-band prediction is applied to the other band images. A two-step, bidirectional prediction algorithm is proposed for the inter-band prediction. In the first step, a proposed bidirectional second-order predictor is used to obtain a prediction reference value, and a proposed improved LUT prediction algorithm is used to obtain four LUT prediction values; the final prediction is then obtained through comparison between them and the prediction reference. Finally, verification experiments on the proposed compression algorithm were carried out using the compression system test equipment of the XX-X space hyperspectral camera. The experimental results show that the compression system works fast and stably. The average compression ratio reached 3.05 bpp. Compared with traditional approaches, the proposed method improves the average compression ratio by 0.14-2.94 bpp, effectively improving the lossless compression ratio and overcoming the difficulty of hardware implementation of whole-image wavelet-based compression schemes.
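The abstract's intra-band median predictor is not specified further; a common choice, assumed here for illustration, is the median edge detector (MED) used in LOCO-I/JPEG-LS, which predicts each pixel from its left, above, and above-left neighbours.

```python
def med_predict(left, above, above_left):
    """Median edge detector: picks min, max, or a planar prediction
    depending on the apparent edge direction at the current pixel."""
    if above_left >= max(left, above):
        # above-left is largest: likely a horizontal or vertical edge
        return min(left, above)
    if above_left <= min(left, above):
        return max(left, above)
    # smooth region: plane through the three neighbours
    return left + above - above_left
```

The encoder then codes only the residual (actual minus predicted value), which is small and strongly peaked around zero for natural imagery.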
Image coding with geometric wavelets.
Alani, Dror; Averbuch, Amir; Dekel, Shai
2007-01-01
This paper describes a new and efficient method for low bit-rate image coding based on recent developments in the theory of multivariate nonlinear piecewise polynomial approximation. It combines a binary space partition scheme with geometric wavelet (GW) tree approximation so as to efficiently capture curve singularities and provide a sparse representation of the image. The GW method successfully competes with state-of-the-art wavelet methods such as the EZW, SPIHT, and EBCOT algorithms. We report a gain of about 0.4 dB over the SPIHT and EBCOT algorithms at a bit rate of 0.0625 bits per pixel (bpp). It also outperforms other recent methods based on "sparse geometric representation"; for example, we report a gain of 0.27 dB over the Bandelets algorithm at 0.1 bpp. Although the algorithm is computationally intensive, its time complexity can be significantly reduced by collecting a "global" GW n-term approximation to the image from a collection of GW trees, each constructed separately over tiles of the image.
Optimization of integer wavelet transforms based on difference correlation structures.
Li, Hongliang; Liu, Guizhong; Zhang, Zhongwei
2005-11-01
In this paper, a novel lifting integer wavelet transform based on difference correlation structures (DCCS-LIWT) is proposed. First, we establish a relationship between the performance of a linear predictor and the difference correlations of an image. The obtained results provide a theoretical foundation for the construction of the optimal lifting filters. Then, the optimal prediction lifting coefficients, in the sense of least-square prediction error, are derived. DCCS-LIWT puts heavy emphasis on the image's inherent dependence. A distinct feature of this method is the use of the variance-normalized autocorrelation function of the difference image to construct a linear predictor and adapt the predictor to varying image sources. The proposed scheme also allows the lifting filters for the horizontal and vertical orientations to be calculated separately. Experimental evaluation shows that the proposed method produces better results than other well-known integer transforms for lossless image compression.
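For readers unfamiliar with integer lifting, here is the standard reversible CDF 5/3 predict/update pair (assuming periodic extension and even-length input), not the optimized DCCS-LIWT filters derived in the paper. Reversibility holds for any predictor because each lifting step can be undone exactly.

```python
import numpy as np

def lift53_forward(x):
    """Integer-to-integer CDF 5/3 lifting (periodic extension, even length)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - ((even + np.roll(even, -1)) >> 1)    # predict: detail band
    s = even + ((np.roll(d, 1) + d + 2) >> 2)      # update: approximation band
    return s, d

def lift53_inverse(s, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    even = s - ((np.roll(d, 1) + d + 2) >> 2)      # undo update
    odd = d + ((even + np.roll(even, -1)) >> 1)    # undo predict
    x = np.empty(2 * s.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

The arithmetic right shifts implement the floor divisions of the integer transform, so the round trip is bit-exact, which is the property DCCS-LIWT preserves while optimizing the predictor itself.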
Discrete directional wavelet bases and frames: analysis and applications
NASA Astrophysics Data System (ADS)
Dragotti, Pier Luigi; Velisavljevic, Vladan; Vetterli, Martin; Beferull-Lozano, Baltasar
2003-11-01
The application of the wavelet transform in image processing is most frequently based on a separable construction: lines and columns in an image are treated independently, and the basis functions are simply products of the corresponding one-dimensional functions. Such a method keeps the design and computation simple, but it cannot properly capture all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed, with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted, which leads to a multi-directional frame. This transform can be applied in many areas, such as denoising, non-linear approximation, and compression. The results on non-linear approximation and denoising show interesting gains compared to the standard two-dimensional analysis.
Detecting the BAO using Discrete Wavelet Packets
NASA Astrophysics Data System (ADS)
Garcia, Noel Anthony; Wu, Yunyun; Kadowaki, Kevin; Pando, Jesus
2017-01-01
We use wavelet packets to investigate the clustering of matter on galactic scales in search of the Baryon Acoustic Oscillations (BAO). We do so in two ways. First, we develop a wavelet packet approach to measure the power spectrum and apply this method to the CMASS galaxy catalogue from the Sloan Digital Sky Survey (SDSS). We compare the resulting power spectrum to published BOSS results by measuring a parameter β that compares our wavelet-detected oscillations to the results from the SDSS collaboration. We find that β=1, indicating that our wavelet packet methods detect the BAO at a level similar to traditional Fourier techniques. Second, we use wavelet packets to decompose, denoise, and then reconstruct the galaxy density field. Using this denoised field, we compute the standard two-point correlation function and successfully detect the BAO at r ≈ 105 h^-1 Mpc, in line with previous SDSS results. We conclude that wavelet packets reproduce the results of the key clustering statistics computed by other means, while showing distinct advantages in suppressing high-frequency noise and in keeping information localized.
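The decompose/denoise/reconstruct pipeline can be sketched with the simplest possible packet tree. The Haar filter, the full binary tree, and the hard threshold below are assumptions for illustration; they differ from the actual analysis of the survey data.

```python
import numpy as np

def wp_decompose(x, levels):
    """Full Haar wavelet-packet tree: split every band at every level."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            nxt.append((b[0::2] + b[1::2]) / np.sqrt(2))  # low-pass child
            nxt.append((b[0::2] - b[1::2]) / np.sqrt(2))  # high-pass child
        bands = nxt
    return bands

def wp_reconstruct(bands):
    """Merge sibling bands pairwise until the original signal is rebuilt."""
    while len(bands) > 1:
        nxt = []
        for a, d in zip(bands[0::2], bands[1::2]):
            b = np.empty(2 * a.size)
            b[0::2] = (a + d) / np.sqrt(2)
            b[1::2] = (a - d) / np.sqrt(2)
            nxt.append(b)
        bands = nxt
    return bands[0]

def wp_denoise(x, levels=3, thresh=0.5):
    """Hard-threshold each packet band independently, then reconstruct."""
    bands = [np.where(np.abs(b) > thresh, b, 0.0)
             for b in wp_decompose(x, levels)]
    return wp_reconstruct(bands)
```

Because every band at the finest level covers its own frequency slice, thresholding each band independently is what lets the method suppress high-frequency noise while keeping information localized, as the abstract notes.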
NASA Astrophysics Data System (ADS)
Hejazialhosseini, Babak; Rossinelli, Diego; Bergdorf, Michael; Koumoutsakos, Petros
2010-11-01
We present a space-time adaptive solver for single- and multi-phase compressible flows that couples average-interpolating wavelets with high-order finite volume schemes. The solver introduces the concept of wavelet blocks, handles large jumps in resolution, and employs local time-stepping for efficient time integration. We demonstrate that the inherently sequential wavelet-based adaptivity can be implemented efficiently on multicore computer architectures using task-based parallelism over the wavelet blocks. We validate our computational method on a number of benchmark problems and present simulations of shock-bubble interaction at different Mach numbers, demonstrating the accuracy and computational performance of the method.
Compressive Fresnel digital holography using Fresnelet based sparse representation
NASA Astrophysics Data System (ADS)
Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith
2015-04-01
Compressive sensing (CS) in digital holography requires only a small number of pixel-level detections in the hologram plane for accurate image reconstruction, which is achieved by exploiting the sparsity of the object wave. When the input object fields are non-sparse in the spatial domain, CS demands a suitable sparsification method such as wavelet decomposition. The Fresnelet, a wavelet basis suited to processing Fresnel digital holograms, is an efficient sparsifier for the complex Fresnel field obtained by the Fresnel transform of the object field, and it minimizes the mutual coherence between the sensing and sparsifying matrices involved in CS. This paper demonstrates the merits of Fresnelet-based sparsification in compressive digital Fresnel holography over the conventional method of sparsifying the input object field. Phase-shifting digital Fresnel holography (PSDH) is used to retrieve the complex Fresnel field for the chosen problem. Results from a numerical experiment are presented as a proof of concept.
Applications of a fast, continuous wavelet transform
Dress, W.B.
1997-02-01
A fast, continuous wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and from the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heartbeats. Audio reconstruction is aided by selection of desired regions in the 2-D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.
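The key idea, sampling the mother wavelet directly in the frequency domain and evaluating each scale with an FFT, can be sketched as follows. The Morlet profile, the center frequency w0, and the sqrt(s) normalization are assumptions for the sketch, not the exact wavelets or conventions of the work described above.

```python
import numpy as np

def cwt_freq(x, scales, w0=6.0):
    """Continuous wavelet transform computed scale-by-scale in frequency
    space: multiply the signal spectrum by the frequency-sampled wavelet
    and invert with an FFT."""
    n = x.size
    X = np.fft.fft(x)
    w = 2 * np.pi * np.fft.fftfreq(n)     # angular frequency grid
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Morlet wavelet sampled directly in the frequency domain
        # (analytic: only positive frequencies are kept)
        psi_hat = np.pi**-0.25 * np.exp(-0.5 * (s * w - w0) ** 2) * (w > 0)
        out[i] = np.fft.ifft(X * np.sqrt(s) * np.conj(psi_hat))
    return out
```

Because the wavelet is built on the frequency grid rather than dilated in time, the scale axis can be chosen freely, independent of the time-domain sampling, which is the flexibility the abstract highlights.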
Three-dimensional wavelet transform and multiresolution surface reconstruction from volume data
NASA Astrophysics Data System (ADS)
Wang, Yun; Sloan, Kenneth R., Jr.
1995-04-01
Multiresolution surface reconstruction from volume data is very useful in medical imaging, data compression, and multiresolution modeling. This paper presents a hierarchical structure for extracting multiresolution surfaces from volume data by using a 3-D wavelet transform. The hierarchical scheme is used to visualize different levels of detail of the surface and allows a user to explore different features of the surface at different scales. We use 3-D surface curvature as a smoothness condition to control the hierarchical level, and the distance error between the reconstructed surface and the original data as the stopping criterion. A 3-D wavelet transform provides an appropriate hierarchical structure to build the volume pyramid; it can be constructed from tensor products of 1-D wavelet transforms in three subspaces. We choose symmetric and smoothing filters such as the Haar, linear, pseudo-Coiflet, and cubic B-spline filters and their corresponding orthogonal wavelets to build the volume pyramid. The surface is reconstructed at each level of the volume data by using the cell interpolation method. Experimental results compare the different filters based on the distance errors of the surfaces.
Transionospheric signal detection with chirped wavelets
Doser, A.B.; Dunham, M.E.
1997-11-01
Chirped wavelets are utilized to detect dispersed signals in the joint time scale domain. Specifically, pulses that become dispersed by transmission through the ionosphere and are received by satellites as nonlinear chirps are investigated. Since the dispersion greatly lowers the signal to noise ratios, it is difficult to isolate the signals in the time domain. Satellite data are examined with discrete wavelet expansions. Detection is accomplished via a template matching threshold scheme. Quantitative experimental results demonstrate that the chirped wavelet detection scheme is successful in detecting the transionospheric pulses at very low signal to noise ratios.
Optical Planar Discrete Fourier and Wavelet Transforms
NASA Astrophysics Data System (ADS)
Cincotti, Gabriella; Moreolo, Michela Svaluto; Neri, Alessandro
2007-10-01
We present all-optical architectures to perform the discrete wavelet transform (DWT), wavelet packet (WP) decomposition, and discrete Fourier transform (DFT) using planar lightwave circuit (PLC) technology. Any compact-support wavelet filter can be implemented as an optical planar two-port lattice-form device, and different subband filtering schemes are possible to denoise or multiplex optical signals. We consider both parallel and serial input cases. We design a multiport encoder/decoder that is able to generate/process optical codes simultaneously, and a flexible logarithmic wavelength multiplexer with a flat-top profile and reduced crosstalk.
Wavelet Applications for Flight Flutter Testing
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.
1999-01-01
Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.
Wavelet and Gabor transforms for detection
NASA Astrophysics Data System (ADS)
Casasent, David P.; Smokelin, John-Scott; Ye, Anqi
1992-09-01
We consider wavelet and Gabor transforms for detection of candidate regions of interest in a 2-D scene. We generate wavelet and Gabor coefficients for each spatial region of a scene using new linear combination optical filters to reduce the output dimensionality and to simplify postprocessing. We use two sets of wavelet coefficients as indicators of edge activity to suppress background clutter. The Gabor coefficients are found to be excellent for object detection and robust to object distortions and contrast differences. We provide insight into the selection of the Gabor parameters.
Wavelet frames and admissibility in higher dimensions
NASA Astrophysics Data System (ADS)
Führ, Hartmut
1996-12-01
This paper is concerned with the relations between discrete and continuous wavelet transforms on k-dimensional Euclidean space. We start with the construction of continuous wavelet transforms with the help of square-integrable representations of certain semidirect products, thereby generalizing results of Bernier and Taylor. We then turn to frames of L2(Rk) and to the question of when the functions occurring in a given frame are admissible for a given continuous wavelet transform. For certain frames we give a characterization which generalizes a result of Daubechies to higher dimensions.
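Admissibility, the condition discussed above, requires the constant C_psi = ∫ |ψ̂(ω)|² / |ω| dω to be finite, which forces ψ̂(0) = 0 (a zero-mean wavelet). A quick numerical check for the Mexican-hat wavelet, whose normalization here is chosen for convenience:

```python
import numpy as np

# Mexican-hat (Ricker) wavelet in the frequency domain, up to normalization:
# psi_hat(w) = w^2 * exp(-w^2 / 2). Note psi_hat(0) = 0 (zero-mean wavelet).
omega = np.linspace(1e-6, 20.0, 200000)
psi_hat = omega**2 * np.exp(-omega**2 / 2)

# Admissibility constant C_psi = integral over R of |psi_hat(w)|^2 / |w| dw;
# by symmetry, twice the integral over (0, inf). Trapezoidal rule:
integrand = psi_hat**2 / omega
C_psi = 2 * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega))
print(C_psi)  # finite, so the wavelet is admissible (analytically C_psi = 1 here)
```

With this normalization the integrand reduces to ω³ e^{-ω²}, whose integral over (0, ∞) is 1/2, so C_psi = 1; the numeric value should match closely.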
Wavelet-based multispectral face recognition
NASA Astrophysics Data System (ADS)
Liu, Dian-Ting; Zhou, Xiao-Dan; Wang, Cheng-Wen
2008-09-01
This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies the combination of Gabor and the Fisherfaces method to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to variations in expression and illumination. The classification performance is improved by combining the multispectral information coming from the subbands that individually attain a low equal error rate. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms the previous multispectral image fusion method as well as the monospectral method.
Wavelet analysis of fusion plasma transients
Dose, V.; Venus, G.; Zohm, H.
1997-02-01
Analysis of transient signals in the diagnostic of fusion plasmas often requires the simultaneous consideration of their time and frequency information. The newly emerging technique of wavelet analysis contains both time and frequency domains. Therefore it can be a valuable tool for the analysis of transients. In this paper the basic method of wavelet analysis is described. As an example, wavelet analysis is applied to the well-known phenomena of mode locking and fishbone instability. The results quantify the current qualitative understanding of these events in terms of instantaneous frequencies and amplitudes and encourage applications of the method to other problems. © 1997 American Institute of Physics.
Dinç, Erdal; Baleanu, Dumitru
2004-01-01
The discrete and continuous wavelet transforms were applied to the overlapping signal analysis of the ratio data signal for simultaneous quantitative determination of the title subject compounds in samples. The ratio spectra data of the binary mixtures containing benazepril (BE) and hydrochlorothiazide (HCT) were transferred as data vectors into the wavelet domain. Signal compression, followed by a one-dimensional continuous wavelet transform (CWT), was used to obtain coincident transformed signals for pure BE and HCT and their mixtures. The coincident transformed amplitudes corresponding to both maximum and minimum points allowed construction of calibration graphs for each compound in the binary mixture. The validity of CWT calibrations was tested by analyzing synthetic mixtures of the investigated compounds, and successful results were obtained. All calculations were performed within EXCEL, C++, and MATLAB 6.5 software. The obtained results indicated that our approach is flexible and applicable to binary mixture analysis.
Lossless compression of 3D seismic data using a horizon displacement compensated 3D lifting scheme
NASA Astrophysics Data System (ADS)
Meftah, Anis; Antonini, Marc; Ben Amar, Chokri
2010-01-01
In this paper we present a method to optimize the computation of the wavelet transform for 3D seismic data while reducing the energy of the coefficients to a minimum. This allows us to reduce the entropy of the signal and thus increase the compression ratio. The proposed method exploits the geometrical information contained in the 3D seismic data to optimize the computation of the wavelet transform. Indeed, the classic filtering is replaced by filtering that follows the horizons contained in the 3D seismic images. Applying this approach in two dimensions yields wavelet coefficients with the lowest energy. The experiments show that our method saves an extra 8% of the object's size compared to the classic wavelet transform.
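The lifting scheme named in the title can be illustrated in one dimension. The paper's contribution is to steer the prediction along seismic horizons; the sketch below is only the plain Haar lifting (split, predict, update), showing the perfect-reconstruction property that any lifting factorization guarantees.

```python
import numpy as np

def lifting_haar_forward(x):
    """One level of the Haar transform via lifting:
    split into even/odd, predict the detail, update the approximation."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: odd sample from its even neighbour
    approx = even + detail / 2.0   # update: approximation keeps the local mean
    return approx, detail

def lifting_haar_inverse(approx, detail):
    """Invert by undoing the lifting steps in reverse order."""
    even = approx - detail / 2.0
    odd = detail + even
    x = np.empty(2 * approx.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 20.0, 24.0])
a, d = lifting_haar_forward(x)
print(np.allclose(lifting_haar_inverse(a, d), x))  # perfect reconstruction
```

Because each lifting step is trivially invertible, any predictor (including one that follows horizon geometry, as proposed above) still yields a perfectly invertible transform.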
Destounis, Stamatia; Somerville, Patricia; Murphy, Philip; Seifert, Posy
2011-02-01
Problems associated with the large file sizes of digital mammograms have impeded the integration of digital mammography with picture archiving and communications systems. Digital mammograms irreversibly compressed by the novel wavelet Access Over Network (AON) compression algorithm were compared with lossless-compressed digital mammograms in a blinded reader study to evaluate the perceived sufficiency of irreversibly compressed images for comparison with next-year mammograms. Fifteen radiologists compared the same 100 digital mammograms in three different comparison modes: lossless-compressed vs 20:1 irreversibly compressed images (mode 1), lossless-compressed vs 40:1 irreversibly compressed images (mode 2), and 20:1 irreversibly compressed images vs 40:1 irreversibly compressed images (mode 3). Compression levels were randomly assigned between monitors. For each mode, the less compressed of the two images was correctly identified no more frequently than would occur by chance if all images were identical in compression. Perceived sufficiency for comparison with next-year mammograms was achieved by 97.37% of the lossless-compressed images and 97.37% of the 20:1 irreversibly compressed images in mode 1, 97.67% of the lossless-compressed images and 97.67% of the 40:1 irreversibly compressed images in mode 2, and 99.33% of the 20:1 irreversibly compressed images and 99.19% of the 40:1 irreversibly compressed images in mode 3. In a random-effect analysis, the irreversibly compressed images were found to be noninferior to the lossless-compressed images. Digital mammograms irreversibly compressed by the wavelet AON compression algorithm were as frequently judged sufficient for comparison with next-year mammograms as lossless-compressed digital mammograms.
Significance tests for the wavelet cross spectrum and wavelet linear coherence
NASA Astrophysics Data System (ADS)
Ge, Z.
2008-12-01
This work attempts to develop significance tests for the wavelet cross spectrum and the wavelet linear coherence as a follow-up study on Ge (2007). Conventional approaches that are used by Torrence and Compo (1998) based on stationary background noise time series were used here in estimating the sampling distributions of the wavelet cross spectrum and the wavelet linear coherence. The sampling distributions are then used for establishing significance levels for these two wavelet-based quantities. In addition to these two wavelet quantities, properties of the phase angle of the wavelet cross spectrum of, or the phase difference between, two Gaussian white noise series are discussed. It is found that the tangent of the principal part of the phase angle approximately has a standard Cauchy distribution and the phase angle is uniformly distributed, which makes it impossible to establish significance levels for the phase angle. The simulated signals clearly show that, when there is no linear relation between the two analysed signals, the phase angle disperses into the entire range of [-π,π] with fairly high probabilities for values close to ±π to occur. Conversely, when linear relations are present, the phase angle of the wavelet cross spectrum settles around an associated value with considerably reduced fluctuations. When two signals are linearly coupled, their wavelet linear coherence will attain values close to one. The significance test of the wavelet linear coherence can therefore be used to complement the inspection of the phase angle of the wavelet cross spectrum. The developed significance tests are also applied to actual data sets, simultaneously recorded wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan. Significance levels of the wavelet cross spectrum and the wavelet linear coherence between the winds and the waves reasonably separated meaningful peaks from those generated by randomness in the data set. As with simulated
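The phase-angle behaviour described above (uniformly distributed phase for unrelated Gaussian white noise series) can be checked with a small Monte Carlo sketch; the sample sizes here are arbitrary choices for illustration.

```python
import numpy as np

# For two independent Gaussian white noise series, the phase of their
# cross spectrum should be (approximately) uniform on [-pi, pi].
rng = np.random.default_rng(2)
n_trials, n = 2000, 256
phases = []
for _ in range(n_trials):
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    cross = np.fft.rfft(x) * np.conj(np.fft.rfft(y))
    phases.append(np.angle(cross[1:-1]))   # skip the DC and Nyquist bins
phases = np.concatenate(phases)

# A uniform distribution on [-pi, pi] has mean 0 and variance pi^2 / 3.
print(phases.mean(), phases.var())
```

The sample mean should be near 0 and the sample variance near π²/3 ≈ 3.29, consistent with a uniform phase; the tangent of such a phase is then Cauchy-distributed, which is why no useful significance level exists for the phase alone.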
Image and video compression for HDR content
NASA Astrophysics Data System (ADS)
Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.
2012-10-01
High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video. Many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on previous work of ours and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge. Masking is more consistent on the darker side of the edge.
Video compression transmission via FM radio
NASA Astrophysics Data System (ADS)
Do, Chat C.; Szu, Harold H.
2001-03-01
Video remains one of the most effective forms of communication. In a recent study by Dr. Charles Hsu and Dr. Harold Szu, video was highly compressed using feature-preserving but lossy discrete wavelet transform (DWT) technology. The DWT processing improves the video compression level, storage capacity, filtering, and restoration techniques. This technology allows real-time video to run over a radio link with fair quality, owing to its compression efficiency and low computational complexity. After compression, the video can be stored and transmitted at 16 kbps through any reliable medium while retaining reasonable video quality. Hsu and Szu have carried out extensive simulations and successfully implemented the method in brassboards. The main objective of this paper is to present how this highly compressed video can be transmitted to users interactively via an FM radio link using a special technique. This application would enable many radio users to receive video through a radio receiver box. It is of particular interest in developing countries, where television transmission is hardly affordable, for education, distance learning, telemedicine, low-cost sports, one-way videoconferencing, and entertainment broadcasting.
Compressed sensing for STEM tomography.
Donati, Laurène; Nilchian, Masih; Trépout, Sylvain; Messaoudi, Cédric; Marco, Sergio; Unser, Michael
2017-08-01
A central challenge in scanning transmission electron microscopy (STEM) is to reduce the electron radiation dosage required for accurate imaging of 3D biological nano-structures. Methods that permit tomographic reconstruction from a reduced number of STEM acquisitions without introducing significant degradation in the final volume are thus of particular importance. In random-beam STEM (RB-STEM), the projection measurements are acquired by randomly scanning a subset of pixels at every tilt view. In this work, we present a tailored RB-STEM acquisition-reconstruction framework that fully exploits the compressed sensing principles. We first demonstrate that RB-STEM acquisition fulfills the "incoherence" condition when the image is expressed in terms of wavelets. We then propose a regularized tomographic reconstruction framework to recover volumes from RB-STEM measurements. We demonstrate through simulations on synthetic and real projection measurements that the proposed framework reconstructs high-quality volumes from strongly downsampled RB-STEM data and outperforms existing techniques at doing so. This application of compressed sensing principles to STEM paves the way for a practical implementation of RB-STEM and opens new perspectives for high-quality reconstructions in STEM tomography. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
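The recovery step in a compressed-sensing pipeline like the one above can be sketched generically. This is not the authors' RB-STEM solver; it is a minimal ISTA (iterative soft-thresholding) reconstruction of a signal that is sparse in an orthonormal basis, measured with a random matrix, with all dimensions chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k = 128, 60, 5
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]   # sparsifying basis
alpha = np.zeros(n)
alpha[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
x_true = Psi @ alpha                                  # k-sparse in Psi

A = rng.standard_normal((m, n)) / np.sqrt(m)          # random measurements
y = A @ x_true

# ISTA on the coefficients: minimize 0.5*||M a - y||^2 + lam*||a||_1.
M = A @ Psi
L = np.linalg.norm(M, 2) ** 2                         # Lipschitz constant
lam = 0.05
a = np.zeros(n)
for _ in range(500):
    z = a - (M.T @ (M @ a - y)) / L                   # gradient step
    a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

x_rec = Psi @ a
err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
print(err)  # small relative error: far fewer measurements than unknowns
```

The same structure underlies regularized tomographic reconstruction: a forward operator in place of `A`, a wavelet transform in place of `Psi`, and a sparsity-promoting proximal step.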
Energy and Quality Evaluation for Compressive Sensing of Fetal Electrocardiogram Signals
Da Poian, Giulia; Brandalise, Denis; Bernardini, Riccardo; Rinaldo, Roberto
2016-01-01
This manuscript addresses the problem of non-invasive fetal Electrocardiogram (ECG) signal acquisition with low power/low complexity sensors. A sensor architecture using the Compressive Sensing (CS) paradigm is compared to a standard compression scheme using wavelets in terms of energy consumption vs. reconstruction quality, and, more importantly, vs. performance of fetal heart beat detection in the reconstructed signals. We show in this paper that a CS scheme based on reconstruction with an over-complete dictionary has similar reconstruction quality to one based on wavelet compression. We also consider, as a more important figure of merit, the accuracy of fetal beat detection after reconstruction as a function of the sensor power consumption. Experimental results with an actual implementation in a commercial device show that CS allows significant reduction of energy consumption in the sensor node, and that the detection performance is comparable to that obtained from original signals for compression ratios up to about 75%. PMID:28025510
NASA Astrophysics Data System (ADS)
Schmanske, Brian M.; Loew, Murray H.
2003-05-01
A technique for assessing the impact of lossy wavelet-based image compression on signal detection tasks is presented. A medical image's value is based on its ability to support clinical decisions such as detecting and diagnosing abnormalities. Image quality of compressed images is, however, often stated in terms of mathematical metrics such as mean square error. The presented technique provides a more suitable measure of image degradation by building on the channelized Hotelling observer model, which has been shown to predict human performance of signal detection tasks in noise-limited images. The technique first decomposes an image into its constituent wavelet subband coefficient bit-planes. Channel responses for the individual subband bit-planes are computed, combined, and processed with a Hotelling observer model to provide a measure of signal detectability versus compression ratio. This allows a user to determine how much compression can be tolerated before signal detectability drops below a certain threshold.
Dictionary Approaches to Image Compression and Reconstruction
NASA Technical Reports Server (NTRS)
Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.
1998-01-01
This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
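Of the four methods compared above, Matching Pursuits is the simplest to sketch: greedily pick the dictionary atom most correlated with the residual and subtract its projection. The random unit-norm dictionary below is a hypothetical stand-in for the paper's wavelet-packet dictionary.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter):
    """Greedy Matching Pursuit over a (possibly overcomplete) dictionary
    whose columns are unit-norm atoms."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual            # correlation with every atom
        k = int(np.argmax(np.abs(corr)))          # best-matching atom
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]    # remove its contribution
    return coeffs, residual

rng = np.random.default_rng(3)
D = rng.standard_normal((64, 256))                # overcomplete: 256 atoms in R^64
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 5] - 1.5 * D[:, 100]               # sparse in the dictionary

c, r = matching_pursuit(x, D, n_iter=20)
print(np.linalg.norm(r) / np.linalg.norm(x))      # residual shrinks rapidly
```

For compression, only the few selected (index, coefficient) pairs need to be stored; BP, BOB, and MOF replace the greedy selection with global optimization criteria.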
NASA Astrophysics Data System (ADS)
Gains, David
2009-05-01
Iris-C is an image codec designed for streaming video applications that demand low bit rate, low latency, lossless image compression. To achieve compression and low latency the codec features the discrete wavelet transform, Exp-Golomb coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec accepts input video from both the YUV and YCoCg colour spaces, but the system can also operate on Bayer RAW data read directly from an image sensor. Testing shows that the Iris-C codec is competitive with the Dirac low delay syntax codec which is typically regarded as the state-of-the-art low latency, lossless video compressor.
High-frequency subband compressed sensing MRI using quadruplet sampling.
Sung, Kyunghyun; Hargreaves, Brian A
2013-11-01
To present and validate a new method that formalizes a direct link between k-space and wavelet domains to apply separate undersampling and reconstruction for high- and low-spatial-frequency k-space data. High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, compressed sensing can be used for high-spatial-frequency regions, whereas parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for compressed sensing and regular undersampling for parallel imaging. Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution three-dimensional breast imaging with a net acceleration of 11-12. The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. This new formulation also reduces the reconstruction time due to the smaller problem size. Copyright © 2012 Wiley Periodicals, Inc.
Nonuniform spatially adaptive wavelet packets
NASA Astrophysics Data System (ADS)
Carre, Philippe; Fernandez-Maloigne, Christine
2000-12-01
In this paper, we propose a new decomposition scheme for spatially adaptive wavelet packets. Contrary to the double-tree algorithm, our method is non-uniform and shift-invariant in the time and frequency domains, and is minimal for an information cost function. We propose some restrictions to our algorithm that reduce its complexity and permit us to provide time-frequency partitions of the signal in agreement with its structure. This new 'totally' non-uniform transform, more adapted than the Malvar, wavelet packet, or dyadic double-tree decompositions, allows the study of all possible time-frequency partitions with the only restriction that the blocks are rectangular. It yields a satisfying time-frequency representation and is applied to the study of EEG signals.
Wavelet differential neural network observer.
Chairez, Isaac
2009-09-01
State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or it is completely unknown. The differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weights dynamics as well as for the mean squared estimation error. Two numeric examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations and their parameters are unknown.
Digital transceiver implementation for wavelet packet modulation
NASA Astrophysics Data System (ADS)
Lindsey, Alan R.; Dill, Jeffrey C.
1998-03-01
Current transceiver designs for wavelet-based communication systems are typically reliant on analog waveform synthesis; however, digital processing is an important part of the eventual success of these techniques. In this paper, a transceiver implementation is presented for the recently introduced wavelet packet modulation scheme which moves the analog processing as far as possible toward the antenna. The transceiver is based on the discrete wavelet packet transform which incorporates level and node parameters for generalized computation of wavelet packets. In this transform no particular structure is imposed on the filter bank save dyadic branching, and a maximum level which is specified a priori and dependent mainly on speed and/or cost considerations. The transmitter/receiver structure takes a binary sequence as input and, based on the desired time-frequency partitioning, processes the signal through demultiplexing, synthesis, analysis, multiplexing and data determination entirely in the digital domain, with the exception of conversion into and out of the analog domain for transmission.
Wavelet Analysis for Acoustic Phased Array
NASA Astrophysics Data System (ADS)
Kozlov, Inna; Zlotnick, Zvi
2003-03-01
Wavelet spectrum analysis is known to be one of the most powerful tools for exploring quasistationary signals. In this paper we use wavelet techniques to develop a new Direction Finding (DF) algorithm for Acoustic Phased Array (APA) systems. Utilising multi-scale analysis of libraries of wavelets allows us to work with frequency bands instead of the individual frequencies of an acoustic source. These frequency bands can be regarded as features extracted from quasistationary signals emitted by a noisy object. For detection, tracing, and identification of a sound source in a noisy environment we develop a smart algorithm. The essential part of this algorithm is a special interacting procedure between the above-mentioned DF algorithm and the wavelet-based Identification (ID) algorithm developed in [4]. Significant improvement of the basic properties of a receiving APA pattern is achieved.
Signal Approximation with a Wavelet Neural Network
1992-12-01
specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the … accurately approximated with a WNN trained with irregularly sampled data. Keywords: signal approximation, wavelet neural network.
Discrete multiscale wavelet shrinkage and integrodifferential equations
NASA Astrophysics Data System (ADS)
Didas, S.; Steidl, G.; Weickert, J.
2008-04-01
We investigate the relation between discrete wavelet shrinkage and integrodifferential equations in the context of simplification and denoising of one-dimensional signals. In the continuous setting, strong connections between these two approaches were discovered in [6]. The key observation is that the wavelet transform can be understood as a derivative operator applied after convolution with a smoothing kernel. In this paper, we extend these ideas to the practically relevant discrete setting with both orthogonal and biorthogonal wavelets. In the discrete case, the behaviour of the smoothing kernels for different scales requires additional investigation. The results of discrete multiscale wavelet shrinkage and related discrete versions of integrodifferential equations are compared with respect to their denoising quality by numerical experiments.
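Wavelet shrinkage itself is compact enough to sketch. The following minimal example denoises a piecewise-constant signal by soft-thresholding the detail coefficients of a multi-level Haar transform; the signal, noise level, and universal threshold are illustrative choices, not the paper's experiments.

```python
import numpy as np

def haar_fwd(x, levels):
    """Multi-level orthonormal Haar analysis: returns (approximation, details)."""
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))   # detail coefficients
        a = (even + odd) / np.sqrt(2.0)              # approximation
    return a, coeffs

def haar_inv(a, coeffs):
    """Invert haar_fwd level by level."""
    for d in reversed(coeffs):
        even, odd = (a + d) / np.sqrt(2.0), (a - d) / np.sqrt(2.0)
        a = np.empty(2 * a.size)
        a[0::2], a[1::2] = even, odd
    return a

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(4)
clean = np.repeat([0.0, 4.0, -2.0, 3.0], 64)         # piecewise-constant signal
sigma = 0.5
noisy = clean + sigma * rng.standard_normal(clean.size)

a, details = haar_fwd(noisy, levels=4)
thr = sigma * np.sqrt(2 * np.log(noisy.size))        # universal threshold
denoised = haar_inv(a, [soft(d, thr) for d in details])
print(np.linalg.norm(denoised - clean) < np.linalg.norm(noisy - clean))
```

Shrinking the detail (derivative-like) coefficients while keeping the smoothed approximation is exactly the structure that links shrinkage to the integrodifferential equations studied above.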
Wavelet-based acoustic recognition of aircraft
Dress, W.B.; Kercel, S.W.
1994-09-01
We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.
A channel differential EZW coding scheme for EEG data compression.
Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice
2011-11-01
In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using the embedded zero-tree wavelet encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.
Spectral Ringing Artifacts in Hyperspectral Image Data Compression
NASA Astrophysics Data System (ADS)
Klimesh, M.; Kiely, A.; Xie, H.; Aranki, N.
2005-02-01
When a three-dimensional wavelet decomposition is used for compression of hyperspectral images, spectral ringing artifacts can arise, manifesting themselves as systematic biases in some reconstructed spectral bands. More generally, systematic differences in signal level in different spectral bands can hurt compression effectiveness of spatially low-pass subbands. The mechanism by which this occurs is described in the context of ICER-3D, a hyperspectral imagery extension of the ICER image compressor. Methods of mitigating or eliminating the detrimental effects of systematic band-dependent signal levels are proposed and discussed, and results are presented.
The wavelet response as a multiscale NDT method.
Le Gonidec, Y; Conil, F; Gibert, D
2003-08-01
We analyze interfaces by using reflected waves in the framework of the wavelet transform. First, we introduce the wavelet transform as an efficient method to detect and characterize a discontinuity in the acoustical impedance profile of a material. Synthetic examples are shown for both an isolated reflector and multiscale clusters of nearby defects. In the second part of the paper we present the wavelet response method as a natural extension of the wavelet transform when the velocity profile to be analyzed can only be remotely probed by propagating wavelets through the medium (instead of being directly convolved as in the wavelet transform). The wavelet response is constituted by the reflections of the incident wavelets on the discontinuities and we show that both transforms are equivalent when multiple scattering is neglected. We end this paper by experimentally applying the wavelet response in an acoustic tank to characterize planar reflectors with finite thicknesses.
Applications of a fast continuous wavelet transform
NASA Astrophysics Data System (ADS)
Dress, William B.
1997-04-01
A fast, continuous, wavelet transform, justified by appealing to Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and from the standard treatment of the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as representing the continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Although more computationally costly and not represented by an orthogonal basis, the inherent flexibility and shift invariance of the frequency-space wavelets are advantageous for certain applications. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2D representation of the magnitude of the transformed signals. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, different features may be extracted from voice
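The core idea above, sampling the mother wavelet in the frequency domain and computing each scale as an FFT product, can be sketched as follows. The Morlet wavelet, scale grid, and test tone are illustrative choices, not the forensic or geophone data described in the abstract.

```python
import numpy as np

def cwt_freq(signal, scales, fs, omega0=6.0):
    """Continuous wavelet transform computed in frequency space:
    multiply the FFT of the signal, scale by scale, by a Morlet wavelet
    sampled directly in the frequency domain, then inverse-transform."""
    n = signal.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)
    sig_hat = np.fft.fft(signal)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Analytic Morlet sampled in frequency (positive frequencies only).
        psi_hat = np.exp(-0.5 * (s * omega - omega0) ** 2) * (omega > 0)
        out[i] = np.fft.ifft(sig_hat * np.conj(psi_hat) * np.sqrt(s))
    return out

fs = 100.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 5.0 * t)                 # 5 Hz test tone
scales = np.linspace(0.05, 0.6, 50)
W = cwt_freq(sig, scales, fs)

# The modulus ridge should sit near the scale matching 5 Hz:
# s ~ omega0 / (2*pi*f) = 6 / (2*pi*5) ~ 0.19.
ridge = scales[int(np.argmax(np.abs(W).mean(axis=1)))]
print(ridge)
```

Because the scale grid is chosen independently of the time-domain sampling, arbitrary (non-dyadic) scale-translation lattices come at no extra conceptual cost, which is the flexibility the abstract emphasizes.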
EEG Multiresolution Analysis Using Wavelet Transform
2007-11-02
Wavelet transform (WT) is a new multiresolution time-frequency analysis method. WT possesses good localization in both time and frequency ... plays a key role in diagnosing diseases and is useful for both physiological research and medical applications. Using the dyadic wavelet ... transform, the EEG signals are successfully decomposed into the alpha rhythm (8-13 Hz), beta rhythm (14-30 Hz), theta rhythm (4-7 Hz), and delta rhythm (0.3-3 Hz) and
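A dyadic decomposition splits the spectrum in octaves, which is why the detail levels line up roughly with the EEG rhythms. The sketch below computes the nominal band edges per level; the sampling rate of 128 Hz is an assumption (the abstract does not state one), so the mapping to rhythms is approximate.

```python
fs = 128.0           # assumed sampling rate in Hz (not stated in the abstract)
levels = 5
bands = {}
hi = fs / 2
for level in range(1, levels + 1):
    lo = fs / 2 ** (level + 1)
    bands['D%d' % level] = (lo, hi)   # detail band at this level
    hi = lo
bands['A%d' % levels] = (0.0, hi)     # final approximation band
# D2 ~ beta (16-32 Hz), D3 ~ alpha (8-16 Hz), D4 ~ theta (4-8 Hz), D5+A5 ~ delta
```

The octave edges (8-16 Hz, 4-8 Hz, ...) only approximate the clinical rhythm boundaries (8-13 Hz, 4-7 Hz, ...), which is the usual caveat with dyadic EEG decompositions.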
Wavelet Characterizations of Multi-Directional Regularity
NASA Astrophysics Data System (ADS)
Slimane, Mourad Ben
2011-05-01
The study of d-dimensional traces of functions of m variables leads to directional behaviors. The purpose of this paper is two-fold. Firstly, we extend the notion of one-direction pointwise Hölder regularity introduced by Jaffard to multi-directions. Secondly, we characterize multi-directional pointwise regularity by Triebel anisotropic wavelet coefficients (resp. leaders), and also by the Calderón anisotropic continuous wavelet transform.
Contour detection based on wavelet differentiation
NASA Astrophysics Data System (ADS)
Bezuglov, D.; Kuzin, A.; Voronin, V.
2016-05-01
This work proposes a novel contour detection algorithm for multimedia applications based on a high-performance wavelet analysis method. To address the effect of noise on edge sharpening, we consider direct and inverse wavelet differentiation. Extensive experimental evaluation on noisy images demonstrates that our contour detection method significantly outperforms competing algorithms. The proposed algorithm provides a means of coupling our system to recognition applications such as the detection and identification of vehicle number plates.
Multichannel Compressive Sensing MRI Using Noiselet Encoding
Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin
2015-01-01
The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
Fast wavelet estimation of weak biosignals.
Causevic, Elvir; Morley, Robert E; Wickerhauser, M Victor; Jacquin, Arnaud E
2005-06-01
Wavelet-based signal processing has become commonplace in the signal processing community over the past decade and wavelet-based software tools and integrated circuits are now commercially available. One of the most important applications of wavelets is in removal of noise from signals, called denoising, accomplished by thresholding wavelet coefficients in order to separate signal from noise. Substantial work in this area was summarized by Donoho and colleagues at Stanford University, who developed a variety of algorithms for conventional denoising. However, conventional denoising fails for signals with low signal-to-noise ratio (SNR). Electrical signals acquired from the human body, called biosignals, commonly have below 0 dB SNR. Synchronous linear averaging of a large number of acquired data frames is universally used to increase the SNR of weak biosignals. A novel wavelet-based estimator is presented for fast estimation of such signals. The new estimation algorithm provides a faster rate of convergence to the underlying signal than linear averaging. The algorithm is implemented for processing of auditory brainstem response (ABR) and of auditory middle latency response (AMLR) signals. Experimental results with both simulated data and human subjects demonstrate that the novel wavelet estimator achieves superior performance to that of linear averaging.
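The thresholding step the abstract describes is simple to state in code. Below is a minimal sketch of Donoho-style soft thresholding together with the classical universal threshold sigma*sqrt(2 ln n); the coefficient values are made up for illustration and this is not the paper's estimator.

```python
import math

def soft_threshold(c, lam):
    """Shrink a wavelet coefficient toward zero by lam; zero it if below lam."""
    if abs(c) <= lam:
        return 0.0
    return math.copysign(abs(c) - lam, c)

def universal_threshold(sigma, n):
    """Classical universal threshold lam = sigma * sqrt(2 ln n)."""
    return sigma * math.sqrt(2.0 * math.log(n))

coeffs = [5.0, -0.3, 0.8, -4.2, 0.1]   # hypothetical wavelet coefficients
lam = 1.0
denoised = [soft_threshold(c, lam) for c in coeffs]
```

Small coefficients (presumed noise) are zeroed while large ones (presumed signal) survive, shrunk by lam; this is the separation-of-signal-from-noise step that the wavelet estimator builds on.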
Wavelet adaptation for automatic voice disorders sorting.
Erfanian Saeedi, Nafise; Almasganj, Farshad
2013-07-01
Early diagnosis of voice disorders and abnormalities by means of digital speech processing is a subject of interest for many researchers. Various methods are introduced in the literature, some of which are able to extensively discriminate pathological voices from normal ones. Voice disorders sorting, on the other hand, has received less attention due to the complexity of the problem. Although previous publications show satisfactory results in classifying one type of disordered voice from normal cases, or two different types of abnormalities from each other, no comprehensive approach for automatic sorting of vocal abnormalities has been offered yet. In this paper, a solution for this problem is suggested. We create a powerful wavelet feature extraction approach in which, instead of standard wavelets, adaptive wavelets are generated and applied to the voice signals. Orthogonal wavelets are parameterized via a lattice structure, and then the optimal parameters are found through an iterative process using the genetic algorithm (GA). GA is guided by the classifier results. Based on the generated wavelet, a wavelet filterbank is constructed and the voice signals are decomposed to compute eight energy-based features. A support vector machine (SVM) then classifies the signals using the extracted features. Experimental results show that six different types of vocal disorders: paralysis, nodules, polyps, edema, spasmodic dysphonia and keratosis are fully sorted via the proposed method. This could be a successful step toward sorting a larger number of abnormalities associated with the vocal system. Copyright © 2013 Elsevier Ltd. All rights reserved.
Image restoration based on wavelets and curvelet
NASA Astrophysics Data System (ADS)
Yang, Yang; Chen, Bo
2014-11-01
The performance of high-resolution imaging with large optical instruments is severely limited by atmospheric turbulence. Adaptive optics (AO) offers real-time compensation for turbulence. However, the correction is often only partial, and image restoration is required to reach or approach the diffraction limit. Wavelet-based techniques have been applied to the restoration of images degraded by atmospheric turbulence. However, wavelets do not restore long edges with high fidelity, while curvelets are challenged by small features. Loosely speaking, each transform has its own area of expertise, and this complementarity may be of great potential. So, we expect that combining different transforms can improve the quality of the result. In this paper, a novel deconvolution algorithm based on both the wavelet transform and the curvelet transform (NDbWC) is proposed. It extends previous results obtained for wavelet-based image restoration. Using these two different transformations in the same algorithm allows us to detect at the same time isotropic features, well represented by the wavelet transform, and edges, better represented by the curvelet transform. The NDbWC algorithm works better than the classical wavelet-regularization method in deconvolution of turbulence-degraded images with low SNR.
Optimal wavelet denoising for smart biomonitor systems
NASA Astrophysics Data System (ADS)
Messer, Sheila R.; Agzarian, John; Abbott, Derek
2001-03-01
Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
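The study's central question, which threshold best removes noise from the decomposed signal, can be illustrated with a tiny threshold sweep. The sketch below uses a single-level orthonormal Haar transform and a deterministic toy "PCG" (a piecewise-constant signal plus an alternating perturbation); all of these choices are illustrative assumptions, not the paper's data or wavelet.

```python
import math

R2 = math.sqrt(2.0)

def haar_fwd(x):
    s = [(x[i] + x[i + 1]) / R2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / R2 for i in range(0, len(x), 2)]
    return s, d

def haar_inv(s, d):
    x = []
    for si, di in zip(s, d):
        x.append((si + di) / R2)
        x.append((si - di) / R2)
    return x

def denoise(x, lam):
    s, d = haar_fwd(x)
    d = [0.0 if abs(c) <= lam else c for c in d]   # hard-threshold the details
    return haar_inv(s, d)

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

clean = [4.0] * 8 + [8.0] * 8
noisy = [c + (0.5 if i % 2 == 0 else -0.5) for i, c in enumerate(clean)]

# sweep candidate thresholds and keep the one with the lowest error vs. clean
best_lam = min([0.0, 0.25, 0.5, 0.75, 1.0],
               key=lambda lam: mse(denoise(noisy, lam), clean))
```

Here the alternating perturbation lands entirely in the detail coefficients (magnitude 1/sqrt(2)), so any threshold above that value recovers the clean signal exactly; too small a threshold leaves the noise untouched.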
Shift products and factorizations of wavelet matrices
NASA Astrophysics Data System (ADS)
Turcajová, Radka; Kautsky, Jaroslav
1994-03-01
A class of so-called shift products of wavelet matrices is introduced. These products are based on circulations of columns of orthogonal banded block circulant matrices arising in applications of discrete orthogonal wavelet transforms (or paraunitary multirate filter banks) or, equivalently, on augmentations of wavelet matrices by zero columns (shifts). A special case is no shift; a product which is closely related to the Pollen product is then obtained. Known decompositions using factors formed by two blocks are described and additional conditions such that uniqueness of the factorization is guaranteed are given. Next it is shown that when nonzero shifts are used, an arbitrary wavelet matrix can be factorized into a sequence of shift products of square orthogonal matrices. Such a factorization, as well as those mentioned earlier, can be used for the parameterization and construction of wavelet matrices, including the construction from the first row. Moreover, it is also suitable for efficient implementations of discrete orthogonal wavelet transforms and paraunitary filter banks.
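The orthogonality conditions behind these block circulant matrices can be checked numerically on a concrete wavelet matrix. As an illustrative example (not from the paper), the rows of the Daubechies-4 wavelet matrix are orthonormal and also orthogonal to their own double shifts, which is exactly what makes the banded block circulant matrix orthogonal.

```python
import math

s3 = math.sqrt(3.0)
# Daubechies-4 scaling (low-pass) filter and matching wavelet (high-pass) filter
h = [(1 + s3) / (4 * math.sqrt(2)), (3 + s3) / (4 * math.sqrt(2)),
     (3 - s3) / (4 * math.sqrt(2)), (1 - s3) / (4 * math.sqrt(2))]
g = [h[3], -h[2], h[1], -h[0]]   # alternating-flip construction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

unit_h = dot(h, h)                  # rows are unit norm -> 1
unit_g = dot(g, g)                  # -> 1
cross = dot(h, g)                   # rows are mutually orthogonal -> 0
shift = h[0] * h[2] + h[1] * h[3]   # orthogonal to the double shift -> 0
```

The double-shift condition is the one that distinguishes a genuine wavelet matrix from an arbitrary pair of orthonormal rows: it guarantees orthogonality across the overlapping blocks of the circulant transform matrix.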
Trabecular bone texture classification using wavelet leaders
NASA Astrophysics Data System (ADS)
Zou, Zilong; Yang, Jie; Megalooikonomou, Vasileios; Jennane, Rachid; Cheng, Erkang; Ling, Haibin
2016-03-01
In this paper we propose to use the Wavelet Leader (WL) transformation for studying trabecular bone patterns. Given an input image, its WL transformation is defined as the cross-channel-layer maximum pooling of an underlying wavelet transformation. WL inherits the advantage of the original wavelet transformation in capturing spatial-frequency statistics of texture images, while being more robust against scale and orientation thanks to the maximum pooling strategy. These properties make WL an attractive alternative to the wavelet transformations used for trabecular analysis in previous studies. In particular, in this paper, after extracting wavelet leader descriptors from a trabecular texture patch, we feed them into two existing statistical texture characterization methods, namely the Gray Level Co-occurrence Matrix (GLCM) and the Gray Level Run Length Matrix (GLRLM). The most discriminative features, Energy of GLCM and Gray Level Non-Uniformity of GLRLM, are retained to distinguish between two populations: osteoporotic patients and control subjects. Receiver Operating Characteristic (ROC) curves are used to measure classification performance. Experimental results on a recently released benchmark dataset show that WL significantly boosts the performance of baseline wavelet transformations by 5% on average.
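The GLCM Energy feature retained by the study is a standard statistic and easy to sketch. The following minimal implementation (an illustration, not the paper's pipeline) counts horizontally co-occurring gray-level pairs and returns the energy (angular second moment), the sum of squared co-occurrence probabilities.

```python
def glcm_energy(img, dx=1, dy=0):
    """Energy (angular second moment) of the gray level co-occurrence matrix."""
    counts, total = {}, 0
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                pair = (img[y][x], img[y2][x2])
                counts[pair] = counts.get(pair, 0) + 1
                total += 1
    return sum((c / total) ** 2 for c in counts.values())

flat = [[1] * 4 for _ in range(4)]                       # perfectly uniform texture
checker = [[(x + y) % 2 for x in range(4)] for y in range(4)]  # high-contrast texture
```

A uniform patch concentrates all mass in one co-occurrence cell (energy 1), while more varied textures spread the mass and score lower, which is why energy discriminates texture regularity.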
Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman
2012-01-01
This paper proposes a new dynamic and robust blind watermarking scheme for color pathological images based on the discrete wavelet transform (DWT). The binary watermark image is preprocessed before embedding: first it is scrambled by the Arnold cat map and then encrypted by a pseudorandom sequence generated by a robust chaotic map. The host image is divided into n × n blocks, and the encrypted watermark is embedded into the higher frequency domain of the blue component. The mean and variance of the subbands are calculated to dynamically modify the wavelet coefficients of a block according to the embedded 0 or 1, so as to generate the detection threshold. We investigate the relationship between embedding intensity and threshold and give the effective range of the threshold for extracting the watermark. Experimental results show that the scheme can resist common distortions, performing especially well against JPEG compression, additive noise, brightening, rotation, and cropping.
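The Arnold cat map scrambling step mentioned above is a simple invertible pixel permutation on an N x N image: (x, y) maps to (x + y, x + 2y) mod N. A minimal sketch (illustrative, not the paper's code) with its exact inverse:

```python
def arnold(img):
    """One iteration of the Arnold cat map on a square image (list of rows)."""
    n = len(img)
    out = [row[:] for row in img]
    for y in range(n):
        for x in range(n):
            out[(x + 2 * y) % n][(x + y) % n] = img[y][x]
    return out

def arnold_inv(img):
    """Exact inverse: (x, y) -> (2x - y, y - x) mod n."""
    n = len(img)
    out = [row[:] for row in img]
    for y in range(n):
        for x in range(n):
            out[(y - x) % n][(2 * x - y) % n] = img[y][x]
    return out

img = [[i * 4 + j for j in range(4)] for i in range(4)]
scrambled = arnold(img)
```

Because the map matrix has determinant 1, every pixel lands on a unique new position and the watermark can be unscrambled exactly at extraction time; in practice the map is iterated several times for stronger scrambling.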
Wavelet Kernels on a DSP: A Comparison between Lifting and Filter Banks for Image Coding
NASA Astrophysics Data System (ADS)
Gnavi, Stefano; Penna, Barbara; Grangetto, Marco; Magli, Enrico; Olmo, Gabriella
2002-12-01
We develop wavelet engines on a digital signal processor (DSP) platform, the target application being image and intraframe video compression by means of the forthcoming JPEG2000 and Motion-JPEG2000 standards. We describe two implementations, based on the lifting scheme and the filter bank scheme, respectively, and we present experimental results on code profiling. In particular, we address the following problems: (1) evaluating the execution speed of a wavelet engine on a modern DSP; (2) comparing the actual execution speed of the lifting scheme and the filter bank scheme with the theoretical results; (3) using the on-board direct memory access (DMA) to possibly optimize the execution speed. The results allow us to assess the performance of a modern DSP in the image coding task, as well as to compare the lifting and filter bank performance in a realistic application scenario. Finally, guidelines for optimizing code efficiency are provided by investigating the possible use of the on-board DMA.
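The two kernels being compared compute the same transform by different routes. As a minimal illustration (Haar only; the paper profiles longer JPEG2000 filters), the filter bank form convolves and downsamples, while the lifting form uses predict and update steps plus a final scaling, and the outputs coincide:

```python
import math

r2 = math.sqrt(2.0)
x = [3.0, 7.0, 1.0, 5.0]   # toy input samples

# filter bank view: convolve with Haar analysis filters, downsample by 2
approx_fb = [(x[i] + x[i + 1]) / r2 for i in range(0, len(x), 2)]
detail_fb = [(x[i] - x[i + 1]) / r2 for i in range(0, len(x), 2)]

# lifting view: predict, update, then scale the two channels
approx_lift, detail_lift = [], []
for i in range(0, len(x), 2):
    d = x[i + 1] - x[i]          # predict: detail = odd minus even sample
    s = x[i] + d / 2.0           # update: s becomes the pair average
    approx_lift.append(s * r2)   # scaling aligns lifting with the filter bank
    detail_lift.append(-d / r2)
```

Lifting needs fewer multiplies per output sample than direct convolution for longer filters, which is the theoretical speed advantage the profiling in the paper puts to the test.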
Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations
Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.
2005-05-13
We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, the existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of applying the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.
Use of Fresnelets for phase-shifting digital hologram compression.
Darakis, Emmanouil; Soraghan, John J
2006-12-01
Fresnelets are wavelet-like base functions specially tailored for digital holography applications. We introduce their use in phase-shifting interferometry (PSI) digital holography for the compression of such holographic data. Two compression methods are investigated. One uses uniform quantization of the Fresnelet coefficients followed by lossless coding, and the other uses set partitioning in hierarchical trees (SPIHT) coding. Quantization and lossless coding of the original data is used to compare the performance of the proposed algorithms. The comparison reveals that the Fresnelet transform of phase-shifting holograms in combination with SPIHT or uniform quantization can be used very effectively for the compression of holographic data. The performance of the new compression schemes is demonstrated on real PSI digital holographic data.
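The uniform quantization stage mentioned above maps each transform coefficient to the nearest multiple of a step size, which bounds the per-coefficient reconstruction error by half the step. A minimal sketch with made-up coefficient values:

```python
def quantize(coeffs, step):
    """Map each coefficient to the nearest integer multiple of step."""
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    return [q * step for q in indices]

step = 0.5
coeffs = [0.3, -1.7, 4.2, 0.05, -0.26]   # hypothetical Fresnelet coefficients
rec = dequantize(quantize(coeffs, step), step)
max_err = max(abs(c - r) for c, r in zip(coeffs, rec))
```

The integer indices, not the reconstructed values, are what gets passed to the lossless coder; making step larger trades reconstruction error for a more compressible index stream.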
Compression of M-FISH images using 3D SPIHT
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xiong, Zixiang; Castleman, Kenneth R.
2001-12-01
With the recent development of the use of digital media for cytogenetic imaging applications, efficient compression techniques are highly desirable to accommodate the rapid growth of image data. This paper introduces a lossy to lossless coding technique for compression of multiplex fluorescence in situ hybridization (M-FISH) images, based on 3-D set partitioning in hierarchical trees (3-D SPIHT). Using a lifting-based integer wavelet decomposition, the 3-D SPIHT achieves both embedded coding and substantial improvement in lossless compression over the Lempel-Ziv (WinZip) coding which is the current method for archiving M-FISH images. The lossy compression performance of the 3-D SPIHT is also significantly better than that of the 2-D based JPEG-2000.
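Lossless operation in a scheme like this hinges on an integer-to-integer wavelet. As a stand-in for the lifting-based integer wavelet used above, the classical S-transform (integer Haar) illustrates the idea: integer rounding in the forward transform is undone exactly by the inverse.

```python
def s_transform(x):
    """Integer Haar (S-transform): pairwise floor-average and difference."""
    s = [(x[i] + x[i + 1]) // 2 for i in range(0, len(x), 2)]
    d = [x[i] - x[i + 1] for i in range(0, len(x), 2)]
    return s, d

def s_inverse(s, d):
    """Exact integer inverse of the S-transform."""
    x = []
    for si, di in zip(s, d):
        x0 = si + (di + 1) // 2
        x.append(x0)
        x.append(x0 - di)
    return x

samples = [3, 5, -3, 2, 7, 7]          # toy integer pixel values
s, d = s_transform(samples)
restored = s_inverse(s, d)
```

Despite the floor division in the forward step, the inverse recovers the input bit-exactly, so entropy coding the (s, d) coefficients yields a fully lossless pipeline; the same principle underlies the lifting-based integer transform in 3-D SPIHT.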
Region segmentation techniques for object-based image compression: a review
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
2004-10-01
Image compression based on transform coding appears to be approaching an asymptotic bit rate limit for application-specific distortion levels. However, a new compression technology, called object-based compression (OBC), promises improved rate-distortion performance at higher compression ratios. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. Advantages of OBC include efficient representation of commonly occurring textures and shapes in terms of pointers into a compact codebook of region contents and boundary primitives. This facilitates fast decompression via substitution, at the cost of codebook search in the compression step. Segmentation cost and error are significant disadvantages in current OBC implementations. Several innovative techniques have been developed for region segmentation, including (a) moment-based analysis, (b) texture representation in terms of a syntactic grammar, and (c) transform coding approaches such as the wavelet-based compression used in MPEG-7 or JPEG-2000. Region-based characterization with variance templates is better understood, but lacks the locality of wavelet representations. In practice, tradeoffs are made between representational fidelity, computational cost, and storage requirements. This paper overviews current techniques for automatic region segmentation and representation, especially those that employ wavelet classification and region growing techniques. Implementational discussion focuses on complexity measures and performance metrics such as segmentation error and computational cost.
Hyperspectral images lossless compression using the 3D binary EZW algorithm
NASA Astrophysics Data System (ADS)
Cheng, Kai-jen; Dill, Jeffrey
2013-02-01
This paper presents a transform-based lossless compression method for hyperspectral images inspired by Shapiro's (1993) EZW algorithm. The proposed compression method uses a hybrid transform that includes an integer Karhunen-Loeve transform (KLT) and an integer discrete wavelet transform (DWT). The integer KLT is employed to eliminate correlations among the bands of the hyperspectral image. The integer 2D discrete wavelet transform (DWT) is applied to eliminate the correlations in the spatial dimensions and produce wavelet coefficients. These coefficients are then coded by a proposed binary EZW algorithm. The binary EZW eliminates the subordinate pass of conventional EZW by coding residual values, and produces binary sequences. The binary EZW algorithm combines the merits of the well-known EZW and SPIHT algorithms, and it is computationally simpler for lossless compression. The proposed method was applied to AVIRIS images and compared to other state-of-the-art image compression techniques. The results show that the proposed lossless image compression is more efficient and also achieves a higher compression ratio than the other algorithms.
Efficient coding of wavelet trees and its applications in image coding
NASA Astrophysics Data System (ADS)
Zhu, Bin; Yang, En-hui; Tewfik, Ahmed H.; Kieffer, John C.
1996-02-01
We propose in this paper a novel lossless tree coding algorithm. The technique is a direct extension of the bisection method, the simplest case of the complexity reduction method proposed recently by Kieffer and Yang, which has been used for lossless data string coding. A reduction rule is used to obtain the irreducible representation of a tree, and this irreducible tree is entropy-coded instead of the input tree itself. This reduction is reversible, and the original tree can be fully recovered from its irreducible representation. More specifically, we search for equivalent subtrees from top to bottom. When equivalent subtrees are found, a special symbol is appended to the value of the root node of the first equivalent subtree, the root node of the second subtree is assigned the index which points to the first subtree, and all other nodes in the second subtree are removed. This procedure is repeated until the tree cannot be reduced further. This yields the irreducible tree, or irreducible representation, of the original tree. The proposed method can effectively remove the redundancy in an image, and results in more efficient compression. It is proved that as the tree size approaches infinity, the proposed method offers the optimal compression performance. It is generally more efficient in practice than direct coding of the input tree. The proposed method can be directly applied to code wavelet trees in non-iterative wavelet-based image coding schemes. A modified method is also proposed for coding wavelet zerotrees in embedded zerotree wavelet (EZW) image coding. Although its coding efficiency is slightly reduced, the modified version maintains exact control of bit rate and the scalability of the bit stream in EZW coding.
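The reduce-and-recover idea can be sketched on trees represented as nested tuples. This is a simplified illustration, not the paper's exact scheme: every first-occurrence subtree gets an implicit pre-order index, later occurrences are replaced by a ('REF', index) pointer, and expansion rebuilds the index table in the same pre-order so the original is recovered from the irreducible form alone.

```python
def reduce_tree(tree):
    """Replace each repeated subtree by a ('REF', i) pointer to its first occurrence."""
    seen = {}                       # serialized subtree -> pre-order index
    def walk(node):
        if not isinstance(node, tuple):
            return node             # leaf value
        key = repr(node)
        if key in seen:
            return ('REF', seen[key])
        seen[key] = len(seen)       # index assigned in pre-order
        return (node[0],) + tuple(walk(c) for c in node[1:])
    return walk(tree)

def expand_tree(tree):
    """Invert reduce_tree: rebuild the index table in the same pre-order."""
    table = []
    def walk(node):
        if not isinstance(node, tuple):
            return node
        if node[0] == 'REF':
            return table[node[1]]
        slot = len(table)
        table.append(None)          # reserve this subtree's pre-order slot
        full = (node[0],) + tuple(walk(c) for c in node[1:])
        table[slot] = full
        return full
    return walk(tree)

t = ('a', ('b', 'x', 'y'), ('b', 'x', 'y'))
reduced = reduce_tree(t)
restored = expand_tree(reduced)
```

A pointer can never target an unfinished subtree, because a subtree cannot be equivalent to one of its own ancestors, so the second occurrence always appears after the first occurrence has been fully traversed in pre-order. (The 'REF' marker stands in for the paper's special symbol and assumes node values never collide with it.)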
Mining wavelet transformed boiler data sets
NASA Astrophysics Data System (ADS)
Letsche, Terry Lee
Accurate combustion models provide information that allows increased boiler efficiency optimization, saving money and resources while reducing waste. Boiler combustion processes are noted for being complex, nonstationary, and nonlinear. While numerous methods have been used to model boiler processes, data-driven approaches reflect actual operating conditions within a particular boiler and do not depend on idealized, complex, or expensive empirical models. Boiler and combustion processes vary in time, requiring a denoising technique that preserves the temporal and frequency nature of the data. Moving average, a common technique, smoothes data but does not remove low-frequency noise. This dissertation examines models built with wavelet denoising techniques that remove low and high frequency noise in both time and frequency domains. The denoising process has a number of parameters, including choice of wavelet, threshold value, level of wavelet decomposition, and disposition of attributes that appear to be significant at multiple thresholds. A process is developed to experimentally evaluate the predictive accuracy of these models and compare the result against two benchmarks. The first research hypothesis compares the performance of these wavelet denoised models to the model generated from the original data. The second research hypothesis compares the performance of the models generated with this denoising approach to the most effective model generated from a moving average process. In both experiments it was determined that the Daubechies 4 wavelet was a better choice than the more typically chosen Haar wavelet, wavelet packet decomposition outperforms other levels of wavelet decomposition, and discarding all but the lowest threshold repeating attributes produces superior results. The third research hypothesis examined using a two-dimensional wavelet transform on the data. Another parameter for handling the boundary condition was introduced. In the two-dimensional case
CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC
NASA Astrophysics Data System (ADS)
Poupat, Jean-Luc; Vitulli, Raffaele
2013-08-01
The space market is more and more demanding in terms of image compression performance. The resolution, agility, and swath of Earth observation satellite instruments are continuously increasing, multiplying by 10 the volume of pictures acquired on one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, a leader in the market of combined compression and memory solutions for space applications, has developed a new image compression ASIC, which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth Observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic, large-image, and very high speed image compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation and scientific data compression. The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.
Interactive video compression for remote sensing
NASA Astrophysics Data System (ADS)
Maleh, Ray; Boyle, Frank A.; Deignan, Paul B.; Yancey, Jerry W.
2011-05-01
Modern day remote video cameras enjoy the ability of producing quality video streams at extremely high resolutions. Unfortunately, the benefit of such technology cannot be realized when the channel between the sensor and the operator restricts the bit-rate of incoming data. In order to cram more information into the available bandwidth, video technologies typically employ compression schemes (e.g. H.264/MPEG 4 standard) which exploit spatial and temporal redundancies. We present an alternative method utilizing region of interest (ROI) based compression. Each region in the incoming scene is assigned a score measuring importance to the operator. Scores may be determined based on the manual selection of one or more objects which are then automatically tracked by the system; or alternatively, listeners may be pre-assigned to various areas that trigger high scores upon the occurrence of customizable events. A multi-resolution wavelet expansion is then used to optimally transmit important regions at higher resolutions and frame rates than less interesting peripheral background objects subject to bandwidth constraints. We show that our methodology makes it possible to obtain high compression ratios while ensuring no loss in overall situational awareness. If combined with modules from traditional video codecs, compression ratios of 100:1 to 1000:1, depending on ROI size, can easily be achieved.
Application of Hermitian wavelet to crack fault detection in gearbox
NASA Astrophysics Data System (ADS)
Li, Hui; Zhang, Yuping; Zheng, Haiqi
2011-05-01
The continuous wavelet transform enables one to look at the evolution of a signal in the joint time-scale representation plane. This advantage makes it very suitable for the detection of singularities generated by localized defects in a mechanical system. However, most applications of the continuous wavelet transform have focused on the Morlet wavelet. The complex Hermitian wavelet is constructed from the first and second derivatives of the Gaussian function to detect signal singularities. The Fourier spectrum of the Hermitian wavelet is real; therefore, the Hermitian wavelet does not affect the phase of a signal in the complex domain. This gives it a desirable ability to extract the singularity characteristics of a signal precisely. In this study, the Hermitian wavelet is used to diagnose localized gear crack faults. The simulative and experimental results show that the Hermitian wavelet can extract transients from strong noise signals and can effectively diagnose localized gear faults.
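A rough, self-contained sketch of the construction (not the authors' implementation): combine the first and second Gaussian derivatives into a complex wavelet and correlate it with a signal at one scale. The scale, kernel width, and step-signal test case are illustrative assumptions; the magnitude of the response localizes the singularity.

```python
import math

def hermitian_wavelet(t):
    """Complex wavelet from the first and second derivatives of the Gaussian."""
    g = math.exp(-t * t / 2)
    d1 = -t * g               # first derivative of the Gaussian
    d2 = (t * t - 1) * g      # second derivative of the Gaussian
    return complex(d1, d2)

def cwt_at_scale(signal, scale, width=8):
    """Direct (truncated) correlation of signal with the wavelet at one scale."""
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0j
        for k in range(-width, width + 1):
            j = i + k
            if 0 <= j < n:
                acc += signal[j] * hermitian_wavelet(k / scale).conjugate()
        out.append(acc / scale)
    return out

# a step discontinuity at sample 32 stands in for a localized defect transient
sig = [0.0] * 32 + [1.0] * 32
resp = cwt_at_scale(sig, 2.0)
peak = max(range(len(resp)), key=lambda i: abs(resp[i]))
```

Away from the step the response is essentially zero (the wavelet has vanishing moments), while its magnitude peaks at the singularity, which is the detection mechanism exploited for gear crack transients.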
Wavelet transforms as solutions of partial differential equations
Zweig, G.
1997-10-01
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Wavelet transforms are useful in representing transients whose time and frequency structure reflect the dynamics of an underlying physical system. Speech sound, pressure in turbulent fluid flow, or engine sound in automobiles are excellent candidates for wavelet analysis. This project focused on (1) methods for choosing the parent wavelet for a continuous wavelet transform in pattern recognition applications and (2) the more efficient computation of continuous wavelet transforms by understanding the relationship between discrete wavelet transforms and discretized continuous wavelet transforms. The most interesting result of this research is the finding that the generalized wave equation, on which the continuous wavelet transform is based, can be used to understand phenomena that relate to the process of hearing.
Functional calculus using wavelet transforms
NASA Astrophysics Data System (ADS)
Holschneider, Matthias
1994-07-01
It is shown how the wavelet transform may be used to compute, for a function s, the symbol s(A) of any (not necessarily self-adjoint) operator A whose spectrum is contained in the upper half plane. For self-adjoint operators it is shown that this functional calculus coincides with the usual one. In particular it is shown how the exponential e^{itA} can be written in terms of the resolvent R_z = (A − z)^{−1} of A as follows: e^{itA} = (1/c) ∫₀^∞ da a^{n−2} ∫₋∞^{+∞} db ĝ̄(at) e^{itb} R_{b−ia^n}(A), with c = −2iπ ∫₀^∞ (dω/ω) (−iω)^{n−1} ĝ̄(ω) e^{−ω}, where n ∈ ℕ, ĝ̄ denotes the complex conjugate of the Fourier transform of g, and the integral is understood as a Cesàro limit. This shows explicitly how the behavior for large t is determined by the behavior of R_z at Im z ≃ 1/t.
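The coincidence with the usual functional calculus can be checked numerically in finite dimensions. The sketch below (an illustration of ours, not the paper's resolvent/wavelet formula) evaluates s(A) through the spectral decomposition of a hand-diagonalised symmetric 2×2 matrix and compares e^{itA} against an independent power-series evaluation; the test matrix and all names are assumptions:

```python
import cmath
import math

# The "usual" functional calculus for a self-adjoint operator:
# s(A) = sum_i s(lambda_i) v_i v_i^T, checked against the power series
# for exp(itA).  A = [[2, 1], [1, 2]] has eigenvalues 1 and 3 with
# orthonormal eigenvectors (1, -1)/sqrt(2) and (1, 1)/sqrt(2).
lams = [1.0, 3.0]
u = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [-1 / math.sqrt(2), 1 / math.sqrt(2)]]  # columns are eigenvectors

def func_calc(s):
    # Spectral functional calculus: sum of s(lam_i) times projectors.
    out = [[0j, 0j], [0j, 0j]]
    for i in range(2):
        vi = [u[0][i], u[1][i]]
        for r in range(2):
            for c in range(2):
                out[r][c] += s(lams[i]) * vi[r] * vi[c]
    return out

def expm_series(t, terms=30):
    # exp(itA) via its power series, as an independent check.
    a = [[2.0, 1.0], [1.0, 2.0]]
    acc = [[1 + 0j, 0j], [0j, 1 + 0j]]
    term = [[1 + 0j, 0j], [0j, 1 + 0j]]
    for n in range(1, terms):
        nxt = [[0j, 0j], [0j, 0j]]
        for r in range(2):
            for c in range(2):
                nxt[r][c] = sum(term[r][k] * (1j * t) * a[k][c] / n
                                for k in range(2))
        term = nxt
        for r in range(2):
            for c in range(2):
                acc[r][c] += term[r][c]
    return acc

t = 0.7
lhs = func_calc(lambda lam: cmath.exp(1j * t * lam))
rhs = expm_series(t)
err = max(abs(lhs[r][c] - rhs[r][c]) for r in range(2) for c in range(2))
print(err < 1e-9)  # True: both constructions agree
```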
Image wavelet decomposition and applications
NASA Technical Reports Server (NTRS)
Treil, N.; Mallat, S.; Bajcsy, R.
1989-01-01
The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, a look at the human visual system gives an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a hierarchy of problems ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different resolutions provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human visual system, the approach uses a multiorientation process that detects features independently in different orientation sectors. Images of the same orientation but of different resolutions are thus contrasted to gather information about an image. An image representation using energy zero crossings is then developed. This representation is shown to be experimentally complete and leads to higher-level applications such as edge and corner finding, which in turn provide two basic steps toward image segmentation. The possibilities of feedback between different levels of processing are also discussed.
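A 1D analogue of the zero-crossing idea can be sketched as follows (a hypothetical minimal example of ours, not the authors' implementation): edges appear as sign changes of the discrete second difference.

```python
# Hypothetical 1D sketch: edges as zero crossings of the discrete
# second difference, the coarsest analogue of a zero-crossing
# representation.

def second_diff(x):
    # d[i-1] = x[i-1] - 2*x[i] + x[i+1]
    return [x[i - 1] - 2 * x[i] + x[i + 1] for i in range(1, len(x) - 1)]

def zero_crossings(d, eps=1e-9):
    # Indices where the second difference changes sign.
    return [i for i in range(len(d) - 1) if d[i] * d[i + 1] < -eps]

step = [0.0] * 10 + [1.0] * 10  # step edge between samples 9 and 10
d = second_diff(step)
print(zero_crossings(d))  # [8] -> sign change straddling the edge
```

In 2D, the same test applied per orientation band at each resolution yields the multiorientation edge maps described above.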
White, M.A.; Schmidt, J.C.; Topping, D.J.
2005-01-01
Wavelet analysis is a powerful tool with which to analyse the hydrologic effects of dam construction and operation on river systems. Using continuous records of instantaneous discharge from the Lees Ferry gauging station and records of daily mean discharge from upstream tributaries, we conducted wavelet analyses of the hydrologic structure of the Colorado River in Grand Canyon. The wavelet power spectrum (WPS) of daily mean discharge provided a highly compressed and integrative picture of the post-dam elimination of pronounced annual and sub-annual flow features. The WPS of the continuous record showed the influence of diurnal and weekly power generation cycles, shifts in discharge management, and the 1996 experimental flood in the post-dam period. Normalization of the WPS by local wavelet spectra revealed the fine structure of modulation in discharge scale and amplitude and provides an extremely efficient tool with which to assess the relationships among hydrologic cycles and ecological and geomorphic systems. We extended our analysis to sections of the Snake River and showed how wavelet analysis can be used as a data mining technique. The wavelet approach is an especially promising tool with which to assess dam operation in less well-studied regions and to evaluate management attempts to reconstruct desired flow characteristics. Copyright © 2005 John Wiley & Sons, Ltd.
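The wavelet power spectrum idea can be sketched with a real-valued Morlet wavelet on a synthetic periodic "discharge" series (all parameters below are our assumptions, not the authors' analysis): power at the scale matched to the cycle dwarfs power at a mismatched scale, which is how the WPS isolates annual and sub-annual features.

```python
import math

# Sketch: Morlet wavelet power at a given scale on a synthetic cycle.
def morlet(t, omega0=6.0):
    return math.exp(-t * t / 2.0) * math.cos(omega0 * t)

def wavelet_power(x, scale):
    # Squared wavelet coefficient at each position, by direct correlation
    # over a truncated support of +/- 4 scales.
    n = len(x)
    half = int(4 * scale)
    power = []
    for b in range(n):
        acc = 0.0
        for k in range(max(0, b - half), min(n, b + half + 1)):
            acc += x[k] * morlet((k - b) / scale)
        power.append((acc / math.sqrt(scale)) ** 2)
    return power

period = 64                      # stand-in for an annual cycle
n = 8 * period
flow = [math.sin(2 * math.pi * d / period) for d in range(n)]

# For a Morlet with omega0 = 6, the scale whose centre frequency matches
# a cycle of this period is roughly period * omega0 / (2*pi).
scale_match = period * 6.0 / (2 * math.pi)
scale_mismatch = scale_match / 8

p_match = sum(wavelet_power(flow, scale_match)) / n
p_mismatch = sum(wavelet_power(flow, scale_mismatch)) / n
print(p_match > 100 * p_mismatch)  # True: matched scale dominates
```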