Science.gov

Sample records for non-perfect wavelet compression

  1. Wavelet and wavelet packet compression of electrocardiograms.

    PubMed

    Hilton, M L

    1997-05-01

    Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECGs by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECGs are clinically useful.
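
    The transform-and-threshold idea underlying such coders can be illustrated with a toy example: a single-level Haar decomposition with hard thresholding. This is a generic sketch (signal values made up), not Hilton's EZW-based algorithm:

```python
import numpy as np

def haar_step(x):
    """One Haar analysis level: pairwise averages (approximation) and differences (detail)."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / 2.0
    det = (x[0::2] - x[1::2]) / 2.0
    return avg, det

def haar_inverse(avg, det):
    """Exact reconstruction from one Haar level."""
    x = np.empty(2 * len(avg))
    x[0::2] = avg + det
    x[1::2] = avg - det
    return x

# Hard-threshold small detail coefficients; the resulting runs of zeros are
# what an entropy coder such as EZW exploits.
signal = np.array([2.0, 2.1, 2.0, 5.0, 5.1, 5.0, 2.0, 2.0])
avg, det = haar_step(signal)
det_t = np.where(np.abs(det) > 0.2, det, 0.0)
approx = haar_inverse(avg, det_t)
```

    Keeping only the significant details preserves the large deflections while discarding low-amplitude variation, which is the basic trade-off behind the compression ratios quoted above.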

  2. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
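
    The level-to-frequency relation in the abstract, f = r * 2^(-L), is simple to compute; a minimal sketch (the display resolution value is made up for illustration):

```python
def dwt_level_frequency(r, level):
    """Spatial frequency (cycles/degree) of DWT level `level` at display
    resolution `r` pixels/degree, per the relation f = r * 2**(-L)."""
    return r * 2.0 ** (-level)

# At a hypothetical display resolution of 32 pixels/degree, the first four
# wavelet levels fall at progressively lower spatial frequencies.
freqs = [dwt_level_frequency(32, L) for L in range(1, 5)]
```

    Because detection thresholds rise rapidly with spatial frequency, coarser levels (larger L, lower f) tolerate larger quantization steps, which is what a perceptually lossless quantization matrix encodes.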

  3. Data compression by wavelet transforms

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1992-01-01

    A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.

  4. LIDAR data compression using wavelets

    NASA Astrophysics Data System (ADS)

    Pradhan, B.; Mansor, Shattri; Ramli, Abdul Rahman; Mohamed Sharif, Abdul Rashid B.; Sandeep, K.

    2005-10-01

    The lifting scheme has been found to be a flexible method for constructing scalar wavelets with desirable properties. In this paper, it is extended to LIDAR data compression. A newly developed data compression approach that approximates the LIDAR surface with a series of non-overlapping triangles is presented. A Triangulated Irregular Network (TIN) is the most common form of digital surface model; it consists of elevation values with x, y coordinates that make up triangles. Over the years, however, the TIN data representation has drawn the attention of many researchers due to its large data size. Compression of TIN is needed for efficient management of large data and good surface visualization. This approach covers the following steps: First, by using a Delaunay triangulation, an efficient algorithm is developed to generate the TIN, which forms the terrain from an arbitrary set of data. A new interpolation wavelet filter for TIN is applied in two steps, namely splitting and elevation. In the splitting step, a triangle is divided into several sub-triangles, and the elevation step is used to 'modify' the point values (point coordinates for geometry) after the splitting. Then, this data set is compressed at the desired locations by using second-generation wavelets. The quality of the geographical surface representation after using the proposed technique is compared with the original LIDAR data. The results show that this method can significantly reduce the size of the data set.
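
    The split/predict/update pattern of the lifting scheme can be sketched in one dimension. The paper's version splits triangles of a TIN; this Haar-style 1-D analogue is only illustrative:

```python
def lift_forward(x):
    """One lifting step of a Haar-style wavelet: split, predict, update (1-D sketch)."""
    even, odd = x[0::2], x[1::2]                        # split into two cosets
    detail = [o - e for e, o in zip(even, odd)]          # predict odd samples from even ones
    coarse = [e + d / 2 for e, d in zip(even, detail)]   # update so the coarse signal keeps the mean
    return coarse, detail

def lift_inverse(coarse, detail):
    """Undo the lifting steps in reverse order: exact reconstruction."""
    even = [c - d / 2 for c, d in zip(coarse, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

coarse, detail = lift_forward([4, 6, 5, 9])
```

    Because each step is trivially invertible, lifting gives perfect reconstruction by construction, which is what makes it attractive for extending wavelets to irregular structures such as TINs.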

  5. Wavelet transform approach to video compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-04-01

    In this research, we propose a video compression scheme that uses the boundary-control vectors to represent the motion field and the embedded zerotree wavelet (EZW) to compress the displacement frame difference. When compared to the DCT-based MPEG, the proposed new scheme achieves a better compression performance in terms of the MSE (mean square error) value and visual perception for the same given bit rate.

  6. Novel wavelet coder for color image compression

    NASA Astrophysics Data System (ADS)

    Wang, Houng-Jyh M.; Kuo, C.-C. Jay

    1997-10-01

    A new still image compression algorithm based on the multi-threshold wavelet coding (MTWC) technique is proposed in this work. It is an embedded wavelet coder in the sense that its compression ratio can be controlled depending on the bandwidth requirement of image transmission. At low bit rates, MTWC can avoid the blocking artifact of JPEG, resulting in better reconstructed image quality. A subband decision scheme is developed based on rate-distortion theory to enhance the image fidelity. Moreover, a new quantization sequence order is introduced based on our analysis of error energy reduction in significant and refinement maps. Experimental results are given to demonstrate the superior performance of the proposed new algorithm in its high reconstructed quality for color and gray-level image compression and low computational complexity. Generally speaking, it gives a better rate-distortion tradeoff and performs faster than most existing state-of-the-art wavelet coders.

  7. Image compression algorithm using wavelet transform

    NASA Astrophysics Data System (ADS)

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory

    2016-09-01

    Within the multi-resolution analysis framework, the image compression algorithm using the Haar wavelet has been studied. We have studied the dependence of the image quality on the compression ratio. Also, the variation of the compression level of the studied image has been obtained. It is shown that a compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7-4.2, depending on the type of image. It is shown that the algorithm used is more convenient and has more advantages than WinRAR. The Haar wavelet algorithm has improved the method of signal and image processing.
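
    A single 2-D Haar level of the kind used in such studies can be sketched with NumPy. This is illustrative only; the compression-ratio figures above come from the authors' full pipeline, not this snippet:

```python
import numpy as np

def haar2d(img):
    """One 2-D Haar level: returns the LL, LH, HL, HH subbands (averaging form)."""
    a = np.asarray(img, dtype=float)
    # transform along rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # transform along columns of each half
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0   # coarse approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, lh, hl, hh
```

    On smooth image regions the three detail subbands are near zero, so most of the energy concentrates in LL; quantizing and entropy-coding the sparse details is where the 8-10x ratios come from.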

  8. Compression of echocardiographic scan line data using wavelet packet transform

    NASA Technical Reports Server (NTRS)

    Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.

    2001-01-01

    An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scanline data based on the wavelet packet transform. A compression ratio of at least 94:1 resulted in preserved image quality.

  9. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

    Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
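
    The "connected subtree" prior means that whenever a coefficient is significant, its parent in the wavelet tree is significant too. A minimal check of that property (the parent map and indices below are a made-up toy tree, not the paper's data structure):

```python
def is_connected_subtree(support, parent):
    """True if every selected coefficient's parent is also selected.
    `support` is a set of coefficient indices; `parent` maps each non-root
    index to its parent in the wavelet coefficient tree."""
    return all(parent[i] in support for i in support if i in parent)

# Toy 1-D wavelet tree: coefficient 0 is the root; 1 and 2 are its children, etc.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
```

    A model-based recovery algorithm such as the modified CoSaMP described above restricts its support estimates to sets passing this test, which is how the prior reduces the number of measurements needed.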

  10. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. 
In the first two cases, a wavelet

  11. Lossless wavelet compression on medical image

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong

    2006-09-01

    An increasing amount of medical imagery is created directly in digital form. Systems such as Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods both for lossy (irreversible) and lossless (reversible) image compression are proposed in the literature. Recent advances in lossy compression techniques include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order 50:1, or even more), they do not allow reconstructing exactly the original version of the input data. Lossless compression techniques permit the perfect reconstruction of the original image, but the achievable compression ratios are only of the order 2:1, up to 4:1. In our paper, we use a kind of lifting scheme to generate truly lossless non-linear integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm producing an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream, and still reconstruct the image. 
Therefore, a compression scheme generating an embedded code can
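
    Integer-to-integer lifting of the kind mentioned above can be illustrated with the classic S-transform, a Haar-like integer wavelet built from two lifting steps. This is a generic sketch, not the paper's exact filters:

```python
def s_transform(x0, x1):
    """Integer-to-integer Haar (S-transform) via lifting: exactly invertible."""
    h = x0 - x1          # detail (difference)
    l = x1 + (h >> 1)    # approximation; >> 1 is floor division by 2
    return l, h

def s_inverse(l, h):
    """Undo the lifting steps in reverse order: recovers the integers exactly."""
    x1 = l - (h >> 1)
    x0 = h + x1
    return x0, x1
```

    The floor operations in the forward transform are undone exactly by the same floors in the inverse, so no rounding error accumulates. This is the property that makes truly lossless wavelet coding of integer pixel data possible.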

  12. Steady-State and Dynamic Myoelectric Signal Compression Using Embedded Zero-Tree Wavelets

    DTIC Science & Technology

    2001-10-25

    This research investigates static and dynamic (transient) myoelectric signal (MES) compression using a modified version of Shapiro's embedded zero-tree wavelet (EZW) compression algorithm, and compares its performance to a standard wavelet compression technique.

  13. 3-D wavelet compression and progressive inverse wavelet synthesis rendering of concentric mosaic.

    PubMed

    Luo, Lin; Wu, Yunnan; Li, Jin; Zhang, Ya-Qin

    2002-01-01

    Using an array of photo shots, the concentric mosaic offers a quick way to capture and model a realistic three-dimensional (3-D) environment. We compress the concentric mosaic image array with a 3-D wavelet transform and coding scheme. Our compression algorithm and bitstream syntax are designed to ensure that a local view rendering of the environment requires only a partial bitstream, thereby eliminating the need to decompress the entire compressed bitstream before rendering. By exploiting the ladder-like structure of the wavelet lifting scheme, the progressive inverse wavelet synthesis (PIWS) algorithm is proposed to maximally reduce the computational cost of selective data accesses on such wavelet compressed datasets. Experimental results show that the 3-D wavelet coder achieves high-compression performance. With the PIWS algorithm, a 3-D environment can be rendered in real time from a compressed dataset.

  14. Compression of biomedical signals with mother wavelet optimization and best-basis wavelet packet selection.

    PubMed

    Brechet, Laurent; Lucas, Marie-Françoise; Doncarli, Christian; Farina, Dario

    2007-12-01

    We propose a novel scheme for signal compression based on the discrete wavelet packet transform (DWPT) decomposition. The mother wavelet and the basis of wavelet packets were optimized and the wavelet coefficients were encoded with a modified version of the embedded zerotree algorithm. This signal-dependent compression scheme was designed by a two-step process. The first (internal optimization) was the best basis selection that was performed for a given mother wavelet. For this purpose, three additive cost functions were applied and compared. The second (external optimization) was the selection of the mother wavelet based on the minimal distortion of the decoded signal given a fixed compression ratio. The mother wavelet was parameterized in the multiresolution analysis framework by the scaling filter, which is sufficient to define the entire decomposition in the orthogonal case. The method was tested on two sets of ten electromyographic (EMG) and ten electrocardiographic (ECG) signals that were compressed with compression ratios in the range of 50%-90%. For 90% compression ratio of EMG (ECG) signals, the percent residual difference after compression decreased from (mean +/- SD) 48.6 +/- 9.9% (21.5 +/- 8.4%) with the discrete wavelet transform (DWT) using the wavelet leading to poorest performance to 28.4 +/- 3.0% (6.7 +/- 1.9%) with DWPT, with optimal basis selection and wavelet optimization. In conclusion, best basis selection and optimization of the mother wavelet through parameterization led to substantial improvement of performance in signal compression with respect to DWT and random selection of the mother wavelet. The method provides an adaptive approach for optimal signal representation for compression and can thus be applied to any type of biomedical signal.
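
    The percent residual difference (PRD) figures quoted above follow the usual definition, PRD = 100 * ||x - x_hat|| / ||x||. A minimal sketch (note that some PRD variants subtract the signal mean before computing the norms):

```python
import math

def prd(original, reconstructed):
    """Percent residual difference between a signal and its reconstruction."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)
```

    A PRD of 0% means perfect reconstruction; the 90%-compression EMG figures above (48.6% vs 28.4%) show how much the wavelet choice alone moves this distortion metric.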

  15. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. 
The method tracks the average number of bits used to encode recent runlengths, and takes the difference between this average
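
    The two code families can be sketched as simple encoders. The adaptive parameter selection described above is omitted, and the Golomb codes are shown in their power-of-two ("Rice") form for brevity:

```python
def rice_encode(n, k):
    """Golomb code with m = 2**k (a Rice code): unary quotient, then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def exp_golomb_encode(n):
    """Order-0 exponential-Golomb code: (len-1) leading zeros, then binary of n+1."""
    b = format(n + 1, "b")
    return "0" * (len(b) - 1) + b
```

    Both are prefix-free, so codewords can be concatenated into a single bit stream and parsed unambiguously; the parameter k trades off codeword length for small versus large values, which is what the adaptive selection exploits.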

  16. Discrete directional wavelet bases for image compression

    NASA Astrophysics Data System (ADS)

    Dragotti, Pier L.; Velisavljevic, Vladan; Vetterli, Martin; Beferull-Lozano, Baltasar

    2003-06-01

    The application of the wavelet transform in image processing is most frequently based on a separable construction. Lines and columns in an image are treated independently and the basis functions are simply products of the corresponding one-dimensional functions. Such a method keeps simplicity in design and computation, but is not capable of capturing properly all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted and this leads to a multi-directional frame. This transform can be applied in many areas like denoising, non-linear approximation and compression. The results on non-linear approximation and denoising show very interesting gains compared to the standard two-dimensional analysis.

  17. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.

  18. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  19. Myoelectric signal compression using zero-trees of wavelet coefficients.

    PubMed

    Norris, Jason A; Englehart, Kevin B; Lovely, Dennis F

    2003-11-01

    Recent progress in the diagnostic use of the myoelectric signal for neuromuscular diseases, coupled with increasing interest in telemedicine applications, mandates the need for an effective compression technique. The efficacy of the embedded zero-tree wavelet compression algorithm is examined with respect to some important analysis parameters (the length of the analysis segment and wavelet type) and measurement conditions (muscle type and contraction type). It is shown that compression performance improves with segment length, and that good choices of wavelet type include the Meyer wavelet and the fifth-order biorthogonal wavelet. The effects of different muscle sites and contraction types on compression performance are less conclusive. A comparison of a number of lossy compression techniques has revealed that the EZW algorithm exhibits superior performance to a hard-thresholding wavelet approach, but falls short of adaptive differential pulse code modulation. The bit prioritization capability of the EZW algorithm allows one to specify the compression factor online, making it an appealing technique for streaming data applications, as often encountered in telemedicine.

  20. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  21. Improved zerotree coding algorithm for wavelet image compression

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Li, Yunsong; Wu, Chengke

    2000-12-01

    A listless minimum zerotree coding (LMZC) algorithm based on the fast lifting wavelet transform, with lower memory requirement and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet-transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest-frequency subband. A new listless significance map coding algorithm based on the minimum zerotree, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al., is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and the compression performance of LMZC outperforms that of SPIHT in terms of hardware implementation.

  22. Three-dimensional compression scheme based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Yang, Wu; Xu, Hui; Liao, Mengyang

    1999-03-01

    In this paper, a 3D compression method based on the separable wavelet transform is discussed in detail. The most commonly used digital modalities generate multiple slices in a single examination, which are normally anatomically or physiologically correlated to each other. 3D wavelet compression methods can achieve more efficient compression by exploiting the correlation between slices. The first step is based on a separable 3D wavelet transform. Considering the difference between pixel distances within a slice and those between slices, one biorthogonal Antonini filter bank is applied within 2D slices and a second biorthogonal Villa4 filter bank along the slice direction. Then, the S+P transform is applied to the low-resolution wavelet components and an optimal quantizer is presented after analysis of the quantization noise. We use an optimal bit allocation algorithm which, instead of eliminating the coefficients of high-resolution components in smooth areas, minimizes the system reconstruction distortion at a given bit rate. Finally, to retain high coding efficiency and adapt to the different properties of each component, a comprehensive entropy coding method is proposed, in which an arithmetic coding method is applied to high-resolution components and an adaptive Huffman coding method to low-resolution components. Our experimental results are evaluated by several image measures and our 3D wavelet compression scheme is shown to be more efficient than 2D wavelet compression.

  23. MR image compression using a wavelet transform coding algorithm.

    PubMed

    Angelidis, P A

    1994-01-01

    We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.

  24. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting 95 percent of the smallest wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
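
    The sparsification step described above (zeroing the 95 percent smallest coefficients by magnitude) can be sketched as:

```python
import numpy as np

def sparsify(coeffs, keep_frac=0.05):
    """Zero all but the largest `keep_frac` of coefficients by magnitude."""
    c = np.asarray(coeffs, dtype=float)
    k = max(1, int(round(keep_frac * c.size)))
    # k-th largest magnitude becomes the keep threshold
    thresh = np.partition(np.abs(c).ravel(), -k)[-k]
    return np.where(np.abs(c) >= thresh, c, 0.0)
```

    The zeroed array compresses well under any entropy coder; reconstructing the DEM from it and differencing against the original yields exactly the elevation, slope, and aspect residuals the study examines.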

  25. Wavelets for approximate Fourier transform and data compression

    NASA Astrophysics Data System (ADS)

    Guo, Haitao

    This dissertation has two parts. In the first part, we develop a wavelet-based fast approximate Fourier transform algorithm. The second part is devoted to the developments of several wavelet-based data compression techniques for image and seismic data. We propose an algorithm that uses the discrete wavelet transform (DWT) as a tool to compute the discrete Fourier transform (DFT). The classical Cooley-Tukey FFT is shown to be a special case of the proposed algorithm when the wavelets in use are trivial. The main advantage of our algorithm is that the good time and frequency localization of wavelets can be exploited to approximate the Fourier transform for many classes of signals, resulting in much less computation. Thus the new algorithm provides an efficient complexity versus accuracy tradeoff. When approximations are allowed, under certain sparsity conditions, the algorithm can achieve linear complexity, i.e. O(N). The proposed algorithm also has built-in noise reduction capability. For waveform and image compression, we propose a novel scheme using the recently developed Burrows-Wheeler transform (BWT). We show that the discrete wavelet transform (DWT) should be used before the Burrows-Wheeler transform to improve the compression performance for many natural signals and images. We demonstrate that the simple concatenation of the DWT and BWT coding performs comparably to the embedded zerotree wavelet (EZW) compression for images. Various techniques that significantly improve the performance of our compression scheme are also discussed. The phase information is crucial for seismic data processing. However, traditional compression schemes do not pay special attention to preserving the phase of the seismic data, resulting in the loss of critical information. We propose a lossy compression method that preserves the phase as much as possible. The method is based on the self-adjusting wavelet transform that adapts to the locations of the significant signal components
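
    The Burrows-Wheeler stage can be illustrated with a naive transform using the sorted-rotations definition (production coders build the BWT from suffix arrays instead of materializing every rotation):

```python
def bwt(s, sentinel="\0"):
    """Naive Burrows-Wheeler transform: last column of the sorted rotations."""
    assert sentinel not in s
    t = s + sentinel
    rotations = sorted(t[i:] + t[:i] for i in range(len(t)))
    return "".join(row[-1] for row in rotations)
```

    The BWT groups symbols with similar contexts into runs, which is why applying it after a decorrelating DWT can improve the compressibility of the coefficient stream.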

  6. Image compression with embedded wavelet coding via vector quantization

    NASA Astrophysics Data System (ADS)

    Katsavounidis, Ioannis; Kuo, C.-C. Jay

    1995-09-01

    In this research, we improve Shapiro's EZW algorithm by performing vector quantization (VQ) of the wavelet transform coefficients. The proposed VQ scheme uses different vector dimensions for different wavelet subbands and also different codebook sizes, so that more bits are assigned to those subbands that have more energy. Another feature is that the vector codebooks used are tree-structured to maintain the embedding property. Finally, the energy of these vectors is used as a prediction parameter between different scales to improve the performance. We investigate the performance of the proposed method together with the 7-9 tap biorthogonal wavelet basis, and look into ways to incorporate lossless compression techniques.

  7. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
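
    The WSQ method rests on uniform scalar quantization of subband coefficients. The sketch below implements a generic dead-zone uniform quantizer with midpoint reconstruction; the step size `q`, zero-bin width `z`, and reconstruction offset `c` are illustrative parameters, not the values prescribed by the FBI specification.

```python
def quantize(coeffs, q, z):
    """Dead-zone uniform scalar quantizer: zero bin of width z, step q elsewhere."""
    out = []
    for a in coeffs:
        if abs(a) <= z / 2:
            out.append(0)
        elif a > 0:
            out.append(int((a - z / 2) // q) + 1)
        else:
            out.append(-(int((-a - z / 2) // q) + 1))
    return out

def dequantize(indices, q, z, c=0.5):
    """Reconstruct at offset c within each bin (c=0.5 -> bin midpoint)."""
    return [0.0 if p == 0 else
            (z / 2 + (abs(p) - 1 + c) * q) * (1 if p > 0 else -1)
            for p in indices]

coeffs = [0.2, -0.3, 1.7, -4.2, 9.9]
idx = quantize(coeffs, q=1.0, z=1.0)
rec = dequantize(idx, q=1.0, z=1.0)
print(idx, rec)
```

    The dead zone maps small coefficients to zero, which is what produces the long zero runs that the subsequent entropy coder exploits.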

  8. Wavelet-based image compression using fixed residual value

    NASA Astrophysics Data System (ADS)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2000-12-01

    Wavelet-based compression is gaining popularity due to its promising compaction properties at low bit rates. The zerotree wavelet image coding scheme efficiently exploits the multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to the original EZW coder. In contrast to the four symbols present in the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for given data. The subordinate pass of EZW is eliminated and replaced with fixed residual value transmission for easy implementation. This modification both simplifies the coding technique and speeds up the process, retaining the property of embeddedness.

  9. Wavelet Compression of Complex SAR Imagery Using Complex- and Real-Valued Wavelets: A Comparative Study

    SciTech Connect

    Ives, R.W.; Kiser, C.; Magotra, N.

    1998-10-27

    While many synthetic aperture radar (SAR) applications use only detected imagery, dramatic improvements in resolution and employment of algorithms requiring complex-valued SAR imagery suggest the need for compression of complex data. Here, we investigate the benefits of using complex-valued wavelets on complex SAR imagery in the embedded zerotree wavelet compression algorithm, compared to using real-valued wavelets applied separately to the real and imaginary components. This compression is applied at low ratios (4:1-12:1) for high fidelity output. The complex spatial correlation metric is used to numerically evaluate quality. Numerical results are tabulated and original and decompressed imagery are presented as well as correlation maps to allow visual comparisons.
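
    The quality metric mentioned above can be sketched as a normalized complex cross-correlation. The function below is a hedged stand-in for the paper's complex spatial correlation metric, computed here over flat lists of complex samples rather than over local image windows as a correlation map would be.

```python
def complex_correlation(x, y):
    """Magnitude of the normalized complex cross-correlation of two sample sets
    (an illustrative stand-in for a complex spatial correlation map)."""
    num = sum(a * b.conjugate() for a, b in zip(x, y))
    den = (sum(abs(a) ** 2 for a in x) * sum(abs(b) ** 2 for b in y)) ** 0.5
    return abs(num) / den

original = [1 + 1j, 2 - 1j, -1 + 0.5j, 0.5 + 2j]
identical = list(original)
degraded = [a + 0.1 for a in original]   # toy "compression" error

c1 = complex_correlation(original, identical)
c2 = complex_correlation(original, degraded)
print(c1, c2)
```

    A perfect copy scores 1.0; any compression loss pulls the correlation below 1, which is why low-ratio (high-fidelity) compression is evaluated this way.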

  10. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.

  11. Oriented wavelet transform for image compression and denoising.

    PubMed

    Chappelier, Vivien; Guillemot, Christine

    2006-10-01

    In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.
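
    The predict/update lifting steps the transform builds on can be sketched in one dimension. The code below implements a LeGall 5/3-style lifting stage with simple edge replication (an assumption; the paper applies its lifting steps along adaptively chosen orientations on a quincunx grid) and shows that inversion is exact by construction.

```python
def lift_forward(x):
    """One 1-D lifting stage (LeGall 5/3-style predict and update steps)."""
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd sample minus average of even neighbors (edge replicated).
    d = [o - (even[i] + even[min(i + 1, len(even) - 1)]) / 2
         for i, o in enumerate(odd)]
    # Update: approx = even sample plus quarter-sum of neighboring details.
    a = [e + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)]) / 4
         for i, e in enumerate(even)]
    return a, d

def lift_inverse(a, d):
    """Undo the steps in reverse order: reconstruction is exact by construction."""
    even = [e - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)]) / 4
            for i, e in enumerate(a)]
    odd = [dd + (even[i] + even[min(i + 1, len(even) - 1)]) / 2
           for i, dd in enumerate(d)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

x = [3.0, 5.0, 4.0, 8.0, 2.0, 1.0]
a, d = lift_forward(x)
print(a, d, lift_inverse(a, d))
```

    Because the inverse subtracts exactly what the forward pass added, perfect reconstruction holds regardless of how the prediction direction is chosen, which is the reversibility property the abstract relies on.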

  12. Edge-preserving image compression using adaptive lifting wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Qiu, Bingchang

    2015-07-01

    In this paper, a novel 2-D adaptive lifting wavelet transform is presented. The proposed algorithm is designed to further reduce the high-frequency energy of the wavelet transform, improve image compression efficiency, and preserve the edges and textures of original images more effectively. A new optional direction set, covering the surrounding integer pixels and sub-pixels, is designed; hence, our algorithm adapts far better to the orientation features in local image blocks. To obtain computational efficiency together with good coding performance, the complete process of the 2-D adaptive lifting wavelet transform is introduced and implemented. Compared with the traditional lifting-based wavelet transform, adaptive directional lifting, and the direction-adaptive discrete wavelet transform, the new structure reduces the high-frequency wavelet coefficients more effectively, and the texture structures of the reconstructed images are more refined and clear than those of the other methods. The peak signal-to-noise ratio and the subjective quality of the reconstructed images are significantly improved.

  13. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.

    2004-02-01

    Compression of ultrasonic images, which are invariably corrupted by noise, tends to cause `over-smoothness' or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT), which can also suppress noise without losing much flaw-relevant information, is presented in this work. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple scheme named DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as noise-dominated, signal-dominated, or bi-effected. Better denoising can then be realized by selectively thresholding the DWCs. In the `Local quantization' stage, different quantization strategies are applied to the DWCs according to their classification and the local image properties, allocating the bit rate to the DWCs more efficiently and thus achieving a higher compression rate. Meanwhile, the decompressed image shows suppressed noise and preserved flaw characteristics.
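
    The classify-then-threshold idea can be sketched as follows. The two magnitude thresholds and the soft-shrink rule for `bi-effected' coefficients are illustrative assumptions; the paper's classification exploits interscale correlation rather than magnitude alone.

```python
def classify_and_threshold(details, t_noise, t_signal):
    """Classify detail coefficients as noise-dominated, signal-dominated, or
    bi-effected, and threshold them selectively (thresholds are illustrative)."""
    labels, out = [], []
    for c in details:
        if abs(c) < t_noise:            # noise-dominated: discard
            labels.append("noise"); out.append(0.0)
        elif abs(c) > t_signal:         # signal-dominated: keep as-is
            labels.append("signal"); out.append(c)
        else:                           # bi-effected: soft-shrink toward zero
            labels.append("bi")
            out.append((abs(c) - t_noise) * (1 if c > 0 else -1))
    return labels, out

details = [0.1, -0.4, 2.5, -6.0, 0.9]
labels, cleaned = classify_and_threshold(details, t_noise=0.5, t_signal=2.0)
print(labels, cleaned)
```

    Zeroing the noise-dominated coefficients serves compression and denoising at once, which is the dual goal the abstract describes.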

  14. Three-dimensional image compression with integer wavelet transforms.

    PubMed

    Bilgin, A; Zweig, G; Marcellin, M W

    2000-04-10

    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.

  15. Three-Dimensional Image Compression With Integer Wavelet Transforms

    NASA Astrophysics Data System (ADS)

    Bilgin, Ali; Zweig, George; Marcellin, Michael W.

    2000-04-01

    A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.

  16. Medical image compression algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin

    2005-02-01

    With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision, and vast data volume. An optimized compression algorithm can alleviate restrictions on transmission speed and data storage. This paper describes the characteristics of the human visual system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then puts forward an optimized compression algorithm based on wavelet zerotrees. After the image is smoothed, it is decomposed with the Haar filters. The wavelet coefficients are then quantized adaptively. We can therefore maximize compression efficiency and achieve a better subjective visual image. This algorithm can be applied to image transmission in telemedicine. Finally, we examined the feasibility of the algorithm with an image transmission experiment over the network.

  17. Wavelet-based Image Compression using Subband Threshold

    NASA Astrophysics Data System (ADS)

    Muzaffar, Tanzeem; Choi, Tae-Sun

    2002-11-01

    Wavelet-based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on a modification of the original EZW coding. In this lossy technique, we discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight in every level. This minimum-weight subband in each level, which contributes the least to image reconstruction, undergoes a thresholding process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to observe the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.
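
    A minimal sketch of the subband-threshold step, assuming energy as the weight measure (the paper's exact weight definition is not given here): the minimum-weight subband of a level is found and its low-valued coefficients are zeroed before zerotree coding.

```python
def subband_weight(band):
    """Energy-based weight of a subband (energy is an assumed, common choice)."""
    return sum(c * c for c in band)

def threshold_min_subband(level_bands, t):
    """Zero small coefficients in the minimum-weight subband of one level."""
    weights = {name: subband_weight(b) for name, b in level_bands.items()}
    victim = min(weights, key=weights.get)
    level_bands[victim] = [0.0 if abs(c) < t else c
                           for c in level_bands[victim]]
    return victim, level_bands

# Toy detail subbands of one decomposition level.
level1 = {"LH": [4.0, -3.0, 5.0],
          "HL": [2.0, 1.0, -2.0],
          "HH": [0.5, -0.2, 0.8]}
victim, out = threshold_min_subband(level1, t=0.6)
print(victim, out["HH"])
```

    Zeroing the weakest subband enlarges the zerotrees, which is where the extra compression comes from at a small cost in reconstruction quality.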

  18. Object-based wavelet compression using coefficient selection

    NASA Astrophysics Data System (ADS)

    Zhao, Lifeng; Kassim, Ashraf A.

    1998-12-01

    In this paper, we present a novel approach to code image regions of arbitrary shapes. The proposed algorithm combines a coefficient selection scheme with traditional wavelet compression for coding arbitrary regions and uses a shape adaptive embedded zerotree wavelet coding (SA-EZW) to quantize the selected coefficients. Since the shape information is implicitly encoded by the SA-EZW, our decoder can reconstruct the arbitrary region without separate shape coding. This makes the algorithm simple to implement and avoids the problem of contour coding. Our algorithm also provides a sufficient framework to address content-based scalability and improved coding efficiency as described by MPEG-4.

  19. Wavelet-based pavement image compression and noise reduction

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2005-08-01

    For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce storage time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode the four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased by one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.

  20. Improved successive refinement for wavelet-based embedded image compression

    NASA Astrophysics Data System (ADS)

    Creusere, Charles D.

    1999-10-01

    In this paper we consider a new form of successive coefficient refinement which can be used in conjunction with embedded compression algorithms like Shapiro's EZW (Embedded Zerotree Wavelet) and Said & Pearlman's SPIHT (Set Partitioning in Hierarchical Trees). Using the conventional refinement process, the approximation of a coefficient that was earlier determined to be significant is refined by transmitting one of two symbols--an `up' symbol if the actual coefficient value is in the top half of the current uncertainty interval or a `down' symbol if it is in the bottom half. In the modified scheme developed here, we transmit one of three symbols instead--`up', `down', or `exact'. The new `exact' symbol tells the decoder that its current approximation of a wavelet coefficient is exact to the level of precision desired. By applying this scheme in earlier work to lossless embedded compression (also called lossy/lossless compression), we achieved significant reductions in encoder and decoder execution times with no adverse impact on compression efficiency. These excellent results for lossless systems have inspired us to adapt this refinement approach to lossy embedded compression. Unfortunately, the results we have achieved thus far for lossy compression are not as good.
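
    The three-symbol refinement loop can be sketched for a single significant coefficient as follows; the interval bounds, precision target, and pass limit are illustrative values, not the paper's.

```python
def refine_symbols(coeff, low, high, precision, max_passes=16):
    """Emit `up'/`down' refinement symbols, plus `exact' once the current
    midpoint approximates the coefficient to the desired precision."""
    symbols = []
    for _ in range(max_passes):
        mid = (low + high) / 2
        if abs(coeff - mid) <= precision:
            symbols.append("exact")      # decoder stops refining this coefficient
            return symbols, mid
        if coeff >= mid:
            symbols.append("up"); low = mid
        else:
            symbols.append("down"); high = mid
    return symbols, (low + high) / 2

symbols, approx = refine_symbols(coeff=21.0, low=16.0, high=32.0, precision=0.5)
print(symbols, approx)
```

    Once `exact' is sent, both encoder and decoder skip that coefficient in later passes, which is the source of the execution-time savings described above.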

  1. Adaptive wavelet transform algorithm for lossy image compression

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Ramirez, Pablo M.; Acevedo Mosqueda, Marco Antonio

    2004-11-01

    A new algorithm of locally adaptive wavelet transform based on the modified lifting scheme is presented. It performs an adaptation of the wavelet high-pass filter at the prediction stage to the local image data activity. The proposed algorithm uses the generalized framework for the lifting scheme that permits to obtain easily different wavelet filter coefficients in the case of the (~N, N) lifting. Changing wavelet filter order and different control parameters, one can obtain the desired filter frequency response. It is proposed to perform the hard switching between different wavelet lifting filter outputs according to the local data activity estimate. The proposed adaptive transform possesses a good energy compaction. The designed algorithm was tested on different images. The obtained simulation results show that the visual and quantitative quality of the restored images is high. The distortions are less in the vicinity of high spatial activity details comparing to the non-adaptive transform, which introduces ringing artifacts. The designed algorithm can be used for lossy image compression and in the noise suppression applications.

  2. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
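
    The mean-subtraction method can be sketched directly from the description above: compute and subtract a mean per spatial plane of a spatially-low-pass subband, leaving zero-mean residuals. The sign-magnitude conversion and subsequent encoding are omitted, and the tiny planes here are made up for illustration.

```python
def mean_subtract_planes(subband):
    """Subtract the mean from each spatial plane of a (spatially low-pass)
    subband, returning per-plane means plus zero-mean residual planes."""
    means, residuals = [], []
    for plane in subband:               # one spatial plane per spectral slice
        flat = [v for row in plane for v in row]
        m = sum(flat) / len(flat)
        means.append(m)
        residuals.append([[v - m for v in row] for row in plane])
    return means, residuals

# Two tiny 2x2 "spatial planes" whose means are far from zero, as in the abstract.
subband = [[[10.0, 12.0], [11.0, 15.0]],
           [[-7.0, -9.0], [-8.0, -4.0]]]
means, residuals = mean_subtract_planes(subband)
print(means, residuals[0])
```

    The per-plane means are transmitted as side information; the zero-mean residuals are then better matched to coders designed for 2-D image subbands.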

  3. Compression of digital hologram for three-dimensional object using Wavelet-Bandelets transform.

    PubMed

    Bang, Le Thanh; Ali, Zulfiqar; Quang, Pham Duc; Park, Jae-Hyeung; Kim, Nam

    2011-04-25

    In transformation-based compression algorithms of digital holograms for three-dimensional objects, the balance between compression ratio and normalized root mean square (NRMS) error is always the core of algorithm development. The wavelet transform method is efficient at achieving a high compression ratio, but its NRMS error is also high. In order to solve this issue, we propose a hologram compression method using the Wavelet-Bandelets transform. Our simulation and experimental results show that the Wavelet-Bandelets method has a higher compression ratio than wavelet methods and all the other methods investigated in this paper, while it still maintains a low NRMS error.
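
    For reference, a common definition of the NRMS error used to judge such trade-offs is sketched below. Normalization conventions vary (e.g., by the RMS of the original or by its peak-to-peak range), so this is one plausible variant rather than the paper's exact formula.

```python
def nrms_error(original, reconstructed):
    """Normalized RMS error, normalized here by the RMS of the original."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return (num / den) ** 0.5

orig = [4.0, -2.0, 6.0, 8.0]
lossless = [4.0, -2.0, 6.0, 8.0]
lossy = [3.0, -2.0, 6.0, 8.0]
print(nrms_error(orig, lossless), nrms_error(orig, lossy))
```

    A lossless reconstruction scores 0; a compression method is judged by how slowly this error grows as the compression ratio increases.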

  4. Integer wavelet transform for embedded lossy to lossless image compression.

    PubMed

    Reichel, J; Menegaz, G; Nadenau, M J; Kunt, M

    2001-01-01

    The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One of the possible implementations of the DWT is the lifting scheme (LS). Because perfect reconstruction is guaranteed by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. This is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise. The noise is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input. It accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
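
    The additive-noise view of rounding can be illustrated with a Haar/S-transform-style integer lifting step (an illustrative choice, not the filters analyzed in the paper): the integer transform differs from its exact counterpart by a bounded rounding error, yet remains perfectly invertible.

```python
def int_lift(x):
    """Integer (rounded) Haar-style lifting: d = odd - even; a = even + floor(d/2)."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]
    a = [e + (dd >> 1) for e, dd in zip(even, d)]   # >> 1 = floor division by 2
    return a, d

def int_unlift(a, d):
    """Exact inverse: subtract the same rounded term, so no information is lost."""
    even = [aa - (dd >> 1) for aa, dd in zip(a, d)]
    odd = [dd + e for e, dd in zip(even, d)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

def float_lift(x):
    """The same lifting steps without rounding, for comparison."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]
    a = [e + dd / 2 for e, dd in zip(even, d)]
    return a, d

x = [7, 3, 10, 15, 2, 2]
ai, di = int_lift(x)
af, df = float_lift(x)
rounding_noise = [i - f for i, f in zip(ai, af)]
print(rounding_noise)
```

    The per-coefficient differences stay within half a quantization step, which is exactly the kind of bounded additive noise the paper's model propagates through the lifting structure.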

  5. Electroencephalographic compression based on modulated filter banks and wavelet transform.

    PubMed

    Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2011-01-01

    Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals: decomposition by filter banks and by wavelet packet transformation, seeking the best compression, the best quality, and the most efficient real-time implementation. Due to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods. Quantization adapted to the dynamic range significantly enhances the quality.
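
    A per-band quantizer whose step adapts to the band's dynamic range, as proposed above, can be sketched like this; the bit depth and the handling of constant bands are illustrative choices, not the paper's design.

```python
def quantize_band(band, bits):
    """Uniform quantizer whose step adapts to the band's dynamic range."""
    lo, hi = min(band), max(band)
    levels = (1 << bits) - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    codes = [round((v - lo) / step) for v in band]
    return codes, lo, step

def dequantize_band(codes, lo, step):
    return [lo + c * step for c in codes]

band = [0.0, 0.5, -0.25, 1.0]          # small dynamic range -> small step
codes, lo, step = quantize_band(band, bits=4)
rec = dequantize_band(codes, lo, step)
print(codes, [round(v, 3) for v in rec])
```

    Because the step shrinks with the band's range, low-amplitude bands are represented more finely for the same bit budget, which is where the quality gain comes from.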

  6. Invertible update-then-predict integer lifting wavelet for lossless image compression

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Li, Yanjuan; Zhang, Haiying; Gao, Wenpeng

    2017-01-01

    This paper presents a new wavelet family for lossless image compression, obtained by re-factoring the channel representation of the update-then-predict lifting wavelet, introduced by Claypoole, Davis, Sweldens and Baraniuk, into lifting steps. We name the new wavelet family invertible update-then-predict integer lifting wavelets (IUPILWs for short). To build IUPILWs, we investigate some central issues such as normalization, invertibility, integer structure, and scaling lifting. The channel representation of the previous update-then-predict lifting wavelet with normalization is given first and its invertibility is discussed. To guarantee invertibility, we re-factor the channel representation into lifting steps. Then the integer structure and scaling lifting of the invertible update-then-predict wavelet are given and the IUPILWs are built. Experiments show that, compared with the integer lifting structures of the 5/3 wavelet, the 9/7 wavelet, and iDTT, IUPILWs yield lower bit rates for lossless image compression.

  7. Adaptive wavelet transform algorithm for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo

    2003-11-01

    A new algorithm of locally adaptive wavelet transform is presented. The algorithm implements the integer-to-integer lifting scheme and adapts the wavelet function at the prediction stage to the local image data activity. The proposed algorithm is based on the generalized framework for the lifting scheme, which permits easily obtaining different wavelet coefficients in the case of the (N~, N) lifting. It is proposed to perform hard switching between (2, 4) and (4, 4) lifting filter outputs according to an estimate of the local data activity. When the data activity is high, i.e., in the vicinity of edges, the (4, 4) lifting is performed. Otherwise, in the plain areas, the (2, 4) decomposition coefficients are calculated. The calculations are simple enough to permit implementation of the designed algorithm on fixed-point DSP processors. The proposed adaptive transform provides perfect restoration of the processed data and possesses good energy compaction. The designed algorithm was tested on different images. The proposed adaptive transform algorithm can be used for lossless image/signal compression.

  8. Wavelet Compression of Satellite-Transmitted Digital Mammograms

    NASA Technical Reports Server (NTRS)

    Zheng, Yuan F.

    2001-01-01

    Breast cancer is one of the major causes of cancer death in women in the United States. The most effective way to treat breast cancer is to detect it at an early stage by screening patients periodically. Conventional film-screening mammography uses X-ray films which are effective in detecting early abnormalities of the breast. Direct digital mammography has the potential to improve image quality and to take advantage of convenient storage, efficient transmission, powerful computer-aided diagnosis, etc. One effective alternative to direct digital imaging is secondary digitization of X-ray films. This technique may not provide as high an image quality as the direct digital approach, but it definitely has other advantages inherent to digital images. One of them is the use of satellite transmission for transferring digital mammograms between a remote image-acquisition site and a central image-reading site. This technique can benefit a large population of women who reside in remote areas where major screening and diagnosing facilities are not available. The NASA-Lewis Research Center (LeRC), in collaboration with the Cleveland Clinic Foundation (CCF), has begun a pilot study to investigate the application of the Advanced Communications Technology Satellite (ACTS) network to telemammography. The bandwidth of the T1 transmission is limited (1.544 Mbps) while the size of a mammographic image is huge. It takes a long time to transmit a single mammogram. For example, a mammogram of 4k by 4k pixels with 16 bits per pixel needs more than 4 minutes to transmit. Four images for a typical screening exam would take more than 16 minutes. This is too long a time period for a convenient screening. Consequently, compression is necessary to make satellite transmission of mammographic images practically possible. The Wavelet Research Group of the Department of Electrical Engineering at The Ohio State University (OSU) participated in the LeRC-CCF collaboration by
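
    The transmission-time figures quoted above can be sanity-checked with simple arithmetic. The calculation below gives only the raw-payload lower bound over a T1 line; real transfers add framing, protocol overhead, and retransmissions, which is why the practical times quoted in the abstract are longer than this bound.

```python
# Back-of-the-envelope check of the transmission times quoted above
# (raw payload only; link-layer and protocol overhead are ignored).
pixels = 4096 * 4096                  # a "4k by 4k" mammogram
bits_per_pixel = 16
t1_bps = 1.544e6                      # T1 line rate from the abstract

bits = pixels * bits_per_pixel
seconds = bits / t1_bps
print(f"one mammogram: {seconds / 60:.1f} min raw, "
      f"four-image exam: {4 * seconds / 60:.1f} min raw")
```

    Even the raw lower bound of several minutes per image makes the case for compression before satellite transmission.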

  9. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  10. Application of adaptive wavelet transforms via lifting in image data compression

    NASA Astrophysics Data System (ADS)

    Ye, Shujiang; Zhang, Ye; Liu, Baisen

    2008-10-01

    An adaptive wavelet transform via lifting is proposed. In this transform, the update filter is selected according to the signal's characteristics. Perfect reconstruction is possible without any overhead cost. To ensure the system's stability, in the lifting scheme of the adaptive wavelet the update step is placed before the prediction step. The adaptive wavelet transform via lifting benefits image compression because of its high stability, the small coefficients of the high-frequency parts, and the perfect reconstruction. Using the adaptive wavelet transform via lifting together with SPIHT, image compression is realized in this paper, and the results are satisfactory.

  11. Medical image compression with embedded-wavelet transform

    NASA Astrophysics Data System (ADS)

    Cheng, Po-Yuen; Lin, Freddie S.; Jannson, Tomasz

    1997-10-01

    The need for effective medical image compression and transmission techniques continues to grow because of the huge volume of radiological images captured each year. The limited bandwidth and efficiency of current networking systems cannot meet this need. In response, Physical Optics Corporation devised an efficient medical image management system to significantly reduce the storage space and transmission bandwidth required for digitized medical images. The major functions of this system are: (1) compressing medical imagery, using a visual-lossless coder, to reduce the storage space required; (2) transmitting image data progressively, to use the transmission bandwidth efficiently; and (3) indexing medical imagery according to image characteristics, to enable automatic content-based retrieval. A novel scalable wavelet-based image coder was developed to implement the system. In addition to its high compression, this approach is scalable in both image size and quality. The system provides dramatic solutions to many medical image handling problems. One application is the efficient storage and fast transmission of medical images over picture archiving and communication systems. In addition to reducing costs, the potential impact on improving the quality and responsiveness of health care delivery in the US is significant.

  12. Remotely sensed image compression based on wavelet transform

    NASA Technical Reports Server (NTRS)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.

    1995-01-01

    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal to noise ratio) and classification capability.
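The first two stages of the pipeline above, inter-band differencing followed by run-length encoding of the scanned values, can be sketched in two small pieces (the band data here are illustrative, and the Hilbert scan and Huffman stages are omitted):

```python
def band_differences(bands):
    """Spectral decorrelation: keep the first band, encode each later
    band as its pixel-wise difference from the previous band."""
    out = [list(bands[0])]
    for prev, cur in zip(bands, bands[1:]):
        out.append([c - p for c, p in zip(cur, prev)])
    return out

def run_length_encode(values):
    """(value, run length) pairs; the long zero runs produced by
    differencing correlated bands are where the gain comes from."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs
```

The Hilbert-curve scan serves the same purpose as the differencing: it orders spatially adjacent (hence similar) coefficients consecutively so that the run-length stage sees longer runs.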

  13. Lossless image compression with projection-based and adaptive reversible integer wavelet transforms.

    PubMed

    Deever, Aaron T; Hemami, Sheila S

    2003-01-01

    Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients. It yields optimal fixed prediction steps for lifting-based wavelet transforms and unifies many wavelet-based lossless image compression results found in the literature. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance. It also provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
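The core projection idea, predicting one wavelet coefficient as a linear combination of others by least squares, reduces in the single-predictor case to an orthogonal projection with a closed-form weight. This is a toy sketch of that principle, not the paper's full lifting formulation:

```python
def projection_weight(targets, predictors):
    """Least-squares weight a minimizing sum((t - a*p)**2):
    the orthogonal projection of the targets onto the predictors."""
    num = sum(t * p for t, p in zip(targets, predictors))
    den = sum(p * p for p in predictors)
    return num / den

def residual_energy(targets, predictors, a):
    """Energy left after subtracting the prediction a*p."""
    return sum((t - a * p) ** 2 for t, p in zip(targets, predictors))
```

A smaller prediction residual tends to have lower first-order entropy, which is the quantity the projection technique is designed to decrease.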

  14. Property study of integer wavelet transform lossless compression coding based on lifting scheme

    NASA Astrophysics Data System (ADS)

    Xie, Cheng Jun; Yan, Su; Xiang, Yang

    2006-01-01

    In this paper, algorithms for lossless image compression combining the integer wavelet transform with SPIHT and arithmetic coding, and their improvement, are studied. The experimental results show that, once the order of vanishing moments of the low-pass filter is fixed, the improvement in compression is not evident, provided the integer wavelet transform is invertible and its energy-compaction property increases monotonically with transform scale. For the same wavelet basis, the order of vanishing moments of the low-pass filter is more important than that of the high-pass filter in improving image compression. Lifting-based integer wavelet lossless compression coding bears no relation to the entropy of the image; the compression effect depends on the energy-compaction property of the image transform.

  15. MAXAD distortion minimization for wavelet compression of remote sensing data

    NASA Astrophysics Data System (ADS)

    Alecu, Alin; Munteanu, Adrian; Schelkens, Peter; Cornelis, Jan P.; Dewitte, Steven

    2001-12-01

    In the context of compression of high-resolution multispectral satellite image data consisting of radiances and top-of-the-atmosphere fluxes, it is vital that image calibration characteristics (luminance, radiance) be preserved within certain limits under lossy image compression. Though existing compression schemes (SPIHT, JPEG2000, SQP) give good results as far as minimization of the global PSNR error is concerned, they fail to guarantee a maximum local error. With respect to this, we introduce a new image compression scheme that guarantees a MAXAD distortion, defined as the maximum absolute difference between original and reconstructed pixel values. In terms of the Lagrangian optimization problem, this translates into minimizing the rate for a given MAXAD distortion. Our approach thus uses the l-infinite distortion measure, applied to the lifting-scheme implementation of the 9-7 floating-point Cohen-Daubechies-Feauveau (CDF) filter. Scalar quantizers, optimal in the D-R sense, are derived for every subband by solving a global optimization problem that guarantees a user-defined MAXAD. The optimization problem has been defined and solved for the case of the 9-7 filter, and we show that our approach is valid and may be applied to any finite wavelet filter synthesized via lifting. The experimental assessment of our codec shows that our technique provides excellent results in applications such as remote sensing, in which reconstruction of image calibration characteristics within a tolerable local error (MAXAD) is perceived as crucially important compared to obtaining an acceptable global error (PSNR), as is the case with existing quantizer design techniques.
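The two distortion measures contrasted above are easy to state side by side; a minimal sketch (a peak value of 255 is assumed, as for 8-bit data):

```python
import math

def maxad(original, reconstructed):
    """l-infinite distortion: the maximum absolute pixel difference
    that the codec above guarantees to bound."""
    return max(abs(o - r) for o, r in zip(original, reconstructed))

def psnr(original, reconstructed, peak=255.0):
    """Global l2-based measure; a good PSNR can hide a large local error."""
    n = len(original)
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

Two reconstructions can have identical PSNR while only one respects a MAXAD bound, which is why calibration-sensitive remote sensing favors the l-infinite measure.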

  16. All-optical image processing and compression based on Haar wavelet transform.

    PubMed

    Parca, Giorgia; Teixeira, Pedro; Teixeira, Antonio

    2013-04-20

    Fast data processing and compression methods based on the wavelet transform are fundamental tools in the area of real-time 2D data/image analysis, enabling high-definition applications and redundant data reduction. The need for information processing at high data rates motivates efforts to exploit the speed and parallelism of light for data analysis and compression. Among the several schemes for optical wavelet transform implementation, the Haar transform offers simple design and fast computation, and it can be easily implemented by optical planar interferometry. We present an all-optical scheme based on a network of asymmetric couplers for achieving fast image processing and compression in the optical domain. The implementation of the Haar wavelet transform through a 3D passive structure is supported by theoretical formulation and simulation results. The design and optimization of the asymmetric-coupler 3D network are reported, and the Haar wavelet transform, including compression, was achieved, demonstrating the feasibility of our approach.
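One level of the separable 2D Haar transform discussed above is short enough to write out; this is the plain digital version (unnormalized averages and half-differences), not the optical implementation:

```python
def haar_1d(row):
    """One Haar level: pairwise averages (low-pass half) followed by
    pairwise half-differences (high-pass half). Even length assumed."""
    avg = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    dif = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return avg + dif

def haar_2d(image):
    """Separable 2D Haar: transform every row, then every column."""
    rows = [haar_1d(r) for r in image]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

For a flat image every detail coefficient vanishes; that concentration of energy into the low-pass corner is the redundancy the compression step discards.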

  17. An improved image compression algorithm using binary space partition scheme and geometric wavelets.

    PubMed

    Chopra, Garima; Pal, A K

    2011-01-01

    Geometric wavelets are a recent development in the field of multivariate nonlinear piecewise polynomial approximation. The present study improves the geometric wavelet (GW) image coding method by using the slope-intercept representation of the straight line in the binary space partition scheme. The performance of the proposed algorithm is compared with wavelet transform-based compression methods such as the embedded zerotree wavelet (EZW), set partitioning in hierarchical trees (SPIHT) and embedded block coding with optimized truncation (EBCOT), as well as other recently developed "sparse geometric representation" based compression algorithms. The proposed image compression algorithm outperforms the EZW, the Bandelets and the GW algorithm. The presented algorithm achieves a gain of 0.22 dB over the GW method at a compression ratio of 64 for the Cameraman test image.

  18. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  19. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

    A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals was investigated by compressing and decompressing test signals. The proposed algorithm was compared with direct-based and wavelet-based compression algorithms and showed superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
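The energy-packing-efficiency thresholding and significance-map steps can be sketched as follows (a single coefficient group and a 99% energy target are assumed; the run-length and binary coding stages are omitted):

```python
def epe_threshold(coeffs, target=0.99):
    """Smallest magnitude threshold whose retained coefficients
    carry at least `target` of the total signal energy."""
    total = sum(c * c for c in coeffs)
    kept = 0.0
    for c in sorted(coeffs, key=abs, reverse=True):
        kept += c * c
        if kept / total >= target:
            return abs(c)
    return 0.0

def significance_map(coeffs, threshold):
    """Binary map: 1 where the coefficient is significant, else 0."""
    return [1 if abs(c) >= threshold else 0 for c in coeffs]
```

The significance map is highly compressible by run-length coding precisely because wavelet coefficients of an ECG are sparse: most entries are zero.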

  20. A method of image compression based on lifting wavelet transform and modified SPIHT

    NASA Astrophysics Data System (ADS)

    Lv, Shiliang; Wang, Xiaoqian; Liu, Jinguo

    2016-11-01

    In order to improve the efficiency of remote sensing image data storage and transmission, we present an image compression method based on the lifting scheme and a modified SPIHT (set partitioning in hierarchical trees), realized as an FPGA design, which improves SPIHT and enhances wavelet-transform image compression. The lifting discrete wavelet transform (DWT) architecture was selected to exploit the correlation among image pixels. In addition, we provide a study of the storage elements required for the wavelet coefficients. We present the Lena image compressed using the 3/5 lifting scheme.

  1. Wavelets

    NASA Astrophysics Data System (ADS)

    DeVore, Ronald A.; Lucier, Bradley J.

    The subject of `wavelets' is expanding at such a tremendous rate that it is impossible to give, within these few pages, a complete introduction to all aspects of its theory. We hope, however, to allow the reader to become sufficiently acquainted with the subject to understand, in part, the enthusiasm of its proponents toward its potential application to various numerical problems. Furthermore, we hope that our exposition can guide the reader who wishes to make more serious excursions into the subject. Our viewpoint is biased by our experience in approximation theory and data compression; we warn the reader that there are other viewpoints that are either not represented here or discussed only briefly. For example, orthogonal wavelets were developed primarily in the context of signal processing, an application upon which we touch only indirectly. However, there are several good expositions (e.g. Daubechies (1990) and Rioul and Vetterli (1991)) of this application. A discussion of wavelet decompositions in the context of Littlewood-Paley theory can be found in the monograph of Frazier et al. (1991). We shall also not attempt to give a complete discussion of the history of wavelets. Historical accounts can be found in the book of Meyer (1990) and the introduction of the article of Daubechies (1990). We shall try to give sufficient historical commentary in the course of our presentation to provide some feeling for the subject's development.

  2. Compressed Sensing MR Image Reconstruction Exploiting TGV and Wavelet Sparsity

    PubMed Central

    Du, Huiqian; Han, Yu; Mei, Wenbo

    2014-01-01

    Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity, in the pixel domain, of the difference image between the target and reference MR images. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet transform and gradient domains. The idea is attractive because it requires neither the estimation of contrast changes nor multiple motion compensation passes. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). The fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease the sampling ratio or, alternatively, improve the reconstruction quality. PMID:25371704

  3. Inter-view wavelet compression of light fields with disparity-compensated lifting

    NASA Astrophysics Data System (ADS)

    Chang, Chuo-Ling; Zhu, Xiaoqing; Ramanathan, Prashant; Girod, Bernd

    2003-06-01

    We propose a novel approach that uses disparity-compensated lifting for wavelet compression of light fields. Disparity compensation is incorporated into the lifting structure for the transform across the views to solve the irreversibility limitation in previous wavelet coding schemes. With this approach, we obtain the benefits of wavelet coding, such as scalability in all dimensions, as well as superior compression performance. For light fields of an object, shape adaptation is adopted to improve the compression efficiency and visual quality of reconstructed images. In this work we extend the scheme to handle light fields with arbitrary camera arrangements. A view-sequencing algorithm is developed to encode the images. Experimental results show that the proposed scheme outperforms existing light field compression techniques in terms of compression efficiency and visual quality of the reconstructed views.

  4. A new multi-resolution hybrid wavelet for analysis and image compression

    NASA Astrophysics Data System (ADS)

    Kekre, Hemant B.; Sarode, Tanuja K.; Vig, Rekha

    2015-12-01

    Because most current image- and video-related applications require higher image resolution and higher data rates during transmission, better compression techniques are constantly being sought. This paper proposes a new and unique hybrid wavelet technique, which has been used for image analysis and compression. The proposed hybrid wavelet combines the properties of existing orthogonal transforms in the most desirable way and also provides for multi-resolution analysis. These wavelets have the unique property that they can be generated for various sizes and types by using different component transforms and varying the number of components at each level of resolution. These hybrid wavelets have been applied to various standard images such as Lena (512 × 512) and Cameraman (256 × 256), and the resulting peak signal to noise ratio (PSNR) values are compared with those obtained using some standard existing compression techniques. Considerable improvement in PSNR, as much as 5.95 dB higher than the standard methods, has been observed, which shows that the hybrid wavelet gives better compression. Images of various sizes, such as Scenery (200 × 200), Fruit (375 × 375) and Barbara (112 × 224), have also been compressed using these wavelets to demonstrate their use for different sizes and shapes.

  5. [Detection of reducing sugar content of potato granules based on wavelet compression by near infrared spectroscopy].

    PubMed

    Dong, Xiao-Ling; Sun, Xu-Dong

    2013-12-01

    The feasibility of determining the reducing sugar content of potato granules using a wavelet compression algorithm combined with near-infrared spectroscopy was explored. The spectra of 250 potato granule samples were recorded by a Fourier transform near-infrared spectrometer in the range of 4000-10000 cm-1. Three parameters, the number of vanishing moments, the number of wavelet coefficients, and the number of principal component factors, were optimized to 10, 100 and 20, respectively. The original spectra of 1501 spectral variables were reduced to 100 wavelet coefficients using the db wavelet function. Partial least squares (PLS) calibration models were developed on both the 1501 spectral variables and the 100 wavelet coefficients. Sixty-two unknown samples in the prediction set were used to evaluate the performance of the PLS models. By comparison, the best result was obtained by wavelet compression combined with the PLS calibration model: the correlation coefficient of prediction and the root mean square error of prediction were 0.98 and 0.181%, respectively. The experimental results show that the dimensionality of the spectral data was reduced, with scarcely any loss of effective information, by the wavelet compression algorithm combined with near-infrared spectroscopy in the determination of reducing sugar in potato granules. The PLS model is simplified, and its predictive ability is improved.

  6. Research on application for integer wavelet transform for lossless compression of medical image

    NASA Astrophysics Data System (ADS)

    Zhou, Zude; Li, Quan; Long, Quan

    2003-09-01

    This paper proposes an approach based on using the lifting scheme to construct an integer wavelet transform whose purpose is to realize the lossless compression of images. Research on its application to medical images, a software simulation of the corresponding algorithm, and experimental results are then presented. Experiments show that this method improves the compression ratio and resolution.

  7. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless hybrid wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal to noise ratio (PSNR) with a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
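The lossless-via-residual construction above does not depend on the details of the lossy stage: any deterministic lossy coder plus an exactly coded residual reconstructs the original bit for bit. A minimal sketch, with coarse quantization standing in for the wavelet-fractal stage:

```python
def lossy_stage(signal, step=8):
    """Stand-in for the wavelet-fractal coder: coarse quantization."""
    return [step * round(x / step) for x in signal]

def encode(signal, step=8):
    approx = lossy_stage(signal, step)
    # the residual is what the paper entropy-codes with Huffman
    residual = [x - a for x, a in zip(signal, approx)]
    return approx, residual

def decode(approx, residual):
    """Add the residual back: exact reconstruction, hence infinite PSNR."""
    return [a + r for a, r in zip(approx, residual)]
```

The compression gain comes from the residual having much lower entropy than the original image, so its Huffman code is short.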

  8. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are made on the basis of a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques but achieves comparable compression efficiency. Specifically, our method has efficiency similar to SPIHT in MR image compression, slightly better for CT images, and significantly better in US image compression. Thus the compression efficiency of the presented method is competitive with the best algorithms published in the literature across diverse classes of medical images.

  9. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and achieve a balance between the compression ratio and the visual quality of the image.
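The codebook-training step can be illustrated with plain Lloyd's k-means on scalar coefficients; this is a stand-in for the paper's energy-function-based modified K-means, and the sample values below are made up:

```python
def train_codebook(samples, k=2, iters=20):
    """Lloyd's k-means: assign each sample to its nearest codeword,
    then move each codeword to the mean of its cell."""
    codebook = sorted(samples)[:: max(1, len(samples) // k)][:k]
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for s in samples:
            nearest = min(range(k), key=lambda j: abs(s - codebook[j]))
            cells[nearest].append(s)
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

def quantize(sample, codebook):
    """Quantization: emit the nearest codeword."""
    return min(codebook, key=lambda c: abs(sample - c))
```

In the paper's scheme the same idea operates on blocks of wavelet coefficients (vectors) rather than scalars, with block sizes chosen by the quadtree partition.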

  10. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and achieve a balance between the compression ratio and the visual quality of the image. PMID:23049544

  11. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    PubMed

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

    This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.

  12. Generalized B-spline subdivision-surface wavelets for geometry compression.

    PubMed

    Bertram, Martin; Duchaineau, Mark A; Hamann, Bernd; Joy, Kenneth I

    2004-01-01

    We present a new construction of lifted biorthogonal wavelets on surfaces of arbitrary two-manifold topology for compression and multiresolution representation. Our method combines three approaches: subdivision surfaces of arbitrary topology, B-spline wavelets, and the lifting scheme for biorthogonal wavelet construction. The simple building blocks of our wavelet transform are local lifting operations performed on polygonal meshes with subdivision hierarchy. Starting with a coarse, irregular polyhedral base mesh, our transform creates a subdivision hierarchy of meshes converging to a smooth limit surface. At every subdivision level, geometric detail can be expanded from wavelet coefficients and added to the surface. We present wavelet constructions for bilinear, bicubic, and biquintic B-Spline subdivision. While the bilinear and bicubic constructions perform well in numerical experiments, the biquintic construction turns out to be unstable. For lossless compression, our transform can be computed in integer arithmetic, mapping integer coordinates of control points to integer wavelet coefficients. Our approach provides a highly efficient and progressive representation for complex geometries of arbitrary topology.

  13. Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.

    PubMed

    Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre

    2008-12-01

    Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zero tree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted on this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.

  14. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach that combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme; inserting into the coarse mesh yields high robustness to several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh, and allows protected 3D meshes to be transferred at minimum size. Experiments and evaluations show that the proposed approach yields efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.

  15. ECG compression using non-recursive wavelet transform with quality control

    NASA Astrophysics Data System (ADS)

    Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching

    2016-09-01

    While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, an SQ scheme must select a set of multilevel quantisers for each quantisation process. Because of the properties of multiple-to-one mapping, such a scheme is not conducive to reconstruction error control. To address this problem, this paper presents a single-variable-control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: a genetic algorithm (GA) is first used for a high compression ratio (CR), followed by quadratic curve fitting for linear distortion control, and finally fuzzy decision-making for minimising the data-dependency effect and selecting the optimal SQ. Two databases, the Physikalisch-Technische Bundesanstalt (PTB) and Massachusetts Institute of Technology (MIT) arrhythmia databases, are used to evaluate quality-control performance. Experimental results show that the design method guarantees a high-compression-performance SQ scheme with statistically linear distortion. This property can be independent of the training data and can facilitate rapid error control.
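The distortion measure behind the quality control above, the percent root-mean-square difference (PRD), together with a plain uniform scalar quantiser, can be sketched as follows (the single-step quantiser is an illustrative simplification of the paper's multilevel SQ):

```python
import math

def sq_encode(coeffs, step):
    """Uniform scalar quantisation: map each coefficient to an integer index."""
    return [round(c / step) for c in coeffs]

def sq_decode(indices, step):
    """Reconstruct from the indices (multiple-to-one, hence lossy)."""
    return [i * step for i in indices]

def prd(original, reconstructed):
    """Percent root-mean-square difference between signal and reconstruction."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o * o for o in original)
    return 100.0 * math.sqrt(num / den)
```

A smaller step lowers the PRD but raises the rate; the paper's contribution is selecting the quantiser so that a target reconstruction quality is guaranteed from a single control variable.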

  16. ECG signal compression by multi-iteration EZW coding for different wavelets and thresholds.

    PubMed

    Tohumoglu, Gülay; Sezgin, K Erbil

    2007-02-01

    The modified embedded zero-tree wavelet (MEZW) compression algorithm for one-dimensional signals was derived from Shapiro's EZW algorithm, which was originally developed for image compression. The proposed codec proves significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit-rate control and generates a bit stream progressive in quality or rate. The EZW and MEZW algorithms apply chosen threshold values or expressions to decide which transformed coefficients are significant. Two different threshold definitions, namely percentage and dyadic thresholds, are therefore used and applied to different wavelet types in the biorthogonal and orthogonal classes. The MEZW and EZW results are quantitatively compared in terms of the compression ratio (CR) and percentage root mean square difference (PRD). Experiments are carried out on selected records from the MIT-BIH arrhythmia database and an original ECG signal. The MEZW algorithm shows a clear advantage over the traditional EZW in the CR achieved for a given PRD, and it gives better results for biorthogonal wavelets than for orthogonal wavelets.
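The dyadic threshold definition mentioned above starts, in the usual EZW convention, from the largest power of two not exceeding the maximum coefficient magnitude and halves it at every pass; a sketch (the percentage variant and the zero-tree bookkeeping are omitted):

```python
import math

def dyadic_thresholds(coeffs, passes=4):
    """T0 = 2**floor(log2(max |c|)), then halved once per pass."""
    t0 = 2.0 ** math.floor(math.log2(max(abs(c) for c in coeffs)))
    return [t0 / 2 ** p for p in range(passes)]

def dominant_pass(coeffs, threshold):
    """Flag the coefficients that are significant at this threshold."""
    return [abs(c) >= threshold for c in coeffs]
```

Each halving refines the reconstruction, which is what makes the resulting bit stream progressive in quality: decoding can stop after any pass.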

  17. On Fourier and Wavelets: Representation, Approximation and Compression

    DTIC Science & Technology

    2007-11-02

    Only slide fragments of this tutorial survive. The recoverable topics are: orthonormal bases (e.g., Fourier series, wavelet series), biorthogonal bases, and overcomplete systems or frames (transforms with uncountably many functions are excluded); the observation that there is no good local orthogonal Fourier basis, with the block-based Fourier series as an example and a consequence of the band-limiting theorem for OFDM; and the result that replacing (shift, modulation) structure by (shift, scale) structure yields "good" localized orthonormal bases, i.e., wavelet bases Ψ_{m,n}(t) = 2^{m/2} Ψ(2^m t − n).

  18. Alternative common bases and signal compression for wavelets application in chemometrics.

    PubMed

    Forina, Michele; Oliveri, Paolo; Casale, Monica

    2011-02-01

    Representation or compression of data sets in the wavelet space is usually performed to retain the maximum variance of the original or pretreated data, as in compression by means of principal components. In order to represent a number of objects together in the wavelet space, a common basis is required, and this common basis is usually obtained from the variance spectrum or the variance wavelet tree. In this study, the use of alternative common bases is suggested, both for classification and for regression problems. In the case of classification or class-modeling, the suggested common bases are based on the spectrum of the Fisher weights (a measure of the between-class to within-class variance ratio) or on the spectrum of the SIMCA discriminant weights. In the case of regression, the suggested common bases are obtained from the correlation spectrum (the correlation coefficients of the predictor variables with a response variable) or from the PLS (Partial Least Squares regression) importance of the predictors (the product of the absolute value of the regression coefficient of the predictor in the PLS model and its standard deviation). Other alternative strategies apply Gram-Schmidt supervised orthogonalization to the wavelet coefficients. The results indicate that, in both classification and regression, the information retained after compression in the wavelet space can be more efficient than that retained with a common basis obtained from variance.

  19. Wavelet-based low-delay ECG compression algorithm for continuous ECG transmission.

    PubMed

    Kim, Byung S; Yoo, Sun K; Lee, Moon H

    2006-01-01

    The delay performance of compression algorithms is particularly important when time-critical data transmission is required. In this paper, we propose a wavelet-based electrocardiogram (ECG) compression algorithm with a low-delay property for instantaneous, continuous ECG transmission, suitable for telecardiology applications over a wireless network. The proposed algorithm reduces the frame size as much as possible to achieve low delay while maintaining reconstructed signal quality. To attain both low delay and high quality, it employs waveform partitioning, adaptive frame size adjustment, wavelet compression, flexible bit allocation, and header compression. The performance of the proposed algorithm in terms of reconstructed signal quality, processing delay, and error resilience was evaluated using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) and Creighton University Ventricular Tachyarrhythmia (CU) databases and a code-division-multiple-access-based simulation model with mobile channel noise.

  20. [Statistical study of the wavelet-based lossy medical image compression technique].

    PubMed

    Puniene, Jūrate; Navickas, Ramūnas; Punys, Vytenis; Jurkevicius, Renaldas

    2002-01-01

    Medical digital images have informational redundancy. Both the amount of memory for image storage and their transmission time could be reduced if image compression techniques are applied. The techniques are divided into two groups: lossless (compression ratio does not exceed 3 times) and lossy ones. Compression ratio of lossy techniques depends on visibility of distortions. It is a variable parameter and it can exceed 20 times. A compression study was performed to evaluate the compression schemes, which were based on the wavelet transform. The goal was to develop a set of recommendations for an acceptable compression ratio for different medical image modalities: ultrasound cardiac images and X-ray angiographic images. The acceptable image quality after compression was evaluated by physicians. Statistical analysis of the evaluation results was used to form a set of recommendations.

  1. Wavelet Approach to Data Analysis, Manipulation, Compression, and Communication

    DTIC Science & Technology

    2007-08-07

    Only fragments of this report survive. The recoverable content: the work covers applications to animation movie production (according to colleague Tony DeRose of Pixar Animation Studios, recently acquired by Walt Disney), rendering and animation, and wavelet-based digital image restoration. Among the listed peer-reviewed publications is "Coherent line drawing" (with H. Kang and S. Lee), ACM SIGGRAPH Non-Photorealistic Animation and Rendering.

  2. Faster techniques to evolve wavelet coefficients for better fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Shanavaz, K. T.; Mythili, P.

    2013-05-01

    In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing computational speed by 81.35%, the evolved coefficients performed much better than coefficients reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, wavelets were evolved here with resized, cropped, resized-average and cropped-average images. Comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256 × 256 centre-cropped images took less than one-fifth of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvements in average PSNR were also observed for other compression ratios (CR) and for degraded images. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder, and the coefficients performed well with other fingerprint databases as well.
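PSNR, the figure of merit used throughout this comparison, is straightforward to compute; a minimal sketch for 8-bit images:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB; higher means better reconstruction.
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8), dtype=np.uint8)
b = np.ones((8, 8), dtype=np.uint8)   # every pixel off by one
print(round(psnr(a, b), 2))           # 48.13
```

An improvement of 1.009 dB, as reported above, thus corresponds to a measurable reduction in mean squared reconstruction error at the same bit rate.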

  3. Method for low-light-level image compression based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Sun, Shaoyuan; Zhang, Baomin; Wang, Liping; Bai, Lianfa

    2001-10-01

    Low-light-level (LLL) image communication has received increasing attention in the night-vision field as image communication has grown in importance, and LLL image compression is the key to LLL image wireless transmission. The LLL image, unlike the common visible-light image, has its own special characteristics. For still-image compression, we propose in this paper a wavelet-based image compression algorithm suited to LLL images. Because the information in an LLL image is significant, near-lossless compression is required. The LLL image is compressed with an improved EZW (Embedded Zerotree Wavelet) algorithm. We encode the lowest-frequency subband data using DPCM (Differential Pulse Code Modulation), keeping all of the information in the lowest-frequency subband. Considering the characteristics of the HVS (Human Visual System) and of LLL images, we first detect edge contours in the high-frequency subband images using a template and then encode the high-frequency subband data with the EZW algorithm. Two guiding matrices are maintained to avoid redundant scanning and repeated encoding of significant wavelet coefficients. Experimental results show that the decoded image quality is good and the encoding time is shorter than that of the original EZW algorithm.
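DPCM coding of the lowest-frequency subband, as described above, stores each coefficient as a difference from its predecessor; a minimal one-dimensional sketch (the paper's actual predictor may differ):

```python
import numpy as np

def dpcm_encode(samples):
    # Keep the first sample as-is; store every later sample as the
    # difference from its predecessor.
    d = np.empty_like(samples)
    d[0] = samples[0]
    d[1:] = samples[1:] - samples[:-1]
    return d

def dpcm_decode(diffs):
    # Cumulative sum exactly inverts the differencing.
    return np.cumsum(diffs)

x = np.array([100, 102, 101, 105], dtype=np.int32)
enc = dpcm_encode(x)
print(list(enc))                           # [100, 2, -1, 4]
print(np.array_equal(dpcm_decode(enc), x)) # True
```

Because smooth subband data yields small differences, the DPCM output entropy-codes well while remaining exactly invertible, which is what preserves all of the lowest-frequency information.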

  4. Use of a JPEG-2000 Wavelet Compression Scheme for Content-Based Ophthalmologic Retinal Images Retrieval.

    PubMed

    Lamard, Mathieu; Daccache, Wissam; Cazuguel, Guy; Roux, Christian; Cochener, Beatrice

    2005-01-01

    In this paper we propose a content-based image retrieval method for diagnosis aid in diabetic retinopathy. We characterize images without extracting significant features, using histograms obtained from images compressed with the JPEG-2000 wavelet scheme to build signatures. Retrieval is carried out by calculating signature distances between the query and database images, using a weighted distance between histograms. Retrieval efficiency is given for different standard types of JPEG-2000 wavelets and for different values of the histogram weights. A classified diabetic retinopathy image database was built to allow algorithm testing. On this image database, results are promising: the retrieval efficiency is higher than 70% for some lesion types.
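The weighted distance between histograms can be sketched as follows. The exact metric and weights used in the paper are not specified here; a weighted L1 distance over normalised histograms is assumed for illustration:

```python
import numpy as np

def weighted_histogram_distance(h1, h2, weights):
    # Weighted L1 distance between two normalised histograms.
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.sum(weights * np.abs(h1 - h2)))

query = np.array([4.0, 2.0, 2.0])
candidate = np.array([2.0, 4.0, 2.0])
w = np.array([1.0, 1.0, 0.5])   # hypothetical per-bin weights
print(weighted_histogram_distance(query, candidate, w))  # 0.5
```

Tuning the per-bin weights is what allows some subbands' histograms to count more than others when ranking database images against the query.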

  5. Basic Investigation on Medical Ultrasonic Echo Image Compression by JPEG2000 - Availability of Wavelet Transform and ROI Method

    DTIC Science & Technology

    2007-11-02

    Only fragments of this abstract survive. The recoverable content: JPEG2000 was expected to be approved in the near future; its main features are the use of the wavelet transform and the ROI (Region of Interest) method. The wavelet transform is expected to be more effective than the Fourier transform for ultrasonic echo signal/image processing, and the ROI method seems appropriate as a compression method for medical images. The purpose of the paper is to investigate the effectiveness of the wavelet transform compared with the DCT (JPEG) and

  6. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    NASA Astrophysics Data System (ADS)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission over a telecommunication channel. The proposed algorithm combines a wavelet transform, which separates low- and high-frequency components; higher-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients to produce a lower-variance signal. The latter is coded with Huffman encoding, yielding an optimal code length in terms of average bits per sample. At the receiver, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter and the inverse discrete wavelet transform, yielding the estimated ECG signal. The algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including various cardiac anomalies as well as normal ECG signals. The results, evaluated in terms of compression ratio and mean square error, are around 1:8 and 7%, respectively. Beyond the numerical evaluation, visual inspection confirms the high quality of the reconstructed ECG signal, with the different ECG waves recovered correctly.
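The two figures of merit used in evaluations like this one can be computed as follows; PRD (the usual normalised error metric in the ECG compression literature) is shown alongside the compression ratio as an illustration:

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    # Ratio of input size to output size, e.g. 8.0 means 8:1.
    return original_bits / compressed_bits

def prd(original, reconstructed):
    # Percentage root-mean-square difference between the original and
    # reconstructed signals.
    err = np.sum((original - reconstructed) ** 2)
    return 100.0 * np.sqrt(err / np.sum(original ** 2))

x = np.array([0.0, 4.0, 0.0, -4.0])
y = np.array([0.0, 3.0, 0.0, -3.0])
print(compression_ratio(8 * 1024, 1024))  # 8.0
print(prd(x, y))                          # 25.0
```

Reporting both numbers together matters because either one alone can be made arbitrarily good at the expense of the other.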

  7. DSP accelerator for the wavelet compression/decompression of high- resolution images

    SciTech Connect

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed; spatial/frequency regions are then automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  8. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on the wavelet transform and region-of-interest (ROI) coding. The algorithm achieves near-lossless coding inside the ROI and quality-controllable lossy coding outside it. After mean removal from the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to locate the ROI. The coefficients related to the ROI are treated as important and kept. For the remainder, the allowed energy loss in the transform domain is calculated from the target PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is determined from this energy loss. The important coefficients, comprising the ROI coefficients and the coefficients outside the ROI that are larger than the threshold, are passed to a linear quantizer. The map recording the positions of the important coefficients in the original wavelet coefficient vector is compressed with a run-length encoder, and Huffman coding is applied to improve the compression ratio. ECG signals from the MIT/BIH arrhythmia database were tested, and satisfactory results were obtained in terms of preservation of clinical information, quality and compression ratio.
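Run-length encoding of the significance map can be sketched as follows (a minimal illustration of the idea, not the paper's exact encoder):

```python
def run_length_encode(bits):
    # Encode a binary significance map as (value, run_length) pairs.
    runs = []
    prev, count = bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

def run_length_decode(runs):
    # Expand the pairs back into the original bit sequence.
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

sig_map = [1, 1, 0, 0, 0, 1]
print(run_length_encode(sig_map))  # [(1, 2), (0, 3), (1, 1)]
```

Because significant coefficients cluster around the ROI, the map contains long runs of zeros, which is exactly the case run-length coding compresses well before the Huffman stage.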

  9. Optical image compression based on adaptive directional prediction discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Qiu, Bingchang

    2013-11-01

    The traditional lifting wavelet transform cannot effectively reconstruct the nonhorizontal and nonvertical high-frequency information of an image. In this paper, we present a new image compression method based on an adaptive directional prediction discrete wavelet transform (ADP-DWT). We first design a directional prediction model to obtain the optimal transform direction of the lifting wavelet. We then execute the directional lifting transform along the optimal transform direction, reducing the edge and texture energy in the nonhorizontal and nonvertical directions of the high-frequency sub-bands. Finally, the wavelet coefficients are coded with the set partitioning in hierarchical trees (SPIHT) algorithm. The new method combines the advantages of adaptive directional lifting (ADL) and the direction-adaptive discrete wavelet transform (DA-DWT), with far lower computational complexity than either. For images containing regular, fine textures or edges, the coding performance of ADP-DWT is better than that of ADL and DA-DWT.

  10. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
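The spectral KLT step can be sketched as an eigen-decomposition of the band covariance matrix. This is a generic illustration under the assumption of a bands × pixels data layout, not the paper's implementation:

```python
import numpy as np

def klt_spectral(cube):
    # cube: (bands, pixels). Decorrelate the spectral bands using the
    # eigenvectors of the band covariance matrix (Karhunen-Loeve transform).
    mean = cube.mean(axis=1, keepdims=True)
    centred = cube - mean
    cov = centred @ centred.T / centred.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]        # strongest component first
    basis = eigvecs[:, order]
    return basis.T @ centred, basis, mean

rng = np.random.default_rng(0)
# Three synthetic bands sharing one underlying signal, i.e. spectrally correlated.
bands = rng.normal(size=(1, 500)) * np.array([[3.0], [2.0], [1.0]]) \
        + rng.normal(scale=0.1, size=(3, 500))
components, basis, mean = klt_spectral(bands)
cov = components @ components.T / components.shape[1]
print(np.allclose(cov - np.diag(np.diag(cov)), 0, atol=1e-9))  # True
```

After the transform the component covariance is diagonal, so the spectral redundancy has been removed and each component can be handed to the EZW coder independently.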

  11. Review of digital fingerprint acquisition systems and wavelet compression

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    2003-04-01

    Over the last decade many criminal justice agencies have replaced their fingerprint card based systems with electronic processing. We examine these new systems and find that image acquisition to support the identification application is consistently a challenge. Image capture and compression are widely dispersed and relatively new technologies within criminal justice information systems. Image quality assurance programs are just beginning to mature.

  12. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    SciTech Connect

    Reynolds, W.D. Jr; Kenyon, R.V.

    1996-08-01

    In this paper a method for compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, whose subbands convey the necessary frequency-domain information.

  13. An evaluation of the effects of wavelet coefficient quantisation in transform based EEG compression.

    PubMed

    Higgins, Garry; McGinley, Brian; Jones, Edward; Glavin, Martin

    2013-07-01

    In recent years, there has been a growing interest in the compression of electroencephalographic (EEG) signals for telemedical and ambulatory EEG applications. Data compression is an important factor in these applications as a means of reducing the amount of data required for transmission. Allowing for a carefully controlled level of loss in the compression method can provide significant gains in data compression. Quantisation is an easy-to-implement method of data reduction that requires little power expenditure. However, it is a relatively simple, non-invertible operation, and reducing the bit-level too far can result in the loss of too much information to reproduce the original signal to an appropriate fidelity. Other lossy compression methods allow for finer control over compression parameters, generally relying on discarding signal components the coder deems insignificant. SPIHT is a state-of-the-art signal compression method based on the Discrete Wavelet Transform (DWT), originally designed for images but highly regarded as a general means of data compression. This paper compares compression by changing the quantisation level of the DWT coefficients in SPIHT against the standard thresholding method used in SPIHT, to evaluate the effects of each on EEG signals. The combination of increasing quantisation and the use of SPIHT as an entropy encoder is shown to provide significantly improved results over the standard SPIHT algorithm alone.

  14. An Evaluation of the Effects of Wavelet Coefficient Quantization in Transform Based EEG Compression

    PubMed Central

    Higgins, Garry; McGinley, Brian; Jones, Edward; Glavin, Martin

    2016-01-01

    In recent years, there has been a growing interest in the compression of electroencephalographic (EEG) signals for telemedical and ambulatory EEG applications. Data compression is an important factor in these applications as a means of reducing the amount of data required for transmission. Allowing for a carefully controlled level of loss in the compression method can provide significant gains in data compression. Quantization is an easy to implement method of data reduction that requires little power expenditure. However, it is a relatively simple, noninvertible operation, and reducing the bit-level too far can result in the loss of too much information to reproduce the original signal to an appropriate fidelity. Other lossy compression methods allow for finer control over compression parameters, generally relying on discarding signal components the coder deems insignificant. SPIHT is a state of the art signal compression method based on the Discrete Wavelet Transform (DWT), originally designed for images but highly regarded as a general means of data compression. This paper compares the approaches of compression by changing the quantization level of the DWT coefficients in SPIHT, with the standard thresholding method used in SPIHT, to evaluate the effects of each on EEG signals. The combination of increasing quantization and the use of SPIHT as an entropy encoder has been shown to provide significantly improved results over using the standard SPIHT algorithm alone. PMID:23668341
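Reducing the bit-level of DWT coefficients amounts to a uniform quantizer; a minimal sketch (an assumed generic quantizer, not the papers' exact scheme) showing that coarser bit-levels increase reconstruction error:

```python
import numpy as np

def quantise(coeffs, bits):
    # Uniform quantiser: round coefficients onto a grid whose step is set
    # so that 2**(bits-1) levels span the maximum magnitude.
    step = np.max(np.abs(coeffs)) / (2 ** (bits - 1))
    return np.round(coeffs / step) * step

rng = np.random.default_rng(1)
coeffs = rng.normal(size=256)
err_fine = np.max(np.abs(coeffs - quantise(coeffs, 8)))
err_coarse = np.max(np.abs(coeffs - quantise(coeffs, 2)))
print(err_fine < err_coarse)  # True
```

The operation is non-invertible, as the abstract notes: once coefficients are rounded onto the grid, the discarded precision cannot be recovered, so the bit-level directly bounds the achievable fidelity.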

  15. Wavelet-based ECG compression by bit-field preserving and running length encoding.

    PubMed

    Chan, Hsiao-Lung; Siao, You-Chen; Chen, Szi-Wen; Yu, Shih-Fan

    2008-04-01

    Efficient electrocardiogram (ECG) compression can reduce the payload of real-time ECG transmission as well as reduce the amount of data storage in long-term ECG recording. In this paper an ECG compression/decompression architecture based on the bit-field preserving (BFP) and running length encoding (RLE)/decoding schemes incorporated with the discrete wavelet transform (DWT) is proposed. Compared to complex and repetitive manipulations in the set partitioning in hierarchical tree (SPIHT) coding and the vector quantization (VQ), the proposed algorithm has advantages of simple manipulations and a feedforward structure that would be suitable to implement on very-large-scale integrated circuits and general microcontrollers.

  16. Comparison of wavelet and Karhunen-Loeve transforms in video compression applications

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Soloveyko, Olexandr M.; Kurashov, Vitalij N.

    1999-12-01

    This paper compares three advanced techniques for video compression: 3D Embedded Zerotree Wavelet (EZW) coding, the recently suggested Optimal Image Coding using the Karhunen-Loeve transform (OICKL), and a new video compression algorithm based on the 3D EZW coding scheme but using the KL transform for frame decorrelation (3D-EZWKL). It is shown that the OICKL technique provides the best performance, and that using the KL transform with the 3D-EZW coding scheme gives better results than the 3D-EZW algorithm alone.

  17. Evaluation of color-embedded wavelet image compression techniques

    NASA Astrophysics Data System (ADS)

    Saenz, Martha; Salama, Paul; Shen, Ke; Delp, Edward J., III

    1998-12-01

    Color embedded image compression is investigated by means of a set of core experiments that seek to evaluate the advantages of various color transformations, spatial orientation trees and the use of monochrome embedded coding schemes such as EZW and SPIHT. In order to take advantage of the interdependencies of the color components for a given color space, two new spatial orientation trees that relate frequency bands and color components are investigated.

  18. Adaptive lifting scheme of wavelet transforms for image compression

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Wang, Guoyin; Nie, Neng

    2001-03-01

    To meet the demand for adaptive wavelet transforms via lifting, a three-stage lifting scheme (predict-update-adapt) is proposed in this paper, extending the common two-stage scheme (predict-update). The first stage is the key of the scheme: it is an interim updating step whose coefficient can be adjusted to the data to achieve a better result. The second stage is the updating stage, and the third is the adaptive predicting stage; the result is an update-then-predict scheme that can detect jumps in the image from the updated data without any additional side information. In the adaptive predicting stage, symmetric prediction filters are used in smooth areas of the image and asymmetric prediction filters at the edges of jumps, to reduce prediction error. These filters are designed directly in the spatial domain. The inherent relationships between the coefficients of the first stage and those of the other stages are found and expressed as equations; the design result is thus a class of filters whose coefficients are no longer invariant. Simulation results for image coding with this scheme are good.
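A lifting step pair can be sketched with the simplest (Haar-like) predict and update operators. The adaptive third stage described above, which would switch prediction filters near detected jumps, is omitted from this illustration:

```python
import numpy as np

def lifting_forward(x):
    # Split into even/odd samples, predict odds from evens, then update
    # evens so the approximation preserves the local mean (Haar-like).
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict: odd sample ≈ neighbouring even sample
    approx = even + detail / 2   # update: keep the running average
    return approx, detail

def lifting_inverse(approx, detail):
    # Undo the update, then the prediction, then re-interleave.
    even = approx - detail / 2
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([1.0, 3.0, 2.0, 6.0])
approx, detail = lifting_forward(x)
print(np.array_equal(lifting_inverse(approx, detail), x))  # True
```

A key property of lifting, relied on by adaptive schemes, is that perfect reconstruction holds by construction: each step is inverted by simply flipping its sign, no matter what filter the predict stage chooses.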

  19. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
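The VQ stage can be sketched with a plain k-means codebook trainer. This is a generic illustration; the optimization that assigns VQ parameters per subband is not reproduced:

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    # Plain k-means: learn k codewords, then quantise each vector to the
    # index of its nearest codeword.
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, labels

# Two well-separated clusters of subband blocks should map to two codewords.
blocks = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0], [10.1, 10.0]])
codebook, labels = train_codebook(blocks, k=2)
print(labels[0] == labels[1] and labels[2] == labels[3] and labels[0] != labels[2])  # True
```

Each block is then transmitted as a codeword index, so the bits per block are fixed by the codebook size, which is how per-subband rate constraints are enforced.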

  20. Applications of wavelet-based compression to multidimensional earth science data

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-02-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  2. Study on the application of embedded zero-tree wavelet algorithm in still images compression

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Lu, Yanhe; Li, Taifu; Lei, Gang

    2005-12-01

    An image gains directional selectivity in its high-frequency content through the wavelet transform, which is consistent with the visual characteristics of human eyes; the most important such characteristic is the visual masking effect. The Embedded Zerotree Wavelet (EZW) coding method codes an entire image at the same level: important regions (regions of interest) and background regions (regions of indifference) are coded equally. Building on the study of human visual characteristics, in particular the visual masking effect, this paper employs a region-of-interest image compression method, an Embedded Zerotree Wavelet with Regions of Interest (EZW-ROI) algorithm, to encode the regions of interest and the regions of non-interest separately. In this way, much less of the important information in the image is lost, channel resources and memory space are fully used, and image quality in the regions of interest improves. Experiments showed that an image reconstructed with the EZW-ROI algorithm has better visual quality than one reconstructed with EZW at high compression ratios.

  3. Nonlinear wavelet compression of ion mobility spectra from ion mobility spectrometers mounted in an unmanned aerial vehicle.

    PubMed

    Cao, Libo; Harrington, Peter de B; Harden, Charles S; McHugh, Vincent M; Thomas, Martin A

    2004-02-15

    Linear and nonlinear wavelet compression of ion mobility spectrometry (IMS) data are compared and evaluated. IMS provides low detection limits and rapid response for many compounds. Nonlinear wavelet compression of ion mobility spectra reduced the data to 4-5% of its original size, while eliminating artifacts in the reconstructed spectra that occur with linear compression, and the root-mean-square reconstruction error was 0.17-0.20% of the maximum intensity of the uncompressed spectra. Furthermore, nonlinear wavelet compression precisely preserves the peak location (i.e., drift time). Small variations in peak location may occur in the reconstructed spectra that were linearly compressed. A method was developed and evaluated for optimizing the compression. The compression method was evaluated with in-flight data recorded from ion mobility spectrometers mounted in an unmanned aerial vehicle (UAV). Plumes of dimethyl methylphosphonate were disseminated for interrogation by the UAV-mounted IMS system. The daublet 8 wavelet filter exhibited the best performance for these evaluations.
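The distinction between linear and nonlinear wavelet compression can be sketched as follows: linear compression keeps a fixed leading subset of coefficients, while nonlinear compression keeps the largest-magnitude coefficients wherever they occur. This is a generic illustration, not the paper's optimized scheme:

```python
import numpy as np

def linear_compress(coeffs, keep_fraction):
    # Linear approximation: keep the first k coefficients, zero the rest.
    k = max(1, int(keep_fraction * coeffs.size))
    out = np.zeros_like(coeffs)
    out[:k] = coeffs[:k]
    return out

def nonlinear_compress(coeffs, keep_fraction):
    # Nonlinear approximation: keep the k largest-magnitude coefficients,
    # regardless of their position.
    k = max(1, int(keep_fraction * coeffs.size))
    idx = np.argsort(np.abs(coeffs))[::-1][:k]
    out = np.zeros_like(coeffs)
    out[idx] = coeffs[idx]
    return out

coeffs = np.array([0.1, 0.2, 9.0, 0.1, 7.0, 0.3])
print(list(nonlinear_compress(coeffs, 1 / 3)))  # [0.0, 0.0, 9.0, 0.0, 7.0, 0.0]
```

Because the large coefficients of a peak can appear anywhere in the transform, the nonlinear rule preserves peak shape and drift time far better than truncating a fixed subset, which is the behaviour reported above.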

  4. A Parallel Adaptive Wavelet Method for the Simulation of Compressible Reacting Flows

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel

    2011-11-01

    The Wavelet Adaptive Multiresolution Representation (WAMR) method provides a robust means of controlling spatial grid adaptation: fine grid spacing in regions of a solution requiring high resolution (i.e., near steep gradients, singularities, or near-singularities) and much coarser grid spacing where the solution is slowly varying. The sparse grids produced using the WAMR method exhibit very high compression ratios compared to uniform grids of equivalent resolution. Subsequently, a wide range of spatial scales often occurring in continuum physics models can be captured efficiently. Furthermore, the wavelet transform provides a direct measure of local error at each grid point, effectively producing automatically verified solutions. The algorithm is parallelized using an MPI-based domain decomposition approach suitable for a wide range of distributed-memory parallel architectures. The method is applied to the solution of the compressible, reactive Navier-Stokes equations and includes multi-component diffusive transport and chemical kinetics models. Results for the method's parallel performance are reported, and its effectiveness on several challenging compressible reacting flow problems is highlighted.
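    The grid-adaptation criterion can be sketched in a few lines of NumPy: an interpolating-wavelet detail coefficient at an odd grid point is the error of predicting that point from its even neighbours, so points with large detail magnitude are exactly the points a sparse grid must retain. The tanh front and the threshold below are invented for the illustration and are not the WAMR method itself.

    ```python
    import numpy as np

    def detail_coeffs(x):
        """Interpolating-wavelet details: error of predicting odd points from even neighbours."""
        return x[1:-1:2] - 0.5 * (x[0:-2:2] + x[2::2])

    # A steep front sampled on a uniform grid of 257 points.
    t = np.linspace(-1.0, 1.0, 257)
    u = np.tanh(50.0 * t)

    d = detail_coeffs(u)
    keep = np.abs(d) > 1e-3          # refine only where linear prediction fails
    n_kept = int(keep.sum())         # odd points retained by the adaptive grid
    compression = d.size / max(n_kept, 1)
    ```

    Only the handful of points straddling the front exceed the threshold, so the adaptive grid is much sparser than the uniform one while the detail magnitude doubles as a local error estimate.
    
    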

  5. Lossless compression of 3D hyperspectral sounder data using the wavelet and Burrows-Wheeler transforms

    NASA Astrophysics Data System (ADS)

    Wei, Shih-Chieh; Huang, Bormin

    2004-10-01

    Hyperspectral sounder data is used for retrieval of useful geophysical parameters and promises better weather prediction. It has two notable characteristics. First, it is huge in size, with 2D spatial coverage and high spectral resolution in the infrared region. Second, it tolerates little noise or error in the retrieval of geophysical parameters, since a mathematically ill-posed problem is involved. Therefore compression should preferably be lossless or near lossless for data transfer and archiving. Medical data from X-ray computerized tomography (CT) or magnetic resonance imaging (MRI) possesses similar characteristics, which motivates applying lossless compression schemes developed for medical data to the hyperspectral sounder data. In this paper, we explore the use of a wavelet-based lossless data compression scheme for the 3D hyperspectral data which uses, in sequence, a forward difference scheme, an integer wavelet transform, a Burrows-Wheeler transform, and an arithmetic coder. Compared to previous work, our approach is shown to outperform the CALIC and 3D EZW schemes.
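    The first two stages of the pipeline, forward differencing followed by an integer wavelet transform, can be sketched with an integer Haar lifting step; because every operation is an exactly invertible integer map, the round trip is bit-for-bit lossless, which is the property the sounder data requires. The BWT and arithmetic-coding stages are omitted, and the toy 12-bit channel values are invented for the example.

    ```python
    import numpy as np

    def fwd_diff(x):                      # stage 1: forward difference (first sample kept)
        return np.concatenate(([x[0]], np.diff(x)))

    def inv_diff(d):
        return np.cumsum(d)

    def haar_lift(x):                     # stage 2: integer Haar via lifting
        e, o = x[::2].copy(), x[1::2].copy()
        d = o - e                         # predict step: detail
        a = e + d // 2                    # update step: approximation (floor division, reversible)
        return a, d

    def haar_unlift(a, d):
        e = a - d // 2                    # undo update
        o = d + e                         # undo predict
        x = np.empty(a.size + d.size, dtype=a.dtype)
        x[::2], x[1::2] = e, o
        return x

    rng = np.random.default_rng(1)
    radiances = rng.integers(0, 4096, 512).astype(np.int64)   # toy 12-bit channel values
    a, d = haar_lift(fwd_diff(radiances))
    restored = inv_diff(haar_unlift(a, d))
    ```

    Differencing and lifting both shrink the magnitudes fed to the entropy coder without discarding a single bit, which is what makes "lossless or near lossless" achievable at useful ratios.
    
    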

  6. FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Bradley, Jonathan N.; Brislawn, Christopher M.; Hopper, Thomas

    1993-08-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite- length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  7. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  8. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  9. Spatial model of lifting scheme in wavelet transforms and image compression

    NASA Astrophysics Data System (ADS)

    Wu, Yu; Li, Gang; Wang, Guoyin

    2002-03-01

    Wavelet transforms built via the lifting scheme are called second-generation wavelet transforms. In some lifting schemes, however, the coefficients are derived mathematically from first-generation wavelets, which limits the choice of high-performance filters available for lifting. The spatial structures of lifting schemes are also simple: the classical lifting scheme, predict-update, is two-stage, and most researchers simply adopt this structure. Moreover, in most designs the lifting filters are both hard to obtain and fixed. In earlier work, we presented a new three-stage lifting scheme, predict-update-adapt, in which the designed filters are no longer fixed. In this paper, we continue to study the spatial model of the lifting scheme. A group of general multi-stage lifting schemes is derived and designed. All lifting filters are designed in the spatial domain, with appropriate mathematical methods selected. The designed coefficients are flexible and can be adjusted to different data. We give the mathematical design details in this paper. Finally, all of the designed lifting models are applied to image compression, with satisfactory results.
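    The classical two-stage predict-update structure mentioned above can be made concrete with the reversible LeGall 5/3 integer lifting step (the form used in JPEG2000's lossless mode): odd samples are predicted from their even neighbours, then even samples are updated from the resulting details. Boundary handling here is simple edge extension; this is a sketch of the two-stage structure only, not the paper's multi-stage design.

    ```python
    import numpy as np

    def lift53_fwd(x):
        """One reversible LeGall 5/3 lifting step (predict, then update) on an even-length signal."""
        e, o = x[::2].astype(np.int64), x[1::2].astype(np.int64)
        m = e.size
        er = e[np.minimum(np.arange(m) + 1, m - 1)]          # right even neighbour, edge-extended
        d = o - (e + er) // 2                                # predict step: detail coefficients
        dl = d[np.maximum(np.arange(m) - 1, 0)]              # left detail neighbour, edge-extended
        s = e + (dl + d + 2) // 4                            # update step: smooth coefficients
        return s, d

    def lift53_inv(s, d):
        m = s.size
        dl = d[np.maximum(np.arange(m) - 1, 0)]
        e = s - (dl + d + 2) // 4                            # undo update
        er = e[np.minimum(np.arange(m) + 1, m - 1)]
        o = d + (e + er) // 2                                # undo predict
        x = np.empty(2 * m, dtype=np.int64)
        x[::2], x[1::2] = e, o
        return x

    rng = np.random.default_rng(2)
    sig = rng.integers(-128, 128, 256)
    s, d = lift53_fwd(sig)
    ```

    Each lifting stage is inverted by simply subtracting what was added, so reversibility holds regardless of the rounding inside the stages; that structural guarantee is what multi-stage designs build on.
    
    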

  10. New image compression algorithm based on improved reversible biorthogonal integer wavelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Libao; Yu, Xianchuan

    2012-10-01

    Low computational complexity and high coding efficiency are the most significant requirements for image compression and transmission. The reversible biorthogonal integer wavelet transform (RB-IWT) offers low computational complexity via the lifting scheme (LS) and allows both lossy and lossless decoding from a single bitstream. However, RB-IWT degrades the coding performance and peak signal-to-noise ratio (PSNR) in image compression. In this paper, a new IWT-based compression scheme based on an optimal RB-IWT and an improved SPECK is presented. In the new algorithm, the scaling parameter of each subband is chosen to optimize the transform coefficients. During coding, all image coefficients are encoded using a simple, efficient quadtree partitioning method. The scheme is similar to SPECK, but the new method uses a single quadtree partitioning instead of the set partitioning and octave-band partitioning of the original SPECK, which reduces coding complexity. Experimental results show that the new algorithm not only has low computational complexity, but also provides lossy-coding PSNR performance comparable to that of the SPIHT algorithm using RB-IWT filters and better than that of the SPECK algorithm. Additionally, the new algorithm efficiently supports both lossy and lossless compression using a single bitstream. The presented algorithm is valuable for future remote sensing image compression.

  11. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based, remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based, image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
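    Spectral decorrelation of the kind compared in this record can be sketched with a Karhunen-Loève transform (principal components) across bands: the eigenvectors of the interband covariance rotate the spectral axis so that band-to-band correlation vanishes and signal energy packs into the first few components. The synthetic five-band cube below is invented for the example and stands in for real multispectral data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_bands, n_pix = 5, 4096
    base = rng.normal(size=n_pix)            # shared scene structure seen by every band
    cube = np.stack([2.0 * base + 0.2 * rng.normal(size=n_pix) for _ in range(n_bands)])

    # KLT across the spectral dimension: rotate bands by covariance eigenvectors.
    mean = cube.mean(axis=1, keepdims=True)
    cov = np.cov(cube)                       # interband covariance (bands are rows)
    w, v = np.linalg.eigh(cov)               # eigenvalues in ascending order
    klt = v.T @ (cube - mean)                # decorrelated spectral components

    resid_corr = np.corrcoef(klt)            # should be ~identity after decorrelation
    energy_top = klt[-1].var() / klt.var(axis=1).sum()  # energy fraction in strongest component
    ```

    After the rotation the spatial coder sees one energetic component plus near-noise residual bands, which is why spectral decorrelation before wavelet coding pays off.
    
    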

  12. Wavelet compression of three-dimensional time-lapse biological image data.

    PubMed

    Stefansson, H Narfi; Eliceiri, Kevin W; Thomas, Charles F; Ron, Amos; DeVore, Ron; Sharpley, Robert; White, John G

    2005-02-01

    The use of multifocal-plane, time-lapse recordings of living specimens has allowed investigators to visualize dynamic events both within ensembles of cells and individual cells. Recordings of such four-dimensional (4D) data from digital optical sectioning microscopy produce very large data sets. We describe a wavelet-based data compression algorithm that capitalizes on the inherent redundancies within multidimensional data to achieve higher compression levels than can be obtained from single images. The algorithm will permit remote users to roam through large 4D data sets using communication channels of modest bandwidth at high speed. This will allow animation to be used as a powerful aid to visualizing dynamic changes in three-dimensional structures.

  13. Fast algorithm of byte-to-byte wavelet transform for image compression applications

    NASA Astrophysics Data System (ADS)

    Pogrebnyak, Oleksiy B.; Sossa Azuela, Juan H.; Ramirez, Pablo M.

    2002-11-01

    A new fast algorithm for the 2D DWT is presented. The algorithm operates on byte-represented images and performs the transformation with the second-order Cohen-Daubechies-Feauveau wavelet, using the lifting scheme for the calculations. The proposed algorithm is based on the "checkerboard" computation scheme for the non-separable 2D wavelet. The problem of data extension near the image borders is resolved by computing a 1D Haar wavelet in the vicinity of the borders. With the checkerboard splitting, only one detail image is produced at each level of decomposition, which simplifies further analysis for data compression. The calculations are simple, with no floating-point operations, allowing implementation of the algorithm on fixed-point DSP processors for fast, near real-time processing. The proposed algorithm does not achieve perfect restoration of the processed data because of the rounding introduced at each level of decomposition/restoration to operate on byte-represented data. The designed algorithm was tested on different images. The criterion used to estimate the quality of the restored images quantitatively was the well-known PSNR; for visual quality estimation, error maps between the original and restored images were calculated. The simulation results show that the visual and quantitative quality of the restored images degrades as the number of decomposition levels increases but remains sufficiently high even after six levels. The introduced distortions are concentrated in the vicinity of details with high spatial activity and are absent in homogeneous regions. The designed algorithm can be used for lossy image compression and in noise suppression applications.
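    The non-perfect-restoration point can be demonstrated with a deliberately simplified 1D analogue of such byte-oriented schemes: if both the average and the half-difference of a Haar pair are rounded down to stay in byte range, pairs of odd parity reconstruct with an error of one grey level, and the error compounds as levels are stacked. This stand-in is not the paper's 2D checkerboard scheme.

    ```python
    import numpy as np

    def byte_haar(x):
        """Haar pair transform with per-level rounding, as byte-oriented pipelines use."""
        a = (x[::2] + x[1::2]) // 2      # rounded average (stays in 0..255)
        d = (x[::2] - x[1::2]) // 2      # rounded half-difference
        return a, d

    def byte_ihaar(a, d):
        x = np.empty(2 * a.size, dtype=a.dtype)
        x[::2], x[1::2] = a + d, a - d
        return x

    rng = np.random.default_rng(4)
    img_row = rng.integers(0, 256, 256).astype(np.int64)
    rec = byte_ihaar(*byte_haar(img_row))
    max_err = int(np.abs(rec - img_row).max())   # 1 grey level, not 0: restoration is not perfect
    ```

    Pairs whose sum is even round away nothing and reconstruct exactly; odd-parity pairs lose one least-significant bit, which is the rounding loss the abstract attributes to byte-to-byte processing.
    
    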

  14. Remote sensing image compression method based on lift scheme wavelet transform

    NASA Astrophysics Data System (ADS)

    Tao, Hongjiu; Tang, Xinjian; Liu, Jian; Tian, Jinwen

    2003-06-01

    Based on the lifting scheme and the construction theorems for the integer Haar wavelet and biorthogonal wavelets, we propose a new construction method for integer wavelet transforms, after introducing the construction of application-specific biorthogonal wavelet transforms from the Haar and Lazy wavelets. We present the method and algorithm of the lifting scheme, together with its mathematical formulation and experimental results.

  15. Dataflow and remapping for wavelet compression and realtime view-dependent optimization of billion-triangle isosurfaces

    SciTech Connect

    Duchaineau, M A; Porumbescu, S D; Bertram, M; Hamann, B; Joy, K I

    2000-10-06

    Currently, large physics simulations produce 3D fields whose individual surfaces, after conventional extraction processes, contain upwards of hundreds of millions of triangles. Detailed interactive viewing of these surfaces requires powerful compression to minimize storage, and fast view-dependent optimization of display triangulations to drive high-performance graphics hardware. In this work we provide an overview of an end-to-end multiresolution dataflow strategy whose goal is to increase efficiencies in practice by several orders of magnitude. Given recent advancements in subdivision-surface wavelet compression and view-dependent optimization, we present algorithms here that provide the "glue" that makes this strategy hold together. Shrink-wrapping converts highly detailed unstructured surfaces of arbitrary topology to the semi-structured form needed for wavelet compression. Remapping to triangle bintrees minimizes disturbing "pops" during real-time display-triangulation optimization and provides effective selective-transmission compression for out-of-core and remote access to these huge surfaces.

  16. Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

    PubMed

    Lai, Zongying; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Ye, Jing; Zhan, Zhifang; Chen, Zhong

    2016-01-01

    Compressed sensing has shown great capacity for accelerating magnetic resonance imaging if an image can be sparsely represented. How the image is sparsified seriously affects its reconstruction quality. In the present study, a graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstruction. With this transform, image patches are viewed as vertices and their differences as edges, and the shortest path on the graph minimizes the total difference of all image patches. The problem is formulated with l1-norm regularization and solved by an alternating-direction minimization with continuation algorithm. Experimental results demonstrate that the proposed method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves fewer reconstruction errors on the tested datasets.

  17. ECG compression using Slantlet and lifting wavelet transform with and without normalisation

    NASA Astrophysics Data System (ADS)

    Aggarwal, Vibha; Singh Patterh, Manjeet

    2013-05-01

    This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear and nonlinear transforms. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root-mean-square difference (UPRD) within tolerance. A binary look-up table is then built to store the position map of zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding, and the look-up table is encoded by Huffman coding. The results show that the LWT performs best among the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the normalised case, the TC are divided by the number of samples to reduce their range. The normalised coefficients (NC) are then thresholded, after which the procedure is the same as for the coefficients without normalisation. The results show that the compression ratio (CR) of the LWT with normalisation is improved compared to that without normalisation.
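    The bisection step described above is easy to sketch: in an orthonormal transform, the PRD introduced by zeroing coefficients below a threshold t can be computed from the coefficients alone, and it is nondecreasing in t, so bisection homes in on the largest threshold that still meets a user-specified PRD. The synthetic coefficients and the 5% target are invented for the example.

    ```python
    import numpy as np

    def prd_at(c, t):
        """PRD (%) caused by zeroing transform coefficients with |c| < t (orthonormal transform)."""
        dropped = np.where(np.abs(c) < t, c, 0.0)
        return 100.0 * np.sqrt(np.sum(dropped ** 2) / np.sum(c ** 2))

    def threshold_for_prd(c, target, iters=60):
        lo, hi = 0.0, float(np.abs(c).max())
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if prd_at(c, mid) > target:   # overshoot: threshold too aggressive
                hi = mid
            else:                         # still within budget: push the threshold up
                lo = mid
        return lo                         # largest threshold found that does not exceed the target

    rng = np.random.default_rng(5)
    tc = rng.normal(size=2000) * np.exp(-np.arange(2000) / 200.0)   # decaying coefficients
    t_star = threshold_for_prd(tc, target=5.0)
    achieved = prd_at(tc, t_star)
    ```

    Because the PRD-versus-threshold curve is a monotone step function, the loop converges to within one coefficient-crossing of the target, which is what "within the tolerance" amounts to in practice.
    
    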

  18. Comparison of wavelet scalar quantization and JPEG for fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Kidd, Robert C.

    1995-01-01

    An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of existing international standard JPEG, it appears likely that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Taken together, these facts suggest a decision different from the one that was made by the FBI with regard to its fingerprint image compression standard. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.

  19. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. 
The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  20. A High-Performance Lossless Compression Scheme for EEG Signals Using Wavelet Transform and Neural Network Predictors

    PubMed Central

    Sriraam, N.

    2012-01-01

    Developments of new classes of efficient compression algorithms, software systems, and hardware for data-intensive applications in today's digital health care systems provide timely and meaningful solutions in response to exponentially growing patient information data complexity and associated analysis requirements. Of the different 1D medical signals, electroencephalography (EEG) data is of great importance to the neurologist for detecting brain-related disorders. The volume of digitized EEG data generated and preserved for future reference exceeds the capacity of recent developments in digital storage and communication media, and hence there is a need for an efficient compression system. This paper presents a new and efficient high-performance lossless EEG compression scheme using wavelet transforms and neural network predictors. The coefficients generated from the EEG signal by an integer wavelet transform are used to train the neural network predictors. The error residues are further encoded using a combinational entropy encoder, a Lempel-Ziv-arithmetic encoder. A new context-based error modeling is also investigated to improve the compression efficiency. A compression ratio of 2.99 (with compression efficiency of 67%) is achieved with the proposed scheme with less encoding time, thereby providing diagnostic reliability for lossless transmission as well as recovery of EEG signals for telemedicine applications. PMID:22489238
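    The predictive stage can be illustrated with a simple second-order linear predictor standing in for the paper's neural network: for a slowly varying EEG-like signal, the prediction residues have a much more peaked distribution than the raw samples, so their first-order entropy, and hence the entropy-coded size, drops. The synthetic signal and the predictor order are assumptions of the sketch.

    ```python
    import numpy as np

    def entropy_bits(x):
        """First-order (histogram) entropy of an integer sequence, in bits/sample."""
        _, counts = np.unique(x, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(6)
    t = np.arange(4096)
    eeg = np.round(100 * np.sin(2 * np.pi * t / 250)
                   + 30 * np.sin(2 * np.pi * t / 37)
                   + rng.normal(0, 2, t.size)).astype(np.int64)

    pred = 2 * eeg[1:-1] - eeg[:-2]      # second-order linear predictor: 2*x[n-1] - x[n-2]
    resid = eeg[2:] - pred               # residues that would go to the entropy coder

    raw_bits = entropy_bits(eeg)
    resid_bits = entropy_bits(resid)
    ```

    The residues cluster tightly around zero, so a subsequent entropy coder spends far fewer bits per sample than it would on the raw signal, and the round trip (prediction plus residue) remains exactly lossless.
    
    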

  1. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large acoustic impedance mismatch at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. Signal-processing techniques are therefore highly valuable in non-destructive testing. This paper presents a wavelet filtering and phase-coded pulse compression hybrid method to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and a pulse compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are analysed to achieve a higher main-to-sidelobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
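    The Barker-coded pulse-compression stage can be sketched directly: cross-correlating the received trace with the 13-bit Barker code compresses the coded pulse into a single sharp peak whose sidelobes are at most 1/13 of the mainlobe, which is the main-to-sidelobe ratio the record discusses. The noise level and pulse delay below are invented test values.

    ```python
    import numpy as np

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

    # Ideal autocorrelation: mainlobe of 13, sidelobes no larger than 1 (the Barker property).
    acf = np.correlate(barker13, barker13, mode="full")
    mainlobe = acf.max()
    sidelobe = np.abs(np.delete(acf, acf.argmax())).max()

    # Pulse compression of a noisy received trace with the coded pulse buried at delay 200.
    rng = np.random.default_rng(7)
    rx = rng.normal(0, 0.3, 512)
    rx[200:213] += barker13
    compressed = np.correlate(rx, barker13, mode="valid")
    peak_delay = int(compressed.argmax())
    ```

    The correlation gain of 13 lifts the echo well clear of the noise floor, and the peak position recovers the time of flight, which is why longer Barker codes improve detectability in high-attenuation materials.
    
    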

  2. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used as a method to get the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that does not employ the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method outperforms them even with fewer samples.

  3. Lossless to lossy compression for hyperspectral imagery based on wavelet and integer KLT transforms with 3D binary EZW

    NASA Astrophysics Data System (ADS)

    Cheng, Kai-jen; Dill, Jeffrey

    2013-05-01

    In this paper, a lossless-to-lossy compression scheme for hyperspectral images based on the Integer Karhunen-Loève Transform (IKLT) and the Integer Discrete Wavelet Transform (IDWT) is proposed. Integer transforms are used to accomplish reversibility. The IKLT is used as a spectral decorrelator and the 2D-IDWT is used as a spatial decorrelator. The three-dimensional Binary Embedded Zerotree Wavelet (3D-BEZW) algorithm efficiently encodes the hyperspectral volumetric image by implementing progressive bitplane coding. The signs and magnitudes of transform coefficients are encoded separately. Lossy and lossless compression of the signs is implemented by the conventional EZW algorithm and arithmetic coding, respectively, while the efficient 3D-BEZW algorithm is applied to code the magnitudes. Further compression can be achieved using arithmetic coding. The lossless and lossy compression performance is compared with other state-of-the-art predictive and transform-based image compression methods on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Results show that the 3D-BEZW performance is comparable to predictive algorithms, while its computational cost is comparable to transform-based algorithms.

  4. High performance projectile seal development for non perfect railgun bores

    SciTech Connect

    Wolfe, T.R.; Le Vine, F.E.; Riedy, P.E.; Panlasigui, A.; Hawke, R.S.; Susoeff, A.R.

    1997-01-01

    The sealing of high pressure gas behind an accelerating projectile has been developed over centuries of use in conventional guns and cannons. The principal concerns were propulsion efficiency and trajectory accuracy and repeatability. The development of guns for use as high pressure equation-of-state (EOS) research tools increased the importance of better seals to prevent gas leakage from interfering with the experimental targets. The development of plasma-driven railguns has further increased the need for higher quality seals to prevent gas and plasma blow-by. This paper summarizes more than a decade of effort to meet these increased requirements. In small bore railguns, the first improvement was prompted by the need to contain the propulsive plasma behind the projectile to avoid the initiation of current-conducting paths in front of the projectile. The second major requirement arose from the development of a railgun to serve as an EOS tool, where it was necessary to maintain an evacuated region in front of the projectile throughout the acceleration process. More recently, the techniques developed for the small bore guns have been applied to large bore railguns and electro-thermal chemical guns in order to maximize their propulsion efficiency. Furthermore, large bore railguns are often less rigid and less straight than conventional homogeneous material guns. Hence, techniques to maintain seals in non perfect, non homogeneous material launchers have been developed and are included in this paper.

  5. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    PubMed Central

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients will benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
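    The core of any iterative shrinkage-thresholding reconstruction is the ISTA step: a gradient step on the data-fidelity term followed by soft thresholding in the sparse domain. The sketch below runs plain ISTA on a toy compressed-sensing problem with an identity sparsifying transform (the exponential wavelet transform and random shift of EWISTARS are not reproduced); the problem sizes, λ, and iteration count are invented.

    ```python
    import numpy as np

    def soft(x, t):
        """Soft thresholding: the proximal operator of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam, iters=500):
        """Minimize 0.5*||y - Ax||^2 + lam*||x||_1 by iterative shrinkage-thresholding."""
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        obj = []
        for _ in range(iters):
            x = soft(x + A.T @ (y - A @ x) / L, lam / L)   # gradient step, then shrink
            obj.append(0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(np.abs(x)))
        return x, obj

    rng = np.random.default_rng(8)
    m, n = 64, 128
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
    x0 = np.zeros(n)
    x0[rng.choice(n, 5, replace=False)] = rng.normal(0, 1, 5) + 3.0   # 5-sparse truth
    y = A @ x0                                  # noiseless CS measurements

    x_hat, obj = ista(A, y, lam=0.01)
    rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
    ```

    With a step of 1/L the objective is guaranteed nonincreasing; the wavelet-domain variants in this record keep the same step and only change the basis in which the shrinkage is applied.
    
    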

  6. Data compression studies for NOAA Hyperspectral Environmental Suite (HES) using 3D integer wavelet transforms with 3D set partitioning in hierarchical trees

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Huang, Hung-Lung; Chen, Hao; Ahuja, Alok; Baggett, Kevin; Schmit, Timothy J.; Heymann, Roger W.

    2004-02-01

    The next-generation NOAA/NESDIS GOES-R hyperspectral sounder, now referred to as the HES (Hyperspectral Environmental Suite), will have hyperspectral resolution (over one thousand channels with spectral widths on the order of 0.5 wavenumber) and high spatial resolution (less than 10 km). Hyperspectral sounder data is a particular class of data requiring high accuracy for useful retrieval of atmospheric temperature and moisture profiles, surface characteristics, cloud properties, and trace gas information. Hence compression of these data sets should preferably be lossless or near lossless. Given the large volume of three-dimensional hyperspectral sounder data that will be generated by the HES instrument, the use of robust data compression techniques will be beneficial to data transfer and archive. In this paper, we study lossless data compression for the HES using 3D integer wavelet transforms via the lifting scheme. The wavelet coefficients are processed with the 3D set partitioning in hierarchical trees (SPIHT) scheme followed by context-based arithmetic coding. SPIHT provides better coding efficiency than Shapiro's original embedded zerotree wavelet (EZW) algorithm. We extend the 3D SPIHT scheme to take on any size of 3D satellite data, each of whose dimensions need not be divisible by 2^N, where N is the number of levels of the wavelet decomposition being performed. The compression ratios of various kinds of wavelet transforms are presented along with a comparison with the JPEG2000 codec.

  7. Lossless data compression studies for NOAA hyperspectral environmental suite using 3D integer wavelet transforms with 3D embedded zerotree coding

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Huang, Hung-Lung; Chen, Hao; Ahuja, Alok; Baggett, Kevin; Schmit, Timothy J.; Heymann, Roger W.

    2003-09-01

    Hyperspectral sounder data is a particular class of data that requires high accuracy for useful retrieval of atmospheric temperature and moisture profiles, surface characteristics, cloud properties, and trace gas information. Therefore compression of these data sets should be lossless or near lossless. The next-generation NOAA/NESDIS GOES-R hyperspectral sounder, now referred to as the HES (Hyperspectral Environmental Suite), will have hyperspectral resolution (over one thousand channels with spectral widths on the order of 0.5 wavenumber) and high spatial resolution (less than 10 km). Given the large volume of three-dimensional hyperspectral sounder data that will be generated by the HES instrument, the use of robust data compression techniques will be beneficial to data transfer and archive. In this paper, we study lossless data compression for the HES using 3D integer wavelet transforms via lifting schemes. The wavelet coefficients are then processed with the 3D embedded zerotree wavelet (EZW) algorithm followed by context-based arithmetic coding. We extend the 3D EZW scheme to handle 3D satellite data of any size, whose dimensions need not be divisible by 2^N, where N is the number of levels of wavelet decomposition performed. The compression ratios of various kinds of wavelet transforms are presented along with a comparison with the JPEG2000 codec.

  8. Wavelet-based compression with ROI coding support for mobile access to DICOM images over heterogeneous radio networks.

    PubMed

    Maglogiannis, Ilias; Doukas, Charalampos; Kormentzas, George; Pliakas, Thomas

    2009-07-01

    Most of the commercial medical image viewers do not provide scalability in image compression and/or region of interest (ROI) encoding/decoding. Furthermore, these viewers do not take into consideration the special requirements and needs of a heterogeneous radio setting that is constituted by different access technologies [e.g., general packet radio services (GPRS)/universal mobile telecommunications system (UMTS), wireless local area network (WLAN), and digital video broadcasting (DVB-H)]. This paper discusses a medical application that contains a viewer for digital imaging and communications in medicine (DICOM) images as a core module. The proposed application enables scalable wavelet-based compression, retrieval, and decompression of DICOM medical images and also supports ROI coding/decoding. Furthermore, the presented application is appropriate for use by mobile devices operating in heterogeneous radio settings. In this context, performance issues regarding the usage of the proposed application in the case of a prototype heterogeneous system setup are also discussed.

  9. Mean square error approximation for wavelet-based semiregular mesh compression.

    PubMed

    Payan, Frédéric; Antonini, Marc

    2006-01-01

    The objective of this paper is to propose an efficient model-based bit allocation process that optimizes the performance of a wavelet coder for semiregular meshes. More precisely, this process computes the best quantizers for the wavelet coefficient subbands, i.e., those that minimize the reconstructed mean square error for a specific target bitrate. In order to design a fast, low-complexity allocation process, we propose an approximation of the reconstructed mean square error for the coding of semiregular mesh geometry. This error is expressed directly in terms of the quantization errors of each coefficient subband. For that purpose, we take into account the influence of the wavelet filters on the quantized coefficients. Furthermore, we propose a specific approximation for wavelet transforms based on lifting schemes. Experimentally, we show that, in comparison with a "naive" approximation (depending only on the subband levels), using the proposed approximation as the distortion criterion during the model-based allocation process improves the performance of a wavelet-based coder for any model, any bitrate, and any lifting scheme.
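
    Model-based allocation of this kind can be illustrated with the classical high-resolution rate-distortion model D_b(R_b) = var_b * 2^(-2*R_b) per subband, whose equal-slope Lagrangian solution has a closed form. This sketch is illustrative only (it ignores subband weights, subband sample counts, and the non-negativity constraint on rates, all of which a real allocator must handle); it shows how subband variances drive the optimal bit split:

```python
import math

def allocate_bits(variances, total_rate):
    """Equal-slope solution of: minimize sum_b var_b * 2**(-2*R_b)
    subject to sum_b R_b = total_rate.  The closed form is
        R_b = R_avg + 0.5 * log2(var_b / geometric_mean(variances))."""
    n = len(variances)
    gmean = math.exp(sum(math.log(v) for v in variances) / n)
    return [total_rate / n + 0.5 * math.log2(v / gmean) for v in variances]

# High-variance subbands receive more bits; the total budget is preserved.
rates = allocate_bits([16.0, 4.0, 1.0, 1.0], total_rate=8.0)
```

    The paper's contribution is, in effect, a better distortion model than the naive per-subband variance above, one that accounts for the lifting filters' effect on quantization error.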

  10. Compression of ECG signals using variable-length classified vector sets and wavelet transforms

    NASA Astrophysics Data System (ADS)

    Gurkan, Hakan

    2012-12-01

    In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines modeling the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS) with residual error coding via the wavelet transform. In particular, we form the VL-CSEVS derived from ECG signals, which exploits the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths obtained by an energy-based segmentation method; they are made available to both the transmitter and the receiver used in our proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database, and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that the proposed algorithm achieves high compression ratios with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, a result supported by the clinical tests we carried out.
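
    The PRD figure used in such evaluations has a simple definition. A minimal version follows (this is the variant computed against the raw signal; MPRD-style variants subtract the signal mean in the denominator first):

```python
import math

def prd(original, reconstructed):
    """Percentage root-mean-square difference between two ECG signals."""
    err = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    ref = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(err / ref)
```

    A PRD of 0 means perfect reconstruction; reconstructing everything as zero gives 100.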

  11. The Performance of Wavelets for Data Compression in Selected Military Applications

    DTIC Science & Technology

    1990-02-23

    [OCR fragment of the report's exhibit list: graphs of Laplacian sidelobe-to-peak ratio and of Laplacian value vs. radial distance, one graph per (reference image, test patch) pair, seven compressions per graph; plus comparison graphs at various scalings vs. radial distance, two graphs per compression, seven scalings per graph.]

  12. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    NASA Technical Reports Server (NTRS)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.; Moore, Thomas E.

    2015-01-01

    Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high-performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.

  13. Comparison of Wavelet Packets With Cosine-Modulated Pseudo-QMF Bank for ECG Compression

    DTIC Science & Technology

    2001-10-25

    [OCR fragment: text on the 16-channel QMF bank and the noble identities for multirate systems, with reference-list entries including P. P. Vaidyanathan, Multirate Systems and Filter Banks (Prentice-Hall, 1993) and N. J. Fliege, Multirate Digital Signal Processing: Multirate Systems, Filter Banks, Wavelets (John Wiley & Sons, 1994).]

  14. Application of region selective embedded zerotree wavelet coder in CT image compression.

    PubMed

    Li, Guoli; Zhang, Jian; Wang, Qunjing; Hu, Cungang; Deng, Na; Li, Jianping

    2005-01-01

    Compression is necessary in medical image archiving because of the huge data volume. Medical images differ from ordinary images in several characteristics; for example, part of the information in a CT image is clinically useless, and storing it wastes resources. A region-selective EZW coder is therefore proposed, with which only the useful part of the image is selected and compressed; tests on a CT image give good results.

  15. Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression

    PubMed Central

    Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander

    2016-01-01

    By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143

  16. Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation

    PubMed Central

    Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan

    2014-01-01

    In today's increasingly open society, with growing attention to individual rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but can also provide crucial biometric information for identification and authentication. In this study, we propose a new approach that integrates electrocardiogram watermarking and compression, a combination that has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal-to-noise ratio (CNR) measures to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible. PMID:24566636

  17. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    SciTech Connect

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.

  18. Psychophysical evaluation of the effect of JPEG, full-frame discrete cosine transform (DCT) and wavelet image compression on signal detection in medical image noise

    NASA Astrophysics Data System (ADS)

    Eckstein, Miguel P.; Morioka, Craig A.; Whiting, James S.; Eigler, Neal L.

    1995-04-01

    Image quality associated with image compression has been either arbitrarily evaluated through visual inspection, loosely defined in terms of subjective criteria such as image sharpness or blockiness, or measured by arbitrary measures such as the mean square error between the uncompressed and compressed image. The present paper psychophysically evaluated the effect of three different compression algorithms (JPEG, full-frame, and wavelet) on human visual detection of computer-simulated low-contrast lesions embedded in real medical image noise from patient coronary angiograms. Performance identifying the signal-present location, as measured by the d' index of detectability, decreased for all three algorithms by approximately 30% and 62% for the 16:1 and 30:1 compression ratios, respectively. We evaluated the ability of two previously proposed measures of image quality, mean square error (MSE) and normalized nearest neighbor difference (NNND), to determine the best compression algorithm. The MSE predicted significantly higher image quality for the JPEG algorithm at the 16:1 compression ratio and for both JPEG and full-frame at the 30:1 compression ratio. The NNND predicted significantly higher image quality for the full-frame algorithm at both compression ratios. These findings suggest that these two measures of image quality may lead to erroneous conclusions in evaluations and/or optimizations of image compression algorithms.
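
    The d' index quantifies detectability in standard-deviation units of the internal response. The study above used a location-identification (M-alternative) task; as an illustrative simplification only, here is the simplest member of the family, d' computed from proportion correct in a two-alternative forced-choice task:

```python
from statistics import NormalDist

def dprime_2afc(pc):
    """Detectability index from proportion correct in a 2AFC task:
    d' = sqrt(2) * z(Pc), where z is the inverse standard normal CDF.
    (Location identification with M alternatives needs a different mapping.)"""
    return (2.0 ** 0.5) * NormalDist().inv_cdf(pc)
```

    Chance performance (Pc = 0.5) gives d' = 0, and d' grows monotonically with accuracy, so a 30% drop in d' under compression means a measurable loss of lesion detectability.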

  19. Orthogonal wavelets for image transmission and compression schemes: implementation and results

    NASA Astrophysics Data System (ADS)

    Ahmadian, Alireza; Bharath, Anil A.

    1996-10-01

    Diagnostic quality medical images consume vast amounts of network time, system bandwidth and disk storage in current computer architectures. There are many ways in which the use of system and network resources may be optimized without compromising diagnostic image quality. One of these is in the choice of image representation, both for storage and transfer. In this paper, we show how a particularly flexible method of image representation, based on Mallat's algorithm, leads to efficient methods of both lossy image compression and progressive image transmission. We illustrate the application of a progressive transmission scheme to medical images, and provide some examples of image refinement in a multiscale fashion. We show how thumbnail images created by a multiscale orthogonal decomposition can be optimally interpolated, in a minimum square error sense, based on a generalized Moore-Penrose inverse operator. In the final part of this paper, we show that the representation can provide a framework for lossy image compression, with signal/noise ratios far superior to those provided by a standard JPEG algorithm. The approach can also accommodate precision-based progressive coding. We show the results of increasing the priority of encoding a selected region of interest in a bit-stream describing a multiresolution image representation.

  20. Wavelets and Scattering

    DTIC Science & Technology

    1994-07-29

    [OCR fragment: ...Douglas (MDA). This has been extended to the use of local SVD methods and of wavelet packets to provide a controlled sparsening. The wavelets offer possibilities for segmenting, compressing, and denoising signals, and one of the authors (GVW) is using them to study edge sets with Prof. B. Jawerth.]

  1. Wavelets on Planar Tesselations

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-02-25

    We present a new technique for progressive approximation and compression of polygonal objects in images. Our technique uses local parameterizations defined by meshes of convex polygons in the plane. We generalize a tensor product wavelet transform to polygonal domains to perform multiresolution analysis and compression of image regions. The advantage of our technique over conventional wavelet methods is that the domain is an arbitrary tessellation rather than, for example, a uniform rectilinear grid. We expect that this technique has many applications, including image compression, progressive transmission, radiosity, virtual reality, and image morphing.

  2. Adaptive boxcar/wavelet transform

    NASA Astrophysics Data System (ADS)

    Sezer, Osman G.; Altunbasak, Yucel

    2009-01-01

    This paper presents a new adaptive Boxcar/Wavelet transform for image compression. The Boxcar/Wavelet decomposition emphasizes the idea of average-interpolation representation, which uses dyadic averages and their interpolation to explain a special case of biorthogonal wavelet transforms (BWT). This perspective on image compression, together with the lifting scheme, offers the ability to train an optimum 2-D filter set for nonlinear prediction (interpolation) that adapts to the context around the low-pass wavelet coefficients to reduce energy in the high-pass bands. Moreover, the filters obtained after training are observed to possess directional information with some textural clues that can provide better prediction performance. This work represents a first step towards obtaining this new set of training-based filters in the context of the Boxcar/Wavelet transform. Initial experimental results show better subjective quality compared to the popular 9/7-tap and 5/3-tap BWTs, with comparable results in objective quality.

  3. Periodized wavelets

    SciTech Connect

    Schlossnagle, G.; Restrepo, J.M.; Leaf, G.K.

    1993-12-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted against their counterparts which form a basis for L^2(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and several tabulated values are included.

  4. Wavelets, signal processing and matrix computations

    NASA Astrophysics Data System (ADS)

    Suter, Bruce W.

    1994-09-01

    Key scientific results were found in the following four areas: (1) multidimensional Malvar wavelets; (2) time/spatial varying filter banks; (3) vector filter banks and vector-valued wavelets; and (4) multirate time-frequency. These results have opened the following new areas of research: nonseparable multidimensional Malvar wavelets, vector-valued wavelets and vector filter banks, and multirate time-frequency analysis. These results also provide fundamental tools in many Air Force and industrial applications, such as modeling of turbulence, compression of images/video images, etc.

  5. Wavelet theory and its applications

    SciTech Connect

    Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.

    1996-07-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed- parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.

  6. Optical Wavelet Signals Processing and Multiplexing

    NASA Astrophysics Data System (ADS)

    Cincotti, Gabriella; Moreolo, Michela Svaluto; Neri, Alessandro

    2005-12-01

    We present compact integrable architectures to perform the discrete wavelet transform (DWT) and the wavelet packet (WP) decomposition of an optical digital signal, and we show that the combined use of planar lightwave circuit (PLC) technology and multiresolution analysis (MRA) can add flexibility to current multiple-access optical networks. We furnish design guidelines to synthesize wavelet filters as two-port lattice-form planar devices, and we give some examples of optical signal denoising and compression/decompression techniques in the wavelet domain. Finally, we present a fully optical wavelet packet division multiplexing (WPDM) scheme where data signals are waveform-coded onto wavelet atom functions for transmission, and we numerically evaluate its performance.

  7. Optical wavelet transform for fingerprint identification

    NASA Astrophysics Data System (ADS)

    MacDonald, Robert P.; Rogers, Steven K.; Burns, Thomas J.; Fielding, Kenneth H.; Warhola, Gregory T.; Ruck, Dennis W.

    1994-03-01

    The Federal Bureau of Investigation (FBI) has recently sanctioned a wavelet fingerprint image compression algorithm developed for reducing storage requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical fingerprint identification process. Wavelet filters are created from computer-generated holograms of biorthogonal wavelets, the same wavelets implemented in the FBI algorithm. Using a detour phase holographic technique, a complex binary filter mask is created with both symmetry and linear phase. The wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial Light Modulator and the biorthogonal wavelet filters. A telescopic lens combination scales the transformed fingerprint onto the filters, providing a means of adjusting the biorthogonal wavelet filter dilation continuously. The wavelet transformed fingerprint is then applied to an optical fingerprint identification process. Comparison between normal fingerprints and wavelet transformed fingerprints shows improvement in the optical identification process, in terms of rotational invariance.

  8. Predictive depth coding of wavelet transformed images

    NASA Astrophysics Data System (ADS)

    Lehtinen, Joonas

    1999-10-01

    In this paper, a new prediction-based method, predictive depth coding, for lossy wavelet image compression is presented. It compresses a wavelet pyramid decomposition by predicting the number of significant bits in each wavelet coefficient quantized by universal scalar quantization, and then coding the prediction error with arithmetic coding. The adaptively found linear prediction context covers spatial neighbors of the coefficient to be predicted and the corresponding coefficients on the lower scale and in the different orientation pyramids. In addition to the number of significant bits, the sign and the bits of non-zero coefficients are coded. The compression method is tested with a standard set of images and the results are compared with SFQ, SPIHT, EZW, and context-based algorithms. Even though the algorithm is very simple and does not require any extra memory, the compression results are relatively good.

  9. Wavelet-aided pavement distress image processing

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2003-11-01

    A wavelet-based pavement distress detection and evaluation method is proposed. This method consists of two main parts, real-time processing for distress detection and offline processing for distress evaluation. The real-time processing part includes wavelet transform, distress detection and isolation, and image compression and noise reduction. When a pavement image is decomposed into different frequency subbands by the wavelet transform, the distresses, which are usually irregular in shape, appear as high-amplitude wavelet coefficients in the high-frequency detail subbands, while the background appears in the low-frequency approximation subband. Two statistical parameters, high-amplitude wavelet coefficient percentage (HAWCP) and high-frequency energy percentage (HFEP), are established and used as criteria for real-time distress detection and distress image isolation. For compression of isolated distress images, a modified EZW (Embedded Zerotrees of Wavelet coding) is developed, which can simultaneously compress the images and reduce the noise. The compressed data are saved to the hard drive for further analysis and evaluation. The offline processing includes distress classification, distress quantification, and reconstruction of the original image for distress segmentation, distress mapping, and maintenance decision-making. The compressed data are first loaded and decoded to obtain wavelet coefficients. The Radon transform is then applied, and the parameters related to the peaks in the Radon domain are used for distress classification. For distress quantification, a norm is defined that can be used as an index for evaluating the severity and extent of the distress. Compared to visual or manual inspection, the proposed method has the advantages of being objective, high-speed, safe, automated, and applicable to different types of pavements and distresses.
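
    The two detection statistics can be sketched directly from the subband coefficients. This is an illustrative implementation; the amplitude threshold and the exact normalization used by the authors are assumptions here:

```python
def hawcp(detail_coeffs, amp_threshold):
    """High-amplitude wavelet coefficient percentage: the share (%) of
    detail coefficients whose magnitude exceeds a chosen threshold."""
    hits = sum(1 for c in detail_coeffs if abs(c) > amp_threshold)
    return 100.0 * hits / len(detail_coeffs)

def hfep(approx_coeffs, detail_coeffs):
    """High-frequency energy percentage: the share (%) of total energy
    carried by the high-frequency (detail) subbands."""
    e_low = sum(c * c for c in approx_coeffs)
    e_high = sum(c * c for c in detail_coeffs)
    return 100.0 * e_high / (e_low + e_high)
```

    A smooth pavement image concentrates energy in the approximation subband (low HAWCP and HFEP); a cracked one pushes both statistics up, triggering detection.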

  10. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is the display visual resolution in pixels/degree and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
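
    The level-to-frequency relation in the abstract is simply a halving of spatial frequency per decomposition level, which this one-liner makes concrete:

```python
def wavelet_spatial_frequency(r, level):
    """Spatial frequency (cycles/degree) of a DWT band at a given level:
    f = r * 2**(-level), with r the display visual resolution in
    pixels/degree.  Each decomposition level halves the frequency."""
    return r * 2.0 ** (-level)
```

    For example, at r = 32 pixels/degree, level-1 bands sit at 16 cycles/degree and level-5 bands at 1 cycle/degree, which is why the finest levels carry the thresholds that rise most steeply.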

  11. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
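
    Keeping a small percentage of wavelet coefficients, as in the 3% figure above, amounts to magnitude thresholding of the coefficient array. A minimal sketch (illustrative only; ties at the cutoff magnitude may keep slightly more than the requested fraction):

```python
def truncate_coeffs(coeffs, keep_fraction):
    """Zero all but the largest-magnitude `keep_fraction` of coefficients."""
    k = max(1, int(round(keep_fraction * len(coeffs))))
    # cutoff is the k-th largest magnitude
    cutoff = sorted((abs(c) for c in coeffs), reverse=True)[k - 1]
    return [c if abs(c) >= cutoff else 0.0 for c in coeffs]

kept = truncate_coeffs([5.0, -0.1, 0.2, 4.0], keep_fraction=0.5)
```

    Because wavelet coefficients of smooth fields with localized features decay quickly, a small retained fraction can capture most of the energy, which is what makes the covariance compression described above effective.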

  12. Construction of compactly supported biorthogonal wavelet based on Human Visual System

    NASA Astrophysics Data System (ADS)

    Hu, Haiping; Hou, Weidong; Liu, Hong; Mo, Yu L.

    2000-11-01

    As an important analysis tool, the wavelet transform has seen great development in image compression coding since Daubechies constructed compactly supported orthogonal wavelets and Mallat presented a fast pyramid algorithm for wavelet decomposition and reconstruction. To raise the compression ratio and improve the visual quality of the reconstruction, it is important to find a wavelet basis that fits the human visual system (HVS). The well-known Marr wavelet, however, is not compactly supported, so it is not suitable for implementing image compression coding. In this paper, a new method is provided to construct a compactly supported biorthogonal wavelet based on the human visual system: we employ a genetic algorithm to construct a compactly supported biorthogonal wavelet that approximates the modulation transfer function of the HVS. The newly constructed wavelet is applied to image compression coding in our experiments. The experimental results indicate that the visual quality of reconstruction with the new wavelet is comparable to that of other compactly supported biorthogonal wavelets at the same bit rate. It shows good reconstruction performance, especially in texture image compression coding.

  13. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) Wavelet based methodologies for the compression, transmission, decoding, and visualization of three dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) Methodologies for interactive algorithm steering for the acceleration of large scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet based Particle Image Velocity algorithms and reduced order input-output models for nonlinear systems by utilizing wavelet approximations.

  14. Implementing a global DEM database on the sphere based on spherical wavelets

    NASA Astrophysics Data System (ADS)

    Zhao, Di; Zhao, Xuesheng; Shan, Shigang; Yao, Liangjun

    2010-11-01

    Wavelets have proven to be an exceedingly powerful and highly efficient tool for fast computational algorithms in image data analysis and compression. Traditionally, classical wavelet constructions apply to infinite Euclidean domains (such as the real line R and the plane R2). In this paper, a spherical wavelet for discrete DEM data defined on the sphere is developed. First, a discrete biorthogonal spherical wavelet with custom properties is constructed with the lifting scheme, based on the wavelet toolbox in Matlab. Then, decomposition and reconstruction algorithms are proposed for efficient computation and the related wavelet coefficients are obtained. Finally, images of different precision are displayed and analyzed at different percentages of retained wavelet coefficients. The efficiency of this spherical wavelet algorithm is tested on the GTOPO30 DEM data, and the results show that, at the same precision, the spherical wavelet algorithm consumes less storage.

  16. Image encoding with triangulation wavelets

    NASA Astrophysics Data System (ADS)

    Hebert, D. J.; Kim, HyungJun

    1995-09-01

    We demonstrate some wavelet-based image processing applications of a class of simplicial grids arising in finite element computations and computer graphics. The cells of a triangular grid form the set of leaves of a binary tree and the nodes of a directed graph consisting of a single cycle. The leaf cycle of a uniform grid forms a pattern for pixel image scanning and for coherent computation of coefficients of splines and wavelets. A simple form of image encoding is accomplished with a 1D quadrature mirror filter whose coefficients represent an expansion of the image in terms of 2D Haar wavelets with triangular support. A combination of the leaf cycle and an inherent quadtree structure allows efficient neighbor finding, grid refinement, tree pruning and storage. Pruning of the simplex tree yields a partially compressed image which requires no decoding, but rather may be rendered as a shaded triangulation. This structure and its generalization to n dimensions form a convenient setting for wavelet analysis and computations based on simplicial grids.

  17. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies that minimize the impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into coefficients that maintain spatial as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited to our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques at higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression is data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory, where conventional lossless techniques achieved levels of less than 3.
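
    The zeroing strategy described above can be sketched with a toy multi-level orthonormal Haar transform (a generic Python illustration, not the report's actual pipeline; the signal and threshold are made-up example values):

```python
import numpy as np

def haar_fwd(x, levels):
    """Multi-level orthonormal Haar transform: returns [approx, d_coarse, ..., d_fine]."""
    x = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
        details.append(d)
        x = a
    return [x] + details[::-1]

def haar_inv(coeffs):
    """Invert haar_fwd exactly."""
    a = coeffs[0]
    for d in coeffs[1:]:
        x = np.empty(2 * a.size)
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        a = x
    return a

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 5 * t) + 0.02 * rng.standard_normal(256)

coeffs = haar_fwd(signal, levels=4)
# "Compress": zero every detail coefficient below a small threshold,
# i.e. the high-frequency, low-amplitude (noise-dominated) part.
thresh = 0.05
kept = [coeffs[0]] + [np.where(np.abs(d) >= thresh, d, 0.0) for d in coeffs[1:]]
recon = haar_inv(kept)

zeroed = sum(int(np.sum(c == 0)) for c in kept[1:])
print(f"zeroed {zeroed} of {signal.size - coeffs[0].size} detail coefficients")
print(f"max reconstruction error: {np.max(np.abs(recon - signal)):.4f}")
```

    Because the transform is orthonormal, the reconstruction error is bounded by the energy of the zeroed coefficients, which is exactly why small-coefficient zeroing degrades the larger, lower-frequency signatures so little.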

  18. Visibility of Wavelet Quantization Noise

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
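
    The level-to-frequency relation quoted above is easy to tabulate; a quick check in Python (r = 32 pixels/degree is an assumed example value, not from the paper):

```python
# Spatial frequency of a DWT level: f = r * 2**(-L),
# where r is display visual resolution (pixels/degree) and L the wavelet level.
r = 32  # pixels/degree (assumed example value)
for L in range(1, 6):
    f = r * 2 ** (-L)
    print(f"level {L}: {f:g} cycles/degree")
```

    Each additional decomposition level halves the band's spatial frequency, which is why the coarsest levels tolerate the least quantization error under the threshold model.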

  19. Optimization technology of 9/7 wavelet lifting scheme on DSP*

    NASA Astrophysics Data System (ADS)

    Chen, Zhengzhang; Yang, Xiaoyuan; Yang, Rui

    2007-12-01

    Nowadays the wavelet transform has become one of the most effective transforms in image processing, especially the biorthogonal 9/7 wavelet filters proposed by Daubechies, which perform well in image compression. This paper studies in depth the implementation and optimization of the 9/7 wavelet lifting scheme on a DSP platform, including carrying out the lifting steps in fixed-point arithmetic instead of time-consuming floating-point operations, adopting pipelining to improve the iteration procedure, reducing the number of multiplications by simplifying the normalization step of the two-dimensional wavelet transform, and improving the storage format and ordering of wavelet coefficients to reduce memory consumption. Experimental results show that these implementation and optimization techniques improve the efficiency of the wavelet lifting algorithm by more than a factor of 30, establishing a technical foundation for developing a real-time remote sensing image compression system in the future.
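
    The lifting steps in question follow the standard Daubechies-Sweldens factorization of the CDF 9/7 filters. A floating-point reference sketch in Python is given below (boundary handling is simplified to edge clamping, so this is illustrative rather than JPEG2000-exact; a fixed-point DSP version would scale these constants by a power of two and use integer arithmetic):

```python
import numpy as np

# Standard CDF 9/7 lifting coefficients (Daubechies-Sweldens factorization).
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
ZETA = 1.149604398

def fwd97(x):
    """One level of the 9/7 lifting scheme on an even-length signal."""
    s = x[0::2].astype(float)               # even samples
    d = x[1::2].astype(float)               # odd samples
    d += ALPHA * (s + np.append(s[1:], s[-1]))   # predict 1
    s += BETA  * (d + np.append(d[0], d[:-1]))   # update 1
    d += GAMMA * (s + np.append(s[1:], s[-1]))   # predict 2
    s += DELTA * (d + np.append(d[0], d[:-1]))   # update 2
    return s * ZETA, d / ZETA                    # normalization

def inv97(s, d):
    """Exactly invert fwd97 by undoing each lifting step in reverse."""
    s, d = s / ZETA, d * ZETA
    s -= DELTA * (d + np.append(d[0], d[:-1]))
    d -= GAMMA * (s + np.append(s[1:], s[-1]))
    s -= BETA  * (d + np.append(d[0], d[:-1]))
    d -= ALPHA * (s + np.append(s[1:], s[-1]))
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
s, d = fwd97(x)
print("max round-trip error:", np.max(np.abs(inv97(s, d) - x)))
```

    The inverse simply replays the lifting steps with flipped signs, which is what makes the scheme attractive for in-place, low-memory DSP implementations.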

  20. The Sea of Wavelets

    NASA Astrophysics Data System (ADS)

    Jones, B. J. T.

    Wavelet analysis has become a major tool in many aspects of data handling, whether it be statistical analysis, noise removal or image reconstruction. Wavelet analysis has worked its way into fields as diverse as economics, medicine, geophysics, music and cosmology.

  1. Transient Detection Using Wavelets.

    DTIC Science & Technology

    1995-03-01

    signal and transients are nonstationary. A new technique for the analysis of this type of signal, called the Wavelet Transform, was applied to artificial... and real signals. A brief theoretical comparison between the Short Time Fourier Transform and the Wavelet Transform is introduced. A multiresolution... analysis approach for implementing the transform was used. Computer code for the Discrete Wavelet Transform was implemented. Different types of wavelets to use as basis functions were evaluated.

  2. Subband image encoder using discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Seong, Hae Kyung; Rhee, Kang Hyeon

    2004-03-01

    Digital communication networks such as the Integrated Services Digital Network (ISDN) and digital storage media have developed rapidly. Due to the large amount of image data, compression using digital signal processing is a key technique for transmitting and storing still images and video. Digital image compression provides solutions for various image applications that must represent digital images requiring large amounts of data. In this paper, the proposed DWT (Discrete Wavelet Transform) filter bank has a simple architecture, yet it is designed so that a user obtains a desired compression rate as the only input parameter. When implemented on an FPGA chip, the designed encoder operates at 12 MHz.

  3. Wavelet encoding and variable resolution progressive transmission

    NASA Technical Reports Server (NTRS)

    Blanford, Ronald P.

    1993-01-01

    Progressive transmission is a method of transmitting and displaying imagery in stages of successively improving quality. The subsampled lowpass image representations generated by a wavelet transformation suit this purpose well, but for best results the order of presentation is critical. Candidate data for transmission are best selected using dynamic prioritization criteria generated from image contents and viewer guidance. We show that wavelets are not only suitable but superior when used to encode data for progressive transmission at non-uniform resolutions. This application does not preclude additional compression using quantization of highpass coefficients, which to the contrary results in superior image approximations at low data rates.

  4. Adaptive Wavelet Transforms

    SciTech Connect

    Szu, H.; Hsu, C.

    1996-12-31

    Human sensor systems (HSS) may be approximately described as an adaptive or self-learning version of the Wavelet Transform (WT) that is capable of learning suitable mother wavelets from several input-output associative pairs. Such an Adaptive WT (AWT) is a redundant combination of mother wavelets used to either represent or classify inputs.

  5. Wavelets in Physics

    NASA Astrophysics Data System (ADS)

    van den Berg, J. C.

    1999-08-01

    A guided tour J. C. van den Berg; 1. Wavelet analysis, a new tool in physics J.-P. Antoine; 2. The 2-D wavelet transform, physical applications J.-P. Antoine; 3. Wavelets and astrophysical applications A. Bijaoui; 4. Turbulence analysis, modelling and computing using wavelets M. Farge, N. K.-R. Kevlahan, V. Perrier and K. Schneider; 5. Wavelets and detection of coherent structures in fluid turbulence L. Hudgins and J. H. Kaspersen; 6. Wavelets, non-linearity and turbulence in fusion plasmas B. Ph. van Milligen; 7. Transfers and fluxes of wind kinetic energy between orthogonal wavelet components during atmospheric blocking A. Fournier; 8. Wavelets in atomic physics and in solid state physics J.-P. Antoine, Ph. Antoine and B. Piraux; 9. The thermodynamics of fractals revisited with wavelets A. Arneodo, E. Bacry and J. F. Muzy; 10. Wavelets in medicine and physiology P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, M. G. Rosenblum and H. E. Stanley; 11. Wavelet dimension and time evolution Ch.-A. Guérin and M. Holschneider.

  7. Image coding based on energy-sorted wavelet packets

    NASA Astrophysics Data System (ADS)

    Kong, Lin-Wen; Lay, Kuen-Tsair

    1995-04-01

    The discrete wavelet transform performs multiresolution analysis, which effectively decomposes a digital image into components with different degrees of detail. In practice, it is usually implemented in the form of filter banks. If the filter banks are cascaded and both the low-pass and the high-pass components are further decomposed, a wavelet packet is obtained. The coefficients of the wavelet packet effectively represent subimages at different resolution levels. In the energy-sorted wavelet-packet decomposition, all subimages in the packet are sorted according to their energies. The most important subimages, as measured by energy, are preserved and coded. By investigating the histogram of each subimage, it is found that the pixel values are well modelled by the Laplacian distribution. Therefore, Laplacian quantization is applied to quantize the subimages. Experimental results show that the image coding scheme based on wavelet packets achieves a high compression ratio while preserving satisfactory image quality.

  8. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  9. Filtering, Coding, and Compression with Malvar Wavelets

    DTIC Science & Technology

    1993-12-01

    The vocal tract is made up of the lips, mouth, and tongue. These cannot change nearly as quickly as the vocal cords can; therefore the vocal tract... fluctuates slowly in the frequency domain and has a spike in the low-quefrency region. These spikes are called formant peaks and have a number of uses in... the formant corresponding to the pitch (2). The cepstrum is used to find the formants of the pitch so that this information can be removed from the

  10. Wavelet transforms and filter banks in digital communications

    NASA Astrophysics Data System (ADS)

    Lindsey, Alan R.; Medley, Michael J.

    1996-03-01

    Within the past few years, wavelet transforms and filter banks have received considerable attention in the technical literature, prompting applications in a variety of disciplines including applied mathematics, speech and image processing and compression, medical imaging, geophysics, signal processing, and information theory. More recently, several researchers in the field of communications have developed theoretical foundations for applications of wavelets as well. The objective of this paper is to survey the connections of wavelets and filter banks to communication theory and summarize current research efforts.

  11. Applications of continuous and orthogonal wavelet transforms to MHD and plasma turbulence

    NASA Astrophysics Data System (ADS)

    Farge, Marie; Schneider, Kai

    2016-10-01

    Wavelet analysis and compression tools are presented and different applications to the study of MHD and plasma turbulence are illustrated. We use the continuous and the orthogonal wavelet transform to develop several statistical diagnostics based on the wavelet coefficients. We show how to extract coherent structures out of fully developed turbulent flows using wavelet-based denoising, and describe multiscale numerical simulation schemes using wavelets. Several examples of analyzing, compressing and computing one-, two- and three-dimensional turbulent MHD or plasma flows are presented. Details can be found in M. Farge and K. Schneider, Wavelet transforms and their applications to MHD and plasma turbulence: A review. Support by the French Research Federation for Fusion Studies within the framework of the European Fusion Development Agreement (EFDA) is gratefully acknowledged.

  12. Embedded wavelet video coding with error concealment

    NASA Astrophysics Data System (ADS)

    Chang, Pao-Chi; Chen, Hsiao-Ching; Lu, Ta-Te

    2000-04-01

    We present an error-concealed embedded wavelet (ECEW) video coding system for transmission over the Internet or wireless networks. This system consists of two types of frames: intra (I) frames and inter, or predicted (P), frames. Inter frames are constructed from the residual frames formed by variable block-size multiresolution motion estimation (MRME). Motion vectors are compressed by arithmetic coding. The image data of intra frames and residual frames are coded by error-resilient embedded zerotree wavelet (ER-EZW) coding. ER-EZW coding partitions the wavelet coefficients into several groups, and each group is coded independently. Therefore, the error propagation resulting from an error is confined to a single group, whereas in EZW coding any single error may render the bitstream totally undecodable. To further reduce the error damage, we use error concealment at the decoding end. In intra frames, erroneous wavelet coefficients are replaced by neighbors. In inter frames, erroneous blocks of wavelet coefficients are replaced by data from the previous frame. Simulations show that the performance of ECEW is superior to ECEW without error concealment by approximately 7 to 8 dB at an error rate of 10^-3 in intra frames. The improvement is still approximately 2 to 3 dB at a higher error rate of 10^-2 in inter frames.

  13. Compressed convolution

    NASA Astrophysics Data System (ADS)

    Elsner, Franz; Wandelt, Benjamin D.

    2014-01-01

    We introduce the concept of compressed convolution, a technique to convolve a given data set with a large number of non-orthogonal kernels. In typical applications our technique drastically reduces the effective number of computations. The new method is applicable to convolutions with symmetric and asymmetric kernels and can be easily controlled for an optimal trade-off between speed and accuracy. It is based on linear compression of the collection of kernels into a small number of coefficients in an optimal eigenbasis. The final result can then be decompressed in constant time for each desired convolved output. The method is fully general and suitable for a wide variety of problems. We give explicit examples in the context of simulation challenges for upcoming multi-kilo-detector cosmic microwave background (CMB) missions. For a CMB experiment with detectors with similar beam properties, we demonstrate that the algorithm can decrease the costs of beam convolution by two to three orders of magnitude with negligible loss of accuracy. Likewise, it has the potential to allow the reduction of disk space required to store signal simulations by a similar amount. Applications in other areas of astrophysics and beyond are optimal searches for a large number of templates in noisy data, e.g. from a parametrized family of gravitational wave templates; or calculating convolutions with highly overcomplete wavelet dictionaries, e.g. in methods designed to uncover sparse signal representations.
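
    The kernel-compression idea above can be illustrated with a plain SVD standing in for the optimal eigenbasis (a generic numpy sketch on a made-up smooth kernel family, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# A family of 200 smoothly varying (hence highly correlated) kernels, length 33.
t = np.linspace(-3, 3, 33)
widths = np.linspace(0.8, 1.2, 200)
kernels = np.exp(-t**2 / (2 * widths[:, None]**2))   # shape (200, 33)

# Compress the kernel stack: keep only the top-p singular components.
U, sing, Vt = np.linalg.svd(kernels, full_matrices=False)
p = 4
coeffs = U[:, :p] * sing[:p]        # (200, p): each kernel in the eigenbasis
basis = Vt[:p]                      # (p, 33): p basis kernels

data = rng.standard_normal(1024)

# Convolve the data with the p basis kernels only ...
basis_conv = np.array([np.convolve(data, b, mode="same") for b in basis])

# ... then each of the 200 desired outputs is a cheap linear combination,
# because convolution is linear in the kernel.
approx = coeffs @ basis_conv                          # (200, 1024)
exact = np.array([np.convolve(data, k, mode="same") for k in kernels])

rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"p={p} basis kernels instead of 200 convolutions, relative error {rel_err:.2e}")
```

    For strongly correlated kernel families a handful of basis convolutions suffices, which is the source of the orders-of-magnitude savings the paper reports for beam convolution.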

  14. A Low Power Application-Specific Integrated Circuit (ASIC) Implementation of Wavelet Transform/Inverse Transform

    DTIC Science & Technology

    2001-03-01

    A unique ASIC was designed implementing the Haar wavelet transform for image compression/decompression. ASIC operations include performing the Haar... wavelet transform on a 512 by 512 square pixel image, preparing the image for transmission by quantizing and thresholding the transformed data, and... performing the inverse Haar wavelet transform, returning the original image with only minor degradation. The ASIC is based on an existing four-chip FPGA

  15. Image coding by way of wavelets

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1993-01-01

    The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.
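
    The integer-arithmetic property mentioned for the Haar transform is commonly realized with the S-transform (Haar lifting with floor division), which round-trips exactly; a minimal generic sketch, not the paper's implementation:

```python
def s_transform(x):
    """Integer Haar (S-transform) on a list of even length: exactly invertible."""
    a = [(x[2*i] + x[2*i + 1]) >> 1 for i in range(len(x) // 2)]  # floor average
    d = [x[2*i] - x[2*i + 1] for i in range(len(x) // 2)]         # difference
    return a, d

def inv_s_transform(a, d):
    """Recover each original pair exactly from (floor-average, difference)."""
    x = []
    for ai, di in zip(a, d):
        lo = ai - (di >> 1)     # x1 = a - floor(d/2)
        x += [lo + di, lo]      # x0 = x1 + d
    return x

x = [12, 7, -3, 8, 5, 5, 0, -9]
a, d = s_transform(x)
print(a, d, inv_s_transform(a, d) == x)
```

    Only additions, subtractions, and arithmetic shifts are needed, which is what makes this form attractive for real-time integer-only hardware.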

  16. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  17. Splitting algorithms for the wavelet transform of first-degree splines on nonuniform grids

    NASA Astrophysics Data System (ADS)

    Shumilov, B. M.

    2016-07-01

    For splines of first degree with nonuniform knots, a new type of wavelet with biased support is proposed. Using splitting with respect to the even and odd knots, a new wavelet decomposition algorithm is proposed in the form of the solution of a tridiagonal system of linear algebraic equations for the wavelet coefficients. The application of the proposed implicit scheme to the point prediction of time series is investigated for the first time. Results of numerical experiments on prediction accuracy and the compression of spline wavelet decompositions are presented.
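
    A tridiagonal system like the one arising in this decomposition step can be solved in O(n) with the Thomas algorithm; a generic sketch (the coefficient arrays below are illustrative values, not the paper's):

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(n).

    lower[i] multiplies x[i-1] in row i (lower[0] unused);
    upper[i] multiplies x[i+1] in row i (upper[-1] unused).
    """
    n = len(diag)
    c = np.array(upper, dtype=float)
    d = np.array(diag, dtype=float)
    b = np.array(rhs, dtype=float)
    for i in range(1, n):                 # forward elimination
        w = lower[i] / d[i - 1]
        d[i] -= w * c[i - 1]
        b[i] -= w * b[i - 1]
    x = np.empty(n)
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (b[i] - c[i] * x[i + 1]) / d[i]
    return x

# Example: a diagonally dominant tridiagonal system (illustrative values).
lower = [0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
diag = [4.0] * 6
upper = [1.0, 1.0, 1.0, 1.0, 1.0, 0.0]
rhs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x = thomas(lower, diag, upper, rhs)
print("solution:", x)
```

    The linear cost of this solve is what keeps an implicit (system-based) wavelet decomposition competitive with explicit filter-bank schemes.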

  18. Multiwavelet-transform-based image compression techniques

    NASA Astrophysics Data System (ADS)

    Rao, Sathyanarayana S.; Yoon, Sung H.; Shenoy, Deepak

    1996-10-01

    Multiwavelet transforms are a new class of wavelet transforms that use more than one prototype scaling function and wavelet in the multiresolution analysis/synthesis. The popular Geronimo-Hardin-Massopust multiwavelet basis functions have properties of compact support, orthogonality, and symmetry which cannot be obtained simultaneously in scalar wavelets. The performance of multiwavelets in still image compression is studied using vector quantization of multiwavelet subbands with a multiresolution codebook. The coding gain of multiwavelets is compared with that of other well-known wavelet families using performance measures such as unified coding gain. Implementation aspects of multiwavelet transforms such as pre-filtering/post-filtering and symmetric extension are also considered in the context of image compression.

  19. SFCVQ and EZW coding method based on Karhunen-Loeve transformation and integer wavelet transformation

    NASA Astrophysics Data System (ADS)

    Yan, Jingwen; Chen, Jiazhen

    2007-03-01

    A new hyperspectral image compression method combining spectral feature classification vector quantization (SFCVQ) and the embedded zerotree wavelet (EZW), based on the Karhunen-Loeve transformation (KLT) and an integer wavelet transformation, is presented. In comparison with other methods, this method not only retains a high compression ratio and easy real-time transmission, but also has the advantage of high computation speed. After lifting-based integer wavelet and SFCVQ coding are introduced, a system for nearly lossless compression of hyperspectral images is designed. KLT is used as a one-dimensional (1D) linear transform to remove the correlation of spectral redundancy, and SFCVQ coding is applied to enhance the compression ratio. The two-dimensional (2D) integer wavelet transformation is adopted for the decorrelation of 2D spatial redundancy. The EZW coding method is applied to compress the data in the wavelet domain. Experimental results show that, in comparison with the methods of wavelet SFCVQ (WSFCVQ), improved BiBlock zerotree coding (IBBZTC) and feature spectral vector quantization (FSVQ), the peak signal-to-noise ratio (PSNR) of this method is enhanced by over 9 dB, and the total compression performance is improved greatly.
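
    The KLT used above for spectral decorrelation is a projection onto the eigenvectors of the band covariance matrix; a generic numpy sketch on synthetic correlated "bands" (illustrative only, not the paper's pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hyperspectral-like data: 8 bands, strongly correlated across bands.
latent = rng.standard_normal((3, 10000))
mixing = rng.standard_normal((8, 3))
bands = mixing @ latent + 0.01 * rng.standard_normal((8, 10000))

# KLT: eigen-decompose the band covariance, project onto the eigenvectors.
mean = bands.mean(axis=1, keepdims=True)
cov = np.cov(bands - mean)
eigvals, eigvecs = np.linalg.eigh(cov)
klt = eigvecs.T @ (bands - mean)

# After the KLT the band covariance is (numerically) diagonal, and most
# energy is packed into the few components with the largest eigenvalues.
cov_after = np.cov(klt)
off_diag = cov_after - np.diag(np.diag(cov_after))
print("max off-diagonal covariance after KLT:", np.abs(off_diag).max())
```

    Energy compaction into a few decorrelated components is what lets the subsequent quantization and zerotree stages spend their bit budget efficiently.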

  20. Wavelet filtering for data recovery

    NASA Astrophysics Data System (ADS)

    Schmidt, W.

    2013-09-01

    For electrical wave measurements in space instruments, digital filtering and data compression on board can significantly enhance the signal and reduce the amount of data to be transferred to Earth. While the instrument's transfer function is often well known, making the application of an optimized wavelet algorithm feasible, the computational power requirements may be prohibitive, as complex floating-point operations are normally needed. This article presents a simplified approach implemented on low-power 16-bit integer processors, used for plasma wave measurements in the SPEDE instrument on SMART-1 and for the Permittivity Probe measurements of the SESAME/PP instrument in Rosetta's Philae Lander on its way to comet 67P/Churyumov-Gerasimenko.

  1. Sparse imaging of cortical electrical current densities via wavelet transforms

    NASA Astrophysics Data System (ADS)

    Liao, Ke; Zhu, Min; Ding, Lei; Valette, Sébastien; Zhang, Wenbo; Dickens, Deanna

    2012-11-01

    While the cerebral cortex in the human brain is of functional importance, functions defined on this structure are difficult to analyze spatially due to its highly convoluted irregular geometry. This study developed a novel L1-norm regularization method using a newly proposed multi-resolution face-based wavelet method to estimate cortical electrical activities in electroencephalography (EEG) and magnetoencephalography (MEG) inverse problems. The proposed wavelets were developed from multi-resolution models built on irregular cortical surface meshes, which were also realized in this study. The multi-resolution wavelet analysis was used to seek sparse representations of cortical current densities in the transformed domain, expected due to the compressibility of wavelets, and evaluated using Monte Carlo simulations. The EEG/MEG inverse problems were solved with the novel L1-norm regularization method, exploiting the sparseness in the wavelet domain. The inverse solutions obtained from the new method using MEG data were also evaluated by Monte Carlo simulations. The present results indicated that cortical current densities could be efficiently compressed using the proposed face-based wavelet method, which exhibited better performance than the vertex-based wavelet method. In both simulations and auditory experimental data analysis, the proposed L1-norm regularization method showed better source detection accuracy and smaller estimation errors than two other classic methods, i.e. weighted minimum norm (wMNE) and cortical low-resolution electromagnetic tomography (cLORETA). This study suggests that the L1-norm regularization method with face-based wavelets is a promising tool for studying functional activations of the human brain.

  2. Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data

    SciTech Connect

    Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin; Clyne, John; Childs, Hank

    2015-10-25

    I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.

  3. Global and Local Distortion Inference During Embedded Zerotree Wavelet Decompression

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.

    1996-01-01

    This paper presents algorithms for inferring global and spatially local estimates of the squared-error distortion measures for the Embedded Zerotree Wavelet (EZW) image compression algorithm. All distortion estimates are obtained at the decoder without significantly compromising EZW's rate-distortion performance. Two methods are given for propagating distortion estimates from the wavelet domain to the spatial domain, thus giving individual estimates of distortion for each pixel of the decompressed image. These local distortion estimates seem to provide only slight improvement in the statistical characterization of EZW compression error relative to the global measure, unless actual squared errors are propagated. However, they provide qualitative information about the asymptotic nature of the error that may be helpful in wavelet filter selection for low bit rate applications.

  4. Wavelet analysis in neurodynamics

    NASA Astrophysics Data System (ADS)

    Pavlov, Aleksei N.; Hramov, Aleksandr E.; Koronovskii, Aleksei A.; Sitnikova, Evgenija Yu; Makarov, Valeri A.; Ovchinnikov, Alexey A.

    2012-09-01

    Results obtained using continuous and discrete wavelet transforms as applied to problems in neurodynamics are reviewed, with the emphasis on the potential of wavelet analysis for decoding signal information from neural systems and networks. The following areas of application are considered: (1) the microscopic dynamics of single cells and intracellular processes, (2) sensory data processing, (3) the group dynamics of neuronal ensembles, and (4) the macrodynamics of rhythmical brain activity (using multichannel EEG recordings). The detection and classification of various oscillatory patterns of brain electrical activity and the development of continuous wavelet-based brain activity monitoring systems are also discussed as possibilities.

  5. Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram

    SciTech Connect

    Anant, K.S.

    1997-06-01

    In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P as well as the S phase using only information from single station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram, which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. The methods developed in this thesis; the

  6. Wavelet transforms with discrete-time continuous-dilation wavelets

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Rao, Raghuveer M.

    1999-03-01

    Wavelet constructions and transforms have been confined principally to the continuous-time domain. Even the discrete wavelet transform implemented through multirate filter banks is based on continuous-time wavelet functions that provide orthogonal or biorthogonal decompositions. This paper provides a novel wavelet transform construction based on the definition of discrete-time wavelets that can undergo continuous parameter dilations. The result is a transformation that has the advantage of discrete-time or digital implementation while circumventing the problem of inadequate scaling resolution seen with conventional dyadic or M-channel constructions. Examples of constructing such wavelets are presented.

  7. The Discrete Wavelet Transform

    DTIC Science & Technology

    1991-06-01

    ...both special cases of a single filter bank structure, the discrete wavelet transform, the behavior of which is governed by one's choice of filters. [The rest of this record is extraction residue from the report's reference list and figure captions, including "Split-Band Coding," Proc. ICASSP, May 1977, pp 191-195; M. Vetterli, "A Theory of Multirate Filter Banks," IEEE Trans. ASSP, 35, March 1987, pp 356-; and Fig. 1.1, "A wavelet filter bank structure."]

  8. Wavelet despiking of fractographs

    NASA Astrophysics Data System (ADS)

    Aubry, Jean-Marie; Saito, Naoki

    2000-12-01

    Fractographs are elevation maps of the fracture zone of some broken material. The technique employed to create these maps often introduces noise composed of positive or negative 'spikes' that must be removed before further analysis. Since the roughness of these maps contains useful information, it must be preserved. Consequently, conventional denoising techniques cannot be employed. We use continuous and discrete wavelet transforms of these images, and the properties of wavelet coefficients related to pointwise Hölder regularity, to detect and remove the spikes.

  9. Wavelets and Multifractal Analysis

    DTIC Science & Technology

    2004-07-01

    See also ADM001750, Wavelets and Multifractal Analysis (WAMA) Workshop, held 19-31 July 2004. [The rest of this record is extraction residue from the report's table of contents, covering Detrended Fluctuation Analysis (DFA), scale-independent measures, the detrended-fluctuation-analysis power-law exponent, and the wavelet-transform power-law exponent.]

  10. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.
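
    Progressive bit-plane coding, the second stage named in the recommendation, transmits coefficient bits from the most significant plane downward, so truncating the stream still yields a coarser but valid reconstruction. A toy sketch of the idea (magnitudes only, with no sign bits or entropy coding, and not the actual CCSDS bit-stream format):

```python
def encode_bitplanes(coeffs, num_planes):
    """Emit bits from most- to least-significant plane (signs omitted for brevity)."""
    planes = []
    for p in reversed(range(num_planes)):
        planes.append([(abs(c) >> p) & 1 for c in coeffs])
    return planes

def decode_bitplanes(planes, num_planes, planes_received):
    """Reconstruct magnitudes from however many planes arrived (progressive)."""
    n = len(planes[0])
    mags = [0] * n
    for i, plane in enumerate(planes[:planes_received]):
        p = num_planes - 1 - i
        for j, bit in enumerate(plane):
            mags[j] |= bit << p
    return mags

coeffs = [13, -7, 2, 0, 25, -1]
planes = encode_bitplanes(coeffs, num_planes=5)
# All 5 planes -> exact magnitudes; fewer planes -> coarser approximation.
assert decode_bitplanes(planes, 5, 5) == [abs(c) for c in coeffs]
coarse = decode_bitplanes(planes, 5, 2)   # only the two most-significant planes
```

    With only two planes received, each decoded magnitude is within one quantization step (here 8) of the true value, which is exactly the rate-or-fidelity control the recommendation exposes.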

  11. A new wavelet transform to sparsely represent cortical current densities for EEG/MEG inverse problems.

    PubMed

    Liao, Ke; Zhu, Min; Ding, Lei

    2013-08-01

    The present study investigated the use of transform sparseness of cortical current density on the human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on the generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG software packages, and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to two other available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low-coherence measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies.

  12. Integrated system for image storage, retrieval, and transmission using wavelet transform

    NASA Astrophysics Data System (ADS)

    Yu, Dan; Liu, Yawen; Mu, Ray Y.; Yang, Shi-Qiang

    1998-12-01

    Currently, much work has been done in the area of image storage and retrieval. However, the overall performance has been far from practical. A highly integrated wavelet-based image management system is proposed in this paper. By integrating wavelet-based solutions for image compression and decompression, content-based retrieval, and progressive transmission, much higher performance can be achieved. The multiresolution nature of the wavelet transform has been proven to be a powerful tool to represent images. The wavelet transform decomposes the image into a set of subimages with different resolutions. From this decomposition, solutions for three key aspects of image management are derived. The content-based image retrieval (CBIR) features of our system include the color, contour, texture, sample, keyword, and topic information of images. The first four features can be naturally extracted from the wavelet transform coefficients. Images in the database are scored for similarity to the user's request, and those with the highest scores are returned as feedback. For image compression and decompression, assuming that details at high resolution and in diagonal directions are less visible to the human eye, a good compression ratio can be achieved. In each subimage, the wavelet coefficients are vector quantized (VQ) using the LBG algorithm, which is improved in our approach to accelerate the process. A higher compression ratio can be achieved with DPCM and an entropy coding method applied together. With a YIQ representation, color images can also be effectively compressed. Transmitting compressed image data across the network places a very low load on network bandwidth. Progressive transmission is made possible by the multiresolution nature of the wavelet transform, which makes the system respond faster and the user interface more friendly. The system shows high overall performance by exploiting the excellent features of wavelets and integrating key aspects of image management.
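
    The vector-quantization step described above can be sketched with a plain LBG (generalized Lloyd) iteration; the two-cluster toy data, the endpoint initialization, and the fixed iteration count are illustrative assumptions, not the paper's accelerated variant:

```python
import numpy as np

def lbg(vectors, codebook_size, iters=20):
    """Train a VQ codebook with the LBG (generalized Lloyd) iteration."""
    # Deterministic initialization from evenly spaced data points.
    idx = np.linspace(0, len(vectors) - 1, codebook_size).astype(int)
    codebook = vectors[idx].astype(float)
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # Nearest-codeword assignment.
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Centroid update (keep the old codeword if its cell is empty).
        for k in range(codebook_size):
            if np.any(labels == k):
                codebook[k] = vectors[labels == k].mean(axis=0)
    return codebook, labels

# Two well-separated clusters of 2-D "coefficient vectors".
rng = np.random.default_rng(1)
vectors = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=0.1, size=(50, 2)),
    rng.normal(loc=(5.0, 5.0), scale=0.1, size=(50, 2)),
])
codebook, labels = lbg(vectors, codebook_size=2)
```

    Each subimage's coefficient blocks would then be stored as codebook indices rather than raw values, which is where the compression comes from.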

  13. Riesz wavelets and multiresolution structures

    NASA Astrophysics Data System (ADS)

    Larson, David R.; Tang, Wai-Shing; Weber, Eric

    2001-12-01

    Multiresolution structures are important in applications, but they are also useful for analyzing properties of associated wavelets. Given a nonorthogonal (multi-) wavelet in a Hilbert space, we construct a core subspace. Subsequently, the dilates of the core subspace define a ladder of nested subspaces. Of fundamental importance are two questions: 1) when is the core subspace shift invariant; and if yes, then 2) when is the core subspace generated by shifts of a single vector, i.e. there exists a scaling vector. If the wavelet generates a Riesz basis then the answer to question 1) is yes if and only if the wavelet is a biorthogonal wavelet. Additionally, if the wavelet generates a tight frame of arbitrary frame constant, then the core subspace is shift invariant. Question 1) is still open in case the wavelet generates a non-tight frame. We also present some known results to question 2) and provide some preliminary improvements. Our analysis here arises from investigating the dimension function and the multiplicity function of a wavelet. These two functions agree if the wavelet is orthogonal. Finally, we discuss how these questions are important for considering linear perturbation of wavelets. Utilizing the idea of the local commutant of a unitary system developed by Dai and Larson, we show that nearly all linear perturbations of two orthonormal wavelets form a Riesz wavelet. If in fact these wavelets correspond to a von Neumann algebra in the local commutant of a base wavelet, then the interpolated wavelet is biorthogonal. Moreover, we demonstrate that in this case the interpolated wavelets have a scaling vector if the base wavelet has a scaling vector.

  14. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  15. Significance-linked connected component analysis for wavelet image coding.

    PubMed

    Chai, B B; Vass, J; Zhuang, X

    1999-01-01

    Recent success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 dB-0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.
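
    The PSNR figures quoted above follow the standard definition for 8-bit images; a minimal helper (the peak value of 255 is an assumption appropriate to 8-bit grayscale test images such as Barbara):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak = 255 for 8-bit images)."""
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Uniform error of 10 gray levels gives MSE = 100.
img = np.full((8, 8), 100.0)
noisy = img + 10.0
assert np.isclose(psnr(img, noisy), 10 * np.log10(255 ** 2 / 100))
```

    A 1 dB PSNR gap, as in the Barbara comparison, corresponds to roughly a 21% reduction in mean squared error.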

  16. Wavelet-based coding of ultraspectral sounder data

    NASA Astrophysics Data System (ADS)

    Garcia-Vilchez, Fernando; Serra-Sagrista, Joan; Auli-Llinas, Francesc

    2005-08-01

    In this paper we provide a study concerning the suitability of well-known image coding techniques originally devised for lossy compression of still natural images when applied to lossless compression of ultraspectral sounder data. We present here the experimental results of six wavelet-based widespread coding techniques, namely EZW, IC, SPIHT, JPEG2000, SPECK and CCSDS-IDC. Since the considered techniques are 2-dimensional (2D) in nature but the ultraspectral data are 3D, a pre-processing stage is applied to convert the two spatial dimensions into a single spatial dimension. All the wavelet-based techniques are competitive when compared either to the benchmark prediction-based methods for lossless compression, CALIC and JPEG-LS, or to two common compression utilities, GZIP and BZIP2. EZW, SPIHT, SPECK and CCSDS-IDC provide a very similar performance, while IC and JPEG2000 improve the compression factor when compared to the other wavelet-based methods. Nevertheless, they are not competitive when compared to a fast precomputed vector quantizer. The benefits of applying a pre-processing stage, the Bias Adjusted Reordering, prior to the coding process in order to further exploit the spectral and/or spatial correlation when 2D techniques are employed, are also presented.

  17. Wavelet-based zerotree coding of aerospace images

    NASA Astrophysics Data System (ADS)

    Franques, Victoria T.; Jain, Vijay K.

    1996-06-01

    This paper presents a wavelet-based image coding method achieving high levels of compression. A multi-resolution subband decomposition system is constructed using Quadrature Mirror Filters. Symmetric extension and windowing of the multi-scaled subbands are incorporated to minimize the boundary effects. Next, the Embedded Zerotree Wavelet (EZW) coding algorithm is used for data compression. Elimination of the isolated zero symbol, for certain subbands, leads to an improved EZW algorithm. Further compression is obtained with an adaptive arithmetic coder. We achieve a PSNR of 26.91 dB at 0.018 bits/pixel, 35.59 dB at 0.149 bits/pixel, and 43.05 dB at 0.892 bits/pixel for the aerospace image Refuel.
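
    Zerotree coding rests on one test: a coefficient is a zerotree root when it and all of its descendants in the subband hierarchy are insignificant with respect to the current threshold. A tiny sketch, using a nested (value, children) tuple as an illustrative stand-in for the real coefficient quadtree:

```python
def is_zerotree_root(tree, threshold):
    """True if the coefficient and every descendant are insignificant (|c| < T)."""
    value, children = tree
    if abs(value) >= threshold:
        return False
    return all(is_zerotree_root(child, threshold) for child in children)

# A small coefficient quadtree: (value, [children]).
tree = (3, [(1, []), (-2, []), (0, []), (2, [(1, []), (0, []), (1, []), (-1, [])])])
assert is_zerotree_root(tree, threshold=4)       # everything below 4 -> zerotree
assert not is_zerotree_root(tree, threshold=3)   # root itself is significant at T=3
```

    Encoding a whole insignificant subtree with a single zerotree symbol, instead of one symbol per coefficient, is what gives EZW its efficiency at low bit rates.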

  18. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
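
    The idea of mapping an image into fewer pixels that retain its information content can be illustrated with a one-level 2-D Haar transform: for smooth images, nearly all energy falls into the quarter-size LL subband, so subsequent analysis can run on that smaller band. This averaging-based Haar is an illustrative stand-in, not the specific wavelet the patented algorithms use:

```python
import numpy as np

def haar2d(img):
    """One averaging-based 2-D Haar level: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row low-pass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row high-pass
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def energy(band):
    return np.sum(band ** 2)

x = np.arange(16, dtype=float)
img = x[:, None] + x[None, :]               # a smooth 16x16 ramp image
ll, lh, hl, hh = haar2d(img)
total = energy(ll) + energy(lh) + energy(hl) + energy(hh)

# Nearly all signal energy sits in the quarter-size LL band.
assert ll.shape == (8, 8)
assert energy(ll) / total > 0.99
```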

  19. Using optical wavelet packet transform to improve the performance of an optoelectronic iris recognition system

    NASA Astrophysics Data System (ADS)

    Cai, De; Tan, Qiaofeng; Yan, Yingbai; Jin, Guofan; He, Qingsheng

    2005-01-01

    The iris, an important biometric feature, has unique advantages: its texture is complex and remains almost unchanged over a person's lifetime. Iris recognition has therefore been widely studied for intelligent personal identification. Most researchers use wavelets as the iris feature extractor, and their systems achieve high accuracy. However, the wavelet transform is time-consuming, so the challenge is to enhance the useful information while maintaining high processing speed. For this reason, exploiting the high parallelism of optics, we propose an optoelectronic system for iris recognition. In this system, we use eigen-images generated from optimally chosen wavelet packets to compress the iris image bank. After optical correlation between the eigen-images and the input, statistical features are extracted. Simulation shows that wavelet-packet preprocessing of the input images results in a higher identification rate. This preprocessing can be performed by the optical wavelet packet transform (OWPT), a new optical transform that we introduce. To generate approximations of the 2-D wavelet packet basis functions for implementing the OWPT, a mother wavelet with an associated scaling function is utilized. Using the cascade algorithm and a 2-D separable wavelet transform scheme, an optical wavelet packet filter is constructed based on the selected best bases. Inserting this filter improves recognition performance.

  20. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the actual identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition could be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, thereby preserving the peaks and valleys found in typical spectra. Comparing with the most widespread dimension-reduction technique, Principal Component Analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classification such as a maximum likelihood method.
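
    Wavelet reduction of a spectrum can be sketched as repeated Haar low-pass averaging, which halves the number of bands per level while keeping broad peaks and valleys in place; the 64-band spectrum and the absorption-dip location below are made-up illustration data:

```python
import numpy as np

def wavelet_reduce(spectrum, levels):
    """Reduce a spectrum by repeated Haar low-pass averaging (one halving per level)."""
    s = np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        s = (s[0::2] + s[1::2]) / 2.0
    return s

# A 64-band spectrum with an absorption dip around band 32.
bands = np.ones(64)
bands[30:35] = 0.2
reduced = wavelet_reduce(bands, levels=2)   # 64 -> 16 bands
assert len(reduced) == 16
assert reduced.argmin() == 32 // 4          # the dip survives the reduction
```

    Unlike PCA, this reduction needs no training statistics and keeps each reduced value tied to a contiguous wavelength range, which is why spectral shape is preserved.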

  1. Hyperspectral image data compression based on DSP

    NASA Astrophysics Data System (ADS)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    2010-11-01

    The huge data volume of hyperspectral images challenges their transmission and storage. It is necessary to find an effective method to compress the hyperspectral image. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, the integer wavelet transform, and the embedded zerotree wavelet (EZW) is proposed in this paper. We adopt a high-powered Digital Signal Processor (DSP), the TMS320DM642, to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm on the DSP runs much faster than the algorithm on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.

  2. Data Compression.

    ERIC Educational Resources Information Center

    Bookstein, Abraham; Storer, James A.

    1992-01-01

    Introduces this issue, which contains papers from the 1991 Data Compression Conference, and defines data compression. The two primary functions of data compression are described, i.e., storage and communications; types of data using compression technology are discussed; compression methods are explained; and current areas of research are…

  3. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  4. A multiresolution analysis for tensor-product splines using weighted spline wavelets

    NASA Astrophysics Data System (ADS)

    Kapl, Mario; Jüttler, Bert

    2009-09-01

    We construct biorthogonal spline wavelets for periodic splines which extend the notion of "lazy" wavelets for linear functions (where the wavelets are simply a subset of the scaling functions) to splines of higher degree. We then use the lifting scheme in order to improve the approximation properties with respect to a norm induced by a weighted inner product with a piecewise constant weight function. Using the lifted wavelets we define a multiresolution analysis of tensor-product spline functions and apply it, as a model problem, to the compression of black-and-white images. This demonstrates that the use of a weight function allows the norm to be adapted to the specific problem.

  5. Video coding with lifted wavelet transforms and complementary motion-compensated signals

    NASA Astrophysics Data System (ADS)

    Flierl, Markus H.; Vandergheynst, Pierre; Girod, Bernd

    2004-01-01

    This paper investigates video coding with wavelet transforms applied in the temporal direction of a video sequence. The wavelets are implemented with the lifting scheme in order to permit motion compensation between successive pictures. We improve motion compensation in the lifting steps and utilize complementary motion-compensated signals. Similar to superimposed predictive coding with complementary signals, this approach improves compression efficiency. We investigate experimentally and theoretically complementary motion-compensated signals for lifted wavelet transforms. Experimental results with the complementary motion-compensated Haar wavelet and frame-adaptive motion compensation show improvements in coding efficiency of up to 3 dB. The theoretical results demonstrate that the lifted Haar wavelet scheme with complementary motion-compensated signals is able to approach the bound for bit-rate savings of 2 bits per sample and motion-accuracy step when compared to optimum intra-frame coding of the input pictures.
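
    The lifting construction referred to above factors a wavelet into split, predict, and update steps; the paper's contribution is inserting motion compensation inside those steps. A minimal temporal Haar lifting without any motion compensation, for orientation only:

```python
def haar_lift_forward(frames):
    """Haar transform via lifting: split, predict, update."""
    even = frames[0::2]
    odd = frames[1::2]
    # Predict: estimate each odd sample from its even neighbour; keep the residual.
    detail = [o - e for o, e in zip(odd, even)]
    # Update: adjust the even samples so the low band preserves the pair means.
    approx = [e + d / 2.0 for e, d in zip(even, detail)]
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order for perfect reconstruction."""
    even = [a - d / 2.0 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

frames = [10.0, 12.0, 14.0, 20.0]           # pixel values along time
approx, detail = haar_lift_forward(frames)
assert haar_lift_inverse(approx, detail) == frames   # perfect reconstruction
```

    Because each lifting step is inverted exactly by reversing it, motion compensation can be placed inside the predict and update steps without losing perfect reconstruction, which is what permits the motion-compensated temporal transform described above.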

  6. Stationary wavelet transform for under-sampled MRI reconstruction.

    PubMed

    Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M

    2014-12-01

    In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional lp-penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data with particular emphasis on multiple-channel acquisitions.
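
    The translation-invariance argument can be checked directly: shifting the input of an undecimated (stationary) transform merely shifts its coefficients, while the decimated transform's coefficients change value. A one-level Haar sketch (circular boundary handling is an assumption made for brevity):

```python
import numpy as np

def dwt_detail(x):
    """Decimated Haar detail band (downsampled by 2)."""
    return (x[0::2] - x[1::2]) / 2.0

def swt_detail(x):
    """Undecimated (stationary) Haar detail band: no downsampling, circular wrap."""
    return (x - np.roll(x, -1)) / 2.0

x = np.zeros(16)
x[8] = 1.0                                   # an impulse
shifted = np.roll(x, 1)

# SWT is translation-invariant: coefficients simply shift with the signal.
assert np.allclose(swt_detail(shifted), np.roll(swt_detail(x), 1))
# DWT is not: shifting the input changes the coefficient values themselves.
assert not np.allclose(dwt_detail(shifted), np.roll(dwt_detail(x), 1))
```

    Penalizing coefficients that change with every sub-pixel shift is one source of the pseudo-Gibbs artifacts the abstract describes; the shift-invariant SWT penalty avoids it at the cost of redundant (undecimated) coefficients.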

  7. Wavelet-based embedded zerotree extension to color coding

    NASA Astrophysics Data System (ADS)

    Franques, Victoria T.

    1998-03-01

    Recently, a new image compression algorithm was developed which employs a wavelet transform and a simple binary linear quantization scheme with an embedded coding technique to perform data compaction. This new family of coders, Embedded Zerotree Wavelet (EZW), provides better compression performance than the current JPEG coding standard at low bit rates. Since the EZW coding algorithm emerged, all published coding results related to this technique have been for monochrome images. In this paper the author has enhanced the original coding algorithm to yield a better compression ratio, and has extended the wavelet-based zerotree coding to color images. Color imagery is often represented by several components, such as RGB, in which each component is generally processed separately. With color coding, each component could be compressed individually in the same manner as a monochrome image, therefore requiring a threefold increase in processing time. Most image coding standards instead employ decorrelated components, such as YIQ or Y, Cb, Cr, with subsampling of the 'chroma' components; such a coding technique is employed here. Results of the coding, including reconstructed images and coding performance, will be presented.

  8. Compression of gray-scale fingerprint images

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    1994-03-01

    The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.
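
    Zero run encoding, the middle stage of the pipeline described above, replaces runs of zero-valued quantized coefficients with run lengths before entropy coding. A minimal sketch (the (run, value) pairing is an illustrative format, not the actual symbol alphabet of the FBI specification):

```python
def zero_run_encode(coeffs):
    """Encode quantized coefficients as (zero_run_length, nonzero_value) pairs."""
    out, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    if run:
        out.append((run, None))   # trailing zeros, no terminating value
    return out

def zero_run_decode(pairs):
    """Expand (run, value) pairs back into the coefficient sequence."""
    out = []
    for run, value in pairs:
        out += [0] * run
        if value is not None:
            out.append(value)
    return out

q = [0, 0, 5, 0, 0, 0, -2, 0]
encoded = zero_run_encode(q)
assert encoded == [(2, 5), (3, -2), (1, None)]
assert zero_run_decode(encoded) == q
```

    Scalar quantization of wavelet subbands produces long zero runs, so this stage plus Huffman coding of the resulting symbols is where most of the bit savings come from.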

  9. Compressive imaging using fast transform coding

    NASA Astrophysics Data System (ADS)

    Thompson, Andrew; Calderbank, Robert

    2016-10-01

    We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2-D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.

  10. Unequal error protection codes for wavelet video transmission over W-CDMA, AWGN, and Rayleigh fading channels

    NASA Astrophysics Data System (ADS)

    Le, Minh Hung; Liyana-Pathirana, Ranjith

    2003-06-01

    Unequal error protection (UEP) codes combined with a wavelet-based video compression algorithm are analysed over wideband code division multiple access (W-CDMA), additive white Gaussian noise (AWGN), and Rayleigh fading channels. Wavelets have emerged as a powerful method for compressing video sequences. The wavelet-transform compression technique has been shown to be well suited to high-quality video applications, producing better quality output for the compressed frames of video. We use a spatially scalable video coding framework for MPEG-2 in which motion correspondences between successive video frames are exploited in the wavelet transform domain. The basic motivation for our coder is that motion fields are typically smooth and can be efficiently captured through a multiresolution framework. Wavelet decomposition is applied to the video frames, and the coefficients at each level are predicted from the coarser level through backward motion compensation. The proposed algorithms based on the embedded zerotree wavelet (EZW) coder and the 2-D wavelet packet transform (2-D WPT) are investigated.

  11. Wavelet-based Poisson solver for use in particle-in-cell simulations.

    PubMed

    Terzić, Balsa; Pogorelov, Ilya V

    2005-06-01

    We report on a successful implementation of a wavelet-based Poisson solver for use in three-dimensional particle-in-cell simulations. Our method harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. We present and discuss preliminary results relating to the application of the new solver to test problems in accelerator physics and astrophysics.
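
    The simultaneous denoising-and-compression idea can be sketched with a one-level Haar transform and hard thresholding (the function names and the threshold are illustrative assumptions of ours; the solver itself uses far more sophisticated wavelet machinery and preconditioners):

```python
import numpy as np

def haar_fwd(x):
    """One-level orthonormal Haar transform of a 1D array (even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_inv(a, d):
    """Exact inverse of haar_fwd."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise_compress(x, thresh):
    """Hard-threshold small detail coefficients: this removes low-level
    noise and zeroes out most of the detail data in a single step."""
    a, d = haar_fwd(x)
    d = np.where(np.abs(d) > thresh, d, 0.0)
    return haar_inv(a, d), int(np.count_nonzero(d))
```

    With a threshold of zero the round trip is exact; raising the threshold zeroes more detail coefficients, trading fidelity for sparsity of the stored data set.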

  12. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images, derived from the ICER image compressor. ICER-3D can provide both lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that captures spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  13. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

    Zhang, S.; Xu, Y.; Xia, J.; ,

    2004-01-01

    Horizontal stacking plays a crucial role in modern seismic data processing, for it not only compresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking, and weighted stacking based on the conventional correlative function, both produce false events caused by such noise. Wavelet transforms and high-order statistics are very useful tools in modern signal processing: multiresolution analysis in wavelet theory can decompose a signal on different scales, and high-order correlative functions can suppress correlative noise, against which the conventional correlative function is of no use. Based on the theory of wavelet transforms and high-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction by weights that are calculated through high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and compressing correlative random noise.
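
    The weighting idea can be sketched with ordinary second-order correlation (the function below is a simplified stand-in of our own; HOCWS replaces the zero-lag correlation with high-order statistics computed in the wavelet domain):

```python
import numpy as np

def correlative_weighted_stack(gather):
    """Weighted stack of NMO-corrected traces (array of samples x traces).

    Each trace is weighted by its normalized zero-lag correlation with a
    pilot trace (the plain average stack), so traces that disagree with
    the coherent signal contribute less. Simplified second-order version
    for illustration only.
    """
    pilot = gather.mean(axis=1)
    # Normalized zero-lag cross-correlation of each trace with the pilot.
    num = gather.T @ pilot
    den = np.linalg.norm(gather, axis=0) * np.linalg.norm(pilot) + 1e-12
    w = np.clip(num / den, 0.0, None)   # anti-correlated traces get zero weight
    w = w / (w.sum() + 1e-12)
    return gather @ w
```

    For a gather of identical traces the weights are uniform and the weighted stack reduces to the ordinary average.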

  14. Wavelet Signal Processing for Transient Feature Extraction

    DTIC Science & Technology

    1992-03-15

    Research was conducted to evaluate the feasibility of applying Wavelets and Wavelet Transform methods to transient signal feature extraction problems... Wavelet transform techniques were developed to extract low dimensional feature data that allowed a simple classification scheme to easily separate

  15. Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach

    NASA Technical Reports Server (NTRS)

    Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.

    2000-01-01

    The objective of this study is to investigate the use of the second generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performance of the bi-orthogonal second generation wavelet transform is compared with that of the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments, in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256(exp 3) and 512(exp 3) DNS of forced homogeneous turbulence (Re(sub lambda) = 168) and 256(exp 3) and 512(exp 3) DNS of decaying homogeneous turbulence (Re(sub lambda) = 55). It is found that bi-orthogonal second generation wavelets can be used for coherent vortex extraction. The results of a priori tests indicate that second generation wavelets achieve better compression and that the residual field is closer to Gaussian. However, it was found that the use of second generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.

  16. Wavelet Preprocessing of Acoustic Signals

    DTIC Science & Technology

    1991-12-01

    This paper describes results using the wavelet transform to preprocess acoustic broadband signals in a system that discriminates between different classes of acoustic bursts. This is motivated by the similarity between the proportional bandwidth filters provided by the wavelet transform and those found in biological hearing systems. The experiment involves comparing statistical pattern classifier effects of wavelet and FFT preprocessed acoustic signals. The data used was from the DARPA Phase I database, which consists of artificially generated signals with real ocean background.

  17. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  18. Wavelets and Approximation

    DTIC Science & Technology

    2007-11-02

    Daubechies-DeVore (Cohen-Daubechies-Gulleryuz-Orchard): this encoder is optimal on all Besov classes compactly embedded into L2 (EZW, Said-Pearlman).

  19. Wavelet phase synchronization and chaoticity.

    PubMed

    Postnikov, E B

    2009-11-01

    It has been shown that the so-called "wavelet phase" (or "time-scale") synchronization of chaotic signals is actually synchronization of smoothed functions with reduced chaotic fluctuations. This fact is based on the representation of the wavelet transform with the Morlet wavelet as a solution of the Cauchy problem for a simple diffusion equation, with an initial condition in the form of a harmonic function modulated by a given signal. The topological background of the resulting effect is discussed. It is argued that wavelet phase synchronization provides information about the synchronization of an averaged motion described by bounding tori, instead of the fine-level classical chaotic phase synchronization.

  20. Wavelets in medical imaging

    SciTech Connect

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-17

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of the EEG, one of the important biomedical signals, are carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  1. Wavelets in medical imaging

    NASA Astrophysics Data System (ADS)

    Zahra, Noor e.; Sevindir, Hulya Kodal; Aslan, Zafer; Siddiqi, A. H.

    2012-07-01

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of the EEG, one of the important biomedical signals, are carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed by various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals by SWT.

  2. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
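
    The adaptive uniform scalar quantization at the core of the wavelet/scalar quantization method can be sketched as a uniform quantizer with a widened zero bin (the step size and dead-zone factor below are illustrative assumptions of ours, not the per-subband values the standard derives from image statistics):

```python
import numpy as np

def uniform_quantize(coeffs, step, deadzone=1.2):
    """Uniform scalar quantizer with a widened zero bin.

    Coefficients inside the dead zone map to index 0; outside it, bins of
    equal width `step` are used. Illustrative sketch only.
    """
    z = deadzone * step
    idx = np.zeros_like(coeffs, dtype=np.int64)
    pos = coeffs > z / 2
    neg = coeffs < -z / 2
    idx[pos] = np.floor((coeffs[pos] - z / 2) / step).astype(np.int64) + 1
    idx[neg] = -(np.floor((-coeffs[neg] - z / 2) / step).astype(np.int64) + 1)
    return idx

def dequantize(idx, step, deadzone=1.2):
    """Reconstruct each nonzero index at the midpoint of its bin."""
    z = deadzone * step
    out = np.zeros_like(idx, dtype=float)
    out[idx > 0] = z / 2 + (idx[idx > 0] - 0.5) * step
    out[idx < 0] = -(z / 2 + (-idx[idx < 0] - 0.5) * step)
    return out
```

    The widened zero bin sends the many near-zero wavelet coefficients to index 0, which is what makes the subsequent run-length and Huffman coding effective.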

  3. An Evolved Wavelet Library Based on Genetic Algorithm

    PubMed Central

    Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.

    2014-01-01

    As the size of captured images increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of the transmission channels and preserves the image resolution without considerable loss in image quality. Many conventional image compression algorithms use a wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the processes of quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image except the edge areas and the other for the portions near the edges in the image (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent the GA from this effect. The GA used to evolve filter coefficients primarily focuses on maximizing the peak signal-to-noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform the existing methods by a 0.31 dB improvement in the average PSNR and a 0.39 dB improvement in the maximum PSNR. PMID:25405225

  4. Neural network wavelet technology: A frontier of automation

    NASA Technical Reports Server (NTRS)

    Szu, Harold

    1994-01-01

    Neural networks are an outgrowth of interdisciplinary studies concerning the brain. These studies are guiding the field of Artificial Intelligence towards the, so-called, 6th Generation Computer. Enormous amounts of resources have been poured into R/D. Wavelet Transforms (WT) have replaced Fourier Transforms (FT) in Wideband Transient (WT) cases since the discovery of WT in 1985. The list of successful applications includes the following: earthquake prediction; radar identification; speech recognition; stock market forecasting; FBI finger print image compression; and telecommunication ISDN-data compression.

  5. Contrast Sensitivity of the Wavelet, Dual Tree Complex Wavelet, Curvelet and Steerable Pyramid Transforms.

    PubMed

    Hill, Paul; Achim, Alin; Al-Mualla, Mohammed Ebrahim; Bull, David

    2016-04-11

    Accurate estimation of the contrast sensitivity of the human visual system is crucial for perceptually based image processing in applications such as compression, fusion and denoising. Conventional Contrast Sensitivity Functions (CSFs) have been obtained using fixed-size Gabor functions. However, the basis functions of multiresolution decompositions such as wavelets often resemble Gabor functions but are of variable size and shape. Therefore to use conventional contrast sensitivity functions in such cases is not appropriate. We have therefore conducted a set of psychophysical tests in order to obtain the contrast sensitivity function for a range of multiresolution transforms: the Discrete Wavelet Transform (DWT), the Steerable Pyramid, the Dual-Tree Complex Wavelet Transform (DT-CWT) and the Curvelet Transform. These measures were obtained using contrast variation of each transform's basis functions in a 2AFC experiment combined with an adapted version of the QUEST psychometric function method. The results enable future image processing applications that exploit these transforms, such as signal fusion, super-resolution processing, denoising and motion estimation, to be perceptually optimised in a principled fashion. The results are compared to an existing vision model (HDR-VDP2) and are used to show quantitative improvements within a denoising application compared to using conventional CSF values.

  6. Experimental Studies on a Compact Storage Scheme for Wavelet-based Multiresolution Subregion Retrieval

    NASA Technical Reports Server (NTRS)

    Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.

    1996-01-01

    Wavelet transforms, when combined with quantization and a suitable encoding, can be used to compress images effectively. In order to use them for image library systems, a compact storage scheme for quantized wavelet coefficient data must be developed with support for fast subregion retrieval. We have designed such a scheme, and in this paper we provide experimental studies to demonstrate that it achieves good image compression ratios while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.

  7. The berkeley wavelet transform: a biologically inspired orthogonal wavelet transform.

    PubMed

    Willmore, Ben; Prenger, Ryan J; Wu, Michael C-K; Gallant, Jack L

    2008-06-01

    We describe the Berkeley wavelet transform (BWT), a two-dimensional triadic wavelet transform. The BWT comprises four pairs of mother wavelets at four orientations. Within each pair, one wavelet has odd symmetry, and the other has even symmetry. By translation and scaling of the whole set (plus a single constant term), the wavelets form a complete, orthonormal basis in two dimensions. The BWT shares many characteristics with the receptive fields of neurons in mammalian primary visual cortex (V1). Like these receptive fields, BWT wavelets are localized in space, tuned in spatial frequency and orientation, and form a set that is approximately scale invariant. The wavelets also have spatial frequency and orientation bandwidths that are comparable with biological values. Although the classical Gabor wavelet model is a more accurate description of the receptive fields of individual V1 neurons, the BWT has some interesting advantages. It is a complete, orthonormal basis and is therefore inexpensive to compute, manipulate, and invert. These properties make the BWT useful in situations where computational power or experimental data are limited, such as estimation of the spatiotemporal receptive fields of neurons.

  8. Optimized discrete wavelet transforms in the cubed sphere with the lifting scheme—implications for global finite-frequency tomography

    NASA Astrophysics Data System (ADS)

    Chevrot, Sébastien; Martin, Roland; Komatitsch, Dimitri

    2012-12-01

    Wavelets are extremely powerful to compress the information contained in finite-frequency sensitivity kernels and tomographic models. This interesting property opens the perspective of reducing the size of global tomographic inverse problems by one to two orders of magnitude. However, introducing wavelets into global tomographic problems raises the problem of computing fast wavelet transforms in spherical geometry. Using a Cartesian cubed sphere mapping, which grids the surface of the sphere with six blocks or 'chunks', we define a new algorithm to implement fast wavelet transforms with the lifting scheme. This algorithm is simple and flexible, and can handle any family of discrete orthogonal or bi-orthogonal wavelets. Since wavelet coefficients are local in space and scale, aliasing effects resulting from a parametrization with global functions such as spherical harmonics are avoided. The sparsity of tomographic models expanded in wavelet bases implies that it is possible to exploit the power of compressed sensing to retrieve Earth's internal structures optimally. This approach involves minimizing a combination of a ℓ2 norm for data residuals and a ℓ1 norm for model wavelet coefficients, which can be achieved through relatively minor modifications of the algorithms that are currently used to solve the tomographic inverse problem.
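
    The lifting scheme itself can be shown on the simplest case, the Haar wavelet written as predict and update steps (a sketch of our own; the paper applies the same machinery to arbitrary orthogonal or bi-orthogonal wavelet families on the cubed-sphere grid):

```python
import numpy as np

def haar_lift_fwd(x):
    """Haar transform written as lifting steps (predict, then update).

    The lifting scheme computes the transform in place: each odd sample
    is predicted from its even neighbor, leaving a detail coefficient,
    then the even samples are updated so the coarse level keeps the
    local average.
    """
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even        # predict: odd sample from its even neighbor
    a = even + d / 2      # update: a = (even + odd) / 2, the local mean
    return a, d

def haar_lift_inv(a, d):
    """Invert by running the lifting steps backwards."""
    even = a - d / 2      # undo update
    odd = d + even        # undo predict
    x = np.empty(2 * a.size)
    x[0::2], x[1::2] = even, odd
    return x
```

    Because each lifting step is trivially invertible by subtraction, any predictor works on irregular grids such as the cubed sphere, which is what makes the scheme so flexible.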

  9. A generalized wavelet extrema representation

    SciTech Connect

    Lu, Jian; Lades, M.

    1995-10-01

    The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, peaks and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in the reconstructed signal.

  10. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.

  11. Wavelet preprocessing of acoustic signals

    NASA Astrophysics Data System (ADS)

    Huang, W. Y.; Solorzano, M. R.

    1991-12-01

    This paper describes results using the wavelet transform to preprocess acoustic broadband signals in a system that discriminates between different classes of acoustic bursts. This is motivated by the similarity between the proportional bandwidth filters provided by the wavelet transform and those found in biological hearing systems. The experiment involves comparing statistical pattern classifier effects of wavelet and FFT preprocessed acoustic signals. The data used was from the DARPA Phase 1 database, which consists of artificially generated signals with real ocean background. The results show that the wavelet transform did provide improved performance when classifying on a frame-by-frame basis. The DARPA Phase 1 database is well matched to proportional bandwidth filtering; i.e., signal classes that contain high frequencies do tend to have shorter duration in this database. It is also noted that the decreasing background levels at high frequencies compensate for the poor match of the wavelet transform for long duration (high frequency) signals.

  12. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as good as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base.
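
    For contrast with the 1.58 bits/base result, the fixed 2 bits/base baseline can be sketched as plain bit packing (a sketch of our own; DNABIT Compress itself assigns variable-length bit codes to repeat fragments to get below this bound):

```python
# Fixed 2-bit code per base: the naive baseline that repeat-aware
# schemes such as DNABIT Compress improve upon.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Pack a DNA string into bytes at 2 bits per base (plus its length)."""
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]
    nbytes = (2 * len(seq) + 7) // 8
    return len(seq), bits.to_bytes(nbytes, "big")

def unpack(n, data):
    """Recover the DNA string from its length and packed bytes."""
    bits = int.from_bytes(data, "big")
    out = []
    for i in range(n):
        shift = 2 * (n - 1 - i)
        out.append(BASE[(bits >> shift) & 0b11])
    return "".join(out)
```

    A 6-base string packs into 2 bytes instead of 6, i.e. exactly 2 bits/base; beating that requires exploiting sequence structure.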

  13. Wavelet Analysis of Bioacoustic Scattering and Marine Mammal Vocalizations

    DTIC Science & Technology

    2005-09-01

    There are two distinct classes of wavelet transforms: the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT). The discrete wavelet transform is a compact representation of the data and is particularly useful for noise reduction and

  14. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the significant coefficients' magnitudes are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is tied with SPIHT. Furthermore, it is observed that SLCCA generally has the best performance on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  15. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.

  16. Wavelets and spacetime squeeze

    NASA Technical Reports Server (NTRS)

    Han, D.; Kim, Y. S.; Noz, Marilyn E.

    1993-01-01

    It is shown that the wavelet is the natural language for the Lorentz covariant description of localized light waves. A model for covariant superposition is constructed for light waves with different frequencies. It is therefore possible to construct a wave function for light waves carrying a covariant probability interpretation. It is shown that the time-energy uncertainty relation (Delta(t))(Delta(w)) approximately equal to 1 for light waves is a Lorentz-invariant relation. The connection between photons and localized light waves is examined critically.

  17. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 MB card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artifacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.

  18. Wavelet Packets in Wideband Multiuser Communications

    DTIC Science & Technology

    2004-11-01

    We have developed doubly orthogonal CDMA user spreading waveforms based on wavelet packets. We have also developed and evaluated a wavelet packet based ... inter-symbol interferences. Compared with the existing DFT based multicarrier CDMA systems, better performance is achieved with the wavelet packet ...

  19. An Introduction to Wavelet Theory and Analysis

    SciTech Connect

    Miner, N.E.

    1998-10-01

    This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.

  20. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest by varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) gray-scale images showed very promising results.
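
    The IQ metrics named above are straightforward to compute. A minimal sketch of RMSE and PSNR for 8-bit images (the tiny arrays are illustrative, and SSIM is omitted for brevity):

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    diff = a.astype(float) - b.astype(float)
    return np.sqrt(np.mean(diff ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

original = np.array([[52, 60], [61, 55]], dtype=np.uint8)
decompressed = np.array([[54, 58], [60, 57]], dtype=np.uint8)
print(rmse(original, decompressed))   # about 1.80
print(psnr(original, decompressed))   # about 43.0 dB
```

    Casting to float before subtracting avoids uint8 wraparound; a regression model as in step two would relate such metric values to each codec's quality parameter.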

  1. Large Scale Isosurface Bicubic Subdivision-Surface Wavelets for Representation and Visualization

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-01-05

    We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally-skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.

  2. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. One reason may be that no good sparse approximation exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth variation. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  3. A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.

    PubMed

    Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W

    2005-01-01

    We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
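
    Wavelet shrinkage of the kind described above can be sketched with a one-level Haar transform and soft thresholding. This is a toy illustration of the shrinkage idea, not the paper's frequency-adaptive scheme; the signal, noise level and threshold are arbitrary choices.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT (even-length input assumed)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: interleave the reconstructed samples."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def soft_threshold(c, t):
    """Soft shrinkage: move coefficient magnitudes toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 4.0], 8)                  # a step signal
noisy = clean + 0.5 * rng.standard_normal(16)
a, d = haar_dwt(noisy)
denoised = haar_idwt(a, soft_threshold(d, 0.7))   # threshold 0.7 is arbitrary
```

    Shrinking the detail coefficients suppresses noise while the approximation band carries the underlying signal; the paper's scheme instead picks thresholds adaptively per frequency band and models the coefficient distributions with heavy-tailed laws.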

  4. An introduction to wavelet theory and application for the radiological physicist.

    PubMed

    Harpen, M D

    1998-10-01

    The wavelet transform, part of a rapidly advancing new area of mathematics, has become an important technique for image compression, noise suppression, and feature extraction. As a result, the radiological physicist can expect to be confronted with elements of wavelet theory as diagnostic radiology advances into teleradiology, PACS, and computer aided feature extraction and diagnosis. With this in mind we present a primer on wavelet theory geared specifically for the radiological physicist. The mathematical treatment is free of the details of mathematical rigor, which are found in most other treatments of the subject and which are of little interest to physicists, yet is sufficient to convey a reasonably deep working knowledge of wavelet theory.

  5. Wavelet Transform Signal Processing Applied to Ultrasonics.

    DTIC Science & Technology

    1995-05-01

    The wavelet transform is applied to the analysis of ultrasonic waves for improved signal detection and analysis of the signals. In instances where...the mother wavelet is well defined, the wavelet transform has relative insensitivity to noise and does not need windowing. Peak detection of...ultrasonic pulses using the wavelet transform is described and results show good detection even when large white noise was added. The use of the wavelet

  6. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
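
    The O(n) distance transform mentioned above can be illustrated in one dimension by the classic two-pass scan. This is only a sketch of the idea; the paper's transform produces signed distances over 3D volumes.

```python
def distance_transform_1d(occupied, big=10**9):
    """Two-pass O(n) city-block distance to the nearest occupied cell."""
    n = len(occupied)
    dist = [0 if occupied[i] else big for i in range(n)]
    for i in range(1, n):            # forward pass: nearest cell to the left
        dist[i] = min(dist[i], dist[i - 1] + 1)
    for i in range(n - 2, -1, -1):   # backward pass: nearest cell to the right
        dist[i] = min(dist[i], dist[i + 1] + 1)
    return dist

print(distance_transform_1d([0, 0, 1, 0, 0, 0, 1]))   # [2, 1, 0, 1, 2, 1, 0]
```

    Each cell is visited twice regardless of the data, hence linear time; the volumetric analogue additionally carries a sign distinguishing inside from outside the surface.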

  7. Spherical 3D isotropic wavelets

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2012-04-01

    Context. Future cosmological surveys will provide 3D large scale structure maps with large sky coverage, for which a 3D spherical Fourier-Bessel (SFB) analysis in spherical coordinates is natural. Wavelets are particularly well-suited to the analysis and denoising of cosmological data, but a spherical 3D isotropic wavelet transform does not currently exist to analyse spherical 3D data. Aims: The aim of this paper is to present a new formalism for a spherical 3D isotropic wavelet, i.e. one based on the SFB decomposition of a 3D field and accompany the formalism with a public code to perform wavelet transforms. Methods: We describe a new 3D isotropic spherical wavelet decomposition based on the undecimated wavelet transform (UWT) described in Starck et al. (2006). We also present a new fast discrete spherical Fourier-Bessel transform (DSFBT) based on both a discrete Bessel transform and the HEALPIX angular pixelisation scheme. We test the 3D wavelet transform and as a toy-application, apply a denoising algorithm in wavelet space to the Virgo large box cosmological simulations and find we can successfully remove noise without much loss to the large scale structure. Results: We have described a new spherical 3D isotropic wavelet transform, ideally suited to analyse and denoise future 3D spherical cosmological surveys, which uses a novel DSFBT. We illustrate its potential use for denoising using a toy model. All the algorithms presented in this paper are available for download as a public code called MRS3D at http://jstarck.free.fr/mrs3d.html

  8. Tailoring wavelets for chaos control.

    PubMed

    Wei, G W; Zhan, Meng; Lai, C-H

    2002-12-31

    Chaos is a class of ubiquitous phenomena and controlling chaos is of great interest and importance. In this Letter, we introduce wavelet controlled dynamics as a new paradigm of dynamical control. We find that by modifying a tiny fraction of the wavelet subspaces of a coupling matrix, we could dramatically enhance the transverse stability of the synchronous manifold of a chaotic system. Wavelet controlled Hopf bifurcation from chaos is observed. Our approach provides a robust strategy for controlling chaos and other dynamical systems in nature.

  9. Peak finding using biorthogonal wavelets

    SciTech Connect

    Tan, C.Y.

    2000-02-01

    The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigations) wavelets when used for peak finding in noisy data. They will show that in this instance, their filters perform much better than the FBI wavelets.

  10. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  11. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  12. Image compression requirements and standards in PACS

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.

    1995-05-01

    Cost-effective telemedicine and storage create a need for medical image compression. Compression saves communication bandwidth and reduces the size of the stored images. After clinicians become acquainted with the quality of images produced by some of the newer algorithms, they accept the idea of lossy compression. The older algorithms, JPEG and MPEG in particular, are generally not adequate for high-quality compression of medical images. The requirements for medical image compression center on diagnostic-quality images after restoration. The compression artifacts should not interfere with viewing the images for diagnosis. New requirements arise from the fact that the images will likely be viewed on a computer workstation, where they may be manipulated in ways that would bring out the artifacts. A medical imaging compression standard must be applicable across a large variety of image types, from CT and MR to CR and ultrasound. It is desirable to have one, or very few, compression algorithms that are effective across this broad range of image types. Related series of images, as in CT, MR, or cardiology, require inter-image as well as intra-image processing for effective compression. Two preferred decompositions of medical images are lapped orthogonal transforms and wavelet transforms. These transforms decompose the images in frequency in two different ways: the lapped orthogonal transform groups the data according to the area where the data originated, while the wavelet transform groups the data by the frequency band of the image. The compression realized depends on the similarity of nearby transform coefficients. Huffman coding or the coding of the RICE algorithm is a starting point for the encoding. To be really effective, the coding must have an extension for areas where there is little information, the low-entropy extension. In these areas there is less than one bit per pixel and multiple pixels must be

  13. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  14. Wavelet entropy of stochastic processes

    NASA Astrophysics Data System (ADS)

    Zunino, L.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.

    2007-06-01

    We compare two different definitions for the wavelet entropy associated to stochastic processes. The first one, the normalized total wavelet entropy (NTWS) family [S. Blanco, A. Figliola, R.Q. Quiroga, O.A. Rosso, E. Serrano, Time-frequency analysis of electroencephalogram series, III. Wavelet packets and information cost function, Phys. Rev. E 57 (1998) 932-940; O.A. Rosso, S. Blanco, J. Yordanova, V. Kolev, A. Figliola, M. Schürmann, E. Başar, Wavelet entropy: a new tool for analysis of short duration brain electrical signals, J. Neurosci. Method 105 (2001) 65-75] and a second introduced by Tavares and Lucena [Physica A 357(1) (2005) 71-78]. In order to understand their advantages and disadvantages, exact results obtained for fractional Gaussian noise ( -1<α< 1) and fractional Brownian motion ( 1<α< 3) are assessed. We find out that the NTWS family performs better as a characterization method for these stochastic processes.
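
    The NTWS quantity compared above is built from relative subband energies, S = -Σ_j p_j ln p_j, normalized by ln N. A sketch following that definition (the per-level coefficient lists are assumed to come from some wavelet decomposition):

```python
import numpy as np

def wavelet_entropy(subband_coeffs):
    """Normalized total wavelet entropy: p_j is the fraction of signal
    energy in wavelet level j; the result lies in [0, 1]."""
    energies = np.array([np.sum(np.asarray(c, dtype=float) ** 2)
                         for c in subband_coeffs])
    p = energies / energies.sum()
    p = p[p > 0]                       # convention: 0 * ln 0 = 0
    return float(-np.sum(p * np.log(p)) / np.log(len(subband_coeffs)))

# Energy spread evenly over levels -> maximal entropy (about 1.0)
print(wavelet_entropy([[1.0], [1.0], [1.0], [1.0]]))
# Energy concentrated in one level -> entropy 0.0
print(wavelet_entropy([[5.0], [0.0], [0.0], [0.0]]))
```

    A white-noise-like process spreads energy across levels and scores near 1, while a highly ordered signal concentrates energy and scores near 0, which is what makes the measure useful for characterizing fractional noises.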

  15. Wavelet Analysis of Protein Motion

    PubMed Central

    BENSON, NOAH C.

    2014-01-01

    As high-throughput molecular dynamics simulations of proteins become more common and the databases housing the results become larger and more prevalent, more sophisticated methods to quickly and accurately mine large numbers of trajectories for relevant information will have to be developed. One such method, which is only recently gaining popularity in molecular biology, is the continuous wavelet transform, which is especially well-suited for time course data such as molecular dynamics simulations. We describe techniques for the calculation and analysis of wavelet transforms of molecular dynamics trajectories in detail and present examples of how these techniques can be useful in data mining. We demonstrate that wavelets are sensitive to structural rearrangements in proteins and that they can be used to quickly detect physically relevant events. Finally, as an example of the use of this approach, we show how wavelet data mining has led to a novel hypothesis related to the mechanism of the protein γδ resolvase. PMID:25484480

  16. A new fractional wavelet transform

    NASA Astrophysics Data System (ADS)

    Dai, Hongzhe; Zheng, Zhibao; Wang, Wei

    2017-03-01

    The fractional Fourier transform (FRFT) is a potent tool to analyze time-varying signals. However, it fails in locating the fractional Fourier domain (FRFD)-frequency contents, which are required in some applications. A novel fractional wavelet transform (FRWT) is proposed to solve this problem. It displays the time and FRFD-frequency information jointly in the time-FRFD-frequency plane. The definition, basic properties, inverse transform and reproducing kernel of the proposed FRWT are considered. It has been shown that an FRWT with proper order corresponds to the classical wavelet transform (WT). The multiresolution analysis (MRA) associated with the developed FRWT, together with the construction of the orthogonal fractional wavelets, are also presented. Three applications are discussed: the analysis of signals with time-varying frequency content, the FRFD spectrum estimation of signals involving noise, and the construction of the fractional Haar wavelet. Simulations verify the validity of the proposed FRWT.

  17. A wavelet phase filter for emission tomography

    SciTech Connect

    Olsen, E.T.; Lin, B.

    1995-07-01

    The presence of a high level of noise is a characteristic in some tomographic imaging techniques such as positron emission tomography (PET). Wavelet methods can smooth out noise while preserving significant features of images. Mallat et al. proposed a wavelet based denoising scheme exploiting wavelet modulus maxima, but the scheme is sensitive to noise. In this study, the authors explore the properties of wavelet phase, with a focus on reconstruction of emission tomography images. Specifically, they show that the wavelet phase of regular Poisson noise under a Haar-type wavelet transform converges in distribution to a random variable uniformly distributed on [0, 2π). They then propose three wavelet-phase-based denoising schemes which exploit this property: edge tracking, local phase variance thresholding, and scale phase variation thresholding. Some numerical results are also presented. The numerical experiments indicate that wavelet phase techniques show promise for wavelet based denoising methods.

  18. Gearbox Fault Diagnosis Using Adaptive Wavelet Filter

    NASA Astrophysics Data System (ADS)

    LIN, J.; ZUO, M. J.

    2003-11-01

    Vibration signals from a gearbox are usually noisy. As a result, it is difficult to find early symptoms of a potential failure in a gearbox. Wavelet transform is a powerful tool to disclose transient information in signals. An adaptive wavelet filter based on Morlet wavelet is introduced in this paper. The parameters in the Morlet wavelet function are optimised based on the kurtosis maximisation principle. The wavelet used is adaptive because the parameters are not fixed. The adaptive wavelet filter is found to be very effective in detection of symptoms from vibration signals of a gearbox with early fatigue tooth crack. Two types of discrete wavelet transform (DWT), the decimated with DB4 wavelet and the undecimated with harmonic wavelet, are also used to analyse the same signals for comparison. No periodic impulses appear on any scale in either DWT decomposition.
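
    The kurtosis-maximisation principle above can be sketched as follows. The real Morlet-style wavelet exp(-β²t²/2)cos(πt) is a common form for this kind of filter, but the parameter values, sampling grid and test signals here are illustrative assumptions; the paper optimises the wavelet parameters rather than fixing them.

```python
import numpy as np

def morlet(t, beta=2.0):
    """Real Morlet-style wavelet; beta controls the envelope width."""
    return np.exp(-(beta ** 2) * t ** 2 / 2) * np.cos(np.pi * t)

def kurtosis(x):
    """Normalized fourth moment; impulsive signals score high."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

def filtered_kurtosis(signal, scale, beta=2.0):
    """Kurtosis of the signal after filtering with a Morlet wavelet
    sampled at one scale; an adaptive filter would maximise this
    quantity over (scale, beta)."""
    t = np.linspace(-4, 4, int(8 * scale) + 1)
    return kurtosis(np.convolve(signal, morlet(t, beta), mode='same'))

impulsive = np.zeros(64); impulsive[32] = 5.0       # fault-like transient
smooth = np.sin(np.linspace(0, 8 * np.pi, 64))      # tone, as from healthy meshing
print(filtered_kurtosis(impulsive, 4.0), filtered_kurtosis(smooth, 4.0))
```

    The impulsive signal yields a much higher filtered kurtosis than the smooth tone, which is why maximising kurtosis steers the filter toward the transient impacts produced by a cracked tooth.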

  19. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  20. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  1. Optical HAAR Wavelet Transforms using Computer Generated Holography

    DTIC Science & Technology

    1992-12-17

    This research introduces an optical implementation of the continuous wavelet transform to filter images. The wavelet transform is modeled as a...continuous wavelet transform was performed and that the results compared favorably to digital simulation. Wavelets, Holography, Optical correlators.

  2. Heart Disease Detection Using Wavelets

    NASA Astrophysics Data System (ADS)

    González S., A.; Acosta P., J. L.; Sandoval M., M.

    2004-09-01

    We develop a wavelet-based method to obtain a standardized gray-scale chart of both healthy hearts and hearts suffering from left ventricular hypertrophy. The hypothesis that early malfunction of the heart can be detected must be tested by comparing the wavelet analysis of the corresponding ECG with the limit cases. Several important parameters shall be taken into account, such as age, sex and electrolytic changes.

  3. A wavelet watermarking algorithm based on a tree structure

    NASA Astrophysics Data System (ADS)

    Guitart Pla, Oriol; Lin, Eugene T.; Delp, Edward J., III

    2004-06-01

    We describe a blind watermarking technique for digital images. Our technique constructs an image-dependent watermark in the discrete wavelet transform (DWT) domain and inserts the watermark in the most significant coefficients of the image. The watermarked coefficients are determined by using the hierarchical tree structure induced by the DWT, similar in concept to embedded zerotree wavelet (EZW) compression. If the watermarked image is attacked or manipulated such that the set of significant coefficients is changed, the tree structure allows the correlation-based watermark detector to recover synchronization. Our technique also uses a visual adaptive scheme to insert the watermark to minimize watermark perceptibility. The visual adaptive scheme also takes advantage of the tree structure. Finally, a template is inserted into the watermark to provide robustness against geometric attacks. The template detection uses the cross-ratio of four collinear points.

  4. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression, which is achieved via adaptive arithmetic coding, and (5) DWT coefficients' degeneration from high scale subbands to low scale subbands. In this paper, we have improved the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.
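
    Concept (2), prediction across scales via zerotrees, can be illustrated on a toy 1-D coefficient tree in which node i has children 2i+1 and 2i+2. This sketches only the dominant-pass symbol assignment (significant positive/negative, zerotree root, isolated zero); the real EZW coder scans subbands in a specific order and interleaves refinement passes.

```python
def ezw_symbols(tree, T):
    """Dominant-pass symbols for a toy 1-D zerotree at threshold T:
    POS/NEG if |c| >= T, ZTR if the whole subtree is insignificant,
    IZ (isolated zero) otherwise."""
    n = len(tree)

    def subtree_insig(i):
        return i >= n or (abs(tree[i]) < T
                          and subtree_insig(2 * i + 1)
                          and subtree_insig(2 * i + 2))

    syms = []

    def visit(i):
        if i >= n:
            return
        if abs(tree[i]) >= T:
            syms.append('POS' if tree[i] >= 0 else 'NEG')
        elif subtree_insig(i):
            syms.append('ZTR')
            return               # all descendants are skipped: the rate saving
        else:
            syms.append('IZ')
        visit(2 * i + 1)
        visit(2 * i + 2)

    visit(0)
    return syms

print(ezw_symbols([10, 3, -12, 1, 0, 2, -1], T=8))
```

    A single ZTR symbol stands in for an entire insignificant subtree, which is how EZW exploits the cross-scale self-similarity named in concept (2); halving T between passes gives the successive approximation of concept (3).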

  5. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

    Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction through prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to hold the sparsity in an incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method to increase the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography can be another method. Compressive sparse sampling collects most of significant field

  6. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.

  7. Discrete wavelet transform core for image processing applications

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas E.; Carbone, Richard

    2005-02-01

    This paper presents a flexible hardware architecture for performing the Discrete Wavelet Transform (DWT) on a digital image. The proposed architecture uses a variation of the lifting scheme technique and provides advantages that include small memory requirements, fixed-point arithmetic implementation, and a small number of arithmetic computations. The DWT core may be used for image processing operations, such as denoising and image compression. For example, the JPEG2000 still image compression standard uses the Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWT for lossless and lossy image compression, respectively. Simple wavelet image denoising techniques resulted in improved images up to 27 dB PSNR. The DWT core is modeled using MATLAB and VHDL. The VHDL model is synthesized to a Xilinx FPGA to demonstrate hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons. The execution time for performing both DWTs is nearly identical at approximately 14 clock cycles per image pixel for one level of DWT decomposition. The hardware area generated for the CDF 5/3 is around 15,000 gates using only 5% of the Xilinx FPGA hardware area, at 2.185 MHz max clock speed and 24 mW power consumption.
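    The reversibility that makes the CDF 5/3 filter suitable for lossless coding comes from its integer lifting steps: each step adds a rounded prediction that the inverse subtracts exactly. A minimal sketch (periodic boundary handling via np.roll here for brevity; JPEG2000 itself uses symmetric extension):

```python
import numpy as np

def cdf53_forward(x):
    """One level of the reversible (integer) CDF 5/3 lifting transform.
    x must have even length; boundaries are handled periodically."""
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    d -= (s + np.roll(s, -1)) // 2        # predict step: detail coefficients
    s += (np.roll(d, 1) + d + 2) // 4     # update step: approximation coefficients
    return s, d

def cdf53_inverse(s, d):
    s = s - (np.roll(d, 1) + d + 2) // 4  # undo update
    d = d + (s + np.roll(s, -1)) // 2     # undo predict
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=64)
assert np.array_equal(cdf53_inverse(*cdf53_forward(x)), x)   # lossless roundtrip
```

    The floor divisions introduce rounding, but because the inverse applies the identical rounded quantities in reverse order, reconstruction is bit-exact.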

  8. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. 
In this presentation I will describe some of our preliminary explorations of the applications

  9. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly physical-problem dependent. To minimize the tuning of parameters and physical problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from utilizing appropriate non-orthogonal wavelet basis functions and they can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability at all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method, to determine regions where refinement should be done. The other is the modification of the multiresolution method of Harten (1995) by converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. 
In addition, these

  10. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…

  11. Exploiting prior knowledge in compressed sensing wireless ECG systems.

    PubMed

    Polanía, Luisa F; Carrillo, Rafael E; Blanco-Velasco, Manuel; Barner, Kenneth E

    2015-03-01

    Recent results in telecardiology show that compressed sensing (CS) is a promising tool to lower energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of the performance attained by state-of-the-art wavelet-based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support of the wavelet coefficients of consecutive ECG segments. Experimental results utilizing the MIT-BIH Arrhythmia Database show that significant performance gains, in terms of compression rate and reconstruction quality, can be obtained by the proposed algorithms compared to current CS-based methods.
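    As a baseline for the kind of CS reconstruction the paper improves upon, a generic greedy sparse-recovery method such as Orthogonal Matching Pursuit can be sketched as follows. This is a toy setup with a random Gaussian sensing matrix and a synthetic sparse signal, not the authors' model-based, ECG-specific algorithm:

```python
import numpy as np

def omp(A, y, n_iter):
    """Orthogonal Matching Pursuit: greedy sparse recovery of x from y = A @ x."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(n_iter):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef            # project out chosen atoms
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(42)
m, n, k = 128, 256, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)           # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x                                              # compressed measurements
x_hat = omp(A, y, n_iter=2 * k)                        # extra iterations for robustness
assert np.linalg.norm(x_hat - x) < 1e-6
```

    Prior knowledge of the kind the paper exploits (tree-structured wavelet supports, overlap between consecutive segments) is typically injected here by constraining or warm-starting the support selection.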

  12. Wavelet Transforms using VTK-m

    SciTech Connect

    Li, Shaomeng; Sewell, Christopher Meyer

    2016-09-27

    These are a set of slides dealing with wavelet transforms using VTK-m. First, wavelets are discussed and detailed, then VTK-m is discussed and detailed, then the wavelet transforms in VTK-m are examined in terms of performance and then accuracy, and finally lessons learned, conclusions, and next steps are given. The lessons learned are the following: Launching worklets is expensive. The natural logic of performing a 2D wavelet transform is to repeat the same 1D wavelet transform on every row, then repeat the same 1D wavelet transform on every column, invoking the 1D wavelet worklet each time: num_rows x num_columns invocations. The VTK-m approach to the 2D wavelet transform is to create a worklet for 2D that handles both rows and columns and invoke this new worklet only one time; calculation is fast, but the 1D implementations cannot be reused.

  13. Edge Detection Using a Complex Wavelet

    DTIC Science & Technology

    1993-12-01

    A complex wavelet of the form Psi(x, y) = C(x + jy)exp(-p(x^2 + y^2)) is used in the continuous wavelet transform to obtain edges from a digital image...and x and y are position variables. The square root of the sum of the squares of the real and imaginary parts of the wavelet transform is used to...radar images and the resulting images are shown. Continuous wavelet transform, Digital image.
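    A sketch of the approach as the abstract describes it: sample the complex kernel Psi(x, y) = (x + jy)exp(-p(x^2 + y^2)) on a grid, correlate it with the image, and take the modulus of the response. The parameter values and the synthetic step image are illustrative choices of mine:

```python
import numpy as np

def complex_edge_kernel(size=9, p=0.5):
    """Sample Psi(x, y) = (x + jy) * exp(-p * (x^2 + y^2)) on an odd grid."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    return (X + 1j * Y) * np.exp(-p * (X ** 2 + Y ** 2))

def correlate2_valid(img, k):
    """Inner product of the kernel with each image window ('valid' region)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1), dtype=complex)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # vertical step edge between columns 15 and 16
mag = np.abs(correlate2_valid(img, complex_edge_kernel()))
peak_col = int(np.argmax(mag[12]))     # any row; response is row-invariant here
assert abs(peak_col - 11.5) <= 0.5     # modulus peaks at the edge (output offset by 4)
assert mag[12, 0] < 1e-6               # flat region: the odd kernel sums to zero
```

    Because the kernel is odd in x and y, constant regions produce zero response, so the modulus isolates intensity transitions.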

  14. Double-density complex wavelet cartoon-texture decomposition

    NASA Astrophysics Data System (ADS)

    Hewer, Gary A.; Kuo, Wei; Hanson, Grant

    2007-09-01

    Both the Kingsbury dual-tree and the subsequent Selesnick double-density dual-tree complex wavelet transform approximate an analytic function. The classification of the phase dependency across scales is largely unexplored except by Romberg et al. Here we characterize the sub-band dependency of the orientation of phase gradients by applying the Helmholtz principle to bivariate histograms to locate meaningful modes. A further characterization using the Earth Mover's Distance with the fundamental Rudin-Osher-Meyer Banach space decomposition into cartoon and texture elements is presented. Possible applications include image compression and invariant descriptor selection for image matching.

  15. Wavelet-Based Multiresolution Analyses of Signals

    DTIC Science & Technology

    1992-06-01

    classification. Some signals, notably those of a transient nature, are inherently difficult to analyze with these traditional tools. The Discrete Wavelet Transform has...scales. This thesis investigates dyadic discrete wavelet decompositions of signals. A new multiphase wavelet transform is proposed and investigated. The

  16. Lossless compression for three-dimensional images

    NASA Astrophysics Data System (ADS)

    Tang, Xiaoli; Pearlman, William A.

    2004-01-01

    We investigate and compare the performance of several three-dimensional (3D) embedded wavelet algorithms on lossless 3D image compression. The algorithms are Asymmetric Tree Three-Dimensional Set Partitioning In Hierarchical Trees (AT-3DSPIHT), Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), Three-Dimensional Context-Based Embedded Zerotrees of Wavelet coefficients (3D-CB-EZW), and JPEG2000 Part II for multi-component images. Two kinds of images are investigated in our study -- 8-bit CT and MR medical images and 16-bit AVIRIS hyperspectral images. First, performance using different sizes of coding units is compared; increasing the size of the coding unit improves the performance somewhat. Second, performance using different integer wavelet transforms is compared for AT-3DSPIHT, 3D-SPECK and 3D-CB-EZW. None of the considered filters always performs the best for all data sets and algorithms. Finally, we compare the lossless compression algorithms by applying an integer wavelet transform to the entire image volumes. For 8-bit medical image volumes, AT-3DSPIHT performs the best almost all the time, achieving an average 12% decrease in file size compared with JPEG2000 multi-component, the second-best performer. For 16-bit hyperspectral images, AT-3DSPIHT always performs the best, yielding average decreases in file size of 5.8% and 8.9% compared with 3D-SPECK and JPEG2000 multi-component, respectively. Two 2D compression algorithms, JPEG2000 and UNIX zip, are also included for reference, and all 3D algorithms perform much better than the 2D algorithms.

  17. Recent advances in wavelet technology

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    Wavelet research has been developing rapidly over the past five years, and in particular in the academic world there has been significant activity at numerous universities. In the industrial world, there have been developments at Aware, Inc., Lockheed, Martin-Marietta, TRW, Kodak, Exxon, and many others. The government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many other agencies. The recent literature includes a book indexing citations on this subject from the past decade; it contains over 1,000 references and abstracts.

  18. Image Segmentation Using Affine Wavelets

    DTIC Science & Technology

    1991-12-12

    [Figure-list fragments: "Fourier Transform" [23:677]; 3.6, "Typical Wavelet Function and its Fourier Transform" [23:577]; 3.7, "Orientation of Wavelet Decomposition Filters in the Fourier Domain" [14:65]; 4.1, "Dataflow Diagram of the Wavelet Decomposition Program"] ..."global spatial relationships, as does a Fourier transform." [11] The main thrust of Daugman's article [11] was to show the utility of a neural network

  19. Wavelet filtering of chaotic data

    NASA Astrophysics Data System (ADS)

    Grzesiak, M.

    A satisfactory method of removing noise from experimental chaotic data is still an open problem. Normally it is necessary to assume certain properties of the noise and of the dynamics one wants to extract from the time series. The wavelet-based method of denoising time series originating from low-dimensional dynamical systems and polluted by Gaussian white noise is considered. Its efficiency is investigated by comparing the correlation dimension of clean and noisy data generated for some well-known dynamical systems. The wavelet method is contrasted with the singular value decomposition (SVD) and finite impulse response (FIR) filter methods.

  20. An evaluation of lightweight JPEG2000 encryption with anisotropic wavelet packets

    NASA Astrophysics Data System (ADS)

    Engel, Dominik; Uhl, Andreas

    2007-02-01

    In this paper we evaluate a lightweight encryption scheme for JPEG2000 which relies on a secret transform domain constructed with anisotropic wavelet packets. The pseudo-random selection of the bases used for transformation takes compression performance into account, and discards a number of possible bases which lead to poor compression performance. Our main focus in this paper is to answer the important question of how many bases remain to construct the keyspace. In order to determine the trade-off between compression performance and keyspace size, we compare the approach to a method that selects bases from the whole set of anisotropic wavelet packet bases following a pseudo-random uniform distribution. The compression performance of both approaches is compared to get an estimate of the range of compression quality in the set of all bases. We then analytically investigate the number of bases that are discarded for the sake of retaining compression performance in the compression-oriented approach as compared to selection by uniform distribution. Finally, the question of keyspace quality is addressed, i.e. how much similarity between the basis used for analysis and the basis used for synthesis is tolerable from a security point of view and how this affects the lightweight encryption scheme.

  1. Efficient spatial and temporal representations of global ionosphere maps over Japan using B-spline wavelets

    NASA Astrophysics Data System (ADS)

    Mautz, R.; Ping, J.; Heki, K.; Schaffrin, B.; Shum, C.; Potts, L.

    2005-05-01

    Wavelet expansion has been demonstrated to be suitable for the representation of spatial functions. Here we propose the so-called B-spline wavelets to represent spatial time-series of GPS-derived global ionosphere maps (GIMs) of the vertical total electron content (TEC) from the Earth’s surface to the mean altitudes of GPS satellites, over Japan. The scalar-valued B-spline wavelets can be defined in a two-dimensional, but not necessarily planar, domain. Generated by a sequence of knots, different degrees of B-splines can be implemented: degree 1 represents the Haar wavelet; degree 2, the linear B-spline wavelet, or degree 4, the cubic B-spline wavelet. A non-uniform version of these wavelets allows us to handle data on a bounded domain without any edge effects. B-splines are easily extended with great computational efficiency to domains of arbitrary dimensions, while preserving their properties. This generalization employs tensor products of B-splines, defined as linear superposition of products of univariate B-splines in different directions. The data and model may be identical at the locations of the data points if the number of wavelet coefficients is equal to the number of grid points. In addition, data compression is made efficient by eliminating the wavelet coefficients with negligible magnitudes, thereby reducing the observational noise. We applied the developed methodology to the representation of the spatial and temporal variations of GIM from an extremely dense GPS network, the GPS Earth Observation Network (GEONET) in Japan. Since the sampling of the TEC is registered regularly in time, we use a two-dimensional B-spline wavelet representation in space and a one-dimensional spline interpolation in time. Over the Japan region, the B-spline wavelet method can overcome the problem of bias for the spherical harmonic model at the boundary, caused by the non-compact support. The hierarchical decomposition not only allows an inexpensive calculation, but also
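    The tensor-product construction mentioned above is easy to demonstrate for linear B-splines: products of univariate hat functions inherit the partition-of-unity property in any dimension. A minimal sketch with integer knots (the uniform-knot case only; the paper's non-uniform version removes edge effects on bounded domains):

```python
import numpy as np

def hat(x, k):
    """Univariate linear B-spline (the 'hat' function) centered at integer knot k."""
    return np.maximum(0.0, 1.0 - np.abs(x - k))

def tensor_bspline(x, y, i, j):
    """Bivariate basis function as a product of univariate B-splines."""
    return hat(x, i) * hat(y, j)

# Partition of unity on the interior of the knot range, in 1D and 2D.
xs = np.linspace(1.0, 4.0, 50)
assert np.allclose(sum(hat(xs, k) for k in range(6)), 1.0)
X, Y = np.meshgrid(xs, xs)
total = sum(tensor_bspline(X, Y, i, j) for i in range(6) for j in range(6))
assert np.allclose(total, 1.0)
```

    The 2D identity follows by factoring the double sum into a product of two 1D sums, which is exactly why the tensor-product generalization preserves the univariate properties.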

  2. Denoising and robust nonlinear wavelet analysis

    NASA Astrophysics Data System (ADS)

    Bruce, Andrew G.; Donoho, David L.; Gao, Hong-Ye; Martin, R. D.

    1994-03-01

    In a series of papers, Donoho and Johnstone develop a powerful theory based on wavelets for extracting non-smooth signals from noisy data. Several nonlinear smoothing algorithms are presented which provide high performance for removing Gaussian noise from a wide range of spatially inhomogeneous signals. However, like other methods based on the linear wavelet transform, these algorithms are very sensitive to certain types of non-Gaussian noise, such as outliers. In this paper, we develop outlier resistant wavelet transforms. In these transforms, outliers and outlier patches are localized to just a few scales. By using the outlier resistant wavelet transform, we improve upon the Donoho and Johnstone nonlinear signal extraction methods. The outlier resistant wavelet algorithms are included with the 'S+WAVELETS' object-oriented toolkit for wavelet analysis.
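    The Donoho-Johnstone style of nonlinear signal extraction that this work builds on can be sketched with an orthonormal Haar transform and soft thresholding at the universal threshold. This is illustrative only; the paper's contribution, the outlier-resistant transform, is not shown here, and the signal and noise level are my own choices:

```python
import numpy as np

def haar_dwt(x):
    """Full orthonormal Haar decomposition of a length-2^n signal."""
    coeffs, a = [], x.astype(float)
    while a.size > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        coeffs.append(d)
        a = s
    coeffs.append(a)                      # final approximation coefficient
    return coeffs

def haar_idwt(coeffs):
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2.0)
        up[1::2] = (a - d) / np.sqrt(2.0)
        a = up
    return a

def soft(w, t):
    """Soft thresholding: shrink toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(0)
n, sigma = 1024, 0.5
clean = np.repeat([0.0, 4.0, -2.0, 1.0], n // 4)       # piecewise-constant signal
noisy = clean + sigma * rng.standard_normal(n)
t = sigma * np.sqrt(2.0 * np.log(n))                   # universal threshold
coeffs = haar_dwt(noisy)
den = haar_idwt([soft(d, t) for d in coeffs[:-1]] + [coeffs[-1]])
assert np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

    A single outlier in `noisy` would leak into coefficients at every scale of this linear transform, which is precisely the sensitivity the authors' robust transforms are designed to localize.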

  3. Wavelets based on Hermite cubic splines

    NASA Astrophysics Data System (ADS)

    Cvejnová, Daniela; Černá, Dana; Finěk, Václav

    2016-06-01

    In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines were proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse in the sense that the number of nonzero elements in any column is bounded by a constant. Then, a matrix-vector multiplication in adaptive wavelet methods can be performed exactly with linear complexity for any second order differential equation with constant coefficients. In this contribution, we briefly review these constructions and propose a new wavelet which leads to improved Riesz constants. The proposed wavelets have four vanishing moments.

  4. On the improved correlative prediction scheme for aliased electrocardiogram (ECG) data compression.

    PubMed

    Gao, Xin

    2012-01-01

    An improved scheme for aliased electrocardiogram (ECG) data compression has been constructed, where the predictor exploits the correlative characteristics of adjacent QRS waveforms. The twin-R correlation prediction and lifting wavelet transform (LWT) for periodic ECG waves prove feasible and highly efficient, achieving lower distortion rates at a realizable compression ratio (CR); grey prediction via the GM(1, 1) model has been adopted to evaluate the parametric performance for ECG data compression. Simulation results illustrate the validity of our approach.

  5. A Wavelet Perspective on the Allan Variance.

    PubMed

    Percival, Donald B

    2016-04-01

    The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance (the maximal overlap estimator) can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without resorting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance.
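    The stated one-half relationship can be checked numerically. In the sketch below (function names are mine), the overlapped Allan variance estimator and the MODWT-based Haar wavelet variance estimator, restricted to the same interior coefficients, agree exactly up to the factor of one-half:

```python
import numpy as np

def allan_variance(y, m):
    """Overlapped Allan variance of fractional-frequency data at averaging factor m."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")   # running m-sample means
    d = ybar[m:] - ybar[:-m]                              # adjacent-average differences
    return 0.5 * np.mean(d ** 2)

def haar_modwt_variance(y, j):
    """MODWT-based Haar wavelet variance at level j (scale tau_j = 2**(j-1))."""
    tau = 2 ** (j - 1)
    ybar = np.convolve(y, np.ones(tau) / tau, mode="valid")
    w = 0.5 * (ybar[tau:] - ybar[:-tau])                  # interior Haar MODWT coefficients
    return np.mean(w ** 2)

rng = np.random.default_rng(0)
y = rng.standard_normal(4096)                             # white-noise frequency data
for j in (1, 2, 3):
    assert np.isclose(haar_modwt_variance(y, j),
                      0.5 * allan_variance(y, 2 ** (j - 1)))
```

    The identity holds term by term: each Haar MODWT coefficient is half the corresponding difference of adjacent averages, so its mean square is one quarter of the mean squared difference, while the Allan variance is one half of it.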

  6. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
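    The scalar quantization step at the heart of the method bounds the per-coefficient reconstruction error by half the bin width. A toy sketch of plain uniform quantization (the WSQ standard itself specifies a dead-zone variant with adaptively chosen per-subband bin widths, not shown here):

```python
import numpy as np

def quantize(subband, step):
    """Uniform scalar quantization: map each coefficient to an integer bin index."""
    return np.round(subband / step).astype(int)

def dequantize(indices, step):
    """Reconstruct each coefficient at the center of its bin."""
    return indices * step

rng = np.random.default_rng(1)
band = rng.normal(0.0, 4.0, size=1000)     # stand-in wavelet subband
step = 0.5
rec = dequantize(quantize(band, step), step)
assert np.max(np.abs(band - rec)) <= step / 2 + 1e-12   # error bounded by half a bin
```

    In the full codec the bin indices, not the reconstructed values, are entropy coded; the bin width per subband is what trades off the 15:1 compression ratio against fidelity.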

  7. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: this technique is applied in several domains such as CCTV, electronic device unlocking and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as RaspberryPi and SECO boards. For K = 3 and a database with 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board and 26 ms on a RaspberryPi (B model).
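    The factor-64 database reduction follows directly from keeping only the level-K approximation band: each decomposition level halves both image dimensions, so K = 3 divides the pixel count by 2^(2K) = 64. A minimal sketch of mine using 2x2 block averaging as the Haar approximation:

```python
import numpy as np

def haar_approximation(img, levels):
    """Keep only the approximation band: one 2x2 block average per level."""
    a = img.astype(float)
    for _ in range(levels):
        a = 0.25 * (a[0::2, 0::2] + a[1::2, 0::2] + a[0::2, 1::2] + a[1::2, 1::2])
    return a

img = np.arange(64 * 64, dtype=float).reshape(64, 64)   # stand-in face ROI
approx = haar_approximation(img, levels=3)              # 64x64 -> 8x8
assert img.size // approx.size == 64                    # 2**(2*3) storage reduction
```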

  8. The Effects of Signal and Image Compression of Sar Data on Change Detection Algorithms

    DTIC Science & Technology

    2007-09-01

    2.3 Change Detection For a compression research effort to be effective, a suitable benchmark has to be selected . While SAR is a valuable tool, its...series of images into the same globally positioned coordinates. When registering images, a suitable spatial transformation must be selected in order...All time scale wavelets are derived from a single “ mother wavelet ” by scaling and translating the original function. Translation occurs over the time

  9. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D; Bertram, M; Duchaineau, M; Max, N

    2002-01-14

    Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level of detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
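    The paper's O(n) distance transform for signed-distance volumes is more elaborate, but the linear-time idea itself can be sketched in 1D, where two sweeps suffice (city-block distance, illustrative only):

```python
def distance_transform_1d(is_zero_set):
    """Two-pass O(n) distance to the nearest marked cell (city-block metric)."""
    INF = float("inf")
    d = [0.0 if z else INF for z in is_zero_set]
    for i in range(1, len(d)):               # forward sweep: propagate from the left
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(len(d) - 2, -1, -1):      # backward sweep: propagate from the right
        d[i] = min(d[i], d[i + 1] + 1)
    return d

result = distance_transform_1d([False, False, True, False, False, False, True])
assert result == [2.0, 1.0, 0.0, 1.0, 2.0, 1.0, 0.0]
```

    In higher dimensions the same propagation idea runs one pass per axis direction, which keeps the total work linear in the number of voxels.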

  10. Progressive image data compression with adaptive scale-space quantization

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur

    1999-12-01

    Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization and coding in a progressive fashion. Efficient coders with an embedded code form and rate-fixing abilities, like Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, reconstruction levels and quantization scheme in the SPIHT algorithm. Additionally, we present the results of the best filter bank selection. The most efficient biorthogonal filter banks are tested. A significant efficiency improvement of the SPIHT coder was finally observed, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and in resolution. As a result, improved compression effectiveness is achieved, close to 1.3 dB in comparison to SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, but sometimes the most efficient solution must be found in an iterative way. The final results are competitive with the most efficient wavelet coders.

  11. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.

  12. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.

  13. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  14. Joint image encryption and compression scheme based on IWT and SPIHT

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-03-01

    A joint lossless image encryption and compression scheme based on integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Exploiting the properties of IWT and SPIHT, the scheme combines encryption and compression. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), obtained by adding encryption to the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, a nonlinear inverse operation, the Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance the security. The test results indicate that the proposed methods have high security and good lossless compression performance.
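    One common way to realize a hash-based keystream of the kind mentioned is to hash a key concatenated with a counter and XOR the output with the data. This is a generic counter-mode sketch of mine, not the paper's exact SHA-256/hyper-chaotic construction:

```python
import hashlib

def keystream(key: bytes, nbytes: int) -> bytes:
    """Counter-mode keystream: SHA-256(key || counter) blocks, truncated to nbytes."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

data = b"wavelet coefficients"
ks = keystream(b"secret key", len(data))
cipher = xor_bytes(data, ks)
assert xor_bytes(cipher, ks) == data     # XOR with the keystream is its own inverse
assert cipher != data
```

    Because XOR is an involution, decryption reuses the same keystream, which is why such a layer can be inserted into a bitstream coder without changing its compressed size.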

  15. Compression stockings

    MedlinePlus

    ... knee bend. Compression Stockings Can Be Hard to Put on If it's hard for you to put on the stockings, try these tips: Apply lotion ... your legs, but let it dry before you put on the stockings. Use a little baby powder ...

  16. Compressed sensing for real-time energy-efficient ECG compression on wireless body sensor nodes.

    PubMed

    Mamaghanian, Hossein; Khaled, Nadia; Atienza, David; Vandergheynst, Pierre

    2011-09-01

    Wireless body sensor networks (WBSN) hold the promise to be a key enabling information and communications technology for next-generation patient-centric telecardiology or mobile cardiology solutions. By enabling continuous remote cardiac monitoring, they have the potential to achieve improved personalization and quality of care, increased ability of prevention and early diagnosis, and enhanced patient autonomy, mobility, and safety. However, state-of-the-art WBSN-enabled ECG monitors still fall short of the required functionality, miniaturization, and energy efficiency. Among others, energy efficiency can be improved through embedded ECG compression, which reduces airtime over energy-hungry wireless links. In this paper, we quantify the potential of the emerging compressed sensing (CS) signal acquisition/compression paradigm for low-complexity energy-efficient ECG compression on the state-of-the-art Shimmer WBSN mote. Interestingly, our results show that CS represents a competitive alternative to state-of-the-art discrete wavelet transform (DWT)-based ECG compression solutions in the context of WBSN-based ECG monitoring systems. More specifically, while expectedly exhibiting inferior compression performance to its DWT-based counterpart for a given reconstructed signal quality, its substantially lower complexity and CPU execution time enable it to ultimately outperform DWT-based ECG compression in terms of overall energy efficiency. CS-based ECG compression is accordingly shown to achieve a 37.1% extension in node lifetime relative to its DWT-based counterpart for "good" reconstruction quality.
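
As an illustration of the CS acquisition/reconstruction split described above, the following sketch encodes a synthetic sparse signal with a random Gaussian sensing matrix and recovers it with orthogonal matching pursuit. This is a generic CS demonstration, not the Shimmer implementation; all names are hypothetical, and embedded designs often favor sparse binary sensing matrices to cut multiplier cost.

```python
import numpy as np

def cs_encode(x, m, seed=0):
    """Compressed-sensing measurement y = Phi @ x with a random
    Gaussian sensing matrix Phi of shape (m, n), m << n."""
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=(m, x.size)) / np.sqrt(m)
    return phi @ x, phi

def omp_decode(y, phi, k):
    """Recover a k-sparse signal by orthogonal matching pursuit:
    greedily pick the column most correlated with the residual,
    then re-fit on the selected support by least squares."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(phi.T @ residual)))
        if j not in support:
            support.append(j)
        sub = phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(phi.shape[1])
    x_hat[support] = coef
    return x_hat
```

With enough measurements for the sparsity level (e.g. 96 measurements for a 4-sparse length-256 signal), recovery on the correct support is exact up to numerical precision; the encoder itself is a single matrix-vector product, which is the low-complexity property the abstract emphasizes.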

  17. Video compression based on enhanced EZW scheme and Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Soloveyko, Olexandr M.; Musatenko, Yurij S.; Kurashov, Vitalij N.; Dubikovskiy, Vladislav A.

    2000-06-01

    The paper presents a new method for video compression based on an enhanced embedded zerotree wavelet (EZW) scheme. Recently, video codecs from the EZW family that use 3D versions of the EZW or SPIHT algorithms have shown better performance than the MPEG-2 compression algorithm. These algorithms have many advantages inherent in wavelet-based schemes and EZW-like coders, most importantly good compression performance and scalability. However, they still allow improvement in several ways. First, as we recently showed, using the Karhunen-Loeve (KL) transform instead of the wavelet transform along the time axis improves the compression ratio. Second, instead of the 3D EZW quantization scheme, we propose a convenient 2D quantization for every decorrelated frame, adding one symbol, `Strong Zero Tree', which means that every frame from a chosen set has a zero tree in the same location. The suggested compression algorithm based on the KL transform, the wavelet transform, and the new quantization scheme with strong zerotrees is free from some drawbacks of the plain 3D EZW codec. The presented codec shows 1-6 dB better results than the MPEG-2 compression algorithm on video sequences with small and medium motion.
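
The temporal KL decorrelation step described above can be sketched as follows; this is a minimal illustration under simplifying assumptions (frame grouping, quantization, and the spatial wavelet stage are omitted), with all function names hypothetical.

```python
import numpy as np

def klt_temporal(frames):
    """Decorrelate a group of frames along the time axis with the
    Karhunen-Loeve transform: project onto the eigenvectors of the
    (T x T) temporal covariance matrix."""
    f = frames.reshape(frames.shape[0], -1).astype(float)   # (T, pixels)
    centered = f - f.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / f.shape[1]                # temporal covariance
    w, v = np.linalg.eigh(cov)                              # eigenvalues ascending
    basis = v[:, ::-1]                                      # strongest component first
    coeffs = (basis.T @ f).reshape(frames.shape)
    return coeffs, basis

def klt_inverse(coeffs, basis):
    """Invert the temporal KLT (the basis is orthogonal)."""
    c = coeffs.reshape(coeffs.shape[0], -1)
    return (basis @ c).reshape(coeffs.shape)
```

Because the eigenvector basis is orthogonal, the transform is exactly invertible before quantization; compression comes from most of the energy concentrating in the leading temporal components.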

  18. Wavelet decomposition of forced turbulence: Applicability of the iterative Donoho-Johnstone threshold

    NASA Astrophysics Data System (ADS)

    Lord, Jesse W.; Rast, Mark P.; Mckinlay, Christopher; Clyne, John; Mininni, Pablo D.

    2012-02-01

    We examine the decomposition of forced Taylor-Green and Arnold-Beltrami-Childress (ABC) flows into coherent and incoherent components using an orthonormal wavelet decomposition. We ask whether wavelet coefficient thresholding based on the Donoho-Johnstone criterion can extract a coherent vortex signal while leaving behind Gaussian random noise. We find that no threshold yields a strictly Gaussian incoherent component, and that the most Gaussian incoherent flow is found for data compression lower than that achieved with the fully iterated Donoho-Johnstone threshold. Moreover, even at such low compression, the incoherent component shows clear signs of large-scale spatial correlations that are signatures of the forcings used to drive the flows.
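
A minimal sketch of the iterated Donoho-Johnstone ("universal") threshold, applied directly to a flat array of wavelet coefficients (the wavelet decomposition itself is omitted, and sigma is estimated with a plain standard deviation rather than the MAD estimator often used in practice):

```python
import numpy as np

def iterated_dj_threshold(coeffs, tol=1e-6, max_iter=100):
    """Split coefficients into coherent/incoherent parts by iterating
    the Donoho-Johnstone universal threshold T = sigma * sqrt(2 ln N),
    re-estimating sigma from the sub-threshold coefficients each pass."""
    c = np.asarray(coeffs, dtype=float).ravel()
    n = c.size
    t = np.sqrt(2.0 * np.log(n)) * c.std()
    for _ in range(max_iter):
        t_new = np.sqrt(2.0 * np.log(n)) * c[np.abs(c) <= t].std()
        if abs(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    coherent = np.where(np.abs(c) > t, c, 0.0)   # above-threshold "vortex" part
    return coherent, c - coherent, t
```

The iteration converges because each pass shrinks the noise estimate toward the standard deviation of the sub-threshold residual; the study above asks whether the residual it leaves behind is actually Gaussian.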

  19. Optimization and implementation of the integer wavelet transform for image coding.

    PubMed

    Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella

    2002-01-01

    This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First of all, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The obtained results lead to IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite precision representation of the lifting coefficients on the compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa while keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity.
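
The lifting scheme underlying the IWT can be illustrated with its simplest instance, the integer Haar (S) transform; this is a sketch of the general idea, not the optimized polyphase factorizations studied in the paper.

```python
import numpy as np

def int_haar_fwd(x):
    """One level of the integer Haar (S) transform via lifting:
    detail d = a - b, approximation s = b + floor(d/2).
    Integer in, integer out, and exactly invertible."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = a - b
    s = b + d // 2          # floor division keeps everything integer
    return s, d

def int_haar_inv(s, d):
    """Undo the lifting steps in reverse order."""
    b = s - d // 2
    a = d + b
    x = np.empty(2 * s.size, dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x
```

Because every lifting step maps integers to integers and is reversed exactly, lossless coding falls out for free; lossy coding simply quantizes the same coefficients.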

  20. Wavelet analysis of internal gravity waves

    NASA Astrophysics Data System (ADS)

    Hawkins, J.; Warn-Varnas, A.; Chin-Bing, S.; King, D.; Smolarkiewicsz, P.

    2005-05-01

    A series of model studies of internal gravity waves (igw) have been conducted for several regions of interest. Dispersion relations from the results have been computed using wavelet analysis as described by Meyers (1993). The wavelet transform is repeatedly applied over time and the components are evaluated with respect to their amplitude and peak position (Torrence and Compo, 1998). In this sense we have been able to compute dispersion relations from model results and from measured data. Qualitative agreement has been obtained in some cases. The results from wavelet analysis must be carefully interpreted because the igw models are fully nonlinear and wavelet analysis is fundamentally a linear technique. Nevertheless, a great deal of information describing igw propagation can be obtained from the wavelet transform. We address the domains over which wavelet analysis techniques can be applied and discuss the limits of their applicability.

  1. Removing Signal Intensity Inhomogeneity From Surface Coil MRI Using Discrete Wavelet Transform and Wavelet Packet

    DTIC Science & Technology

    2001-10-25

    We evaluate a combined discrete wavelet transform (DWT) and wavelet packet algorithm to improve the homogeneity of magnetic resonance imaging when a...image and uses this information to normalize the image intensity variations. Estimation of the coil sensitivity profile based on the wavelet transform of

  2. Wavelet-Based Adaptive Denoising of Phonocardiographic Records

    DTIC Science & Technology

    2007-11-02

    the approximated signal, and d the signal details at the given scale; h and g are biorthogonal filters, corresponding to the selected mother wavelet ... dyadic scale can be written as: where is the orthogonal mother wavelet, and: The discrete version of the dyadic wavelet transform can be based on ... wavelet with 4 moments equal to zero (Coiflet-2) as the mother wavelet. The two channels were wavelet decomposed up to the 9th order (i = 0, 1 ... 8

  3. Wavelet transform of neural spike trains

    NASA Astrophysics Data System (ADS)

    Kim, Youngtae; Jung, Min Whan; Kim, Yunbok

    2000-02-01

    Wavelet transform of neural spike trains recorded with a tetrode in the rat primary somatosensory cortex is described. Continuous wavelet transform (CWT) of the spike train clearly shows singularities hidden in the noisy or chaotic spike trains. A multiresolution analysis of the spike train is also carried out using discrete wavelet transform (DWT) for denoising and approximating at different time scales. Results suggest that this multiscale shape analysis can be a useful tool for classifying the spike trains.
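
As a rough illustration of the CWT step applied to a spike train (not the authors' code), a Ricker ("Mexican hat") wavelet can be convolved with the signal at several scales; the scale values and kernel lengths below are arbitrary choices:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican hat) wavelet sampled at `points` positions, width a."""
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_ricker(signal, widths):
    """Continuous wavelet transform by direct convolution with
    Ricker wavelets at the given scales (rows = scales)."""
    out = np.empty((len(widths), signal.size))
    for i, w in enumerate(widths):
        kernel = ricker(min(10 * w, signal.size), w)
        out[i] = np.convolve(signal, kernel, mode='same')
    return out
```

On an isolated spike, the transform magnitude peaks at the spike location across scales, which is the singularity-localization property the abstract exploits.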

  4. Application and Development of Wavelet Analysis

    DTIC Science & Technology

    1992-08-15

    found that optics is quite suitable to generate and display both the direct and the inverse wavelet transforms in parallel. Unlike the digital...toward identifying the suitability of using optics for the multichannel signal analysis. Both the Gabor and the wavelet transforms were studied in terms...inverse wavelet transforms . This is the case for processing both the one and two dimensional signals. A detail comparison of the space-bandwidth

  5. Develop, Apply and Evaluate Wavelet Technology.

    DTIC Science & Technology

    1992-10-20

    Eddington (1928), A. S. The Nature of the Physical World, Cambridge: Cambridge University Press. [11] Einstein, A. (155), The Meaning of Relativity...Albequerque, NM, 1990. [9] R. A. Gopinath and C. S. Burrus, "Wavelet transforms and filter banks," pp. 603-654 in Wavelets: A Tutorial in Theory and...Resnikoff, "Multidimensional wavelet bases," Aware Technical Report, Aware, Inc., Cambridge, MA 1991. [25] S. G. Mallat, "A Theory for multiresolution

  6. Wavelet analysis in two-dimensional tomography

    NASA Astrophysics Data System (ADS)

    Burkovets, Dimitry N.

    2002-02-01

    The diagnostic possibilities of wavelet analysis of coherent images of connective tissue are examined for the diagnosis of its pathological changes. The effectiveness of polarization selection in obtaining wavelet-coefficient images is also shown. The wavelet structures characterizing skin psoriasis and bone-tissue osteoporosis have been analyzed. Histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue have been analyzed.

  7. Wavelet Features Based Fingerprint Verification

    NASA Astrophysics Data System (ADS)

    Bagadi, Shweta U.; Thalange, Asha V.; Jain, Giridhar P.

    2010-11-01

    In this work, we present an automatic fingerprint identification system based on Level 3 features. Systems based only on minutiae features do not perform well for poor-quality images. In practice, we often encounter extremely dry or wet fingerprint images with cuts, warts, etc. For such fingerprints, minutiae-based systems show poor performance in real-time authentication applications. To alleviate the problem of poor-quality fingerprints and to improve the overall performance of the system, this paper proposes fingerprint verification based on wavelet statistical features and co-occurrence matrix features. The features include mean, standard deviation, energy, entropy, contrast, local homogeneity, cluster shade, cluster prominence, and information measure of correlation. In this method, matching between the input image and the stored template can be done without exhaustive search using the extracted features. The wavelet-transform-based approach is better than the existing minutiae-based method, takes less response time, and is hence suitable for on-line verification with high accuracy.

  8. Multidimensional signaling via wavelet packets

    NASA Astrophysics Data System (ADS)

    Lindsey, Alan R.

    1995-04-01

    This work presents a generalized signaling strategy for orthogonally multiplexed communication. Wavelet packet modulation (WPM) employs the basis functions from an arbitrary pruning of a full dyadic tree structured filter bank as orthogonal pulse shapes for conventional QAM symbols. The multi-scale modulation (MSM) and M-band wavelet modulation (MWM) schemes which have been recently introduced are handled as special cases, with the added benefit of an entire library of potentially superior sets of basis functions. The figures of merit are derived and it is shown that the power spectral density is equivalent to that for QAM (in fact, QAM is another special case) and hence directly applicable in existing systems employing this standard modulation. Two key advantages of this method are increased flexibility in time-frequency partitioning and an efficient all-digital filter bank implementation, making the WPM scheme more robust to a larger set of interferences (both temporal and sinusoidal) and computationally attractive as well.

  9. Robust object tracking in compressed image sequences

    NASA Astrophysics Data System (ADS)

    Mujica, Fernando; Murenzi, Romain; Smith, Mark J.; Leduc, Jean-Pierre

    1998-10-01

    Accurate object tracking is important in defense applications where an interceptor missile must home in on a target and track it through the pursuit until the strike occurs. The expense associated with an interceptor missile can be reduced through a distributed processing arrangement in which the computing platform on which the tracking algorithm runs resides on the ground, and the interceptor need only carry the sensor and communications equipment as part of its electronics complement. In this arrangement, the sensor images are compressed to facilitate real-time downloading of the data over the available bandlimited channels and transmitted to the ground. The tracking algorithm is run on a ground-based computer while tracking results are transmitted back to the interceptor as soon as they become available. Compression and transmission in this scenario introduce distortion. If severe, these distortions can lead to erroneous tracking results. As a consequence, tracking algorithms employed for this purpose must be robust to compression distortions. In this paper we introduce a robust object tracking algorithm based on the continuous wavelet transform. The algorithm processes image sequence data on a frame-by-frame basis, implicitly taking advantage of temporal history and spatial frame filtering to reduce the impact of compression artifacts. Test results show that tracking performance can be maintained at low transmission bit rates and that the algorithm can be used reliably in conjunction with many well-known image compression algorithms.

  10. Wavelet methods in data mining

    NASA Astrophysics Data System (ADS)

    Manchanda, P.

    2012-07-01

    Data mining (knowledge discovery in databases) is a comparatively new interdisciplinary field developed by the joint efforts of mathematicians, statisticians, computer scientists, and engineers. There are twelve important ingredients of this field, along with their applications to real-world problems. In this chapter, we review applications of wavelet methods to data mining, particularly denoising, dimension reduction, similarity search, feature extraction, and prediction. Meteorological data from Saudi Arabia and stock market data from India are considered for illustration.

  11. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  12. Feature preserving compression of high resolution SAR images

    NASA Astrophysics Data System (ADS)

    Yang, Zhigao; Hu, Fuxiang; Sun, Tao; Qin, Qianqing

    2006-10-01

    Compression techniques are required to transmit the large amounts of high-resolution synthetic aperture radar (SAR) image data over the available channels. Common image compression methods may lose detail and weak information in original images, especially in smooth areas and at edges with low contrast; this is known as the "smoothing effect", and it makes it difficult to extract and recognize useful image features such as points and lines. We propose a new SAR image compression algorithm that can reduce the "smoothing effect", based on an adaptive wavelet packet transform and feature-preserving rate allocation. Because images should be modeled as non-stationary information sources, a SAR image is partitioned into overlapped blocks. Each overlapped block is then transformed by an adaptive wavelet packet according to the statistical features of the block. In quantizing and entropy coding the wavelet coefficients, we integrate a feature-preserving technique. Experiments show that the quality of our algorithm at compression ratios up to 16:1 is improved significantly, and more weak information is preserved.

  13. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    NASA Astrophysics Data System (ADS)

    Fan, W. J.; Lu, Y.

    2006-10-01

    Wavelet denoising is studied to improve the extraction of an OAS (optical aperture synthesis) object's Fourier information. Translation-invariant wavelet denoising based on Donoho's wavelet soft-threshold denoising is investigated to remove the pseudo-Gibbs artifacts present in soft-thresholded images. Extraction of the object's information based on translation-invariant wavelet denoising is then studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of extracting the object's information from an interferogram, and that information extraction with translation-invariant wavelet denoising is better than with plain soft-threshold wavelet denoising.
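
Translation-invariant ("cycle spinning") soft-threshold denoising, the technique investigated above, can be sketched in one dimension with a single-level Haar transform; the wavelet choice, decomposition depth, and threshold below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def haar_fwd(x):
    """Single-level orthonormal Haar transform (x must have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, t):
    """Donoho soft threshold: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def ti_denoise(x, t):
    """Average soft-threshold denoising over all circular shifts
    (cycle spinning), which suppresses pseudo-Gibbs artifacts."""
    out = np.zeros_like(x, dtype=float)
    for s in range(x.size):
        a, d = haar_fwd(np.roll(x, s))
        out += np.roll(haar_inv(a, soft(d, t)), -s)
    return out / x.size
```

Averaging over shifts removes the dependence of the reconstruction on where discontinuities fall relative to the wavelet grid, which is exactly the artifact that plain soft thresholding leaves behind.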

  14. Performance of a space-based wavelet compressor for plasma count data on the MMS Fast Plasma Investigation

    NASA Astrophysics Data System (ADS)

    Barrie, A. C.; Smith, S. E.; Dorelli, J. C.; Gershman, D. J.; Yeh, P.; Schiff, C.; Avanov, L. A.

    2017-01-01

    Data compression has been a staple of imaging instruments for years. Recently, plasma measurements have utilized compression with relatively low compression ratios. The Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale (MMS) mission generates data roughly 100 times faster than previous plasma instruments, requiring a higher compression ratio to fit within the telemetry allocation. This study investigates the performance of a space-based compression standard employing a Discrete Wavelet Transform and a Bit Plane Encoder (DWT/BPE) in compressing FPI plasma count data. Data from the first 6 months of FPI operation are analyzed to explore the error modes evident in the data and how to adapt to them. While approximately half of the Dual Electron Spectrometer (DES) maps had some level of loss, it was found that there is little effect on the plasma moments and that errors present in individual sky maps are typically minor. The majority of Dual Ion Spectrometer burst sky maps compressed in a lossless fashion, with no error introduced during compression. Because of induced compression error, the size limit for DES burst images has been increased for Phase 1B. Additionally, it was found that the floating point compression mode yielded better results when images have significant compression error, leading to floating point mode being used for the fast survey mode of operation for Phase 1B. Despite the suggested tweaks, it was found that wavelet-based compression, and a DWT/BPE algorithm in particular, is highly suitable to data compression for plasma measurement instruments and can be recommended for future missions.

  15. BOOK REVIEW: The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    NASA Astrophysics Data System (ADS)

    Ng, J.; Kingsbury, N. G.

    2004-02-01

    wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this (and subsequent) chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. The treatment on wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. 

  16. Image compression with embedded multiwavelet coding

    NASA Astrophysics Data System (ADS)

    Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay

    1996-03-01

    An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.
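
Successive approximation quantization, one of the building blocks listed above, amounts to embedded bit-plane refinement: halve a threshold each pass and move each reconstruction to the midpoint of its new uncertainty interval. A scalar sketch of the decoder-side reconstruction (not the authors' coder):

```python
import numpy as np

def saq_reconstruct(coeffs, passes):
    """Simulate successive approximation quantization: after each pass
    at threshold t, every coefficient is known to within an interval of
    width ~t, so the embedded reconstruction error halves per pass."""
    c = np.asarray(coeffs, dtype=float)
    t = 2.0 ** np.floor(np.log2(np.abs(c).max()))  # initial threshold
    rec = np.zeros_like(c)
    for _ in range(passes):
        residual = c - rec
        significant = np.abs(residual) >= t
        # place newly significant refinements at the interval midpoint 1.5*t
        rec += np.where(significant, np.sign(residual) * 1.5 * t, 0.0)
        t /= 2.0
    return rec
```

Truncating the pass loop earlier yields a coarser but still valid reconstruction, which is what makes the resulting bitstream embedded.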

  17. Optical Wavelet Transform for Fingerprint Identification

    DTIC Science & Technology

    1993-12-15

    requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical... wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial

  18. Wavelet Local Extrema Applied to Image Processing

    DTIC Science & Technology

    1992-12-01

    The research project had two components. In the first part, we developed a numerical method, based on the wavelet transform, for the solution of...on the orthogonal wavelet transform, that adapts the computational resolution in space and time to the regularity of the solution. This scheme saves

  19. Wavelet-Galerkin discretization of hyperbolic equations

    SciTech Connect

    Restrepo, J.M.; Leaf, G.K.

    1994-12-31

    The relative merits of the wavelet-Galerkin solution of hyperbolic partial differential equations, typical of geophysical problems, are quantitatively and qualitatively compared to traditional finite difference and Fourier-pseudo-spectral methods. The wavelet-Galerkin solution presented here is found to be a viable alternative to the two conventional techniques.

  20. 3D steerable wavelets in practice.

    PubMed

    Chenouard, Nicolas; Unser, Michael

    2012-11-01

    We introduce a systematic and practical design for steerable wavelet frames in 3D. Our steerable wavelets are obtained by applying a 3D version of the generalized Riesz transform to a primary isotropic wavelet frame. The novel transform is self-reversible (tight frame) and its elementary constituents (Riesz wavelets) can be efficiently rotated in any 3D direction by forming appropriate linear combinations. Moreover, the basis functions at a given location can be linearly combined to design custom (and adaptive) steerable wavelets. The features of the proposed method are illustrated with the processing and analysis of 3D biomedical data. In particular, we show how those wavelets can be used to characterize directional patterns and to detect edges by means of a 3D monogenic analysis. We also propose a new inverse-problem formalism along with an optimization algorithm for reconstructing 3D images from a sparse set of wavelet-domain edges. The scheme results in high-quality image reconstructions which demonstrate the feature-reduction ability of the steerable wavelets as well as their potential for solving inverse problems.

  1. Improvements of embedded zerotree wavelet (EZW) coding

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-04-01

    In this research, we investigate several improvements of embedded zerotree wavelet (EZW) coding. Several topics addressed include: the choice of wavelet transforms and boundary conditions, the use of arithmetic coder and arithmetic context and the design of encoding order for effective embedding. The superior performance of our improvements is demonstrated with extensive experimental results.

  2. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images through the body. Conventional MRI data are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, compressed sensing (CS) in MR imaging has been proposed to exploit the sparsity of MR images, showing great potential to reduce scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for those disadvantages, this paper first introduces an undersampling scheme named the significance map for sparse wavelet-encoded k-space to speed up data acquisition as well as allow various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, desirable for medical applications. Simulation and experimental results are presented showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high image quality.

  3. Compressive Fresnel digital holography using Fresnelet based sparse representation

    NASA Astrophysics Data System (ADS)

    Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith

    2015-04-01

    Compressive sensing (CS) in digital holography requires only a very small number of pixel-level detections in the hologram plane for accurate image reconstruction, achieved by exploiting the sparsity of the object wave. When the input object fields are non-sparse in the spatial domain, CS demands a suitable sparsification method such as wavelet decomposition. The Fresnelet, a wavelet basis suited to processing Fresnel digital holograms, is an efficient sparsifier for the complex Fresnel field obtained by the Fresnel transform of the object field, and it minimizes the mutual coherence between the sensing and sparsifying matrices involved in CS. The paper demonstrates the merits of Fresnelet-based sparsification in compressive digital Fresnel holography over the conventional method of sparsifying the input object field. Phase-shifting digital Fresnel holography (PSDH) is used to retrieve the complex Fresnel field for the chosen problem. Results from a numerical experiment are presented as a proof of concept.

  4. Using wavelets to learn pattern templates

    NASA Astrophysics Data System (ADS)

    Scott, Clayton D.; Nowak, Robert D.

    2002-07-01

    Despite the success of wavelet decompositions in other areas of statistical signal and image processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown translation and rotation. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR (Template Learning from Atomic Representations), a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain. We discuss several applications, including template learning, pattern classification, and image registration.

  5. Finite element wavelets with improved quantitative properties

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoang; Stevenson, Rob

    2009-08-01

    In [W. Dahmen, R. Stevenson, Element-by-element construction of wavelets satisfying stability and moment conditions, SIAM J. Numer. Anal. 37 (1) (1999) 319-352 (electronic)], finite element wavelets were constructed on polygonal domains or Lipschitz manifolds that are piecewise parametrized by mappings with constant Jacobian determinants. The wavelets could be arranged to have any desired order of cancellation properties, and they generated stable bases for the Sobolev spaces Hs for a range of smoothness indices s (with s<=1 on manifolds). Unfortunately, it appears that the quantitative properties of these wavelets are rather disappointing. In this paper, we modify the construction from the above-mentioned work to obtain finite element wavelets which are much better conditioned.

  6. Wavelets as basis functions to represent the coarse-graining potential in multiscale coarse graining approach

    SciTech Connect

    Maiolo, M.; Vancheri, A.; Krause, R.; Danani, A.

    2015-11-01

In this paper, we apply Multiresolution Analysis (MRA) to develop sparse but accurate representations for the Multiscale Coarse-Graining (MSCG) approximation to the many-body potential of mean force. We rigorously frame the MSCG method within MRA, so that all the instruments of this theory become available together with a multitude of new basis functions, namely the wavelets. The coarse-grained (CG) force field is hierarchically decomposed at different resolution levels, enabling us to choose the most appropriate wavelet family for each physical interaction without requiring a priori knowledge of where the details are localized. The representation of the CG potential in this new efficient orthonormal basis concentrates the signal information into a few large expansion coefficients. The multiresolution property of the wavelet transform allows us to isolate and remove noise from the CG force-field reconstruction by thresholding the basis function coefficients in each frequency band independently. We discuss the implementation of our wavelet-based MSCG approach and demonstrate its accuracy using two different condensed-phase systems, i.e., liquid water and methanol. Simulations of liquid argon have also been performed using a one-to-one mapping between atomistic and CG sites. The latter model allows us to verify the accuracy of the method and to test different choices of wavelet families. Furthermore, the results of the computer simulations show that the efficiency and sparsity of the representation of the CG force field can be traced back to the mathematical properties of the chosen family of wavelets, in agreement with what is known from the theory of multiresolution analysis of signals.

  7. Adaptive three-dimensional motion-compensated wavelet transform for image sequence coding

    NASA Astrophysics Data System (ADS)

    Leduc, Jean-Pierre

    1994-09-01

This paper describes a 3D spatio-temporal coding algorithm for the bit-rate compression of digital image sequences. The coding scheme combines several specific components, namely a motion representation based on a four-parameter affine model, a motion-adapted temporal wavelet decomposition along the motion trajectories, and a signal-adapted spatial wavelet transform. The motion estimation is performed with four-parameter affine transformations, also called similitudes, which account for translations, rotations, and scalings. The temporal wavelet filter bank exploits bi-orthogonal linear-phase dyadic decompositions. The 2D spatial decomposition is based on dyadic signal-adaptive filter banks with either para-unitary or bi-orthogonal bases. The adaptive filtering is carried out according to a performance criterion to be optimized under constraints, in order to maximize the compression ratio at the expense of graceful degradations of the subjective image quality. The major principles of the technique are, in the analysis process, to extract and separate the motion contained in the sequences from the spatio-temporal redundancy and, in the compression process, to take the rate-distortion function into account on the basis of spatio-temporal psycho-visual properties so as to achieve the most graceful degradations. To complete the coding scheme, the compression procedure is composed of scalar quantizers, which exploit the 3D spatio-temporal psycho-visual properties of the Human Visual System, and of entropy coders, which finalize the bit-rate compression.
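    The four-parameter similitude (translation, rotation, isotropic scaling) used as the motion model above can be sketched in a few lines. This is illustrative only; the function name and conventions are assumptions, not the paper's code:

    ```python
    import numpy as np

    def similitude(points, scale, theta, tx, ty):
        """Apply a four-parameter 'similitude' motion model to 2-D points:
        isotropic scaling by `scale`, rotation by `theta`, then translation
        by (tx, ty)."""
        c, s = np.cos(theta), np.sin(theta)
        R = scale * np.array([[c, -s], [s, c]])   # scaled rotation matrix
        return points @ R.T + np.array([tx, ty])
    ```

    Because the linear part is a scaled rotation, a similitude preserves angles and multiplies all distances by the same factor, which is what makes it a convenient compromise between pure translation and a full six-parameter affine model.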

  8. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video. Many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on our previous work and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge, while masking is more consistent on the darker side of the edge.

  9. Iterative wavelet thresholding for rapid MRI reconstruction

    NASA Astrophysics Data System (ADS)

    Kayvanrad, Mohammad H.; McKenzie, Charles A.; Peters, Terry M.

    2011-03-01

Following developments in the field of compressive sampling and sparse recovery, one can take advantage of the sparsity of an object, as additional a priori knowledge, to reconstruct it from fewer samples than required by traditional sampling strategies. Since most magnetic resonance (MR) images are sparse in some domain, in this work we consider the problem of MR reconstruction and how this idea can be applied to accelerate MR image/map acquisition. In particular, based on the Papoulis-Gerchberg algorithm, an iterative thresholding algorithm for reconstruction of MR images from limited k-space observations is proposed. The proposed method takes advantage of the sparsity of most MR images in the wavelet domain. Initializing with a minimum-energy reconstruction, the object of interest is reconstructed through a sequence of thresholding and recovery iterations. Furthermore, MR studies often involve the acquisition of multiple images in time that are highly correlated. This correlation can be used as additional knowledge about the object, beside sparsity, to further reduce the reconstruction time. The performance of the proposed algorithms is experimentally evaluated and compared to other state-of-the-art methods. In particular, we show that the quality of reconstruction is higher than that of total variation (TV) regularization and the conventional Papoulis-Gerchberg algorithm, both in the absence and in the presence of noise. Also, phantom experiments show good accuracy in the reconstruction of relaxation maps from a set of highly undersampled k-space observations.
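    A 1-D toy version of this kind of scheme, with a single-level Haar transform standing in for the wavelet sparsifier, can be sketched as follows (illustrative only, not the authors' implementation; function names and parameters are assumed):

    ```python
    import numpy as np

    def haar(x):
        """Single-level Haar analysis: approximation and detail coefficients."""
        a = (x[0::2] + x[1::2]) / np.sqrt(2)
        d = (x[0::2] - x[1::2]) / np.sqrt(2)
        return a, d

    def ihaar(a, d):
        """Inverse of the single-level Haar transform."""
        x = np.empty(2 * a.size)
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        return x

    def pg_reconstruct(y, mask, n_iter=100, thresh=0.1):
        """Papoulis-Gerchberg-style iterative thresholding.
        y    : full-length array of k-space samples (zeros where unobserved)
        mask : boolean array marking the observed frequencies"""
        x = np.real(np.fft.ifft(y))                 # minimum-energy (zero-filled) start
        for _ in range(n_iter):
            a, d = haar(x)
            d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft-threshold details
            x = ihaar(a, d)
            X = np.fft.fft(x)
            X[mask] = y[mask]                       # re-impose the measured k-space data
            x = np.real(np.fft.ifft(X))
        return x
    ```

    Each iteration alternates a sparsity-promoting step (shrinking wavelet details) with a data-consistency step (restoring the measured frequencies), which is the basic recovery loop the abstract describes.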

  10. Seamless multiresolution isosurfaces using wavelets

    SciTech Connect

    Udeshi, T.; Hudson, R.; Papka, M. E.

    2000-04-11

    Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The solution combines existing isosurface generation algorithms and wavelet theory to produce a real-time solution to multiple-resolution isosurfaces.

  11. Wavelet Neural Network Using Multiple Wavelet Functions in Target Threat Assessment

    PubMed Central

    Guo, Lihong; Duan, Hong

    2013-01-01

Target threat assessment is a key issue in collaborative attack. To improve the accuracy and usefulness of target threat assessment in aerial combat, we propose a variant of wavelet neural networks, the MWFWNN network, to solve threat assessment. Selecting an appropriate wavelet function is difficult when constructing a wavelet neural network, so this paper proposes a mother wavelet selection algorithm based on minimum mean squared error and then constructs the MWFWNN network using this algorithm. First, a wavelet function library is established; second, a wavelet neural network is constructed with each mother wavelet in the library, and the wavelet function parameters and network weights are updated according to the relevant modification formulas. Each constructed wavelet neural network is evaluated on the training set, and the mother wavelet with minimum mean squared error is chosen to build the MWFWNN network. Experimental results show that the mean squared error is 1.23 × 10−3, which is better than WNN, BP, and PSO_SVM. The target threat assessment model based on the MWFWNN has good predictive ability, so it can quickly and accurately complete target threat assessment. PMID:23509436
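    The selection loop — fit with each candidate mother wavelet, keep the one with minimum MSE — can be sketched with a plain least-squares fit standing in for network training. The wavelet library, names, and fitting scheme below are illustrative assumptions, not the paper's algorithm:

    ```python
    import numpy as np

    def wavelet_library():
        """A small, assumed library of candidate mother wavelets."""
        return {
            "mexican_hat": lambda u: (1 - u**2) * np.exp(-u**2 / 2),
            "morlet":      lambda u: np.cos(5 * u) * np.exp(-u**2 / 2),
            "gaussian1":   lambda u: -u * np.exp(-u**2 / 2),
        }

    def select_mother_wavelet(t, y, n_units=8):
        """Fit y with a linear combination of translated/dilated copies of each
        candidate wavelet; return the name and MSE of the best-fitting one."""
        best, best_mse = None, np.inf
        centers = np.linspace(t.min(), t.max(), n_units)
        width = (t.max() - t.min()) / n_units
        for name, psi in wavelet_library().items():
            Phi = np.stack([psi((t - c) / width) for c in centers], axis=1)
            coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            mse = np.mean((Phi @ coef - y) ** 2)
            if mse < best_mse:
                best, best_mse = name, mse
        return best, best_mse
    ```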

  12. Lossless compression of 3D seismic data using a horizon displacement compensated 3D lifting scheme

    NASA Astrophysics Data System (ADS)

    Meftah, Anis; Antonini, Marc; Ben Amar, Chokri

    2010-01-01

In this paper we present a method to optimize the computation of the wavelet transform for 3D seismic data while reducing the energy of the coefficients to a minimum. This allows us to reduce the entropy of the signal and thus increase the compression ratio. The proposed method exploits the geometrical information contained in the 3D seismic data to optimize the computation of the wavelet transform: the classic filtering is replaced by filtering that follows the horizons contained in the 3D seismic images. Applying this approach in two dimensions yields wavelet coefficients with the lowest energy. Experiments show that our method saves an extra 8% of the object's size compared to the classic wavelet transform.
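    The horizon-following filtering builds on the lifting scheme. The generic predict/update machinery can be sketched with the standard reversible integer 5/3 filter (a textbook example under assumed periodic boundaries, not the authors' horizon-compensated variant):

    ```python
    import numpy as np

    def legall53_forward(x):
        """One level of the integer 5/3 lifting wavelet (even-length input).
        Predict: d[n] = odd[n] - floor((even[n] + even[n+1]) / 2)
        Update:  a[n] = even[n] + floor((d[n-1] + d[n] + 2) / 4)"""
        e = x[0::2].astype(np.int64)
        o = x[1::2].astype(np.int64)
        d = o - ((e + np.roll(e, -1)) >> 1)        # predict step (periodic wrap)
        a = e + ((np.roll(d, 1) + d + 2) >> 2)     # update step
        return a, d

    def legall53_inverse(a, d):
        """Undo the update, then the predict step, recovering x exactly."""
        e = a - ((np.roll(d, 1) + d + 2) >> 2)
        o = d + ((e + np.roll(e, -1)) >> 1)
        x = np.empty(2 * a.size, dtype=np.int64)
        x[0::2], x[1::2] = e, o
        return x
    ```

    Lifting is invertible by construction, whatever predictor is used, which is exactly why the predictor can be redirected along seismic horizons without losing perfect reconstruction.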

  13. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as φ_γ, are discrete-time signals, where γ represents the dictionary index. Such a dictionary is typically complete or overcomplete. Given a dictionary, the goal is to obtain a representation of the image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
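    Of the four dictionary methods named above, Matching Pursuits is the simplest to sketch. A minimal, generic implementation, assuming unit-norm dictionary columns (not the authors' code):

    ```python
    import numpy as np

    def matching_pursuit(signal, dictionary, n_atoms=10):
        """Greedy Matching Pursuit: at each step pick the unit-norm atom most
        correlated with the residual and subtract its projection."""
        residual = signal.astype(float).copy()
        coeffs = np.zeros(dictionary.shape[1])
        for _ in range(n_atoms):
            corr = dictionary.T @ residual          # inner products with all atoms
            k = int(np.argmax(np.abs(corr)))
            coeffs[k] += corr[k]
            residual -= corr[k] * dictionary[:, k]  # remove the chosen component
        return coeffs, residual
    ```

    With an overcomplete dictionary the greedy choice is what makes MP tractable; BP and MOF instead solve global optimization problems over all coefficients at once.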

  15. Image compression with Iris-C

    NASA Astrophysics Data System (ADS)

    Gains, David

    2009-05-01

    Iris-C is an image codec designed for streaming video applications that demand low bit rate, low latency, lossless image compression. To achieve compression and low latency the codec features the discrete wavelet transform, Exp-Golomb coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec accepts input video from both the YUV and YCOCG colour spaces, but the system can also operate on Bayer RAW data read directly from an image sensor. Testing shows that the Iris-C codec is competitive with the Dirac low delay syntax codec which is typically regarded as the state-of-the-art low latency, lossless video compressor.

  16. Maximally Localized Radial Profiles for Tight Steerable Wavelet Frames.

    PubMed

    Pad, Pedram; Uhlmann, Virginie; Unser, Michael

    2016-03-22

A crucial component of steerable wavelets is the radial profile of the generating function in the frequency domain. In this work, we present an infinite-dimensional optimization scheme that helps us find the optimal profile for a given criterion over the space of tight frames. We consider two classes of criteria that measure the localization of the wavelet. The first class specifies the spatial localization of the wavelet profile, and the second that of the resulting wavelet coefficients. From these metrics and the proposed algorithm, we construct tight wavelet frames that are optimally localized and provide their analytical expression. In particular, one of the considered criteria allows us to recover the popular Simoncelli wavelet profile. Finally, the investigation of local orientation estimation, image reconstruction from detected contours in the wavelet domain, and denoising indicates that optimizing wavelet localization improves the performance of steerable wavelets, since our new wavelets outperform the traditional ones.

  18. Energy and Quality Evaluation for Compressive Sensing of Fetal Electrocardiogram Signals

    PubMed Central

    Da Poian, Giulia; Brandalise, Denis; Bernardini, Riccardo; Rinaldo, Roberto

    2016-01-01

    This manuscript addresses the problem of non-invasive fetal Electrocardiogram (ECG) signal acquisition with low power/low complexity sensors. A sensor architecture using the Compressive Sensing (CS) paradigm is compared to a standard compression scheme using wavelets in terms of energy consumption vs. reconstruction quality, and, more importantly, vs. performance of fetal heart beat detection in the reconstructed signals. We show in this paper that a CS scheme based on reconstruction with an over-complete dictionary has similar reconstruction quality to one based on wavelet compression. We also consider, as a more important figure of merit, the accuracy of fetal beat detection after reconstruction as a function of the sensor power consumption. Experimental results with an actual implementation in a commercial device show that CS allows significant reduction of energy consumption in the sensor node, and that the detection performance is comparable to that obtained from original signals for compression ratios up to about 75%. PMID:28025510
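    The sparse-recovery step in a CS pipeline like the one benchmarked above can be illustrated generically. Orthogonal Matching Pursuit stands in here for the paper's overcomplete-dictionary reconstruction; the matrix sizes, sparsity level, and function name are illustrative assumptions:

    ```python
    import numpy as np

    def omp(Phi, y, sparsity):
        """Orthogonal Matching Pursuit: greedily pick the column of Phi most
        correlated with the residual, then re-fit all picked coefficients by
        least squares and update the residual."""
        residual = y.astype(float).copy()
        support, coef = [], None
        for _ in range(sparsity):
            k = int(np.argmax(np.abs(Phi.T @ residual)))
            support.append(k)
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x
    ```

    The energy story in the abstract comes from the measurement side: the sensor only computes a few random projections `y = Phi @ x`, shifting the expensive reconstruction to the receiver.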

  19. Bit-plane-channelized hotelling observer for predicting task performance using lossy-compressed images

    NASA Astrophysics Data System (ADS)

    Schmanske, Brian M.; Loew, Murray H.

    2003-05-01

A technique for assessing the impact of lossy wavelet-based image compression on signal detection tasks is presented. A medical image's value is based on its ability to support clinical decisions such as detecting and diagnosing abnormalities. Image quality of compressed images is, however, often stated in terms of mathematical metrics such as mean square error. The presented technique provides a more suitable measure of image degradation by building on the channelized Hotelling observer model, which has been shown to predict human performance of signal detection tasks in noise-limited images. The technique first decomposes an image into its constituent wavelet subband coefficient bit-planes. Channel responses for the individual subband bit-planes are computed, combined, and processed with a Hotelling observer model to provide a measure of signal detectability versus compression ratio. This allows a user to determine how much compression can be tolerated before signal detectability drops below a certain threshold.
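    The first step, splitting coefficients into bit-planes, can be sketched for 8-bit magnitudes (an illustrative helper, not the authors' code):

    ```python
    import numpy as np

    def bit_planes(coeffs):
        """Decompose non-negative 8-bit coefficients into their eight binary
        bit-planes, most significant plane first."""
        c = np.asarray(coeffs, dtype=np.uint8)
        return [(c >> b) & 1 for b in range(7, -1, -1)]
    ```

    Summing the planes back with their bit weights recovers the original coefficients, which is what lets channel responses be computed and recombined plane by plane.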

  20. Discrete directional wavelet bases and frames: analysis and applications

    NASA Astrophysics Data System (ADS)

    Dragotti, Pier Luigi; Velisavljevic, Vladan; Vetterli, Martin; Beferull-Lozano, Baltasar

    2003-11-01

    The application of the wavelet transform in image processing is most frequently based on a separable construction. Lines and columns in an image are treated independently and the basis functions are simply products of the corresponding one dimensional functions. Such method keeps simplicity in design and computation, but is not capable of capturing properly all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted and this leads to a multi-directional frame. This transform can be applied in many areas like denoising, non-linear approximation and compression. The results on non-linear approximation and denoising show interesting gains compared to the standard two-dimensional analysis.

  1. Optimization of integer wavelet transforms based on difference correlation structures.

    PubMed

    Li, Hongliang; Liu, Guizhong; Zhang, Zhongwei

    2005-11-01

In this paper, a novel lifting integer wavelet transform based on difference correlation structure (DCCS-LIWT) is proposed. First, we establish a relationship between the performance of a linear predictor and the difference correlations of an image. The obtained results provide a theoretical foundation for the construction of the optimal lifting filters. Then, the optimal prediction lifting coefficients in the sense of least-square prediction error are derived. DCCS-LIWT puts heavy emphasis on the image's inherent dependence. A distinct feature of this method is the use of the variance-normalized autocorrelation function of the difference image to construct a linear predictor and adapt the predictor to varying image sources. The proposed scheme also allows separate calculation of the lifting filters for the horizontal and vertical orientations. Experimental evaluation shows that the proposed method produces better results than other well-known integer transforms for lossless image compression.
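    The idea of designing the prediction lifting filter by least squares can be sketched in 1-D (a two-tap sketch under assumed boundary handling, not the full DCCS-LIWT construction):

    ```python
    import numpy as np

    def optimal_predict_weights(x):
        """Least-squares design of a two-tap lifting prediction filter: each odd
        sample is predicted from its left and right even neighbours, minimizing
        the squared prediction error over the training signal."""
        e, o = x[0::2], x[1::2]
        n = min(e.size - 1, o.size)
        A = np.stack([e[:n], e[1:n + 1]], axis=1)   # left, right even neighbours
        w, *_ = np.linalg.lstsq(A, o[:n], rcond=None)
        return w
    ```

    For a smooth signal the solution sits near the fixed (1/2, 1/2) predictor of standard lifting; the point of an adaptive design is that it moves away from those weights when the source statistics demand it.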

  2. High order finite volume methods on wavelet-adapted grids with local time-stepping on multicore architectures for the simulation of shock-bubble interactions

    NASA Astrophysics Data System (ADS)

    Hejazialhosseini, Babak; Rossinelli, Diego; Bergdorf, Michael; Koumoutsakos, Petros

    2010-11-01

We present a space-time adaptive solver for single- and multi-phase compressible flows that couples average-interpolating wavelets with high-order finite volume schemes. The solver introduces the concept of wavelet blocks, handles large jumps in resolution, and employs local time-stepping for efficient time integration. We demonstrate that the inherently sequential wavelet-based adaptivity can be implemented efficiently on multicore computer architectures using task-based parallelism. We validate our computational method on a number of benchmark problems and present simulations of shock-bubble interaction at different Mach numbers, demonstrating the accuracy and computational performance of the method.

  3. A channel differential EZW coding scheme for EEG data compression.

    PubMed

    Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice

    2011-11-01

    In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using the embedded zero-tree wavelet encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.
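    The representative/differential channel coding step can be sketched as follows. This is a generic sketch assuming cluster labels are already available (e.g. from k-means), with the first channel of each cluster standing in for the representative; names are illustrative:

    ```python
    import numpy as np

    def cluster_encode(channels, labels):
        """Differential multichannel coding: within each cluster the first
        member is kept verbatim (the representative) and every other member
        is replaced by its difference from that representative."""
        channels = np.asarray(channels, dtype=float)
        reps = {}                        # cluster label -> representative row index
        encoded = channels.copy()
        for i, lab in enumerate(labels):
            if lab not in reps:
                reps[lab] = i            # first channel seen becomes representative
            else:
                encoded[i] = channels[i] - channels[reps[lab]]
        return encoded, reps

    def cluster_decode(encoded, labels, reps):
        """Invert cluster_encode by adding each representative back."""
        decoded = encoded.copy()
        for i, lab in enumerate(labels):
            if reps[lab] != i:
                decoded[i] = encoded[i] + encoded[reps[lab]]
        return decoded
    ```

    When channels within a cluster are highly correlated, the difference signals carry far less energy than the originals, which is what the subsequent EZW coder exploits.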

  4. Image coding with geometric wavelets.

    PubMed

    Alani, Dror; Averbuch, Amir; Dekel, Shai

    2007-01-01

This paper describes a new and efficient method for low bit-rate image coding which is based on recent developments in the theory of multivariate nonlinear piecewise polynomial approximation. It combines a binary space partition scheme with geometric wavelet (GW) tree approximation so as to efficiently capture curve singularities and provide a sparse representation of the image. The GW method successfully competes with state-of-the-art wavelet methods such as the EZW, SPIHT, and EBCOT algorithms. We report a gain of about 0.4 dB over the SPIHT and EBCOT algorithms at the bit rate of 0.0625 bits per pixel (bpp). It also outperforms other recent methods that are based on "sparse geometric representation." For example, we report a gain of 0.27 dB over the Bandelets algorithm at 0.1 bpp. Although the algorithm is computationally intensive, its time complexity can be significantly reduced by collecting a "global" GW n-term approximation to the image from a collection of GW trees, each constructed separately over tiles of the image.

  5. Three-dimensional wavelet transform and multiresolution surface reconstruction from volume data

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Sloan, Kenneth R., Jr.

    1995-04-01

Multiresolution surface reconstruction from volume data is very useful in medical imaging, data compression, and multiresolution modeling. This paper presents a hierarchical structure for extracting multiresolution surfaces from volume data by using a 3-D wavelet transform. The hierarchical scheme is used to visualize different levels of detail of the surface and allows a user to explore different features of the surface at different scales. We use 3-D surface curvature as a smoothness condition to control the hierarchical level, and the distance error between the reconstructed surface and the original data as the stopping criterion. A 3-D wavelet transform provides an appropriate hierarchical structure to build the volume pyramid. It can be constructed from the tensor products of 1-D wavelet transforms in three subspaces. We choose symmetric smoothing filters such as Haar, linear, pseudo-Coiflet, and cubic B-spline, and their corresponding orthogonal wavelets, to build the volume pyramid. The surface is reconstructed at each level of volume data by using the cell interpolation method. Experimental results are shown through the comparison of the different filters based on the distance errors of the surfaces.

  6. Detecting the BAO using Discrete Wavelet Packets

    NASA Astrophysics Data System (ADS)

    Garcia, Noel Anthony; Wu, Yunyun; Kadowaki, Kevin; Pando, Jesus

    2017-01-01

We use wavelet packets to investigate the clustering of matter on galactic scales in search of the Baryon Acoustic Oscillations (BAO). We do so in two ways. We develop a wavelet packet approach to measure the power spectrum and apply this method to the CMASS galaxy catalogue from the Sloan Digital Sky Survey (SDSS). We compare the resulting power spectrum to published BOSS results by measuring a parameter β that compares our wavelet-detected oscillations to the results from the SDSS collaboration. We find that β = 1, indicating that our wavelet packet methods detect the BAO at a similar level as traditional Fourier techniques. We then use wavelet packets to decompose, denoise, and reconstruct the galaxy density field. Using this denoised field, we compute the standard two-point correlation function. We are able to successfully detect the BAO at r ≈ 105 h⁻¹ Mpc, in line with previous SDSS results. We conclude that wavelet packets reproduce the results of the key clustering statistics computed by other means, while showing distinct advantages in suppressing high-frequency noise and keeping information localized.
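    A full wavelet-packet tree differs from the plain wavelet transform in that the detail bands are split as well as the approximations. A toy Haar-based sketch of per-band power (illustrative, not the catalogue pipeline):

    ```python
    import numpy as np

    def haar_packet_powers(x, levels):
        """Full Haar wavelet-packet tree: recursively split every band with the
        Haar low/high pair and return the mean power of each leaf band."""
        bands = [np.asarray(x, dtype=float)]
        for _ in range(levels):
            nxt = []
            for b in bands:
                nxt.append((b[0::2] + b[1::2]) / np.sqrt(2))   # low-pass half
                nxt.append((b[0::2] - b[1::2]) / np.sqrt(2))   # high-pass half
            bands = nxt
        return [np.mean(b ** 2) for b in bands]
    ```

    Because each Haar split is orthonormal, the total energy is preserved across the leaves, so the leaf powers form a crude, localized power spectrum.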

  7. Multichannel Compressive Sensing MRI Using Noiselet Encoding

    PubMed Central

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548

  9. Application of the wavelet method for the simultaneous quantitative determination of benazepril and hydrochlorothiazide in their mixtures.

    PubMed

    Dinç, Erdal; Baleanu, Dumitru

    2004-01-01

The discrete and continuous wavelet transforms were applied to the overlapping signal analysis of the ratio data signal for simultaneous quantitative determination of the title compounds in samples. The ratio spectra data of the binary mixtures containing benazepril (BE) and hydrochlorothiazide (HCT) were transferred as data vectors into the wavelet domain. Signal compression, followed by a one-dimensional continuous wavelet transform (CWT), was used to obtain coincident transformed signals for pure BE and HCT and their mixtures. The coincident transformed amplitudes corresponding to both maximum and minimum points allowed construction of calibration graphs for each compound in the binary mixture. The validity of the CWT calibrations was tested by analyzing synthetic mixtures of the investigated compounds, and successful results were obtained. All calculations were performed in EXCEL, C++, and MATLAB 6.5 software. The obtained results indicated that our approach is flexible and applicable to binary mixture analysis.

  10. Applications of a fast, continuous wavelet transform

    SciTech Connect

    Dress, W.B.

    1997-02-01

A fast, continuous wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heartbeats. Audio reconstruction is aided by selection of desired regions in the 2-D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.
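    The core idea — sample the mother wavelet directly in the frequency domain, then do one FFT of the data and one inverse FFT per scale — can be sketched with an analytic Morlet wavelet (an assumed choice for illustration; the report's wavelets are modeled on ballistocardiogram content):

    ```python
    import numpy as np

    def fft_cwt(signal, scales, omega0=6.0):
        """Continuous wavelet transform computed by sampling an analytic Morlet
        wavelet in the frequency domain: one FFT of the data, one inverse FFT
        per scale."""
        n = signal.size
        omega = 2.0 * np.pi * np.fft.fftfreq(n)       # radians per sample
        X = np.fft.fft(signal)
        rows = []
        for s in scales:
            # Morlet spectrum re-centred by the scale; analytic: positive freqs only
            psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - omega0) ** 2) * (omega > 0)
            rows.append(np.fft.ifft(X * np.conj(psi_hat)) * np.sqrt(s))
        return np.array(rows)
    ```

    Because the scale list is chosen freely rather than dyadically, the scale-translation grid is indeed independent of the time-domain sampling, as the abstract emphasizes.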

  11. Use of Fresnelets for phase-shifting digital hologram compression.

    PubMed

    Darakis, Emmanouil; Soraghan, John J

    2006-12-01

    Fresnelets are wavelet-like base functions specially tailored for digital holography applications. We introduce their use in phase-shifting interferometry (PSI) digital holography for the compression of such holographic data. Two compression methods are investigated. One uses uniform quantization of the Fresnelet coefficients followed by lossless coding, and the other uses set partitioning in hierarchical trees (SPIHT) coding. Quantization and lossless coding of the original data are used to compare the performance of the proposed algorithms. The comparison reveals that the Fresnelet transform of phase-shifting holograms in combination with SPIHT or uniform quantization can be used very effectively for the compression of holographic data. The performance of the new compression schemes is demonstrated on real PSI digital holographic data.

  12. Wavelet frames and admissibility in higher dimensions

    NASA Astrophysics Data System (ADS)

    Führ, Hartmut

    1996-12-01

    This paper is concerned with the relations between discrete and continuous wavelet transforms on k-dimensional Euclidean space. We start with the construction of continuous wavelet transforms with the help of square-integrable representations of certain semidirect products, thereby generalizing results of Bernier and Taylor. We then turn to frames of L2(Rk) and to the question of when the functions occurring in a given frame are admissible for a given continuous wavelet transform. For certain frames we give a characterization which generalizes a result of Daubechies to higher dimensions.

  13. Wavelet Applications for Flight Flutter Testing

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.

    1999-01-01

    Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.

  14. Optical Planar Discrete Fourier and Wavelet Transforms

    NASA Astrophysics Data System (ADS)

    Cincotti, Gabriella; Moreolo, Michela Svaluto; Neri, Alessandro

    2007-10-01

    We present all-optical architectures to perform discrete wavelet transform (DWT), wavelet packet (WP) decomposition and discrete Fourier transform (DFT) using planar lightwave circuits (PLC) technology. Any compact-support wavelet filter can be implemented as an optical planar two-port lattice-form device, and different subband filtering schemes are possible to denoise or multiplex optical signals. We consider both parallel and serial input cases. We design a multiport encoder/decoder that is able to generate/process optical codes simultaneously and a flexible logarithmic wavelength multiplexer, with a flat-top profile and reduced crosstalk.

  15. Transionospheric signal detection with chirped wavelets

    SciTech Connect

    Doser, A.B.; Dunham, M.E.

    1997-11-01

    Chirped wavelets are utilized to detect dispersed signals in the joint time scale domain. Specifically, pulses that become dispersed by transmission through the ionosphere and are received by satellites as nonlinear chirps are investigated. Since the dispersion greatly lowers the signal to noise ratios, it is difficult to isolate the signals in the time domain. Satellite data are examined with discrete wavelet expansions. Detection is accomplished via a template matching threshold scheme. Quantitative experimental results demonstrate that the chirped wavelet detection scheme is successful in detecting the transionospheric pulses at very low signal to noise ratios.
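
    The template-matching threshold scheme can be illustrated with a short sketch (not the authors' code; the chirp parameters, noise level, and threshold are invented for the example): a unit-energy chirp template is slid across the noisy record, the correlation is normalized by the local signal energy, and detections are declared wherever the score exceeds a threshold.

```python
import numpy as np

def chirp_template(n, f0, f1, fs):
    """Unit-energy, windowed nonlinear (cubic-phase) chirp template."""
    t = np.arange(n) / fs
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**3 / (3 * t[-1]**2))
    w = np.sin(phase) * np.hanning(n)
    return w / np.linalg.norm(w)

def detect(signal, template, threshold):
    """Sliding correlation normalized by local energy; scores lie in [-1, 1]."""
    n = len(template)
    scores = np.correlate(signal, template, mode="valid")
    energy = np.sqrt(np.convolve(signal**2, np.ones(n), mode="valid"))
    scores = scores / np.maximum(energy, 1e-12)
    return np.flatnonzero(scores > threshold), scores

rng = np.random.default_rng(0)
fs, n = 1000.0, 256
tpl = chirp_template(n, f0=50.0, f1=5.0, fs=fs)
x = 0.05 * rng.standard_normal(4096)      # noise floor
x[1500:1500 + n] += 0.8 * tpl             # buried dispersed pulse
hits, scores = detect(x, tpl, threshold=0.5)
```

Because the chirp's broad bandwidth makes its autocorrelation narrow, the normalized score spikes only where the template aligns with the dispersed pulse, even when the pulse is invisible in the raw time series.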

  16. Wavelet analysis of fusion plasma transients

    SciTech Connect

    Dose, V.; Venus, G.; Zohm, H.

    1997-02-01

    Analysis of transient signals in the diagnostics of fusion plasmas often requires the simultaneous consideration of their time and frequency information. The newly emerging technique of wavelet analysis covers both the time and frequency domains. Therefore it can be a valuable tool for the analysis of transients. In this paper the basic method of wavelet analysis is described. As an example, wavelet analysis is applied to the well-known phenomena of mode locking and fishbone instability. The results quantify the current qualitative understanding of these events in terms of instantaneous frequencies and amplitudes and encourage applications of the method to other problems. © 1997 American Institute of Physics.

  17. Significance tests for the wavelet cross spectrum and wavelet linear coherence

    NASA Astrophysics Data System (ADS)

    Ge, Z.

    2008-12-01

    This work attempts to develop significance tests for the wavelet cross spectrum and the wavelet linear coherence as a follow-up study on Ge (2007). Conventional approaches that are used by Torrence and Compo (1998) based on stationary background noise time series were used here in estimating the sampling distributions of the wavelet cross spectrum and the wavelet linear coherence. The sampling distributions are then used for establishing significance levels for these two wavelet-based quantities. In addition to these two wavelet quantities, properties of the phase angle of the wavelet cross spectrum of, or the phase difference between, two Gaussian white noise series are discussed. It is found that the tangent of the principal part of the phase angle approximately has a standard Cauchy distribution and the phase angle is uniformly distributed, which makes it impossible to establish significance levels for the phase angle. The simulated signals clearly show that, when there is no linear relation between the two analysed signals, the phase angle disperses into the entire range of [-π,π] with fairly high probabilities for values close to ±π to occur. Conversely, when linear relations are present, the phase angle of the wavelet cross spectrum settles around an associated value with considerably reduced fluctuations. When two signals are linearly coupled, their wavelet linear coherence will attain values close to one. The significance test of the wavelet linear coherence can therefore be used to complement the inspection of the phase angle of the wavelet cross spectrum. The developed significance tests are also applied to actual data sets, simultaneously recorded wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan. Significance levels of the wavelet cross spectrum and the wavelet linear coherence between the winds and the waves reasonably separated meaningful peaks from those generated by randomness in the data set. As with simulated
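
    The Monte Carlo flavor of such significance tests can be mimicked in a few lines. The sketch below is a Fourier-coherence stand-in for the wavelet quantities (segment counts, trial counts, and FFT sizes are invented for the example): coherence is estimated between many pairs of independent white-noise series, and the 95th percentile of the resulting null distribution serves as the significance level.

```python
import numpy as np

def msc(x, y, nseg, nfft):
    """Magnitude-squared coherence averaged over non-overlapping segments."""
    win = np.hanning(nfft)
    Pxx = Pyy = 0.0
    Pxy = 0.0 + 0.0j
    for k in range(nseg):
        X = np.fft.rfft(x[k*nfft:(k+1)*nfft] * win)
        Y = np.fft.rfft(y[k*nfft:(k+1)*nfft] * win)
        Pxx = Pxx + np.abs(X)**2
        Pyy = Pyy + np.abs(Y)**2
        Pxy = Pxy + X * np.conj(Y)
    return np.abs(Pxy)**2 / (Pxx * Pyy)

rng = np.random.default_rng(1)
nseg, nfft = 8, 128
# null distribution: coherence between independent white-noise pairs
null = np.concatenate([
    msc(rng.standard_normal(nseg*nfft), rng.standard_normal(nseg*nfft),
        nseg, nfft)
    for _ in range(200)
])
level95 = np.quantile(null, 0.95)   # coherence above this counts as significant
```

With 8 averaged segments the 95% level lands well below 1, which is why unaveraged (single-segment) coherence estimates are uninformative: they equal 1 identically.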

  18. Region segmentation techniques for object-based image compression: a review

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.; Ritter, Gerhard X.

    2004-10-01

    Image compression based on transform coding appears to be approaching an asymptotic bit rate limit for application-specific distortion levels. However, a new compression technology, called object-based compression (OBC), promises improved rate-distortion performance at higher compression ratios. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. Advantages of OBC include efficient representation of commonly occurring textures and shapes in terms of pointers into a compact codebook of region contents and boundary primitives. This facilitates fast decompression via substitution, at the cost of codebook search in the compression step. Segmentation cost and error are significant disadvantages in current OBC implementations. Several innovative techniques have been developed for region segmentation, including (a) moment-based analysis, (b) texture representation in terms of a syntactic grammar, and (c) transform coding approaches such as the wavelet-based compression used in MPEG-7 or JPEG-2000. Region-based characterization with variance templates is better understood, but lacks the locality of wavelet representations. In practice, tradeoffs are made between representational fidelity, computational cost, and storage requirement. This paper overviews current techniques for automatic region segmentation and representation, especially those that employ wavelet classification and region growing techniques. Implementational discussion focuses on complexity measures and performance metrics such as segmentation error and computational cost.

  19. Hyperspectral images lossless compression using the 3D binary EZW algorithm

    NASA Astrophysics Data System (ADS)

    Cheng, Kai-jen; Dill, Jeffrey

    2013-02-01

    This paper presents a transform-based lossless compression method for hyperspectral images which is inspired by Shapiro's (1993) EZW algorithm. The proposed compression method uses a hybrid transform which includes an integer Karhunen-Loeve transform (KLT) and an integer discrete wavelet transform (DWT). The integer KLT is employed to eliminate correlations among the bands of the hyperspectral image. The integer 2D DWT is applied to eliminate the correlations in the spatial dimensions and produce wavelet coefficients. These coefficients are then coded by a proposed binary EZW algorithm. The binary EZW eliminates the subordinate pass of conventional EZW by coding residual values, and produces binary sequences. The binary EZW algorithm combines the merits of the well-known EZW and SPIHT algorithms, and it is computationally simpler for lossless compression. The proposed method was applied to AVIRIS images and compared to other state-of-the-art image compression techniques. The results show that the proposed lossless image compression is more efficient and also achieves a higher compression ratio than other algorithms.
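
    The reversible integer wavelets that make such lossless pipelines possible can be illustrated with the simplest case, the integer Haar (S-) transform. This is a generic sketch, not the paper's KLT+DWT chain; the point is that floor arithmetic makes the forward map exactly invertible, so no information is lost before entropy coding.

```python
import numpy as np

def int_haar_fwd(x):
    """Reversible integer Haar (S-transform): exact integer-to-integer map."""
    x = np.asarray(x, dtype=np.int64)
    d = x[1::2] - x[0::2]        # detail: pairwise difference
    s = x[0::2] + (d >> 1)       # approximation: floor of the pair mean
    return s, d

def int_haar_inv(s, d):
    x0 = s - (d >> 1)            # the floor term is recomputed exactly
    x1 = x0 + d
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = x0, x1
    return out

rng = np.random.default_rng(2)
band = rng.integers(0, 4096, size=512)      # e.g. one row of 12-bit data
s, d = int_haar_fwd(band)
recovered = int_haar_inv(s, d)              # bit-exact reconstruction
```

The `>>` shift is an arithmetic (floor) division by two even for negative differences, which is what makes the update step exactly undoable.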

  20. Wavelet differential neural network observer.

    PubMed

    Chairez, Isaac

    2009-09-01

    State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with a state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain the upper bounds for the weights dynamics as well as for the mean squared estimation error. Two numeric examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations, and their parameters are unknown.

  1. Nonuniform spatially adaptive wavelet packets

    NASA Astrophysics Data System (ADS)

    Carre, Philippe; Fernandez-Maloigne, Christine

    2000-12-01

    In this paper, we propose a new decomposition scheme for spatially adaptive wavelet packets. Contrary to the double tree algorithm, our method is non-uniform and shift-invariant in the time and frequency domains, and is minimal for an information cost function. We propose some restrictions to our algorithm to reduce the complexity and permit us to provide time-frequency partitions of the signal in agreement with its structure. This new 'totally' non-uniform transform, better adapted than Malvar, wavelet packet, or dyadic double-tree decompositions, allows the study of all possible time-frequency partitions with the only restriction that the blocks are rectangular. It permits one to obtain a satisfying time-frequency representation, and is applied to the study of EEG signals.

  2. Signal Approximation with a Wavelet Neural Network

    DTIC Science & Technology

    1992-12-01

    specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the...accurately approximated with a WNN trained with irregularly sampled data. Signal approximation, Wavelet neural network .

  3. Discrete multiscale wavelet shrinkage and integrodifferential equations

    NASA Astrophysics Data System (ADS)

    Didas, S.; Steidl, G.; Weickert, J.

    2008-04-01

    We investigate the relation between discrete wavelet shrinkage and integrodifferential equations in the context of simplification and denoising of one-dimensional signals. In the continuous setting, strong connections between these two approaches were discovered in [6] (see references). The key observation is that the wavelet transform can be understood as a derivative operator applied after convolution with a smoothing kernel. In this paper, we extend these ideas to the practically relevant discrete setting with both orthogonal and biorthogonal wavelets. In the discrete case, the behaviour of the smoothing kernels for different scales requires additional investigation. The results of discrete multiscale wavelet shrinkage and related discrete versions of integrodifferential equations are compared with respect to their denoising quality by numerical experiments.
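
    The key observation is easy to verify by hand for the orthogonal Haar case (a minimal sketch, not the paper's biorthogonal setting): the detail channel of one Haar step is exactly a scaled first difference of the signal, i.e. a derivative taken after pairwise smoothing, and the two channels together reconstruct the signal perfectly.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(128)

# one level of the orthonormal Haar transform
a = (x[0::2] + x[1::2]) / np.sqrt(2)     # smoothing channel (local means)
d = (x[0::2] - x[1::2]) / np.sqrt(2)     # detail channel

# the detail channel is a (scaled, sign-flipped) first difference of the signal
first_diff = -(x[1::2] - x[0::2]) / np.sqrt(2)

# perfect reconstruction from the two channels
xr = np.empty_like(x)
xr[0::2] = (a + d) / np.sqrt(2)
xr[1::2] = (a - d) / np.sqrt(2)
```

Shrinking `d` before reconstructing is then literally a nonlinear operation on smoothed derivatives, which is the bridge to the integrodifferential view.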

  4. Digital transceiver implementation for wavelet packet modulation

    NASA Astrophysics Data System (ADS)

    Lindsey, Alan R.; Dill, Jeffrey C.

    1998-03-01

    Current transceiver designs for wavelet-based communication systems are typically reliant on analog waveform synthesis; however, digital processing is an important part of the eventual success of these techniques. In this paper, a transceiver implementation is presented for the recently introduced wavelet packet modulation scheme which moves the analog processing as far as possible toward the antenna. The transceiver is based on the discrete wavelet packet transform, which incorporates level and node parameters for generalized computation of wavelet packets. In this transform no particular structure is imposed on the filter bank save dyadic branching and a maximum level, which is specified a priori and dependent mainly on speed and/or cost considerations. The transmitter/receiver structure takes a binary sequence as input and, based on the desired time-frequency partitioning, processes the signal through demultiplexing, synthesis, analysis, multiplexing, and data determination completely in the digital domain, with the exception of conversion into and out of the analog domain for transmission.

  5. Wavelet Analysis for Acoustic Phased Array

    NASA Astrophysics Data System (ADS)

    Kozlov, Inna; Zlotnick, Zvi

    2003-03-01

    Wavelet spectrum analysis is known to be one of the most powerful tools for exploring quasistationary signals. In this paper we use wavelet techniques to develop a new Direction Finding (DF) algorithm for Acoustic Phased Array (APA) systems. Utilising multi-scale analysis of libraries of wavelets allows us to work with frequency bands instead of individual frequencies of an acoustic source. These frequency bands can be regarded as features extracted from quasistationary signals emitted by a noisy object. For the detection, tracing, and identification of a sound source in a noisy environment we develop a smart algorithm. The essential part of this algorithm is a special interacting procedure between the above-mentioned DF algorithm and the wavelet-based Identification (ID) algorithm developed in [4]. Significant improvement of the basic properties of the receiving APA pattern is achieved.

  6. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and the extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

  7. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is more and more demanding in terms of image compression performance. The instrument resolution, agility, and swath of Earth observation satellites are continuously increasing, multiplying by 10 the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass, and power consumption. Astrium, a leader in the market of combined compression and memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, has a SpaceWire interface for configuring and controlling the device, and is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image, very high speed compression ASIC potentially relevant for the compression of any 2D image with bi-dimensional data correlation, such as Earth observation, scientific data compression… The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach, and the status of the project.

  8. Interactive video compression for remote sensing

    NASA Astrophysics Data System (ADS)

    Maleh, Ray; Boyle, Frank A.; Deignan, Paul B.; Yancey, Jerry W.

    2011-05-01

    Modern day remote video cameras enjoy the ability of producing quality video streams at extremely high resolutions. Unfortunately, the benefit of such technology cannot be realized when the channel between the sensor and the operator restricts the bit-rate of incoming data. In order to cram more information into the available bandwidth, video technologies typically employ compression schemes (e.g. H.264/MPEG 4 standard) which exploit spatial and temporal redundancies. We present an alternative method utilizing region of interest (ROI) based compression. Each region in the incoming scene is assigned a score measuring importance to the operator. Scores may be determined based on the manual selection of one or more objects which are then automatically tracked by the system; or alternatively, listeners may be pre-assigned to various areas that trigger high scores upon the occurrence of customizable events. A multi-resolution wavelet expansion is then used to optimally transmit important regions at higher resolutions and frame rates than less interesting peripheral background objects subject to bandwidth constraints. We show that our methodology makes it possible to obtain high compression ratios while ensuring no loss in overall situational awareness. If combined with modules from traditional video codecs, compression ratios of 100:1 to 1000:1, depending on ROI size, can easily be achieved.
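
    The ROI idea is straightforward to sketch with one level of a 2-D Haar transform (an illustration with an invented mask and frame, not the authors' codec): detail coefficients outside the downsampled region of interest are zeroed, so only the coarse approximation plus a small set of in-ROI details needs to be coded and transmitted.

```python
import numpy as np

def haar2_fwd(img):
    """One level of a 2-D Haar transform: LL approximation + 3 detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

rng = np.random.default_rng(7)
frame = rng.random((64, 64))                  # stand-in for a video frame
mask = np.zeros((64, 64))
mask[16:32, 16:32] = 1                        # operator-selected ROI

ll, lh, hl, hh = haar2_fwd(frame)
m = mask[0::2, 0::2]                          # ROI at the subband resolution
lh, hl, hh = lh * m, hl * m, hh * m           # discard out-of-ROI detail
kept = sum(int(np.count_nonzero(b)) for b in (lh, hl, hh))
```

Everything outside the mask is reconstructed from `ll` alone, i.e. at half resolution, which is the graceful degradation the abstract describes for peripheral background.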

  9. Applications of a fast continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Dress, William B.

    1997-04-01

    A fast, continuous, wavelet transform, justified by appealing to Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and from the standard treatment of the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as representing the continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Although more computationally costly and not represented by an orthogonal basis, the inherent flexibility and shift invariance of the frequency-space wavelets are advantageous for certain applications. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2D representation of the magnitude of the transformed signals. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions.
By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, different features may be extracted from voice
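
    The frequency-space sampling described here can be sketched as follows. This is a generic FFT-based Morlet CWT with invented parameters (an unnormalized analytic Morlet, w0 = 6), not the author's implementation; the point illustrated is that the scale grid `freqs` is chosen freely, independent of the time-domain sampling of the signal.

```python
import numpy as np

def cwt_freq(x, fs, freqs, w0=6.0):
    """CWT computed by sampling an analytic Morlet wavelet in frequency space."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0/fs)     # rad/s
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                        # scale for centre freq f
        psi_hat = np.exp(-0.5 * (s * omega - w0)**2) * (omega > 0)
        out[i] = np.fft.ifft(X * psi_hat)               # one row per scale
    return out

fs = 200.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 12.0 * t)                        # 12 Hz test tone
freqs = np.geomspace(4.0, 40.0, 24)                     # free scale grid
W = cwt_freq(x, fs, freqs)
ridge = freqs[np.argmax(np.abs(W).mean(axis=1))]        # ridge near 12 Hz
```

Selecting ridges or regions in |W| and inverting only those rows is the reconstruction-by-selection step the abstract describes.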

  10. Contour detection based on wavelet differentiation

    NASA Astrophysics Data System (ADS)

    Bezuglov, D.; Kuzin, A.; Voronin, V.

    2016-05-01

    This work proposes a novel algorithm for contour detection based on a high-performance wavelet analysis algorithm for multimedia applications. To reduce the effect of noise on the detected contours, we consider direct and inverse wavelet differentiation. Extensive experimental evaluation on noisy images demonstrates that our contour detection method significantly outperforms competing algorithms. The proposed algorithm provides a means of coupling our system to recognition applications such as the detection and identification of vehicle number plates.

  11. EEG Multiresolution Analysis Using Wavelet Transform

    DTIC Science & Technology

    2007-11-02

    Wavelet transform (WT) is a new multiresolution time-frequency analysis method. WT possesses good localization features in both time and frequency...plays a key role in diagnosing diseases and is useful for both physiological research and medical applications. Using the dyadic wavelet ... transform the EEG signals are successfully decomposed into the alpha rhythm (8-13 Hz), beta rhythm (14-30 Hz), theta rhythm (4-7 Hz), and delta rhythm (0.3-3 Hz) and

  12. Wavelet Characterizations of Multi-Directional Regularity

    NASA Astrophysics Data System (ADS)

    Slimane, Mourad Ben

    2011-05-01

    The study of d-dimensional traces of functions of m variables leads to directional behaviors. The purpose of this paper is two-fold. Firstly, we extend the notion of one-direction pointwise Hölder regularity introduced by Jaffard to multi-directions. Secondly, we characterize multi-directional pointwise regularity by Triebel anisotropic wavelet coefficients (resp. leaders), and also by the Calderón anisotropic continuous wavelet transform.

  13. A Wavelet Model for Vocalic Speech Coarticulation

    DTIC Science & Technology

    1994-10-01

    Figure 8.1 Wavelet Transforms of the /d/ words: /did/, /dmd/, /d~d/, /dud/... Figure 8.2 Wavelet Transforms of... Kennedy (1967) used synthetic CVC syllables to demonstrate the influence of adjacent consonants on the perception of the vowel. A series of vowel sounds... The Journal of the Acoustical Society of America 35(11), pp. 1773-1781. Lindblom, B.E.F. and Studdert-Kennedy, M. (1967). "On the Role of Formant

  14. Wavelet-based color pathological image watermark through dynamically adjusting the embedding intensity.

    PubMed

    Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman

    2012-01-01

    This paper proposes a new dynamic and robust blind watermarking scheme for color pathological images based on the discrete wavelet transform (DWT). The binary watermark image is preprocessed before embedding; first it is scrambled by an Arnold cat map and then encrypted by a pseudorandom sequence generated by a robust chaotic map. The host image is divided into n × n blocks, and the encrypted watermark is embedded into the higher frequency domain of the blue component. The mean and variance of the subbands are calculated to dynamically modify the wavelet coefficients of a block according to the embedded 0 or 1, so as to generate the detection threshold. We study the relationship between the embedding intensity and the threshold, and give the effective range of the threshold for extracting the watermark. Experimental results show that the scheme can resist common distortions, and is particularly robust to JPEG compression, additive noise, brightening, rotation, and cropping.

  15. Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations

    SciTech Connect

    Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.

    2005-05-13

    We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of the application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.

  16. Wavelet Kernels on a DSP: A Comparison between Lifting and Filter Banks for Image Coding

    NASA Astrophysics Data System (ADS)

    Gnavi, Stefano; Penna, Barbara; Grangetto, Marco; Magli, Enrico; Olmo, Gabriella

    2002-12-01

    We develop wavelet engines on a digital signal processor (DSP) platform, the target application being image and intraframe video compression by means of the forthcoming JPEG2000 and Motion-JPEG2000 standards. We describe two implementations, based on the lifting scheme and the filter bank scheme, respectively, and we present experimental results on code profiling. In particular, we address the following problems: (1) evaluating the execution speed of a wavelet engine on a modern DSP; (2) comparing the actual execution speed of the lifting scheme and the filter bank scheme with the theoretical results; (3) using the on-board direct memory access (DMA) to possibly optimize the execution speed. The results allow us to assess the performance of a modern DSP in the image coding task, as well as to compare the lifting and filter bank performance in a realistic application scenario. Finally, guidelines for optimizing code efficiency are provided by investigating the possible use of the on-board DMA.
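
    The lifting scheme being benchmarked can be sketched for the reversible LeGall 5/3 wavelet of JPEG 2000 lossless coding. This is a simplified version (replicate rather than the standard's symmetric boundary extension): a predict step forms the details from the odd samples, an update step forms the approximation from the details, and both steps are exactly invertible in integer arithmetic, which is why lifting halves the multiply count relative to a direct filter bank.

```python
import numpy as np

def legall53_fwd(x):
    """One lifting level of the reversible LeGall 5/3 wavelet (integers)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    right = np.append(even[1:], even[-1])          # replicate boundary
    d = odd - ((even + right) >> 1)                # predict step
    dl = np.insert(d[:-1], 0, d[0])
    s = even + ((dl + d + 2) >> 2)                 # update step
    return s, d

def legall53_inv(s, d):
    dl = np.insert(d[:-1], 0, d[0])
    even = s - ((dl + d + 2) >> 2)                 # undo update exactly
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)                # undo predict exactly
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(4)
row = rng.integers(0, 256, size=128)               # one 8-bit image row
s, d = legall53_fwd(row)
restored = legall53_inv(s, d)                      # bit-exact
```

Because each lifting step is undone by recomputing the same floor expression, reversibility holds for any consistent boundary rule, not just the one above.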

  17. Optimal wavelet denoising for smart biomonitor systems

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-03-01

    Future smart-systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we will discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information in the signal. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert Transform to heart sound analysis are discussed.
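
    The decompose/threshold/reconstruct recipe described above can be sketched with a multi-level Haar transform and Donoho's universal threshold. This is a generic sketch with an invented test signal, not the stethoscope pipeline: detail coefficients whose magnitude is below the threshold are treated as noise and shrunk to zero before reconstruction.

```python
import numpy as np

def haar_fwd(x, levels):
    a, details = x.astype(float), []
    for _ in range(levels):
        details.append((a[0::2] - a[1::2]) / np.sqrt(2))   # finest first
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
    return a, details

def haar_inv(a, details):
    for d in reversed(details):
        out = np.empty(2 * len(a))
        out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = out
    return a

def denoise(x, levels=4):
    a, details = haar_fwd(x, levels)
    sigma = np.median(np.abs(details[0])) / 0.6745        # noise scale via MAD
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))           # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in details]
    return haar_inv(a, details)

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 4.0 * t)                       # slow, smooth signal
noisy = clean + 0.3 * rng.standard_normal(1024)
cleaned = denoise(noisy)
```

Because the smooth signal concentrates in few large coefficients while white noise spreads evenly across all of them, soft thresholding removes most of the noise energy at little cost to the signal.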

  18. Trabecular bone texture classification using wavelet leaders

    NASA Astrophysics Data System (ADS)

    Zou, Zilong; Yang, Jie; Megalooikonomou, Vasileios; Jennane, Rachid; Cheng, Erkang; Ling, Haibin

    2016-03-01

    In this paper we propose to use the Wavelet Leader (WL) transformation for studying trabecular bone patterns. Given an input image, its WL transformation is defined as the cross-channel-layer maximum pooling of an underlying wavelet transformation. WL inherits the advantage of the original wavelet transformation in capturing spatial-frequency statistics of texture images, while being more robust against scale and orientation thanks to the maximum pooling strategy. These properties make WL an attractive alternative to the wavelet transformations used for trabecular analysis in previous studies. In particular, in this paper, after extracting wavelet leader descriptors from a trabecular texture patch, we feed them into two existing statistical texture characterization methods, namely the Gray Level Co-occurrence Matrix (GLCM) and the Gray Level Run Length Matrix (GLRLM). The most discriminative features, Energy of GLCM and Gray Level Non-Uniformity of GLRLM, are retained to distinguish between two populations: osteoporotic patients and control subjects. Receiver Operating Characteristic (ROC) curves are used to measure classification performance. Experimental results on a recently released benchmark dataset show that WL significantly boosts the performance of baseline wavelet transformations by 5% on average.
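
    Of the two texture descriptors, the GLCM Energy feature is compact enough to sketch. The version below is a minimal one for a single horizontal offset with an invented quantization level, not the paper's configuration: co-occurrence counts of quantized gray-level pairs are normalized to probabilities, and Energy is the sum of their squares (high for ordered texture, low for disordered texture).

```python
import numpy as np

def glcm_energy(img, levels=8):
    """Energy of the Gray Level Co-occurrence Matrix for offset (0, 1).
    img is expected to contain values in [0, 1)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    pairs = levels * q[:, :-1] + q[:, 1:]          # encode (left, right) pairs
    p = np.bincount(pairs.ravel(), minlength=levels**2).astype(float)
    p /= p.sum()
    return float(np.sum(p**2))

rng = np.random.default_rng(6)
flat = np.full((32, 32), 0.5)      # perfectly ordered patch -> energy 1
rough = rng.random((32, 32))       # disordered patch -> energy near 1/levels^2
e_flat, e_rough = glcm_energy(flat), glcm_energy(rough)
```

In the paper's pipeline this feature is computed on wavelet-leader maps rather than raw intensities, but the co-occurrence statistic itself is the same.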

  19. Fast wavelet estimation of weak biosignals.

    PubMed

    Causevic, Elvir; Morley, Robert E; Wickerhauser, M Victor; Jacquin, Arnaud E

    2005-06-01

    Wavelet-based signal processing has become commonplace in the signal processing community over the past decade and wavelet-based software tools and integrated circuits are now commercially available. One of the most important applications of wavelets is in removal of noise from signals, called denoising, accomplished by thresholding wavelet coefficients in order to separate signal from noise. Substantial work in this area was summarized by Donoho and colleagues at Stanford University, who developed a variety of algorithms for conventional denoising. However, conventional denoising fails for signals with low signal-to-noise ratio (SNR). Electrical signals acquired from the human body, called biosignals, commonly have below 0 dB SNR. Synchronous linear averaging of a large number of acquired data frames is universally used to increase the SNR of weak biosignals. A novel wavelet-based estimator is presented for fast estimation of such signals. The new estimation algorithm provides a faster rate of convergence to the underlying signal than linear averaging. The algorithm is implemented for processing of auditory brainstem response (ABR) and of auditory middle latency response (AMLR) signals. Experimental results with both simulated data and human subjects demonstrate that the novel wavelet estimator achieves superior performance to that of linear averaging.
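
    The baseline that the wavelet estimator is compared against, synchronous linear averaging, is easy to demonstrate (a toy evoked response with invented numbers): averaging N time-locked frames leaves the signal intact while the noise RMS falls roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0.0, 1.0, 256)
response = np.sin(2 * np.pi * 5.0 * t)        # underlying evoked response
sigma = 3.0                                   # per-frame SNR well below 0 dB

def rms_error_of_average(nframes):
    """RMS error of the frame average relative to the true response."""
    frames = response + sigma * rng.standard_normal((nframes, t.size))
    return float(np.sqrt(np.mean((frames.mean(axis=0) - response) ** 2)))

err16, err256 = rms_error_of_average(16), rms_error_of_average(256)
```

Going from 16 to 256 frames cuts the error by about a factor of four, which is exactly the slow sqrt(N) convergence the wavelet estimator is designed to beat.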

  20. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    NASA Technical Reports Server (NTRS)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry- standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of offchip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. 
The reconfigurability (by means of reprogramming) of

  1. Efficient coding of wavelet trees and its applications in image coding

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Yang, En-hui; Tewfik, Ahmed H.; Kieffer, John C.

    1996-02-01

    We propose in this paper a novel lossless tree coding algorithm. The technique is a direct extension of the bisection method, the simplest case of the complexity reduction method proposed recently by Kieffer and Yang, which has been used for lossless data string coding. A reduction rule is used to obtain the irreducible representation of a tree, and this irreducible tree is entropy-coded instead of the input tree itself. The reduction is reversible, and the original tree can be fully recovered from its irreducible representation. More specifically, we search for equivalent subtrees from top to bottom. When equivalent subtrees are found, a special symbol is appended to the value of the root node of the first equivalent subtree, the root node of the second subtree is assigned an index which points to the first subtree, and all other nodes in the second subtree are removed. This procedure is repeated until the tree cannot be reduced further, which yields the irreducible tree, or irreducible representation, of the original tree. The proposed method can effectively remove the redundancy in an image and results in more efficient compression. It is proved that as the tree size approaches infinity, the proposed method offers optimal compression performance. It is generally more efficient in practice than direct coding of the input tree. The proposed method can be directly applied to code wavelet trees in non-iterative wavelet-based image coding schemes. A modified method is also proposed for coding wavelet zerotrees in embedded zerotree wavelet (EZW) image coding. Although its coding efficiency is slightly reduced, the modified version maintains exact control of bit rate and the scalability of the bit stream in EZW coding.
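    The reduction rule can be sketched as follows; this simplified variant hashes subtrees bottom-up and replaces each repeated subtree with a back-reference, rather than reproducing the paper's exact top-down coding with special symbols:

```python
def reduce_tree(tree):
    """One pass of a subtree-sharing reduction (a sketch, not the authors'
    exact scheme): the first occurrence of each subtree is kept and given an
    index, and every later occurrence is replaced by a back-reference to it.
    Trees are nested tuples; anything else is a leaf. Assumes no real node
    is itself a ('ref', i) tuple."""
    seen = {}        # canonical reduced subtree -> index of first occurrence
    counter = [0]

    def walk(node):
        if not isinstance(node, tuple):       # leaf: nothing to reduce
            return node
        reduced = tuple(walk(c) for c in node)
        if reduced in seen:
            return ('ref', seen[reduced])     # back-reference replaces subtree
        seen[reduced] = counter[0]
        counter[0] += 1
        return reduced

    return walk(tree)
```

    The mapping is reversible: a decoder that assigns indices in the same traversal order can expand each `('ref', i)` back into the subtree it points to, mirroring the recoverability claimed for the irreducible representation.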

  2. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  3. Stable and Robust Sampling Strategies for Compressive Imaging.

    PubMed

    Krahmer, Felix; Ward, Rachel

    2014-02-01

    In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that concentrate on lower frequencies. The wavelet and Fourier transform domains are not incoherent because low-order wavelets and low-order frequencies are correlated, so compressive sensing theory does not immediately imply sampling strategies and reconstruction guarantees. In this paper, we turn to a more refined notion of coherence-the so-called local coherence-measuring for each sensing vector separately how correlated it is to the sparsity basis. For Fourier measurements and Haar wavelet sparsity, the local coherence can be controlled and bounded explicitly, so for matrices comprised of frequencies sampled from a suitable inverse square power-law density, we can prove the restricted isometry property with near-optimal embedding dimensions. Consequently, the variable-density sampling strategy we provide allows for image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results cover both reconstruction by ℓ1-minimization and total variation minimization. The local coherence framework developed in this paper should be of independent interest, as it implies that for optimal sparse recovery results, it suffices to have bounded average coherence from sensing basis to sparsity basis-as opposed to bounded maximal coherence-as long as the sampling strategy is adapted accordingly.
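    The inverse square power-law sampling density can be sketched in one dimension; the grid size and sample count below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256        # frequency grid, indexed -n/2 .. n/2 - 1
m = 2000       # number of frequency samples to draw

# density proportional to 1 / max(1, |k|)^2, concentrating on low frequencies
k = np.arange(-n // 2, n // 2)
p = 1.0 / np.maximum(1, np.abs(k)) ** 2
p /= p.sum()

# variable-density sampling: draw frequencies according to p
samples = rng.choice(k, size=m, p=p)

low = np.mean(np.abs(samples) <= n // 8)    # fraction of low-frequency draws
high = np.mean(np.abs(samples) > n // 8)    # fraction of high-frequency draws
```

    Under this density the overwhelming majority of measurements land at low frequencies, exactly the concentration the empirical evidence favors, while high frequencies are still sampled occasionally.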

  4. Light field compression using disparity-compensated lifting and shape adaptation.

    PubMed

    Chang, Chuo-Ling; Zhu, Xiaoqing; Ramanathan, Prashant; Girod, Bernd

    2006-04-01

    We propose disparity-compensated lifting for wavelet compression of light fields. With this approach, we obtain the benefits of wavelet coding, such as scalability in all dimensions, as well as superior compression performance. Additionally, the proposed approach solves the irreversibility limitations of previous light field wavelet coding approaches, using the lifting structure. Our scheme incorporates disparity compensation into the lifting structure for the transform across the views in the light field data set. Another transform is performed to exploit the coherence among neighboring pixels, followed by a modified SPIHT coder and rate-distortion optimized bitstream assembly. A view-sequencing algorithm is developed to organize the views for encoding. For light fields of an object, we propose to use shape adaptation to improve the compression efficiency and visual quality of the images. The necessary shape information is efficiently coded based on prediction from the existing geometry model. Experimental results show that the proposed scheme exhibits superior compression performance over existing light field compression techniques.

  5. Application of wavelet analysis for monitoring the hydrologic effects of dam operation: Glen canyon dam and the Colorado River at lees ferry, Arizona

    USGS Publications Warehouse

    White, M.A.; Schmidt, J.C.; Topping, D.J.

    2005-01-01

    Wavelet analysis is a powerful tool with which to analyse the hydrologic effects of dam construction and operation on river systems. Using continuous records of instantaneous discharge from the Lees Ferry gauging station and records of daily mean discharge from upstream tributaries, we conducted wavelet analyses of the hydrologic structure of the Colorado River in Grand Canyon. The wavelet power spectrum (WPS) of daily mean discharge provided a highly compressed and integrative picture of the post-dam elimination of pronounced annual and sub-annual flow features. The WPS of the continuous record showed the influence of diurnal and weekly power generation cycles, shifts in discharge management, and the 1996 experimental flood in the post-dam period. Normalization of the WPS by local wavelet spectra revealed the fine structure of modulation in discharge scale and amplitude and provides an extremely efficient tool with which to assess the relationships among hydrologic cycles and ecological and geomorphic systems. We extended our analysis to sections of the Snake River and showed how wavelet analysis can be used as a data mining technique. The wavelet approach is an especially promising tool with which to assess dam operation in less well-studied regions and to evaluate management attempts to reconstruct desired flow characteristics. Copyright © 2005 John Wiley & Sons, Ltd.
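    A wavelet power spectrum of the kind used here can be sketched with a Mexican-hat (Ricker) CWT on a synthetic hourly series carrying a daily cycle; the data and scales are illustrative, not the study's discharge records:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet of width parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    A = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-(t / a) ** 2 / 2)

def cwt_power(x, scales):
    """Wavelet power |W(scale, t)|^2 via convolution at each scale."""
    out = np.empty((len(scales), len(x)))
    for i, a in enumerate(scales):
        w = ricker(min(10 * int(a), len(x)), a)
        out[i] = np.convolve(x, w, mode='same') ** 2
    return out

# synthetic "discharge": 60 days of hourly data with a 24-hour cycle
t = np.arange(24 * 60)
x = np.sin(2 * np.pi * t / 24)
power = cwt_power(x, scales=[2, 8, 32])
```

    Averaging the power over time gives a global spectrum in which the scale tuned to the 24-hour period (here the middle scale) dominates, which is how periodic power-generation cycles show up in a WPS.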

  6. Wavelet transforms as solutions of partial differential equations

    SciTech Connect

    Zweig, G.

    1997-10-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Wavelet transforms are useful in representing transients whose time and frequency structure reflect the dynamics of an underlying physical system. Speech sound, pressure in turbulent fluid flow, or engine sound in automobiles are excellent candidates for wavelet analysis. This project focused on (1) methods for choosing the parent wavelet for a continuous wavelet transform in pattern recognition applications and (2) the more efficient computation of continuous wavelet transforms by understanding the relationship between discrete wavelet transforms and discretized continuous wavelet transforms. The most interesting result of this research is the finding that the generalized wave equation, on which the continuous wavelet transform is based, can be used to understand phenomena that relate to the process of hearing.

  7. Application of Hermitian wavelet to crack fault detection in gearbox

    NASA Astrophysics Data System (ADS)

    Li, Hui; Zhang, Yuping; Zheng, Haiqi

    2011-05-01

    The continuous wavelet transform enables one to look at the evolution in the time scale joint representation plane. This advantage makes it very suitable for the detection of singularity generated by localized defects in the mechanical system. However, most of the applications of the continuous wavelet transform have widely focused on the use of Morlet wavelet transform. The complex Hermitian wavelet is constructed based on the first and the second derivatives of the Gaussian function to detect signal singularities. The Fourier spectrum of Hermitian wavelet is real; therefore, Hermitian wavelet does not affect the phase of a signal in the complex domain. This gives a desirable ability to extract the singularity characteristic of a signal precisely. In this study, Hermitian wavelet is used to diagnose the gear localized crack fault. The simulative and experimental results show that Hermitian wavelet can extract the transients from strong noise signals and can effectively diagnose the localized gear fault.

  8. [Hyperspectral image compression technology research based on EZW].

    PubMed

    Wei, Jun-Xia; Xiangli, Bin; Duan, Xiao-Feng; Xu, Zhao-Hui; Xue, Li-Jun

    2011-08-01

    Along with the development of hyperspectral remote sensing, hyperspectral imaging technology has been applied in aviation and spaceflight. Unlike multispectral imaging, it images the target continuously with band widths at the nanoscale, so the spectral resolution is very high. However, as the number of bands increases, the volume of spectral data grows accordingly, and storing and transmitting these data is a problem that must be faced. With the development of wavelet compression technology, many researchers in the field of image compression have adopted and improved EZW. The present paper applies the method to compression along the spatial dimensions of hyperspectral images, but does not address compression along the spectral dimension. Judging from the reconstruction results, the effect is good, whether measured by peak signal-to-noise ratio (PSNR) and spectral curves or by subjective comparison of the source and reconstructed images. If the image were first compressed along the spectral dimension and then along the spatial dimensions, the authors believe the effect would be better.

  9. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, taking a look at the human visual system can give us an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. So, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher level applications such as edge and corner finding, which in turn provide two basic steps to image segmentation. The possibilities of feedback between different levels of processing are also discussed.

  10. Functional calculus using wavelet transforms

    NASA Astrophysics Data System (ADS)

    Holschneider, Matthias

    1994-07-01

    It is shown how the wavelet transform may be used to compute, for a function s, the symbol s(A) of any (not necessarily self-adjoint) operator A whose spectrum is contained in the upper half plane. For self-adjoint operators it is shown that this functional calculus coincides with the usual one. In particular it is shown how the exponential e^{itA} can be written in terms of the resolvent R_z = (A - z)^{-1} of A as e^{itA} = (1/c) \int_0^\infty da\, a^{n-2} \int_{-\infty}^{+\infty} db\, \overline{\hat{g}}(at)\, e^{itb}\, R^{n}_{b-ia}(A), with c = -2i\pi \int_0^\infty (d\omega/\omega)\, (-i\omega)^{n-1}\, \overline{\hat{g}}(\omega)\, e^{-\omega} and n \in \mathbb{N}, where the integral is understood as a Cesàro limit. This shows explicitly how the behavior for large t is determined by the behavior of R_z at Im z ≈ 1/t.

  11. Generalizing Lifted Tensor-Product Wavelets to Irregular Polygonal Domains

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2002-04-11

    We present a new construction approach for symmetric lifted B-spline wavelets on irregular polygonal control meshes defining two-manifold topologies. Polygonal control meshes are recursively refined by stationary subdivision rules and converge to piecewise polynomial limit surfaces. At every subdivision level, our wavelet transforms provide an efficient way to add geometric details that are expanded from wavelet coefficients. Both wavelet decomposition and reconstruction operations are based on local lifting steps and have linear-time complexity.

  12. Wavelet Based Feature Extraction for Target Recognition and Minefield Detection

    DTIC Science & Technology

    2007-11-02

    Work with Ron Gross (NSWC); presentation of the course "Wavelets and Filter Banks" to NSWC personnel; application of simulated annealing to optimize RF absorption characteristics of multilayer surfaces; generalization of the wavelet transform to M-band wavelets; an algorithm to generate a wavelet filter bank using any filter whatsoever as the analysis filter; and implementation of an algorithm to parameterize all M-band paraunitary filter banks.

  13. Multiresolution Stochastic Models, Data Fusion, and Wavelet Transforms

    DTIC Science & Technology

    1992-05-01

    … based on the wavelet transform. The statistical structure of these models is Markovian in scale, and in addition the eigenstructure of these models is given by the wavelet transform. The implication of this is that by using the wavelet transform we can convert the apparently complicated problem of … plays the role of the time-like variable. In addition we show how the wavelet transform, which is defined for signals that extend from -infinity to …

  14. Undecimated Wavelet Transforms for Image De-noising

    SciTech Connect

    Gyaourova, A; Kamath, C; Fodor, I K

    2002-11-19

    A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that our algorithms achieve a better noise-removal/blurring trade-off.
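    One common undecimated construction, the à trous scheme with a dilated Haar-like smoothing kernel, can be sketched as follows; this is an illustration of the general idea, not necessarily one of the paper's three schemes (periodic boundaries assumed via np.roll):

```python
import numpy as np

def atrous_haar(x, levels):
    """Undecimated (a trous) decomposition: no downsampling; instead the
    smoothing filter is dilated by 2^j at level j, so every band keeps the
    full signal length."""
    x = np.asarray(x, dtype=float)
    details, approx = [], x
    for j in range(levels):
        step = 2 ** j
        smoothed = (approx + np.roll(approx, -step)) / 2.0  # dilated low-pass
        details.append(approx - smoothed)                   # detail band j
        approx = smoothed
    return details, approx

x = np.sin(np.linspace(0, 4 * np.pi, 128))
details, approx = atrous_haar(x, 3)
recon = approx + sum(details)   # telescoping sum reconstructs the signal
```

    Because each detail band is simply the difference of successive smoothings, adding all bands back to the final approximation reconstructs the input exactly, and every band has the same length as the signal, which is what makes the transform shift invariant and convenient for de-noising.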

  15. Review of wavelet transforms for pattern recognitions

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1996-03-01

    After relating the adaptive wavelet transform to the human visual and hearing systems, we exploit the synergism between such smart sensor processing and brain-style neural network computing. The freedom of choosing an appropriate kernel of a linear transform, given to us by the recent mathematical foundation of the wavelet transform, is exploited fully and is generally called the adaptive wavelet transform (WT). However, there are several levels of adaptivity: (1) optimum coefficients: adjustable transform coefficients chosen with respect to a fixed mother kernel for better invariant signal representation; (2) super-mother: grouping different scales of daughter wavelets of the same or different mother wavelets at different shift locations into a new family called a superposition mother kernel for better speech signal classification; (3) variational calculus to determine ab initio a constrained-optimization mother for a specific task. The tradeoff between the mathematical rigor of complete orthonormality and order-(N) speed on the one hand, and adaptive flexibility on the other, is finally up to the user's needs. Then, to illustrate (1), a new invariant optoelectronic architecture of a wedge-shaped filter in the WT domain is given for scale-invariant signal classification by neural networks.

  16. Wavelet applied to computer vision in astrophysics

    NASA Astrophysics Data System (ADS)

    Bijaoui, Albert; Slezak, Eric; Traina, Myriam

    2004-02-01

    Multiscale analyses can be provided by applying wavelet transforms. For image processing purposes, we applied algorithms which imply a quasi-isotropic vision. For a uniform noisy image, a wavelet coefficient W has a probability density function (PDF) p(W) which depends on the noise statistics. The PDF was determined for many statistical noise models: Gaussian, Poisson, Rayleigh, and exponential. For CCD observations, the Anscombe transform was generalized to a mixed Gauss+Poisson noise. From the discrete wavelet transform a set of significant wavelet coefficients (SSWC) is obtained. Many applications have been derived, such as denoising and deconvolution. Our main application is the decomposition of the image into objects, i.e., the vision. At each scale an image labelling is performed in the SSWC. An interscale graph linking the fields of significant pixels is then obtained. The objects are identified using this graph. The wavelet coefficients of the tree related to a given object allow one to reconstruct its image by a classical inverse method. This vision model has been applied to astronomical images, improving the analysis of complex structures.

  17. Higher-density dyadic wavelet transform and its application

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Tang, Baoping; Wang, Jiaxu

    2010-04-01

    This paper proposes a higher-density dyadic wavelet transform with two generators, whose corresponding wavelet filters are band-pass and high-pass. The wavelet coefficients at each scale in this case have the same length as the signal. This leads to a new redundant dyadic wavelet transform, which is strictly shift invariant and further increases the sampling in the time dimension. We describe the definition of the higher-density dyadic wavelet transform, and discuss the condition for perfect reconstruction of the signal from its wavelet coefficients. A fast implementation algorithm for the proposed transform is given as well. Compared with the higher-density discrete wavelet transform, the proposed transform is shift invariant. Applications to signal denoising indicate that the proposed wavelet transform has better denoising performance than other commonly used wavelet transforms. Finally, various typical wavelet transforms are applied to analyze the vibration signals of two faulty roller bearings; the results show that the proposed wavelet transform can more effectively extract the fault characteristics of the roller bearings than the other wavelet transforms.

  18. The Discrete, Orthogonal Wavelet Transform, A Projective Approach.

    DTIC Science & Technology

    1995-09-01

    … completely determined by the collection of functions onto which it projects. The wavelet transform projects onto a set of functions which satisfy a simple linear relationship between different levels of dilation. The properties of the wavelet transform are determined by the coefficients of this linear relationship. This thesis examines the connections between the wavelet transform properties and the linear relationship coefficients.

  19. [Wavelet entropy analysis of spontaneous EEG signals in Alzheimer's disease].

    PubMed

    Zhang, Meiyun; Zhang, Benshu; Chen, Ying

    2014-08-01

    Wavelet entropy is a quantitative index describing the complexity of signals. A continuous wavelet transform method was employed to analyze the spontaneous electroencephalogram (EEG) signals of mild, moderate and severe Alzheimer's disease (AD) patients and normal elderly controls in this study. Wavelet power spectra of the EEG signals were calculated from the wavelet coefficients. Wavelet entropies of mild, moderate and severe AD patients were compared with those of normal controls, and a correlation analysis between wavelet entropy and MMSE score was carried out. There was a significant difference in wavelet entropy among mild, moderate and severe AD patients and normal controls (P<0.01). Group comparisons showed that wavelet entropy for mild, moderate and severe AD patients was significantly lower than that for normal controls (P<0.05), which was related to the narrow distribution of their wavelet power spectra. Further analysis showed that the wavelet entropy of the EEG and the MMSE score were significantly correlated (r = 0.601-0.799, P<0.01). Wavelet entropy is thus a quantitative indicator of the complexity of EEG signals and is likely to be an electrophysiological index for AD diagnosis and severity assessment.
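    Wavelet entropy as described here, the Shannon entropy of the relative wavelet energies per scale, can be sketched with a Haar multilevel DWT standing in for the paper's continuous transform (signal lengths and levels are illustrative):

```python
import numpy as np

def wavelet_entropy(x, levels):
    """Shannon entropy of the relative wavelet energy per scale; a sketch
    using a multilevel Haar DWT. Low entropy = energy concentrated in few
    scales (narrow spectrum); high entropy = energy spread across scales."""
    a = np.asarray(x, dtype=float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail band energy
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))              # final approximation band
    p = np.array(energies) / np.sum(energies)    # relative wavelet energies
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

rng = np.random.default_rng(2)
h_noise = wavelet_entropy(rng.standard_normal(1024), 5)            # broad spectrum
h_tone = wavelet_entropy(np.sin(2 * np.pi * np.arange(1024) / 128), 5)  # narrow spectrum
```

    A broadband signal spreads its energy over all bands and yields high entropy, while a near-periodic signal concentrates it in one band and yields low entropy, which is the direction of the AD-versus-control difference reported above.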

  20. Time Difference of Arrival (TDOA) Estimation Using Wavelet Based Denoising

    DTIC Science & Technology

    1999-03-01

    NAVAL POSTGRADUATE SCHOOL, Monterey, California. Thesis: Time Difference of Arrival (TDOA) Estimation Using Wavelet Based Denoising, by Unal Aktas. … time difference of arrival (TDOA) method. The wavelet transform is used to increase the accuracy of TDOA estimation. Several denoising techniques based on …

  1. Significance tests for the wavelet power and the wavelet power spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Z.

    2007-11-01

    Significance tests usually address the issue of how to distinguish statistically significant results from those due to pure randomness when only one sample of the population is studied. This issue is also important when results obtained using wavelet analysis are to be interpreted. Torrence and Compo (1998) is one of the earliest works to have systematically discussed this problem. Their results, however, were based on Monte Carlo simulations and hence failed to unveil many interesting and important properties of the wavelet analysis. In the present work, the sampling distributions of the wavelet power and power spectrum of a Gaussian White Noise (GWN) were derived in a rigorous statistical framework, through which the significance tests for these two fundamental quantities in the wavelet analysis were established. It was found that the results given by Torrence and Compo (1998) are numerically accurate when adjusted by a factor of the sampling period, while some of their statements require reassessment. More importantly, the sampling distribution of the wavelet power spectrum of a GWN was found to be highly dependent on the local covariance structure of the wavelets, a fact that makes the significance levels intimately related to the specific wavelet family. In addition to simulated signals, the significance tests developed in this work were demonstrated on an actual wave elevation time series observed from a buoy on Lake Michigan. In this simple application in geophysics, we showed how proper significance tests helped to sort out physically meaningful peaks from those created by random noise. The derivations in the present work can be readily extended to other wavelet-based quantities or analyses using other wavelet families.

  2. Compressed sensing MRI exploiting complementary dual decomposition.

    PubMed

    Park, Suhyung; Park, Jaeseok

    2014-04-01

    Compressed sensing (CS) MRI exploits the sparsity of an image in a transform domain to reconstruct the image from incoherently under-sampled k-space data. However, it has been shown that CS suffers particularly from loss of low-contrast image features with increasing reduction factors. To retain image details in such degraded experimental conditions, in this work we introduce a novel CS reconstruction method exploiting feature-based complementary dual decomposition with joint estimation of local scale mixture (LSM) model and images. Images are decomposed into dual block sparse components: total variation for piecewise smooth parts and wavelets for residuals. The LSM model parameters of residuals in the wavelet domain are estimated and then employed as a regional constraint in spatially adaptive reconstruction of high frequency subbands to restore image details missing in piecewise smooth parts. Alternating minimization of the dual image components subject to data consistency is performed to extract image details from residuals and add them back to their complementary counterparts while the LSM model parameters and images are jointly estimated in a sequential fashion. Simulations and experiments demonstrate the superior performance of the proposed method in preserving low-contrast image features even at high reduction factors.

  3. Analysis of acceleration signals using wavelet transform.

    PubMed

    Sekine, M; Tamura, T; Akay, M; Togawa, T; Fukui, Y

    2000-06-01

    In this study, we attempted to discriminate acceleration signals for level and stairway walking using a wavelet-based fractal analysis method. The acceleration signal was measured close to the center of gravity of the body while the subjects walked continuously along a corridor and up and down stairs. We used the wavelet-based fractal analysis method to discriminate walking patterns. The parameter H, which is related directly to the fractal dimension, was estimated from the wavelet coefficients and took lower values during walking upstairs. By manually setting a threshold level for each individual, it was possible to discriminate walking upstairs from the other walking types. However, no common feature across subjects distinguished level walking from walking downstairs.
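    The scaling-based estimate of H can be sketched with a Haar DWT: for fractional Brownian motion the variance of detail coefficients grows as 2^{j(2H+1)} across levels j, so the slope of log2(variance) versus level gives H. This is a generic sketch of wavelet-based fractal analysis on synthetic signals, not the authors' exact method or data:

```python
import numpy as np

def wavelet_H(x, levels=6):
    """Estimate the roughness parameter H from the slope of log2(detail
    variance) versus decomposition level, using a Haar DWT."""
    a = np.asarray(x, dtype=float)
    js, logvar = [], []
    for j in range(1, levels + 1):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        js.append(j)
        logvar.append(np.log2(np.mean(d ** 2)))
    slope = np.polyfit(js, logvar, 1)[0]
    return (slope - 1) / 2   # var ~ 2^{j(2H+1)} for fractional Brownian motion

rng = np.random.default_rng(3)
white = rng.standard_normal(4096)   # very rough: flat detail variances
walk = np.cumsum(white)             # Brownian motion: variance grows with level
H_white = wavelet_H(white)
H_walk = wavelet_H(walk)
```

    Rougher signals give lower H (white noise comes out near -0.5, a random walk near 0.5), which is the direction of the drop in H the authors observed during stair ascent.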

  4. Applicability analysis of wavelet-transform profilometry.

    PubMed

    Zhang, Zibang; Zhong, Jingang

    2013-08-12

    The applicability of the wavelet-transform profilometry is examined in detail. The wavelet-ridge-based phase demodulation is an integral operation of the fringe signal in the spatial domain. The accuracy of the phase demodulation is related to the local linearity of the phase modulated by the object surface. We present a more robust applicability condition which is based on the evaluation of the local linearity. Since high carrier frequency leads to the phase demodulation integral in a narrow interval and the narrow interval results in the high local linearity of modulated phase, we propose to increase the carrier fringe frequency to improve the applicability of the wavelet-transform profilometry and the measurement accuracy. The numerical simulations and the experiment are presented.

  5. Wavelet Analysis for Wind Fields Estimation

    PubMed Central

    Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous transform with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 ms−1. Using C-band empirical models associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699

  6. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous transform with a B(3) spline scaling function, in addition to other wavelet bases such as Gabor and Mexican-hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 ms(-1). Using C-band empirical models associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.

  7. Image compression using the W-transform

    SciTech Connect

    Reynolds, W.D. Jr.

    1995-12-31

The authors present the W-transform for a multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to a power of two, nor does it call for extension of the signal; thus, the W-transform is a convenient tool for image compression. The authors present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  8. Signal extrapolation based on wavelet representation

    NASA Astrophysics Data System (ADS)

    Xia, Xiang-Gen; Kuo, C.-C. Jay; Zhang, Zhen

    1993-11-01

    The Papoulis-Gerchberg (PG) algorithm is well known for band-limited signal extrapolation. We consider the generalization of the PG algorithm to signals in the wavelet subspaces in this research. The uniqueness of the extrapolation for continuous-time signals is examined, and sufficient conditions on signals and wavelet bases for the generalized PG (GPG) algorithm to converge are given. We also propose a discrete GPG algorithm for discrete-time signal extrapolation, and investigate its convergence. Numerical examples are given to illustrate the performance of the discrete GPG algorithm.
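A minimal discrete sketch of the classical PG iteration (an illustration of the band-limited case, not the paper's generalized wavelet-subspace version): alternately project onto the band-limited signals by zeroing high-frequency DFT bins, then re-impose the known samples.

```python
import numpy as np

def pg_extrapolate(known, mask, band, n_iter=500):
    """Discrete Papoulis-Gerchberg extrapolation (classical band-limited case).

    known: signal samples (trusted only where mask is True)
    mask:  boolean array of observed positions
    band:  highest DFT bin (each side) retained by the band-limiting projection
    """
    known = np.asarray(known, dtype=float)
    n = len(known)
    keep = np.zeros(n, dtype=bool)
    keep[: band + 1] = True
    if band > 0:
        keep[n - band:] = True      # matching negative frequencies
    x = np.where(mask, known, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~keep] = 0.0              # project onto the band-limited subspace
        x = np.fft.ifft(X).real
        x[mask] = known[mask]       # re-impose the observed samples
    return x
```

Each sweep is a pair of projections, so the reconstruction error is non-increasing; when the observed samples determine the band-limited signal uniquely, the iterates converge to it.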

  9. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

Wavelet bases are used to generate approximation spaces for the resolution of bidimensional elliptic and parabolic problems. Under specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to solve the corresponding numerical problems. Detailed algorithms are provided, as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  10. Turbulence dynamics in the wavelet representation

    NASA Technical Reports Server (NTRS)

    Meneveau, C.

    1990-01-01

The phenomenon of small-scale intermittency is shown to motivate the decomposition of the velocity fields into modes that exhibit localization in both wavenumber and physical space. We review some basic properties of such a decomposition, called the wavelet transform. The wavelet-transformed Navier-Stokes equations are derived, and we define a new quantity Pi(r, x, t), the flux of kinetic energy to scales smaller than r at position x and time t. The main goals of this research are also summarized.

  11. Analysis of wavelet technology for NASA applications

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    The purpose of this grant was to introduce a broad group of NASA researchers and administrators to wavelet technology and to determine its future role in research and development at NASA JSC. The activities of several briefings held between NASA JSC scientists and Rice University researchers are discussed. An attached paper, 'Recent Advances in Wavelet Technology', summarizes some aspects of these briefings. Two proposals submitted to NASA reflect the primary areas of common interest. They are image analysis and numerical solutions of partial differential equations arising in computational fluid dynamics and structural mechanics.

  12. a Wavelet Model for Vocalic Speech Coarticulation

    NASA Astrophysics Data System (ADS)

    Lange, Robert Charles

A known aspect of human speech is that a vowel produced in isolation (for example, "ee") is acoustically different from a production of the same vowel in the company of two consonants ("deed"). This phenomenon, natural to the speech of any language, is known as consonant-vowel-consonant coarticulation. The effect of coarticulation results when a speech segment ("d") dynamically influences the articulation of an adjacent segment ("ee" within "deed"). A recent development in the theory of wavelet signal processing is wavelet system characterization. In wavelet system theory, the wavelet transform is used to describe the time-frequency behavior of a transmission channel, by virtue of its ability to describe the time-frequency content of the system's input and output signals. The present research proposes a wavelet-system model for speech coarticulation, wherein the system is the process of transformation from a control speech state (input) to an effected speech state (output). Specifically, a vowel produced in isolation is transformed into an effected version of the same vowel produced in consonant-vowel-consonant context, via the "coarticulation channel". Quantitatively, the channel is determined by the wavelet transform of the effected vowel's signal, using the control vowel's signal as the mother wavelet. A practical experiment is conducted to evaluate the coarticulation channel using samples of real speech. The results show that the model is capable of depicting coarticulation effects associated with certain vowel-consonant combinations. They suggest that elements of the vowel's acoustic composition are continuously present, in a modified form, throughout the consonant-vowel transition. For other phonetic combinations, however, the model does not respond to instances of segmental transition in a characteristic way. The conclusions drawn from the study are that the wavelet techniques employed here are effective tools for the general analysis of speech sounds, and can

  13. Scalable still image coding based on wavelet

    NASA Astrophysics Data System (ADS)

    Yan, Yang; Zhang, Zhengbing

    2005-02-01

Scalable image coding is an important objective of future image coding technologies. In this paper, we present a scalable image coding scheme based on the wavelet transform. The method uses the well-known EZW (Embedded Zerotree Wavelet) algorithm: the ROI (region of interest) of the original image is encoded at high quality and the rest coarsely. The method works well under limited memory, since the background region can be encoded according to the available memory capacity. In this way, the encoded image can easily be stored in limited memory without losing its main information. Simulation results show the scheme is effective.
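The ROI idea can be illustrated without the full EZW machinery. The sketch below is an assumption-laden stand-in for the paper's coder: it takes a one-level 2-D Haar transform and quantizes coefficients finely where they influence the ROI and coarsely elsewhere, so the region of interest survives at higher quality under a tight memory budget.

```python
import numpy as np

def haar2d(img):
    """One level of a separable 2-D Haar transform (average/difference)."""
    def fwd(a):  # transform along the last axis
        s = (a[..., ::2] + a[..., 1::2]) / 2.0
        d = (a[..., ::2] - a[..., 1::2]) / 2.0
        return np.concatenate([s, d], axis=-1)
    return fwd(fwd(img).swapaxes(0, 1)).swapaxes(0, 1)

def ihaar2d(coef):
    """Inverse of haar2d (exact reconstruction)."""
    def inv(a):
        h = a.shape[-1] // 2
        s, d = a[..., :h], a[..., h:]
        out = np.empty_like(a)
        out[..., ::2] = s + d
        out[..., 1::2] = s - d
        return out
    return inv(inv(coef.swapaxes(0, 1)).swapaxes(0, 1))

def roi_quantize(img, roi_mask, fine=0.5, coarse=64.0):
    """Quantize Haar coefficients finely inside the ROI, coarsely outside."""
    c = haar2d(img)
    # a coefficient at (i, j) mainly influences the 2x2 pixel block it came
    # from; tile a half-resolution ROI mask into the four subbands
    small = roi_mask[::2, ::2]
    importance = np.block([[small, small], [small, small]])
    step = np.where(importance, fine, coarse)
    cq = np.round(c / step) * step
    return ihaar2d(cq)
```

With an ROI aligned to even pixel boundaries, the reconstruction error inside the region stays bounded by the fine step while the background absorbs the coarse quantization.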

  14. Wavelet analysis applied to the IRAS cirrus

    NASA Technical Reports Server (NTRS)

    Langer, William D.; Wilson, Robert W.; Anderson, Charles H.

    1994-01-01

    The structure of infrared cirrus clouds is analyzed with Laplacian pyramid transforms, a form of non-orthogonal wavelets. Pyramid and wavelet transforms provide a means to decompose images into their spatial frequency components such that all spatial scales are treated in an equivalent manner. The multiscale transform analysis is applied to IRAS 100 micrometer maps of cirrus emission in the north Galactic pole region to extract features on different scales. In the maps we identify filaments, fragments and clumps by separating all connected regions. These structures are analyzed with respect to their Hausdorff dimension for evidence of the scaling relationships in the cirrus clouds.

  15. [Application of wavelet transform to infrared analysis].

    PubMed

    Li, Dan-ting; Zhang, Chang-jiang; Wang, Jin; Cheng, Cun-gui

    2006-11-01

In the present article, the FTIR spectra of the xylems of Smilax glabra Roxb. and three of its counterfeits were obtained directly, quickly and accurately by Fourier transform infrared spectroscopy (FTIR) with an OMNI sampler. The samples were then studied in detail using wavelet transform analysis. The results showed that the wavelet transform can remove noise and condense variables, with the advantages of fast operation and high accuracy. It should have good application prospects in infrared spectroscopic analysis.

  16. Efficient entropy estimation based on doubly stochastic models for quantized wavelet image data.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-04-01

Under a rate constraint, wavelet-based image coding involves strategic discarding of information such that the remaining data can be described with a given amount of rate. In a practical coding system, this task requires knowledge of the relationship between quantization step size and compressed rate for each group of wavelet coefficients: the R-Q curve. A common approach to this problem is to fit each subband with a scalar probability distribution and compute entropy estimates based on the model. This approach is not effective at rates below 1.0 bits-per-pixel because the distributions of quantized data do not reflect the dependencies in coefficient magnitudes. These dependencies can be addressed with doubly stochastic models, which have been previously proposed to characterize more localized behavior, though there are tradeoffs between storage, computation time, and accuracy. Using a doubly stochastic generalized Gaussian model, it is demonstrated that the relationship between step size and rate is accurately described by a low-degree polynomial in the logarithm of the step size. Based on this observation, an entropy estimation scheme is presented which offers an excellent tradeoff between speed and accuracy; after a simple data-gathering step, estimates are computed instantaneously by evaluating a single polynomial for each group of wavelet coefficients quantized with the same step size. These estimates are on average within 3% of a desired target rate for several state-of-the-art coders.
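The paper's key observation, that rate is well described by a low-degree polynomial in the logarithm of the quantization step, is easy to reproduce on synthetic data. Here a Laplacian sample stands in for a wavelet subband (an assumption for illustration; the paper uses a doubly stochastic generalized Gaussian model):

```python
import numpy as np

def quantized_entropy(coeffs, step):
    """Empirical first-order entropy (bits/sample) after uniform quantization."""
    q = np.round(coeffs / step).astype(int)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def fit_rq_curve(coeffs, steps, degree=3):
    """Fit rate vs. log(step) with a low-degree polynomial."""
    rates = np.array([quantized_entropy(coeffs, s) for s in steps])
    poly = np.polyfit(np.log(steps), rates, degree)
    return poly, rates

# usage sketch: once the polynomial is fit, rate estimates for any step
# are a single polynomial evaluation
rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=4.0, size=50_000)   # stand-in for one subband
steps = np.geomspace(0.5, 32.0, 12)
poly, rates = fit_rq_curve(coeffs, steps)
pred = np.polyval(poly, np.log(steps))
```

The rate curve is monotone decreasing in the step size and the cubic fit in log(step) tracks it closely over this range.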

  17. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

The present invention provides a data de-noising system utilizing multiple processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions, and the regions are distributed onto the processors. Communication requirements among the processors are determined according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.

  18. Mass spectrometry cancer data classification using wavelets and genetic algorithm.

    PubMed

    Nguyen, Thanh; Nahavandi, Saeid; Creighton, Douglas; Khosravi, Abbas

    2015-12-21

    This paper introduces a hybrid feature extraction method applied to mass spectrometry (MS) data for cancer classification. Haar wavelets are employed to transform MS data into orthogonal wavelet coefficients. The most prominent discriminant wavelets are then selected by genetic algorithm (GA) to form feature sets. The combination of wavelets and GA yields highly distinct feature sets that serve as inputs to classification algorithms. Experimental results show the robustness and significant dominance of the wavelet-GA against competitive methods. The proposed method therefore can be applied to cancer classification models that are useful as real clinical decision support systems for medical practitioners.

  19. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

Velocity bunching (or RF compression) is a promising technique, complementary to magnetic compression, for achieving the high peak current required in linac drivers for FELs. Here we report on recent progress toward characterizing RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations, which represents a useful tool in the evaluation of compression schemes for FEL sources.

  20. Compressed gas manifold

    DOEpatents

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  1. Study on the FOG's signal based on wavelet

    NASA Astrophysics Data System (ADS)

    Tang, Ji-qiang; Fang, Jian-cheng; Zhang, Yan-shun

    2006-11-01

To study fiber optic gyro (FOG) signals with wavelets, this paper examines the FOG signal drift model and the properties of wavelet-analyzed noise, and introduces wavelet filtering methods: wavelet basis selection, soft- and hard-threshold de-noising algorithms, and compulsive filtering based on the Haar wavelet. Filtering results are presented for soft and hard thresholding with the same db4 wavelet basis and the same Donoho threshold values, and for compulsive filtering with the Haar and db4 wavelets. The main conclusions are: the larger the decomposition scale, the better the filtering effect; soft-threshold filtering outperforms hard-threshold filtering, at the cost of extra computation, when the threshold value is the same; for a given wavelet and decomposition scale, compulsive filtering yields the smallest zero shift; and for compulsive filtering at the same decomposition scale, the Haar wavelet filters better than db4 while requiring less computation. The authors conclude that applying compulsive filtering with the Haar wavelet basis and a suitable decomposition scale to FOG signal processing is helpful for FOG design and manufacturing.
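The soft- and hard-threshold rules compared in this record are standard and can be sketched with a one-level Haar transform (the db4 basis and multi-scale decomposition of the paper are omitted here for brevity):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    return (x[::2] + x[1::2]) / np.sqrt(2), (x[::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(s, d):
    out = np.empty(2 * len(s))
    out[::2] = (s + d) / np.sqrt(2)
    out[1::2] = (s - d) / np.sqrt(2)
    return out

def denoise(x, thr, mode="soft"):
    """Threshold the detail coefficients, keep the approximation untouched."""
    s, d = haar_dwt(x)
    if mode == "hard":
        d = np.where(np.abs(d) > thr, d, 0.0)               # keep-or-kill
    else:
        d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)   # shrink toward 0
    return haar_idwt(s, d)
```

Hard thresholding keeps surviving coefficients unchanged; soft thresholding additionally shrinks them by the threshold, which costs a little extra arithmetic but typically gives a smoother result, consistent with the comparison in the abstract.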

  2. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.

  3. Research of Gear Fault Detection in Morphological Wavelet Domain

    NASA Astrophysics Data System (ADS)

    Hong, Shi; Fang-jian, Shan; Bo, Cong; Wei, Qiu

    2016-02-01

To extract mutation information from gear fault signals and achieve valid fault diagnosis, a gear fault diagnosis method based on the morphological mean wavelet transform was designed. The morphological mean wavelet transform is a linear wavelet within the morphological wavelet framework. Decomposing a gear fault signal with this transform produces signal synthesis operators and detail synthesis operators: the signal synthesis operators stay close to the original signal, while the detail synthesis operators contain the fault impact or interference components and can be captured. Simulation results indicate that, compared with the Fourier transform, the morphological mean wavelet transform method performs time-frequency analysis of the original signal and effectively locates where the impact signal appears; and compared with the traditional linear wavelet transform, it has a simple structure, is easy to implement, is sensitive to local signal extrema, and has high denoising ability, so it is better suited to real-time gear fault detection.

  4. Denoising and robust non-linear wavelet analysis

    NASA Astrophysics Data System (ADS)

    Bruce, Andrew G.; Donoho, David L.; Gao, Hong-Ye; Martin, R. D.

    1994-04-01

In a series of papers, Donoho and Johnstone develop a powerful theory based on wavelets for extracting non-smooth signals from noisy data. Several nonlinear smoothing algorithms are presented which provide high performance for removing Gaussian noise from a wide range of spatially inhomogeneous signals. However, like other methods based on the linear wavelet transform, these algorithms are very sensitive to certain types of non-Gaussian noise, such as outliers. In this paper, we develop outlier-resistant wavelet transforms, in which outliers and outlier patches are localized to just a few scales. By using these outlier-resistant transforms, we improve upon the Donoho and Johnstone nonlinear signal extraction methods. The outlier-resistant wavelet algorithms are included with the S+Wavelets object-oriented toolkit for wavelet analysis.
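The outlier-resistant transforms themselves are not specified in this abstract, but a standard robust ingredient in this line of work is estimating the noise scale from detail coefficients with the median absolute deviation (MAD) rather than the standard deviation, since a few gross outliers barely move the median:

```python
import numpy as np

def mad_sigma(detail):
    """Robust noise-scale estimate: median absolute deviation of the detail
    coefficients, scaled to be consistent for Gaussian noise."""
    return np.median(np.abs(detail - np.median(detail))) / 0.6745

# a handful of gross outliers barely moves the MAD, unlike the std. dev.
rng = np.random.default_rng(1)
d = rng.normal(0.0, 1.0, 4096)        # clean unit-variance detail coefficients
d_out = d.copy()
d_out[:20] = 100.0                    # inject 20 gross outliers
```

On the contaminated sample the standard deviation inflates severely while the MAD-based estimate stays near the true scale, which is what makes MAD-derived thresholds usable in the presence of outliers.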

  5. Lossy to lossless compressions of hyperspectral images using three-dimensional set partitioning algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Wu, Zhensen; Wu, Chengke

    2005-02-01

In this paper, we present a three-dimensional (3D) hyperspectral image compression algorithm based on zeroblock coding and wavelet transforms. An efficient asymmetric 3D wavelet transform (AT), based on the lifting technique and packet transform, is used to reduce redundancies in both the spectral and spatial dimensions. The implementation via a 3D integer lifting scheme maps integers to integers, enabling both lossy and lossless decompression from the same bit stream. To encode the coefficients after the asymmetric 3D wavelet transform, a modified 3DSPECK algorithm, Asymmetric Transform 3D Set Partitioning Embedded bloCK (AT-3DSPECK), is proposed. According to the distribution of energy of the transformed coefficients, 3DSPECK's 3D set partitioning block algorithm and the 3D octave band partitioning scheme are efficiently combined in the proposed AT-3DSPECK algorithm. Several AVIRIS images are used to evaluate the compression performance. Compared with the JPEG2000, AT-3DSPIHT and 3DSPECK lossless compression techniques, AT-3DSPECK achieves the best lossless performance. In lossy mode, the AT-3DSPECK algorithm outperforms AT-3DSPIHT and 3DSPECK at all rates. Besides its high compression performance, AT-3DSPECK also supports progressive transmission. The proposed AT-3DSPECK algorithm is thus a better candidate than several conventional methods.
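The integer-to-integer lifting idea can be illustrated in one dimension with the Haar case (often called the S-transform). This is a sketch of the mechanism, not the paper's 3D asymmetric transform: the predict and update steps use only integer arithmetic, so the forward transform is exactly invertible and a lossless bit stream is possible.

```python
import numpy as np

def s_transform_fwd(x):
    """Integer-to-integer Haar via lifting (the S-transform):
    d = odd - even, s = even + floor(d/2). Exactly invertible on integers."""
    even = x[::2].astype(np.int64)
    odd = x[1::2].astype(np.int64)
    d = odd - even            # predict step
    s = even + d // 2         # update step (floor division stays integer)
    return s, d

def s_transform_inv(s, d):
    even = s - d // 2         # undo the update step
    odd = even + d            # undo the predict step
    out = np.empty(2 * len(s), dtype=np.int64)
    out[::2], out[1::2] = even, odd
    return out
```

Because each lifting step is undone exactly by subtracting the same integer quantity, the round trip reproduces the input bit-for-bit, which is the property a lossless mode relies on.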

  6. The impact of JPEG2000 lossy compression on the scientific quality of radio astronomy imagery

    NASA Astrophysics Data System (ADS)

    Peters, S. M.; Kitaeff, V. V.

    2014-10-01

The sheer volume of data anticipated to be captured by future radio telescopes, such as the Square Kilometre Array (SKA) and its precursors, presents new data challenges, including the cost and technical feasibility of data transport and storage. Image and data compression will be important techniques to reduce the data size. We provide a quantitative analysis of the effects of JPEG2000's lossy wavelet image compression algorithm on the quality of radio astronomy imagery. This analysis is completed by evaluating the completeness, soundness and source parameterisation of the Duchamp source finder on compressed data. We find that JPEG2000 compression has the potential to denoise image cubes; however, this effect is only significant at high compression rates, where the accuracy of source parameterisation is decreased.

  7. A configurable realtime DWT-based neural data compression and communication VLSI system for wireless implants.

    PubMed

    Yang, Yuning; Kamboh, Awais M; Mason, Andrew J

    2014-04-30

This paper presents the design of a complete multi-channel neural recording compression and communication system for wireless implants that addresses the challenging simultaneous requirements for low power, high bandwidth and error-free communication. The compression engine implements discrete wavelet transform (DWT) and run length encoding schemes and offers a practical data compression solution that faithfully preserves neural information. The communication engine encodes data and commands separately into custom-designed packet structures utilizing a protocol capable of error handling. VLSI hardware implementation of these functions, within the design constraints of a 32-channel neural compression implant, is presented. Designed in 0.13 μm CMOS, the core of the neural compression and communication chip occupies only 1.21 mm² and consumes 800 μW of power (25 μW per channel at 26 kS/s), demonstrating an effective solution for intra-cortical neural interfaces.
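Run-length encoding, the second stage of the compression engine described above, is simple to sketch (illustrative Python; the chip implements it in hardware). Thresholded DWT coefficient streams contain long zero runs, which is exactly what run-length coding exploits:

```python
def rle_encode(seq):
    """Run-length encode a sequence as [value, count] pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1        # extend the current run
        else:
            out.append([v, 1])     # start a new run
    return out

def rle_decode(pairs):
    """Exact inverse of rle_encode."""
    out = []
    for v, n in pairs:
        out.extend([v] * n)
    return out
```

For a sparse stream such as `[0, 0, 0, 0, 5, 5, 0, 0, 0, 7]`, ten symbols collapse to four (value, count) pairs, and the decode reproduces the input exactly.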

  8. Correlated image set compression system based on new fast efficient algorithm of Karhunen-Loeve transform

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-10-01

The paper presents an improved version of our method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). The Karhunen-Loeve (KL) transform is known to be the optimal representation for this purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image, and that this contribution decreases most quickly among all possible bases. We therefore lossily compress every KL basis function with Embedded Zerotree Wavelet (EZW) coding, with losses that differ substantially depending on the function's contribution to the images. The paper presents a new fast, low-memory algorithm of KL basis construction for compression of correlated image ensembles that enables the OICKL system to run on common hardware. We also present a procedure for determining the optimal compression losses of the KL basis functions. It uses a modified EZW coder which produces the whole PSNR (bit-rate) curve in a single compression pass.

  9. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing-rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms, each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero-memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
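The simulation procedure described above can be sketched end-to-end: analyze each single-round record with an orthonormal wavelet transform, estimate the per-coefficient mean and standard deviation over the ensemble, then synthesize new records from Gaussian draws. The Haar basis below is an illustrative choice (the paper does not commit to a specific wavelet here), and the zero-memory nonlinear correction is omitted.

```python
import numpy as np

def haar_analysis(x):
    """Full orthonormal Haar decomposition of a length-2**k record."""
    a = np.asarray(x, dtype=float)
    coeffs = []
    while len(a) > 1:
        coeffs.append((a[::2] - a[1::2]) / np.sqrt(2))  # detail at this level
        a = (a[::2] + a[1::2]) / np.sqrt(2)
    coeffs.append(a)                                    # final approximation
    return coeffs

def haar_synthesis(coeffs):
    """Inverse of haar_analysis."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def simulate_from_ensemble(records, rng):
    """Synthesize one record whose wavelet coefficients are Gaussian draws
    with the per-coefficient mean/std estimated from the ensemble."""
    all_coeffs = [haar_analysis(r) for r in records]
    sim = []
    for level in range(len(all_coeffs[0])):
        stack = np.stack([c[level] for c in all_coeffs])
        sim.append(rng.normal(stack.mean(axis=0), stack.std(axis=0)))
    return haar_synthesis(sim)
```

Because the coefficient statistics are estimated position by position, the synthetic record inherits the nonstationary envelope of the ensemble rather than being stationary noise.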

  10. Information retrieval system utilizing wavelet transform

    DOEpatents

    Brewster, Mary E.; Miller, Nancy E.

    2000-01-01

    A method for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.

  11. Wavelet Analysis of Long GRB Profiles

    NASA Astrophysics Data System (ADS)

    Greene, J. E.; Norris, J. P.; Bonnell, J. T.

    1997-12-01

    Previously, time-dilation studies have been performed in which gamma-ray burst (GRB) time profiles were analyzed for average wavelet amplitude integrated over time, or in which widths of average profiles in peak registration were measured (e.g., Norris et al. 1994, ApJ, 424, 540). Here we investigate average wavelet amplitude as a function of position and timescale, using the 'Mexican Hat' as the orthonormal basis function. The sample consists of more than 825 long GRBs (duration > 2 s) recorded by the Burst and Transient Source Experiment (BATSE). In order to nullify brightness bias, signal-to-noise levels and intensities were equalized for all bursts in the sample. In the time dimension, profiles were peak-aligned, and ranked according to intensity into six brightness groups. We find that the average, peak-aligned wavelet transforms of dim bursts evince greater activity on longer timescales at all times than do bright bursts. We quantify the degree of self-similarity of this time-dilation effect by stretching and redshifting profiles of bright bursts by a factor of two and comparing their average wavelet transforms with those of dimmer burst groups.
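The scale-dependent average amplitude measure used in this study can be sketched with a Mexican-hat (Ricker) continuous wavelet transform. This is a simplified stand-in (L2-normalized wavelets, plain convolution) for the analysis described in the abstract:

```python
import numpy as np

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet at scale a, L2-normalized so that
    responses at different scales are comparable."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    w = (1.0 - x**2) * np.exp(-(x**2) / 2.0)
    return w / np.sqrt(np.sum(w**2))

def avg_wavelet_amplitude(profile, scales):
    """Mean absolute CWT response at each scale."""
    out = []
    for a in scales:
        support = int(10 * a) | 1          # odd-length support, ~10 scales
        c = np.convolve(profile, ricker(support, a), mode="same")
        out.append(np.mean(np.abs(c)))
    return np.array(out)
```

A time-dilated (stretched) profile shifts its wavelet amplitude toward longer timescales relative to a narrow one, mirroring the dim-versus-bright burst comparison in the abstract.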

  12. Information retrieval system utilizing wavelet transform

    SciTech Connect

    Brewster, M.E.; Miller, N.E.

    2000-05-30

    A method is disclosed for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.

  13. Mathematical theorems of adaptive wavelet transform

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Telfer, Brian A.

    1994-03-01

The computational efficiency of the adaptive wavelet transform (AWT) is due both to the compact support closely matching the signal characteristics and to the larger redundancy factor of the superposition-mother s(x) (super-mother for short), created adaptively by a linear superposition of other admissible mother wavelets. We prove that the super-mother always forms a complete basis, usually with a higher redundancy number than its constituent C.O.N. bases. Then, in terms of Daubechies frame redundancy, we prove the robustness of the super-mother: it suffers less noise contamination, since noise is everywhere and redundant sampling by band-passing can suppress the noise and enhance the signal. Since the continuous super-mother has been created off-line by least-mean-squares (LMS) fitting with neural nets and is formulated as a discrete approximation in terms of high-pass and low-pass filter-bank coefficients, such digital subband coding via QMF saves the in-situ computational time of the AWT. Moreover, the power of the adaptive wavelet transform lies in its potential for massively parallel real-time implementation by artificial neural networks, where each node is a daughter wavelet, similar to a radial basis function using dyadic affine scaling.

  14. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  15. Parallel adaptive wavelet collocation method for PDEs

    NASA Astrophysics Data System (ADS)

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using tree-like structure with tree roots starting at a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  16. Block-based adaptive lifting schemes for multiband image compression

    NASA Astrophysics Data System (ADS)

    Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2004-02-01

In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture simultaneously the spatial and the spectral redundancies. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this end, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit rate is achieved by the proposed adaptive coding method w.r.t. the non-adaptive one.

  17. Transversal versus lifting approach to motion-compensated temporal discrete wavelet transform of image sequences: equivalence and tradeoffs

    NASA Astrophysics Data System (ADS)

    Konrad, Janusz

    2004-01-01

    Lifting-based implementations of various discrete wavelet transforms applied in the temporal direction under motion compensation have recently become a very powerful tool in video compression research. We present in this paper a theoretical analysis of motion compensation in both transversal and lifted implementations of such transforms. We derive conditions for perfect reconstruction in the case of the motion-compensated transversal discrete wavelet transform. We also derive conditions on the motion transformation assuring that a motion-compensated lifting scheme is exactly equivalent to its transversal counterpart. In general, these conditions require that the motion transformation allow composition and be invertible. Unfortunately, many motion models do not obey these properties, thus inducing subband decomposition errors (prior to compression). We propose an alternative approach to motion compensation in the case of the Haar transform. This new approach poses no constraints on motion: the motion-compensated lifted Haar transform is exactly equivalent to its transversal counterpart, and the latter obeys perfect reconstruction, both regardless of the motion transformation used. This new approach, however, does not extend to the 5/3 or any higher-order discrete wavelet transform.
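
    The key property for the Haar case, that motion-compensated lifting remains invertible regardless of the motion used, can be illustrated with a toy example in which a global circular shift stands in for the motion warp (an assumption made purely for brevity; real motion fields are per-pixel):

```python
def warp(frame, v):
    # Circular shift stands in for an invertible motion warp (toy assumption)
    v %= len(frame)
    return frame[-v:] + frame[:-v] if v else list(frame)

def mc_haar_forward(even, odd, mv):
    """Motion-compensated lifted Haar along time: the lifting structure makes
    the transform invertible no matter what warp is used in the steps."""
    # Predict: high band = odd frame minus motion-compensated even frame
    h = [o - w for o, w in zip(odd, warp(even, mv))]
    # Update: low band = even frame plus half the back-warped high band
    l = [e + hw // 2 for e, hw in zip(even, warp(h, -mv))]
    return l, h

def mc_haar_inverse(l, h, mv):
    # Undo update, then undo predict: exact because each step is subtracted back
    even = [li - hw // 2 for li, hw in zip(l, warp(h, -mv))]
    odd = [hi + w for hi, w in zip(h, warp(even, mv))]
    return even, odd
```

    Reconstruction is perfect for any value of mv: each lifting step only adds a function of already-known data, so it can always be subtracted back out.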

  18. Classification of mammographic microcalcifications using wavelets

    NASA Astrophysics Data System (ADS)

    Chitre, Yateen S.; Dhawan, Atam P.; Moskowitz, Myron; Sarwal, Alok; Bonasso, Christine; Narayan, Suresh B.

    1995-05-01

    Breast cancer is one of the leading causes of death among women. Breast cancer can be detected earlier by mammography than by any other non-invasive examination. About 30% to 50% of breast cancers demonstrate tiny granule-like deposits of calcium called microcalcifications. It is difficult to distinguish between benign and malignant cases based on an examination of calcification regions, especially in hard-to-diagnose cases. We investigate the potential of using energy and entropy features computed from wavelet packets for their correlation with malignancy. Two types of Daubechies discrete filters were used as prototype wavelets. The energy and entropy features were computed for 128 benign and 63 malignant cases and analyzed using a multivariate cluster analysis and a univariate statistical analysis to reduce the feature set to a `five best' set of features. The efficacy of the reduced feature set in discriminating between the malignant and benign categories was evaluated using different multilayer perceptron architectures. The multilayer perceptron was trained using the backpropagation algorithm for various training and test set sizes. In each case, 40 partitions of the data set were used to set up the training and test sets. The performance of the features was evaluated by computing the best area under the relative operating characteristic (ROC) curve and the average area under the ROC curve. The performance of the features computed from the wavelet packets was compared to that of a second set of features consisting of the wavelet packet features, image structure features and cluster features. The classification results are encouraging and indicate the potential of using features derived from wavelet packets to discriminate microcalcification regions into benign and malignant categories.
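
    The energy and entropy features described above can be sketched with a simple Haar wavelet-packet decomposition (the paper uses Daubechies filters; Haar is chosen here only to keep the example short):

```python
import math

def haar_step(x):
    # One Haar analysis step: pairwise averages (low) and differences (high)
    lo = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    hi = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x) - 1, 2)]
    return lo, hi

def packet_features(x, levels=2):
    """Energy and Shannon entropy of every node of a wavelet-packet tree:
    the kind of per-node features a classifier could be trained on."""
    nodes = [list(x)]
    for _ in range(levels):
        # A packet decomposition splits BOTH the low and high branches
        nodes = [half for node in nodes for half in haar_step(node)]
    feats = []
    for node in nodes:
        energy = sum(c * c for c in node)
        probs = [c * c / energy for c in node] if energy else []
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        feats.append((energy, entropy))
    return feats
```

    Each node yields one (energy, entropy) pair, so a depth-2 packet tree of a 1-D signal gives four feature pairs.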

  19. Priorization of region-of-interest (ROI) using embedded coding of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Martel, Luc; Zaccarin, Andre

    1998-07-01

    This paper addresses the problem of prioritizing, i.e., preserving with higher fidelity, regions-of-interest during image compression. Regions-of-interest are found, for example, in medical imagery, where only a small area is useful for diagnosis, or in surveillance images, where targets have to be identified and tracked. These ROI are often characterized by their fine details, which therefore need to be preserved if the image is to be of any use after it is decompressed. Wavelet-based image compression is appropriate for such tasks because of its localization property. We present an algorithm, based on Shapiro's popular EZW (Embedded image coding using Zerotrees of Wavelet coefficients), to prioritize regions-of-interest. A non-uniform quantizer with smaller steps for smaller coefficients is used on the coefficients of the ROI. This allows the fine details of the ROI to be transmitted first, and successive approximation quantization to be used to reduce the quantization error on the larger coefficients of the image, ROI or non-ROI. Simulation results show that this approach efficiently preserves the fine details of the ROI.
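
    The non-uniform quantizer idea, smaller steps for smaller ROI coefficients so that fine detail survives early in the embedded bit stream, can be sketched as follows (the step sizes and threshold are illustrative choices, not the paper's):

```python
def roi_quantize(coeff, in_roi, base_step=8):
    """Toy non-uniform quantiser: ROI coefficients get a finer step than the
    rest of the image, and small ROI coefficients (the fine details) get the
    finest step of all, so they are preserved from the start."""
    if in_roi:
        step = base_step // 4 if abs(coeff) < base_step else base_step // 2
    else:
        step = base_step
    index = int(coeff / step)         # quantiser index, truncated toward zero
    return index, index * step        # (index to code, dequantised value)
```

    A small coefficient inside the ROI survives quantization, while the same coefficient outside the ROI is rounded away, which is exactly the prioritization the scheme aims for.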

  20. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on satellites, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged, in a deeply coupled way, with a DSC strategy based on Slepian-Wolf (SW) coding with QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches.
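
    The bit-plane encoder (BPE) stage can be illustrated with a minimal sketch that splits coefficient magnitudes into planes, most significant first; the DSC/Slepian-Wolf coupling with QC-LDPC codes is far beyond this toy example:

```python
def bit_planes(coeffs, nbits=8):
    """Split coefficient magnitudes into bit planes, most significant plane
    first, mimicking the scan order of a simplified bit-plane encoder."""
    return [[(abs(c) >> b) & 1 for c in coeffs]
            for b in range(nbits - 1, -1, -1)]

def from_bit_planes(planes, signs):
    # Rebuild magnitudes plane by plane, then reapply the signs
    mags = [0] * len(planes[0])
    for plane in planes:
        mags = [(m << 1) | bit for m, bit in zip(mags, plane)]
    return [s * m for s, m in zip(signs, mags)]
```

    Sending planes in this order gives an embedded stream: truncating it after any plane still yields a coarse approximation of every coefficient.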

  1. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on satellites, where resources such as power, memory, and processing capacity, are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged, in a deeply coupled way, with a DSC strategy based on Slepian-Wolf (SW) coding with QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741

  2. Spatiotemporal compression for efficient storage and transmission of high-resolution electrocorticography data.

    PubMed

    Kim, Taehoon; Artan, N Sertac; Viventi, Jonathan; Chao, H Jonathan

    2012-01-01

    High-resolution electrocorticography (HR-ECoG) has emerged as a key strategic technology for recording localized neural activity with high temporal and spatial resolution, with potential applications in brain-computer interfaces (BCI) and seizure detection for epilepsy. However, HR-ECoG has 400 times the resolution of conventional ECoG, making it a challenge to process, transmit and store HR-ECoG data. Therefore, simple and efficient compression algorithms are vital for the feasibility of implantable wireless medical devices for HR-ECoG recordings. In this paper, following the observation that HR-ECoG signals have both high spatial and high temporal correlations, similar to video and image signals, various compression methods suitable for video and images (based on motion estimation, the discrete cosine transform (DCT), and the discrete wavelet transform (DWT)) are investigated for compressing HR-ECoG data. We first simplify these methods to satisfy the low-power requirements of implantable devices. Then, we demonstrate that spatiotemporal compression methods produce up to 46% more data reduction on HR-ECoG data than compression methods using only spatial compression. We further show that this data reduction can be achieved with low hardware complexity. In particular, among the methods investigated, spatiotemporal compression using DCT-based methods provides the best trade-off between hardware complexity and compression performance, and thus we conclude that DCT-based compression is a promising solution for ultralow-power implantable devices for HR-ECoG.
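
    A DCT-based compression step of the kind compared in the paper can be sketched as a transform followed by keeping only the largest-magnitude coefficients (a naive 1-D DCT-II on a single trace is used here for brevity; the paper's methods and parameters are not reproduced):

```python
import math

def dct_1d(x):
    """Orthonormal DCT-II of a 1-D sample trace (naive O(N^2) form)."""
    n = len(x)
    out = []
    for k in range(n):
        acc = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * acc)
    return out

def compress_trace(trace, keep=2):
    """Transform-and-threshold sketch: keep only the `keep` largest-magnitude
    DCT coefficients and zero the rest, the basic mechanism by which
    transform coding exploits correlation in recordings."""
    c = dct_1d(trace)
    thresh = sorted((abs(v) for v in c), reverse=True)[keep - 1]
    return [v if abs(v) >= thresh else 0.0 for v in c]
```

    Strongly correlated samples concentrate their energy in a few low-frequency coefficients, so most coefficients can be dropped with little error.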

  3. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept for a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear the less blurred prints. The de-smearing algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology which uses a 2D STAR mother wavelet that can efficiently locate the fork feature anywhere on a fingerprint in parallel, independent of its scale, shift, and rotation. Such a combined system can achieve high data compression, so that prints can be sent through a binary facsimile machine which, combined with a tabletop computer, can provide automatic fingerprint identification (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given on how to reduce the crime rate by upgrading today's police office technology in light of military expertise in ATR.

  4. A symmetrical image encryption scheme in wavelet and time domain

    NASA Astrophysics Data System (ADS)

    Luo, Yuling; Du, Minghui; Liu, Junxiu

    2015-02-01

    There has been increasing concern for effective storage and secure transmission of multimedia information over the Internet. Accordingly, a great variety of encryption schemes have been proposed to ensure information security during transmission, but most current approaches are designed to diffuse the data only in the spatial domain, which results in reduced storage efficiency. A lightweight image encryption strategy based on chaos is proposed in this paper. The encryption process is designed in the transform domain. The original image is decomposed into approximation and detail components using the integer wavelet transform (IWT); then, as the more important component of the image, the approximation coefficients are diffused by secret keys generated from a spatiotemporal chaotic system, followed by the inverse IWT to construct the diffused image; finally, a plain permutation is performed on the diffused image using the Logistic map in order to further reduce the correlation between adjacent pixels. Experimental results and performance analysis demonstrate that the proposed scheme is an efficient, secure and robust encryption mechanism that realizes effective coding compression to satisfy desirable storage requirements.
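
    The diffusion stage driven by a chaotic map can be sketched with the Logistic map generating a keystream that is XORed with the data (the map parameters and byte extraction here are illustrative; the paper uses a spatiotemporal chaotic system with its own key schedule):

```python
def logistic_keystream(x0, n, r=3.99):
    """Keystream bytes from the Logistic map x -> r*x*(1-x). Tiny changes in
    the key x0 produce a completely different stream, which is what makes
    chaotic maps attractive for diffusion."""
    stream, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 256) & 0xFF)
    return stream

def diffuse(values, x0):
    # XOR with the keystream; running the same routine twice with the same
    # key restores the original values, so it both encrypts and decrypts
    ks = logistic_keystream(x0, len(values))
    return [v ^ k for v, k in zip(values, ks)]
```

    In the paper the diffused data are the IWT approximation coefficients rather than raw pixels; the XOR round-trip property is the same either way.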

  5. Wavelet-enabled progressive data Access and Storage Protocol (WASP)

    NASA Astrophysics Data System (ADS)

    Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.

    2015-12-01

    Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.

  6. Efficient architecture for adaptive directional lifting-based wavelet transform

    NASA Astrophysics Data System (ADS)

    Yin, Zan; Zhang, Li; Shi, Guangming

    2010-07-01

    Adaptive directional lifting-based wavelet transform (ADL) has better performance than conventional lifting in both image compression and de-noising. However, no architecture has been proposed for its hardware implementation because of its high computational complexity and large internal memory requirements. In this paper, we propose a four-stage pipelined architecture for two-dimensional (2D) ADL with fast computation and high data throughput. The proposed architecture comprises column direction estimation, column lifting, row direction estimation and row lifting, which are performed in parallel in a pipelined mode. Since the column-processed data is transposed, the row processor can reuse the column processor, which decreases the design complexity. In the lifting step, predict and update are also performed in parallel. For an 8×8 image sub-block, the proposed architecture can finish the ADL forward transform within 78 clock cycles. The architecture is implemented on a Xilinx Virtex5 device, on which the frequency can reach 367 MHz. The processing time is 212.5 ns, which can meet the requirements of a real-time system.

  7. Prediction of coefficients for lossless compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Ruedin, Ana M. C.; Acevedo, Daniel G.

    2005-08-01

    We present a lossless compressor for multispectral Landsat images that exploits interband and intraband correlations. The compressor operates on blocks of 256 x 256 pixels and performs two kinds of prediction. For bands 1, 2, 3, 4, 5, 6.2 and 7, the compressor performs an integer-to-integer wavelet transform, which is applied to each block separately. The wavelet coefficients that have not yet been encoded are predicted by means of a linear combination of already coded coefficients that belong to the same orientation and spatial location in the same band, and coefficients at the same location in other spectral bands. A fast block classification is performed in order to use the best weights for each landscape. The prediction errors, or differences, are finally coded with an entropy-based coder. For band 6.1, we do not use wavelet transforms; instead, a median edge detector is applied to predict each pixel from the information of the neighbouring pixels and the equalized pixel from band 6.2. This technique better exploits the great similarity between the histograms of bands 6.1 and 6.2. The prediction differences are finally coded with a context-based entropy coder. The two kinds of prediction used reduce both spatial and spectral correlations, increasing the compression rates. Our compressor has been shown to be superior to the lossless compressors Winzip, LOCO-I, PNG and JPEG2000.
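
    The median edge detector mentioned for band 6.1 is the classic LOCO-I/JPEG-LS predictor, which switches between neighbours depending on the local gradient:

```python
def med_predict(a, b, c):
    """Median edge detector (the LOCO-I/JPEG-LS predictor): a is the left
    neighbour, b the neighbour above, c the upper-left neighbour of the
    pixel being predicted."""
    if c >= max(a, b):
        return min(a, b)      # edge detected: predict along the edge
    if c <= min(a, b):
        return max(a, b)
    return a + b - c          # smooth region: planar prediction
```

    The small prediction residuals that result are what the context-based entropy coder then compresses.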

  8. A Robust and Non-Blind Watermarking Scheme for Gray Scale Images Based on the Discrete Wavelet Transform Domain

    NASA Astrophysics Data System (ADS)

    Bakhouche, A.; Doghmane, N.

    2008-06-01

    In this paper, a new adaptive watermarking algorithm based on the wavelet transform is proposed for still images. The two major applications of watermarking are protecting copyrights and authenticating photographs. Our robust watermarking [3] [22] is used for the copyright protection of owners. The main reason for protecting copyrights is to prevent image piracy when the provider distributes the image on the Internet. Embedding the watermark in a low-frequency band is most resistant to JPEG compression, blurring, added Gaussian noise, rescaling, rotation, cropping and sharpening, whereas embedding in a high-frequency band is most resistant to histogram equalization, intensity adjustment and gamma correction. In this paper, we extend the idea to embed the same watermark in two bands (the LL and HH bands, or the LH and HL bands) at the second level of the Discrete Wavelet Transform (DWT) decomposition. Our generalization includes all four bands (LL, HL, LH, and HH), modifying the coefficients of all four bands in order to find a compromise between an acceptable imperceptibility level and resistance to attacks.

  9. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. 

  10. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  11. Centralized and interactive compression of multiview images

    NASA Astrophysics Data System (ADS)

    Gelman, Andriy; Dragotti, Pier Luigi; Velisavljević, Vladan

    2011-09-01

    In this paper, we propose two multiview image compression methods. The basic concept of both schemes is the layer-based representation, in which the captured three-dimensional (3D) scene is partitioned into layers, each related to a constant depth in the scene. The first algorithm is a centralized scheme in which each layer is de-correlated using a separable multi-dimensional wavelet transform applied across the viewpoint and spatial dimensions. The transform is modified to efficiently deal with occlusions and disparity variations at different depths. Although the method achieves a high compression rate, the joint encoding approach requires the transmission of all data to the users. By contrast, in an interactive setting, the users request only a subset of the captured images, but in an order that is unknown a priori. We address this scenario in the second algorithm using Distributed Source Coding (DSC) principles, which reduces the inter-view redundancy and facilitates random access at the image level. We demonstrate that the proposed centralized and interactive methods outperform H.264/MVC and JPEG 2000, respectively.

  12. Efficient lossless compression scheme for multispectral images

    NASA Astrophysics Data System (ADS)

    Benazza-Benyahia, Amel; Hamdi, Mohamed; Pesquet, Jean-Christophe

    2001-12-01

    Huge amounts of data are generated thanks to the continuous improvement of remote sensing systems. Archiving this tremendous volume of data is a real challenge which requires lossless compression techniques. Furthermore, progressive coding constitutes a desirable feature for telebrowsing. To this purpose, a compact and pyramidal representation of the input image has to be generated. Separable multiresolution decompositions have already been proposed for multicomponent images, allowing each band to be decomposed separately. It seems, however, more appropriate to also exploit the spectral correlations. For hyperspectral images, the solution is to apply a 3D decomposition according to the spatial and spectral dimensions. This approach is not appropriate for multispectral images because of the reduced number of spectral bands. In recent work, we have proposed a nonlinear subband decomposition scheme with perfect reconstruction which efficiently exploits both the spatial and the spectral redundancies contained in multispectral images. In this paper, the problem of coding the coefficients of the resulting subband decomposition is addressed. More precisely, we propose an extension to the vector case of Shapiro's embedded zerotrees of wavelet coefficients (V-EZW), which achieves further savings in the bit stream. Simulations carried out on SPOT images indicate that the resulting global compression package outperforms existing methods.

  13. HYDRODYNAMIC COMPRESSIVE FORGING.

    DTIC Science & Technology

    HYDRODYNAMICS), (*FORGING, COMPRESSIVE PROPERTIES, LUBRICANTS, PERFORMANCE(ENGINEERING), DIES, TENSILE PROPERTIES, MOLYBDENUM ALLOYS, STRAIN...MECHANICS), BERYLLIUM ALLOYS, NICKEL ALLOYS, CASTING ALLOYS, PRESSURE, FAILURE(MECHANICS).

  14. Image Compression Algorithm Altered to Improve Stereo Ranging

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2008-01-01

    A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.

  15. Fingerprint spoof detection using wavelet based local binary pattern

    NASA Astrophysics Data System (ADS)

    Kumpituck, Supawan; Li, Dongju; Kunieda, Hiroaki; Isshiki, Tsuyoshi

    2017-02-01

    In this work, a fingerprint spoof detection method using an extended feature, namely the Wavelet-based Local Binary Pattern (Wavelet-LBP), is introduced. Conventional wavelet-based methods calculate the wavelet energy of sub-band images as the feature for discrimination, whereas we propose using the Local Binary Pattern (LBP) operation to capture the local appearance of the sub-band images instead. The fingerprint image is first decomposed by the two-dimensional discrete wavelet transform (2D-DWT), and then LBP is applied to the derived wavelet sub-band images. Furthermore, the extracted features are used to train a Support Vector Machine (SVM) classifier to create a model for classifying fingerprint images as genuine or spoofed. Experiments on the Fingerprint Liveness Detection Competition (LivDet) datasets show that the proposed feature improves fingerprint spoof detection.
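
    The basic LBP operation applied to each sub-band image can be sketched for a single 3x3 patch (the neighbour ordering below is one common convention; the paper's exact variant may differ):

```python
def lbp_code(patch):
    """8-neighbour local binary pattern of the centre pixel of a 3x3 patch:
    each neighbour contributes one bit (1 if it is >= the centre), read
    clockwise from the top-left corner."""
    centre = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(order):
        if patch[r][c] >= centre:
            code |= 1 << bit
    return code
```

    Sliding this over a sub-band image and histogramming the 256 possible codes yields the texture descriptor that the SVM is trained on.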

  16. Multiwavelets on the interval and divergence-free wavelets

    NASA Astrophysics Data System (ADS)

    Lakey, Joseph D.; Pereyra, M. Cristina

    1999-10-01

    This manuscript gives a construction of divergence-free multiwavelets which combines the Hardin-Marasovich (HM) construction with a recipe of Strela for increasing or decreasing regularity of biorthogonal wavelets. Strela's process preserves symmetry of the HM wavelets. This enables the divergence-free wavelets to be suitably adapted to the analysis of divergence-free vector fields whose boundary traces are tangent vectors.

  17. Implementation of aeronautic image compression technology on DSP

    NASA Astrophysics Data System (ADS)

    Wang, Yujing; Gao, Xueqiang; Wang, Mei

    2007-11-01

    According to the design characteristics and requirements of an aeronautic image compression system, a lifting-scheme wavelet and the SPIHT algorithm were selected as the key parts of the software implementation, which is described in detail. In order to improve execution efficiency, border processing was reasonably simplified and the SPIHT (Set Partitioning in Hierarchical Trees) algorithm was also partly modified. The results showed that the selected scheme achieves a 0.4 dB improvement in PSNR (peak signal-to-noise ratio) compared with Shapiro's classical scheme. To improve operating speed, the hardware system was then designed based on a DSP, and many optimization measures were successfully applied. Practical tests showed that the system can meet real-time requirements with good reconstructed image quality, and it has been used in a practical aeronautic image compression system.

  18. Compressed Sensing Based Fingerprint Identification for Wireless Transmitters

    PubMed Central

    Zhao, Caidan; Wu, Xiongpeng; Huang, Lianfen; Yao, Yan; Chang, Yao-Chung

    2014-01-01

    Most of the existing fingerprint identification techniques are unable to distinguish different wireless transmitters whose emitted signals are highly attenuated, propagate over long distances, and have strongly similar transient waveforms. Therefore, this paper proposes a new method to identify different wireless transmitters based on compressed sensing. A data acquisition system is designed to capture the wireless transmitter signals. The complex analytical wavelet transform is used to obtain the envelope of the transient signal, and the corresponding features are extracted using compressed sensing theory. Feature selection utilizing minimum redundancy maximum relevance (mRMR) is employed to obtain the optimal feature subsets for identification. The results show that the proposed method is more efficient for the identification of wireless transmitters with similar transient waveforms. PMID:24892053

  19. Wavelets and their applications past and future

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.

    2009-04-01

    As this is a conference on mathematical tools for defense, I would like to dedicate this talk to the memory of Louis Auslander, who, through his insights and visionary leadership, brought powerful new mathematics into DARPA; he provided the main impetus for the development and insertion of wavelet-based processing in defense. My goal here is to describe the evolution of a stream of ideas in Harmonic Analysis, ideas which in the past have mostly been applied to the analysis and extraction of information from physical data, and which are now increasingly applied to organize and extract information and knowledge from any set of digital documents, from text to music to questionnaires. This form of signal processing on digital data is part of the future of wavelet analysis.

  20. Analyzing Planck-Like Data with Wavelets

    NASA Astrophysics Data System (ADS)

    Sanz, J. L.; Barreiro, R. B.; Cayón, L.; Martinez-González, E.; Ruiz, G. A.; Diaz, F. J.; Argüeso, F.; Toffolatti, L.

    Basics on the continuous and discrete wavelet transform with two scales are outlined. We study maps representing anisotropies in the cosmic microwave background radiation (CMB), and the relation to the standard approach, based on the C_l's, is established through the introduction of a wavelet spectrum. We apply this technique to small angular scale CMB map simulations of size 12.8 x 12.8 degrees, filtered with a 4'.5 Gaussian beam. This resolution resembles the experimental one expected for future high resolution experiments (e.g. the Planck mission). We consider temperature fluctuations derived from standard, open and flat-Lambda CDM models. We also introduce Gaussian noise (uniform and non-uniform) at different S/N levels, and results are given regarding denoising.

  1. Continuous wavelet analysis of coherent structures

    NASA Technical Reports Server (NTRS)

    Farge, M.; Guezennec, Y.; Ho, C. M.; Meneveau, C.

    1990-01-01

    We perform an analysis of planar cuts through three-dimensional turbulent fields (planar channel flow and mixing layer) using the 2D continuous wavelet transform. We propose two new diagnostics: (1) a measure of intermittency I(r, vector x), which is the ratio of local energy to average energy at a given scale r; and (2) a local Reynolds number, defined on the local velocity contribution at a given scale, computed from the wavelet transform of the three velocity components, the scale of the transform, and the molecular viscosity; this gives a representation of the local non-linearity of the flow viewed in both space and scale. We find, for the analyzed flows, strong small-scale intermittency located in the ejection regions of the channel flow and in the vortex core of the mixing layer.

  2. ECG signal denoising via empirical wavelet transform.

    PubMed

    Singh, Omkar; Sunkaria, Ramesh Kumar

    2016-12-29

    This paper presents new methods for baseline wander correction and powerline interference reduction in electrocardiogram (ECG) signals using the empirical wavelet transform (EWT). During data acquisition, various noise sources such as powerline interference, baseline wander and muscle artifacts contaminate the information-bearing ECG signal. For better analysis and interpretation, the ECG signal must be free of noise. In the present work, a new approach is used to filter baseline wander and powerline interference from the ECG signal. The technique utilized is the empirical wavelet transform, a recent method for computing the building modes of a given signal. Its performance as a filter is compared to standard linear filters and empirical mode decomposition. The results show that EWT delivers a better performance.
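    The EWT itself is beyond a short sketch, but the standard wavelet baseline such papers compare against, removing baseline wander by zeroing the approximation band of a deep DWT, is compact. PyWavelets and every parameter below are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt

def remove_baseline_wander(ecg, wavelet="db4", level=8):
    """DWT baseline correction: decompose, zero the approximation
    (lowest-frequency) coefficients, reconstruct."""
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])    # drop the baseline band
    return pywt.waverec(coeffs, wavelet)[: len(ecg)]

fs = 360                                    # Hz, a common Holter rate
t = np.arange(10 * fs) / fs
clean = np.sin(2 * np.pi * 8 * t)           # toy stand-in for ECG content
drift = 0.8 * np.sin(2 * np.pi * 0.2 * t)   # baseline wander near 0.2 Hz
corrected = remove_baseline_wander(clean + drift)
```

    At level 8 the approximation band reaches only to fs/2^9 ≈ 0.7 Hz, so the 0.2 Hz drift is removed while the 8 Hz content survives in the detail bands.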

  3. Wavelets for full reconfigurable ECG acquisition system

    NASA Astrophysics Data System (ADS)

    Morales, D. P.; García, A.; Castillo, E.; Meyer-Baese, U.; Palma, A. J.

    2011-06-01

    This paper presents the use of wavelet cores for a fully reconfigurable electrocardiogram (ECG) signal acquisition system. The system comprises two reconfigurable devices, an FPGA and an FPAA. The FPAA is in charge of ECG signal acquisition, since this device is a versatile and reconfigurable analog front-end for biosignals. The FPGA is in charge of FPAA configuration, digital signal processing, and information extraction such as the heart beat rate. Wavelet analysis has become a powerful tool for ECG signal processing since it fits the ECG signal shape well. The wavelet cores have been integrated into the LabVIEW FPGA module development tool, which makes it possible to employ VHDL cores within the usual LabVIEW graphical programming environment, thus freeing the designer from tedious and time-consuming design of communication interfaces. This enables rapid testing and graphical representation of results.

  4. Wavelet analysis of the impedance cardiogram waveforms

    NASA Astrophysics Data System (ADS)

    Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.

    2012-12-01

    Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z, which are related to distinct physiological events in the cardiac cycle. The objective of this work is the validation of a new method for processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registration. An original wavelet differentiation algorithm makes it possible to combine filtering with calculation of the derivatives of the rheocardiogram. The proposed approach can be used in clinical practice for early diagnosis of cardiovascular system remodelling in the course of different pathologies.

  5. Wavelets, Fractals, and Radial Basis Functions

    DTIC Science & Technology

    2007-11-02

    Index terms: localization, multiresolution, radial basis functions, refinement filter, scaling functions, self-similarity, splines, tempered distributions, two-scale relation. A surviving fragment of the abstract notes one linear constraint per basis function, with the corresponding linear system of equations invertible under relatively mild conditions [11]. Reference fragments cite G. Strang and T. Q. Nguyen, Wavelets and Filter Banks (Wellesley-Cambridge Press), among others.

  6. Wavelets, Signal Processing and Matrix Computations

    DTIC Science & Technology

    1994-09-01

    Multirate systems, which find application in the design and analysis of filter banks, are central to this work. Topics include multirate time-frequency analysis; time-varying filter banks; vector filter banks and vector-valued wavelets; and computational multirate signal processing. Reference fragments cite P. P. Vaidyanathan, Multirate Systems and Filter Banks (Englewood Cliffs, NJ: Prentice Hall, 1993), and R. E. Crochiere and S. A. Webber, among others.

  7. Wavelet Transforms in Parallel Image Processing

    DTIC Science & Technology

    1994-01-27

    A single operation is performed to go either up or down a level on the pyramid, and the algorithm can be extended to operate on higher-dimensional input signals. Vector quantization extends quantization to the multi-dimensional case: a block of pixels, for example 4 x 4 pixels, forming a vector of k (= 16) dimensions, is quantized as a whole. The wavelet transform provides a multiscale approach for the representation and characterization of signals and images, and one can select a suitable or an optimal wavelet and its associated filters.

  8. Detection of microcalcifications in mammograms using wavelets

    NASA Astrophysics Data System (ADS)

    Strickland, Robin N.; Hahn, Hee I.

    1994-10-01

    Clusters of fine, granular microcalcifications in mammograms may be an early sign of disease. Individual grains are difficult to detect and segment due to size and shape variability and because the background mammogram texture is typically inhomogeneous. We present a two-stage method based on wavelet transforms for detecting and segmenting calcifications. The first stage consists of a full resolution wavelet transform, which is simply the conventional filter bank implementation without downsampling, so that all sub-bands remain at full size. Four octaves are computed with two inter-octave voices for finer scale resolution. By appropriate selection of the wavelet basis the detection of microcalcifications in the relevant size range can be nearly optimized in the detail sub-bands. In fact, the separable 2D filters which transform the input image into the HH detail sub-bands are closely related to pre-whitening matched filters for detecting Gaussian objects (idealized microcalcifications) in Markov noise (background noise). The second stage is designed to overcome the limitations of the simplistic Gaussian assumption and provides a useful segmentation of calcification boundaries. Detected pixel sites in the LH, HL, and HH sub-bands are heavily weighted before computing the inverse wavelet transform. The LL component is omitted since gross spatial variations are of little interest. Individual microcalcifications are often greatly enhanced in the output image, to the point where straightforward thresholding can be applied to segment them. FROC curves are computed from tests using a well-known database of digitized mammograms. A true positive fraction of 85% is achieved at 0.5 false positives per image.
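    The "full resolution" (undecimated) transform at the heart of stage one keeps every sub-band at full image size, so detection weights can be applied pixel-for-pixel. A minimal sketch with PyWavelets' stationary 2-D transform follows; the wavelet, level, and uniform gain are illustrative stand-ins for the paper's matched-filter weighting.

```python
import numpy as np
import pywt

def enhance_details(image, wavelet="db2", level=2, gain=4.0):
    """Undecimated (stationary) 2-D wavelet transform: all sub-bands stay
    full size. Boost the detail bands, keep the approximation, reconstruct."""
    coeffs = pywt.swt2(image, wavelet, level=level)
    boosted = [(cA, tuple(gain * d for d in details)) for cA, details in coeffs]
    return pywt.iswt2(boosted, wavelet)

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64)) * 0.1
img[32, 32] += 3.0                  # a bright, grain-like spot
out = enhance_details(img)
```

    With gain = 1 the transform round-trips exactly, which makes the perfect-reconstruction property easy to verify; with gain > 1 the grain-like spot, whose energy lives in the detail bands, is amplified relative to the smooth background.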

  9. Some Contributions to Wavelet Based Image Coding

    DTIC Science & Technology

    2000-07-01

    MSE or PSNR. It is noted that JZW does not implement the embedded property as in EZW and SPIHT, and that the embedded property can be achieved by passing the JND-quantized wavelet coefficients to EZW or SPIHT. It is also noted that the lowest-frequency (coarsest-scale) LL band is handled differently from the other higher-frequency subbands, which can be efficiently encoded using a zero-tree encoding scheme derived from EZW and an improved version of EZW.

  10. Development of wavelet analysis tools for turbulence

    NASA Technical Reports Server (NTRS)

    Bertelrud, A.; Erlebacher, G.; Dussouillez, PH.; Liandrat, M. P.; Liandrat, J.; Bailly, F. Moret; Tchamitchian, PH.

    1992-01-01

    Presented here is the general framework and the initial results of a joint effort to derive novel research tools and easy to use software to analyze and model turbulence and transition. Given here is a brief review of the issues, a summary of some basic properties of wavelets, and preliminary results. Technical aspects of the implementation, the physical conclusions reached at this time, and current developments are discussed.

  11. A Wavelet Neural Network for SAR Image Segmentation

    PubMed Central

    Wen, Xian-Bin; Zhang, Hua; Wang, Fa-Yu

    2009-01-01

    This paper proposes a wavelet neural network (WNN) for SAR image segmentation by combining the wavelet transform and an artificial neural network. The WNN combines the multiscale analysis ability of the wavelet transform and the classification capability of the artificial neural network by setting the wavelet function as the transfer function of the neural network. Several SAR images are segmented by the network whose transfer functions are the Morlet and Mexihat functions, respectively. The experimental results show the proposed method is very effective and accurate. PMID:22400005
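    The core idea, a network whose transfer function is a wavelet, can be sketched compactly. The toy below is a random-feature simplification with a least-squares read-out, standing in for the trained WNN, and fits a 1-D regression rather than performing SAR segmentation; the real Morlet serves as the hidden-unit activation.

```python
import numpy as np

def morlet(x):
    """Real Morlet wavelet used as the network transfer function."""
    return np.cos(5.0 * x) * np.exp(-0.5 * x ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * np.pi * x[:, 0]) * np.exp(-x[:, 0] ** 2)   # toy target

# Hidden layer: random dilations/translations, Morlet activation.
n_hidden = 80
W = rng.uniform(1.0, 8.0, (1, n_hidden))    # dilations
b = rng.uniform(-8.0, 8.0, n_hidden)        # translations
H = morlet(x @ W + b)                       # (200, n_hidden) features

# Linear read-out fitted by least squares.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
mse = np.mean((y - H @ w_out) ** 2)
```

    A trained WNN would also adapt the dilations and translations by gradient descent; fixing them at random and solving only the output layer keeps the sketch deterministic.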

  12. Image denoising with the dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, Alauldeen S.; Pavlova, Olga N.; Pavlov, Alexey N.; Hramov, Alexander E.

    2016-04-01

    The purpose of this study is to compare image denoising techniques based on real and complex wavelet-transforms. Possibilities provided by the classical discrete wavelet transform (DWT) with hard and soft thresholding are considered, and influences of the wavelet basis and image resizing are discussed. The quality of image denoising for the standard 2-D DWT and the dual-tree complex wavelet transform (DT-CWT) is studied. It is shown that DT-CWT outperforms 2-D DWT at the appropriate selection of the threshold level.
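    The hard and soft thresholding rules compared in this study can be illustrated directly with PyWavelets; the sample coefficients and threshold are arbitrary.

```python
import numpy as np
import pywt

w = np.array([-3.0, -0.4, 0.2, 1.5, 4.0])   # toy wavelet coefficients
t = 1.0

hard = pywt.threshold(w, t, mode="hard")    # zero |w| < t, keep the rest
soft = pywt.threshold(w, t, mode="soft")    # also shrink survivors toward 0 by t
```

    Hard thresholding preserves the amplitude of retained coefficients but can leave spurious spikes; soft thresholding biases amplitudes downward but yields smoother reconstructions, which is why the two are compared for image denoising.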

  13. Wavelet-based verification of the quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by two indices, for scale and for localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially-localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object"-oriented verification methods, as the latter tend to exhibit strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards the goal of identifying weak physical processes contributing to forecast error, are also pointed out.

  14. Wavelet modulation: An alternative modulation with low energy consumption

    NASA Astrophysics Data System (ADS)

    Chafii, Marwa; Palicot, Jacques; Gribonval, Rémi

    2017-02-01

    This paper presents wavelet modulation, based on the discrete wavelet transform, as an alternative modulation with low energy consumption. The transmitted signal has low envelope variations, which induces a good efficiency for the power amplifier. Wavelet modulation is analyzed and compared for different wavelet families with orthogonal frequency division multiplexing (OFDM) in terms of peak-to-average power ratio (PAPR), power spectral density (PSD) properties, and the impact of the power amplifier on the spectral regrowth. The performance in terms of bit error rate and complexity of implementation are also evaluated, and several trade-offs are characterized.

  15. Biorthogonal wavelet-based method of moments for electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Zhang, Qinke

    Wavelet analysis is a technique developed in recent years in mathematics and has found usefulness in signal processing and many other engineering areas. The practical use of wavelets for the solution of partial differential and integral equations in computational electromagnetics has been investigated in this dissertation, with the emphasis on development of biorthogonal wavelet based method of moments for the solution of electric and magnetic field integral equations. The fundamentals and numerical analysis aspects of wavelet theory have been studied. In particular, a family of compactly supported biorthogonal spline wavelet bases on the n-cube (0,1)^n has been studied in detail. The wavelet bases were used in this work as a building block to construct biorthogonal wavelet bases on general domain geometry. A specific and practical way of adapting the wavelet bases to certain n-dimensional blocks or elements is proposed based on the domain decomposition and local transformation techniques used in traditional finite element methods and computer aided graphics. The element, with the biorthogonal wavelet base embedded in it, is called a wavelet element in this work. The physical domains which can be treated with this method include general curves, surfaces in 2D and 3D, and 3D volume domains. A two-step mapping is proposed for the purpose of taking full advantage of the zero-moments of wavelets. The wavelet element approach appears to offer several important advantages. It avoids the need of generating very complicated meshes required in traditional finite element based methods, and makes the adaptive analysis easy to implement. A specific implementation procedure for performing adaptive analysis is proposed. The proposed biorthogonal wavelet based method of moments (BWMoM) has been implemented by using object-oriented programming techniques. The main computational issues have been detailed, discussed, and implemented in the whole package. Numerical examples show

  16. Wavelet Transform of Fixed Pattern Noise in Focal Plane Arrays

    DTIC Science & Technology

    1994-02-01

    NAWCWPNS TP 8185, by Dr. Gary Hewer, February 1994 (final report). Only a fragment of the abstract survives: the method applies the soft-threshold nonlinearity η_t(w) = sgn(w)(|w| − t)+, with threshold t, to each empirical sample value w in the wavelet transform at d scales.

  17. Scope and applications of translation invariant wavelets to image registration

    NASA Technical Reports Server (NTRS)

    Chettri, Samir; LeMoigne, Jacqueline; Campbell, William

    1997-01-01

    The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses the possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we conclude that the notion of translation invariance is not really necessary. What is needed is affine invariance, and one way to achieve this is via the method of moment invariants. Wavelets or, in general, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.
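    The method of moment invariants can be sketched in a few lines. As a simplification, the fragment below computes Hu's first two invariants (which cover translation, scale and rotation rather than general affine maps) from normalized central moments, and a translated copy of an image gives identical values.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants from normalized central moments."""
    y, x = np.mgrid[: img.shape[0], : img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):                       # central moment
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()
    def eta(p, q):                      # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

img = np.zeros((64, 64))
img[10:20, 12:30] = 1.0                          # a bright rectangle
shifted = np.roll(np.roll(img, 15, axis=0), 9, axis=1)
```

    Because the moments are taken about the centroid, translation cancels exactly; normalization by powers of m00 removes the scale dependence.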

  18. Correlation Filtering of Modal Dynamics using the Laplace Wavelet

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Lind, Rick; Brenner, Martin J.

    1997-01-01

    Wavelet analysis allows processing of transient response data commonly encountered in vibration health monitoring tasks such as aircraft flutter testing. The Laplace wavelet is formulated as the impulse response of a single-mode system, so as to resemble data features commonly encountered in these health monitoring tasks. A correlation filtering approach is introduced using the Laplace wavelet to decompose a signal into impulse responses of single-mode subsystems. Applications using responses from flutter testing of aeroelastic systems demonstrate that modal parameters and stability estimates can be obtained by correlation filtering of free-decay data with a set of Laplace wavelets.
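    The correlation-filtering idea can be sketched as a matched-dictionary search. The parameterization below, a unit-energy damped complex exponential indexed by frequency and damping ratio, is a common form of the Laplace wavelet and is assumed here for illustration; the grid search and test signal are not from the paper.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)

def laplace_wavelet(f, zeta, t):
    """Unit-norm impulse response of a single mode: damped complex exponential."""
    wn = 2 * np.pi * f
    psi = np.exp((-zeta * wn + 1j * wn * np.sqrt(1 - zeta ** 2)) * t)
    return psi / np.linalg.norm(psi)

# Free-decay "measurement": one mode at 40 Hz with 3% damping.
y = np.exp(-0.03 * 2 * np.pi * 40 * t) * np.sin(
    2 * np.pi * 40 * np.sqrt(1 - 0.03 ** 2) * t)

# Correlation filtering: normalized inner product over a (freq, damping) grid.
freqs = np.arange(20.0, 61.0, 1.0)
zetas = np.array([0.01, 0.02, 0.03, 0.05, 0.08])
corr = np.array(
    [[abs(np.vdot(laplace_wavelet(f, z, t), y)) for z in zetas] for f in freqs])
i, j = np.unravel_index(np.argmax(corr), corr.shape)
f_est, zeta_est = freqs[i], zetas[j]
```

    The wavelet whose frequency and damping best match the free decay maximizes the correlation, which is exactly how the dictionary yields modal parameter estimates.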

  19. Applications of weakly compressible model to turbulent flow problem towards adaptive turbulence simulation

    NASA Astrophysics Data System (ADS)

    Tsuji, Takuya; Yokomine, Takehiko; Shimizu, Akihiko

    2002-11-01

    We have been engaged in the development of a multi-scale adaptive simulation technique for incompressible turbulent flow. It is designed so that the important scale components in the flow field are detected automatically by the lifting wavelet transform and solved selectively. In conventional incompressible schemes, it is very common to solve a Poisson equation for the pressure to meet the divergence-free constraint of incompressible flow. Solving the Poisson equation adaptively is possible in principle, but troublesome, because it requires regeneration of the control volumes at each time step. We therefore turned to the weakly compressible model proposed by Bao (2001). This model was derived from a zero-Mach-limit asymptotic analysis of the compressible Navier-Stokes equations and does not require solving the Poisson equation at all. However, the model is relatively new and requires demonstration studies before being combined with wavelet adaptation. In the present study, 2-D and 3-D backstep flows were selected as test problems, and the applicability of the model to turbulent flow is examined in detail. The combination of wavelet adaptation with the weakly compressible model towards adaptive turbulence simulation is also discussed.

  20. Bayesian Wavelet Shrinkage of the Haar-Fisz Transformed Wavelet Periodogram.

    PubMed

    Nason, Guy; Stevens, Kara

    2015-01-01

    It is increasingly being realised that many real world time series are not stationary and exhibit evolving second-order autocovariance or spectral structure. This article introduces a Bayesian approach for modelling the evolving wavelet spectrum of a locally stationary wavelet time series. Our new method works by combining the advantages of a Haar-Fisz transformed spectrum with a simple, but powerful, Bayesian wavelet shrinkage method. Our new method produces excellent and stable spectral estimates and this is demonstrated via simulated data and on differenced infant electrocardiogram data. A major additional benefit of the Bayesian paradigm is that we obtain rigorous and useful credible intervals of the evolving spectral structure. We show how the Bayesian credible intervals provide extra insight into the infant electrocardiogram data.

  1. Analysis of phonocardiogram signals using wavelet transform.

    PubMed

    Meziani, F; Debbal, S M; Atbi, A

    2012-08-01

    Phonocardiograms (PCG) are recordings of the acoustic waves produced by the mechanical action of the heart. They generally consist of two kinds of acoustic vibrations: heart sounds and heart murmurs. Heart murmurs are often the first signs of pathological changes of the heart valves, and are usually found during auscultation in primary health care. Heart auscultation has long been recognized as an important tool for the diagnosis of heart disease, although its accuracy is still insufficient to diagnose some heart diseases, and it does not enable the analyst to obtain both qualitative and quantitative characteristics of the PCG signals. The efficiency of diagnosis can be improved considerably by using modern digital signal processing techniques, which can provide useful and valuable information about these signals. The aim of this study is to analyse PCG signals using the wavelet transform. This analysis is based on an algorithm for the detection of heart sounds (the first and second sounds, S1 and S2) and heart murmurs using the PCG signal as the only source. The segmentation algorithm, which separates the components of the heart signal, is based on denoising by the wavelet transform (DWT). This algorithm makes it possible to isolate individual sounds (S1 or S2) and murmurs. Thus, the analysis of various PCG signals using the wavelet transform can provide a wide range of statistical parameters related to the phonocardiogram signal.
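    The envelope-and-threshold segmentation step can be illustrated on a synthetic PCG. The sketch below is illustrative only: the band choice, smoothing window, and threshold are assumptions, not the authors' algorithm. It builds a detail-energy envelope from an undecimated wavelet level and counts the two bursts standing in for S1 and S2.

```python
import numpy as np
import pywt

fs = 2000
t = np.arange(0, 1.0, 1 / fs)

def burst(t0):
    """Gaussian-windowed 150 Hz tone burst centered at time t0."""
    return np.sin(2 * np.pi * 150 * t) * np.exp(-((t - t0) ** 2) / (2 * 0.01 ** 2))

pcg = burst(0.2) + 0.7 * burst(0.55)         # synthetic S1 and S2

# Detail-energy envelope from an undecimated wavelet level (125-250 Hz band).
coeffs = pywt.swt(pcg, "db4", level=3)
detail = coeffs[0][1]                        # coarsest detail band, full length
energy = np.convolve(detail ** 2, np.ones(101) / 101, mode="same")

# Threshold the envelope and count connected segments.
mask = energy > 0.2 * energy.max()
edges = np.diff(mask.astype(int))
n_segments = int((edges == 1).sum() + (1 if mask[0] else 0))
```

    On real PCGs a denoising pass precedes this step and the threshold must adapt to murmurs; here the two clean bursts yield exactly two segments.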

  2. Synchrosqueezed wavelet transform for damping identification

    NASA Astrophysics Data System (ADS)

    Mihalec, Marko; Slavič, Janko; Boltežar, Miha

    2016-12-01

    Synchrosqueezing is a procedure for improving the frequency localization of a continuous wavelet transform. This research focuses on using a synchrosqueezed wavelet transform (SWT) to determine the damping ratios of a vibrating system from a free-response signal. While synchrosqueezing is advantageous due to its localization in frequency, damping identification with the original SWT is not sufficiently accurate. Here, synchrosqueezing was researched in detail, and it was found that an error in the frequency occurs as a result of the numerical calculation of the preliminary frequencies. If this error were compensated, better damping identification would be expected. To minimize the frequency-shift error, three different strategies are investigated: the scale-dependent coefficient method, the shifted-coefficient method and the autocorrelated-frequency method. Furthermore, to improve the SWT, two synchrosqueezing criteria are introduced: the average SWT and the proportional SWT. Finally, the proposed modifications are tested against close modes and noise in the signals. It was numerically and experimentally confirmed that the SWT with the proportional criterion offers better frequency localization and performs better than the continuous wavelet transform when tested against noisy signals.
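    The wavelet-based damping identification being improved here can be sketched for a single free-decay mode: the CWT magnitude along the ridge decays as exp(−ζωn·t), so a line fit to its logarithm recovers ζ. The fragment uses a plain CWT with PyWavelets' complex Morlet, with no synchrosqueezing, and all parameters are illustrative.

```python
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
f0, zeta = 50.0, 0.02
wn = 2 * np.pi * f0
x = np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta ** 2) * t)

# CWT at the scale matching f0; the ridge amplitude decays as exp(-zeta*wn*t).
scale = pywt.central_frequency("cmor1.5-1.0") * fs / f0
coef, _ = pywt.cwt(x, [scale], "cmor1.5-1.0")
amp = np.abs(coef[0])

# Fit the log-envelope away from the cone-of-influence edges.
sel = slice(int(0.1 * fs), int(0.6 * fs))
slope, _ = np.polyfit(t[sel], np.log(amp[sel]), 1)
zeta_est = -slope / wn
```

    The paper's point is precisely that the frequency estimate feeding such a fit is biased in the plain synchrosqueezed transform; the proposed criteria correct that shift.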

  3. Wavelet formulation of the polarizable continuum model.

    PubMed

    Weijo, Ville; Randrianarivony, Maharavo; Harbrecht, Helmut; Frediani, Luca

    2010-05-01

    The first implementation of a wavelet discretization of the Integral Equation Formalism (IEF) for the Polarizable Continuum Model (PCM) is presented here. The method is based on the application of a general purpose wavelet solver on the cavity boundary to solve the integral equations of the IEF-PCM problem. Wavelet methods provide attractive properties for the solution of the electrostatic problem at the cavity boundary: the system matrix is highly sparse and iterative solution schemes can be applied efficiently; the accuracy of the solver can be increased systematically and arbitrarily; for a given system, discretization error accuracy is achieved at a computational expense that scales linearly with the number of unknowns. The scaling of the computational time with the number of atoms N is formally quadratic but a N(1.5) scaling has been observed in practice. The current bottleneck is the evaluation of the potential integrals at the cavity boundary which scales linearly with the system size. To reduce this overhead, interpolation of the potential integrals on the cavity surface has been successfully used.

  4. Multiscale Medical Image Fusion in Wavelet Domain

    PubMed Central

    Khare, Ashish

    2013-01-01

    Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in wavelet domain. Fusion of medical images has been performed at multiple scales varying from minimum to maximum level using maximum selection rule which provides more flexibility and choice to select the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively with existing state-of-the-art fusion methods which include several pyramid- and wavelet-transform-based fusion methods and principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness and goodness of the proposed approach. PMID:24453868
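    The maximum selection rule described above is straightforward to sketch. The fragment below is a simplified, single-rule illustration using PyWavelets, not the authors' multi-scale protocol: it fuses two images by keeping, coefficient by coefficient, the detail with larger magnitude, and averages the approximations.

```python
import numpy as np
import pywt

def fuse_max(img_a, img_b, wavelet="db2", level=3):
    """Wavelet-domain fusion: maximum selection rule on detail coefficients,
    averaging on the approximation band."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(2)
a = rng.standard_normal((64, 64))
f = fuse_max(a, a)          # fusing an image with itself returns the image
```

    Fusing an image with itself must return the image, which doubles as a check that the decomposition and reconstruction round-trip correctly.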

  5. Wavelet-based functional mixed models

    PubMed Central

    Morris, Jeffrey S.; Carroll, Raymond J.

    2009-01-01

    Summary Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects’ wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. PMID:19759841

  6. Wavelet frame accelerated reduced support vector machines.

    PubMed

    Ratsch, Matthias; Teschke, Gerd; Romdhani, Sami; Vetter, Thomas

    2008-12-01

    In this paper, a novel method for reducing the runtime complexity of a support vector machine classifier is presented. The new training algorithm is fast and simple. This is achieved by an over-complete wavelet transform that finds the optimal approximation of the support vectors. The presented derivation shows that wavelet theory provides an upper bound on the distance between the decision function of the support vector machine and our classifier. The obtained classifier is fast, since a Haar wavelet approximation of the support vectors is used, enabling efficient integral image-based kernel evaluations. This provides a set of cascaded classifiers of increasing complexity for an early rejection of vectors that are easy to discriminate. The runtime performance is achieved by a hierarchical evaluation, first over the number of incorporated reduced-set vectors and additionally over their approximation accuracy. Here, the algorithm is applied to the problem of face detection, but it can also be used for other image-based classifications. The algorithm presented provides a 530-fold speedup over the support vector machine, enabling face detection at more than 25 fps on a standard PC.
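    The integral-image trick that makes Haar kernel evaluations cheap is easy to make concrete. This sketch is generic, not the authors' face-detection code: a summed-area table lets any box sum, and hence any two-rectangle Haar feature, be evaluated with four lookups.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: S[i, j] = sum of img[:i, :j]."""
    S = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    S[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return S

def box_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from four table corners."""
    return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

def haar_horizontal(S, r0, c0, h, w):
    """A two-rectangle Haar feature: left half minus right half."""
    return (box_sum(S, r0, c0, r0 + h, c0 + w // 2)
            - box_sum(S, r0, c0 + w // 2, r0 + h, c0 + w))

rng = np.random.default_rng(3)
img = rng.standard_normal((32, 32))
S = integral_image(img)
```

    Because a Haar-approximated support vector is a sum of a few such rectangles, each kernel evaluation costs a handful of lookups regardless of patch size.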

  7. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul Janier, Josefina B. Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or from data collection through observations. Collected data are usually a mixture of the true signal and some error or noise, which may come from the measurement apparatus or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One of the efficient methods that can be used to filter the data is the wavelet transform. Because received solar radiation data fluctuate over time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. The numerical results show clearly that the new thresholding approach gives better results compared with the existing approach, namely the global thresholding value.
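    The paper's new thresholding rule is not reproduced here, but the baseline it is compared against, the global (universal) threshold t = σ√(2 ln N) with σ estimated from the median absolute deviation of the finest details, can be sketched with PyWavelets and a coiflet basis; the signal and parameters below are illustrative.

```python
import numpy as np
import pywt

def denoise(signal, wavelet="coif2", level=5):
    """Soft-threshold wavelet denoising with the universal threshold
    t = sigma * sqrt(2 log N); sigma from the MAD of the finest details."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

rng = np.random.default_rng(4)
n = 2048
x = np.linspace(0, 4 * np.pi, n)
clean = np.sin(x) + 0.5 * np.sin(3 * x)     # smooth stand-in for irradiance
noisy = clean + 0.3 * rng.standard_normal(n)
out = denoise(noisy)
```

    For a smooth signal the detail bands are almost pure noise, so thresholding them sharply reduces the mean squared error relative to the noisy input.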

  8. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.
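    The shift-invariance issue motivating the SIDWT can be seen in a few lines. This sketch is illustrative and uses PyWavelets' stationary transform as a stand-in for the paper's SIDWT: a one-sample shift of the input scrambles decimated DWT coefficients but merely translates undecimated ones.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
x = rng.standard_normal(256)
xs = np.roll(x, 1)                              # one-sample shift

# Decimated DWT: a one-sample shift scrambles the detail coefficients.
d1 = pywt.wavedec(x, "db2", level=2, mode="periodization")[-1]
d2 = pywt.wavedec(xs, "db2", level=2, mode="periodization")[-1]
dwt_invariant = np.allclose(d1, d2)             # False

# Undecimated (shift-invariant) transform: coefficients simply shift along.
(cA1, cD1), = pywt.swt(x, "db2", level=1)
(cA2, cD2), = pywt.swt(xs, "db2", level=1)
swt_equivariant = np.allclose(np.roll(cD1, 1), cD2)
```

    Because fusion rules act coefficient-wise, the decimated transform's sensitivity to sub-pixel misregistration shows up as artifacts in the fused image, while the undecimated transform keeps coefficients aligned with image features.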

  9. Pedestrian detection based on redundant wavelet transform

    NASA Astrophysics Data System (ADS)

    Huang, Lin; Ji, Liping; Hu, Ping; Yang, Tiejun

    2016-10-01

    Intelligent video surveillance analyzes video or image sequences captured by a fixed or mobile surveillance camera, covering moving object detection, segmentation and recognition, so that an abnormal situation can be flagged immediately. Pedestrian detection plays an important role in an intelligent video surveillance system and is also a key technology in the field of intelligent vehicles, giving it vital significance for traffic management optimization, early security warning and abnormal behavior detection. Generally, pedestrian detection can be summarized as: first, estimate moving areas; then, extract features of a region of interest; finally, classify using a classifier. The redundant wavelet transform (RWT) overcomes the shift variance of the discrete wavelet transform and performs better in motion estimation. Addressing the problem of detecting multiple pedestrians moving at different speeds, we present a pedestrian detection algorithm based on motion estimation using RWT, combining the histogram of oriented gradients (HOG) and a support vector machine (SVM). Firstly, three intensities of movement (IoM) are estimated using RWT and the corresponding areas are segmented. According to the different IoM, a region proposal (RP) is generated. Then, the features of an RP are extracted using HOG. Finally, the features are fed into an SVM trained on pedestrian databases and the final detection results are obtained. Experiments show that the proposed algorithm can detect pedestrians accurately and efficiently.

  10. 26. Central compression lock, north span facing north. Compression lock ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    26. Central compression lock, north span facing north. Compression lock locks two spans together at highest point. There are three compression locks. - Henry Ford Bridge, Spanning Cerritos Channel, Los Angeles-Long Beach Harbor, Los Angeles, Los Angeles County, CA

  11. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
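
The stability property mentioned above, that small errors in the codes lead to small errors in the image, follows from contractivity of the decoding map, which a toy encoder/decoder can demonstrate (a brute-force sketch on a tiny image; the block sizes, the contrast clip of 0.7, and the absence of domain-block rotations are simplifying assumptions, not part of any production codec):

```python
import numpy as np

R, D, STEP = 4, 8, 4            # range size, domain size, domain search step

def downsample(block):
    """Average 2x2 pixels so a domain block matches the range-block size."""
    return block.reshape(R, 2, R, 2).mean(axis=(1, 3))

def domains(img):
    return [downsample(img[i:i + D, j:j + D])
            for i in range(0, img.shape[0] - D + 1, STEP)
            for j in range(0, img.shape[1] - D + 1, STEP)]

def encode(img):
    """For every range block store (domain index, contrast s, offset o) of
    the best affine match s*D + o; |s| < 1 keeps the decoder contractive."""
    code, doms = [], domains(img)
    for i in range(0, img.shape[0], R):
        for j in range(0, img.shape[1], R):
            r, best = img[i:i + R, j:j + R], None
            for k, d in enumerate(doms):
                var = ((d - d.mean()) ** 2).sum()
                s = 0.0 if var == 0 else ((d - d.mean()) * (r - r.mean())).sum() / var
                s = float(np.clip(s, -0.7, 0.7))
                o = r.mean() - s * d.mean()
                err = ((s * d + o - r) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, k, s, o)
            code.append(best[1:])
    return code

def decode(code, shape, iters=20, start=0.0):
    """Iterate the stored maps from ANY starting image; contractivity
    drives every starting point toward the same fixed point."""
    img = np.full(shape, float(start))
    for _ in range(iters):
        doms, out, idx = domains(img), np.empty(shape), 0
        for i in range(0, shape[0], R):
            for j in range(0, shape[1], R):
                k, s, o = code[idx]; idx += 1
                out[i:i + R, j:j + R] = s * doms[k] + o
        img = out
    return img
```

Decoding the same code from two very different starting images (all zeros versus all 100s) converges to essentially the same picture, which is the contraction property in action.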

  12. Comprehensive numerical methodology for direct numerical simulations of compressible Rayleigh-Taylor instability

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott J.; Livescu, Daniel; Vasilyev, Oleg V.

    2016-05-01

    An investigation of compressible Rayleigh-Taylor instability (RTI) using Direct Numerical Simulations (DNS) requires efficient numerical methods, advanced boundary conditions, and consistent initialization in order to capture the wide range of scales and vortex dynamics present in the system, while reducing the computational impact associated with acoustic wave generation and the subsequent interaction with the flow. An advanced computational framework is presented that handles the challenges introduced by considering the compressible nature of RTI systems, which include sharp interfacial density gradients on strongly stratified background states, acoustic wave generation and removal at computational boundaries, and stratification-dependent vorticity production. The foundation of the numerical methodology described here is the wavelet-based grid adaptivity of the Parallel Adaptive Wavelet Collocation Method (PAWCM) that maintains symmetry in single-mode RTI systems to extreme late-times. PAWCM is combined with a consistent initialization, which reduces the generation of acoustic disturbances, and effective boundary treatments, which prevent acoustic reflections. A dynamic time integration scheme that can handle highly nonlinear and potentially stiff systems, such as compressible RTI, completes the computational framework. The numerical methodology is used to simulate two-dimensional single-mode RTI to extreme late-times for a wide range of flow compressibility and variable density effects. The results show that flow compressibility acts to reduce the growth of RTI for low Atwood numbers, as predicted from linear stability analysis.
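
The wavelet-thresholding idea behind adaptive solvers such as PAWCM can be illustrated in one dimension: decompose a sharp interface profile and retain grid points only where the detail coefficients are significant (an illustrative Haar sketch of the adaptivity principle, not the PAWCM algorithm itself):

```python
import numpy as np

# Sharp interface profile, standing in for a Rayleigh-Taylor density field.
x = np.linspace(0.0, 1.0, 256)
f = np.tanh((x - 0.5) / 0.02)

# One level of Haar analysis: approximation plus detail coefficients.
approx = (f[0::2] + f[1::2]) / 2
detail = (f[0::2] - f[1::2]) / 2

# Grid adaptation: keep fine-grid points only where the detail coefficient
# exceeds a threshold, i.e. where the field is locally under-resolved.
eps = 0.01
active = np.flatnonzero(np.abs(detail) > eps)
```

For this profile the significant coefficients cluster tightly around the interface at x = 0.5, so only a small fraction of the 128 detail coefficients (and hence of the fine grid) needs to be kept.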

  13. Solar wind compressible structures at ion scales

    NASA Astrophysics Data System (ADS)

    Perrone, D.; Alexandrova, O.; Rocoto, V.; Pantellini, F. G. E.; Zaslavsky, A.; Maksimovic, M.; Issautier, K.; Mangeney, A.

    2014-12-01

    In the solar wind turbulent cascade, the energy partition between fluid and kinetic degrees of freedom, in the vicinity of plasma characteristic scales, i.e. ion and electron Larmor radii and inertial lengths, is still under debate. In a neighborhood of the ion scales, it has been observed that the spectral shape changes and fluctuations become more compressible. Nowadays, a huge scientific effort is directed toward understanding the link between macroscopic and microscopic scales and disclosing the nature of compressive fluctuations, i.e. whether space plasma turbulence is a mixture of quasi-linear waves (such as whistler or kinetic Alfvén waves) or whether turbulence is strong, with formation of coherent structures responsible for dissipation. Here we present an automatic method to identify compressible coherent structures around the ion spectral break, using Morlet wavelet decomposition of the magnetic signal from the Cluster spacecraft and reconstruction of magnetic fluctuations in a selected scale range. Different kinds of coherent structures have been detected: from soliton-like one-dimensional structures to current-sheet- or wave-like two-dimensional structures. Using a multi-satellite analysis to characterize the 3D geometry and the propagation in the plasma rest frame, we find that these structures propagate quasi-perpendicular to the mean magnetic field, with finite velocity. Moreover, without using the Taylor hypothesis, the spatial scales of the coherent structures have been estimated. Our observations in the solar wind can provide constraints on theoretical modeling of small scale turbulence and dissipation in collisionless magnetized plasmas.
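
The core detection idea, Morlet wavelet decomposition of the magnetic signal followed by flagging of locally intense scalogram energy, can be sketched as follows (a numpy sketch on a synthetic signal; the chosen scales, the local intermittency measure, and the simple argmax detection are illustrative choices, not the authors' pipeline):

```python
import numpy as np

def morlet_cwt(sig, scales, w0=6.0):
    """Continuous wavelet transform with a (non-normalised) Morlet wavelet,
    computed by direct convolution at each scale."""
    out = np.empty((len(scales), sig.size), dtype=complex)
    for k, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1) / s
        psi = np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2) / np.sqrt(s)
        out[k] = np.convolve(sig, np.conj(psi[::-1]), mode='same')
    return out

# Synthetic "magnetic field" signal: weak background oscillation plus one
# localized wave packet standing in for a coherent structure.
n = 512
t = np.arange(n)
sig = 0.05 * np.sin(2 * np.pi * t / 97)
sig += np.exp(-((t - 256) / 20.0) ** 2) * np.sin(2 * np.pi * t / 16)

# Scalogram energy and a local intermittency measure (LIM): energy at each
# time normalised by the scale-wise mean; LIM >> 1 flags coherent events.
scales = np.array([8.0, 12.0, 16.0, 24.0])
w = morlet_cwt(sig, scales)
energy = np.abs(w) ** 2
lim = energy / energy.mean(axis=1, keepdims=True)
event = int(np.argmax(lim.max(axis=0)))
```

The detected event location lands on the embedded wave packet, since the packet dominates the normalised energy at the matching scales.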

  14. Efficient transmission of compressed data for remote volume visualization.

    PubMed

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization scheme that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type; the server is capable of reordering the same compressed data file on the fly to serve data packets prioritized according to the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity, and present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications in which the server maintains the compressed volume data, to be browsed by a client under a low-bandwidth constraint.
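
The progressive lossy-to-lossless refinement idea can be sketched in one dimension with a Haar pyramid: the client first receives a coarse approximation and then successively finer detail bands until reconstruction is exact (a numpy sketch of the principle only; JPEG2000/JPIP uses the 9/7 and 5/3 wavelets, code-block packets and rate-distortion-ordered streams, none of which are modelled here):

```python
import numpy as np

def haar_analyze(x, levels):
    """Multi-level 1-D Haar decomposition: coarse approximation plus one
    detail band per level (coarsest details first)."""
    details = []
    for _ in range(levels):
        a, d = (x[0::2] + x[1::2]) / 2, (x[0::2] - x[1::2]) / 2
        details.append(d)
        x = a
    return x, details[::-1]

def haar_refine(approx, d):
    """One refinement step: combine an approximation with its details."""
    out = np.empty(2 * approx.size)
    out[0::2], out[1::2] = approx + d, approx - d
    return out

# "Server" decomposes once; "client" receives the coarse approximation and
# then progressively finer detail bands, tracking reconstruction error.
signal = np.sin(np.linspace(0.0, 6.0, 64)) + np.linspace(0.0, 1.0, 64)
approx, details = haar_analyze(signal, levels=3)
recon, errors = approx, []
for d in details:
    recon = haar_refine(recon, d)
    up = np.repeat(recon, signal.size // recon.size)   # hold-upsample to compare
    errors.append(float(np.sqrt(np.mean((up - signal) ** 2))))
```

Each received detail band strictly improves the reconstruction, and after the last band the client holds a lossless copy of the signal.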

  15. Wavelet based image visibility enhancement of IR images

    NASA Astrophysics Data System (ADS)

    Jiang, Qin; Owechko, Yuri; Blanton, Brendan

    2016-05-01

    Enhancing the visibility of infrared images obtained in a degraded visibility environment is very important for many applications such as surveillance, visual navigation in bad weather, and helicopter landing in brownout conditions. In this paper, we present an IR image visibility enhancement system based on adaptively modifying the wavelet coefficients of the images. In our proposed system, input images are first filtered by a histogram-based dynamic range filter to remove sensor noise and convert the input to an 8-bit dynamic range for efficient processing and display. By means of a wavelet transformation, we modify the image intensity distribution and enhance image edges simultaneously. In the wavelet domain, the low-frequency wavelet coefficients contain the original image intensity distribution, while the high-frequency wavelet coefficients contain the edge information. To modify the intensity distribution, an adaptive histogram equalization technique is applied to the low-frequency wavelet coefficients; to enhance image edges, an adaptive edge enhancement technique is applied to the high-frequency wavelet coefficients. An inverse wavelet transformation is then applied to the modified coefficients to obtain intensity images with enhanced visibility. Finally, a Gaussian filter is used to remove blocking artifacts introduced by the adaptive techniques. Since the wavelet transformation uses down-sampling to obtain the low-frequency coefficients, histogram equalization of these coefficients is computationally more efficient than histogram equalization of the original images. We tested the proposed system on degraded IR images obtained from a helicopter landing in brownout conditions. Our experimental results show that the proposed system is effective in enhancing the visibility of degraded IR images.
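
The processing chain above (equalize the approximation band, amplify the detail bands, invert) can be sketched with a one-level decimated Haar transform (a numpy sketch; the paper's adaptive histogram equalization and adaptive edge enhancement are replaced here by global equalization and a fixed gain, and the dynamic-range and Gaussian post-filtering steps are omitted):

```python
import numpy as np

def haar2(img):
    """One level of a decimated 2-D Haar transform."""
    lo = (img[0::2] + img[1::2]) / 2
    hi = (img[0::2] - img[1::2]) / 2
    def split_cols(a):
        return (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    ll, lh = split_cols(lo)
    hl, hh = split_cols(hi)
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    lo = np.empty((ll.shape[0], 2 * ll.shape[1]))
    hi = np.empty_like(lo)
    lo[:, 0::2], lo[:, 1::2] = ll + lh, ll - lh
    hi[:, 0::2], hi[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2], out[1::2] = lo + hi, lo - hi
    return out

def equalize(a):
    """Global histogram equalisation via the empirical CDF (a stand-in for
    the paper's adaptive variant), mapping values onto [0, 1]."""
    ranks = np.argsort(np.argsort(a.ravel()))
    return (ranks / (a.size - 1)).reshape(a.shape)

def enhance(img, edge_gain=1.5):
    ll, lh, hl, hh = haar2(img)
    ll = equalize(ll) * np.ptp(img) + img.min()   # redistribute intensities
    return ihaar2(ll, edge_gain * lh, edge_gain * hl, edge_gain * hh)
```

Note the efficiency point from the abstract: `equalize` runs on the quarter-size `ll` band rather than on the full image.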

  16. Vascular compression syndromes.

    PubMed

    Czihal, Michael; Banafsche, Ramin; Hoffmann, Ulrich; Koeppel, Thomas

    2015-11-01

    Dealing with vascular compression syndromes is one of the most challenging tasks in Vascular Medicine practice. This heterogeneous group of disorders is characterised by external compression of primarily healthy arteries and/or veins, as well as accompanying nerve structures, carrying the risk of subsequent structural vessel wall and nerve damage. Vascular compression syndromes may severely impair health-related quality of life in affected individuals, who are typically young and otherwise healthy. The diagnostic approach has not been standardised for any of the vascular compression syndromes. Moreover, some degree of positional external compression of blood vessels such as the subclavian and popliteal vessels or the celiac trunk can be found in a significant proportion of healthy individuals. This implies important difficulties in differentiating physiological from pathological findings on clinical examination and diagnostic imaging with provocative manoeuvres. The level of evidence on which treatment decisions regarding surgical decompression, with or without revascularisation, can be based is generally poor, mostly coming from retrospective single-centre studies. Proper patient selection is critical in order to avoid overtreatment of patients without a clear association between vascular compression and clinical symptoms. With a focus on thoracic outlet syndrome, median arcuate ligament syndrome and popliteal entrapment syndrome, the present article gives a selective literature review of compression syndromes from an interdisciplinary vascular point of view.

  17. Dental Compressed Air Systems.

    DTIC Science & Technology

    1992-03-01

    Weyrauch, Curtis D., Major, USAF; Davis, Samuel P.; Gaines, George W. The purpose of this report is to update guidelines on dental compressed air (DCA) systems. Much of the information was obtained from a survey

  18. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species; this involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help develop and calibrate models of species-mixing effects in compressed turbulence. The Cambon et al. rescaling has been extended to buoyancy-driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  19. Schrödinger like equation for wavelets

    NASA Astrophysics Data System (ADS)

    Zúñiga-Segundo, A.; Moya-Cessa, H. M.; Soto-Eguibar, F.

    2016-01-01

    An explicit phase-space representation of the wave function is built from a wavelet transformation. The wavelet transformation allows us to understand the relationship between the s-ordered Wigner function (the Wigner function when s = 0) and the Torres-Vega-Frederick wave functions. This relationship is necessary to find a general solution of the Schrödinger equation in phase space.
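
One standard construction of this kind (a sketch whose conventions may well differ from the paper's) maps a wave function ψ(x) into phase space via its overlap with translated and modulated copies of a window g:

```latex
\Psi(q,p) \;=\; \frac{1}{\sqrt{2\pi\hbar}}
  \int_{-\infty}^{\infty} e^{-\frac{i}{\hbar} p x}\,
  g^{*}(x - q)\, \psi(x)\, \mathrm{d}x .
```

For a Gaussian window, |Ψ(q,p)|² reproduces the Husimi distribution, i.e. the s = -1 member of the s-ordered family, whereas the Wigner function itself corresponds to s = 0.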

  20. Identification of Infinite Dimensional Systems via Adaptive Wavelet Neural Networks

    DTIC Science & Technology

    1993-01-01

    We consider identification of distributed systems via adaptive wavelet neural networks (AWNNs). We take advantage of the multiresolution property of wavelet systems and the computational structure of neural networks to approximate the unknown plant successively. A systematic approach is developed.