Science.gov

Sample records for non-perfect wavelet compression

  1. Wavelet compression of medical imagery.

    PubMed

    Reiter, E

    1996-01-01

    Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known. PMID:10165355

  2. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
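
    As a concrete reading of the threshold model, the sketch below turns the quoted formula into per-subband thresholds and a quantization matrix. It is a minimal illustration, not the authors' code: the constants A, K, F0 and the orientation gains are placeholder values of the kind reported for a luminance channel, and the step = 2T rule (so the maximum quantization error of half a step sits at threshold) is an assumption.

      import math

      # Placeholder model constants (a, k, f0) and orientation gains g.
      A, K, F0 = 0.495, 0.466, 0.401
      GAIN = {'low': 1.501, 'horizontal': 1.0, 'vertical': 1.0, 'diagonal': 0.534}

      def dwt_threshold(level, orientation, r):
          """Detection threshold for DWT uniform quantization noise:
          log10 T = log10 A + K*(log10 f - log10(g*F0))**2, with f = r*2**-level."""
          f = r * 2.0 ** (-level)              # wavelet spatial frequency, cycles/deg
          g = GAIN[orientation]
          log_t = math.log10(A) + K * (math.log10(f) - math.log10(g * F0)) ** 2
          return 10.0 ** log_t

      r = 32.0                                 # display visual resolution, pixels/deg
      # 'Perceptually lossless' quantization matrix: one step per (level, orientation).
      qmatrix = {(lev, ori): 2.0 * dwt_threshold(lev, ori, r)
                 for lev in range(1, 5) for ori in GAIN}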

  3. Data compression by wavelet transforms

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1992-01-01

    A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.

  4. Wavelet transform in electrocardiography--data compression.

    PubMed

    Provazník, I; Kozumplík, J

    1997-06-01

An application of the wavelet transform to electrocardiography is described in the paper. The transform is used as the first stage of a lossy compression algorithm for efficient coding of resting ECG signals. The proposed technique is based on the decomposition of the ECG signal into a set of basis functions covering the time-frequency domain, so the non-stationary character of ECG data is taken into account. Some of the time-frequency signal components are removed because of their low influence on signal characteristics. The remaining components are efficiently coded by quantization, composition into a sequence of coefficients, and compression by a run-length coder and an entropy (Huffman) coder. The proposed wavelet-based compression algorithm can compress data to an average code length of about 1 bit/sample. The algorithm can also be implemented in a real-time processing system when the wavelet transform is computed by the fast linear filters described in the paper. PMID:9291025
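
    A minimal sketch of this pipeline (decompose, drop weak time-frequency components, quantize, run-length code) is given below using the PyWavelets package. The wavelet, level, kept fraction and bit depth are illustrative guesses rather than the paper's settings, and the final Huffman stage is omitted.

      import numpy as np
      import pywt  # PyWavelets

      def compress_ecg(x, wavelet='bior4.4', level=5, keep=0.1, nbits=8):
          """Decompose an ECG trace, keep the strongest coefficients,
          quantize uniformly, and run-length code the zero runs."""
          coeffs = pywt.wavedec(x, wavelet, level=level)
          flat, slices = pywt.coeffs_to_array(coeffs)
          thr = np.quantile(np.abs(flat), 1.0 - keep)   # drop the weakest 90%
          flat[np.abs(flat) < thr] = 0.0
          step = np.abs(flat).max() / (2 ** (nbits - 1) - 1)
          q = np.round(flat / step).astype(np.int32)    # uniform quantization
          pairs, run = [], 0                            # (zero-run, value) pairs
          for v in q:
              if v == 0:
                  run += 1
              else:
                  pairs.append((run, int(v)))
                  run = 0
          return pairs, step, slices   # feed 'pairs' to a Huffman coder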

  5. Compression of echocardiographic scan line data using wavelet packet transform

    NASA Technical Reports Server (NTRS)

    Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.

    2001-01-01

An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scan line data based on the wavelet packet transform. Compression ratios of at least 94:1 were achieved with preserved image quality.

  6. Wavelet analysis and high quality JPEG2000 compression using Daubechies wavelet

    NASA Astrophysics Data System (ADS)

    Khalid, Azra; Afsheen, Uzma; Umer Baig, Saad

    2011-10-01

Wavelet analysis and its applications have attracted much attention in recent times. It is widely applied in areas such as transient signal analysis, image processing, signal processing and data compression, and has gained popularity because of its multiresolution, subband coding and feature extraction capabilities. The paper describes an efficient application of wavelet analysis to image compression, exploring the Daubechies wavelet as the basis function. Wavelets have scaling properties and are localized in time and frequency; they separate the image into different scales on the basis of frequency content. The resulting compressed image can then be easily stored or transmitted, saving crucial communication bandwidth. Because of its high-quality compression, wavelet analysis is one of the building blocks of the new JPEG2000 image compression standard. The paper proposes a Daubechies wavelet analysis, quantization and Huffman encoding scheme which results in high compression and good-quality reconstruction.

  7. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested on records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
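
    For reference, a plain CoSaMP iteration is sketched below in NumPy. The paper's variant would replace the two support-selection steps with a projection that keeps only coefficients forming a connected subtree of the wavelet tree; that projection is not reproduced here.

      import numpy as np

      def cosamp(Phi, y, K, iters=30, tol=1e-9):
          """Standard CoSaMP: identify 2K candidates, least-squares fit,
          prune to the K largest, repeat until the residual is small."""
          m, n = Phi.shape
          x, resid = np.zeros(n), y.copy()
          for _ in range(iters):
              proxy = Phi.T @ resid
              omega = np.argsort(np.abs(proxy))[-2 * K:]      # 2K candidates
              T = np.union1d(omega, np.nonzero(x)[0])
              b, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)
              x[:] = 0.0
              keep = np.argsort(np.abs(b))[-K:]               # prune to K
              x[T[keep]] = b[keep]
              resid = y - Phi @ x
              if np.linalg.norm(resid) < tol:
                  break
          return x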

  8. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet
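
    The first and third approaches can be contrasted in a few lines with PyWavelets; the random cube, wavelet and level below are arbitrary stand-ins. The second approach would insert a spectral decorrelating transform (for example a Karhunen-Loeve transform across bands) before the 2D step.

      import numpy as np
      import pywt

      cube = np.random.rand(64, 128, 128)   # synthetic (bands, rows, cols) cube

      # Approach 1: compress each spectral band independently (2D transform only).
      per_band = [pywt.wavedec2(cube[b], 'db2', level=3)
                  for b in range(cube.shape[0])]

      # Approach 3: one 3D transform over the spectral and spatial dimensions.
      joint = pywt.wavedecn(cube, 'db2', level=3, axes=(0, 1, 2))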

  9. Wavelet compression efficiency investigation for medical images

    NASA Astrophysics Data System (ADS)

    Moryc, Marcin; Dziech, Wiera

    2006-03-01

Medical images are acquired or stored digitally. These images can be very large in size and number, and compression can increase the speed of transmission and reduce the cost of storage. In the paper, the approximation of medical images using a transform method based on wavelet functions is investigated. The tested clinical images are taken from multiple anatomical regions and modalities (Computed Tomography (CT), Magnetic Resonance (MR), Ultrasound, Mammography and X-Ray images). To compress the medical images, a threshold criterion has been applied. The mean square error (MSE) is used as a measure of approximation quality. Plots of the MSE versus compression percentage and approximated images are included for comparison of approximation efficiency.

  10. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
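
    Both code families are easy to state concretely. The sketch below emits a Rice code (the power-of-two Golomb case) and an order-k exponential-Golomb code as bit strings; the parameter rule at the end is a simpler stand-in for the bit-tracking adaptation described in the abstract.

      def rice(n, k):
          """Rice code (Golomb with m = 2**k): unary quotient, then k remainder bits."""
          q, r = n >> k, n & ((1 << k) - 1)
          return '1' * q + '0' + (format(r, '0%db' % k) if k else '')

      def exp_golomb(n, k=0):
          """Order-k exponential-Golomb code for a nonnegative integer n."""
          v = n + (1 << k)                   # shift into the order-k range
          bits = format(v, 'b')
          return '0' * (len(bits) - k - 1) + bits

      def pick_rice_k(mean):
          """Crude adaptive rule: k grows with the mean of recent values."""
          k = 0
          while (1 << (k + 1)) <= mean + 1:
              k += 1
          return k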

  11. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. Comparing the proposed algorithm with the JPEG standard, the FBI wavelet/scalar quantization standard and the EZW scheme through extensive experiments, we observe a significant improvement in rate-distortion performance and visual quality.

  12. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding of their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms previously published schemes. PMID:20608811

  13. Coresident sensor fusion and compression using the wavelet transform

    SciTech Connect

    Yocky, D.A.

    1996-03-11

Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image. This can be done in either an ad hoc or a model-based way. We compare results from commercial "fusion" software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.

  14. Perceptually lossless wavelet-based compression for medical images

    NASA Astrophysics Data System (ADS)

    Lin, Nai-wen; Yu, Tsaifa; Chan, Andrew K.

    1997-05-01

In this paper, we present a wavelet-based medical image compression scheme so that images displayed on different devices are perceptually lossless. Since human visual sensitivity varies across subbands, we apply a perceptually lossless criterion to quantize the wavelet transform coefficients of each subband such that visual distortions become unnoticeable. Following this, we use a high-compression-ratio hierarchical tree to code these coefficients. Experimental results indicate that our perceptually lossless coder achieves a compression ratio 2-5 times higher than typical lossless compression schemes while producing perceptually identical image content on the target display device.

  15. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  16. Three-dimensional compression scheme based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Yang, Wu; Xu, Hui; Liao, Mengyang

    1999-03-01

In this paper, a 3D compression method based on a separable wavelet transform is discussed in detail. The most commonly used digital modalities generate multiple slices in a single examination, which are normally anatomically or physiologically correlated to each other. 3D wavelet compression methods can achieve more efficient compression by exploiting the correlation between slices. The first step is based on a separable 3D wavelet transform. Considering the difference between pixel distances within a slice and those between slices, a biorthogonal Antonini filter bank is applied within 2D slices and a second biorthogonal Villa4 filter bank in the slice direction. Then, the S+P transform is applied to the low-resolution wavelet components, and an optimal quantizer is presented after analysis of the quantization noise. We use an optimal bit allocation algorithm which, instead of eliminating the coefficients of high-resolution components in smooth areas, minimizes the system reconstruction distortion at a given bit rate. Finally, to retain high coding efficiency and adapt to the different properties of each component, a comprehensive entropy coding method is proposed, in which an arithmetic coding method is applied to high-resolution components and an adaptive Huffman coding method to low-resolution components. Our experimental results are evaluated by several image measures, and our 3D wavelet compression scheme proves more efficient than 2D wavelet compression.

  17. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

Watermarking, traditionally used for copyright protection, is used here in a new and exciting way: an efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress it. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  18. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
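
    The coefficient-zeroing experiment is easy to reproduce in outline with PyWavelets; here 'db3' (six filter taps) stands in for DAUB6, the decomposition level is a guess, and slope is recomputed from the reconstruction to inspect derivative residuals.

      import numpy as np
      import pywt

      def dem_sparsify(dem, wavelet='db3', level=4, keep=0.05):
          """Zero the smallest 95% of DEM wavelet coefficients, reconstruct,
          and return the elevation surface plus slope residuals (degrees)."""
          arr, sl = pywt.coeffs_to_array(pywt.wavedec2(dem, wavelet, level=level))
          thr = np.quantile(np.abs(arr), 1.0 - keep)
          arr[np.abs(arr) < thr] = 0.0          # sparse: ~95% zeros
          rec = pywt.waverec2(
              pywt.array_to_coeffs(arr, sl, output_format='wavedec2'), wavelet)
          rec = rec[:dem.shape[0], :dem.shape[1]]
          gy, gx = np.gradient(dem)
          ry, rx = np.gradient(rec)
          slope = np.degrees(np.arctan(np.hypot(gx, gy)))
          rslope = np.degrees(np.arctan(np.hypot(rx, ry)))
          return rec, np.abs(slope - rslope)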

  19. Adaptive video compressed sampling in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi

    2016-07-01

    In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.

  20. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.

  21. Wavelet based hierarchical coding scheme for radar image compression

    NASA Astrophysics Data System (ADS)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks in different frequency bands by a 2-D wavelet transform, and each block is quantized and coded with a Huffman coding scheme. A demonstration system is developed, showing that under real-time processing requirements the compression ratio can be very high, with no significant loss of target signal in the restored radar image.

  22. Compressed-domain video segmentation using wavelet transformation

    NASA Astrophysics Data System (ADS)

    Yu, Hong H.

    1999-10-01

Video segmentation is an important first step towards automatic video indexing, retrieval, and editing. However, the sheer volume of video data makes it hard to handle in real time. To reach the goal of real-time processing, several factors need to be considered. First of all, indexing video directly in the compressed domain offers the advantage of fast processing on efficiently stored data. Secondly, extracting simple features with fast algorithms helps speed up the process. The questions are what kind of simple feature can characterize the changing statistics and what kind of algorithm can provide such a feature quickly. In this paper, we propose a new automatic video segmentation scheme that utilizes the wavelet transform, based on the following considerations: the wavelet is a good tool for subband decomposition, it encodes both frequency and spatial information, and it is easy to program and fast to execute. Over the last decade or so, the wavelet transform has emerged in image/video signal processing for analyzing functions at different levels of detail. In particular, the wavelet has been widely used in image compression, where it is possible to recover a fairly accurate representation of an image by saving only the few largest wavelet coefficients (and throwing away part or all of the smaller coefficients). Using this property, we extract a discrimination signature for each image from a few large coefficients of each color channel. The system works on compressed video without requiring full decoding and performs a wavelet transformation on the extracted video data. The signature (as a feature) is extracted from the wavelet coefficients to characterize the changing statistics of shot transitions. Cuts, fades, and dissolves are detected based on analysis of the changing-statistics curve.

  23. Oncologic image compression using both wavelet and masking techniques.

    PubMed

    Yin, F F; Gao, Q

    1997-12-01

    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase of compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown. PMID:9434988
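
    A crude sketch of the masking step follows: decompose, then zero every coefficient whose dyadically downsampled position falls outside the region of interest. The Haar/periodization choice is an assumption made so the subband grids align exactly on power-of-two images (such as the 256 x 256 images above); the segmentation and coding stages are not shown.

      import pywt

      def mask_wavelet_coeffs(img, roi_mask, level=3):
          """Zero wavelet coefficients outside the ROI before entropy coding.
          roi_mask is a binary array the same size as img."""
          coeffs = pywt.wavedec2(img, 'haar', mode='periodization', level=level)
          sub = lambda t: roi_mask[::2 ** t, ::2 ** t].astype(float)
          coeffs[0] = coeffs[0] * sub(level)            # approximation band
          for i in range(1, level + 1):
              t = level - i + 1                         # detail bands, coarse to fine
              coeffs[i] = tuple(c * sub(t) for c in coeffs[i])
          return coeffs   # now sparse outside the ROI; quantize and code as usual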

  24. Research of the wavelet based ECW remote sensing image compression technology

    NASA Astrophysics Data System (ADS)

    Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua

    2007-11-01

This paper studies wavelet-based ECW remote sensing image compression technology. Compared with the traditional compression technology JPEG and the new wavelet-based compression technology JPEG2000, the ER Mapper Compressed Wavelet (ECW) format shows significant advantages when compressing very large remote sensing images. How to use the ECW SDK is also discussed, and it proves to be the fastest way to compress China-Brazil Earth Resources Satellite (CBERS) imagery.

  25. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.
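
    For context, the two-channel perfect-reconstruction conditions behind these filter families are conventionally stated, for analysis filters H_0, H_1 and synthesis filters F_0, F_1, as

      F_0(z)H_0(-z) + F_1(z)H_1(-z) = 0 \quad \text{(alias cancellation)}
      F_0(z)H_0(z) + F_1(z)H_1(z) = 2z^{-d} \quad \text{(no distortion)}

    with linear phase additionally requiring symmetric or antisymmetric impulse responses, h[n] = \pm h[N-1-n]. This is the standard textbook form, not the article's specific parameterization.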

  26. Wavelet for Ultrasonic Flaw Enhancement and Image Compression

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Tsukada, K.; Li, L. Q.; Hanasaki, K.

    2003-03-01

Ultrasonic imaging has been widely used in Non-destructive Testing (NDT) and medical applications. However, the image is always degraded by blur and noise. Besides, the pressure on both storage and transmission gives rise to the need for image compression. We apply the 2-D Discrete Wavelet Transform (DWT) to C-scan 2-D images to realize flaw enhancement and image compression, taking advantage of the DWT's scale and orientation selectivity. Wavelet coefficient thresholding and scalar quantization are employed for the two tasks, respectively. Furthermore, we unify flaw enhancement and image compression in one process. The reconstructed image from the compressed data gives a clearer interpretation of the flaws at a much smaller bit rate.

  27. Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.

    2004-02-01

Compression of ultrasonic images, which are always corrupted by noise, tends to cause over-smoothing or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT) that can also suppress noise without losing much flaw-relevant information is presented in this work. Exploiting the multiresolution and interscale correlation properties of the DWT, a simple scheme called DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as dominated by noise, dominated by signal, or affected by both. Better denoising can be realized by selectively thresholding the DWCs. In 'local quantization', different quantization strategies are then applied to the DWCs according to their classification and the local image properties. This allocates the bit budget more efficiently across the DWCs and thus achieves a higher compression rate, while the decompressed image shows suppressed noise and preserved flaw characteristics.

  28. Medical image compression algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin

    2005-02-01

With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision and vast data volumes. An optimized compression algorithm can ease restrictions on transmission speed and data storage. This paper describes the characteristics of the human visual system based on its physiological structure, analyzes the characteristics of medical images in telemedicine, and then puts forward an optimized compression algorithm based on wavelet zerotrees. After the image is smoothed, it is decomposed with Haar filters, and the wavelet coefficients are quantized adaptively. We can therefore maximize compression efficiency and achieve a better subjective visual image. This algorithm can be applied to image transmission in telemedicine. Finally, we examined the feasibility of the algorithm with an image transmission experiment over a network.

  29. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
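
    The mean-subtraction idea can be sketched in a few lines. The snippet below treats only the pure approximation subband and uses arbitrary wavelet settings, so it simplifies what ICER-3D does across all of its spatially-low-pass subbands.

      import numpy as np
      import pywt

      def subtract_plane_means(cube, wavelet='db2', level=2):
          """After a 3D DWT, remove the mean of each spatial plane of the
          low-pass subband so the encoder sees zero-mean data."""
          dec = pywt.wavedecn(cube, wavelet, level=level, axes=(0, 1, 2))
          low = dec[0]                                   # low-pass subband
          means = low.mean(axis=(1, 2), keepdims=True)   # one mean per spatial plane
          dec[0] = low - means
          return dec, means   # the means travel as side information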

  30. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  31. Electroencephalographic compression based on modulated filter banks and wavelet transform.

    PubMed

    Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando

    2011-01-01

Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals: decomposition by modulated filter banks and decomposition by the wavelet packet transform. We seek the best compression, the best quality and the most efficient real-time implementation. Owing to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods, and that quantization adapted to the dynamic range significantly enhances the quality. PMID:22255966
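
    A minimal version of the proposed band-adaptive quantizer might look as follows; the bit depth and the bin-center reconstruction rule are assumptions rather than the paper's specification.

      import numpy as np

      def quantize_bands(bands, bits=6):
          """Uniform quantizer whose step adapts to each band's dynamic range."""
          out = []
          for band in bands:
              lo, hi = float(band.min()), float(band.max())
              step = (hi - lo) / (1 << bits) or 1.0      # guard flat bands
              q = np.floor((band - lo) / step).clip(0, (1 << bits) - 1)
              out.append((q.astype(int), lo, step))      # x ~ lo + (q + 0.5)*step
          return out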

  32. Improved wavelet packet compression of electrocardiogram data: 1. noise filtering

    NASA Astrophysics Data System (ADS)

    Bradie, Brian D.

    1995-09-01

The improvement in the performance of a wavelet packet based compression scheme for single lead electrocardiogram (ECG) data, obtained by prefiltering noise from the ECG signals, is investigated. The removal of powerline interference and the attenuation of high-frequency muscle noise are considered. Selected records from the MIT-BIH Arrhythmia Database are used as test signals. After both types of noise artifact were filtered, an average data rate of 167.6 bits per second (corresponding to a compression ratio of 23.62), with an average root mean-square (rms) error of 15.886 μV, was achieved. These figures represent better than a 9% improvement in data rate and a 13.5% reduction in rms error over compressing the unfiltered signals.
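
    The prefiltering stage can be approximated with standard filters. In the SciPy sketch below, the notch matches the 60 Hz powerline of the MIT-BIH recordings (sampled at 360 Hz), while the low-pass cutoff for muscle noise is a guess.

      from scipy.signal import butter, filtfilt, iirnotch

      FS = 360.0                                           # MIT-BIH sampling rate, Hz
      b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=FS)  # powerline notch
      b_lp, a_lp = butter(4, 100.0, btype='low', fs=FS)    # muscle-noise low-pass

      def prefilter(ecg):
          """Remove powerline interference, then attenuate high-frequency
          muscle noise, before wavelet packet compression."""
          return filtfilt(b_lp, a_lp, filtfilt(b_notch, a_notch, ecg))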

  33. Compression of 3D integral images using wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a Three-Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding, which achieves decorrelation within and between the 2D low-frequency bands from the different viewpoint images. The remaining higher frequency bands are arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey-level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with a previously reported 3D-DCT scheme. The algorithm was found to achieve better rate-distortion performance, with respect to compression ratio and image quality, at very low bit rates.

  34. HVS-motivated quantization schemes in wavelet image compression

    NASA Astrophysics Data System (ADS)

    Topiwala, Pankaj N.

    1996-11-01

Wavelet still image compression has recently been a focus of intense research, and appears to be maturing as a subject. Considerable coding gains over older DCT-based methods have been achieved, while the computational complexity has been made very competitive. We report here on a high-performance wavelet still image compression algorithm optimized for both mean-squared error (MSE) and human visual system (HVS) characteristics. We present the problem of optimal quantization from a Lagrange multiplier point of view, and derive novel solutions. Ideally, all three components of a typical image compression system (transform, quantization, and entropy coding) should be optimized simultaneously. However, the highly nonlinear nature of quantization and encoding complicates the formulation of the total cost function. In this report, we consider optimizing the filter, and then the quantizer, separately, holding the other two components fixed. While optimal bit allocation has been treated in the literature, we specifically address the issue of setting the quantization stepsizes, which in practice is quite different. In this paper, we select a short high-performance filter, develop an efficient scalar MSE quantizer, and present four HVS-motivated quantizers which add some value visually without incurring any MSE losses. A combination of run-length and empirically optimized Huffman coding is fixed in this study.
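
    The Lagrangian formulation alluded to above is conventionally written as the unconstrained problem

      \min_{\{q_i\}} \; J = \sum_i D_i(q_i) + \lambda \sum_i R_i(q_i),
      \qquad \frac{\mathrm{d}D_i}{\mathrm{d}R_i} = -\lambda \ \text{ for all } i,

    where D_i and R_i are the distortion and rate of subband i as functions of its stepsize q_i. At the optimum every subband operates at the same slope -lambda of its rate-distortion curve; this is a standard statement of the problem, not a formula taken from the paper.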

  35. Hyperspectral image compression using bands combination wavelet transformation

    NASA Astrophysics Data System (ADS)

    Wang, Wenjie; Zhao, Zhongming; Zhu, Haiqing

    2009-10-01

Hyperspectral imaging technology is at the forefront of remote sensing development in the 21st century and is one of the most important focuses of the remote sensing domain. Hyperspectral images can provide much more information than multispectral images and can solve many problems that cannot be solved by multispectral imaging technology. However, this advantage comes at the cost of massive data volumes, which complicate image processing, storage and transmission. Research on hyperspectral image compression methods therefore has important practical significance. This paper improves the well-known KLT-WT-2DSPECK (Karhunen-Loeve transform + wavelet transform + two-dimensional set partitioning embedded block coding) algorithm and proposes a KLT + bands-combination 2D-WT + 2DSPECK algorithm. Experiments show that the method is effective.

  36. Noise-robust low-contrast retinal recognition using compression-based joint wavelet transform correlator

    NASA Astrophysics Data System (ADS)

    Widjaja, Joewono

    2015-11-01

A new method is proposed for recognizing noise-corrupted, low-contrast retinal images that employs a joint wavelet transform correlator with compressed reference and target. Noise robustness is achieved by correlating wavelet-transformed retinal target and reference images. Simulation results show that besides being robust to noise, the recognition performance can become independent of compression quality when the low spatial-frequency components of the joint power spectrum are enhanced by an appropriately dilated wavelet filter.

  37. Adaptive segmentation of wavelet transform coefficients for video compression

    NASA Astrophysics Data System (ADS)

    Wasilewski, Piotr

    2000-04-01

This paper presents a video compression algorithm suitable for inexpensive real-time hardware implementation. The algorithm utilizes the Discrete Wavelet Transform (DWT) with a new Adaptive Spatial Segmentation Algorithm (ASSA). It was designed to obtain better or similar decompressed video quality compared to the H.263 recommendation and the MPEG standard with lower computational effort, especially at high compression rates, and was optimized for hardware implementation in low-cost Field Programmable Gate Array (FPGA) devices. The luminance and chrominance components of every frame are encoded with a 3-level wavelet transform using a biorthogonal filter bank. The low-frequency subimage is encoded with an ADPCM algorithm. To the high-frequency subimages the new Adaptive Spatial Segmentation Algorithm is applied: it divides images into rectangular blocks that may overlap each other, with the width and height of the blocks set independently. There are two kinds of blocks: Low Variance Blocks (LVB) and High Variance Blocks (HVB). The positions of the blocks and the values of the WT coefficients belonging to the HVB are encoded with modified zero-tree algorithms; LVB are encoded by their mean value. The results show that the presented algorithm gives similar or better decompressed image quality compared to H.263, by up to 5 dB in PSNR.

  38. Wavelet Compression of Satellite-Transmitted Digital Mammograms

    NASA Technical Reports Server (NTRS)

    Zheng, Yuan F.

    2001-01-01

Breast cancer is one of the major causes of cancer death in women in the United States. The most effective way to treat breast cancer is to detect it at an early stage by screening patients periodically. Conventional film-screen mammography uses X-ray films which are effective in detecting early abnormalities of the breast. Direct digital mammography has the potential to improve image quality and to take advantage of convenient storage, efficient transmission, and powerful computer-aided diagnosis. One effective alternative to direct digital imaging is secondary digitization of X-ray films. This technique may not provide as high an image quality as the direct digital approach, but it definitely has other advantages inherent to digital images. One of them is the use of satellite transmission for transferring digital mammograms between a remote image-acquisition site and a central image-reading site. This technique can benefit a large population of women who reside in remote areas where major screening and diagnostic facilities are not available. The NASA-Lewis Research Center (LeRC), in collaboration with the Cleveland Clinic Foundation (CCF), has begun a pilot study to investigate the application of the Advanced Communications Technology Satellite (ACTS) network to telemammography. The bandwidth of the T1 transmission is limited (1.544 Mbps) while the size of a mammographic image is huge, so it takes a long time to transmit a single mammogram. For example, a mammogram of 4k by 4k pixels with 16 bits per pixel needs more than 4 minutes to transmit; four images for a typical screening exam would take more than 16 minutes. This is too long a time period for convenient screening. Consequently, compression is necessary to make satellite transmission of mammographic images practical. The Wavelet Research Group of the Department of Electrical Engineering at The Ohio State University (OSU) participated in the LeRC-CCF collaboration by

  39. Compression of fingerprint data using the wavelet vector quantization image compression algorithm. 1992 progress report

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1992-04-11

    This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.

  40. Dynamic contrast-based quantization for lossy wavelet image compression.

    PubMed

    Chandler, Damon M; Hemami, Sheila S

    2005-04-01

    This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate. Based on the results of recent psychophysical experiments using near-threshold and suprathreshold wavelet subband quantization distortions presented against natural-image backgrounds, subbands are quantized such that the distortions in the reconstructed image exhibit root-mean-squared contrasts selected based on image, subband, and display characteristics and on a measure of total visual distortion so as to preserve the visual system's ability to integrate edge structure across scale space. Within a single, unified framework, the proposed contrast-based strategy yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates and which demonstrate improved visual quality over current visually lossy approaches at low bit rates. This strategy operates in the context of both nonembedded and embedded quantization, the latter of which yields a highly scalable codestream which attempts to maintain visual quality at all bit rates; a specific application of the proposed algorithm to JPEG-2000 is presented. PMID:15825476

  41. Remotely sensed image compression based on wavelet transform

    NASA Technical Reports Server (NTRS)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.

    1995-01-01

In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal-to-noise ratio) and classification capability.

  42. Medical image processing using novel wavelet filters based on atomic functions: optimal medical image compression.

    PubMed

    Landin, Cristina Juarez; Reyes, Magally Martinez; Martin, Anabelem Soberanes; Rosas, Rosa Maria Valdovinos; Ramirez, Jose Luis Sanchez; Ponomaryov, Volodymyr; Soto, Maria Dolores Torres

    2011-01-01

An analysis of different wavelets, including novel wavelet families based on atomic functions, is presented, especially for ultrasound (US) and mammography (MG) image compression. In this way we are able to determine which type of wavelet filter works best for compressing such images. Key properties (frequency response, approximation order, projection cosine, and Riesz bounds) were determined and compared for the classic W9/7 wavelet used in standard JPEG2000, for Daubechies8 and Symlet8, and for the complex Kravchenko-Rvachev wavelets ψ(t) based on the atomic functions up(t), fup(2)(t), and eup(t). The comparison shows significantly better performance of the novel wavelets, justified both by experiment and by the study of the key properties. PMID:21431590

  43. Random wavelet transforms, algebraic geometric coding, and their applications in signal compression and de-noising

    SciTech Connect

    Bieleck, T.; Song, L.M.; Yau, S.S.T.; Kwong, M.K.

    1995-07-01

    The concepts of random wavelet transforms and discrete random wavelet transforms are introduced. It is shown that these transforms can lead to simultaneous compression and de-noising of signals that have been corrupted with fractional noises. Potential applications of algebraic geometric coding theory to encode the ensuing data are also discussed.

  44. Best parameters selection for wavelet packet-based compression of magnetic resonance images.

    PubMed

    Abu-Rezq, A N; Tolba, A S; Khuwaja, G A; Foda, S G

    1999-10-01

    Transmission of compressed medical images is becoming a vital tool in telemedicine. Thus new methods are needed for efficient image compression. This study discovers the best design parameters for a data compression scheme applied to digital magnetic resonance (MR) images. The proposed technique aims at reducing the transmission cost while preserving the diagnostic information. By selecting the wavelet packet's filters, decomposition level, and subbands that are better adapted to the frequency characteristics of the image, one may achieve better image representation in the sense of lower entropy or minimal distortion. Experimental results show that the selection of the best parameters has a dramatic effect on the data compression rate of MR images. In all cases, decomposition at three or four levels with the Coiflet 5 wavelet (Coif 5) results in better compression performance than the other wavelets. Image resolution is found to have a remarkable effect on the compression rate. PMID:10529302

  45. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  46. Compressed sensing MR image reconstruction exploiting TGV and wavelet sparsity.

    PubMed

    Zhao, Di; Du, Huiqian; Han, Yu; Mei, Wenbo

    2014-01-01

Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and the reference MR images in the pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet transform and gradient domains. The idea is attractive because it requires neither the estimation of the contrast changes nor multiple motion compensations. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). A fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also either decrease the sampling ratio or improve the reconstruction quality. PMID:25371704
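
    One plausible way to write the reconstruction problem, consistent with the abstract but not taken from the paper, is

      \hat{x} = \arg\min_x \; \mathrm{TGV}_\alpha^2(x) + \beta \,\lVert \Psi(x - x_{\mathrm{ref}}) \rVert_1 + \gamma \,\lVert \nabla(x - x_{\mathrm{ref}}) \rVert_1 \quad \text{s.t.} \quad \lVert F_u x - y \rVert_2 \le \varepsilon,

    where \Psi is a wavelet transform, x_ref is the motion-compensated reference image, F_u is the undersampled Fourier operator, and TGV is the second-order total generalized variation; FCSA then handles the proximal step of each regularizer separately.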

  8. [Detection of reducing sugar content of potato granules based on wavelet compression by near infrared spectroscopy].

    PubMed

    Dong, Xiao-Ling; Sun, Xu-Dong

    2013-12-01

    The feasibility of determining the reducing sugar content of potato granules with a wavelet compression algorithm combined with near-infrared spectroscopy was explored. The spectra of 250 potato granule samples were recorded by a Fourier transform near-infrared spectrometer in the range of 4000-10000 cm-1. Three parameters (the number of vanishing moments, the number of wavelet coefficients, and the number of principal component factors) were optimized, with optimal values of 10, 100 and 20, respectively. The original spectra of 1501 spectral variables were transferred to 100 wavelet coefficients using the db wavelet function. Partial least squares (PLS) calibration models were developed on both the 1501 spectral variables and the 100 wavelet coefficients. Sixty-two unknown samples in the prediction set were used to evaluate the performance of the PLS models. By comparison, the optimal result was obtained by wavelet compression combined with the PLS calibration model: the correlation coefficient of prediction and the root mean square error of prediction were 0.98 and 0.181%, respectively. The experimental results show that wavelet compression combined with near-infrared spectroscopy reduces the dimensionality of the spectral data for the determination of reducing sugar in potato granules while losing scarcely any effective information. The PLS model is simplified, and its predictive ability is improved. PMID:24611373
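
    As a rough illustration of the workflow in this record, the sketch below compresses synthetic spectra to a short vector of wavelet approximation coefficients and feeds them to a PLS regression. It is a guess at the procedure, not the authors' code: the db10 wavelet (10 vanishing moments), the level-4 approximation standing in for the "100 wavelet coefficients", and the random data are all assumptions, and scikit-learn's PLSRegression replaces whatever PLS implementation the authors used.

        import numpy as np
        import pywt
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(250, 1501))      # stand-in for 250 NIR spectra, 1501 variables
        y = rng.normal(size=250)              # stand-in for reducing sugar content

        def wavelet_compress(spectrum, wavelet="db10", level=4):
            # Keep only the coarse approximation (~111 coefficients for a
            # 1501-point spectrum at level 4), discarding the detail subbands.
            return pywt.wavedec(spectrum, wavelet, level=level)[0]

        Xw = np.vstack([wavelet_compress(s) for s in X])
        pls = PLSRegression(n_components=20).fit(Xw, y)   # 20 latent factors, as in the record
        predictions = pls.predict(Xw)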

  9. Medical image compression based on a morphological representation of wavelet coefficients.

    PubMed

    Phelan, N C; Ennis, J T

    1999-08-01

    Image compression is fundamental to the efficient and cost-effective use of digital medical imaging technology and applications. Wavelet transform techniques currently provide the most promising approach to high-quality image compression which is essential for diagnostic medical applications. A novel approach to image compression based on the wavelet decomposition has been developed which utilizes the shape or morphology of wavelet transform coefficients in the wavelet domain to isolate and retain significant coefficients corresponding to image structure and features. The remaining coefficients are further compressed using a combination of run-length and Huffman coding. The technique has been implemented and applied to full 16 bit medical image data for a range of compression ratios. Objective peak signal-to-noise ratio performance of the compression technique was analyzed. Results indicate that good reconstructed image quality can be achieved at compression ratios of up to 15:1 for the image types studied. This technique represents an effective approach to the compression of diagnostic medical images and is worthy of further, more thorough, evaluation of diagnostic quality and accuracy in a clinical setting. PMID:10501061
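
    The record does not give the exact morphological operators used, but the general idea of retaining coefficient "shapes" rather than isolated values can be sketched: threshold a subband, dilate the significance mask so that connected clusters survive intact, then run-length code the mask. Everything below (the threshold value, the one-pixel dilation, the toy run-length coder) is illustrative, not the authors' method.

        import numpy as np
        from scipy.ndimage import binary_dilation

        def morphological_mask(band, thresh):
            """Keep coefficients above threshold plus their immediate
            neighbours, so connected coefficient clusters survive whole."""
            return binary_dilation(np.abs(band) > thresh)

        def run_length_encode(bits):
            """Run-length code a 1-D binary significance map."""
            runs, current, count = [], bits[0], 1
            for b in bits[1:]:
                if b == current:
                    count += 1
                else:
                    runs.append((current, count))
                    current, count = b, 1
            runs.append((current, count))
            return runs

        band = np.random.randn(64, 64)                    # stand-in detail subband
        mask = morphological_mask(band, thresh=2.0)
        rle = run_length_encode(mask.ravel().astype(int))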

  10. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that the proposed scheme achieves an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio than a typical lossless method. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm. PMID:26890900
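
    The lossless two-stage construction in this record does not depend on the details of the lossy stage, so it is easy to illustrate. In the sketch below a quantized wavelet reconstruction stands in for the paper's wavelet-fractal coder; the residual is what would be entropy-coded with Huffman, and adding it back reproduces the original exactly.

        import numpy as np
        import pywt

        def lossy_stage(image, wavelet="db2", level=3, q=16.0):
            """Placeholder lossy coder: coarsely quantized wavelet reconstruction
            (the paper uses fractal coding in the wavelet domain instead)."""
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            quant = [np.round(coeffs[0] / q) * q]
            quant += [tuple(np.round(b / q) * q for b in d) for d in coeffs[1:]]
            rec = pywt.waverec2(quant, wavelet)
            return np.round(rec[: image.shape[0], : image.shape[1]])

        image = np.random.randint(0, 256, (128, 128)).astype(float)
        approx = lossy_stage(image)
        residual = image - approx        # small-amplitude image: cheap to entropy-code
        restored = approx + residual     # bit-exact, hence lossless overall
        assert np.array_equal(restored, image)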

  11. Compressed sensing based on the improved wavelet transform for image processing

    NASA Astrophysics Data System (ADS)

    Pang, Peng; Gao, Wei; Song, Zongxi; XI, Jiang-bo

    2014-09-01

    Compressed sensing (CS) theory is a new sampling theory that allows signals to be sampled at a rate below the traditional Nyquist rate, provided the signal is sparse or compressible. This paper investigates how to improve CS theory and its application in imaging systems. Based on the properties of wavelet transform subbands, an improved compressed sensing algorithm using a single-level wavelet transform is proposed. Because most of the information is concentrated in the low-pass subband after the wavelet transform, the improved algorithm measures only the low-pass wavelet coefficients of the image while preserving the high-pass wavelet coefficients. The signal can then be reconstructed exactly using appropriate reconstruction algorithms. Reconstruction is the key point on which most researchers focus, and significant progress has been made. For reconstruction, the orthogonal matching pursuit (OMP) algorithm is improved by increasing the number of iteration layers so that the low-pass wavelet coefficients can be recovered exactly from the measurements. The image is then reconstructed using the inverse wavelet transform. Compared with the original compressed sensing algorithm, simulation results demonstrate that the proposed algorithm reduces the amount of processed data, decreases the processing time noticeably, and improves the recovered image quality to some extent; the PSNR of the proposed algorithm is improved by about 2 to 3 dB. Experimental results show that the proposed algorithm exhibits superiority over other known CS reconstruction algorithms in the literature at the same measurement rates, with a faster convergence speed.
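
    A minimal sketch of the measurement side of this scheme: a single-level 2-D DWT, compressive sampling of the low-pass band only, and the detail bands kept verbatim. The 50% measurement rate and the Gaussian measurement matrix are arbitrary illustrative choices; recovery (the paper's modified OMP followed by the inverse DWT) is not shown.

        import numpy as np
        import pywt

        rng = np.random.default_rng(1)
        image = rng.random((64, 64))

        # Single-level 2-D DWT: the LL band carries most of the information.
        LL, (LH, HL, HH) = pywt.dwt2(image, "db2")

        # Compressively sample only the low-pass band; detail bands are kept as-is.
        x = LL.ravel()
        m = int(0.5 * x.size)                              # 50% measurement rate
        Phi = rng.normal(size=(m, x.size)) / np.sqrt(m)    # Gaussian sensing matrix
        y = Phi @ x                                        # measurements to store/transmit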

  12. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. An optimal quadtree method was then employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve compression performance and achieve a balance between the compression ratio and the visual quality of the image. PMID:23049544
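
    Stripped of the quadtree partitioning and the energy-based K-means modification, the core step (training a vector-quantization codebook on blocks of a high-frequency subband) looks roughly like the sketch below. The fixed 4 x 4 blocks and scikit-learn's standard KMeans are simplifications of the paper's variable-block, modified-K-means design.

        import numpy as np
        import pywt
        from sklearn.cluster import KMeans

        image = np.random.rand(128, 128)                  # stand-in image
        _, (LH, HL, HH) = pywt.dwt2(image, "db1")

        def vq_codebook(band, block=4, n_codes=64):
            """Train a VQ codebook on non-overlapping block x block tiles;
            each tile is then stored as a single codebook index."""
            h = (band.shape[0] // block) * block
            w = (band.shape[1] // block) * block
            tiles = (band[:h, :w]
                     .reshape(h // block, block, w // block, block)
                     .swapaxes(1, 2)
                     .reshape(-1, block * block))
            km = KMeans(n_clusters=n_codes, n_init=4, random_state=0).fit(tiles)
            return km.cluster_centers_, km.predict(tiles)   # codebook + indices

        codebook, indices = vq_codebook(LH)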

  13. Wavelet-based vector quantization for high-fidelity compression and fast transmission of medical images.

    PubMed

    Mitra, S; Yang, S; Kustov, V

    1998-11-01

    Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. However, recent research indicates that some wavelet-based compression techniques may not appreciably degrade image quality, even at compression ratios up to 30:1. The performance of a recently designed wavelet-based adaptive vector quantization is compared with a well-known wavelet-based scalar quantization technique to demonstrate the superiority of the former technique at compression ratios higher than 30:1. The use of higher compression with high fidelity of the reconstructed images allows fast transmission of images over the Internet for prompt inspection by radiologists at remote locations in an emergency situation, while higher quality images follow in a progressive manner if desired. Such fast and progressive transmission can also be used for downloading large data sets such as the Visible Human at a quality desired by the users for research or education. This new adaptive vector quantization uses a neural-network-based clustering technique for efficient quantization of the wavelet-decomposed subimages, yielding minimal distortion in the reconstructed images undergoing high compression. Results of compression up to 100:1 are shown for 24-bit color and 8-bit monochrome medical images. PMID:9848058

  14. Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture

    NASA Astrophysics Data System (ADS)

    Anandkumar, Janavikulam; Szu, Harold H.

    1995-04-01

    This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in a parallel fashion from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, it is possible to choose the best one by various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of any specialized quantizers and encoders, the compression ratio numbers look encouraging, with preliminary results indicating compression ratios ranging from 40:1 to 15:1 at percentage rms distortions ranging from about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or the signal deviations that may occur due to either noise or arrhythmias, only the one wavelet family that correlates best with that particular portion of the signal is chosen. The main reason for the compression is that the chosen mother wavelet and its variations match the shape of the ECG and are able to efficiently transcribe the source with few wavelet coefficients. The adaptive template-matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from the fine tuning of the wavelet selection process, which is based on localized constraints such as the shape of the signal and its energy.
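
    The stringent energy criterion described here is easy to state in code: keep the smallest set of wavelet coefficients whose energy reaches a target fraction of the segment's total. The sketch below uses db4 and a synthetic ECG-like signal as stand-ins; the paper's parallel search over a library of wavelet families is not reproduced.

        import numpy as np
        import pywt

        def energy_threshold(segment, wavelet="db4", level=4, keep_energy=0.995):
            """Zero the smallest coefficients while retaining a target
            fraction of the segment's energy."""
            coeffs = pywt.wavedec(segment, wavelet, level=level)
            flat = np.concatenate(coeffs)
            order = np.argsort(np.abs(flat))[::-1]               # largest first
            cum = np.cumsum(flat[order] ** 2)
            n_keep = int(np.searchsorted(cum, keep_energy * cum[-1])) + 1
            thresh = np.abs(flat[order[n_keep - 1]])
            kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
            return pywt.waverec(kept, wavelet), n_keep

        t = np.linspace(0.0, 2.0, 2048)
        ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 25 * t)
        reconstruction, n_kept = energy_threshold(ecg_like)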

  15. Generalized b-spline subdivision-surface wavelets and lossless compression

    SciTech Connect

    Bertram, M; Duchaineau, M A; Hamann, B; Joy, K I

    1999-11-24

    We present a new construction of wavelets on arbitrary two-manifold topology for geometry compression. The constructed wavelets generalize symmetric tensor product wavelets with associated B-spline scaling functions to irregular polygonal base mesh domains. The wavelets and scaling functions are tensor products almost everywhere, except in the neighborhoods of some extraordinary points (points of valence other than four) in the base mesh that defines the topology. The compression of arbitrary polygonal meshes representing isosurfaces of scalar-valued trivariate functions is a primary application. The main contribution of this paper is the generalization of lifted symmetric tensor product B-spline wavelets to two-manifold geometries. Surfaces composed of B-spline patches can easily be converted to this scheme. We present a lossless compression method for geometries with or without associated functions like color, texture, or normals. The new wavelet transform is highly efficient and can represent surfaces at any level of resolution with high degrees of continuity, except at a finite number of extraordinary points in the base mesh. In the neighborhoods of these points detail can be added to the surface to approximate any degree of continuity.

  16. Clinical utility of wavelet compression for resolution-enhanced chest radiography

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.

    2000-05-01

    This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID™ (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size, and then decompresses the image data for display at optimal resolution by matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID™ wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement to the original. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces as well as normal cases, is in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve on the originals.

  17. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. In fact, the compression method used is decomposed into two stages: geometric encoding and topologic encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is done on these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme. This insertion into the coarse mesh yields high robustness to several attacks. Finally, topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. Experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.

  18. Faster techniques to evolve wavelet coefficients for better fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Shanavaz, K. T.; Mythili, P.

    2013-05-01

    In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing the computational speed by 81.35%, the evolved coefficients performed much better than the coefficients reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, in this work, wavelets were evolved with resized, cropped, resized-average and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, it was found that the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256 × 256 centre-cropped images took less than one fifth of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvement in average PSNR was also observed for other compression ratios (CR) and for degraded images. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder, and the coefficients performed well with other fingerprint databases as well.

  19. Optimal block boundary pre/postfiltering for wavelet-based image and video compression.

    PubMed

    Liang, Jie; Tu, Chengjie; Tran, Trac D

    2005-12-01

    This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework. PMID:16370467

  20. Multispectral image compression technology based on dual-tree discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Fang, Zhijun; Luo, Guihua; Liu, Zhicheng; Gan, Yun; Lu, Yu

    2009-10-01

    This paper proposes a combination of the DCT and the dual-tree discrete wavelet transform (DDWT) to solve the problems of multispectral image data storage and transmission. The proposed method removes spectral redundancy by a 1D DCT and spatial redundancy by a 2D dual-tree discrete wavelet transform. It therefore achieves low distortion under conditions of high compression while providing high-quality reconstruction of the multispectral image. Tested against DCT, Haar and DDWT alternatives, the results show that the proposed method eliminates blocking artifacts and yields smooth, visually pleasing images, indicating that the DDWT-based scheme achieves more prominent reconstruction quality with less noise.
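
    The two decorrelation stages described here compose straightforwardly, as the sketch below shows: a 1-D DCT along the spectral axis, then a 2-D wavelet transform per band. Since PyWavelets ships no dual-tree complex wavelet transform, a plain DWT stands in for the paper's DDWT, and the 7-band random cube is a stand-in for real multispectral data.

        import numpy as np
        import pywt
        from scipy.fft import dct

        cube = np.random.rand(7, 128, 128)     # (bands, rows, cols) stand-in cube

        # Stage 1: 1-D DCT across the spectral axis removes inter-band redundancy.
        spectral = dct(cube, type=2, norm="ortho", axis=0)

        # Stage 2: 2-D wavelet transform per band removes spatial redundancy.
        # (Plain DWT here; the paper uses a dual-tree discrete wavelet transform.)
        spatial = [pywt.wavedec2(band, "db4", level=3) for band in spectral]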

  1. DSP accelerator for the wavelet compression/decompression of high-resolution images

    SciTech Connect

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  2. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on the wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding within the ROI and quality-controllable lossy coding outside the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are treated as important coefficients and kept. Otherwise, the energy loss in the transform domain is calculated according to the goal PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is then determined according to this loss of energy. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside the ROI, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding has been applied to further improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality and compression ratio are obtained. PMID:17228703

  3. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    NASA Astrophysics Data System (ADS)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, which separates low- and high-frequency components; higher-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients to produce a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding filtering and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are around 1:8 and 7%, respectively. Beyond the numerical evaluation, visual inspection demonstrates the high quality of the restored ECG signal, where the different ECG waves are recovered correctly.

  4. Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network.

    PubMed

    Kim, Byung S; Yoo, Sun K

    2007-09-01

    The use of wireless networks bears great practical importance in instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency within a low-noise environment, and that the WH algorithm is competitive for use in high-error environments with degraded short-term performance with abnormal or contaminated ECG signals. PMID:17701824

  5. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
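
    The spectral half of this scheme, an image-dependent KLT, is an eigendecomposition of the inter-band covariance matrix. The sketch below shows that step alone, producing decorrelated eigen-bands that the EZW coder (not shown) would then compress; the 7-band random cube is a stand-in for Landsat data.

        import numpy as np

        cube = np.random.rand(7, 64, 64)                  # 7 spectral bands
        X = cube.reshape(7, -1)                           # one spectral vector per pixel

        # Image-dependent KLT: eigenvectors of the inter-band covariance matrix.
        Xc = X - X.mean(axis=1, keepdims=True)
        cov = Xc @ Xc.T / Xc.shape[1]
        eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
        klt = eigvecs[:, ::-1].T                          # rows = principal directions

        eigen_bands = (klt @ Xc).reshape(cube.shape)      # decorrelated bands -> EZW coder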

  6. A linear quality control design for high efficient wavelet-based ECG data compression.

    PubMed

    Hung, King-Chu; Tsai, Chin-Feng; Ku, Cheng-Tung; Wang, Huan-Sheng

    2009-05-01

    In ECG data compression, maintaining the reconstructed signal at a desired quality is crucial for clinical application. In this paper, a linear quality control design based on the reversible round-off non-recursive discrete periodized wavelet transform (RRO-NRDPWT) is proposed for highly efficient ECG data compression. With the advantages of error propagation resistance and octave coefficient normalization, RRO-NRDPWT enables non-linear quantization control to obtain an approximately linear distortion by using a single control variable. Based on linear programming, a linear quantization scale prediction model is presented for quality control of the reconstructed ECG signal. Using the MIT-BIH arrhythmia database, the experimental results show that the proposed system, with lower computational complexity, obtains much better quality control performance than other wavelet-based systems. PMID:19070935

  7. Review of digital fingerprint acquisition systems and wavelet compression

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    2003-04-01

    Over the last decade many criminal justice agencies have replaced their fingerprint card based systems with electronic processing. We examine these new systems and find that image acquisition to support the identification application is consistently a challenge. Image capture and compression are widely dispersed and relatively new technologies within criminal justice information systems. Image quality assurance programs are just beginning to mature.

  8. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    SciTech Connect

    Reynolds, W.D. Jr; Kenyon, R.V.

    1996-08-01

    In this paper a method for the compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, by which the subbands convey the necessary frequency-domain information.

  9. Wavelet-based ECG compression by bit-field preserving and running length encoding.

    PubMed

    Chan, Hsiao-Lung; Siao, You-Chen; Chen, Szi-Wen; Yu, Shih-Fan

    2008-04-01

    Efficient electrocardiogram (ECG) compression can reduce the payload of real-time ECG transmission as well as reduce the amount of data storage in long-term ECG recording. In this paper an ECG compression/decompression architecture based on the bit-field preserving (BFP) and running length encoding (RLE)/decoding schemes incorporated with the discrete wavelet transform (DWT) is proposed. Compared to complex and repetitive manipulations in the set partitioning in hierarchical tree (SPIHT) coding and the vector quantization (VQ), the proposed algorithm has advantages of simple manipulations and a feedforward structure that would be suitable to implement on very-large-scale integrated circuits and general microcontrollers. PMID:18164098

  10. Speech coding and compression using wavelets and lateral inhibitory networks

    NASA Astrophysics Data System (ADS)

    Ricart, Richard

    1990-12-01

    The purpose of this thesis is to introduce the concept of lateral inhibition as a generalized technique for compressing time/frequency representations of electromagnetic and acoustical signals, particularly speech. This requires at least a rudimentary treatment of the theory of frames (which generalizes most commonly known time/frequency distributions), the biology of hearing, and digital signal processing. As such, this material, along with the interrelationships of the disparate subjects, is presented in a tutorial style. This may leave the mathematician longing for more rigor, the neurophysiological psychologist longing for more substantive support of the hypotheses presented, and the engineer longing for a reprieve from the theoretical barrage. Despite the problems that arise when trying to appeal to too wide an audience, this thesis should be a cogent analysis of the compression of time/frequency distributions via lateral inhibitory networks.

  11. Applications of wavelet-based compression to multidimensional earth science data

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  12. Applications of wavelet-based compression to multidimensional earth science data

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-02-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  13. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  14. SPECTRUM analysis of multispectral imagery in conjunction with wavelet/KLT data compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-12-01

    The data analysis program, SPECTRUM, is used for fusion, visualization, and classification of multi-spectral imagery. The raw data used in this study is Landsat Thematic Mapper (TM) 7-channel imagery, with 8 bits of dynamic range per channel. To facilitate data transmission and storage, a compression algorithm is proposed based on spatial wavelet transform coding and KLT decomposition of interchannel spectral vectors, followed by adaptive optimal multiband scalar quantization. The performance of SPECTRUM clustering and visualization is evaluated on compressed multispectral data. 8-bit visualizations of 56-bit data show little visible distortion at 50:1 compression and graceful degradation at higher compression ratios. Two TM images were processed in this experiment: a 1024 x 1024-pixel scene of the region surrounding the Chernobyl power plant, taken a few months before the reactor malfunction, and a 2048 x 2048 image of Moscow and surrounding countryside.

  15. Extension of wavelet compression algorithms to 3D and 4D image data: exploitation of data coherence in higher dimensions allows very high compression ratios

    NASA Astrophysics Data System (ADS)

    Zeng, Li; Jansen, Christian; Unser, Michael A.; Hunziker, Patrick

    2001-12-01

    High resolution multidimensional image data yield huge datasets. For compression and analysis, 2D approaches are often used, neglecting the information coherence in higher dimensions, which can be exploited for improved compression. We designed a wavelet compression algorithm suited for data of arbitrary dimensions, and assessed its ability for compression of 4D medical images. Basically, separable wavelet transforms are done in each dimension, followed by quantization and standard coding. Results were compared with a conventional 2D wavelet approach. We found that in 4D heart images, this algorithm allowed high compression ratios, preserving diagnostically important image features. For similar image quality, compression ratios using the 3D/4D approaches were typically much higher (2-4 times per added dimension) than with the 2D approach. For low-resolution images created with the requirement to keep predefined key diagnostic information (contractile function of the heart), compression ratios up to 2000 could be achieved. Thus, higher-dimensional wavelet compression is feasible, and by exploiting data coherence in higher image dimensions it allows much higher compression than comparable 2D approaches. The proven applicability of this approach to multidimensional medical imaging has important implications especially for the fields of image storage and transmission and, specifically, for the emerging field of telemedicine.
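
    The "separable wavelet transform in each dimension" generalizes directly in code. The sketch below runs an n-dimensional transform on a toy 4-D array and applies a crude uniform threshold before reconstruction; the Haar wavelet, the two levels, and the threshold value are arbitrary, and the paper's quantization and coding stages are omitted.

        import numpy as np
        import pywt

        # Toy 4-D dataset: (time, z, y, x), e.g. a beating-heart volume sequence.
        data = np.random.rand(8, 16, 32, 32)

        # Separable wavelet transform over all four dimensions at once.
        coeffs = pywt.wavedecn(data, "haar", level=2)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr[np.abs(arr) < 0.1] = 0.0          # crude uniform threshold before coding
        restored = pywt.waverecn(
            pywt.array_to_coeffs(arr, slices, output_format="wavedecn"), "haar")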

  16. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented. PMID:16764265

  17. Compression and Encryption of ECG Signal Using Wavelet and Chaotically Huffman Code in Telemedicine Application.

    PubMed

    Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz

    2016-03-01

    In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems. The important issue is being able to recover the original signal from the compressed signal. The main purpose of this paper is compressing the ECG signal with no loss of essential data, and also encrypting the signal to keep it confidential from everyone except physicians. In this paper, mobile processors are used, with no need for any computers to serve this purpose. After initial preprocessing, such as removal of baseline noise and Gaussian noise, peak detection and determination of heart rate, the ECG signal is compressed. In the compression stage, after three steps of wavelet transform (db04), thresholding techniques are used. Huffman coding combined with chaos is then used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72%. The ECG signals are then sent to a telemedicine center over the TCP/IP protocol to acquire a specialist diagnosis. PMID:26779641

  18. Clinical evaluation of wavelet compression of digitized chest x-rays

    NASA Astrophysics Data System (ADS)

    Erickson, Bradley J.; Manduca, Armando; Persons, Kenneth R.

    1997-05-01

    In this paper we assess lossy image compression of digitized chest x-rays using radiologist assessment of anatomic structures and numerical measurements of image accuracy. Forty chest x-rays were digitized and compressed using an irreversible wavelet technique at 10, 20, 40 and 80:1. These were presented in a blinded fashion alongside an uncompressed image for subjective A-B comparison of 11 anatomic structures as well as overall quality. Mean error, RMS error, maximum pixel error, and the number of pixels within 1 percent of the original value were also computed for compression ratios from 10:1 to 80:1. We found that at low compression there was a slight preference for the compressed images. There was no significant difference at 20:1 and 40:1. For some structures there was a slight preference for the original over the 80:1 compressed images. Numerical measures demonstrated high image faithfulness, both in terms of the number of pixels within 1 percent of their original value and the average error over all pixels. Our findings suggest that lossy compression at 40:1 or more can be used without perceptible loss in the demonstration of anatomic structures.
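
    The numerical measures named in this record are simple to compute; the sketch below implements them. One stated assumption: "within 1 percent of original value" is read as relative to each pixel's own value (with a guard against division by zero), which may differ from the authors' exact definition.

        import numpy as np

        def fidelity_metrics(original, compressed, tol=0.01):
            """Mean error, RMS error, maximum pixel error, and the fraction
            of pixels within tol (1%) of their original value."""
            diff = compressed.astype(float) - original.astype(float)
            denom = np.maximum(np.abs(original.astype(float)), 1e-12)
            return {
                "mean_error": diff.mean(),
                "rms_error": np.sqrt((diff ** 2).mean()),
                "max_pixel_error": np.abs(diff).max(),
                "fraction_within_1pct": (np.abs(diff) <= tol * denom).mean(),
            }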

  19. Video compression of coronary angiograms based on discrete wavelet transform with block classification

    SciTech Connect

    Ho, B.K.T.; Tsai, M.J.; Wei, J.; Ma, M.; Saipetch, P.

    1996-12-01

    A new method of video compression for angiographic images has been developed to achieve a high compression ratio (approximately 20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG's) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. The method can be generalized to any dynamic image sequence application sensitive to block artifacts.

  20. Block based image compression technique using rank reduction and wavelet difference reduction

    NASA Astrophysics Data System (ADS)

    Bolotnikova, Anastasia; Rasti, Pejman; Traumann, Andres; Lusi, Iiris; Daneshmand, Morteza; Noroozi, Fatemeh; Samuel, Kadri; Sarkar, Suman; Anbarjafari, Gholamreza

    2015-12-01

    In this paper a new block-based lossy image compression technique using rank reduction of the image and the wavelet difference reduction (WDR) technique is proposed. Rank reduction is obtained by applying singular value decomposition (SVD). The input image is divided into blocks of equal size, after which quantization by SVD is carried out on each block, followed by the WDR technique. Reconstruction is carried out by decompressing each block's bit stream and then merging all of them to obtain the decompressed image. Visual and quantitative experimental results of the proposed image compression technique are shown and compared with those of the WDR technique and JPEG2000. From the results of the comparison, the proposed image compression technique outperforms the WDR and JPEG2000 techniques.
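
    The per-block rank reduction step is a plain truncated SVD. The sketch below applies it to fixed 32 x 32 blocks with rank 4, both arbitrary choices; the subsequent WDR coding of each reduced block is not shown.

        import numpy as np

        def svd_rank_reduce(block, rank):
            """Approximate one block by its rank-r SVD truncation (only
            U_r, s_r and Vt_r would need to be stored)."""
            U, s, Vt = np.linalg.svd(block, full_matrices=False)
            return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

        image = np.random.rand(256, 256)
        b, rank = 32, 4
        blocks = image.reshape(256 // b, b, 256 // b, b).swapaxes(1, 2)
        reduced = np.block([[svd_rank_reduce(blocks[i, j], rank)
                             for j in range(blocks.shape[1])]
                            for i in range(blocks.shape[0])])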

  1. Muscles data compression in body sensor network using the principal component analysis in wavelet domain

    PubMed Central

    Yekani Khoei, Elmira; Hassannejad, Reza; Mozaffari Tazehkand, Behzad

    2015-01-01

    Introduction: The body sensor network is a key technology used for remote monitoring of physiological information, enabling physicians to predict and diagnose different conditions effectively. These networks consist of small sensors with sensing capability but limited computation and energy resources. Methods: In the present research, a new compression method based on principal component analysis and the wavelet transform is used to increase coherence. In the proposed method, principal component analysis is first applied to find the principal components of the data, increasing the coherence, and thereby the similarity between the data, and with it the compression rate. Then, using the wavelet transform, the data are decomposed into different scales. In the data restoration process, only selected parts are restored, and the parts of the data that contain noise are omitted. By omitting the noise, the quality of the transmitted data increases and good compression can be obtained. Results: Pilates exercises were performed by twelve patients with various dysfunctions. The results showed compression ratios of 0.7210, 0.8898, 0.6548, 0.6765, 0.6009, 0.7435, 0.7651, 0.7623, 0.7736, 0.8596, 0.8856 and 0.7102 for the proposed method, against 0.8256, 0.9315, 0.9340, 0.9509, 0.8998, 0.9556, 0.9732, 0.9580, 0.8046, 0.9448, 0.9573 and 0.9440 for the previous method (the Tseng algorithm). Conclusion: Comparison of the compression rates and prediction errors with the available results shows the accuracy of the proposed method. PMID:25901292

  2. Nonlinear wavelet compression of ion mobility spectra from ion mobility spectrometers mounted in an unmanned aerial vehicle.

    PubMed

    Cao, Libo; Harrington, Peter de B; Harden, Charles S; McHugh, Vincent M; Thomas, Martin A

    2004-02-15

    Linear and nonlinear wavelet compression of ion mobility spectrometry (IMS) data are compared and evaluated. IMS provides low detection limits and rapid response for many compounds. Nonlinear wavelet compression of ion mobility spectra reduced the data to 4-5% of its original size, while eliminating artifacts in the reconstructed spectra that occur with linear compression, and the root-mean-square reconstruction error was 0.17-0.20% of the maximum intensity of the uncompressed spectra. Furthermore, nonlinear wavelet compression precisely preserves the peak location (i.e., drift time). Small variations in peak location may occur in the reconstructed spectra that were linearly compressed. A method was developed and evaluated for optimizing the compression. The compression method was evaluated with in-flight data recorded from ion mobility spectrometers mounted in an unmanned aerial vehicle (UAV). Plumes of dimethyl methylphosphonate were disseminated for interrogation by the UAV-mounted IMS system. The daublet 8 wavelet filter exhibited the best performance for these evaluations. PMID:14961740

  3. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  4. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  5. FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Bradley, Jonathan N.; Brislawn, Christopher M.; Hopper, Thomas

    1993-08-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  6. Embedded zeroblock coding algorithm based on KLT and wavelet transform for hyperspectral image compression

    NASA Astrophysics Data System (ADS)

    Hou, Ying

    2009-10-01

    In this paper, a hyperspectral image lossy coder using the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm based on the Karhunen-Loève transform (KLT) and wavelet transform (WT) is proposed. This coding scheme adopts 1D KLT as spectral decorrelator and 2D WT as spatial decorrelator. Furthermore, the computational complexity and the coding performance of the low-complexity KLT are compared and evaluated. In comparison with several state-of-the-art coding algorithms, experimental results indicate that our coder can achieve better lossy compression performance.

  7. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  8. Multibaseline polarimetric synthetic aperture radar tomography of forested areas using wavelet-based distribution compressive sensing

    NASA Astrophysics Data System (ADS)

    Liang, Lei; Li, Xinwu; Gao, Xizhang; Guo, Huadong

    2015-01-01

    The three-dimensional (3-D) structure of forests, especially the vertical structure, is an important parameter of forest ecosystem modeling for monitoring ecological change. Synthetic aperture radar tomography (TomoSAR) provides scene reflectivity estimation of vegetation along elevation coordinates. Due to the advantages of super-resolution imaging and a small number of measurements, distribution compressive sensing (DCS) inversion techniques for polarimetric SAR tomography were successfully developed and applied. This paper addresses the 3-D imaging of forested areas based on the framework of DCS using fully polarimetric (FP) multibaseline SAR interferometric (MB-InSAR) tomography at the P-band. A new DCS-based FP TomoSAR method is proposed: a new wavelet-based distributed compressive sensing FP TomoSAR method (FP-WDCS TomoSAR method). The method takes advantage of the joint sparsity between polarimetric channel signals in the wavelet domain to jointly inverse the reflectivity profiles in each channel. The method not only allows high accuracy and super-resolution imaging with a low number of acquisitions, but can also obtain the polarization information of the vertical structure of forested areas. The effectiveness of the techniques for polarimetric SAR tomography is demonstrated using FP P-band airborne datasets acquired by the ONERA SETHI airborne system over a test site in Paracou, French Guiana.

  9. VELOCITY FIELD OF COMPRESSIBLE MAGNETOHYDRODYNAMIC TURBULENCE: WAVELET DECOMPOSITION AND MODE SCALINGS

    SciTech Connect

    Kowal, Grzegorz; Lazarian, A.

    2010-09-01

    We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.

  10. Accelerating patch-based directional wavelets with multicore parallel computing in compressed sensing MRI.

    PubMed

    Li, Qiyue; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Lai, Zongying; Ye, Jing; Chen, Zhong

    2015-06-01

    Compressed sensing MRI (CS-MRI) is a promising technology to accelerate magnetic resonance imaging. Both improving the image quality and reducing the computation time are important for this technology. Recently, a patch-based directional wavelet (PBDW) has been applied in CS-MRI to improve edge reconstruction. However, this method is time consuming since it involves extensive computations, including geometric direction estimation and numerous iterations of wavelet transform. To accelerate computations of PBDW, we propose a general parallelization of patch-based processing by taking the advantage of multicore processors. Additionally, two pertinent optimizations, excluding smooth patches and pre-arranged insertion sort, that make use of sparsity in MR images are also proposed. Simulation results demonstrate that the acceleration factor with the parallel architecture of PBDW approaches the number of central processing unit cores, and that pertinent optimizations are also effective to make further accelerations. The proposed approaches allow compressed sensing MRI reconstruction to be accomplished within several seconds. PMID:25620521
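
    The parallelization strategy described (distributing independent patches across cores and skipping smooth patches) maps directly onto a worker pool. The sketch below is generic: the per-patch function is a placeholder rather than PBDW's direction estimation, and the standard-deviation cutoff stands in for the paper's "exclude smooth patches" optimization.

        import numpy as np
        from multiprocessing import Pool

        def process_patch(patch):
            """Placeholder for per-patch work (direction estimation plus
            wavelet transform in PBDW)."""
            if patch.std() < 1e-3:        # skip nearly-smooth patches outright
                return patch
            return patch - patch.mean()   # stand-in computation

        if __name__ == "__main__":
            image = np.random.rand(256, 256)
            b = 8
            patches = [image[i:i + b, j:j + b]
                       for i in range(0, 256, b) for j in range(0, 256, b)]
            with Pool() as pool:          # speed-up scales with CPU core count
                results = pool.map(process_patch, patches)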

  11. Dataflow and remapping for wavelet compression and realtime view-dependent optimization of billion-triangle isosurfaces

    SciTech Connect

    Duchaineau, M A; Porumbescu, S D; Bertram, M; Hamann, B; Joy, K I

    2000-10-06

    Currently, large physics simulations produce 3D fields whose individual surfaces, after conventional extraction processes, contain upwards of hundreds of millions of triangles. Detailed interactive viewing of these surfaces requires powerful compression to minimize storage, and fast view-dependent optimization of display triangulations to drive high-performance graphics hardware. In this work we provide an overview of an end-to-end multiresolution dataflow strategy whose goal is to increase efficiencies in practice by several orders of magnitude. Given recent advancements in subdivision-surface wavelet compression and view-dependent optimization, we present algorithms here that provide the "glue" that makes this strategy hold together. Shrink-wrapping converts highly detailed unstructured surfaces of arbitrary topology to the semi-structured form needed for wavelet compression. Remapping to triangle bintrees minimizes disturbing "pops" during real-time display-triangulation optimization and provides effective selective-transmission compression for out-of-core and remote access to these huge surfaces.

  12. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.

  13. ECG compression using Slantlet and lifting wavelet transform with and without normalisation

    NASA Astrophysics Data System (ADS)

    Aggarwal, Vibha; Singh Patterh, Manjeet

    2013-05-01

    This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within the tolerance. Then a binary look-up table is built to store the position map for zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding. The look-up table is encoded by Huffman coding. The results show that the LWT gives the best results among the transforms evaluated in this article. This transform is then considered to evaluate the effect of normalisation before thresholding. In the case of normalisation, the TC are normalised by dividing them by N (where N is the number of samples) to reduce their range. The normalised coefficients (NC) are then thresholded. After that, the procedure is the same as for the coefficients without normalisation. The results show that the compression ratio (CR) in the case of LWT with normalisation is improved compared to that without normalisation.
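
    The thresholding step lends itself to a short sketch: bisect on the coefficient threshold until the reconstruction PRD matches a user-specified target. This is our own minimal rendering with an assumed wavelet and tolerance, not the authors' code.

```python
# Bisection on the wavelet-coefficient threshold until the percentage
# root-mean-square difference (PRD) hits the target within a tolerance.
import numpy as np
import pywt

def prd(x, y):
    return 100.0 * np.linalg.norm(x - y) / np.linalg.norm(x)

def threshold_for_target_prd(ecg, target_prd, wavelet="bior4.4", tol=0.05):
    coeffs = pywt.wavedec(ecg, wavelet, mode="periodization")
    arr, slices = pywt.coeffs_to_array(coeffs)
    lo, hi = 0.0, np.abs(arr).max()
    for _ in range(60):                       # bisection on the threshold
        t = 0.5 * (lo + hi)
        rec = pywt.waverec(pywt.array_to_coeffs(
            pywt.threshold(arr, t, mode="hard"), slices, output_format="wavedec"),
            wavelet, mode="periodization")
        err = prd(ecg, rec[:len(ecg)])
        if abs(err - target_prd) < tol:
            break
        # larger threshold -> more zeros -> larger PRD
        lo, hi = (t, hi) if err < target_prd else (lo, t)
    return t

ecg = np.cumsum(np.random.randn(1024))        # stand-in for an ECG segment
t = threshold_for_target_prd(ecg, target_prd=5.0)
```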

  14. Comparison of wavelet scalar quantization and JPEG for fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Kidd, Robert C.

    1995-01-01

    An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of existing international standard JPEG, it appears likely that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Taken together, these facts suggest a decision different from the one that was made by the FBI with regard to its fingerprint image compression standard. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.

  15. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at the sensor nodes.
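
    The in-cluster transform is built on lifting; a one-level Haar lifting step, purely illustrative and not the WSN protocol itself, looks like this:

```python
# Haar lifting: "predict" replaces odd samples by detail residuals, "update"
# keeps the even samples as a coarse approximation (running average).
import numpy as np

def haar_lifting_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict: neighbor predicts the odd sample
    approx = even + 0.5 * detail   # update: preserve the running average
    return approx, detail

def haar_lifting_inverse(approx, detail):
    even = approx - 0.5 * detail
    odd = detail + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 7.0])
a, d = haar_lifting_forward(x)
assert np.allclose(haar_lifting_inverse(a, d), x)  # perfect reconstruction
```

    In a cluster, each lifting step only exchanges values between neighboring nodes, which is what makes the transform attractive for distributed, in-network use.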

  16. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the giant mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. Signal-processing techniques are therefore highly valuable in non-destructive testing. This paper presents a wavelet filtering and phase-coded pulse compression hybrid method to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and a pulse compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are also analysed to achieve a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
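
    A hedged sketch of the hybrid chain with a simplified baseband 13-bit Barker pulse, universal-threshold wavelet denoising, and correlation-based compression; carrier modulation and the paper's parameter choices are omitted.

```python
# Denoise the received A-scan with a DWT, then pulse-compress by
# cross-correlating with the 13-bit Barker code.
import numpy as np
import pywt

BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def wavelet_denoise(sig, wavelet="db6", level=4):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise estimate
    t = sigma * np.sqrt(2 * np.log(sig.size))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, t, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:sig.size]

rng = np.random.default_rng(0)
echo = np.zeros(2048)
echo[700:713] += BARKER13                                # buried coded echo
received = echo + 0.8 * rng.standard_normal(echo.size)   # poor SNR, as in ACUT

compressed = np.correlate(wavelet_denoise(received), BARKER13, mode="same")
print("peak at sample", np.argmax(np.abs(compressed)))   # near the echo position
```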

  17. Lossless to lossy compression for hyperspectral imagery based on wavelet and integer KLT transforms with 3D binary EZW

    NASA Astrophysics Data System (ADS)

    Cheng, Kai-jen; Dill, Jeffrey

    2013-05-01

    In this paper, a lossless-to-lossy compression scheme for hyperspectral images based on the Integer Karhunen-Loève Transform (IKLT) and the Integer Discrete Wavelet Transform (IDWT) is proposed. Integer transforms are used to accomplish reversibility. The IKLT is used as a spectral decorrelator and the 2D-IDWT is used as a spatial decorrelator. The three-dimensional Binary Embedded Zerotree Wavelet (3D-BEZW) algorithm efficiently encodes the hyperspectral volumetric image by implementing progressive bitplane coding. The signs and magnitudes of transform coefficients are encoded separately. Lossy and lossless compression of the signs is implemented by the conventional EZW algorithm and arithmetic coding, respectively. The efficient 3D-BEZW algorithm is applied to code the magnitudes. Further compression can be achieved using arithmetic coding. The lossless and lossy compression performance is compared with other state-of-the-art predictive and transform-based image compression methods on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Results show that the 3D-BEZW performance is comparable to predictive algorithms, while its computational cost is comparable to transform-based algorithms.

  18. Using wavelet fusion approach at panchromatic imagery to achieve dynamic range compression

    NASA Astrophysics Data System (ADS)

    Wan, Hung-Sen; Hsu, Chau-Yun; Hsu, Yuan Hung

    2008-10-01

    Image fusion techniques maximize the information obtained from images of the same area or object taken by different sensors. They enhance features that are unapparent in any single image and are widely applied in remote sensing, medical imaging, machine vision, and military identification. In remote sensing, the latest sensors usually provide 11-bit panchromatic data containing rich radiometric information; however, standard visual equipment can only reproduce 8-bit content, which limits the analysis of the imagery on screen or paper. This paper shows how to preserve the original 11-bit information through DRA (Dynamic Range Adjustment) and keep the output free from color distortion during the subsequent pan/multi-spectrum image fusion process. We propose a dynamic range compression method that converts the original IKONOS panchromatic image into high-luminance, low-luminance and typical linearly stretched images and uses wavelet fusion to enhance the radiometric visualization while keeping good correlation with the multi-spectrum images, in order to produce a fine pan-sharpened product.
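
    The fusion step can be sketched as follows: decompose the differently stretched renderings, average the coarse approximations, and keep the strongest detail coefficient at each position. The stretches and wavelet here are stand-ins, not the authors' exact DRA curves.

```python
# Wavelet fusion of two tone-mapped renderings of an 11-bit panchromatic image:
# mean of the approximation bands, maximum-magnitude choice for detail bands.
import numpy as np
import pywt

def wavelet_fuse(images, wavelet="db2", level=3):
    decomps = [pywt.wavedec2(im, wavelet, level=level, mode="periodization")
               for im in images]
    fused = [np.mean([d[0] for d in decomps], axis=0)]   # average coarse bands
    for lev in range(1, level + 1):
        bands = []
        for b in range(3):                               # cH, cV, cD
            stack = np.stack([d[lev][b] for d in decomps])
            pick = np.argmax(np.abs(stack), axis=0)      # strongest detail wins
            bands.append(np.take_along_axis(stack, pick[None], axis=0)[0])
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet, mode="periodization")

pan11 = np.random.randint(0, 2048, (256, 256)).astype(float)   # 11-bit stand-in
high = np.clip(pan11 / 8.0, 0, 255)                            # high-luminance stretch
low = np.clip((pan11 - 1024) / 4.0, 0, 255)                    # low-luminance stretch
fused8 = np.clip(wavelet_fuse([high, low]), 0, 255)            # 8-bit output range
```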

  19. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    PubMed Central

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both the computation time and the reconstruction quality of traditional CS-MRI did not meet the requirements of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  20. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift.

    PubMed

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from less waiting time. Task. In the last decade, various rapid MRI techniques on the basis of compressed sensing (CS) were proposed. However, both the computation time and the reconstruction quality of traditional CS-MRI did not meet the requirements of clinical use. Method. In this study, a novel method was proposed with the name of exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) exponential wavelet transform, (ii) iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
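
    The three named ingredients can be sketched in a few lines. In this toy, a standard db4 wavelet stands in for the exponential wavelet transform, the forward model is plain masked sampling rather than MRI k-space, and the random shift is emulated by circular rolling (cycle spinning); it is a sketch of the same shape, not EWISTARS itself.

```python
# ISTA with wavelet-domain soft-thresholding plus a random shift per iteration.
import numpy as np
import pywt

def soft_wavelet(x, lam, wavelet="db4"):
    coeffs = pywt.wavedec2(x, wavelet, level=3, mode="periodization")
    arr, sl = pywt.coeffs_to_array(coeffs)
    arr = pywt.threshold(arr, lam, mode="soft")
    return pywt.waverec2(pywt.array_to_coeffs(arr, sl, output_format="wavedec2"),
                         wavelet, mode="periodization")

rng = np.random.default_rng(1)
truth = np.outer(np.hanning(64), np.hanning(64))
mask = rng.random(truth.shape) < 0.4          # 40% of samples observed
y = mask * truth

x = np.zeros_like(truth)
lam = 0.01
for k in range(100):
    x = x + mask * (y - x)                    # gradient step for the data term
    s = rng.integers(0, 8, size=2)            # random shift before shrinkage
    x = np.roll(soft_wavelet(np.roll(x, s, axis=(0, 1)), lam), -s, axis=(0, 1))
print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```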

  1. High performance projectile seal development for non perfect railgun bores

    SciTech Connect

    Wolfe, T.R.; Vine, F.E. Le; Riedy, P.E.; Panlasigui, A.; Hawke, R.S.; Susoeff, A.R.

    1997-01-01

    The sealing of high-pressure gas behind an accelerating projectile has been developed over centuries of use in conventional guns and cannons, where the principal concerns were propulsion efficiency and trajectory accuracy and repeatability. The development of guns as high-pressure equation-of-state (EOS) research tools increased the importance of better seals to prevent gas leakage from interfering with the experimental targets. The development of plasma-driven railguns has further increased the need for higher-quality seals to prevent gas and plasma blow-by. This paper summarizes more than a decade of effort to meet these increased requirements. In small-bore railguns, the first improvement was prompted by the need to contain the propulsive plasma behind the projectile to avoid the initiation of current-conducting paths in front of the projectile. The second major requirement arose from the development of a railgun to serve as an EOS tool, where it was necessary to maintain an evacuated region in front of the projectile throughout the acceleration process. More recently, the techniques developed for small-bore guns have been applied to large-bore railguns and electro-thermal chemical guns in order to maximize their propulsion efficiency. Furthermore, large-bore railguns are often less rigid and less straight than conventional homogeneous-material guns. Hence, techniques to maintain seals in non-perfect, non-homogeneous-material launchers have been developed and are included in this paper.

  2. An optimal unequal error protection scheme with turbo product codes for wavelet compression of ultraspectral sounder data

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Sriraja, Y.; Ahuja, Alok; Goldberg, Mitchell D.

    2006-08-01

    Most source coding techniques generate bitstreams in which different regions have unequal influence on data reconstruction. An uncorrected error in a more influential region can cause more error propagation in the reconstructed data. Given a limited bandwidth, unequal error protection (UEP) via channel coding with different code rates for different regions of the bitstream may yield much less error contamination than equal error protection (EEP). We propose an optimal UEP scheme that minimizes error contamination after channel and source decoding. We use JPEG2000 for source coding and a turbo product code (TPC) for channel coding to demonstrate this technique with ultraspectral sounder data. Wavelet compression yields unequal significance across wavelet resolutions. In the proposed UEP scheme, the statistics of erroneous pixels after TPC and JPEG2000 decoding are used to determine the optimal channel code rate for each wavelet resolution. The proposed UEP scheme significantly reduces the number of pixel errors compared to its EEP counterpart. In practice, with a predefined set of implementation parameters (available channel codes, desired code rate, noise level, etc.), the optimal code rate allocation for UEP needs to be determined only once and can be done offline.

  3. Compression of ECG signals using variable-length classified vector sets and wavelet transforms

    NASA Astrophysics Data System (ADS)

    Gurkan, Hakan

    2012-12-01

    In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines modeling of the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS) with residual error coding via the wavelet transform. In particular, we form the VL-CSEVS derived from ECG signals by exploiting the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths obtained by an energy-based segmentation method; they are then provided to both the transmitter and the receiver used in our proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database, and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that the proposed algorithm achieves high compression ratios with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, which has been supported by the clinical tests that we have carried out.

  4. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    NASA Technical Reports Server (NTRS)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.; Moore, Thomas E.

    2015-01-01

    Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed the available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high-performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1, but future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used to compress the count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; the latter indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.

  5. Numerical solution of multi-dimensional compressible reactive flow using a parallel wavelet adaptive multi-resolution method

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle

    The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that is able to solve multi-dimensional compressible reactive flow problems efficiently. This work demonstrates the great potential of the method for direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. For these solutions, it is necessary to combine advanced numerical techniques with modern computational resources.

  6. Gravity inversion using wavelet-based compression on parallel hybrid CPU/GPU systems: application to southwest Ghana

    NASA Astrophysics Data System (ADS)

    Martin, Roland; Monteiller, Vadim; Komatitsch, Dimitri; Perrouty, Stéphane; Jessell, Mark; Bonvalot, Sylvain; Lindsay, Mark

    2013-12-01

    We solve the 3-D gravity inverse problem using a massively parallel voxel (or finite-element) implementation on a hybrid multi-CPU/multi-GPU (graphics processing unit) cluster. This allows us to obtain information on density distributions in heterogeneous media with efficient computational times. In a new software package called TOMOFAST3D, the inversion is solved with an iterative least-squares or gradient technique, which minimizes a hybrid L1-/L2-norm-based misfit function. It is drastically accelerated using either Haar or fourth-order Daubechies wavelet compression operators, which are applied to the sensitivity matrix kernels involved in the misfit minimization. The compression process behaves like a preconditioning of the huge linear system to be solved, and a reduction of two or three orders of magnitude in computational time can be obtained for a given number of CPU processor cores. The required memory storage is also reduced by a similar factor. Finally, we show how this parallel CPU inversion code can be accelerated further by a factor of between 3.5 and 10 using GPU computing. Performance levels are given for an application to Ghana, and the physical information obtained after 3-D inversion using a sensitivity matrix with around 5.37 trillion elements is discussed. Using compression, the whole inversion process takes from a few minutes to less than an hour for a given number of processor cores, instead of tens of hours for a similar number of cores when compression is not used.
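
    The row-compression trick relies on orthonormality: inner products are preserved in the wavelet domain, so thresholded rows can act directly on wavelet-transformed model vectors. A small stand-in demonstration with synthetic kernels, the Haar wavelet, and an illustrative threshold:

```python
# Compress each sensitivity-matrix row as sparse wavelet coefficients; with an
# orthonormal wavelet, G @ m equals (W G) @ (W m), so matvecs stay accurate.
import numpy as np
import pywt

def to_wavelet(v, wavelet="haar"):
    arr, _ = pywt.coeffs_to_array(pywt.wavedec(v, wavelet, mode="periodization"))
    return arr

rng = np.random.default_rng(2)
n = 1024
# Smooth, decaying kernels loosely resembling gravity sensitivities (stand-in).
G = np.array([np.exp(-np.abs(np.arange(n) - p) / 30.0)
              for p in rng.integers(0, n, size=200)])

Gw = np.array([to_wavelet(row) for row in G])
Gw[np.abs(Gw) < 1e-3 * np.abs(Gw).max()] = 0.0       # drop small coefficients
print("kept fraction:", np.count_nonzero(Gw) / Gw.size)

m = rng.standard_normal(n)                           # model vector
exact, approx = G @ m, Gw @ to_wavelet(m)            # matvec in wavelet domain
print("matvec relative error:",
      np.linalg.norm(exact - approx) / np.linalg.norm(exact))
```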

  7. Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression.

    PubMed

    Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander

    2016-05-01

    By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143

  8. Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression

    PubMed Central

    Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander

    2016-01-01

    By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143

  9. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    SciTech Connect

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
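
    Not the report's algorithm, but a stand-in with the same flavor: iterate forward/inverse DWT smoothing while re-imposing the known samples each pass, so only the masked 'holes' take on interpolated values. The wavelet, damping factor, and synthetic mask are assumptions.

```python
# Fill masked samples by repeated wavelet smoothing with damped detail bands.
import numpy as np
import pywt

def wavelet_fill(data, known, wavelet="db2", level=3, passes=50, damp=0.5):
    x = np.where(known, data, data[known].mean())     # initial guess in the holes
    for _ in range(passes):
        coeffs = pywt.wavedec2(x, wavelet, level=level, mode="periodization")
        coeffs = [coeffs[0]] + [tuple(damp * b for b in lev) for lev in coeffs[1:]]
        smooth = pywt.waverec2(coeffs, wavelet, mode="periodization")
        x = np.where(known, data, smooth)             # keep the scientific samples
    return x

yy, xx = np.mgrid[0:64, 0:64]
field = np.sin(xx / 7.0) + np.cos(yy / 9.0)           # stand-in for POP-like data
known = (xx - 32) ** 2 + (yy - 32) ** 2 > 150         # circular 'land' mask
filled = wavelet_fill(field, known)
```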

  10. Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation

    PubMed Central

    Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan

    2014-01-01

    In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new approach that integrates electrocardiogram watermarking and compression, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal to noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible. PMID:24566636

  11. Wavelet-based watermarking and compression for ECG signals with verification evaluation.

    PubMed

    Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan

    2014-01-01

    In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new approach that integrates electrocardiogram watermarking and compression, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal to noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible. PMID:24566636

  12. Wavelet-based very low bandwidth video compression for physical security

    NASA Astrophysics Data System (ADS)

    Cox, Paul G.

    1998-12-01

    Video cameras have become a key component of physical security and continue to grow in importance in today's environment. Video cameras must often be installed in remote locations or locations where physical tampering may be a factor. The solution is to transmit the video over wireless communication links. Often, the communication bandwidths are very narrow (typically less than 9.6 kbit/s). In addition, the image transmission must be made in real time or near real time, while still maintaining the integrity and quality of the imagery. This poses a very challenging problem for the transmission of imagery, in particular motion imagery or video. Trident's WaveNet program offers a solution to this problem, where the primary objective is to provide a real-time, high-quality video compression capability. This paper discusses the WaveNet program with respect to the application of physical security.

  13. Scalable medical data compression and transmission using wavelet transform for telemedicine applications.

    PubMed

    Hwang, Wen-Jyi; Chine, Ching-Fung; Li, Kuo-Jung

    2003-03-01

    In this paper, a novel medical data compression algorithm, termed layered set partitioning in hierarchical trees (LSPIHT) algorithm, is presented for telemedicine applications. In the LSPIHT, the encoded bit streams are divided into a number of layers for transmission and reconstruction. Starting from the base layer, by accumulating bit streams up to different enhancement layers, we can reconstruct medical data with various signal-to-noise ratios (SNRs) and/or resolutions. Receivers with distinct specifications can then share the same source encoder to reduce the complexity of telecommunication networks for telemedicine applications. Numerical results show that, besides having low network complexity, the LSPIHT attains better rate-distortion performance as compared with other algorithms for encoding medical data. PMID:12670019

  14. Wavelets on Planar Tesselations

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-02-25

    We present a new technique for progressive approximation and compression of polygonal objects in images. Our technique uses local parameterizations defined by meshes of convex polygons in the plane. We generalize a tensor product wavelet transform to polygonal domains to perform multiresolution analysis and compression of image regions. The advantage of our technique over conventional wavelet methods is that the domain is an arbitrary tessellation rather than, for example, a uniform rectilinear grid. We expect that this technique has many applications, including image compression, progressive transmission, radiosity, virtual reality, and image morphing.

  15. Study on Optimization Method of Quantization Step and the Image Quality Evaluation for Medical Ultrasonic Echo Image Compression by Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Khieovongphachanh, Vimontha; Hamamoto, Kazuhiko; Kondo, Shozo

    In this paper, we investigate an optimized quantization method in JPEG2000 for application to medical ultrasonic echo images. JPEG2000 has been issued as the new standard for image compression; it is based on the Wavelet Transform (WT) and is incorporated into DICOM (Digital Imaging and Communications in Medicine). There are two quantization methods. One is scalar derived quantization (SDQ), which is usually used in standard JPEG2000. The other is scalar expounded quantization (SEQ), which can be optimized by the user. This paper therefore presents an optimization of the quantization steps, which are determined by a Genetic Algorithm (GA), and the results are compared with SDQ and with SEQ determined by an arithmetic average method. The purpose of this paper is to improve image quality and compression ratio for medical ultrasonic echo images. Image quality is evaluated by an objective assessment, PSNR (Peak Signal to Noise Ratio), and by a subjective assessment carried out by ultrasonographers from Tokai University Hospital and Tokai University Hachioji Hospital. The results show that SEQ determined by GA provides better image quality than SDQ and than SEQ determined by the arithmetic average method. Additionally, the three optimization methods for the quantization step are applied to a thin-wire target image for analysis of the point spread function.
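
    A toy version of the SEQ-by-GA idea: evolve one quantization step per wavelet subband against a fitness that trades reconstruction PSNR against a crude rate proxy (the fraction of nonzero coefficients). Population size, mutation rates, and the fitness weighting are our assumptions, not the paper's settings.

```python
# Minimal genetic algorithm over per-subband uniform quantization steps.
import numpy as np
import pywt

rng = np.random.default_rng(3)
img = np.clip(np.add.outer(np.hanning(128), np.hanning(128)) * 200, 0, 255)
coeffs = pywt.wavedec2(img, "db4", level=3, mode="periodization")
bands = [coeffs[0]] + [b for lev in coeffs[1:] for b in lev]   # 10 subbands

def fitness(steps, rate_weight=60.0):
    q = [np.round(b / s) * s for b, s in zip(bands, steps)]    # uniform quantizers
    rec_coeffs = [q[0]] + [tuple(q[1 + 3 * i: 4 + 3 * i]) for i in range(3)]
    rec = pywt.waverec2(rec_coeffs, "db4", mode="periodization")
    psnr = 10 * np.log10(255.0 ** 2 / (np.mean((img - rec) ** 2) + 1e-12))
    nonzero = sum(np.count_nonzero(x) for x in q) / sum(x.size for x in q)
    return psnr - rate_weight * nonzero                        # quality vs. rate

pop = rng.uniform(0.5, 40.0, size=(24, len(bands)))            # initial steps
for gen in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-12:]]                    # truncation selection
    a = parents[rng.integers(0, 12, 24)]
    b = parents[rng.integers(0, 12, 24)]
    mix = rng.random((24, len(bands))) < 0.5                   # uniform crossover
    pop = np.where(mix, a, b)
    pop += rng.normal(0, 0.5, pop.shape) * (rng.random(pop.shape) < 0.2)
    pop = np.clip(pop, 0.1, 64.0)                              # mutation + bounds
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("optimized per-subband steps:", np.round(best, 2))
```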

  16. Using PACS and wavelet-based image compression in a wide-area network to support radiation therapy imaging applications for satellite hospitals

    NASA Astrophysics Data System (ADS)

    Smith, Charles L.; Chu, Wei-Kom; Wobig, Randy; Chao, Hong-Yang; Enke, Charles

    1999-07-01

    An ongoing PACS project at our facility has been expanded to include providing and managing images used for the routine clinical operation of the department of radiation oncology. The intent of our investigation has been to enable our clinical radiotherapy service to enter the tele-medicine environment through the use of a PACS system initially implemented in the department of radiology. The backbone of the imaging network includes five CT and three MR scanners located across three imaging centers. A PC workstation in the department of radiation oncology was used to transmit CT images to a satellite facility located approximately 60 miles from the primary center. Chest CT images were used to analyze network transmission performance. The connectivity established between the primary department and the satellite has fulfilled all image criteria required by the oncologist. Establishing the link to the oncologist at the satellite diminished the bottlenecking of imaging-related tasks at the primary facility due to physician absence. A 30:1 compression ratio using a wavelet-based algorithm provided clinically acceptable images for treatment planning. Clinical radiotherapy images can be effectively managed in a wide-area network to link satellite facilities to larger clinical centers.

  17. Why are wavelets so effective

    SciTech Connect

    Resnikoff, H.L.

    1993-01-01

    The theory of compactly supported wavelets is now 4 yr old. In that short period, it has stimulated significant research in pure mathematics; has been the source of new numerical methods for the solution of nonlinear partial differential equations, including Navier-Stokes; and has been applied to digital signal-processing problems, ranging from signal detection and classification to signal compression for speech, audio, images, seismic signals, and sonar. Wavelet channel coding has even been proposed for code division multiple access digital telephony. In each of these applications, prototype wavelet solutions have proved to be competitive with established methods, and in many cases they are already superior.

  18. Periodized wavelets

    SciTech Connect

    Schlossnagle, G.; Restrepo, J.M.; Leaf, G.K.

    1993-12-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted against their counterparts which form a basis for L²(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and several tabulated values are included.

  19. Wavelet Representation of Contour Sets

    SciTech Connect

    Bertram, M; Laney, D E; Duchaineau, M A; Hansen, C D; Hamann, B; Joy, K I

    2001-07-19

    We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression, introducing high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.

  20. Low-Oscillation Complex Wavelets

    NASA Astrophysics Data System (ADS)

    ADDISON, P. S.; WATSON, J. N.; FENG, T.

    2002-07-01

    In this paper we explore the use of two low-oscillation complex wavelets—Mexican hat and Morlet—as powerful feature detection tools for data analysis. These wavelets, which have been largely ignored to date in the scientific literature, allow for a decomposition which is more “temporal than spectral” in wavelet space. This is shown to be useful for the detection of small amplitude, short duration signal features which are masked by much larger fluctuations. Wavelet transform-based methods employing these wavelets (based on both wavelet ridges and modulus maxima) are developed and applied to sonic echo NDT signals used for the analysis of structural elements. A new mobility scalogram and associated reflectogram is defined for analysis of impulse response characteristics of structural elements and a novel signal compression technique is described in which the pertinent signal information is contained within a few modulus maxima coefficients. As an example of its usefulness, the signal compression method is employed as a pre-processor for a neural network classifier. The authors believe that low oscillation complex wavelets have wide applicability to other practical signal analysis problems. Their possible application to two such problems is discussed briefly—the interrogation of arrhythmic ECG signals and the detection and characterization of coherent structures in turbulent flow fields.
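
    The modulus-maxima compression idea can be sketched with the real Morlet wavelet shipped in PyWavelets: compute the scalogram, then keep only the coefficients that are local maxima of the transform magnitude along time. The scales, test signal, and pruning level are illustrative.

```python
# Continuous wavelet transform followed by modulus-maxima extraction: the
# pertinent signal information is concentrated in a few maxima coefficients.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
sig = np.sin(2 * np.pi * 40 * t)
sig[500:520] += 0.2 * np.hanning(20)          # small, short-duration feature

scales = np.arange(1, 64)
coef, _ = pywt.cwt(sig, scales, "morl")       # scalogram: (scales, time)
mag = np.abs(coef)
interior = mag[:, 1:-1]
is_max = (interior > mag[:, :-2]) & (interior >= mag[:, 2:])   # along time
maxima = np.zeros_like(mag, dtype=bool)
maxima[:, 1:-1] = is_max & (interior > 0.1 * mag.max())        # prune weak maxima
print("retained coefficients:", maxima.sum(), "of", mag.size)
```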

  1. Wavelet theory and its applications

    SciTech Connect

    Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.

    1996-07-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed-parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.

  2. Optical wavelet transform for fingerprint identification

    NASA Astrophysics Data System (ADS)

    MacDonald, Robert P.; Rogers, Steven K.; Burns, Thomas J.; Fielding, Kenneth H.; Warhola, Gregory T.; Ruck, Dennis W.

    1994-03-01

    The Federal Bureau of Investigation (FBI) has recently sanctioned a wavelet fingerprint image compression algorithm developed for reducing storage requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical fingerprint identification process. Wavelet filters are created from computer- generated holograms of biorthogonal wavelets, the same wavelets implemented in the FBI algorithm. Using a detour phase holographic technique, a complex binary filter mask is created with both symmetry and linear phase. The wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial Light Modulator and the biorthogonal wavelet filters. A telescopic lens combination scales the transformed fingerprint onto the filters, providing a means of adjusting the biorthogonal wavelet filter dilation continuously. The wavelet transformed fingerprint is then applied to an optical fingerprint identification process. Comparison between normal fingerprints and wavelet transformed fingerprints shows improvement in the optical identification process, in terms of rotational invariance.

  3. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−λ), where r is the display visual resolution in pixels/degree and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
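
    A worked instance of the level-to-frequency relation; the display resolution value is just an example.

```python
# f = r * 2**(-lambda): DWT levels correspond to octave-spaced frequencies.
r = 32.0                                   # example: 32 pixels/degree display
for lam in range(1, 6):                    # DWT levels 1..5
    f = r * 2.0 ** (-lam)                  # band center frequency, cycles/degree
    print(f"level {lam}: {f:g} cy/deg")    # prints 16, 8, 4, 2, 1
```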

  4. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
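
    The 3%-coefficient experiment is easy to mimic on a synthetic correlation field; the Gaussian correlation model, wavelet, and grid size below are our assumptions, not the assimilation system's.

```python
# Project a smooth correlation field onto a 2-D wavelet basis, keep the largest
# 3% of coefficients, and check how much of the field's energy survives.
import numpy as np
import pywt

n = 128
x = np.arange(n)
corr = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 15.0 ** 2))  # stand-in field

arr, sl = pywt.coeffs_to_array(pywt.wavedec2(corr, "db4", mode="periodization"))
cut = np.quantile(np.abs(arr), 0.97)               # keep top 3% of coefficients
trunc = np.where(np.abs(arr) >= cut, arr, 0.0)
rec = pywt.waverec2(pywt.array_to_coeffs(trunc, sl, output_format="wavedec2"),
                    "db4", mode="periodization")
kept = 1 - np.linalg.norm(corr - rec) ** 2 / np.linalg.norm(corr) ** 2
print(f"energy represented: {100 * kept:.1f}%")    # typically close to 99%
```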

  5. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: (1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; (2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and (3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  6. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demands a corresponding huge penalty in I/O time, rendering time and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet based multiresolution scheme was introduced in image processing, for the purposes of data compression and feature extraction. Unlike photographic image data which has rather simple settings, computational field simulation data needs more careful treatment in applying the multiresolution technique. While the image data sits on a regularly spaced grid, the simulation data usually resides on a structured curvilinear grid or unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, the photographic images have very little inherent smoothness with discontinuities almost everywhere. On the other hand, the numerical solutions have smoothness almost everywhere and discontinuities in local areas (shock, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of accuracy.

  7. Proximity sensing with wavelet generated video

    NASA Astrophysics Data System (ADS)

    Noel, Steven E.; Szu, Harold H.

    1998-10-01

    In this paper we introduce wavelet video processing of proximity sensor signals. Proximity sensing is required for a wide range of military and commercial applications, including weapon fuzing, robotics, and automotive collision avoidance. While our proposed method temporarily increases signal dimension, it eventually performs data compression through the extraction of salient signal features. This data compression in turn reduces the necessary complexity of the remaining computational processing. We demonstrate our method of wavelet video processing via the proximity sensing of nearby objects through their Doppler shift. In doing this we perform a continuous wavelet transform on the Doppler signal, after subjecting it to a time-varying window. We then extract signal features from the resulting wavelet video, which we use as input to pattern recognition neural networks. The networks are trained to estimate the time-varying Doppler shift from the extracted features. We test the estimation performance of the networks, using different degrees of nonlinearity in the frequency shift over time and different levels of noise. We give the analytical result that the signal-to-noise enhancement of our proposed method is at least as good as the square root of the number of video frames, although more work is needed to completely quantify this. Real-time wavelet-based video processing and compression technology recently developed under the DOD WAVENET program offers an exciting opportunity to more fully investigate our proposed method.

  8. Visibility of Wavelet Quantization Noise

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(−L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  9. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a new technique developed over the last 10 years, with great potential for application. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.

  10. Directional spherical multipole wavelets

    SciTech Connect

    Hayn, Michael; Holschneider, Matthias

    2009-07-15

    We construct a family of admissible analysis reconstruction pairs of wavelet families on the sphere. The construction is an extension of the isotropic Poisson wavelets. Similar to those, the directional wavelets allow a finite expansion in terms of off-center multipoles. Unlike the isotropic case, the directional wavelets are not a tight frame. However, at small scales, they almost behave like a tight frame. We give an explicit formula for the pseudodifferential operator given by the combination analysis-synthesis with respect to these wavelets. The Euclidean limit is shown to exist and an explicit formula is given. This allows us to quantify the asymptotic angular resolution of the wavelets.

  11. The Sea of Wavelets

    NASA Astrophysics Data System (ADS)

    Jones, B. J. T.

    Wavelet analysis has become a major tool in many aspects of data handling, whether it be statistical analysis, noise removal or image reconstruction. Wavelet analysis has worked its way into fields as diverse as economics, medicine, geophysics, music and cosmology.

  12. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
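
    The zero-and-keep behavior is easy to reproduce; the sketch below zeroes the smallest 85% of wavelet coefficients of a noisy two-peak 'target' signal and reports the resulting distortion. The data and threshold are stand-ins for the study's sensor data.

```python
# Zero the low-amplitude (noise-dominated) coefficients and verify that the
# larger, lower-frequency target signatures survive nearly intact.
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 2048)
signal = np.exp(-((t - 0.3) / 0.01) ** 2) + 0.7 * np.exp(-((t - 0.7) / 0.02) ** 2)
noisy = signal + 0.05 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "sym8", mode="periodization")
arr, sl = pywt.coeffs_to_array(coeffs)
cut = np.quantile(np.abs(arr), 0.85)          # zero the smallest 85%
arr = np.where(np.abs(arr) >= cut, arr, 0.0)
rec = pywt.waverec(pywt.array_to_coeffs(arr, sl, output_format="wavedec"),
                   "sym8", mode="periodization")
err = np.linalg.norm(signal - rec) / np.linalg.norm(signal)
print(f"zeroed {100 * (arr == 0).mean():.0f}% of coefficients, "
      f"relative error vs clean target {err:.3f}")
```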

  13. Adaptive Wavelet Transforms

    SciTech Connect

    Szu, H.; Hsu, C.

    1996-12-31

    Human sensory systems (HSS) may be approximately described as an adaptive or self-learning version of the Wavelet Transform (WT), capable of learning suitable mother wavelets from several input-output associative pairs. Such an Adaptive WT (AWT) is a redundant combination of mother wavelets used either to represent or to classify inputs.

  14. Data analysis using wavelets

    SciTech Connect

    Fryer, M.O.

    1997-05-01

    This paper describes the use of wavelet transform techniques to analyze typical data found in industrial applications. A way of detecting system changes using wavelet transforms is described. The results of applying this method are described for several typical applications. The wavelet technique is compared with the use of Fourier transform methods.

  15. Wavelet encoding and variable resolution progressive transmission

    NASA Technical Reports Server (NTRS)

    Blanford, Ronald P.

    1993-01-01

    Progressive transmission is a method of transmitting and displaying imagery in stages of successively improving quality. The subsampled lowpass image representations generated by a wavelet transformation suit this purpose well, but for best results the order of presentation is critical. Candidate data for transmission are best selected using dynamic prioritization criteria generated from image contents and viewer guidance. We show that wavelets are not only suitable but superior when used to encode data for progressive transmission at non-uniform resolutions. This application does not preclude additional compression using quantization of highpass coefficients, which to the contrary results in superior image approximations at low data rates.

  16. Symplectic wavelet transformation.

    PubMed

    Fan, Hong-Yi; Lu, Hai-Liang

    2006-12-01

    Usually a wavelet transform is based on dilated-translated wavelets. We propose a symplectic-transformed-translated wavelet family $\psi^{*}_{r,s}(z-\kappa)$ ($r,s$ are the symplectic transform parameters with $|s|^{2}-|r|^{2}=1$, and $\kappa$ is a translation parameter) generated from the mother wavelet $\psi$, and the corresponding wavelet transformation $W_{\psi}f(r,s;\kappa)=\int_{-\infty}^{\infty}\frac{d^{2}z}{\pi}\,f(z)\,\psi^{*}_{r,s}(z-\kappa)$. This new transform possesses well-behaved properties and is related to the optical Fresnel transform in its quantum mechanical version. PMID:17099740

  17. [An algorithm of a wavelet-based medical image quantization].

    PubMed

    Hou, Wensheng; Wu, Xiaoying; Peng, Chenglin

    2002-12-01

    The compression of medical images is key to the study of telemedicine and PACS. We have studied the statistical distribution of wavelet subimage coefficients and concluded that it closely follows a Laplacian distribution. Based on the statistical properties of image wavelet decomposition, an image quantization algorithm is proposed. In this algorithm, we select the sample standard deviation as the key quantization threshold in every wavelet subimage. Tests have shown that the main advantages of this algorithm are simple computation and the predictability of coefficients within different quantization threshold ranges. High compression efficiency can also be obtained. Therefore, this algorithm can potentially be used in telemedicine and PACS. PMID:12561372
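
    A minimal sketch of the stated rule (per-subimage sample standard deviation as the quantization threshold); the dead-zone shape, step size, and wavelet choice below are our assumptions, not the authors' exact scheme:

```python
import numpy as np
import pywt

image = np.random.default_rng(1).normal(size=(256, 256))  # stand-in image

coeffs = pywt.wavedec2(image, "bior4.4", level=3)
quantized = [coeffs[0]]  # keep the approximation subimage untouched

for level_bands in coeffs[1:]:
    bands = []
    for band in level_bands:  # horizontal, vertical, diagonal subimages
        sigma = band.std(ddof=1)  # sample standard deviation of the subimage
        # Dead-zone quantizer: zero everything below sigma, round the rest.
        q = np.where(np.abs(band) < sigma, 0.0, np.round(band / sigma) * sigma)
        bands.append(q)
    quantized.append(tuple(bands))

restored = pywt.waverec2(quantized, "bior4.4")
```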

  18. Option pricing from wavelet-filtered financial series

    NASA Astrophysics Data System (ADS)

    de Almeida, V. T. X.; Moriconi, L.

    2012-10-01

    We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-gaussian statistical structure of the original financial time series is essentially preserved for expiration times which are larger than just one trading day.
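
    The low-pass filtering step is easy to reproduce in outline (synthetic heavy-tailed data here, not the FTSE100; the decomposition depth is an assumption):

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
log_returns = 1e-4 * rng.standard_t(df=3, size=4096)  # heavy-tailed returns
series = np.cumsum(log_returns)

coeffs = pywt.wavedec(series, "haar", level=8)

# Keep only the coarse (large-scale) approximation; zero all detail scales.
filtered = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
smooth = pywt.waverec(filtered, "haar")

discarded = sum(c.size for c in coeffs[1:]) / sum(c.size for c in coeffs)
print(f"fraction of coefficients discarded: {discarded:.1%}")  # ~99%
```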

  19. New classes of Wavelets

    SciTech Connect

    Manchanda, P.; Meenakshi

    2009-07-02

    Recently Manchanda, Meenakshi and Siddiqi have studied the Haar-Vilenkin wavelet and a special type of non-uniform multiresolution analysis. The Haar-Vilenkin wavelet is a generalization of the Haar wavelet. Motivated by the paper of Gabardo and Nashed, we have introduced a class of multiresolution analyses extending the concept of classical multiresolution analysis. We present here a summary of these results. We hope that applications of these concepts to some significant real-world problems can be found.

  20. Progressive Compression of Volumetric Subdivision Meshes

    SciTech Connect

    Laney, D; Pascucci, V

    2004-04-16

    We present a progressive compression technique for volumetric subdivision meshes based on the slow growing refinement algorithm. The system comprises a wavelet transform followed by progressive encoding of the resulting wavelet coefficients. We compare the efficiency of two wavelet transforms. The first transform is based on the smoothing rules used in the slow growing subdivision technique. The second transform is a generalization of lifted linear B-spline wavelets to the same multi-tier refinement structure. Direct coupling with a hierarchical coder produces progressive bit streams. Rate-distortion metrics are evaluated for both wavelet transforms. We tested the practical performance of the scheme on synthetic data as well as data from laser indirect-drive fusion simulations with multiple fields per vertex. Both wavelet transforms result in high-quality trade-off curves and produce qualitatively good coarse representations.

  1. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  2. Image coding by way of wavelets

    NASA Technical Reports Server (NTRS)

    Shahshahani, M.

    1993-01-01

    The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.

  3. Simultaneous denoising and compression of multispectral images

    NASA Astrophysics Data System (ADS)

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.

    2013-01-01

    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.

  4. Source Wavelet Phase Extraction

    NASA Astrophysics Data System (ADS)

    Naghadeh, Diako Hariri; Morley, Christopher Keith

    2016-06-01

    Extraction of the propagation wavelet phase from seismic data can be conducted using first-, second-, third- and fourth-order statistics. Three new methods are introduced: (1) combination of different moments, (2) windowed continuous wavelet transform and (3) maximum correlation with a cosine function. To compare the different methods, synthetic data with and without noise were chosen. Results show that first-, second- and third-order statistics are not able to preserve the wavelet phase. Kurtosis can preserve the propagation wavelet phase, but the signal-to-noise ratio can affect the phase extracted with this method, so it will be unstable for data sets with a low signal-to-noise ratio. Using a combination of different moments to extract the phase is more robust than applying kurtosis. The improvement occurs because zero-phase wavelets with reverse polarities have equal maximum kurtosis values, so the correct wavelet polarity cannot be identified by kurtosis alone. Zero-phase wavelets with reverse polarities have distinct minimum and maximum values under the combination-of-different-moments method. These properties enable the technique to handle a finite data segment and to choose the correct wavelet polarity. Also, the use of different moments can decrease sensitivity to outliers. A windowed continuous wavelet transform is more sensitive to the signal-to-noise ratio than the combination-of-different-moments method, and it encounters more problems in extracting the phase if the scale of the wavelet is incorrect. When the effects of frequency bandwidth, signal-to-noise ratio and analyzing window length are considered, the results of extracting phase information from data with and without noise demonstrate that the combination of different moments is superior to the other methods introduced here.
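
    A hedged sketch of the kurtosis-based rotation scan (one standard approach among those evaluated; the scan grid and synthetic example are our assumptions). Note how the polarity ambiguity mentioned in the abstract shows up: kurtosis is identical for rotations that differ by pi.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.stats import kurtosis

def estimate_constant_phase(trace, n_angles=180):
    """Rotate the trace through constant phases; return the wavelet phase
    estimate (negative of the kurtosis-maximizing rotation)."""
    analytic = hilbert(trace)
    angles = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    k = [kurtosis(np.real(analytic * np.exp(1j * a))) for a in angles]
    best = angles[int(np.argmax(k))]
    return -best  # ambiguous up to +/- pi (polarity), as the abstract notes

rng = np.random.default_rng(3)
reflectivity = rng.standard_t(df=3, size=2048)  # spiky synthetic reflectivity
rotated = np.real(hilbert(reflectivity) * np.exp(1j * np.pi / 2))
noisy = rotated + 0.01 * rng.standard_normal(2048)
print(estimate_constant_phase(noisy))  # expect roughly +/- pi/2
```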

  5. Adaptive compression of image data

    NASA Astrophysics Data System (ADS)

    Hludov, Sergei; Schroeter, Claus; Meinel, Christoph

    1998-09-01

    In this paper we introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on the classification of the image bit planes, and finally an algorithm for adaptive image compression. The analysis of the image content is based on an evaluation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made using a difference criterion calculated from the wavelet coefficients of the original image and of the decoded image at the first and second iteration steps of the wavelet transformation. The adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective way to perform adaptive compression. The image classification and image compression algorithms have been implemented in JAVA.

  6. Wavelet analysis of electric adjustable speed drive waveforms

    SciTech Connect

    Czarkowski, D.; Domijan, A. Jr.

    1998-10-01

    The three most common adjustable speed drives (ASDs) used in HVAC equipment, namely, pulse-width modulated (PWM) induction drive, brushless-dc drive, and switched-reluctance drive, generate non-periodic and nonstationary electric waveforms with sharp edges and transients. Deficiencies of Fourier transform methods in analysis of such ASD waveforms prompted an application of the wavelet transform. Results of discrete wavelet transform (DWT) analysis of PWM inverter-fed motor waveforms are presented. The best mother wavelet for analysis of the recorded waveforms is selected. Data compression properties of the selected mother wavelet are compared to those of the fast Fourier transform (FFT). Multilevel feature detection of ASD waveforms using the DWT is shown.

  7. Periodized Daubechies wavelets

    SciTech Connect

    Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.

    1996-03-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted with their counterparts, which form a basis for L^2(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximation. The analytical solution for inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.

  8. Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data

    SciTech Connect

    Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin; Clyne, John; Childs, Hank

    2015-10-25

    I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation of these factors across wavelet configurations has been explored in image processing, it is not well understood for the visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.

  9. Global and Local Distortion Inference During Embedded Zerotree Wavelet Decompression

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.

    1996-01-01

    This paper presents algorithms for inferring global and spatially local estimates of the squared-error distortion measures for the Embedded Zerotree Wavelet (EZW) image compression algorithm. All distortion estimates are obtained at the decoder without significantly compromising EZW's rate-distortion performance. Two methods are given for propagating distortion estimates from the wavelet domain to the spatial domain, thus giving individual estimates of distortion for each pixel of the decompressed image. These local distortion estimates seem to provide only slight improvement in the statistical characterization of EZW compression error relative to the global measure, unless actual squared errors are propagated. However, they provide qualitative information about the asymptotic nature of the error that may be helpful in wavelet filter selection for low bit rate applications.

  10. Wavelets and electromagnetics

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.

    1992-01-01

    Wavelets are an exciting new topic in applied mathematics and signal processing. This paper provides a brief review of wavelets, also known as families of functions, with an emphasis on interpretation rather than rigor. We derive an indirect use of wavelets for the solution of integral equations, based on techniques adapted from image processing. Examples for resistive strips are given, illustrating the effect of these techniques as well as their promise in dramatically reducing the requirements for solving integral equations for large bodies. We also present a direct implementation of wavelets to solve an integral equation. Both methods suggest future research topics and may hold promise for a variety of uses in computational electromagnetics.

  11. Entanglement Renormalization and Wavelets.

    PubMed

    Evenbly, Glen; White, Steven R

    2016-04-01

    We establish a precise connection between discrete wavelet transforms and entanglement renormalization, a real-space renormalization group transformation for quantum systems on the lattice, in the context of free particle systems. Specifically, we employ Daubechies wavelets to build approximations to the ground state of the critical Ising model, then demonstrate that these states correspond to instances of the multiscale entanglement renormalization ansatz (MERA), producing the first known analytic MERA for critical systems. PMID:27104687

  12. Entanglement Renormalization and Wavelets

    NASA Astrophysics Data System (ADS)

    Evenbly, Glen; White, Steven R.

    2016-04-01

    We establish a precise connection between discrete wavelet transforms and entanglement renormalization, a real-space renormalization group transformation for quantum systems on the lattice, in the context of free particle systems. Specifically, we employ Daubechies wavelets to build approximations to the ground state of the critical Ising model, then demonstrate that these states correspond to instances of the multiscale entanglement renormalization ansatz (MERA), producing the first known analytic MERA for critical systems.

  13. Lagrange wavelets for signal processing.

    PubMed

    Shi, Z; Wei, G W; Kouri, D J; Hoffman, D K; Bao, Z

    2001-01-01

    This paper deals with the design of interpolating wavelets based on a variety of Lagrange functions, combined with novel signal processing techniques for digital imaging. Halfband Lagrange wavelets, B-spline Lagrange wavelets and Gaussian Lagrange (Lagrange distributed approximating functional (DAF)) wavelets are presented as specific examples of the generalized Lagrange wavelets. Our approach combines the perceptually dependent visual group normalization (VGN) technique and a softer logic masking (SLM) method. These are utilized to rescale the wavelet coefficients, remove perceptual redundancy and obtain good visual performance for digital image processing. PMID:18255493

  14. Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram

    SciTech Connect

    Anant, K.S.

    1997-06-01

    In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P as well as the S phase using only information from single station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. The methods developed in this thesis; the

  15. Wavelet analysis of atmospheric turbulence

    SciTech Connect

    Hudgins, L.H.

    1992-12-31

    After a brief review of the elementary properties of Fourier Transforms, the Wavelet Transform is defined in Part I. Basic results are given for admissible wavelets. The Multiresolution Analysis, or MRA (a mathematical structure which unifies a large class of wavelets with Quadrature Mirror Filters) is then introduced. Some fundamental aspects of wavelet design are then explored. The Discrete Wavelet Transform is discussed and, in the context of an MRA, is seen to supply a Fast Wavelet Transform which competes with the Fast Fourier Transform for efficiency. In Part II, the Wavelet Transform is developed in terms of the scale number variable s instead of the scale length variable a, where a = 1/s. Basic results such as the admissibility condition, conservation of energy, and the reconstruction theorem are proven in this context. After reviewing some motivation for the usual Fourier power spectrum, a definition is given for the wavelet power spectrum. This 'spectral density' is then interpreted in the context of spectral estimation theory. Parseval's theorem for Wavelets then leads naturally to the Wavelet Cross Spectrum, Wavelet Cospectrum, and Wavelet Quadrature Spectrum. Wavelet Transforms are then applied in Part III to the analysis of atmospheric turbulence. Data collected over the ocean is examined in the wavelet transform domain for underlying structure. A brief overview of atmospheric turbulence is provided. Then the overall method of applying Wavelet Transform techniques to time series data is described. A trace study is included, showing some of the aspects of choosing the computational algorithm, and selection of a specific analyzing wavelet. A model for generating synthetic turbulence data is developed, and seen to yield useful results in comparing with real data for structural transitions. Results from the theory of Wavelet Spectral Estimation and Wavelet Cross-Transforms are applied to studying the momentum transport and the heat flux.
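
    A minimal sketch of a time-averaged wavelet power spectrum in the spirit of Part II (the Morlet wavelet and normalization are our assumptions, not necessarily Hudgins' definitions):

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 2048)
series = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(series, scales, "morl", sampling_period=t[1] - t[0])

# Wavelet power spectrum: |W(s, t)|^2 averaged over time at each scale.
power = np.mean(np.abs(coefs) ** 2, axis=1)
print(f"spectral peak near {freqs[int(np.argmax(power))]:.2f} Hz")  # ~5 Hz
```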

  16. Integrated system for image storage, retrieval, and transmission using wavelet transform

    NASA Astrophysics Data System (ADS)

    Yu, Dan; Liu, Yawen; Mu, Ray Y.; Yang, Shi-Qiang

    1998-12-01

    Currently, much work has been done in the area of image storage and retrieval. However, the overall performance has been far from practical. A highly integrated wavelet-based image management system is proposed in this paper. By integrating wavelet-based solutions for image compression and decompression, content-based retrieval and progressive transmission, much higher performance can be achieved. The multiresolution nature of the wavelet transform has been proven to be a powerful tool for representing images. The wavelet transform decomposes the image into a set of subimages with different resolutions, from which three solutions for key aspects of image management are reached. The content-based image retrieval (CBIR) features of our system include the color, contour, texture, sample, keyword and topic information of images. The first four features can be extracted naturally from the wavelet transform coefficients. By scoring the similarity of users' requests with images in the database, those with higher scores are noted and the user receives feedback. For image compression and decompression, assuming that details at high resolution and in diagonal directions are less visible to the human eye, a good compression ratio can be achieved. In each subimage, the wavelet coefficients are vector quantized (VQ) using the LBG algorithm, which is improved in our approach to accelerate the process. A higher compression ratio can be achieved with DPCM and an entropy coding method applied together. With the YIQ representation, color images can also be effectively compressed. Transmitting compressed image data places a very low load on the network bandwidth. Progressive transmission is made possible by the multiresolution nature of the wavelet, which makes the system respond faster and the user interface more friendly. The system shows high overall performance by exploiting the excellent features of the wavelet and integrating key aspects of image management. An

  17. Electromagnetic spatial coherence wavelets.

    PubMed

    Castaneda, Roman; Garcia-Sucerquia, Jorge

    2006-01-01

    The recently introduced concept of spatial coherence wavelets is generalized to describe the propagation of electromagnetic fields in free space. To this end, the spatial coherence wavelet tensor is introduced as an elementary quantity, in terms of which the previously known quantities for this domain can be expressed. It allows for the analysis of the relationship between the spatial coherence properties and the polarization state of the electromagnetic wave. This approach is completely consistent with the recently introduced unified theory of coherence and polarization for random electromagnetic beams, but it provides further insight into the causal relationship between the polarization states at different planes along the propagation path. PMID:16478063

  18. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph

    2004-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.

  19. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition can be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, therefore preserving the peaks and valleys found in typical spectra. Compared to the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same compression rate, we show that Wavelet Reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classification such as a maximum likelihood method.
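
    In outline, the reduction keeps the coarse wavelet approximation of each pixel's spectrum as its feature vector (a hedged sketch with random stand-in spectra; the band count, wavelet, and level are illustrative assumptions):

```python
import numpy as np
import pywt

n_pixels, n_bands = 1000, 224  # AVIRIS-like band count (assumption)
spectra = np.random.default_rng(5).random((n_pixels, n_bands))

def reduce_spectrum(spectrum, wavelet="db3", level=3):
    # The approximation coefficients form a shorter, smoothed spectral
    # signature that still tracks the peaks and valleys of the spectrum.
    return pywt.wavedec(spectrum, wavelet, level=level)[0]

reduced = np.apply_along_axis(reduce_spectrum, 1, spectra)
print(spectra.shape, "->", reduced.shape)  # 224 bands -> ~32 features
```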

  20. Compressive rendering: a rendering application of compressed sensing.

    PubMed

    Sen, Pradeep; Darabi, Soheil

    2011-04-01

    Recently, there has been growing interest in compressed sensing (CS), the new theory that shows how a small set of linear measurements can be used to reconstruct a signal if it is sparse in a transform domain. Although CS has been applied to many problems in other fields, in computer graphics, it has only been used so far to accelerate the acquisition of light transport. In this paper, we propose a novel application of compressed sensing by using it to accelerate ray-traced rendering in a manner that exploits the sparsity of the final image in the wavelet basis. To do this, we raytrace only a subset of the pixel samples in the spatial domain and use a simple, greedy CS-based algorithm to estimate the wavelet transform of the image during rendering. Since the energy of the image is concentrated more compactly in the wavelet domain, fewer samples are required for a result of given quality than with conventional spatial-domain rendering. By taking the inverse wavelet transform of the result, we compute an accurate reconstruction of the desired final image. Our results show that our framework can achieve high-quality images with approximately 75 percent of the pixel samples using a nonadaptive sampling scheme. In addition, we perform better than other algorithms that might be used to fill in the missing pixel data, such as interpolation or inpainting. Furthermore, since the algorithm works in image space, it is completely independent of scene complexity. PMID:21311092

  1. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  2. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  3. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  4. Wavelet Domain Radiofrequency Pulse Design Applied to Magnetic Resonance Imaging

    PubMed Central

    Huettner, Andrew M.; Mickevicius, Nikolai J.; Ersoz, Ali; Koch, Kevin M.; Muftuler, L. Tugan; Nencka, Andrew S.

    2015-01-01

    A new method for designing radiofrequency (RF) pulses with numerical optimization in the wavelet domain is presented. Numerical optimization may yield solutions that might otherwise have not been discovered with analytic techniques alone. Further, processing in the wavelet domain reduces the number of unknowns through compression properties inherent in wavelet transforms, providing a more tractable optimization problem. This algorithm is demonstrated with simultaneous multi-slice (SMS) spin echo refocusing pulses because reduced peak RF power is necessary for SMS diffusion imaging with high acceleration factors. An iterative, nonlinear, constrained numerical minimization algorithm was developed to generate an optimized RF pulse waveform. Wavelet domain coefficients were modulated while iteratively running a Bloch equation simulator to generate the intermediate slice profile of the net magnetization. The algorithm minimizes the L2-norm of the slice profile with additional terms to penalize rejection band ripple and maximize the net transverse magnetization across each slice. Simulations and human brain imaging were used to demonstrate a new RF pulse design that yields an optimized slice profile and reduced peak energy deposition when applied to a multiband single-shot echo planar diffusion acquisition. This method may be used to optimize factors such as magnitude and phase spectral profiles and peak RF pulse power for multiband simultaneous multi-slice (SMS) acquisitions. Wavelet-based RF pulse optimization provides a useful design method to achieve a pulse waveform with beneficial amplitude reduction while preserving appropriate magnetization response for magnetic resonance imaging. PMID:26517262

  5. Multiple wavelet-tree-based image coding and robust transmission

    NASA Astrophysics Data System (ADS)

    Cao, Lei; Chen, Chang Wen

    2004-10-01

    In this paper, we present techniques based on multiple wavelet-tree coding for robust image transmission. The algorithm of set partitioning in hierarchical trees (SPIHT) is a state-of-the-art technique for image compression. This variable length coding (VLC) technique, however, is extremely sensitive to channel errors. To improve the error resilience capability while keeping the high source coding efficiency of VLC, we propose to encode each wavelet tree, or a group of wavelet trees, independently using the SPIHT algorithm. Instead of encoding the entire image as one bitstream, multiple bitstreams are generated, so error propagation is limited to an individual bitstream. Two methods based on subsampling and human visual sensitivity are proposed to group the wavelet trees. The multiple bitstreams are further protected by rate-compatible punctured convolutional (RCPC) codes. Unequal error protection is provided both for different bitstreams and for different bit segments inside each bitstream. We also investigate the improvement of error resilience through error resilient entropy coding (EREC) and wavelet tree coding when channels are slightly corruptive. A simple post-processing technique is also proposed to alleviate the effect of residual errors. We demonstrate through simulations that systems with these techniques achieve much better performance than systems transmitting a single bitstream in noisy environments.

  6. Wavelet Domain Radiofrequency Pulse Design Applied to Magnetic Resonance Imaging.

    PubMed

    Huettner, Andrew M; Mickevicius, Nikolai J; Ersoz, Ali; Koch, Kevin M; Muftuler, L Tugan; Nencka, Andrew S

    2015-01-01

    A new method for designing radiofrequency (RF) pulses with numerical optimization in the wavelet domain is presented. Numerical optimization may yield solutions that might otherwise have not been discovered with analytic techniques alone. Further, processing in the wavelet domain reduces the number of unknowns through compression properties inherent in wavelet transforms, providing a more tractable optimization problem. This algorithm is demonstrated with simultaneous multi-slice (SMS) spin echo refocusing pulses because reduced peak RF power is necessary for SMS diffusion imaging with high acceleration factors. An iterative, nonlinear, constrained numerical minimization algorithm was developed to generate an optimized RF pulse waveform. Wavelet domain coefficients were modulated while iteratively running a Bloch equation simulator to generate the intermediate slice profile of the net magnetization. The algorithm minimizes the L2-norm of the slice profile with additional terms to penalize rejection band ripple and maximize the net transverse magnetization across each slice. Simulations and human brain imaging were used to demonstrate a new RF pulse design that yields an optimized slice profile and reduced peak energy deposition when applied to a multiband single-shot echo planar diffusion acquisition. This method may be used to optimize factors such as magnitude and phase spectral profiles and peak RF pulse power for multiband simultaneous multi-slice (SMS) acquisitions. Wavelet-based RF pulse optimization provides a useful design method to achieve a pulse waveform with beneficial amplitude reduction while preserving appropriate magnetization response for magnetic resonance imaging. PMID:26517262

  7. Optical asymmetric image encryption using gyrator wavelet transform

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-11-01

    In this paper, we propose a new optical information processing tool termed the gyrator wavelet transform to secure a fully phase image, based on an amplitude- and phase-truncation approach. The gyrator wavelet transform involves four basic parameters: the gyrator transform order, the type and level of the mother wavelet, and the position of the different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. This tool has also been applied for simultaneous compression and encryption of an image. The system's performance, its sensitivity to encryption parameters such as the gyrator transform order, and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. The computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool that can be used in various optical information processing applications, including image encryption and image compression. The tool can also be applied to secure color, multispectral, and three-dimensional images.

  8. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

    Zhang, S.; Xu, Y.; Xia, J.

    2004-01-01

    Horizontal stacking plays a crucial role in modern seismic data processing, for it not only suppresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both result in false events, which are caused by this noise. The wavelet transform and high-order statistics are very useful methods for modern signal processing. The multiresolution analysis in wavelet theory can decompose a signal on different scales, and high-order correlation functions can suppress correlated noise, against which the conventional correlation function is of no use. Based on the theory of the wavelet transform and high-order statistics, the high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal moveout correction using weights calculated from high-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
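
    The structure of the method can be sketched as follows. For brevity, this sketch weights traces by ordinary (second-order) correlation of wavelet coefficients against a pilot trace, whereas HOCWS uses high-order correlative statistics, so it illustrates the workflow rather than the HOCWS weights themselves:

```python
import numpy as np
import pywt

def wavelet_weighted_stack(gather, wavelet="db4", level=3):
    """gather: (n_traces, n_samples) NMO-corrected CMP gather."""
    coeffs = np.asarray([
        np.concatenate(pywt.wavedec(trace, wavelet, level=level))
        for trace in gather
    ])
    pilot = coeffs.mean(axis=0)  # wavelet-domain pilot trace
    # One weight per trace: correlation of its coefficients with the pilot.
    weights = np.array([np.corrcoef(c, pilot)[0, 1] for c in coeffs])
    weights = np.clip(weights, 0.0, None)
    return (weights / weights.sum()) @ gather  # weighted stack in time

rng = np.random.default_rng(6)
event = np.sin(2 * np.pi * 30 * np.linspace(0, 0.2, 200)) * np.hanning(200)
gather = np.tile(event, (24, 1)) + 0.5 * rng.standard_normal((24, 200))
stacked = wavelet_weighted_stack(gather)
```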

  9. Discrete wavelet analysis of power system transients

    SciTech Connect

    Wilkinson, W.A.; Cox, M.D.

    1996-11-01

    Wavelet analysis is a new method for studying power system transients. Through wavelet analysis, transients are decomposed into a series of wavelet components, each of which is a time-domain signal covering a specific octave frequency band. This paper presents the basic ideas of discrete wavelet analysis. A variety of actual and simulated transient signals are then analyzed using the discrete wavelet transform, helping to demonstrate the power of wavelet analysis.

  10. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
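
    A hedged sketch of the idea (hold out alternate samples, denoise in each candidate basis, and score the held-out prediction error); the even/odd split, soft threshold, and candidate list are illustrative assumptions, not Wheeler's exact procedure:

```python
import numpy as np
import pywt

def cv_score(noisy, wavelet, thresh):
    even, odd = noisy[0::2], noisy[1::2]
    coeffs = pywt.wavedec(even, wavelet, mode="periodization")
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, "soft") for c in coeffs[1:]]
    fit = pywt.waverec(coeffs, wavelet, mode="periodization")
    # Score: how well the curve fitted to even samples predicts odd samples.
    return float(np.mean((fit - odd) ** 2))

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 1024)
clean = np.piecewise(x, [x < 0.5, x >= 0.5], [lambda u: u, lambda u: 1 - u])
noisy = clean + 0.05 * rng.standard_normal(x.size)

candidates = ["haar", "db4", "sym8", "coif3"]
best = min(candidates, key=lambda w: cv_score(noisy, w, thresh=0.1))
print("selected basis:", best)
```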

  11. Hyperspectral image data compression based on DSP

    NASA Astrophysics Data System (ADS)

    Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    2010-11-01

    The huge data volume of hyperspectral images challenges their transmission and storage. It is necessary to find an effective method to compress hyperspectral images. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, the integer wavelet transform, and embedded zerotree wavelet (EZW) coding is proposed in this paper. We adopt a high-powered digital signal processor (DSP), the TMS320DM642, to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm runs much faster on the DSP than on a personal computer. The proposed method can achieve near-real-time compression with excellent image quality and compression performance.

  12. Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach

    NASA Technical Reports Server (NTRS)

    Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.

    2000-01-01

    The objective of this study is to investigate the use of the second generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performances of the bi-orthogonal second generation wavelet transform and the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments are compared in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256^3 and 512^3 DNS of forced homogeneous turbulence (Re_lambda = 168) and 256^3 and 512^3 DNS of decaying homogeneous turbulence (Re_lambda = 55). It is found that bi-orthogonal second generation wavelets can be used for coherent vortex extraction. The results of a priori tests indicate that second generation wavelets have better compression and the residual field is closer to Gaussian. However, it was found that the use of second generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.

  13. Wavelets in medical imaging

    SciTech Connect

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-17

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, the electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of the EEG, one of the important biomedical signals, are carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals with the SWT.

  14. Wavelets in medical imaging

    NASA Astrophysics Data System (ADS)

    Zahra, Noor e.; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-01

    The aim of this study is to provide emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, the electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of the EEG, one of the important biomedical signals, are carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals with the SWT.

  15. Anisotropic analysis of trabecular architecture in human femur bone radiographs using quaternion wavelet transforms.

    PubMed

    Sangeetha, S; Sujatha, C M; Manamalli, D

    2014-01-01

    In this work, the anisotropy of the compressive and tensile strength regions of femur trabecular bone is analysed using quaternion wavelet transforms. Normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms, and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Further, anisotropy is also calculated from the transformed images and analysed. Results show that the anisotropy values derived from the second and third phase components of the quaternion wavelet transform are distinct for normal and abnormal samples, with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples. PMID:25571265

  16. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  17. Compression of gray-scale fingerprint images

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    1994-03-01

    The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.
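
    Schematically, the pipeline looks like the following sketch (WSQ-like in spirit, but not the FBI specification; the wavelet, step size, and test image are assumptions, and the final Huffman stage is omitted for brevity):

```python
import numpy as np
import pywt

def quantize_and_run_length(image, wavelet="bior4.4", level=4, step=10.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)         # all coefficients, one array
    q = np.round(flat / step).astype(int).ravel()  # uniform scalar quantization
    symbols, i = [], 0
    while i < q.size:
        if q[i] == 0:
            j = i
            while j < q.size and q[j] == 0:
                j += 1
            symbols.append((0, j - i))             # a run of zeros -> one symbol
            i = j
        else:
            symbols.append((int(q[i]), 1))
            i += 1
    return q.size, symbols                         # symbols feed a Huffman coder

x = np.linspace(0.0, 1.0, 128)
image = 100.0 * np.outer(np.sin(4 * np.pi * x), np.cos(4 * np.pi * x))
n_coeffs, symbols = quantize_and_run_length(image)
print(f"{n_coeffs} coefficients -> {len(symbols)} run-length symbols")
```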

  18. Digital watermarking algorithm based on HVS in wavelet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Qiuhong; Xia, Ping; Liu, Xiaomei

    2013-10-01

    As a new technique used to protect the copyright of digital productions, the digital watermark technique has drawn extensive attention. A digital watermarking algorithm based on the discrete wavelet transform (DWT) is presented according to human visual properties, and several attack analyses are given. Experimental results show that the watermarking scheme proposed in this paper is invisible and robust to cropping, and also has good robustness to cutting, compression, filtering, and noise addition.

  19. An Evolved Wavelet Library Based on Genetic Algorithm

    PubMed Central

    Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.

    2014-01-01

    As the size of captured images increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of the transmission channels and preserves the image resolution without considerable loss in image quality. Many conventional image compression algorithms use the wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the processes of quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image except the edge areas and the other for the portions near the edges (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, we introduce a new shuffling operator to prevent this effect. The GA used to evolve the filter coefficients primarily focuses on maximizing the peak signal-to-noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform existing methods by a 0.31 dB improvement in average PSNR and a 0.39 dB improvement in maximum PSNR. PMID:25405225

  20. The decoding method based on wavelet image En vector quantization

    NASA Astrophysics Data System (ADS)

    Liu, Chun-yang; Li, Hui; Wang, Tao

    2013-12-01

    With the rapid progress of internet technology, large-scale integrated circuits and computer technology, digital image processing technology has developed greatly. Vector quantization plays a very important role in digital image compression. It has advantages over scalar quantization, including a higher compression ratio and a simple image decoding algorithm. Vector quantization has therefore been widely used in many practical fields. This paper combines the wavelet analysis method with the En vector quantization encoder and tests the result on a standard image. The experimental results show a large improvement in PSNR compared with the LBG algorithm.

  1. Neural network wavelet technology: A frontier of automation

    NASA Technical Reports Server (NTRS)

    Szu, Harold

    1994-01-01

    Neural networks are an outgrowth of interdisciplinary studies concerning the brain. These studies are guiding the field of Artificial Intelligence towards the so-called 6th Generation Computer. Enormous amounts of resources have been poured into R&D. Wavelet Transforms (WT) have replaced Fourier Transforms (FT) in wideband transient cases since the discovery of wavelets in 1985. The list of successful applications includes the following: earthquake prediction; radar identification; speech recognition; stock market forecasting; FBI fingerprint image compression; and telecommunication ISDN data compression.

  2. Optimized discrete wavelet transforms in the cubed sphere with the lifting scheme—implications for global finite-frequency tomography

    NASA Astrophysics Data System (ADS)

    Chevrot, Sébastien; Martin, Roland; Komatitsch, Dimitri

    2012-12-01

    Wavelets are extremely powerful to compress the information contained in finite-frequency sensitivity kernels and tomographic models. This interesting property opens the perspective of reducing the size of global tomographic inverse problems by one to two orders of magnitude. However, introducing wavelets into global tomographic problems raises the problem of computing fast wavelet transforms in spherical geometry. Using a Cartesian cubed sphere mapping, which grids the surface of the sphere with six blocks or 'chunks', we define a new algorithm to implement fast wavelet transforms with the lifting scheme. This algorithm is simple and flexible, and can handle any family of discrete orthogonal or bi-orthogonal wavelets. Since wavelet coefficients are local in space and scale, aliasing effects resulting from a parametrization with global functions such as spherical harmonics are avoided. The sparsity of tomographic models expanded in wavelet bases implies that it is possible to exploit the power of compressed sensing to retrieve Earth's internal structures optimally. This approach involves minimizing a combination of a ℓ2 norm for data residuals and a ℓ1 norm for model wavelet coefficients, which can be achieved through relatively minor modifications of the algorithms that are currently used to solve the tomographic inverse problem.
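
    For reference, a single lifting step is short enough to sketch in full; the Haar case below (split, predict, update) is the simplest instance of the scheme, not the particular filters used on the cubed sphere:

```python
import numpy as np

def haar_lift_forward(x):
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even          # predict odd samples from even neighbors
    approx = even + detail / 2   # update so the approximation keeps the mean
    return approx, detail

def haar_lift_inverse(approx, detail):
    even = approx - detail / 2   # undo the update
    odd = detail + even          # undo the prediction
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.arange(8, dtype=float)
a, d = haar_lift_forward(x)
assert np.allclose(haar_lift_inverse(a, d), x)  # lifting inverts exactly
```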

  3. Experimental Studies on a Compact Storage Scheme for Wavelet-based Multiresolution Subregion Retrieval

    NASA Technical Reports Server (NTRS)

    Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.

    1996-01-01

    Wavelet transforms, when combined with quantization and a suitable encoding, can be used to compress images effectively. In order to use them in image library systems, a compact storage scheme for quantized wavelet coefficient data must be developed with support for fast subregion retrieval. We have designed such a scheme, and in this paper we provide experimental studies demonstrating that it achieves good image compression ratios while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.

  4. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high-pass filter in the frequency domain. The process of dilating this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation for using wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
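
    A sketch of the underlying idea in NumPy/PyWavelets terms (an illustration, not the source code the paper provides): positions where detail coefficients are large are where a generated grid should cluster points.

        import numpy as np
        import pywt

        def refinement_mask(signal, wavelet="db4", level=4, tol=1e-3):
            """Flag locations whose wavelet detail magnitude exceeds `tol`."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            mask = np.zeros(len(signal), dtype=bool)
            for detail in coeffs[1:]:  # coarse-to-fine detail bands
                # Map each signal position to its nearest coefficient.
                idx = np.minimum((np.arange(len(signal)) * len(detail)) // len(signal),
                                 len(detail) - 1)
                mask |= np.abs(detail[idx]) > tol
            return mask  # True where the grid should be refined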

  5. A generalized wavelet extrema representation

    SciTech Connect

    Lu, Jian; Lades, M.

    1995-10-01

    The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, peaks, and valleys at multiple scales. Such a representation is shown to be stable: the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in the reconstructed signal.

  6. BOOK REVIEW: The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    NASA Astrophysics Data System (ADS)

    Ng, J.; Kingsbury, N. G.

    2004-02-01

    wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this and the subsequent chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. The treatment on wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. The

  7. Mother wavelets for complex wavelet transform derived by Einstein-Podolsky-Rosen entangled state representation.

    PubMed

    Fan, Hong-Yi; Lu, Hai-Liang

    2007-03-01

    The Einstein-Podolsky-Rosen entangled state representation is applied to studying the admissibility condition of mother wavelets for complex wavelet transforms, which leads to a family of new mother wavelets. Mother wavelets thus are classified as the Hermite-Gaussian type for real wavelet transforms and the Laguerre-Gaussian type for the complex case. PMID:17392919
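
    For reference, the admissibility condition being studied is the standard one (written here in LaTeX notation):

        C_\psi \,=\, \int_{-\infty}^{\infty} \frac{|\hat{\psi}(\omega)|^{2}}{|\omega|}\, d\omega \,<\, \infty ,

    which, for an integrable mother wavelet \psi, implies the zero-mean property \hat{\psi}(0) = \int \psi(t)\, dt = 0.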

  8. Wavelet periodicity detection algorithms

    NASA Astrophysics Data System (ADS)

    Benedetto, John J.; Pfander, Goetz E.

    1998-10-01

    This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we present a fast method aimed at detecting periodic behavior hidden in noisy data. The method is composed of three steps: (1) non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest; (2) using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities; (3) we introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect the occurrence and period of these periodicities. The algorithm is formulated to allow real-time implementation. Our procedure is generally applicable to detecting locally periodic components in signals s which can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection, where we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise; in the case of EEG data and for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low-frequency rhythms. Periodicity detection has other applications, including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.

  9. Wavelets and spacetime squeeze

    NASA Technical Reports Server (NTRS)

    Han, D.; Kim, Y. S.; Noz, Marilyn E.

    1993-01-01

    It is shown that the wavelet is the natural language for the Lorentz-covariant description of localized light waves. A model for covariant superposition is constructed for light waves with different frequencies. It is therefore possible to construct a wave function for light waves carrying a covariant probability interpretation. It is shown that the time-energy uncertainty relation (Δt)(Δω) ≈ 1 for light waves is a Lorentz-invariant relation. The connection between photons and localized light waves is examined critically.

  10. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
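
    A minimal sketch of the shrinkage-based segregation step, assuming NumPy/PyWavelets; the single magnitude threshold here is a stand-in for the empirical HVS model the paper uses:

        import numpy as np
        import pywt

        def segregate(image, wavelet="db2", level=3, edge_thresh=30.0):
            """Split DWT coefficients into local means, edges, and texture."""
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
            means = coeffs[0]                 # approximation = local means
            edges, texture = [], []
            for cH, cV, cD in coeffs[1:]:
                for band in (cH, cV, cD):
                    big = np.abs(band) > edge_thresh
                    edges.append(np.where(big, band, 0.0))    # -> scalar quantizer
                    texture.append(np.where(big, 0.0, band))  # -> vector quantizer
            return means, edges, texture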

  11. An Introduction to Wavelet Theory and Analysis

    SciTech Connect

    Miner, N.E.

    1998-10-01

    This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier transform and short-time Fourier transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.

  12. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the wavelet basis employed in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner using a Lagrangian rate-distortion optimization technique, as sketched below. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
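
    The per-region predictor choice reduces to a standard Lagrangian rate-distortion decision. A sketch of that rule; the `residual_energy` and `signalling_bits` methods are hypothetical stand-ins for the paper's transform and rate model:

        def pick_predictor(region, predictors, lam):
            """Choose the predictor minimizing J = D + lambda * R for one region."""
            best, best_cost = None, float("inf")
            for p in predictors:
                distortion = p.residual_energy(region)  # D: coefficient energy (assumed API)
                rate = p.signalling_bits(region)        # R: side-information bits (assumed API)
                cost = distortion + lam * rate
                if cost < best_cost:
                    best, best_cost = p, cost
            return best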

  13. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is on par with SPIHT. Furthermore, it is observed that SLCCA generally performs best on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  14. Wavelet networks for face processing

    NASA Astrophysics Data System (ADS)

    Krüger, V.; Sommer, G.

    2002-06-01

    Wavelet networks (WNs) were introduced in 1992 as a combination of artificial neural radial basis function (RBF) networks and wavelet decomposition. Since then, however, WNs have received only a little attention. We believe that the potential of WNs has been generally underestimated. WNs have the advantage that the wavelet coefficients are directly related to the image data through the wavelet transform. In addition, the parameters of the wavelets in the WNs are subject to optimization, which results in a direct relation between the represented function and the optimized wavelets, leading to considerable data reduction (thus making subsequent algorithms much more efficient) as well as to wavelets that can be used as an optimized filter bank. In our study we analyze some WN properties and highlight their advantages for object representation purposes. We then present a series of results of experiments in which we used WNs for face tracking. We exploit the efficiency that is due to data reduction for face recognition and face-pose estimation by applying the optimized-filter-bank principle of the WNs.

  15. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  16. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense that convergence is not independent of the mesh size. One reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth variation. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
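
    A small numerical illustration of the observation (not the preconditioner construction itself), assuming NumPy/PyWavelets: the dense inverse of a 1D elliptic model matrix becomes very sparse after wavelet transformation and thresholding.

        import numpy as np
        import pywt

        n = 256
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian model
        Ainv = np.linalg.inv(A)                               # dense but piecewise smooth

        coeffs = pywt.wavedec2(Ainv, "db4", level=4)
        arr, _ = pywt.coeffs_to_array(coeffs)
        tol = 1e-4 * np.abs(arr).max()
        kept = np.count_nonzero(np.abs(arr) > tol)
        print(f"coefficients kept: {kept}/{n * n} = {kept / (n * n):.1%}")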

  17. Large Scale Isosurface Bicubic Subdivision-Surface Wavelets for Representation and Visualization

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-01-05

    We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally-skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.

  18. Nonlinear multiscale wavelet diffusion for speckle suppression and edge enhancement in ultrasound images.

    PubMed

    Yue, Yong; Croitoru, Mihai M; Bidani, Akhil; Zwischenberger, Joseph B; Clark, John W

    2006-03-01

    This paper introduces a novel nonlinear multiscale wavelet diffusion method for ultrasound speckle suppression and edge enhancement. This method is designed to utilize the favorable denoising properties of two frequently used techniques: the sparsity and multiresolution properties of the wavelet, and the iterative edge enhancement feature of nonlinear diffusion. With fully exploited knowledge of speckle image models, the edges of images are detected using normalized wavelet modulus. Relying on this feature, both the envelope-detected speckle image and the log-compressed ultrasonic image can be directly processed by the algorithm without need for additional preprocessing. Speckle is suppressed by employing the iterative multiscale diffusion on the wavelet coefficients. With a tuning diffusion threshold strategy, the proposed method can improve the image quality for both visualization and auto-segmentation applications. We validate our method using synthetic speckle images and real ultrasonic images. Performance improvement over other despeckling filters is quantified in terms of noise suppression and edge preservation indices. PMID:16524086
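
    A minimal sketch of the edge-detection ingredient, assuming NumPy/PyWavelets; the normalized modulus of the first-level detail coefficients is large along edges and small in speckle-dominated regions, which is where a diffusion step would smooth:

        import numpy as np
        import pywt

        def wavelet_modulus(image, wavelet="haar"):
            """Normalized wavelet modulus sqrt(cH^2 + cV^2) as an edge map."""
            _, (cH, cV, _) = pywt.dwt2(image.astype(float), wavelet)
            modulus = np.hypot(cH, cV)
            return modulus / (modulus.max() + 1e-12)  # ~1 near edges, ~0 elsewhere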

  19. An overview of the quantum wavelet transform, focused on earth science applications

    NASA Astrophysics Data System (ADS)

    Shehab, O.; LeMoigne, J.; Lomonaco, S.; Halem, M.

    2015-12-01

    Registering the images from the MODIS system and the OCO-2 satellite is currently done by classical image registration techniques. One such technique is wavelet transformation. Besides image registration, wavelet transformation is also used in other areas of earth science, for example in processing and compressing signal variations. In this talk, we investigate the applicability of a few quantum wavelet transform algorithms for performing image registration on the MODIS and OCO-2 data. Most of the known quantum wavelet transform algorithms are data agnostic. We investigate their applicability in transforming Flexible Representation for Quantum Images. Similarly, we also investigate the applicability of the algorithms in signal variation analysis. We also investigate the transformation of the models into pseudo-Boolean functions to implement them on commercially available quantum annealing computers, such as the D-Wave computer located at NASA Ames.

  20. Generalized orthogonal wavelet phase reconstruction.

    PubMed

    Axtell, Travis W; Cristi, Roberto

    2013-05-01

    Phase reconstruction is used for feedback control in adaptive optics systems. To achieve performance metrics for high actuator density or with limited processing capabilities on spacecraft, a wavelet signal processing technique is advantageous. Previous derivations of this technique have been limited to the Haar wavelet. This paper derives the relationship and algorithms to reconstruct phase with O(n) computational complexity for wavelets with the orthogonal property. This has additional benefits for performance with noise in the measurements. We also provide details on how to handle the boundary condition for telescope apertures. PMID:23695316

  1. Peak finding using biorthogonal wavelets

    SciTech Connect

    Tan, C.Y.

    2000-02-01

    The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigations) wavelets when used for peak finding in noisy data. They will show that in this instance, their filters perform much better than the FBI wavelets.

  2. Birdsong Denoising Using Wavelets.

    PubMed

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  3. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391

  4. Joint wavelet-based coding and packetization for video transport over packet-switched networks

    NASA Astrophysics Data System (ADS)

    Lee, Hung-ju

    1996-02-01

    In recent years, the application of wavelet theory to image, audio, and video compression has been extensively studied. However, pursuing compression ratio alone without considering the underlying networking systems is unrealistic, especially for multimedia applications over networks. In this paper, we present an integrated approach which attempts to preserve the advantages of wavelet-based image coding while providing a degree of robustness to lost packets over packet-switched networks. Two different packetization schemes, called the intrablock-oriented (IAB) and interblock-oriented (IRB) schemes, in conjunction with wavelet-based coding, are presented. Our approach is evaluated under two different packet loss models with various packet loss probabilities through simulations driven by real video sequences.

  5. Wavelet packets feasibility study for the design of an ECG compressor.

    PubMed

    Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Godino-Llorente, Juan Ignacio; Barner, Kenneth E

    2007-04-01

    Most of the recent electrocardiogram (ECG) compression approaches developed with the wavelet transform are implemented using the discrete wavelet transform. Conversely, wavelet packets (WP) are not extensively used, although they are an adaptive decomposition for representing signals. In this paper, we present a thresholding-based method to encode ECG signals using WP. The design of the compressor has been carried out according to two main goals: (1) the scheme should be simple enough to allow real-time implementation; (2) quality: the reconstructed signal should be as similar as possible to the original signal. The proposed scheme is versatile in that neither QRS detection nor a priori signal information is required; as such, it can be applied to any ECG. Results show that WP perform efficiently and can now be considered as an alternative in ECG compression applications. PMID:17405386
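
    A minimal sketch of wavelet-packet thresholding for one ECG segment, assuming NumPy/PyWavelets; the relative threshold is illustrative, not the paper's tuned value:

        import numpy as np
        import pywt

        def compress_ecg(signal, wavelet="db4", level=4, rel_thresh=0.05):
            """Zero small wavelet-packet coefficients, then reconstruct."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                                    mode="symmetric", maxlevel=level)
            nodes = wp.get_level(level, order="natural")
            peak = max(np.max(np.abs(n.data)) for n in nodes)
            for n in nodes:
                # Surviving coefficients are what a real coder would encode.
                n.data = pywt.threshold(n.data, rel_thresh * peak, mode="hard")
            return wp.reconstruct(update=True)[:len(signal)]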

  6. A signal invariant wavelet function selection algorithm.

    PubMed

    Garg, Girisha

    2016-04-01

    This paper addresses the problem of mother wavelet selection for wavelet signal processing in feature extraction and pattern recognition. The problem is formulated as an optimization criterion, where a wavelet library is defined using a set of parameters to find the best mother wavelet function. For estimating the fitness function, adopted to evaluate the performance of the wavelet function, analysis of variance is used. A genetic algorithm is exploited to optimize the determination of the best mother wavelet function. For experimental evaluation, solutions for best mother wavelet selection are evaluated on various biomedical signal classification problems, where the solutions of the proposed algorithm are assessed and compared with manual trial-and-error methods. The results show that the solutions of the automated mother wavelet selection algorithm are consistent with the manual selection of wavelet functions. The algorithm is found to be invariant to the type of signals used for classification. PMID:26253283

  7. A wavelet phase filter for emission tomography

    SciTech Connect

    Olsen, E.T.; Lin, B.

    1995-07-01

    The presence of a high level of noise is characteristic of some tomographic imaging techniques such as positron emission tomography (PET). Wavelet methods can smooth out noise while preserving significant features of images. Mallat et al. proposed a wavelet-based denoising scheme exploiting wavelet modulus maxima, but the scheme is sensitive to noise. In this study, the authors explore the properties of the wavelet phase, with a focus on the reconstruction of emission tomography images. Specifically, they show that the wavelet phase of regular Poisson noise under a Haar-type wavelet transform converges in distribution to a random variable uniformly distributed on [0, 2π). They then propose three wavelet-phase-based denoising schemes which exploit this property: edge tracking, local phase variance thresholding, and scale phase variation thresholding. Some numerical results are also presented. The numerical experiments indicate that wavelet phase techniques show promise for wavelet-based denoising methods.

  8. Heart Disease Detection Using Wavelets

    NASA Astrophysics Data System (ADS)

    González S., A.; Acosta P., J. L.; Sandoval M., M.

    2004-09-01

    We develop a wavelet-based method to obtain standardized gray-scale charts of both healthy hearts and hearts suffering from left ventricular hypertrophy. The hypothesis that early heart malfunction can be detected must be tested by comparing the wavelet analysis of the corresponding ECG with the limiting cases. Several important parameters, such as age, sex, and electrolytic changes, must be taken into account.

  9. Wavelet analysis in virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Greenblum, Sharon; Li, Jiang; Huang, Adam; Summers, Ronald M.

    2006-03-01

    The computed tomographic colonography (CTC) computer-aided detection (CAD) program is a new method in development to detect colon polyps in virtual colonoscopy. While high sensitivity is consistently achieved, additional features are desired to increase specificity. In this paper, a wavelet analysis was applied to CTCCAD outputs in an attempt to filter out false positive detections. 52 CTCCAD detection images were obtained using a screen capture application. 26 of these images were real polyps, confirmed by optical colonoscopy, and 26 were false positive detections. A discrete wavelet transform of each image was computed with the MATLAB wavelet toolbox using the Haar wavelet at levels 1-5 in the horizontal, vertical and diagonal directions. From the resulting wavelet coefficients at levels 1-3 for all directions, a 72-element feature vector was obtained for each image, consisting of descriptive statistics such as mean, variance, skew, and kurtosis at each level and orientation, as well as error statistics based on a linear predictor of neighboring wavelet coefficients. The vectors for each of the 52 images were then run through a support vector machine (SVM) classifier using ten-fold cross-validation training to determine its efficiency in distinguishing polyps from false positives. The SVM results showed 100% sensitivity and 51% specificity in correctly identifying the status of detections. If this technique were added to the filtering process of the CTCCAD polyp detection scheme, the number of false positive results could be reduced significantly.
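
    A sketch of the subband-statistics part of such a feature vector, written in Python with NumPy/PyWavelets rather than the authors' MATLAB toolbox; the linear-predictor error statistics that complete the 72-element vector are omitted:

        import numpy as np
        import pywt

        def subband_features(image, wavelet="haar", levels=3):
            """Mean, variance, skewness, kurtosis per orientation and level."""
            feats = []
            coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
            for cH, cV, cD in coeffs[1:]:            # detail subbands per level
                for band in (cH, cV, cD):            # H, V, D orientations
                    b = band.ravel()
                    m, s = b.mean(), b.std() + 1e-12
                    feats += [m, b.var(),
                              np.mean(((b - m) / s) ** 3),   # skewness
                              np.mean(((b - m) / s) ** 4)]   # kurtosis
            return np.asarray(feats)  # 3 levels x 3 orientations x 4 stats = 36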

  10. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
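
    The Stokes computation from the six intensity measurements is standard; a minimal NumPy sketch:

        import numpy as np

        def stokes(i0, i45, i90, i135, irc, ilc):
            """Stokes parameters from polarimetric intensity images."""
            s0 = i0 + i90                      # total intensity
            s1 = i0 - i90                      # horizontal vs. vertical
            s2 = i45 - i135                    # +45 vs. -45 degrees
            s3 = irc - ilc                     # right vs. left circular
            dolp = np.hypot(s1, s2) / (s0 + 1e-12)  # degree of linear polarization
            return s0, s1, s2, s3, dolp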

  11. Tests for Wavelets as a Basis Set

    NASA Astrophysics Data System (ADS)

    Baker, Thomas; Evenbly, Glen; White, Steven

    A wavelet transformation is a special type of filter usually reserved for image processing and other applications. We develop metrics to evaluate wavelets for general problems on test one-dimensional systems. The goal is to eventually use a wavelet basis in electronic structure calculations. We compare a variety of orthogonal wavelets such as coiflets, symlets, and Daubechies wavelets. We also evaluate a new type of orthogonal wavelet with dilation factor three, which is both symmetric and compact in real space. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC008696.

  12. General inversion formulas for wavelet transforms

    NASA Astrophysics Data System (ADS)

    Holschneider, Matthias

    1993-09-01

    This article is the continuation of a series of articles about group theory and wavelet analysis [A. Grossmann, J. Morlet, and T. Paul, J. Math. Phys. 26, 2473 (1985)]. As is well known in the case of the affine group, the reconstruction wavelet and the analyzing wavelet need not be identical. In this article it is shown that this holds for arbitrary groups. In addition it is shown that the wavelet transform may be inverted even for nonadmissible analyzing wavelets. Accordingly, the image of the wavelet transform can be characterized by many different reproducing kernels.

  13. An Attack on Wavelet Tree Shuffling Encryption Schemes

    NASA Astrophysics Data System (ADS)

    Assegie, Samuel; Salama, Paul; King, Brian

    With the ubiquity of the internet and advances in technology, especially digital consumer electronics, demand for online multimedia services is ever increasing. While it is possible to achieve a great reduction in the bandwidth utilization of multimedia data such as image and video through compression, security still remains a great concern. Traditional cryptographic algorithms/systems for data security are often not fast enough to process the vast amounts of data generated by multimedia applications under real-time constraints. Selective encryption is a new scheme for multimedia content protection. It involves encrypting only a portion of the data to reduce computational complexity (the amount of data to encrypt) while preserving a sufficient level of security. To achieve this, many selective encryption schemes have been presented in the literature. One of them is wavelet tree shuffling. In this paper we assess the security of a wavelet tree shuffling encryption scheme.

  14. Adaptive wavelet collocation method simulations of Rayleigh-Taylor instability

    NASA Astrophysics Data System (ADS)

    Reckinger, S. J.; Livescu, D.; Vasilyev, O. V.

    2010-12-01

    Numerical simulations of single-mode, compressible Rayleigh-Taylor instability are performed using the adaptive wavelet collocation method (AWCM), which utilizes wavelets for dynamic grid adaptation. Due to the physics-based adaptivity and direct error control of the method, AWCM is ideal for resolving the wide range of scales present in the development of the instability. The problem is initialized consistent with the solutions from linear stability theory. Non-reflecting boundary conditions are applied to prevent the contamination of the instability growth by pressure waves created at the interface. AWCM is used to perform direct numerical simulations that match the early-time linear growth, the terminal bubble velocity and a reacceleration region.

  15. Hierarchical structure analysis of interstellar clouds using nonorthogonal wavelets

    NASA Technical Reports Server (NTRS)

    Langer, William D.; Wilson, Robert W.; Anderson, Charles H.

    1993-01-01

    We introduce the use of Laplacian pyramid transforms, a form of nonorthogonal wavelets, to analyze the structure of interstellar clouds. These transforms are generally better suited for analyzing structure than orthogonal wavelets because they provide more flexibility in the structure of the encoding functions - here circularly symmetric bandpass filters. This technique is applied to CO maps of Barnard 5. In the (C-13)O maps, for example, we identify 60 different fragments and clumps, as well as several cavities, or bubbles. Many features show evidence of hierarchical structure, with most of the power in the largest wavelengths. The clumps have a more chaotic structure at small wavelengths than expected for Kolmogorov turbulence, and a mass distribution proportional to M^(-5/3). The structure analysis is consistent with a picture in which gravity, energy injection, compressible turbulence, and coalescence play an important role in the dynamics of B5.
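
    A minimal NumPy sketch of the band-pass decomposition idea behind a Laplacian pyramid (an analysis-only variant; Burt and Adelson's original subtracts the upsampled low-pass for exact reconstruction):

        import numpy as np

        def blur(img):
            """Separable 1-4-6-4-1 binomial low-pass with reflected borders."""
            k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
            h, w = img.shape
            p = np.pad(img, 2, mode="reflect")
            v = sum(k[i] * p[i:i + h, 2:-2] for i in range(5))   # vertical pass
            p = np.pad(v, ((0, 0), (2, 2)), mode="reflect")
            return sum(k[i] * p[:, i:i + w] for i in range(5))   # horizontal pass

        def laplacian_pyramid(img, levels=4):
            """Each layer keeps one octave of structure (clumps of one size)."""
            img = img.astype(float)
            layers = []
            for _ in range(levels):
                low = blur(img)
                layers.append(img - low)   # band-pass layer
                img = low[::2, ::2]        # decimate for the next octave
            layers.append(img)             # residual low-pass
            return layers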

  16. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published standard specifies only the decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a few bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that seems to make more sense theoretically. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.

  17. Interactive Display of Surfaces Using Subdivision Surfaces and Wavelets

    SciTech Connect

    Duchaineau, M A; Bertram, M; Porumbescu, S; Hamann, B; Joy, K I

    2001-10-03

    Complex surfaces and solids are produced by large-scale modeling and simulation activities in a variety of disciplines. Productive interaction with these simulations requires that these surfaces or solids be viewable at interactive rates, yet many of them can contain hundreds of millions of polygons/polyhedra. Interactive display of these objects requires compression techniques to minimize storage, and fast view-dependent triangulation techniques to drive the graphics hardware. In this paper, we review recent advances in subdivision-surface wavelet compression and optimization that can be used to provide a framework for both compression and triangulation. These techniques can be used to produce suitable approximations of complex surfaces of arbitrary topology and to determine suitable triangulations for display. The techniques can be used in a variety of applications in computer graphics, computer animation and visualization.

  18. Spherical wavelet transform: linking global seismic tomography and imaging

    NASA Astrophysics Data System (ADS)

    Pan, J.

    2001-12-01

    Each year, numerous seismic tomographic images are published based on new parameterizations, damping schemes or datasets. Though people generally agree on the longer-wavelength seismic structures, large discrepancies still exist among the various models. Normally the data are noisy, so the inverse problem is often ill-conditioned. The sampling rate may be sufficient to resolve long-wavelength structures when we parameterize the earth to a low harmonic order. However, higher-order signals (slabs, plume-like structures, and local seismic velocity anomalies (SVA)) on a global scale remain under-sampled. Finer discretization of the model space increases the problem size dramatically but does not alleviate the nature of the problem. The main challenge thus is to find an efficient representation of the model space to solve for the lower- and higher-degree SVAs simultaneously. Spherical wavelets are a good choice because of their compact support (localized) in both the spatial and frequency domains. If SVAs are viewed as an image, they consist of smoothly varying signals superposed with small-scale local changes, and they can be greatly compressed and better represented using spherical wavelets. By mapping the model parameters into a nested multi-resolution analysis (MRA) space, the signals become comparable in size, and stable solutions can therefore be achieved at every level of resolution without introducing subjective damping. The efficiency of wavelets and MRA in denoising and compressing signals can be used to reduce the problem size and eliminate the effects of noisy data. This new algorithm can achieve better resolving power for 2D and 3D seismic tomography by linking image processing with inverse theory. Advances in spherical wavelets enable the introduction of wavelet analysis and a new MRA parameterization into global tomography studies. In this paper, we present the new inversion method based on the spherical wavelet transform. An application to 2D surface wave

  19. An ECG signal compressor based on the selection of optimal threshold levels of discrete wavelet transform coefficients.

    PubMed

    Al-Ajlouni, A F; Abo-Zahhad, M; Ahmed, S M; Schilling, R J

    2008-01-01

    Compression of electrocardiography (ECG) is necessary for efficient storage and transmission of the digitized ECG signals. Discrete wavelet transform (DWT) has recently emerged as a powerful technique for ECG signal compression due to its multi-resolution signal decomposition and locality properties. This paper presents an ECG compressor based on the selection of optimum threshold levels of DWT coefficients in different subbands that achieve maximum data volume reduction while preserving the significant signal morphology features upon reconstruction. First, the ECG is wavelet transformed into m subbands and the wavelet coefficients of each subband are thresholded using an optimal threshold level. Thresholding removes excessively small features and replaces them with zeroes. The threshold levels are defined for each signal so that the bit rate is minimized for a target distortion or, alternatively, the distortion is minimized for a target compression ratio. After thresholding, the resulting significant wavelet coefficients are coded using multi embedded zero tree (MEZW) coding technique. In order to assess the performance of the proposed compressor, records from the MIT-BIH Arrhythmia Database were compressed at different distortion levels, measured by the percentage rms difference (PRD), and compression ratios (CR). The method achieves good CR values with excellent reconstruction quality that compares favourably with various classical and state-of-the-art ECG compressors. Finally, it should be noted that the proposed method is flexible in controlling the quality of the reconstructed signals and the volume of the compressed signals by establishing a target PRD and a target CR a priori, respectively. PMID:19005960
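
    A minimal sketch of threshold selection for a target compression ratio, plus the PRD metric, assuming NumPy/PyWavelets; a single global threshold stands in for the paper's per-subband optimal thresholds:

        import numpy as np
        import pywt

        def compress_to_ratio(ecg, cr=10.0, wavelet="bior4.4", level=5):
            """Keep roughly len(ecg)/cr of the largest DWT coefficients."""
            coeffs = pywt.wavedec(ecg.astype(float), wavelet, level=level)
            flat = np.concatenate([np.abs(c) for c in coeffs])
            keep = max(1, int(len(flat) / cr))
            thresh = np.sort(flat)[-keep]          # keep-th largest magnitude
            coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
            rec = pywt.waverec(coeffs, wavelet)[:len(ecg)]
            prd = 100.0 * np.linalg.norm(ecg - rec) / np.linalg.norm(ecg)
            return rec, prd  # reconstruction and percentage rms difference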

  20. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision-surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero-set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction of complex high-genus surfaces.

  1. Group-normalized wavelet packet signal processing

    NASA Astrophysics Data System (ADS)

    Shi, Zhuoer; Bao, Zheng

    1997-04-01

    Since traditional wavelet and wavelet packet coefficients do not exactly represent the strength of signal components at a given time(space)-frequency tiling, the group-normalized wavelet packet transform (GNWPT) is presented for nonlinear signal filtering and extraction from clutter or noise, together with a space(time)-frequency masking technique. An extended F-entropy improves the performance of the GNWPT. For perception-based image processing, soft-logic masking is emphasized to remove aliasing while preserving edges. Lawton's method for constructing complex-valued wavelets is extended to generate complex-valued compactly supported wavelet packets for radar signal extraction. These wavelet packets are symmetric and unitarily orthogonal. Well-suited wavelet packets are chosen based on an analysis of their time-frequency characteristics. For real-valued signal processing, such as images and ECG signals, the compactly supported spline or bi-orthogonal wavelet packets are preferred for their excellent de-noising and filtering qualities.

  2. A Mellin transform approach to wavelet analysis

    NASA Astrophysics Data System (ADS)

    Alotta, Gioacchino; Di Paola, Mario; Failla, Giuseppe

    2015-11-01

    The paper proposes a fractional calculus approach to continuous wavelet analysis. Upon introducing a Mellin transform expression of the mother wavelet, it is shown that the wavelet transform of an arbitrary function f(t) can be given a fractional representation involving a suitable number of Riesz integrals of f(t), and corresponding fractional moments of the mother wavelet. This result serves as a basis for an original approach to wavelet analysis of linear systems under arbitrary excitations. In particular, using the proposed fractional representation for the wavelet transform of the excitation, it is found that the wavelet transform of the response can readily be computed by a Mellin transform expression, with fractional moments obtained from a set of algebraic equations whose coefficient matrix applies for any scale a of the wavelet transform. The robustness and computational efficiency of the proposed approach are demonstrated in the paper.

  3. Wavelet correlations in the p model

    SciTech Connect

    Greiner, M. (Institut fuer Theoretische Physik, Justus Liebig Universitaet, 35392 Giessen); Lipa, P.; Carruthers, P.

    1995-03-01

    We suggest applying the concept of wavelet transforms to the study of correlations in multiparticle physics. Both the usual correlation functions as well as the wavelet transformed ones are calculated for the p model, which is a simple but tractable random cascade model. For this model, the wavelet transform decouples correlations between fluctuations defined on different scales. The advantageous properties of factorial moments are also shared by properly defined factorial wavelet correlations.

  4. Adaptive Multilinear Tensor Product Wavelets.

    PubMed

    Weiss, Kenneth; Lindstrom, Peter

    2016-01-01

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells. PMID:26529742

  5. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize the tuning of parameters and the problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these

  6. A 1D wavelet filtering for ultrasound images despeckling

    NASA Astrophysics Data System (ADS)

    Dahdouh, Sonia; Dubois, Mathieu; Frenoux, Emmanuelle; Osorio, Angel

    2010-03-01

    The appearance of ultrasound images is characterized by speckle, shadows, signal dropout and low contrast, which makes them difficult to process and leads to a very poor signal-to-noise ratio. Therefore, for most imaging applications, a denoising step is necessary before medical imaging algorithms can be applied successfully to such images. However, due to speckle statistics, denoising and enhancing edges in these images without inducing additional blurring is a real challenge on which usual filters often fail. To deal with such problems, a large number of papers work on B-mode images, considering the noise to be purely multiplicative. Making such an assertion can be misleading because of internal pre-processing, such as log compression, performed in the ultrasound device. To address these questions, we designed a novel filtering method based on the 1D radiofrequency signal. Indeed, since B-mode images are initially composed of 1D signals and since the log compression performed by ultrasound devices modifies the noise statistics, we decided to filter the envelope of the 1D radiofrequency signal directly, before log compression and image reconstruction, in order to preserve as much information as possible. A bi-orthogonal wavelet transform is applied to the log transform of each signal and an adaptive 1D split-and-merge-like algorithm is used to denoise the wavelet coefficients. Experiments were carried out on synthetic data sets simulated with the Field II simulator, and the results show that our filter outperforms classical speckle filtering methods such as the Lee, non-local means or SRAD filters.

  7. Discrete wavelet transform core for image processing applications

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas E.; Carbone, Richard

    2005-02-01

    This paper presents a flexible hardware architecture for performing the Discrete Wavelet Transform (DWT) on a digital image. The proposed architecture uses a variation of the lifting scheme technique and provides advantages that include small memory requirements, fixed-point arithmetic implementation, and a small number of arithmetic computations. The DWT core may be used for image processing operations, such as denoising and image compression. For example, the JPEG2000 still image compression standard uses the Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWT for lossless and lossy image compression respectively. Simple wavelet image denoising techniques resulted in improved images up to 27 dB PSNR. The DWT core is modeled using MATLAB and VHDL. The VHDL model is synthesized to a Xilinx FPGA to demonstrate hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons. The execution time for performing both DWTs is nearly identical at approximately 14 clock cycles per image pixel for one level of DWT decomposition. The hardware area generated for the CDF 5/3 is around 15,000 gates using only 5% of the Xilinx FPGA hardware area, at 2.185 MHz max clock speed and 24 mW power consumption.
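
    For reference, the CDF 5/3 transform used for lossless coding has a simple integer lifting factorization (the form used in JPEG2000). A NumPy sketch assuming an even-length signal with symmetric edge handling, not the paper's VHDL:

        import numpy as np

        def cdf53_forward(x):
            """Integer-to-integer CDF 5/3 lifting (one decomposition level)."""
            x = np.asarray(x, dtype=np.int64)
            even, odd = x[0::2], x[1::2]
            right = np.append(even[1:], even[-1])      # symmetric extension
            d = odd - ((even + right) >> 1)            # predict: high-pass band
            left = np.append(d[0], d[:-1])
            s = even + ((left + d + 2) >> 2)           # update: low-pass band
            return s, d

        def cdf53_inverse(s, d):
            """Exactly invert the lifting steps in reverse order."""
            left = np.append(d[0], d[:-1])
            even = s - ((left + d + 2) >> 2)
            right = np.append(even[1:], even[-1])
            odd = d + ((even + right) >> 1)
            x = np.empty(even.size * 2, dtype=np.int64)
            x[0::2], x[1::2] = even, odd
            return x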

  8. Wavelet approach to accelerator problems. 2: Metaplectic wavelets

    SciTech Connect

    Fedorova, A.; Zeitlin, M.; Parsa, Z.

    1997-05-01

    This is the second part of a series of talks in which the authors present applications of wavelet analysis to polynomial approximations for a number of accelerator physics problems. According to the orbit method, and by using a construction from geometric quantization theory, they construct the symplectic and Poisson structures associated with generalized wavelets by using the metaplectic structure and the corresponding polarization. The key point is the consideration of the semidirect product of the Heisenberg group and the metaplectic group as a subgroup of the automorphism group of the dual of the symplectic space, which consists of elements acting by affine transformations.

  9. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  10. Adaptive wavelets and relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hirschmann, Eric; Neilsen, David; Anderson, Matthe; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets. These wavelets provide an adaptive implementation for simulations in multiple dimensions. The wavelet coefficients provide a measure of the local approximation error of the solution and place collocation points in locations naturally adapted to the flow, while providing the expected conservation. We present demanding 1D and 2D tests, including the Kelvin-Helmholtz and Rayleigh-Taylor instabilities. Finally, we consider an outgoing blast wave that models a GRB outflow.
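
    The refinement rule at the heart of such schemes can be sketched in 1D Python, assuming linear interpolating wavelets (the production solver is of course far more elaborate): the interpolation error at each midpoint is the wavelet coefficient, and only midpoints whose coefficient exceeds a tolerance become new collocation points.

        import numpy as np

        def refine(f, xs, tol):
            # Detail coefficient at each midpoint: the sample value minus the
            # value predicted by linear interpolation from coarse neighbours.
            mids = 0.5 * (xs[:-1] + xs[1:])
            details = np.abs(f(mids) - 0.5 * (f(xs[:-1]) + f(xs[1:])))
            return mids[details > tol]  # refine only near sharp features

        xs = np.linspace(0.0, 1.0, 17)
        front = lambda x: np.tanh(40.0 * (x - 0.5))  # steep front, like a shock
        print(refine(front, xs, tol=1e-3))           # points cluster near x = 0.5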

  11. Recent advances in wavelet technology

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    Wavelet research has been developing rapidly over the past five years, and in the academic world in particular there has been significant activity at numerous universities. In the industrial world, there have been developments at Aware, Inc., Lockheed, Martin-Marietta, TRW, Kodak, Exxon, and many others. The government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many other agencies. The literature of the past five years includes a book indexing a decade of citations on this subject, containing over 1,000 references and abstracts.

  12. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices that are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of this lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
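
    The adjacent-index idea can be illustrated with a toy parity scheme in Python; this is an illustrative stand-in, not the patented method itself. Forcing the parity of a quantizer index changes it by at most one unit, which is within the index's stated uncertainty, and each index then carries one auxiliary bit.

        def embed_bits(indices, bits):
            # Force each index's parity to equal the auxiliary bit; the
            # change is at most one quantizer unit.
            out = list(indices)
            for k, bit in enumerate(bits):
                if out[k] % 2 != bit:
                    out[k] += 1 if out[k] % 2 == 0 else -1
            return out

        def extract_bits(indices, n):
            # The auxiliary payload is just the parity of each index.
            return [idx % 2 for idx in indices[:n]]

        stego = embed_bits([12, 7, 3, 8], [1, 1, 0, 0])  # -> [13, 7, 2, 8]
        assert extract_bits(stego, 4) == [1, 1, 0, 0]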

  13. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices that are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate index representation to its final size. The efficiency of this lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  14. Visual masking in wavelet compression for JPEG-2000

    NASA Astrophysics Data System (ADS)

    Daly, Scott J.; Zeng, Wenjun; Li, Jin; Lei, Shawmin

    2000-04-01

    We describe a nonuniform quantization scheme for JPEG2000 that leverages the masking properties of the visual system, in which the visibility of distortions declines as local image energy increases. Derivatives of contrast transducer functions convey the visual threshold changes due to local image content (i.e., the mask). For any frequency region, these functions have approximately the same shape once the threshold and mask-contrast axes are normalized to the frequency's threshold. We have developed two methods that can work together to take advantage of masking. One uses a nonlinearity interposed between the visual weighting and the uniform quantization stage at the encoder; in the decoder, the inverse nonlinearity is applied before the inverse transform. The resulting image-adaptive behavior is achieved with only a small overhead (the masking table) and without adding image-assessment computations. This approach, however, underestimates masking near zero crossings within a frequency band, so an additional technique pools coefficient energy in a small local neighborhood around each coefficient of the band, doing so in a causal manner to avoid overhead. The first effect of these techniques is to improve image quality as the image becomes more complex, allowing quality gains in applications where exploiting the visual system's frequency response alone provides little advantage; a key area of improvement is low-amplitude textures, in areas such as facial skin. The second effect relates to operational attributes: for a given bit rate, the image quality is more robust against variations in image complexity.
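
    A toy Python version of the first method is sketched below. The power-law transducer and its exponent are assumptions standing in for the paper's derived transducer functions; only the encoder/decoder symmetry of the nonlinearity is the point.

        import numpy as np

        def masked_quantize(c, step, alpha=0.7):
            # Encoder: point nonlinearity (a stand-in transducer) compresses
            # large coefficients, so strong masks are quantized more coarsely.
            v = np.sign(c) * np.abs(c) ** alpha
            return np.round(v / step)

        def masked_dequantize(q, step, alpha=0.7):
            # Decoder: dequantize, then apply the inverse nonlinearity
            # before the inverse wavelet transform.
            v = q * step
            return np.sign(v) * np.abs(v) ** (1.0 / alpha)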

  15. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices that are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  16. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices that are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  17. Wavelets based on Hermite cubic splines

    NASA Astrophysics Data System (ADS)

    Cvejnová, Daniela; Černá, Dana; Finěk, Václav

    2016-06-01

    In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines have been proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse, in the sense that the number of nonzero elements in any column is bounded by a constant. A matrix-vector multiplication in adaptive wavelet methods can then be performed exactly with linear complexity for any second-order differential equation with constant coefficients. In this contribution, we briefly review these constructions and propose a new wavelet that leads to improved Riesz constants. The wavelets have four vanishing moments.

  18. Wavelet-based face verification for constrained platforms

    NASA Astrophysics Data System (ADS)

    Sellahewa, Harin; Jassim, Sabah A.

    2005-03-01

    Human identification based on facial images is one of the most challenging tasks, compared with identification based on other biometric features such as fingerprints, palm prints, or the iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power, such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few as possible of the pre-processing procedures that are often needed to deal with variations in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL subband at levels 3, 4, and 5. We also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA and ORL databases. In many cases, the wavelet-only feature vector scheme has the best performance while remaining efficient and requiring minimal pre-processing steps. The significance of these results is their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g., smartcards). Moreover, working at level 3 of the LL subband or beyond yields robustness against high-rate compression and noise interference.
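
    A minimal sketch of the wavelet-only feature extraction, in Python with PyWavelets, is shown below; the wavelet name, level, and distance threshold are illustrative assumptions rather than the paper's tuned settings.

        import numpy as np
        import pywt

        def ll_feature(img, wavelet='haar', level=3):
            # Keep only the level-3 LL (approximation) subband: a template
            # roughly 4**level times smaller than the original image.
            coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=level)
            return coeffs[0].ravel()

        def verify(img_a, img_b, threshold):
            # Accept the claimed identity when the LL features are close.
            return np.linalg.norm(ll_feature(img_a) - ll_feature(img_b)) < threshold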

  19. Iterative image coding with overcomplete complex wavelet transforms

    NASA Astrophysics Data System (ADS)

    Kingsbury, Nick G.; Reeves, Tanya

    2003-06-01

    Overcomplete transforms, such as the Dual-Tree Complex Wavelet Transform, can offer more flexible signal representations than critically-sampled transforms such as the Discrete Wavelet Transform. However, the process of selecting the optimal set of coefficients to code is much more difficult, because many different sets of transform coefficients can represent the same decoded image. We show that large numbers of transform coefficients can be set to zero without much loss of reconstruction quality by forcing compensatory changes in the remaining coefficients. We develop a system for achieving these coding aims of coefficient elimination and compensation, based on iterative projection of signals between the image domain and the transform domain, with a non-linear process (e.g. centre-clipping or quantization) applied in the transform domain. The convergence properties of such non-linear feedback loops are discussed, and several types of non-linearity are proposed and analyzed. The compression performance of the overcomplete scheme is compared with that of the standard Discrete Wavelet Transform, both objectively and subjectively, and is found to offer advantages of up to 0.65 dB in PSNR and a significant reduction in the visibility of some types of coding artifacts.
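
    The iterative projection loop can be sketched in Python as below, with PyWavelets' undecimated (stationary) wavelet transform standing in for the Dual-Tree Complex Wavelet Transform and hard thresholding playing the role of centre-clipping; parameters are illustrative.

        import numpy as np
        import pywt

        def iterative_sparsify(x, wavelet='db4', level=3, thr=0.5, n_iter=10):
            # Signal length must be divisible by 2**level for pywt.swt.
            y = np.asarray(x, dtype=float)
            for _ in range(n_iter):
                coeffs = pywt.swt(y, wavelet, level=level)  # to transform domain
                clipped = [(cA, pywt.threshold(cD, thr, mode='hard'))
                           for cA, cD in coeffs]            # kill small coefficients
                y = pywt.iswt(clipped, wavelet)             # back to signal domain
            return y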

  20. Simultaneous compression and encryption for secure real-time transmission of sensitive video

    NASA Astrophysics Data System (ADS)

    Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.

    2014-05-01

    Video compression and encryption have become essential for secure real-time video transmission. Applying both techniques simultaneously is challenging when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both encryption and compression are based on edges extracted from the high-frequency sub-bands of the wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transform, discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats reference and non-reference video frames in two different ways. The encryption algorithm uses the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms provide high compression and acceptable quality, and resist statistical and brute-force attacks with low computational cost.
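
    As an illustration of the chaotic component, a Python sketch of a logistic-map keystream follows; the seed and control parameter are assumptions, and the A5 cipher that the paper combines it with is omitted.

        import numpy as np

        def logistic_keystream(n, x0=0.6, r=3.99):
            # Iterate x_{k+1} = r * x_k * (1 - x_k) and draw one byte per step.
            x, out = x0, np.empty(n, dtype=np.uint8)
            for k in range(n):
                x = r * x * (1.0 - x)
                out[k] = int(x * 256) & 0xFF
            return out

        data = np.arange(16, dtype=np.uint8)             # e.g. significant coefficients
        cipher = data ^ logistic_keystream(data.size)    # XOR stream encryption
        assert np.array_equal(cipher ^ logistic_keystream(data.size), data)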

  1. A Wavelet Perspective on the Allan Variance.

    PubMed

    Percival, Donald B

    2016-04-01

    The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to one-half the Allan variance. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
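
    The factor-of-one-half relationship is easy to check numerically. The Python sketch below pairs a simple non-overlapped Allan estimator with a Haar MODWT-style wavelet variance; the two agree in expectation, and closely so for long white-noise records.

        import numpy as np

        def allan_variance(y, m):
            # Non-overlapped Allan variance at averaging factor m.
            n = y.size // m
            means = y[:n * m].reshape(n, m).mean(axis=1)
            return 0.5 * np.mean(np.diff(means) ** 2)

        def haar_wavelet_variance(y, m):
            # Haar MODWT-style wavelet variance at scale m: half-differences
            # of adjacent block means, taken at every shift (maximal overlap).
            t = np.arange(y.size - 2 * m + 1)
            first = np.array([y[i:i + m].mean() for i in t])
            second = np.array([y[i + m:i + 2 * m].mean() for i in t])
            return np.mean((0.5 * (second - first)) ** 2)

        y = np.random.default_rng(1).standard_normal(4096)
        print(allan_variance(y, 8), 2.0 * haar_wavelet_variance(y, 8))  # ~ equal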

  2. Foveated wavelet image quality index

    NASA Astrophysics Data System (ADS)

    Wang, Zhou; Bovik, Alan C.; Lu, Ligang; Kouloheris, Jack L.

    2001-12-01

    The human visual system (HVS) is highly non-uniform in sampling, coding, processing and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. Currently, most image quality measurement methods are designed for uniform resolution images. These methods do not correlate well with the perceived foveated image quality. Wavelet analysis delivers a convenient way to simultaneously examine localized spatial as well as frequency information. We developed a new image quality metric called foveated wavelet image quality index (FWQI) in the wavelet transform domain. FWQI considers multiple factors of the HVS, including the spatial variance of the contrast sensitivity function, the spatial variance of the local visual cut-off frequency, the variance of human visual sensitivity in different wavelet subbands, and the influence of the viewing distance on the display resolution and the HVS features. FWQI can be employed for foveated region of interest (ROI) image coding and quality enhancement. We show its effectiveness by using it as a guide for optimal bit assignment of an embedded foveated image coding system. The coding system demonstrates very good coding performance and scalability in terms of foveated objective as well as subjective quality measurement.

  3. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
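
    The progressive bit-plane stage is what makes the rate directly controllable, and it can be sketched in a few lines of Python; sign coding and the entropy coder are omitted, and the plane count is an assumption.

        import numpy as np

        def bitplanes(coeffs, n_planes=8):
            # Emit magnitude bit-planes from most to least significant.
            mags = np.abs(coeffs).astype(np.uint32)
            return [((mags >> p) & 1).astype(np.uint8)
                    for p in range(n_planes - 1, -1, -1)]

        def reconstruct(planes, signs, n_planes=8):
            # Keeping only the first k planes quantizes magnitudes to
            # multiples of 2**(n_planes - k): truncation trades fidelity
            # for compressed data volume.
            mag = np.zeros(signs.shape, dtype=np.uint32)
            for i, plane in enumerate(planes):
                mag |= plane.astype(np.uint32) << (n_planes - 1 - i)
            return signs * mag.astype(np.int64)

        c = np.array([-37, 5, 120, -8])
        coarse = reconstruct(bitplanes(c)[:4], np.sign(c))  # -> [-32, 0, 112, 0]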

  4. Uncertainty Principle and Elementary Wavelet

    NASA Astrophysics Data System (ADS)

    Bliznetsov, M.

    This paper aims to define the time-and-spectrum characteristics of the elementary wavelet. An uncertainty relation between the width of a pulse's amplitude spectrum and its time duration and extension in space is investigated. The analysis of the uncertainty relation is carried out for causal pulses with minimum-phase spectra. Amplitude spectra of elementary pulses are calculated using a modified Fourier spectral analysis. The modification of Fourier analysis is justified by the need to resolve the zero-frequency paradox in amplitude spectra calculated with standard Fourier analysis. The modified Fourier spectral analysis has the same resolution along the frequency axis, excludes physically unobservable values from time-and-spectral presentations, and determines that the Heaviside unit step function has an infinitely wide spectrum equal to 1 along the whole frequency range, while the Dirac delta function has an infinitely wide spectrum in the infinitely high frequency scope. The difference in propagation between wave and quasi-wave forms of energy motion is established from the analysis of the uncertainty relation. The velocity of a unidirectional pulse depends on the relative width of the pulse spectrum; the velocity of an oscillating pulse is constant in a given nondispersive medium. The elementary wavelet has the maximum relative spectrum width and minimum time duration among all oscillating pulses whose velocity is equal to the velocity of the causal harmonic components of the pulse spectrum. The relative width of the elementary wavelet spectrum with respect to the resonance frequency is the square root of 4/3, approximately 1.1547; with respect to the center frequency it is equal to 1. The more the relative spectrum width of a unidirectional pulse exceeds that of the elementary wavelet, the higher the propagation velocity of the unidirectional pulse. The concept of a velocity-exceeding coefficient is introduced for pulses presenting the quasi-wave form of energy

  5. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) computing, in particular, the grayness functions of graytone images. Image-data compression based on the new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.…

  6. Compressive Holography

    NASA Astrophysics Data System (ADS)

    Lim, Se Hoon

    Compressive holography estimates images from incomplete data by using sparsity priors. Compressive holography combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate data reconstruction from prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data, and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography has a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates this ill-posedness by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse-object imaging. In diffuse-object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method of increasing the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse-aperture holography is another method. Compressive sparse sampling collects most of the significant field
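
    The decompressive inference step is typically a sparsity-regularized inversion, for which a generic iterative shrinkage-thresholding (ISTA) solver is a reasonable textbook stand-in; the Python sketch below uses assumed notation and is not the dissertation's code.

        import numpy as np

        def ista(A, y, lam=0.1, n_iter=200):
            # Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by a gradient step
            # followed by soft thresholding, the classic ISTA recursion.
            x = np.zeros(A.shape[1])
            L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
            for _ in range(n_iter):
                z = x - A.T @ (A @ x - y) / L
                x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
            return x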

  7. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains, such as CCTV and the unlocking of electronic devices. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject-position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfactory after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed, which is why compression techniques such as the wavelet transform are interesting. Furthermore, this leads to a low-complexity face detection stage compatible with the limited computational resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database of 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a Raspberry Pi (model B).

  8. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte per day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to Earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye, and image compression should therefore be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness; this DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image, by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two to the realm of wavelet compression. Together these techniques allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
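
    An illustrative (not Watson's actual) version of the first technique in Python: map each 8x8 DCT basis function to a radial frequency determined by the display resolution, read a threshold off a log-parabola sensitivity model, and set each quantization step to twice the threshold amplitude. All parameter values here are assumptions.

        import numpy as np

        def dct_quant_matrix(pixels_per_degree, f_peak=4.0, k=1.0, t_min=0.02):
            # Basis (i, j) of an 8x8 DCT has horizontal/vertical frequencies
            # i/16 and j/16 cycles/pixel; scale by display pixels/degree.
            i, j = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
            f = pixels_per_degree / 16.0 * np.hypot(i, j)
            f_eff = np.maximum(f, f_peak)  # flatten the model below its peak
            log_t = np.log10(t_min) + k * np.log10(f_eff / f_peak) ** 2
            return 2.0 * (10.0 ** log_t) * 255.0  # steps in 8-bit luminance units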

  9. A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding

    NASA Astrophysics Data System (ADS)

    Alesanco, Álvaro; García, José

    2007-12-01

    Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for clinical acceptability of the reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality measured with the new distortion index wavelet weighted PRD (WWPRD), which more accurately reflects the real clinical distortion of the compressed signal. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, enabling clinically useful reconstructed signals to be obtained. Because of its computational efficiency, the method is suitable for real-time operation and is thus very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. The results lead to an excellent conclusion: the method controls quality very accurately, not only in mean value but also with a low standard deviation. The effects of ECG baseline wander and of noise on compression are also discussed. Baseline wander has negative effects when the WWPRD index is used to guarantee quality, because the index is normalized by the signal energy; it is therefore better to remove it before compression. Noise, on the other hand, increases the signal energy, provoking an artificial increase in the coded signal's bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10% preserves signal quality, and they therefore recommend this value for use in the compression system.
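
    For reference, the unweighted PRD that WWPRD refines is a one-line ratio in Python; WWPRD applies per-subband weights to the same kind of ratio (the weights, being specific to the paper, are omitted here).

        import numpy as np

        def prd(x, x_rec):
            # Percentage RMS difference between original and reconstruction.
            x, x_rec = np.asarray(x, float), np.asarray(x_rec, float)
            return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))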

  10. Next gen wavelets down-sampling preserving statistics

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Miao, Lidan; Chanyagon, Pornchai; Cader, Masud

    2007-04-01

    We extend the second-generation Discrete Wavelet Transform (DWT) of Sweldens to a Next-Generation (NG) DWT that preserves salient statistical features. The lossless NG_DWT accomplishes data compression of the "wellness baseline profiles (WBP)" of the aging population at home. For medical monitoring systems on the home front, we translate military experience to dual use by veterans and civilians alike, with the following three requirements: (i) Data compression: the necessary down-sampling reduces the immense amount of data in an individual WBP, gathered over hours to days to weeks, to summary moments (e.g., mean value, variance) for primary caretakers, without the artifacts caused by arbitrary FFT windowing. (ii) Lossless: our new NG_DWT must preserve the original data sets. (iii) Phase transition: the NG_DWT must capture the critical phase transition from wellness toward sickness, with simultaneous display of local statistical moments. According to Nyquist sampling theory, assuming band-limited wellness physiology, we must sample the WBP at least twice per day, since it changes diurnally and seasonally. Since the NG_DWT, like the second generation, is lossless, we can reconstruct the original time series for physicians' second looks. This NG_DWT technique can also help stock-market day-traders monitor the volatility of multiple portfolios without artificial horizon artifacts.
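
    The "moments" summarization in requirement (i) amounts to block-wise statistics; a minimal Python sketch of that summarization alone (block length and the choice of moments are assumptions, and the lossless NG_DWT construction itself is not shown) is:

        import numpy as np

        def downsample_with_moments(x, block=48):
            # Summarize each block of samples by local statistical moments so
            # the coarse WBP stream preserves mean and variance per interval.
            n = x.size // block
            blocks = x[:n * block].reshape(n, block)
            return blocks.mean(axis=1), blocks.var(axis=1)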