Wavelet compression of medical imagery.
Reiter, E
1996-01-01
Wavelet compression is a transform-based compression technique recently shown to provide diagnostic-quality images at compression ratios as great as 30:1. Based on a recently developed field of applied mathematics, wavelet compression has found success in compression applications from digital fingerprints to seismic data. The underlying strength of the method is attributable in large part to the efficient representation of image data by the wavelet transform. This efficient or sparse representation forms the basis for high-quality image compression by providing subsequent steps of the compression scheme with data likely to result in long runs of zero. These long runs of zero in turn compress very efficiently, allowing wavelet compression to deliver substantially better performance than existing Fourier-based methods. Although the lack of standardization has historically been an impediment to widespread adoption of wavelet compression, this situation may begin to change as the operational benefits of the technology become better known. PMID:10165355
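The mechanism described in this abstract, a sparse wavelet representation producing long zero runs that code cheaply, can be illustrated with a toy run-length encoder (a sketch for intuition only, not the coder used in the paper; the coefficient values are invented):

```python
def rle_encode(seq):
    """Run-length encode a sparse sequence as (zero_run_length, value) pairs.

    Each pair records how many zeros precede the next nonzero value.
    A trailing run of zeros is emitted as (count, None).
    """
    pairs = []
    run = 0
    for x in seq:
        if x == 0:
            run += 1
        else:
            pairs.append((run, x))
            run = 0
    if run:
        pairs.append((run, None))
    return pairs

# A sparse, wavelet-like coefficient sequence: mostly zeros.
coeffs = [0, 0, 0, 5, 0, 0, 0, 0, -3, 0, 0, 0, 0, 0, 0, 7]
encoded = rle_encode(coeffs)
# 16 samples collapse to 3 (run, value) pairs.
```

The sparser the transform output, the longer the runs and the fewer pairs the coder has to emit, which is the source of the efficiency claimed over Fourier-based methods.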
Perceptually Lossless Wavelet Compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John
1996-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet at level L is r 2(exp -L), where r is the display visual resolution in pixels/degree. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
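The level-to-frequency relation quoted in the abstract can be restated as a small helper (illustrative only; the threshold model itself is not reproduced here, and the 32 pixels/degree display resolution is an invented example value):

```python
def dwt_level_frequency(r, L):
    """Spatial frequency (cycles/degree) of DWT level L for a display
    visual resolution of r pixels/degree: f = r * 2**(-L)."""
    return r * 2 ** (-L)

# Example: a 32 pixels/degree display.
freqs = [dwt_level_frequency(32, L) for L in range(1, 5)]
# Levels 1..4 map to progressively lower spatial frequencies,
# which is why amplitude thresholds fall at coarser levels.
```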
Data compression by wavelet transforms
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1992-01-01
A wavelet transform algorithm is applied to image compression. It is observed that the algorithm does not suffer from the blockiness characteristic of the DCT-based algorithms at compression ratios exceeding 25:1, but the edges do not appear as sharp as they do with the latter method. Some suggestions for the improved performance of the wavelet transform method are presented.
Wavelet transform in electrocardiography--data compression.
Provazník, I; Kozumplík, J
1997-06-01
An application of the wavelet transform to electrocardiography is described in the paper. The transform is used as the first stage of a lossy compression algorithm for efficient coding of rest ECG signals. The proposed technique is based on the decomposition of the ECG signal into a set of basis functions covering the time-frequency domain; thus, the non-stationary character of ECG data is taken into account. Some of the time-frequency signal components are removed because of their low influence on signal characteristics. The remaining components are efficiently coded by quantization, composition into a sequence of coefficients, and compression by a run-length coder and an entropy (Huffman) coder. The proposed wavelet-based compression algorithm can compress data to an average code length of about 1 bit/sample. The algorithm can also be implemented in a real-time processing system when the wavelet transform is computed by the fast linear filters described in the paper. PMID:9291025
Compression of echocardiographic scan line data using wavelet packet transform
NASA Technical Reports Server (NTRS)
Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.
2001-01-01
An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scanline data based on the wavelet packet transform. A compression ratio of at least 94:1 resulted in preserved image quality.
Wavelet analysis and high quality JPEG2000 compression using Daubechies wavelet
NASA Astrophysics Data System (ADS)
Khalid, Azra; Afsheen, Uzma; Umer Baig, Saad
2011-10-01
Wavelet analysis and its applications have received much attention in recent times. Wavelets are widely applied in areas such as transient signal analysis, image processing, signal processing, and data compression, and have gained popularity because of their multiresolution, subband coding, and feature extraction capabilities. The paper describes an efficient application of wavelet analysis to image compression, exploring the Daubechies wavelet as the basis function. Wavelets have scaling properties and are localized in time and frequency; they separate the image into different scales on the basis of frequency content. The resulting compressed image can then be easily stored or transmitted, saving crucial communication bandwidth. Because of its high-quality compression, wavelet analysis is one of the core building blocks of the new JPEG2000 image compression standard. The paper proposes a Daubechies wavelet analysis, quantization, and Huffman encoding scheme which results in high compression and good-quality reconstruction.
Compressive sensing exploiting wavelet-domain dependencies for ECG compression
NASA Astrophysics Data System (ADS)
Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.
2012-06-01
Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet
Wavelet compression efficiency investigation for medical images
NASA Astrophysics Data System (ADS)
Moryc, Marcin; Dziech, Wiera
2006-03-01
Medical images are acquired and stored digitally. These images can be very large in size and number, and compression can increase the speed of transmission and reduce the cost of storage. In the paper, the approximation of medical images using the transform method based on wavelet functions is investigated. The tested clinical images are taken from multiple anatomical regions and modalities (Computed Tomography CT, Magnetic Resonance MR, Ultrasound, Mammography and X-Ray images). To compress the medical images, a threshold criterion has been applied. The mean square error (MSE) is used as a measure of approximation quality. Plots of the MSE versus compression percentage and approximated images are included for comparison of approximation efficiency.
Improved Compression of Wavelet-Transformed Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Klimesh, Matthew
2005-01-01
A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run-lengths, and takes the difference between this average
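The two code families named in the abstract can be sketched as follows. A Rice code (a Golomb code whose parameter is a power of two) stands in for the general Golomb code, and the adaptive parameter selection is not reproduced:

```python
def rice_encode(n, k):
    """Golomb code with parameter m = 2**k (a Rice code) for a
    nonnegative integer n: unary-coded quotient, then k-bit remainder."""
    q, r = divmod(n, 1 << k)
    return '1' * q + '0' + (format(r, '0%db' % k) if k else '')

def exp_golomb_encode(n, k=0):
    """Order-k exponential-Golomb code for a nonnegative integer n:
    (n >> k) + 1 in binary with a leading-zero prefix, then k suffix bits."""
    v = (n >> k) + 1
    prefix = '0' * (v.bit_length() - 1) + format(v, 'b')
    suffix = format(n & ((1 << k) - 1), '0%db' % k) if k else ''
    return prefix + suffix

# Order-0 exp-Golomb codewords for the run lengths 0..4.
codes = [exp_golomb_encode(n) for n in range(5)]
```

Both families are prefix-free, so codewords can be concatenated into a bit stream and parsed back unambiguously, which is what makes them usable for the run-length/value pairs described above.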
Embedded wavelet packet transform technique for texture compression
NASA Astrophysics Data System (ADS)
Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay
1995-09-01
A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.
Wavelet based ECG compression with adaptive thresholding and efficient coding.
Alshamali, A
2010-01-01
This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding of their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms previously published schemes. PMID:20608811
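A minimal sketch of the significant-coefficient selection step (the threshold value and coefficients below are invented, and the paper's position coding and Huffman stages are omitted):

```python
def significant(coeffs, thr):
    """Keep (position, value) pairs for coefficients whose magnitude
    is at or above the threshold; everything else is discarded."""
    return [(i, c) for i, c in enumerate(coeffs) if abs(c) >= thr]

# Invented wavelet coefficients of an ECG beat: a few large, many tiny.
coeffs = [9.0, -0.2, 0.1, 4.5, -0.3, 0.0, 2.2, 0.1]
kept = significant(coeffs, 1.0)
# Fraction of signal energy retained by the kept coefficients.
energy_kept = sum(c * c for _, c in kept) / sum(c * c for c in coeffs)
```

The point of threshold optimization is exactly this trade: discard most positions while keeping nearly all of the signal energy.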
Coresident sensor fusion and compression using the wavelet transform
Yocky, D.A.
1996-03-11
Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image, and can follow either an ad hoc or a model-based approach. We compare results from commercial "fusion" software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.
Perceptually lossless wavelet-based compression for medical images
NASA Astrophysics Data System (ADS)
Lin, Nai-wen; Yu, Tsaifa; Chan, Andrew K.
1997-05-01
In this paper, we present a wavelet-based medical image compression scheme so that images displayed on different devices are perceptually lossless. Since human visual sensitivity varies across subbands, we apply perceptually lossless criteria to quantize the wavelet transform coefficients of each subband such that visual distortions are reduced to unnoticeable levels. Following this, we use a high-compression-ratio hierarchical tree to code these coefficients. Experimental results indicate that our perceptually lossless coder achieves a compression ratio 2-5 times higher than typical lossless compression schemes while producing perceptually identical image content on the target display device.
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
Three-dimensional compression scheme based on wavelet transform
NASA Astrophysics Data System (ADS)
Yang, Wu; Xu, Hui; Liao, Mengyang
1999-03-01
In this paper, a 3D compression method based on the separable wavelet transform is discussed in detail. The most commonly used digital modalities generate multiple slices in a single examination, which are normally anatomically or physiologically correlated to each other. 3D wavelet compression methods can achieve more efficient compression by exploiting the correlation between slices. The first step is based on a separable 3D wavelet transform. Considering the difference between pixel distances within a slice and those between slices, a biorthogonal Antonini filter bank is applied within 2D slices and a second biorthogonal Villa4 filter bank in the slice direction. Then, the S+P transform is applied to the low-resolution wavelet components, and an optimal quantizer is presented after analysis of the quantization noise. We use an optimal bit allocation algorithm which, instead of eliminating the coefficients of high-resolution components in smooth areas, minimizes the system reconstruction distortion at a given bit rate. Finally, to maintain high coding efficiency and adapt to the different properties of each component, a comprehensive entropy coding method is proposed, in which arithmetic coding is applied to high-resolution components and adaptive Huffman coding to low-resolution components. Our experimental results are evaluated by several image measures, and our 3D wavelet compression scheme is shown to be more efficient than 2D wavelet compression.
Wavelet-based audio embedding and audio/video compression
NASA Astrophysics Data System (ADS)
Mendenhall, Michael J.; Claypoole, Roger L., Jr.
2001-12-01
Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
The effects of wavelet compression on Digital Elevation Models (DEMs)
Oimoen, M.J.
2004-01-01
This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
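The zero-the-smallest-coefficients idea can be sketched in one dimension with a single-level Haar transform (the paper uses the DAUB6 filters on 2-D grids; the elevation profile and the 1.0 threshold below are invented):

```python
def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform
    (x must have even length)."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x += [s * (a + d), s * (a - d)]
    return x

# A smooth, invented elevation profile: its Haar detail coefficients are tiny.
dem = [100.0 + 0.5 * i for i in range(16)]
approx, detail = haar_forward(dem)
# Null the small detail coefficients, as the DEM study does at 95 percent.
sparse_detail = [d if abs(d) > 1.0 else 0.0 for d in detail]
recon = haar_inverse(approx, sparse_detail)
max_err = max(abs(a - b) for a, b in zip(dem, recon))
```

For this smooth profile every detail coefficient is nulled, yet the maximum elevation residual stays at a quarter of the sample step, which is the kind of small, derivative-affecting error the paper quantifies.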
Adaptive video compressed sampling in the wavelet domain
NASA Astrophysics Data System (ADS)
Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi
2016-07-01
In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.
Wavelet/scalar quantization compression standard for fingerprint images
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for the digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
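The quantization style this standard is known for, uniform bins with an enlarged dead zone around zero, can be sketched as follows (the bin widths and dead-zone factor here are invented illustration values; the real standard specifies per-subband parameters):

```python
def deadzone_quantize(c, step, deadzone=1.2):
    """Uniform scalar quantizer with an enlarged dead zone around zero:
    coefficients inside the dead zone map to index 0."""
    half = deadzone * step / 2
    if abs(c) <= half:
        return 0
    sign = 1 if c > 0 else -1
    return sign * (int((abs(c) - half) / step) + 1)

def deadzone_dequantize(q, step, deadzone=1.2):
    """Reconstruct at the midpoint of the quantization bin."""
    if q == 0:
        return 0.0
    half = deadzone * step / 2
    sign = 1 if q > 0 else -1
    return sign * (half + (abs(q) - 0.5) * step)
```

The wide zero bin is what drives most small wavelet coefficients to zero and feeds the run-length/entropy coding stages downstream.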
Wavelet based hierarchical coding scheme for radar image compression
NASA Astrophysics Data System (ADS)
Sheng, Wen; Jiao, Xiaoli; He, Jifeng
2007-12-01
This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into image blocks of different frequency bands by a 2-D wavelet transformation; each block is quantized and coded by a Huffman coding scheme. A demonstration system is developed, showing that under the requirement of real-time processing the compression ratio can be very high, with no significant loss of the target signal in the restored radar image.
Compressed-domain video segmentation using wavelet transformation
NASA Astrophysics Data System (ADS)
Yu, Hong H.
1999-10-01
Video segmentation is an important first step towards automatic video indexing, retrieval, and editing. However, the sheer size of video data makes it hard to handle in real time. To reach the goal of real-time processing, several factors need to be considered. First, indexing video directly in the compressed domain offers the advantage of fast processing on top of efficient storage. Second, extracting simple features with fast algorithms undoubtedly helps speed up the process. The questions are what kind of simple feature can characterize the changing statistics, and what kind of algorithm can provide such a feature quickly. In this paper, we propose a new automatic video segmentation scheme that utilizes the wavelet transformation, based on the following considerations: the wavelet is a good tool for subband decomposition, encoding both frequency and spatial information; moreover, it is easy to program and fast to execute. In the last decade or so, the wavelet transform has emerged in image/video signal processing for analyzing functions at different levels of detail. In particular, the wavelet has been widely used in image compression, where it is possible to recover a fairly accurate representation of an image by saving the few largest wavelet coefficients (and throwing away part or all of the smaller coefficients). Using this property, we extract a discriminating signature for each image from a few large coefficients in each color channel. The system works on compressed video, requiring no full decoding, and performs a wavelet transformation on the extracted video data. The signature (as a feature) is extracted from the wavelet coefficients to characterize the changing statistics of shot transitions. Cuts, fades, and dissolves are detected based on analysis of the changing-statistics curve.
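The signature idea, keep only the few largest coefficients per channel and compare them across frames, can be sketched as follows (all coefficient values are invented; the paper's actual signature and detection rules are not reproduced):

```python
def signature(coeffs, k=3):
    """Signature of a frame: its k largest-magnitude wavelet
    coefficients, kept as {index: value}."""
    top = sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))[:k]
    return {i: coeffs[i] for i in top}

def sig_distance(s1, s2):
    """L1 distance between two sparse signatures."""
    keys = set(s1) | set(s2)
    return sum(abs(s1.get(i, 0.0) - s2.get(i, 0.0)) for i in keys)

frame1 = [9.0, 0.1, 4.0, 0.0, 1.0, 0.2]   # shot A
frame2 = [8.8, 0.0, 4.1, 0.1, 1.1, 0.2]   # shot A, next frame
frame3 = [0.2, 7.5, 0.1, 6.0, 0.0, 3.0]   # first frame after a cut
within = sig_distance(signature(frame1), signature(frame2))
across = sig_distance(signature(frame1), signature(frame3))
# A shot boundary shows up as a spike in the distance curve.
```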
Oncologic image compression using both wavelet and masking techniques.
Yin, F F; Gao, Q
1997-12-01
A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase of compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown. PMID:9434988
Research of the wavelet based ECW remote sensing image compression technology
NASA Astrophysics Data System (ADS)
Zhang, Lan; Gu, Xingfa; Yu, Tao; Dong, Yang; Hu, Xinli; Xu, Hua
2007-11-01
This paper studies the wavelet-based ECW remote sensing image compression technology. Compared with the traditional compression technology JPEG and the newer wavelet-based JPEG2000, the ER Mapper Compressed Wavelet (ECW) format shows significant advantages when compressing very large remote sensing images. The use of the ECW SDK is also discussed, and it proves to be the best and fastest way to compress China-Brazil Earth Resources Satellite (CBERS) imagery.
Low-complexity wavelet filter design for image compression
NASA Technical Reports Server (NTRS)
Majani, E.
1994-01-01
Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.
Wavelet for Ultrasonic Flaw Enhancement and Image Compression
NASA Astrophysics Data System (ADS)
Cheng, W.; Tsukada, K.; Li, L. Q.; Hanasaki, K.
2003-03-01
Ultrasonic imaging has been widely used in Non-Destructive Testing (NDT) and medical applications. However, the image is always degraded by blur and noise. Besides, the pressure on both storage and transmission gives rise to the need for image compression. We apply the 2-D Discrete Wavelet Transform (DWT) to C-scan 2-D images to realize flaw enhancement and image compression, taking advantage of DWT scale and orientation selectivity. Wavelet coefficient thresholding and scalar quantization are employed, respectively. Furthermore, we unify flaw enhancement and image compression in one process. The reconstructed image from the compressed data gives a clearer interpretation of the flaws at a much smaller bit rate.
Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization
NASA Astrophysics Data System (ADS)
Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.
2004-02-01
Compression of ultrasonic images, which are always corrupted by noise, can cause over-smoothing or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT) that can also suppress the noise without losing much flaw-relevant information is presented in this work. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple scheme named DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as dominated by noise, dominated by signal, or bi-effected. Better denoising can be realized by selectively thresholding the DWCs. In 'local quantization', different quantization strategies are applied to the DWCs according to their classification and the local image properties; this allocates the bit rate to the DWCs more efficiently and thus achieves a higher compression rate. Meanwhile, the decompressed image shows noise suppression with flaw characteristics preserved.
Medical image compression algorithm based on wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin
2005-02-01
With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision, and large data volumes, and an optimized compression algorithm can relieve the constraints on transmission speed and data storage. This paper describes the characteristics of the human visual system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then puts forward an optimized compression algorithm based on wavelet zerotrees. After the image is smoothed, it is decomposed with Haar filters, and the wavelet coefficients are then quantized adaptively. In this way we maximize compression efficiency and achieve better subjective visual quality. The algorithm can be applied to image transmission in telemedicine. Finally, we examined its feasibility with an image transmission experiment over a network.
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
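The mean-subtraction step can be sketched directly. The helper names and data layout below are illustrative assumptions; the sign-magnitude conversion and entropy-coding stages that follow in the actual method are omitted:

```python
def subtract_plane_means(subband):
    # `subband` is one spatially-low-pass subband of the 3-D wavelet
    # decomposition: a list of 2-D spatial planes, one per spectral slice.
    # Returns zero-mean planes plus the per-plane means, which must be
    # carried as side information for reconstruction.
    centered, means = [], []
    for plane in subband:
        vals = [v for row in plane for v in row]
        m = sum(vals) / len(vals)
        means.append(m)
        centered.append([[v - m for v in row] for row in plane])
    return centered, means

def add_plane_means(centered, means):
    # Inverse step applied by the decoder before the inverse 3-D DWT.
    return [[[v + m for v in row] for row in plane]
            for plane, m in zip(centered, means)]
```

The point of the subtraction is exactly the one the text makes: the centered planes are zero-mean, so 2-D subband coders tuned for zero-mean data apply without modification.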
Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method
NASA Astrophysics Data System (ADS)
Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph
2008-11-01
This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.
Electroencephalographic compression based on modulated filter banks and wavelet transform.
Bazán-Prieto, Carlos; Cárdenas-Barrera, Julián; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando
2011-01-01
Due to the large volume of information generated in an electroencephalographic (EEG) study, compression is needed for storage, processing, or transmission for analysis. In this paper we evaluate and compare two lossy compression techniques applied to EEG signals: decomposition by filter banks and by the wavelet packet transform, seeking the best compression, the best quality, and the most efficient real-time implementation. Owing to specific properties of EEG signals, we propose a quantization stage adapted to the dynamic range of each band, aiming for higher quality. The results show that the filter-bank compressor performs better than the transform methods, and that quantization adapted to the dynamic range significantly enhances the quality. PMID:22255966
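The band-adaptive quantization idea, matching the step size to each band's dynamic range, can be sketched as follows. This is a hypothetical uniform quantizer of my own construction; the paper does not specify its exact quantizer design:

```python
def quantize_band(band, levels):
    # Uniform quantizer whose step is scaled to this band's dynamic
    # range, so low-amplitude EEG bands are not wastefully coarse-coded.
    lo, hi = min(band), max(band)
    step = (hi - lo) / (levels - 1) if hi > lo else 1.0
    indices = [round((v - lo) / step) for v in band]
    return indices, lo, step

def dequantize_band(indices, lo, step):
    # The decoder needs only (lo, step) per band as side information.
    return [lo + q * step for q in indices]
```

Because the step adapts per band, the worst-case reconstruction error within a band is bounded by half its own step, rather than by a single global step sized for the largest band.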
Improved wavelet packet compression of electrocardiogram data: 1. noise filtering
NASA Astrophysics Data System (ADS)
Bradie, Brian D.
1995-09-01
The improvement in the performance of a wavelet packet based compression scheme for single-lead electrocardiogram (ECG) data, obtained by prefiltering noise from the ECG signals, is investigated. The removal of powerline interference and the attenuation of high-frequency muscle noise are considered. Selected records from the MIT-BIH Arrhythmia Database are used as test signals. After both types of noise artifact were filtered, an average data rate of 167.6 bits per second (corresponding to a compression ratio of 23.62), with an average root-mean-square (rms) error of 15.886 μV, was achieved. These figures represent better than a 9% improvement in data rate and a 13.5% reduction in rms error over compressing the unfiltered signals.
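Powerline removal of the kind used here is typically done with a narrow IIR notch. A minimal sketch follows; this is a standard second-order notch design, not necessarily the filter the paper used, and the sampling rate and pole radius are illustrative:

```python
import math

def notch_filter(x, fs, f0, r=0.95):
    # Second-order IIR notch: zeros on the unit circle at +/-f0 give a
    # deep null at the powerline frequency; poles at radius r just
    # inside the circle keep the stopband narrow. Gain normalized to 1
    # at DC so the ECG baseline passes unchanged.
    w0 = 2.0 * math.pi * f0 / fs
    b0, b1, b2 = 1.0, -2.0 * math.cos(w0), 1.0
    a1, a2 = -2.0 * r * math.cos(w0), r * r
    g = (1.0 + a1 + a2) / (b0 + b1 + b2)
    out, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for v in x:
        y = g * (b0 * v + b1 * x1 + b2 * x2) - a1 * y1 - a2 * y2
        x2, x1 = x1, v
        y2, y1 = y1, y
        out.append(y)
    return out
```

Prefiltering like this removes energy the wavelet packet coder would otherwise spend bits encoding, which is exactly the source of the data-rate improvement reported above.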
HVS-motivated quantization schemes in wavelet image compression
NASA Astrophysics Data System (ADS)
Topiwala, Pankaj N.
1996-11-01
Wavelet still-image compression has recently been a focus of intense research and appears to be maturing as a subject. Considerable coding gains over older DCT-based methods have been achieved, while the computational complexity has been made very competitive. We report here on a high-performance wavelet still-image compression algorithm optimized for both mean-squared error (MSE) and human visual system (HVS) characteristics. We present the problem of optimal quantization from a Lagrange-multiplier point of view and derive novel solutions. Ideally, all three components of a typical image compression system (transform, quantization, and entropy coding) should be optimized simultaneously. However, the highly nonlinear nature of quantization and encoding complicates the formulation of the total cost function. In this report, we consider optimizing the filter and then the quantizer separately, holding the other components fixed. While optimal bit allocation has been treated in the literature, we specifically address the issue of setting the quantization step sizes, which in practice is quite different. In this paper, we select a short high-performance filter, develop an efficient scalar MSE quantizer, and add four HVS-motivated quantizers which add some value visually without incurring any MSE losses. A combination of run-length and empirically optimized Huffman coding is fixed in this study.
Compression of 3D integral images using wavelet decomposition
NASA Astrophysics Data System (ADS)
Mazri, Meriem; Aggoun, Amar
2003-06-01
This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower-frequency bands of the viewpoint images are assembled and compressed using a 3-Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding, which achieves decorrelation within and between the 2D low-frequency bands of the different viewpoint images. The remaining higher-frequency bands are arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey-level 3D UII using a uniform scalar quantizer with deadzone. The results, averaged over the four UII intensity distributions, are presented and compared with a previously reported 3D-DCT scheme. The algorithm was found to achieve better rate-distortion performance, with respect to compression ratio and image quality, at very low bit rates.
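The uniform scalar quantizer with deadzone mentioned in the simulations has a standard form. A sketch follows, with the bin-centre reconstruction offset being a common but illustrative choice:

```python
import math

def deadzone_quantize(values, step):
    # Uniform scalar quantizer whose zero bin (the "deadzone") spans
    # (-step, +step), twice the width of the other bins: many small
    # wavelet coefficients collapse to index 0, lengthening zero runs.
    out = []
    for v in values:
        q = int(abs(v) // step)
        out.append(-q if v < 0 else q)
    return out

def deadzone_dequantize(indices, step):
    # Reconstruct at the centre of each nonzero bin; index 0 decodes to 0.
    return [0.0 if q == 0 else math.copysign((abs(q) + 0.5) * step, q)
            for q in indices]
```

The widened zero bin is what distinguishes this quantizer from a plain uniform one: it biases the coder toward discarding near-zero detail coefficients, where the eye is least sensitive.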
Hyperspectral image compression using bands combination wavelet transformation
NASA Astrophysics Data System (ADS)
Wang, Wenjie; Zhao, Zhongming; Zhu, Haiqing
2009-10-01
Hyperspectral imaging technology is at the forefront of remote sensing development in the 21st century and is one of the most important focuses of the remote sensing domain. Hyperspectral images provide much more information than multispectral images and can solve many problems that cannot be solved by multispectral imaging technology. However, this advantage comes at the cost of a massive quantity of data, which makes image processing, storage, and transmission difficult, so research on hyperspectral image compression methods has important practical significance. This paper improves upon the well-known KLT-WT-2DSPECK (Karhunen-Loeve transform + wavelet transform + two-dimensional set-partitioning embedded block coding) algorithm and puts forward a KLT + bands-combination 2DWT + 2DSPECK algorithm. Experiments prove that this method is effective.
NASA Astrophysics Data System (ADS)
Widjaja, Joewono
2015-11-01
A new method is proposed for recognizing noise-corrupted, low-contrast retinal images that employs a joint wavelet transform correlator with compressed reference and target. Noise robustness is achieved by correlating wavelet-transformed retinal target and reference images. Simulation results show that, besides being robust to noise, the recognition performance can become independent of compression quality when the low spatial-frequency components of the joint power spectrum are enhanced by an appropriately dilated wavelet filter.
Adaptive segmentation of wavelet transform coefficients for video compression
NASA Astrophysics Data System (ADS)
Wasilewski, Piotr
2000-04-01
This paper presents a video compression algorithm suitable for inexpensive real-time hardware implementation. The algorithm utilizes the Discrete Wavelet Transform (DWT) with a new Adaptive Spatial Segmentation Algorithm (ASSA). It was designed to obtain decompressed video quality better than or similar to the H.263 recommendation and the MPEG standard at lower computational effort, especially at high compression rates, and was optimized for hardware implementation in low-cost Field Programmable Gate Array (FPGA) devices. The luminance and chrominance components of every frame are encoded with a 3-level wavelet transform using a biorthogonal filter bank. The low-frequency subimage is encoded with an ADPCM algorithm. For the high-frequency subimages the new Adaptive Spatial Segmentation Algorithm is applied. It divides images into rectangular blocks that may overlap each other, with the width and height of each block set independently. There are two kinds of blocks: Low Variance Blocks (LVB) and High Variance Blocks (HVB). The positions of the blocks and the values of the WT coefficients belonging to the HVB are encoded with modified zero-tree algorithms; LVB are encoded with their mean value. The results show that the presented algorithm gives similar or better decompressed image quality compared with H.263, by as much as 5 dB in PSNR.
Wavelet Compression of Satellite-Transmitted Digital Mammograms
NASA Technical Reports Server (NTRS)
Zheng, Yuan F.
2001-01-01
Breast cancer is one of the major causes of cancer death in women in the United States. The most effective way to treat breast cancer is to detect it at an early stage by screening patients periodically. Conventional screen-film mammography uses X-ray films which are effective in detecting early abnormalities of the breast. Direct digital mammography has the potential to improve image quality and to take advantage of convenient storage, efficient transmission, powerful computer-aided diagnosis, etc. One effective alternative to direct digital imaging is secondary digitization of X-ray films. This technique may not provide as high an image quality as the direct digital approach, but it definitely has other advantages inherent to digital images. One of them is the use of satellite transmission for transferring digital mammograms between a remote image-acquisition site and a central image-reading site. This technique can benefit a large population of women who reside in remote areas where major screening and diagnostic facilities are not available. The NASA Lewis Research Center (LeRC), in collaboration with the Cleveland Clinic Foundation (CCF), has begun a pilot study to investigate the application of the Advanced Communications Technology Satellite (ACTS) network to telemammography. The bandwidth of a T1 link is limited (1.544 Mbps) while a mammographic image is huge, so transmitting a single mammogram takes a long time. For example, a mammogram of 4k by 4k pixels with 16 bits per pixel needs more than 4 minutes to transmit, and the four images of a typical screening exam would take more than 16 minutes. This is too long for convenient screening. Consequently, compression is necessary to make satellite transmission of mammographic images practical. The Wavelet Research Group of the Department of Electrical Engineering at The Ohio State University (OSU) participated in the LeRC-CCF collaboration by
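The quoted transmission times follow from simple arithmetic. Note the raw payload time works out to just under three minutes per image, so the "more than 4 minutes" figure presumably includes protocol overhead and handshaking, which this back-of-envelope sketch ignores:

```python
# Raw payload time for one uncompressed mammogram over a T1 link.
bits_per_image = 4096 * 4096 * 16   # 4k x 4k pixels, 16 bits per pixel
t1_bps = 1.544e6                    # T1 line rate in bits per second

seconds = bits_per_image / t1_bps
print(round(seconds, 1))            # raw payload time per image, in seconds
print(round(4 * seconds / 60, 1))   # raw payload minutes for a 4-image exam
```

A 30:1 wavelet compression ratio would cut the per-image payload time by the same factor, bringing a full exam down to well under a minute.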
Bradley, J.N.; Brislawn, C.M.
1992-04-11
This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.
Dynamic contrast-based quantization for lossy wavelet image compression.
Chandler, Damon M; Hemami, Sheila S
2005-04-01
This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate. Based on the results of recent psychophysical experiments using near-threshold and suprathreshold wavelet subband quantization distortions presented against natural-image backgrounds, subbands are quantized such that the distortions in the reconstructed image exhibit root-mean-squared contrasts selected based on image, subband, and display characteristics and on a measure of total visual distortion so as to preserve the visual system's ability to integrate edge structure across scale space. Within a single, unified framework, the proposed contrast-based strategy yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates and which demonstrate improved visual quality over current visually lossy approaches at low bit rates. This strategy operates in the context of both nonembedded and embedded quantization, the latter of which yields a highly scalable codestream which attempts to maintain visual quality at all bit rates; a specific application of the proposed algorithm to JPEG-2000 is presented. PMID:15825476
Remotely sensed image compression based on wavelet transform
NASA Technical Reports Server (NTRS)
Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.
1995-01-01
In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm on LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal-to-noise ratio) and classification capability.
Landin, Cristina Juarez; Reyes, Magally Martinez; Martin, Anabelem Soberanes; Rosas, Rosa Maria Valdovinos; Ramirez, Jose Luis Sanchez; Ponomaryov, Volodymyr; Soto, Maria Dolores Torres
2011-01-01
An analysis of different wavelets, including novel wavelet families based on atomic functions, is presented, especially for ultrasound (US) and mammography (MG) image compression. In this way we can determine which type of wavelet filter works better for the compression of such images. Key properties (frequency response, approximation order, projection cosine, and Riesz bounds) were determined and compared for the classic W9/7 wavelet used in standard JPEG2000, Daubechies8, and Symlet8, as well as for the complex Kravchenko-Rvachev wavelets ψ(t) based on the atomic functions up(t), fup_2(t), and eup(t). The comparison results show significantly better performance of the novel wavelets, which is justified both by experiments and by the study of the key properties. PMID:21431590
Bieleck, T.; Song, L.M.; Yau, S.S.T.; Kwong, M.K.
1995-07-01
The concepts of random wavelet transforms and discrete random wavelet transforms are introduced. It is shown that these transforms can lead to simultaneous compression and de-noising of signals that have been corrupted with fractional noises. Potential applications of algebraic geometric coding theory to encode the ensuing data are also discussed.
Best parameters selection for wavelet packet-based compression of magnetic resonance images.
Abu-Rezq, A N; Tolba, A S; Khuwaja, G A; Foda, S G
1999-10-01
Transmission of compressed medical images is becoming a vital tool in telemedicine. Thus new methods are needed for efficient image compression. This study discovers the best design parameters for a data compression scheme applied to digital magnetic resonance (MR) images. The proposed technique aims at reducing the transmission cost while preserving the diagnostic information. By selecting the wavelet packet's filters, decomposition level, and subbands that are better adapted to the frequency characteristics of the image, one may achieve better image representation in the sense of lower entropy or minimal distortion. Experimental results show that the selection of the best parameters has a dramatic effect on the data compression rate of MR images. In all cases, decomposition at three or four levels with the Coiflet 5 wavelet (Coif 5) results in better compression performance than the other wavelets. Image resolution is found to have a remarkable effect on the compression rate. PMID:10529302
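The "lower entropy" selection criterion mentioned above is the first-order Shannon entropy of the quantized subband coefficients; ranking candidate wavelet-packet bases by it can be sketched as:

```python
import math
from collections import Counter

def entropy_bits(symbols):
    # First-order Shannon entropy in bits/symbol of a quantized subband.
    # A lower value suggests the decomposition represents the image more
    # compactly, which is the sense in which one basis "adapts better"
    # to the image's frequency characteristics than another.
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in Counter(symbols).values())
```

For example, a subband quantized to all zeros has entropy 0 bits/symbol, while one with four equally likely symbols costs 2 bits/symbol, so the search prefers decompositions that concentrate energy into few, heavily repeated symbols.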
The wavelet/scalar quantization compression standard for digital fingerprint images
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
Compressed sensing MR image reconstruction exploiting TGV and wavelet sparsity.
Zhao, Di; Du, Huiqian; Han, Yu; Mei, Wenbo
2014-01-01
Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. The reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and the reference MR images in pixel domain. Unfortunately existing methods do not work well given that contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in wavelet transform and gradient domains. The idea is attractive because it requires neither the estimation of the contrast changes nor multiple times motion compensations. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). Fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease sampling ratio or improve the reconstruction quality alternatively. PMID:25371704
Dong, Xiao-Ling; Sun, Xu-Dong
2013-12-01
The feasibility of determining the reducing sugar content of potato granules by a wavelet compression algorithm combined with near-infrared spectroscopy was explored. The spectra of 250 potato granule samples were recorded by a Fourier-transform near-infrared spectrometer in the range of 4000-10000 cm-1. Three parameters (vanishing moments, number of wavelet coefficients, and number of principal component factors) were optimized, with optimal values of 10, 100, and 20, respectively. The original spectra of 1501 spectral variables were transformed into 100 wavelet coefficients using a Daubechies (db) wavelet function. Partial least squares (PLS) calibration models were developed from the 1501 spectral variables and from the 100 wavelet coefficients. Sixty-two unknown samples in the prediction set were used to evaluate the performance of the PLS models. By comparison, the optimal result was obtained by wavelet compression combined with the PLS calibration model: the correlation coefficient of prediction and the root mean square error of prediction were 0.98 and 0.181%, respectively. The experimental results show that the dimensionality of the spectral data was reduced while scarcely losing effective information, the PLS model was simplified, and its predictive ability improved. PMID:24611373
Medical image compression based on a morphological representation of wavelet coefficients.
Phelan, N C; Ennis, J T
1999-08-01
Image compression is fundamental to the efficient and cost-effective use of digital medical imaging technology and applications. Wavelet transform techniques currently provide the most promising approach to high-quality image compression which is essential for diagnostic medical applications. A novel approach to image compression based on the wavelet decomposition has been developed which utilizes the shape or morphology of wavelet transform coefficients in the wavelet domain to isolate and retain significant coefficients corresponding to image structure and features. The remaining coefficients are further compressed using a combination of run-length and Huffman coding. The technique has been implemented and applied to full 16 bit medical image data for a range of compression ratios. Objective peak signal-to-noise ratio performance of the compression technique was analyzed. Results indicate that good reconstructed image quality can be achieved at compression ratios of up to 15:1 for the image types studied. This technique represents an effective approach to the compression of diagnostic medical images and is worthy of further, more thorough, evaluation of diagnostic quality and accuracy in a clinical setting. PMID:10501061
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original one to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm. PMID:26890900
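The residual trick that makes the hybrid coder lossless can be sketched in a few lines (integer pixel data assumed; the lossy wavelet-fractal coder and the Huffman stage are stand-ins represented only by their outputs):

```python
def residual(original, lossy_decoded):
    # Difference between source pixels and the lossy reconstruction.
    # Coding this residual losslessly (Huffman, in the paper) makes the
    # overall scheme exactly invertible, i.e. infinite PSNR.
    return [o - d for o, d in zip(original, lossy_decoded)]

def restore(lossy_decoded, res):
    # Decoder side: lossy reconstruction plus decoded residual.
    return [d + r for d, r in zip(lossy_decoded, res)]
```

The scheme pays off when the lossy coder is good: a small-amplitude, low-entropy residual compresses far better under Huffman coding than the raw pixels would.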
Compressed sensing based on the improved wavelet transform for image processing
NASA Astrophysics Data System (ADS)
Pang, Peng; Gao, Wei; Song, Zongxi; XI, Jiang-bo
2014-09-01
Compressed sensing (CS) is a new theory that allows a signal to be sampled at a rate below the traditional Nyquist rate, offering a revolutionary approach to sampling and processing when the signal is sparse or compressible. This paper investigates how to improve CS theory and apply it in an imaging system. Exploiting the properties of the wavelet-transform subbands, an improved compressed sensing algorithm based on a single-level wavelet transform is proposed. Because most of the information is concentrated in the low-pass band after the wavelet transform, the improved algorithm measures only the low-pass wavelet coefficients of the image while preserving the high-pass wavelet coefficients; the signal can then be reconstructed exactly using appropriate reconstruction algorithms. Reconstruction is the key point on which most researchers focus, and significant progress has been made. To improve the orthogonal matching pursuit (OMP) reconstruction, the number of iteration layers is increased to ensure that the low-pass wavelet coefficients can be recovered exactly from the measurements; the image is then reconstructed with the inverse wavelet transform. Compared with the original compressed sensing algorithm, simulation results demonstrate that the proposed algorithm reduces the amount of processed data, decreases processing time markedly, and improves the recovered image quality to some extent, with the PSNR improved by about 2 to 3 dB. Experimental results show that the proposed algorithm outperforms other known CS reconstruction algorithms in the literature at the same measurement rates, while converging faster.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostically relevant information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. An optimal quadtree method was then employed to partition each subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was applied to the different types of sub-blocks. To verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method improves compression performance and achieves a balance between compression ratio and visual image quality. PMID:23049544
Mitra, S; Yang, S; Kustov, V
1998-11-01
Compression of medical images has always been viewed with skepticism, since the loss of information involved is thought to affect diagnostic information. However, recent research indicates that some wavelet-based compression techniques may not effectively reduce the image quality, even when subjected to compression ratios up to 30:1. The performance of a recently designed wavelet-based adaptive vector quantization is compared with a well-known wavelet-based scalar quantization technique to demonstrate the superiority of the former technique at compression ratios higher than 30:1. The use of higher compression with high fidelity of the reconstructed images allows fast transmission of images over the Internet for prompt inspection by radiologists at remote locations in an emergency situation, while higher quality images follow in a progressive manner if desired. Such fast and progressive transmission can also be used for downloading large data sets such as the Visible Human at a quality desired by the users for research or education. This new adaptive vector quantization uses a neural networks-based clustering technique for efficient quantization of the wavelet-decomposed subimages, yielding minimal distortion in the reconstructed images undergoing high compression. Results of compression up to 100:1 are shown for 24-bit color and 8-bit monochrome medical images. PMID:9848058
Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture
NASA Astrophysics Data System (ADS)
Anandkumar, Janavikulam; Szu, Harold H.
1995-04-01
This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in parallel from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, the best one can be selected according to various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of any specialized quantizers and encoders, the compression ratios are encouraging, with preliminary results ranging from 40:1 to 15:1 at percentage rms distortions of about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or the signal deviations that may occur due to either noise or arrhythmias, only the one wavelet family that correlates best with that particular portion of the signal is chosen. The compression is effective mainly because the chosen mother wavelet and its variations match the shape of the ECG and can transcribe the source efficiently with few wavelet coefficients. The adaptive template-matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from the fine-tuning of the wavelet selection process, which is based on localized constraints such as the shape of the signal and its energy.
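The energy-retention thresholding described above can be illustrated with a one-level Haar step. This is a minimal sketch: the Haar filter stands in for the Daubechies library used in the paper, and the 99% energy target is an illustrative choice.

```python
# Keep only the largest wavelet coefficients while retaining a fixed
# fraction of the signal energy; everything else is zeroed.
import math

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform."""
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def keep_energy(coeffs, fraction=0.99):
    """Zero the smallest coefficients while keeping `fraction` of the energy."""
    total = sum(c * c for c in coeffs)
    kept, energy = set(), 0.0
    for i in sorted(range(len(coeffs)), key=lambda i: -coeffs[i] ** 2):
        if energy >= fraction * total:
            break
        kept.add(i)
        energy += coeffs[i] ** 2
    return [c if i in kept else 0.0 for i, c in enumerate(coeffs)]

signal = [4.0, 4.1, 4.0, 3.9, 8.0, 0.0, 4.0, 4.0]   # the spike mimics a QRS complex
approx, detail = haar_step(signal)
sparse = keep_energy(approx + detail, 0.99)
print(sum(1 for c in sparse if c == 0.0), "coefficients dropped")  # 3 coefficients dropped
```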
Generalized b-spline subdivision-surface wavelets and lossless compression
Bertram, M; Duchaineau, M A; Hamann, B; Joy, K I
1999-11-24
We present a new construction of wavelets on arbitrary two-manifold topology for geometry compression. The constructed wavelets generalize symmetric tensor product wavelets with associated B-spline scaling functions to irregular polygonal base mesh domains. The wavelets and scaling functions are tensor products almost everywhere, except in the neighborhoods of some extraordinary points (points of valence other than four) in the base mesh that defines the topology. The compression of arbitrary polygonal meshes representing isosurfaces of scalar-valued trivariate functions is a primary application. The main contribution of this paper is the generalization of lifted symmetric tensor product B-spline wavelets to two-manifold geometries. Surfaces composed of B-spline patches can easily be converted to this scheme. We present a lossless compression method for geometries with or without associated functions like color, texture, or normals. The new wavelet transform is highly efficient and can represent surfaces at any level of resolution with high degrees of continuity, except at a finite number of extraordinary points in the base mesh. In the neighborhoods of these points, detail can be added to the surface to approximate any degree of continuity.
Clinical utility of wavelet compression for resolution-enhanced chest radiography
NASA Astrophysics Data System (ADS)
Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.
2000-05-01
This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size and then decompresses the image data for display at optimal resolution, matching the spatial frequency characteristics of image objects using a 4,000-square (4K) matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement over the originals. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces as well as normal cases, is in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve on the originals.
An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform
NASA Astrophysics Data System (ADS)
Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra
2011-06-01
The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topologic encoding. The proposed approach inserts a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The coarse mesh is marked using a robust mesh watermarking scheme; insertion into the coarse mesh yields high robustness to several attacks. Finally, topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at the minimum size. The experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.
Faster techniques to evolve wavelet coefficients for better fingerprint image compression
NASA Astrophysics Data System (ADS)
Shanavaz, K. T.; Mythili, P.
2013-05-01
In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing the computational speed by 81.35%, the coefficients performed much better than those reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, in this work, wavelets were evolved with resized, cropped, resized-average and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, it was found that the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256 × 256 centre-cropped images took less than one fifth of the evolution time reported in the literature, and produced an improvement of 1.009 dB in average PSNR. Improvement in average PSNR was observed for other compression ratios (CR) and for degraded images as well. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder, and the coefficients performed well with other fingerprint databases.
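The PSNR figures quoted above follow the standard definition. The snippet below is generic reference code, not the article's implementation; the peak value 255 assumes 8-bit images.

```python
# Standard PSNR: 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit data.
import math

def psnr(original, reconstructed, peak=255.0):
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak * peak / mse)

ref = [100, 110, 120, 130]
rec = [101, 109, 121, 129]     # every pixel off by 1 -> MSE = 1
print(round(psnr(ref, rec), 2))  # 48.13
```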
Optimal block boundary pre/postfiltering for wavelet-based image and video compression.
Liang, Jie; Tu, Chengjie; Tran, Trac D
2005-12-01
This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on boundary filter bank and polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. Performances of the proposed method in the removal of JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding are reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework. PMID:16370467
Multispectral image compression technology based on dual-tree discrete wavelet transform
NASA Astrophysics Data System (ADS)
Fang, Zhijun; Luo, Guihua; Liu, Zhicheng; Gan, Yun; Lu, Yu
2009-10-01
The paper proposes a combination of the DCT and the Dual-Tree Discrete Wavelet Transform (DDWT) to solve the problems in multi-spectral image data storage and transmission. The proposed method removes spectral redundancy by a 1D DCT and spatial redundancy by a 2D Dual-Tree Discrete Wavelet Transform, and therefore achieves low distortion under high compression together with high-quality reconstruction of the multi-spectral image. In tests against DCT and Haar baselines, the proposed method eliminates wavelet blocking effects and produces visually smooth images, showing that the DDWT-based scheme achieves better reconstruction quality with less noise.
NASA Astrophysics Data System (ADS)
Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.
2013-08-01
We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, leading to low/high frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients to produce a lower-variance signal. This signal is then coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter, and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Beyond the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.
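The kurtosis-based thresholding step can be sketched as follows. The excess-kurtosis formula is standard, but the rule tying the threshold to the kurtosis value is an illustrative assumption, not the paper's exact level-adjustment.

```python
# Sketch: shrink the denoising threshold in bands whose coefficients are
# spiky (high kurtosis), i.e. likely signal-dominated rather than noise.

def kurtosis(x):
    """Sample excess kurtosis (zero for a Gaussian)."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    if var == 0:
        return 0.0
    m4 = sum((v - m) ** 4 for v in x) / n
    return m4 / var ** 2 - 3.0

def denoise(coeffs, base=1.0):
    """Illustrative level-adjusted rule: high kurtosis -> smaller threshold."""
    k = kurtosis(coeffs)
    tau = base / (1.0 + max(k, 0.0))
    return [c if abs(c) > tau else 0.0 for c in coeffs]

coeffs = [0.0] * 7 + [10.0]          # one spike among near-zero coefficients
print(round(kurtosis(coeffs), 3))    # 3.143
print(denoise(coeffs))               # the spike survives, the floor is zeroed
```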
DSP accelerator for the wavelet compression/decompression of high-resolution images
Hunt, M.A.; Gleason, S.S.; Jatko, W.B.
1993-07-23
A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.
[A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].
Zhao, An; Wu, Baoming
2006-12-01
This paper presents an ECG compression algorithm based on wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are important coefficients and are kept. Otherwise, the energy loss of the transform domain is calculated according to the target PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is then determined according to the loss of energy. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside the ROI, are put into a linear quantifier. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding has been applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality and compression ratio are obtained. PMID:17228703
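The run-length coding of the significance map admits a compact sketch. This is generic RLE; the 0/1 map below (marking which coefficients were kept) is made-up data.

```python
# Run-length encode/decode a binary significance map as (value, run) pairs.

def rle_encode(bits):
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return out

sig_map = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
runs = rle_encode(sig_map)
print(runs)                           # [(1, 3), (0, 5), (1, 1), (0, 3)]
assert rle_decode(runs) == sig_map    # lossless round trip
```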
Kim, Byung S; Yoo, Sun K
2007-09-01
The use of wireless networks bears great practical importance for the instantaneous transmission of ECG signals during movement. In this paper, three typical wavelet-based ECG compression algorithms, Rajoub (RA), Embedded Zerotree Wavelet (EZ), and Wavelet Transform Higher-Order Statistics Coding (WH), were evaluated to find an appropriate ECG compression algorithm for scalable and reliable wireless tele-cardiology applications, particularly over a CDMA network. The short-term and long-term performance characteristics of the three algorithms were analyzed using normal, abnormal, and measurement noise-contaminated ECG signals from the MIT-BIH database. In addition to the processing delay measurement, compression efficiency and reconstruction sensitivity to error were also evaluated via simulation models including the noise-free channel model, random noise channel model, and CDMA channel model, as well as over an actual CDMA network currently operating in Korea. This study found that the EZ algorithm achieves the best compression efficiency in a low-noise environment, and that the WH algorithm is competitive in high-error environments, despite degraded short-term performance on abnormal or contaminated ECG signals. PMID:17701824
Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm
NASA Technical Reports Server (NTRS)
Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin
1994-01-01
The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm orders the bits in the bit stream by numerical importance, so a given code contains all lower-rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.
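The embedding idea, stripped of zerotree symbols and sign handling, reduces to transmitting coefficient bits plane by plane, most significant first, so truncating the stream at any point yields a coarser reconstruction. This is a deliberate simplification of EZW, not the full algorithm.

```python
# Bit-plane (successive-approximation) coding of non-negative coefficients.

def encode_bitplanes(coeffs, planes=4):
    stream = []
    for p in range(planes - 1, -1, -1):       # MSB plane first
        t = 1 << p
        stream.extend(1 if abs(int(c)) & t else 0 for c in coeffs)
    return stream

def decode_bitplanes(stream, n, planes=4):
    recon = [0] * n
    for i, bit in enumerate(stream):          # the stream may be truncated
        p = planes - 1 - i // n
        if bit:
            recon[i % n] |= 1 << p
    return recon

coeffs = [13, 2, 7, 0]
stream = encode_bitplanes(coeffs)
print(decode_bitplanes(stream, 4))        # [13, 2, 7, 0]  (full stream)
print(decode_bitplanes(stream[:8], 4))    # [12, 0, 4, 0]  (first two planes only)
```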
A linear quality control design for high efficient wavelet-based ECG data compression.
Hung, King-Chu; Tsai, Chin-Feng; Ku, Cheng-Tung; Wang, Huan-Sheng
2009-05-01
In ECG data compression, maintaining the reconstructed signal at a desired quality is crucial for clinical application. In this paper, a linear quality control design based on the reversible round-off non-recursive discrete periodized wavelet transform (RRO-NRDPWT) is proposed for highly efficient ECG data compression. With the advantages of error propagation resistance and octave coefficient normalization, RRO-NRDPWT enables the non-linear quantization control to obtain approximately linear distortion by using a single control variable. Based on linear programming, a linear quantization scale prediction model is presented for the quality control of the reconstructed ECG signal. Using the MIT-BIH arrhythmia database, the experimental results show that the proposed system, with lower computational complexity, obtains much better quality control performance than other wavelet-based systems. PMID:19070935
Review of digital fingerprint acquisition systems and wavelet compression
NASA Astrophysics Data System (ADS)
Hopper, Thomas
2003-04-01
Over the last decade many criminal justice agencies have replaced their fingerprint card based systems with electronic processing. We examine these new systems and find that image acquisition to support the identification application is consistently a challenge. Image capture and compression are widely dispersed and relatively new technologies within criminal justice information systems. Image quality assurance programs are just beginning to mature.
The wavelet transform and the suppression theory of binocular vision for stereo image compression
Reynolds, W.D. Jr; Kenyon, R.V.
1996-08-01
In this paper, a method for the compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, in which the subbands convey the necessary frequency-domain information.
Wavelet-based ECG compression by bit-field preserving and running length encoding.
Chan, Hsiao-Lung; Siao, You-Chen; Chen, Szi-Wen; Yu, Shih-Fan
2008-04-01
Efficient electrocardiogram (ECG) compression can reduce the payload of real-time ECG transmission as well as reduce the amount of data storage in long-term ECG recording. In this paper an ECG compression/decompression architecture based on the bit-field preserving (BFP) and running length encoding (RLE)/decoding schemes incorporated with the discrete wavelet transform (DWT) is proposed. Compared to complex and repetitive manipulations in the set partitioning in hierarchical tree (SPIHT) coding and the vector quantization (VQ), the proposed algorithm has advantages of simple manipulations and a feedforward structure that would be suitable to implement on very-large-scale integrated circuits and general microcontrollers. PMID:18164098
Speech coding and compression using wavelets and lateral inhibitory networks
NASA Astrophysics Data System (ADS)
Ricart, Richard
1990-12-01
The purpose of this thesis is to introduce the concept of lateral inhibition as a generalized technique for compressing time/frequency representations of electromagnetic and acoustical signals, particularly speech. This requires at least a rudimentary treatment of the theory of frames (which generalizes most commonly known time/frequency distributions), the biology of hearing, and digital signal processing. As such, this material, along with the interrelationships of the disparate subjects, is presented in a tutorial style. This may leave the mathematician longing for more rigor, the neurophysiological psychologist longing for more substantive support of the hypotheses presented, and the engineer longing for a reprieve from the theoretical barrage. Despite the problems that arise when trying to appeal to too wide an audience, this thesis should be a cogent analysis of the compression of time/frequency distributions via lateral inhibitory networks.
Applications of wavelet-based compression to multidimensional Earth science data
NASA Technical Reports Server (NTRS)
Bradley, Jonathan N.; Brislawn, Christopher M.
1993-01-01
A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
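Codebook training for vector quantization of subband vectors can be sketched with plain k-means. This is a stand-in for the paper's constrained optimization procedure; the 2-D vectors below are illustrative, and the deterministic initialization is chosen for reproducibility.

```python
# Minimal k-means codebook training and nearest-codeword quantization.

def kmeans(vectors, k, iters=10):
    codebook = list(vectors[:k])              # deterministic init for the sketch
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(v, codebook[j])))
            groups[j].append(v)
        for j, g in enumerate(groups):        # move each codeword to its centroid
            if g:
                codebook[j] = tuple(sum(c[d] for c in g) / len(g)
                                    for d in range(len(g[0])))
    return codebook

def quantize(v, codebook):
    """Index of the nearest codeword (what actually gets transmitted)."""
    return min(range(len(codebook)),
               key=lambda j: sum((a - b) ** 2 for a, b in zip(v, codebook[j])))

data = [(0.0, 0.1), (5.0, 5.1), (0.1, 0.0), (5.1, 4.9)]   # two obvious clusters
cb = kmeans(data, 2)
print(quantize((0.05, 0.05), cb), quantize((5.0, 5.0), cb))  # 0 1
```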
SPECTRUM analysis of multispectral imagery in conjunction with wavelet/KLT data compression
Bradley, J.N.; Brislawn, C.M.
1993-12-01
The data analysis program, SPECTRUM, is used for fusion, visualization, and classification of multi-spectral imagery. The raw data used in this study is Landsat Thematic Mapper (TM) 7-channel imagery, with 8 bits of dynamic range per channel. To facilitate data transmission and storage, a compression algorithm is proposed based on spatial wavelet transform coding and KLT decomposition of interchannel spectral vectors, followed by adaptive optimal multiband scalar quantization. The performance of SPECTRUM clustering and visualization is evaluated on compressed multispectral data. 8-bit visualizations of 56-bit data show little visible distortion at 50:1 compression and graceful degradation at higher compression ratios. Two TM images were processed in this experiment: a 1024 x 1024-pixel scene of the region surrounding the Chernobyl power plant, taken a few months before the reactor malfunction, and a 2048 x 2048 image of Moscow and surrounding countryside.
NASA Astrophysics Data System (ADS)
Zeng, Li; Jansen, Christian; Unser, Michael A.; Hunziker, Patrick
2001-12-01
High resolution multidimensional image data yield huge datasets. For compression and analysis, 2D approaches are often used, neglecting the information coherence in higher dimensions, which can be exploited for improved compression. We designed a wavelet compression algorithm suited for data of arbitrary dimensions, and assessed its ability to compress 4D medical images. Basically, separable wavelet transforms are performed in each dimension, followed by quantization and standard coding. Results were compared with a conventional 2D wavelet approach. We found that in 4D heart images this algorithm allowed high compression ratios, preserving diagnostically important image features. For similar image quality, compression ratios using the 3D/4D approaches were typically much higher (2-4 times per added dimension) than with the 2D approach. For low-resolution images created with the requirement to keep predefined key diagnostic information (contractile function of the heart), compression ratios up to 2000 could be achieved. Thus, higher-dimensional wavelet compression is feasible, and by exploiting data coherence in higher image dimensions it allows much higher compression than comparable 2D approaches. The proven applicability of this approach to multidimensional medical imaging has important implications especially for the fields of image storage and transmission and, specifically, for the emerging field of telemedicine.
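The separable construction described above applies the same 1-D wavelet step along each axis in turn. A 2-D Haar sketch follows; the paper's filters differ, and the 3-D/4-D extension simply adds more axis passes.

```python
# Separable 2-D wavelet step: transform rows, then columns.
import math

def haar1d(x):
    """One level of the orthonormal Haar transform, approx then detail halves."""
    s = math.sqrt(2.0)
    half = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    half += [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return half

def haar2d(img):
    rows = [haar1d(r) for r in img]             # transform along axis 1
    cols = [haar1d(list(c)) for c in zip(*rows)]  # then along axis 0
    return [list(r) for r in zip(*cols)]

img = [[1.0, 1.0], [1.0, 1.0]]                  # a constant patch
out = haar2d(img)
print(round(out[0][0], 6))   # 2.0  (all energy lands in the low-low coefficient)
```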
Fahmy, Gamal; Black, John; Panchanathan, Sethuraman
2006-06-01
Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG-7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented. PMID:16764265
Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz
2016-03-01
In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems, and the key requirement is being able to recover the original signal from the compressed signal. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal so that it remains confidential to everyone except physicians. Mobile processors are used, with no need for any computer to serve this purpose. After initial preprocessing, such as removal of baseline noise and Gaussian noise, peak detection, and determination of heart rate, the ECG signal is compressed. In the compression stage, thresholding techniques are applied after three steps of wavelet transform (db04); Huffman coding with chaos is then used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72%. The ECG signals are then sent to a telemedicine center via the TCP/IP protocol to obtain a specialist diagnosis. PMID:26779641
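One common chaos-based encryption scheme XORs the compressed bytes with a keystream generated by the logistic map. The sketch below is illustrative: the map parameter and seed are assumptions, and the paper's exact chaos scheme may differ.

```python
# XOR a byte stream with a logistic-map keystream; XOR again to decrypt.

def logistic_keystream(n, x0=0.613, r=3.99):
    """n pseudo-random bytes from the logistic map x <- r * x * (1 - x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def chaos_xor(data, x0=0.613):
    ks = logistic_keystream(len(data), x0)
    return bytes(a ^ b for a, b in zip(data, ks))

packet = b"compressed ECG bits"       # stands in for the Huffman-coded stream
cipher = chaos_xor(packet)
assert cipher != packet               # data is scrambled
assert chaos_xor(cipher) == packet    # same seed recovers the original
```

The seed `x0` plays the role of the shared secret key; the XOR is its own inverse, so encryption and decryption are the same operation.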
Clinical evaluation of wavelet compression of digitized chest x-rays
NASA Astrophysics Data System (ADS)
Erickson, Bradley J.; Manduca, Armando; Persons, Kenneth R.
1997-05-01
In this paper we assess lossy image compression of digitized chest x-rays using radiologist assessment of anatomic structures and numerical measurements of image accuracy. Forty chest x-rays were digitized and compressed using an irreversible wavelet technique at 10:1, 20:1, 40:1 and 80:1. These were presented in a blinded fashion alongside an uncompressed image for subjective A-B comparison of 11 anatomic structures as well as overall quality. Mean error, RMS error, maximum pixel error, and the number of pixels within 1 percent of their original value were also computed for compression ratios from 10:1 to 80:1. We found that at low compression there was a slight preference for the compressed images. There was no significant difference at 20:1 and 40:1. There was a slight preference for some structures for the original compared with the 80:1 compressed images. Numerical measures demonstrated high image faithfulness, both in terms of the number of pixels within 1 percent of their original value and the average error over all pixels. Our findings suggest that lossy compression at 40:1 or more can be used without perceptible loss in the demonstration of anatomic structures.
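The numerical accuracy measures listed above have standard definitions and can be computed directly. This is generic reference code with made-up pixel values, not the study's implementation.

```python
# Mean error, RMS error, maximum pixel error, and the fraction of pixels
# within 1% of their original value.
import math

def error_metrics(orig, recon):
    n = len(orig)
    diffs = [a - b for a, b in zip(orig, recon)]
    mean_err = sum(diffs) / n
    rms = math.sqrt(sum(d * d for d in diffs) / n)
    max_err = max(abs(d) for d in diffs)
    within = sum(1 for a, b in zip(orig, recon)
                 if abs(a - b) <= 0.01 * abs(a)) / n
    return mean_err, rms, max_err, within

orig = [100.0, 200.0, 50.0, 400.0]
recon = [100.5, 199.0, 50.0, 390.0]
mean_err, rms, max_err, within = error_metrics(orig, recon)
print(max_err, within)   # 10.0 0.75
```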
Block based image compression technique using rank reduction and wavelet difference reduction
NASA Astrophysics Data System (ADS)
Bolotnikova, Anastasia; Rasti, Pejman; Traumann, Andres; Lusi, Iiris; Daneshmand, Morteza; Noroozi, Fatemeh; Samuel, Kadri; Sarkar, Suman; Anbarjafari, Gholamreza
2015-12-01
In this paper a new block-based lossy image compression technique using rank reduction of the image and the wavelet difference reduction (WDR) technique is proposed. Rank reduction is obtained by applying singular value decomposition (SVD). The input image is divided into blocks of equal size, after which quantization by SVD is carried out on each block, followed by the WDR technique. Reconstruction is carried out by decompressing each block's bit stream and then merging all of them to obtain the decompressed image. Visual and quantitative experimental results of the proposed image compression technique are shown and compared with those of the WDR technique and JPEG2000. The comparison shows that the proposed image compression technique outperforms the WDR and JPEG2000 techniques.
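The block-wise SVD rank-reduction step can be sketched as below. This is a simplified illustration (block size and rank `k` are assumed parameters; the WDR coding stage that would follow is omitted):

```python
import numpy as np

def rank_reduce_block(block, k):
    """Keep the k largest singular values of one image block (the SVD step)."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k, :]

def compress_blocks(image, block=8, k=2):
    """Tile the image into equal-size blocks and rank-reduce each one."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i+block, j:j+block] = rank_reduce_block(
                image[i:i+block, j:j+block].astype(float), k)
    return out

img = np.add.outer(np.arange(32.0), np.arange(32.0))  # smooth ramp test image
rec = compress_blocks(img)
print(np.max(np.abs(rec - img)))  # smooth blocks have low rank, so error is tiny
```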
Ho, B.K.T.; Tsai, M.J.; Wei, J.; Ma, M.; Saipetch, P.
1996-12-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (approximately 20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Moving Picture Experts Group's (MPEG's) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme which labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve the compression efficiency in certain cases. The method can be generalized to any dynamic image-sequence application sensitive to block artifacts.
Yekani Khoei, Elmira; Hassannejad, Reza; Mozaffari Tazehkand, Behzad
2015-01-01
Introduction: Body sensor networks are a key technology for remotely monitoring physiological information, enabling physicians to predict and diagnose different conditions effectively. These networks consist of small sensors with sensing capability but limited computation and energy. Methods: In the present research, a new compression method based on principal component analysis and the wavelet transform is used to increase coherence. Principal component analysis is first applied to find the principal components of the data, increasing the coherence and thereby the similarity between the data and the compression rate. Then, using the wavelet transform, the data are decomposed into different scales. In the data restoration process only the significant parts are restored, and the parts of the data that contain noise are omitted. By omitting the noise, the quality of the transmitted data increases and good compression can be obtained. Results: Pilates exercises were performed by twelve patients with various dysfunctions. The results showed compression ratios of 0.7210, 0.8898, 0.6548, 0.6765, 0.6009, 0.7435, 0.7651, 0.7623, 0.7736, 0.8596, 0.8856 and 0.7102 for the proposed method, against 0.8256, 0.9315, 0.9340, 0.9509, 0.8998, 0.9556, 0.9732, 0.9580, 0.8046, 0.9448, 0.9573 and 0.9440 for the previous method (Tseng algorithm). Conclusion: Comparing compression rates and prediction errors with the available results shows the accuracy of the proposed method. PMID:25901292
Cao, Libo; Harrington, Peter de B; Harden, Charles S; McHugh, Vincent M; Thomas, Martin A
2004-02-15
Linear and nonlinear wavelet compression of ion mobility spectrometry (IMS) data are compared and evaluated. IMS provides low detection limits and rapid response for many compounds. Nonlinear wavelet compression of ion mobility spectra reduced the data to 4-5% of its original size, while eliminating artifacts in the reconstructed spectra that occur with linear compression, and the root-mean-square reconstruction error was 0.17-0.20% of the maximum intensity of the uncompressed spectra. Furthermore, nonlinear wavelet compression precisely preserves the peak location (i.e., drift time). Small variations in peak location may occur in the reconstructed spectra that were linearly compressed. A method was developed and evaluated for optimizing the compression. The compression method was evaluated with in-flight data recorded from ion mobility spectrometers mounted in an unmanned aerial vehicle (UAV). Plumes of dimethyl methylphosphonate were disseminated for interrogation by the UAV-mounted IMS system. The daublet 8 wavelet filter exhibited the best performance for these evaluations. PMID:14961740
The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
Bradley, J.N.; Brislawn, C.M.; Hopper, T.
1993-01-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
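Two of the WSQ design features noted above, symmetric boundary extension for filtering finite-length signals and scalar quantization of subband coefficients, can be sketched as follows. This is a simplified illustration: the real WSQ quantizer also has a dead zone and per-subband bin widths, which are omitted here.

```python
import numpy as np

def symmetric_extend(x, pad):
    """Whole-sample symmetric extension, so a wavelet filter sees no
    artificial discontinuity at the signal boundaries."""
    left = x[1:pad + 1][::-1]
    right = x[-pad - 1:-1][::-1]
    return np.concatenate([left, x, right])

def uniform_quantize(c, step):
    """Midtread scalar quantizer: bin indices a Huffman coder would see."""
    return np.round(c / step).astype(int)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(symmetric_extend(x, 2))   # [3. 2. 1. 2. 3. 4. 3. 2.]
q = uniform_quantize(np.array([0.4, 1.6, -2.3]), 1.0)
print(q)                        # [ 0  2 -2]
```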
NASA Astrophysics Data System (ADS)
Hou, Ying
2009-10-01
In this paper, a hyperspectral image lossy coder using the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm based on the Karhunen-Loève transform (KLT) and wavelet transform (WT) is proposed. This coding scheme adopts the 1D KLT as a spectral decorrelator and the 2D WT as a spatial decorrelator. Furthermore, the computational complexity and coding performance of the low-complexity KLT are compared and evaluated. In comparison with several state-of-the-art coding algorithms, experimental results indicate that our coder achieves better lossy compression performance.
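The 1D spectral KLT decorrelation step can be sketched as below, a minimal illustration (cube layout and function name are assumptions): the transform is built from the eigenvectors of the inter-band covariance, so most of the signal energy compacts into the first few components.

```python
import numpy as np

def spectral_klt(cube):
    """1-D KLT along the spectral axis; cube has shape (bands, rows, cols)."""
    bands = cube.shape[0]
    flat = cube.reshape(bands, -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)
    cov = flat @ flat.T / flat.shape[1]
    _, vecs = np.linalg.eigh(cov)                      # ascending eigenvalue order
    return (vecs.T[::-1] @ flat).reshape(cube.shape)   # largest variance first

rng = np.random.default_rng(1)
base = rng.normal(size=(1, 16, 16))
cube = np.concatenate([base * g for g in (1.0, 0.9, 0.8, 0.7)])  # correlated bands
out = spectral_klt(cube)
var = out.reshape(4, -1).var(axis=1)
print(var[0] > var[1:].sum())  # energy compacts into the first component
```

A 2D spatial wavelet transform would then be applied to each decorrelated component before 3D EZBC coding.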
NASA Technical Reports Server (NTRS)
Matic, Roy M.; Mosley, Judith I.
1994-01-01
Future space-based remote sensing systems will have data transmission requirements that exceed available downlink capacity, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.
NASA Astrophysics Data System (ADS)
Liang, Lei; Li, Xinwu; Gao, Xizhang; Guo, Huadong
2015-01-01
The three-dimensional (3-D) structure of forests, especially the vertical structure, is an important parameter of forest ecosystem modeling for monitoring ecological change. Synthetic aperture radar tomography (TomoSAR) provides scene reflectivity estimation of vegetation along elevation coordinates. Due to the advantages of super-resolution imaging and a small number of measurements, distributed compressive sensing (DCS) inversion techniques for polarimetric SAR tomography were successfully developed and applied. This paper addresses the 3-D imaging of forested areas based on the framework of DCS using fully polarimetric (FP) multibaseline SAR interferometric (MB-InSAR) tomography at the P-band. A new DCS-based FP TomoSAR method is proposed: a wavelet-based distributed compressive sensing FP TomoSAR method (FP-WDCS TomoSAR method). The method takes advantage of the joint sparsity between polarimetric channel signals in the wavelet domain to jointly invert the reflectivity profiles in each channel. The method not only allows high accuracy and super-resolution imaging with a low number of acquisitions, but can also obtain the polarization information of the vertical structure of forested areas. The effectiveness of the techniques for polarimetric SAR tomography is demonstrated using FP P-band airborne datasets acquired by the ONERA SETHI airborne system over a test site in Paracou, French Guiana.
Kowal, Grzegorz; Lazarian, A. E-mail: lazarian@astro.wisc.ed
2010-09-01
We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.
Li, Qiyue; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Lai, Zongying; Ye, Jing; Chen, Zhong
2015-06-01
Compressed sensing MRI (CS-MRI) is a promising technology to accelerate magnetic resonance imaging. Both improving the image quality and reducing the computation time are important for this technology. Recently, a patch-based directional wavelet (PBDW) has been applied in CS-MRI to improve edge reconstruction. However, this method is time consuming since it involves extensive computations, including geometric direction estimation and numerous iterations of wavelet transform. To accelerate computations of PBDW, we propose a general parallelization of patch-based processing by taking advantage of multicore processors. Additionally, two pertinent optimizations, excluding smooth patches and pre-arranged insertion sort, that make use of sparsity in MR images are also proposed. Simulation results demonstrate that the acceleration factor with the parallel architecture of PBDW approaches the number of central processing unit cores, and that the pertinent optimizations are also effective in making further accelerations. The proposed approaches allow compressed sensing MRI reconstruction to be accomplished within several seconds. PMID:25620521
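The two ideas above, processing patches in parallel across cores and skipping smooth patches entirely, can be sketched as follows. This is an assumed illustration, not the PBDW code: the per-patch operation is a placeholder, and a thread pool stands in for the multicore architecture.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_patch(patch, tol=1e-3):
    """Skip nearly smooth patches (one of the paper's optimizations);
    otherwise apply a stand-in per-patch transform."""
    if patch.std() < tol:
        return patch             # smooth patch: nothing to compute
    return patch - patch.mean()  # hypothetical per-patch operation

def parallel_patches(image, size=8, workers=4):
    """Process all patches concurrently, mirroring the multicore strategy."""
    patches = [image[i:i+size, j:j+size]
               for i in range(0, image.shape[0], size)
               for j in range(0, image.shape[1], size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_patch, patches))

img = np.zeros((16, 16))
img[:8, :8] = np.arange(64).reshape(8, 8)  # one textured patch, three smooth ones
out = parallel_patches(img)
print(len(out))  # 4 patches
```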
Duchaineau, M A; Porumbescu, S D; Bertram, M; Hamann, B; Joy, K I
2000-10-06
Currently, large physics simulations produce 3D fields whose individual surfaces, after conventional extraction processes, contain upwards of hundreds of millions of triangles. Detailed interactive viewing of these surfaces requires powerful compression to minimize storage, and fast view-dependent optimization of display triangulations to drive high-performance graphics hardware. In this work we provide an overview of an end-to-end multiresolution dataflow strategy whose goal is to increase efficiencies in practice by several orders of magnitude. Given recent advancements in subdivision-surface wavelet compression and view-dependent optimization, we present algorithms here that provide the "glue" that makes this strategy hold together. Shrink-wrapping converts highly detailed unstructured surfaces of arbitrary topology to the semi-structured form needed for wavelet compression. Remapping to triangle bintrees minimizes disturbing "pops" during real-time display-triangulation optimization and provides effective selective-transmission compression for out-of-core and remote access to these huge surfaces.
Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence
NASA Astrophysics Data System (ADS)
Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-11-01
Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.
ECG compression using Slantlet and lifting wavelet transform with and without normalisation
NASA Astrophysics Data System (ADS)
Aggarwal, Vibha; Singh Patterh, Manjeet
2013-05-01
This article analyses the performance for electrocardiogram (ECG) compression of: (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation. First, the ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within a tolerance. Then a binary look-up table is made to store the position map of zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser followed by arithmetic coding. The look-up table is encoded by Huffman coding. The results show that the LWT gives the best result among the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the case of normalisation, the TC are normalised by dividing them by N (where N is the number of samples) to reduce their range. The normalised coefficients (NC) are then thresholded. After that, the procedure is the same as for the coefficients without normalisation. The results show that the compression ratio (CR) of the LWT with normalisation is improved compared to that without normalisation.
Comparison of wavelet scalar quantization and JPEG for fingerprint image compression
NASA Astrophysics Data System (ADS)
Kidd, Robert C.
1995-01-01
An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of existing international standard JPEG, it appears likely that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Taken together, these facts suggest a decision different from the one that was made by the FBI with regard to its fingerprint image compression standard. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
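The in-cluster "wavelet transform with lifting" mentioned above splits each signal into predict and update steps, which is what makes it cheap enough to distribute across sensor nodes. A minimal sketch using the Haar lifting steps (the actual scheme's filters and cluster protocol are not specified here):

```python
import numpy as np

def lifting_haar_forward(x):
    """One lifting level: predict (detail) then update (approximation)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even        # predict: detail is the neighbor-prediction residual
    a = even + d / 2.0    # update: approximation preserves the running mean
    return a, d

def lifting_haar_inverse(a, d):
    """Undo the update then the predict step: exact reconstruction."""
    even = a - d / 2.0
    odd = d + even
    x = np.empty(2 * a.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([3.0, 5.0, 4.0, 4.0, 2.0, 6.0, 7.0, 1.0])
a, d = lifting_haar_forward(x)
print(np.allclose(lifting_haar_inverse(a, d), x))  # perfect reconstruction
```

In the WSN setting, neighboring nodes exchange only the even/odd samples needed for the predict and update steps, so each lifting level is computed in place across the cluster.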
NASA Astrophysics Data System (ADS)
Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping
2014-10-01
The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal. The use of signal-processing techniques in non-destructive testing is therefore highly valuable. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is utilised to filter insignificant components from the noisy ultrasonic signal, and a pulse compression process is used to improve the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are also analysed to acquire a higher main-to-side lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
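The Barker-coded pulse compression step can be sketched as follows. This is an illustrative sketch, not the paper's processing chain (the wavelet denoising stage is omitted, and the noise level is an assumption): cross-correlating the noisy received signal with the known code concentrates the code's energy into one sharp peak.

```python
import numpy as np

# 13-bit Barker code: its autocorrelation peaks at 13 with sidelobes of
# magnitude 1, which is why longer Barker codes give a higher
# main-to-side lobe ratio, as noted above.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(received, code):
    """Cross-correlate the received signal with the transmitted code."""
    return np.correlate(received, code, mode="full")

rng = np.random.default_rng(3)
echo = np.zeros(200)
echo[80:93] = barker13                     # echo buried at sample 80
noisy = echo + rng.normal(0, 0.3, 200)     # poor SNR, as in air-coupled testing
out = pulse_compress(noisy, barker13)
print(np.argmax(np.abs(out)))              # peak index locates the echo
```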
NASA Astrophysics Data System (ADS)
Cheng, Kai-jen; Dill, Jeffrey
2013-05-01
In this paper, a lossless-to-lossy transform-based compression method for hyperspectral images based on the Integer Karhunen-Loève Transform (IKLT) and Integer Discrete Wavelet Transform (IDWT) is proposed. Integer transforms are used to accomplish reversibility. The IKLT is used as a spectral decorrelator and the 2D-IDWT is used as a spatial decorrelator. The three-dimensional Binary Embedded Zerotree Wavelet (3D-BEZW) algorithm efficiently encodes the hyperspectral volumetric image by implementing progressive bitplane coding. The signs and magnitudes of transform coefficients are encoded separately. Lossy and lossless compression of the signs is implemented by the conventional EZW algorithm and arithmetic coding, respectively. The efficient 3D-BEZW algorithm is applied to code the magnitudes. Further compression can be achieved using arithmetic coding. The lossless and lossy compression performance is compared with other state-of-the-art predictive and transform-based image compression methods on Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Results show that the 3D-BEZW performance is comparable to predictive algorithms, while its computational cost is comparable to transform-based algorithms.
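The reversibility that integer transforms provide can be illustrated with the integer Haar (S-transform) built from lifting with floored division. This is a toy stand-in for the paper's IKLT/IDWT pair, showing only the integer-to-integer, exactly invertible property:

```python
import numpy as np

def int_haar_forward(x):
    """Integer-to-integer Haar via lifting; >> 1 is a floored halving,
    so every intermediate value stays an integer."""
    even, odd = x[0::2], x[1::2]
    d = odd - even
    a = even + (d >> 1)
    return a, d

def int_haar_inverse(a, d):
    """Reverse the lifting steps exactly, with no rounding loss."""
    even = a - (d >> 1)
    odd = d + even
    x = np.empty(2 * a.size, dtype=a.dtype)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([7, 3, 10, 10, 255, 0, 1, 2])
a, d = int_haar_forward(x)
print(np.array_equal(int_haar_inverse(a, d), x))  # exactly reversible
```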
Using wavelet fusion approach at panchromatic imagery to achieve dynamic range compression
NASA Astrophysics Data System (ADS)
Wan, Hung-Sen; Hsu, Chau-Yun; Hsu, Yuan Hung
2008-10-01
The aim of image fusion is to maximize the information in images of the same area or object taken by different sensors. It enhances features that are not apparent in any single image and is widely applied in remote sensing, medical imaging, machine vision, and military identification. In remote sensing, the latest sensors usually provide 11-bit panchromatic data containing rich radiometric information; however, standard display equipment can only reproduce 8-bit content, which limits analysis of the imagery on screen or paper. This paper shows how to preserve the original 11-bit information after DRA (Dynamic Range Adjustment) and keep the output free from color distortion during the subsequent pan/multi-spectrum image fusion process. We propose a dynamic range compression method that converts the original IKONOS panchromatic image into high-luminance, low-luminance, and typical linear-stretched images, and uses wavelet fusion to enhance the radiometric visualization while keeping good correlation with the multi-spectrum images, in order to produce a fine pan-sharpened product.
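The "typical linear stretched image" mentioned above maps 11-bit panchromatic values into the 8-bit display range. A minimal sketch (the percentile clip points are assumed parameters, and the high/low-luminance variants and wavelet fusion stage are omitted):

```python
import numpy as np

def linear_stretch_to_8bit(pan, lo_pct=2.0, hi_pct=98.0):
    """Percentile-based linear stretch of 11-bit data into 8-bit display range."""
    lo, hi = np.percentile(pan, [lo_pct, hi_pct])
    out = (pan.astype(float) - lo) / (hi - lo)
    return np.clip(np.round(out * 255.0), 0, 255).astype(np.uint8)

pan = np.linspace(0, 2047, 64 * 64).reshape(64, 64)  # synthetic 11-bit ramp
img8 = linear_stretch_to_8bit(pan)
print(img8.min(), img8.max())  # → 0 255
```

The paper's approach would produce several such renderings (high, low, and typical stretch) and fuse their wavelet coefficients rather than committing to a single mapping.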
Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping
2016-01-01
Aim. Accelerating magnetic resonance imaging (MRI) scanning can improve hospital throughput, and patients benefit from less waiting time. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) were proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method was proposed under the name exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) an exponential wavelet transform, (ii) an iterative shrinkage-thresholding algorithm, and (iii) random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the least mean absolute error, the least mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
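Component (ii) above, the iterative shrinkage-thresholding algorithm (ISTA), alternates a gradient step with soft-thresholding. A minimal sketch on a generic sparse-recovery problem (the exponential wavelet transform and random-shift components are omitted, and all sizes and the `lam` parameter are assumptions):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the shrinkage step of ISTA."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, iters=500):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        x = soft(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(40, 100)) / np.sqrt(40)  # underdetermined sensing matrix
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]        # 3-sparse ground truth
x_hat = ista(A, A @ x_true)
print(sorted(np.argsort(np.abs(x_hat))[-3:].tolist()))  # recovered support indices
```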
High performance projectile seal development for non perfect railgun bores
Wolfe, T.R.; Vine, F.E. Le; Riedy, P.E.; Panlasigui, A.; Hawke, R.S.; Susoeff, A.R.
1997-01-01
The sealing of high pressure gas behind an accelerating projectile has been developed over centuries of use in conventional guns and cannons, where the principal concerns were propulsion efficiency and trajectory accuracy and repeatability. The development of guns for use as high pressure equation-of-state (EOS) research tools increased the importance of better seals to prevent gas leakage from interfering with the experimental targets. The development of plasma driven railguns has further increased the need for higher quality seals to prevent gas and plasma blow-by. This paper summarizes more than a decade of effort to meet these increased requirements. In small bore railguns, the first improvement was prompted by the need to contain the propulsive plasma behind the projectile to avoid the initiation of current conducting paths in front of the projectile. The second major requirement arose from the development of a railgun to serve as an EOS tool, where it was necessary to maintain an evacuated region in front of the projectile throughout the acceleration process. More recently, the techniques developed for the small bore guns have been applied to large bore railguns and electro-thermal chemical guns in order to maximize their propulsion efficiency. Furthermore, large bore railguns are often less rigid and less straight than conventional homogeneous material guns. Hence, techniques to maintain seals in non-perfect, non-homogeneous material launchers have been developed and are included in this paper.
NASA Astrophysics Data System (ADS)
Huang, Bormin; Sriraja, Y.; Ahuja, Alok; Goldberg, Mitchell D.
2006-08-01
Most source coding techniques generate bitstreams in which different regions have unequal influence on data reconstruction. An uncorrected error in a more influential region can cause more error propagation in the reconstructed data. Given a limited bandwidth, unequal error protection (UEP) via channel coding with different code rates for different regions of the bitstream may yield much less error contamination than equal error protection (EEP). We propose an optimal UEP scheme that minimizes error contamination after channel and source decoding. We use JPEG2000 for source coding and turbo product code (TPC) for channel coding as an example to demonstrate this technique with ultraspectral sounder data. Wavelet compression yields unequal significance across wavelet resolutions. In the proposed UEP scheme, the statistics of erroneous pixels after TPC and JPEG2000 decoding are used to determine the optimal channel code rate for each wavelet resolution. The proposed UEP scheme significantly reduces the number of pixel errors when compared to its EEP counterpart. In practice, with a predefined set of implementation parameters (available channel codes, desired code rate, noise level, etc.), the optimal code rate allocation for UEP needs to be determined only once and can be done offline.
Compression of ECG signals using variable-length classified vector sets and wavelet transforms
NASA Astrophysics Data System (ADS)
Gurkan, Hakan
2012-12-01
In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines modeling of the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS) with residual error coding via the wavelet transform. In particular, we form the VL-CSEVS from the ECG signals, exploiting the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths obtained by an energy-based segmentation method; they are then made available to both the transmitter and the receiver used in our proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database, and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that the proposed algorithm achieves high compression ratios with low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, which has been supported by the clinical tests that we have carried out.
NASA Technical Reports Server (NTRS)
Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.; Moore, Thomas E.
2015-01-01
Plasma measurements in space are becoming increasingly fast, high resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high-performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, where zero error indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
NASA Astrophysics Data System (ADS)
Grenga, Temistocle
The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that is able to solve multi-dimensional compressible reactive flow problems efficiently. This work demonstrates the great potential of the method for direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. For these solutions, it is necessary to combine advanced numerical techniques with modern computational resources.
NASA Astrophysics Data System (ADS)
Martin, Roland; Monteiller, Vadim; Komatitsch, Dimitri; Perrouty, Stéphane; Jessell, Mark; Bonvalot, Sylvain; Lindsay, Mark
2013-12-01
We solve the 3-D gravity inverse problem using a massively parallel voxel (or finite element) implementation on a hybrid multi-CPU/multi-GPU (graphics processing unit) cluster. This allows us to obtain information on density distributions in heterogeneous media at an affordable computational cost. In a new software package called TOMOFAST3D, the inversion is solved with an iterative least-square or a gradient technique, which minimizes a hybrid L1-/L2-norm-based misfit function. It is drastically accelerated using either Haar or fourth-order Daubechies wavelet compression operators, which are applied to the sensitivity matrix kernels involved in the misfit minimization. The compression process behaves like a pre-conditioning of the huge linear system to be solved, and a reduction of the computational time by two or three orders of magnitude can be obtained for a given number of CPU processor cores. The memory storage required is also reduced by a similar factor. Finally, we show how this CPU parallel inversion code can be accelerated further by a factor between 3.5 and 10 using GPU computing. Performance levels are given for an application to Ghana, and the physical information obtained after 3-D inversion using a sensitivity matrix with around 5.37 trillion elements is discussed. Using compression, the whole inversion process can last from a few minutes to less than an hour for a given number of processor cores, instead of tens of hours for a similar number of processor cores when compression is not used.
Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression.
Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander
2016-05-01
By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143
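The core idea behind the wavelet speedup can be illustrated with a toy Haar transform: once small detail coefficients are zeroed, the reconstruction collapses into constant blocks of consecutive observations that a downstream HMM could treat as single units. This is a minimal sketch, not the HaMMLET implementation; the signal and the noise threshold are invented:

```python
# Haar-based block compression of a noisy piecewise-constant signal.
# Zeroing sub-threshold detail coefficients merges consecutive
# observations into blocks likely to share one (copy-number) state.

def haar_forward(x):
    """Full Haar decomposition of a length-2^k signal (unnormalized)."""
    coeffs = []
    approx = list(x)
    while len(approx) > 1:
        half = len(approx) // 2
        detail = [(approx[2*i] - approx[2*i+1]) / 2 for i in range(half)]
        approx = [(approx[2*i] + approx[2*i+1]) / 2 for i in range(half)]
        coeffs.append(detail)
    return approx[0], coeffs

def haar_inverse(mean, coeffs):
    approx = [mean]
    for detail in reversed(coeffs):
        nxt = []
        for a, d in zip(approx, detail):
            nxt += [a + d, a - d]  # undo the pairwise average/difference
        approx = nxt
    return approx

signal = [2.0, 2.1, 1.9, 2.0, 8.0, 8.1, 7.9, 8.0]
mean, coeffs = haar_forward(signal)
# Zero small detail coefficients (0.5 is an assumed noise level).
coeffs = [[d if abs(d) > 0.5 else 0.0 for d in level] for level in coeffs]
blocks = haar_inverse(mean, coeffs)
print(blocks)  # piecewise constant: two blocks of four equal values
```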
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
Brislawn, Christopher M.
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that is not defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, which makes the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications; it could also be used for more generic smooth interpolation.
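As a much simpler stand-in for the wavelet-smoothing scheme (which uses forward/inverse DWTs), the sketch below fills masked samples by relaxing them toward the average of their neighbors before handing the grid to a codec. The function name, toy grid, and iteration count are invented for illustration:

```python
# Fill 'masked' grid cells with smooth values prior to compression.
# Masked cells start at the global mean of valid samples, then are
# relaxed toward neighbor averages (Jacobi iteration). Valid samples
# are never modified.

def fill_masked(grid, mask, iterations=200):
    rows, cols = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    valid = [g[r][c] for r in range(rows) for c in range(cols) if not mask[r][c]]
    mean = sum(valid) / len(valid)
    for r in range(rows):
        for c in range(cols):
            if mask[r][c]:
                g[r][c] = mean
    for _ in range(iterations):
        new = [row[:] for row in g]
        for r in range(rows):
            for c in range(cols):
                if mask[r][c]:
                    nbrs = [g[rr][cc]
                            for rr, cc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                            if 0 <= rr < rows and 0 <= cc < cols]
                    new[r][c] = sum(nbrs) / len(nbrs)
        g = new
    return g

grid = [[1.0, 1.0, 1.0],
        [1.0, 0.0, 3.0],   # center value is masked (placeholder 0.0)
        [3.0, 3.0, 3.0]]
mask = [[False, False, False],
        [False, True,  False],
        [False, False, False]]
filled = fill_masked(grid, mask)
print(filled[1][1])  # smooth fill near 2.0 rather than the raw placeholder
```

Like global mean interpolation, this gives the codec a defined value everywhere; unlike it, the fill follows the local neighborhood, which is the property the paper's wavelet-smoothing achieves at scale.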
Wavelet-based watermarking and compression for ECG signals with verification evaluation.
Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan
2014-01-01
In the current open society and with the growth of human rights, people are more and more concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new approach that integrates electrocardiogram watermarking and compression, a combination that has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal-to-noise ratio (CNR) metrics to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible. PMID:24566636
Hwang, Wen-Jyi; Chine, Ching-Fung; Li, Kuo-Jung
2003-03-01
In this paper, a novel medical data compression algorithm, termed the layered set partitioning in hierarchical trees (LSPIHT) algorithm, is presented for telemedicine applications. In the LSPIHT, the encoded bit streams are divided into a number of layers for transmission and reconstruction. Starting from the base layer, by accumulating bit streams up to different enhancement layers, we can reconstruct medical data with various signal-to-noise ratios (SNRs) and/or resolutions. Receivers with distinct specifications can then share the same source encoder to reduce the complexity of telecommunication networks for telemedicine applications. Numerical results show that, besides having low network complexity, the LSPIHT attains better rate-distortion performance compared with other algorithms for encoding medical data. PMID:12670019
Wavelet-based very low bandwidth video compression for physical security
NASA Astrophysics Data System (ADS)
Cox, Paul G.
1998-12-01
Video cameras have become a key component of physical security and continue to grow in importance in today's environment. Video cameras must often be installed in remote locations or locations where physical tampering may be a factor. The solution is to transmit the video over wireless communication links. Often, the communication bandwidths are very narrow (typically less than 9.6 kbit/s). In addition, the image transmission must be made in real time or near-real time, while still maintaining the integrity and quality of the imagery. This poses a very challenging problem for the transmission of imagery, in particular motion imagery or video. Trident's WaveNet program offers a solution to this problem; the primary objective of the effort is to provide a real-time, high-quality video compression capability. This paper discusses the WaveNet program with respect to the application of physical security.
Wavelets on Planar Tessellations
Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.
2000-02-25
We present a new technique for progressive approximation and compression of polygonal objects in images. Our technique uses local parameterizations defined by meshes of convex polygons in the plane. We generalize a tensor product wavelet transform to polygonal domains to perform multiresolution analysis and compression of image regions. The advantage of our technique over conventional wavelet methods is that the domain is an arbitrary tessellation rather than, for example, a uniform rectilinear grid. We expect this technique to have many applications: image compression, progressive transmission, radiosity, virtual reality, and image morphing.
NASA Astrophysics Data System (ADS)
Khieovongphachanh, Vimontha; Hamamoto, Kazuhiko; Kondo, Shozo
In this paper, we investigate an optimized quantization method in the JPEG2000 application to medical ultrasonic echo images. JPEG2000 has been issued as the new standard image compression technique; it is based on the wavelet transform (WT) and is incorporated into DICOM (Digital Imaging and Communications in Medicine). There are two quantization methods. One is scalar derived quantization (SDQ), which is usually used in standard JPEG2000. The other is scalar expounded quantization (SEQ), which can be optimized by the user. This paper therefore presents an optimization of the quantization step, determined by a genetic algorithm (GA), and compares the results with SDQ and with SEQ determined by the arithmetic-average method. The purpose of this paper is to improve image quality and compression ratio for medical ultrasonic echo images. Image quality is evaluated by an objective assessment, PSNR (peak signal-to-noise ratio), and by a subjective assessment performed by ultrasonographers from Tokai University Hospital and Tokai University Hachioji Hospital. The results show that SEQ determined by GA provides better image quality than SDQ and than SEQ determined by the arithmetic-average method. Additionally, the three quantization-step optimization methods are applied to a thin-wire target image for analysis of the point spread function.
NASA Astrophysics Data System (ADS)
Smith, Charles L.; Chu, Wei-Kom; Wobig, Randy; Chao, Hong-Yang; Enke, Charles
1999-07-01
An ongoing PACS project at our facility has been expanded to include providing and managing images used for the routine clinical operation of the department of radiation oncology. The intent of our investigation has been to enable our clinical radiotherapy service to enter the telemedicine environment through the use of a PACS system initially implemented in the department of radiology. The backbone of the imaging network includes five CT and three MR scanners located across three imaging centers. A PC workstation in the department of radiation oncology was used to transmit CT images to a satellite facility located approximately 60 miles from the primary center. Chest CT images were used to analyze network transmission performance. The connectivity established between the primary department and the satellite has fulfilled all image criteria required by the oncologist. Establishing the link to the oncologist at the satellite diminished the bottlenecking of imaging-related tasks at the primary facility due to physician absence. A 30:1 compression ratio using a wavelet-based algorithm provided clinically acceptable images for treatment planning. Clinical radiotherapy images can be effectively managed in a wide-area network to link satellite facilities to larger clinical centers.
Resnikoff, H.L.
1993-01-01
The theory of compactly supported wavelets is now four years old. In that short period, it has stimulated significant research in pure mathematics; has been the source of new numerical methods for the solution of nonlinear partial differential equations, including Navier-Stokes; and has been applied to digital signal-processing problems, ranging from signal detection and classification to signal compression for speech, audio, images, seismic signals, and sonar. Wavelet channel coding has even been proposed for code division multiple access digital telephony. In each of these applications, prototype wavelet solutions have proved to be competitive with established methods, and in many cases they are already superior.
Schlossnagle, G.; Restrepo, J.M.; Leaf, G.K.
1993-12-01
The properties of periodized Daubechies wavelets on [0,1] are detailed and contrasted against their counterparts which form a basis for L^2(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and several tabulated values are included.
Wavelet Representation of Contour Sets
Bertram, M; Laney, D E; Duchaineau, M A; Hansen, C D; Hamann, B; Joy, K I
2001-07-19
We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression, introducing high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.
Low-Oscillation Complex Wavelets
NASA Astrophysics Data System (ADS)
ADDISON, P. S.; WATSON, J. N.; FENG, T.
2002-07-01
In this paper we explore the use of two low-oscillation complex wavelets—Mexican hat and Morlet—as powerful feature detection tools for data analysis. These wavelets, which have been largely ignored to date in the scientific literature, allow for a decomposition which is more “temporal than spectral” in wavelet space. This is shown to be useful for the detection of small amplitude, short duration signal features which are masked by much larger fluctuations. Wavelet transform-based methods employing these wavelets (based on both wavelet ridges and modulus maxima) are developed and applied to sonic echo NDT signals used for the analysis of structural elements. A new mobility scalogram and associated reflectogram is defined for analysis of impulse response characteristics of structural elements and a novel signal compression technique is described in which the pertinent signal information is contained within a few modulus maxima coefficients. As an example of its usefulness, the signal compression method is employed as a pre-processor for a neural network classifier. The authors believe that low oscillation complex wavelets have wide applicability to other practical signal analysis problems. Their possible application to two such problems is discussed briefly—the interrogation of arrhythmic ECG signals and the detection and characterization of coherent structures in turbulent flow fields.
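A rough sketch of the kind of fine-scale feature detection described above, using the real-valued Mexican hat wavelet and an invented test signal (the paper's actual methods are built on wavelet ridges and modulus maxima of complex wavelets): a small, short-duration blip riding on a large slow oscillation stands out clearly at a fine wavelet scale, because the wavelet's near-zero mean suppresses the slow component.

```python
import math

# Continuous wavelet transform (one scale) with a Mexican hat wavelet,
# used to locate a small feature masked by a much larger fluctuation.

def mexican_hat(t):
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def cwt_row(signal, scale):
    n = len(signal)
    out = []
    for b in range(n):  # translation
        acc = 0.0
        for k in range(n):
            acc += signal[k] * mexican_hat((k - b) / scale)
        out.append(acc / math.sqrt(scale))
    return out

n = 256
blip_at = 100
signal = [math.sin(2 * math.pi * k / n) for k in range(n)]    # large, slow
for k in range(blip_at - 2, blip_at + 3):
    signal[k] += 0.3 * mexican_hat((k - blip_at) / 1.5)       # small, short

row = cwt_row(signal, scale=1.5)
peak = max(range(n), key=lambda b: abs(row[b]))
print("detected feature near sample", peak)
```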
Wavelet theory and its applications
Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.
1996-07-01
This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed-parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.
Optical wavelet transform for fingerprint identification
NASA Astrophysics Data System (ADS)
MacDonald, Robert P.; Rogers, Steven K.; Burns, Thomas J.; Fielding, Kenneth H.; Warhola, Gregory T.; Ruck, Dennis W.
1994-03-01
The Federal Bureau of Investigation (FBI) has recently sanctioned a wavelet fingerprint image compression algorithm developed for reducing storage requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical fingerprint identification process. Wavelet filters are created from computer- generated holograms of biorthogonal wavelets, the same wavelets implemented in the FBI algorithm. Using a detour phase holographic technique, a complex binary filter mask is created with both symmetry and linear phase. The wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial Light Modulator and the biorthogonal wavelet filters. A telescopic lens combination scales the transformed fingerprint onto the filters, providing a means of adjusting the biorthogonal wavelet filter dilation continuously. The wavelet transformed fingerprint is then applied to an optical fingerprint identification process. Comparison between normal fingerprints and wavelet transformed fingerprints shows improvement in the optical identification process, in terms of rotational invariance.
Visibility of wavelet quantization noise
NASA Technical Reports Server (NTRS)
Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.
1997-01-01
The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-lambda), where r is the display visual resolution in pixels/degree and lambda is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
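The level-to-frequency relation f = r·2^(-lambda) and the resulting per-level quantization matrix can be sketched as follows. The threshold model here is an illustrative log-parabola with placeholder constants, not the paper's fitted values, and it ignores orientation and color channel:

```python
import math

r = 32.0            # assumed display resolution, pixels/degree
levels = [1, 2, 3, 4]

def spatial_frequency(lam, r=r):
    # basis functions at wavelet level lam occupy frequency r * 2^(-lam)
    return r * 2.0 ** (-lam)

# Hypothetical threshold model: log-parabola in log spatial frequency,
# with minimum threshold T0 at peak-sensitivity frequency f0 (all assumed).
T0, f0, K = 0.5, 4.0, 1.5
def threshold(f):
    return T0 * 10.0 ** (K * (math.log10(f) - math.log10(f0)) ** 2)

# A 'perceptually lossless' quantization step is twice the threshold,
# since uniform quantization error is at most half a step.
qmatrix = {lam: 2.0 * threshold(spatial_frequency(lam)) for lam in levels}
for lam, q in qmatrix.items():
    print(f"level {lam}: f = {spatial_frequency(lam):5.2f} cyc/deg, step = {q:.3f}")
```

Since thresholds rise with spatial frequency, the finer (higher-frequency) levels tolerate larger quantization steps, which is where most of the bit savings come from.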
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Sharpley, Robert C.
1999-01-01
This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: (1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; (2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and (3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based particle image velocimetry algorithms and reduced-order input-output models for nonlinear systems utilizing wavelet approximations.
Multiresolution With Super-Compact Wavelets
NASA Technical Reports Server (NTRS)
Lee, Dohyung
2000-01-01
The solution data computed from large scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing time as well as technical difficulties in analyzing the data. The excessive storage demands a corresponding huge penalty in I/O time, rendering time, and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which has rather simple settings, computational field simulation data needs more careful treatment when applying the multiresolution technique. While image data sits on a regularly spaced grid, simulation data usually resides on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. The data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. On the other hand, numerical solutions have smoothness almost everywhere and discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of accuracy.
Proximity sensing with wavelet generated video
NASA Astrophysics Data System (ADS)
Noel, Steven E.; Szu, Harold H.
1998-10-01
In this paper we introduce wavelet video processing of proximity sensor signals. Proximity sensing is required for a wide range of military and commercial applications, including weapon fuzing, robotics, and automotive collision avoidance. While our proposed method temporarily increases signal dimension, it eventually performs data compression through the extraction of salient signal features. This data compression in turn reduces the necessary complexity of the remaining computational processing. We demonstrate our method of wavelet video processing via the proximity sensing of nearby objects through their Doppler shift. In doing this we perform a continuous wavelet transform on the Doppler signal, after subjecting it to a time-varying window. We then extract signal features from the resulting wavelet video, which we use as input to pattern recognition neural networks. The networks are trained to estimate the time-varying Doppler shift from the extracted features. We test the estimation performance of the networks using different degrees of nonlinearity in the frequency shift over time and different levels of noise. We give the analytical result that the signal-to-noise enhancement of our proposed method is at least as good as the square root of the number of video frames, although more work is needed to quantify this completely. Real-time wavelet-based video processing and compression technology recently developed under the DOD WAVENET program offers an exciting opportunity to investigate our proposed method more fully.
Visibility of Wavelet Quantization Noise
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)
1995-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
Wavelet Analysis of Space Solar Telescope Images
NASA Astrophysics Data System (ADS)
Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian
2003-12-01
The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a new technique developed over the last 10 years, with great application potential. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.
Directional spherical multipole wavelets
Hayn, Michael; Holschneider, Matthias
2009-07-15
We construct a family of admissible analysis reconstruction pairs of wavelet families on the sphere. The construction is an extension of the isotropic Poisson wavelets. Similar to those, the directional wavelets allow a finite expansion in terms of off-center multipoles. Unlike the isotropic case, the directional wavelets are not a tight frame. However, at small scales, they almost behave like a tight frame. We give an explicit formula for the pseudodifferential operator given by the combination analysis-synthesis with respect to these wavelets. The Euclidean limit is shown to exist and an explicit formula is given. This allows us to quantify the asymptotic angular resolution of the wavelets.
NASA Astrophysics Data System (ADS)
Jones, B. J. T.
Wavelet analysis has become a major tool in many aspects of data handling, whether it be statistical analysis, noise removal or image reconstruction. Wavelet analysis has worked its way into fields as diverse as economics, medicine, geophysics, music and cosmology.
Application specific compression : final report.
Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.
2008-12-01
With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing the impact of compression on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into coefficients that maintain spatial as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques at higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
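The zero-and-encode strategy described above can be illustrated with a minimal orthonormal Haar transform in NumPy. The test signal, threshold, and level count below are invented for the sketch; the report's actual filters and lossless coder are not specified here.

```python
import numpy as np

def haar_dwt(x, levels):
    """Orthonormal Haar analysis: repeatedly split into low-pass (a) and high-pass (d)."""
    coeffs, a = [], np.asarray(x, float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)          # finest details first
    coeffs.append(a)              # final approximation last
    return coeffs

def haar_idwt(coeffs):
    """Exact inverse of haar_dwt."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        x = np.empty(2 * a.size)
        x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = x
    return a

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n) / n
signal = np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(n)  # target + noise

coeffs = haar_dwt(signal, levels=4)
# Zero the high-frequency, low-amplitude coefficients (the "noise" portion),
# keeping the approximation band untouched.
thresholded = [np.where(np.abs(c) < 0.05, 0.0, c) for c in coeffs[:-1]] + [coeffs[-1]]
frac_zeroed = sum(int((c == 0).sum()) for c in thresholded) / n
recon = haar_idwt(thresholded)
rel_err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
```

With this setup well over 80% of the coefficients are zeroed while the reconstruction stays within a few percent of the original, mirroring the behavior the report describes: long runs of zeros for the lossless coder, small impact on the low-frequency signature.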
Fryer, M.O.
1997-05-01
This paper describes the use of wavelet transform techniques to analyze typical data found in industrial applications. A way of detecting system changes using wavelet transforms is described. The results of applying this method are described for several typical applications. The wavelet technique is compared with the use of Fourier transform methods.
Szu, H.; Hsu, C.
1996-12-31
Human sensor systems (HSS) may be approximately described as an adaptive or self-learning version of the Wavelet Transform (WT) that can learn suitable transform mother wavelets from several input-output associative pairs. Such an Adaptive WT (AWT) is a redundant combination of mother wavelets used either to represent or to classify inputs.
Wavelet encoding and variable resolution progressive transmission
NASA Technical Reports Server (NTRS)
Blanford, Ronald P.
1993-01-01
Progressive transmission is a method of transmitting and displaying imagery in stages of successively improving quality. The subsampled lowpass image representations generated by a wavelet transformation suit this purpose well, but for best results the order of presentation is critical. Candidate data for transmission are best selected using dynamic prioritization criteria generated from image contents and viewer guidance. We show that wavelets are not only suitable but superior when used to encode data for progressive transmission at non-uniform resolutions. This application does not preclude additional compression using quantization of highpass coefficients, which to the contrary results in superior image approximations at low data rates.
Symplectic wavelet transformation.
Fan, Hong-Yi; Lu, Hai-Liang
2006-12-01
Usually a wavelet transform is based on dilated-translated wavelets. We propose a symplectic-transformed-translated wavelet family $\psi^{*}_{r,s}(z-\kappa)$ ($r$, $s$ are the symplectic transform parameters, $|s|^2 - |r|^2 = 1$, $\kappa$ is a translation parameter) generated from the mother wavelet $\psi$, and the corresponding wavelet transformation $W_{\psi}f(r,s;\kappa) = \int_{-\infty}^{\infty} (d^2 z/\pi)\, f(z)\, \psi^{*}_{r,s}(z-\kappa)$. This new transform possesses well-behaved properties and is related to the quantum mechanical version of the optical Fresnel transform. PMID:17099740
[An algorithm of a wavelet-based medical image quantization].
Hou, Wensheng; Wu, Xiaoying; Peng, Chenglin
2002-12-01
The compression of medical images is key to the study of tele-medicine and PACS. We have studied the statistical distribution of wavelet subimage coefficients and concluded that it closely follows a Laplacian distribution. Based on the statistical properties of the image wavelet decomposition, an image quantization algorithm is proposed. In this algorithm, we select the sample standard deviation as the key quantization threshold in every wavelet subimage. Tests have shown that the main advantages of this algorithm are simple computation and the predictability of coefficients within each quantization threshold range. High compression efficiency can also be obtained. Therefore, this algorithm can potentially be used in tele-medicine and PACS. PMID:12561372
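A minimal sketch of the quantization rule described above: per-subband dead-zone quantization with the sample standard deviation as the threshold. The Laplacian-distributed test subband and the choice of the standard deviation as the step size are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def quantize_subband(coeffs):
    """Dead-zone quantize one wavelet subband, using the sample standard
    deviation of the subband as both the zeroing threshold and the step size."""
    s = coeffs.std(ddof=1)                                       # sample standard deviation
    q = np.where(np.abs(coeffs) < s, 0.0, np.round(coeffs / s) * s)
    return q, s

# Wavelet subimage coefficients are modeled as approximately Laplacian.
rng = np.random.default_rng(42)
subband = rng.laplace(scale=1.0, size=10000)
q, s = quantize_subband(subband)
frac_zero = float((q == 0).mean())    # long zero runs feed the entropy coder
max_err = float(np.abs(q - subband).max())
```

For a Laplacian subband the dead zone alone zeroes roughly three quarters of the coefficients (P(|x| < sigma) is about 0.76), while the quantization error is bounded by one step, which is the predictability property the abstract highlights.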
Option pricing from wavelet-filtered financial series
NASA Astrophysics Data System (ADS)
de Almeida, V. T. X.; Moriconi, L.
2012-10-01
We perform wavelet decomposition of high frequency financial time series into large and small time scale components. Taking the FTSE100 index as a case study, and working with the Haar basis, it turns out that the small scale component defined by most (≃99.6%) of the wavelet coefficients can be neglected for the purpose of option premium evaluation. The relevance of the hugely compressed information provided by low-pass wavelet-filtering is related to the fact that the non-Gaussian statistical structure of the original financial time series is essentially preserved for expiration times larger than just one trading day.
Manchanda, P.; Meenakshi
2009-07-02
Recently Manchanda, Meenakshi and Siddiqi have studied Haar-Vilenkin wavelet and a special type of non-uniform multiresolution analysis. Haar-Vilenkin wavelet is a generalization of Haar wavelet. Motivated by the paper of Gabardo and Nashed we have introduced a class of multiresolution analysis extending the concept of classical multiresolution analysis. We present here a resume of these results. We hope that applications of these concepts to some significant real world problems could be found.
Progressive Compression of Volumetric Subdivision Meshes
Laney, D; Pascucci, V
2004-04-16
We present a progressive compression technique for volumetric subdivision meshes based on the slow growing refinement algorithm. The system is comprised of a wavelet transform followed by a progressive encoding of the resulting wavelet coefficients. We compare the efficiency of two wavelet transforms. The first transform is based on the smoothing rules used in the slow growing subdivision technique. The second transform is a generalization of lifted linear B-spline wavelets to the same multi-tier refinement structure. Direct coupling with a hierarchical coder produces progressive bit streams. Rate distortion metrics are evaluated for both wavelet transforms. We tested the practical performance of the scheme on synthetic data as well as data from laser indirect-drive fusion simulations with multiple fields per vertex. Both wavelet transforms result in high quality trade off curves and produce qualitatively good coarse representations.
Wavelet Analyses and Applications
ERIC Educational Resources Information Center
Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.
2009-01-01
It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…
Image coding by way of wavelets
NASA Technical Reports Server (NTRS)
Shahshahani, M.
1993-01-01
The application of two wavelet transforms to image compression is discussed. It is noted that the Haar transform, with proper bit allocation, has performance that is visually superior to an algorithm based on a Daubechies filter and to the discrete cosine transform based Joint Photographic Experts Group (JPEG) algorithm at compression ratios exceeding 20:1. In terms of the root-mean-square error, the performance of the Haar transform method is basically comparable to that of the JPEG algorithm. The implementation of the Haar transform can be achieved in integer arithmetic, making it very suitable for applications requiring real-time performance.
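The integer-arithmetic Haar implementation mentioned above presumably refers to something like the S-transform (floor average and difference), which is exactly invertible in integers. The pairing below is a generic sketch of that idea, not the paper's implementation.

```python
def s_forward(x):
    """One level of the integer Haar (S-) transform: floor averages and differences."""
    a = [(x0 + x1) // 2 for x0, x1 in zip(x[0::2], x[1::2])]   # low-pass
    d = [x0 - x1 for x0, x1 in zip(x[0::2], x[1::2])]          # high-pass
    return a, d

def s_inverse(a, d):
    """Exact integer reconstruction from the floor average and the difference."""
    out = []
    for l, h in zip(a, d):
        x0 = l + (h + 1) // 2   # Python // floors, also for negative h
        out += [x0, x0 - h]
    return out

pixels = [5, 2, 7, 7, 0, 255, 3, 4]
a, d = s_forward(pixels)
assert s_inverse(a, d) == pixels   # perfect reconstruction in integer arithmetic
```

Because every step is an integer add, subtract, or shift, the transform runs without floating-point hardware, which is what makes it attractive for real-time use.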
Simultaneous denoising and compression of multispectral images
NASA Astrophysics Data System (ADS)
Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.
2013-01-01
A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.
Source Wavelet Phase Extraction
NASA Astrophysics Data System (ADS)
Naghadeh, Diako Hariri; Morley, Christopher Keith
2016-06-01
Extraction of the propagation wavelet phase from seismic data can be conducted using first-, second-, third- and fourth-order statistics. Three new methods are introduced: (1) combination of different moments, (2) windowed continuous wavelet transform and (3) maximum correlation with a cosine function. To compare the different methods, synthetic data with and without noise were chosen. Results show that first-, second- and third-order statistics are not able to preserve wavelet phase. Kurtosis can preserve the propagation wavelet phase, but the extracted phase is affected by the signal-to-noise ratio, so for data sets with low signal-to-noise ratios the method is unstable. Using a combination of different moments to extract the phase is more robust than applying kurtosis alone. The improvement occurs because zero-phase wavelets with reverse polarities have equal maximum kurtosis values, hence the correct wavelet polarity cannot be identified from kurtosis. Zero-phase wavelets with reverse polarities have distinct minimum and maximum values under the combination-of-different-moments method. These properties enable the technique to handle a finite data segment and to choose the correct wavelet polarity. The presence of different moments also decreases sensitivity to outliers. A windowed continuous wavelet transform is more sensitive to the signal-to-noise ratio than the combination-of-different-moments method, and if the scale of the wavelet is incorrect, phase extraction becomes more problematic. When the effects of frequency bandwidth, signal-to-noise ratio and analyzing window length are considered, the results of extracting phase information from data with and without noise demonstrate that the combination of different moments is superior to the other methods introduced here.
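The polarity ambiguity of kurtosis is easy to verify numerically: kurtosis is an even function of the data, so flipping polarity leaves it unchanged, while an odd-order moment flips sign and so can break the tie. The skewed test series below is a generic stand-in, not seismic data.

```python
import numpy as np

def kurtosis(x):
    x = x - x.mean()
    return float((x ** 4).mean() / (x ** 2).mean() ** 2)

def third_moment(x):
    return float(((x - x.mean()) ** 3).mean())

rng = np.random.default_rng(7)
trace = rng.exponential(size=4096) - 1.0   # asymmetric, spiky stand-in trace

# Kurtosis cannot tell the two polarities apart...
k_gap = abs(kurtosis(trace) - kurtosis(-trace))
# ...but an odd-order moment distinguishes them by sign.
m3_pos, m3_neg = third_moment(trace), third_moment(-trace)
```

This is the core of the abstract's argument for combining moments: an even moment (kurtosis) fixes the phase up to polarity, and an odd moment resolves the remaining sign.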
Adaptive compression of image data
NASA Astrophysics Data System (ADS)
Hludov, Sergei; Schroeter, Claus; Meinel, Christoph
1998-09-01
In this paper we introduce a method of analyzing images, a criterion to differentiate between images, a compression method for medical images in digital form based on classification of the image bit planes, and finally an algorithm for adaptive image compression. The analysis of the image content is based on an evaluation of the relative number and absolute values of the wavelet coefficients. A comparison between the original image and the decoded image is made using a difference criterion calculated from the wavelet coefficients of the original image and of the decoded image at the first and second iteration steps of the wavelet transformation. The adaptive image compression algorithm is based on a classification of digital images into three classes, followed by compression of the image with a suitable compression algorithm. Furthermore, we show that applying these classification rules to DICOM images is a very effective way to do adaptive compression. The image classification and image compression algorithms have been implemented in JAVA.
Wavelet analysis of electric adjustable speed drive waveforms
Czarkowski, D.; Domijan, A. Jr.
1998-10-01
The three most common adjustable speed drives (ASDs) used in HVAC equipment, namely, pulse-width modulated (PWM) induction drive, brushless-dc drive, and switched-reluctance drive, generate non-periodic and nonstationary electric waveforms with sharp edges and transients. Deficiencies of Fourier transform methods in analysis of such ASD waveforms prompted an application of the wavelet transform. Results of discrete wavelet transform (DWT) analysis of PWM inverter-fed motor waveforms are presented. The best mother wavelet for analysis of the recorded waveforms is selected. Data compression properties of the selected mother wavelet are compared to those of the fast Fourier transform (FFT). Multilevel feature detection of ASD waveforms using the DWT is shown.
Periodized Daubechies wavelets
Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.
1996-03-01
The properties of periodized Daubechies wavelets on [0,1] are detailed, along with their counterparts which form a basis for L^2(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin; Clyne, John; Childs, Hank
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
Global and Local Distortion Inference During Embedded Zerotree Wavelet Decompression
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.
1996-01-01
This paper presents algorithms for inferring global and spatially local estimates of the squared-error distortion measures for the Embedded Zerotree Wavelet (EZW) image compression algorithm. All distortion estimates are obtained at the decoder without significantly compromising EZW's rate-distortion performance. Two methods are given for propagating distortion estimates from the wavelet domain to the spatial domain, thus giving individual estimates of distortion for each pixel of the decompressed image. These local distortion estimates seem to provide only slight improvement in the statistical characterization of EZW compression error relative to the global measure, unless actual squared errors are propagated. However, they provide qualitative information about the asymptotic nature of the error that may be helpful in wavelet filter selection for low bit rate applications.
NASA Technical Reports Server (NTRS)
Kempel, Leo C.
1992-01-01
Wavelets are an exciting new topic in applied mathematics and signal processing. This paper provides a brief review of wavelets, which are also known as families of functions, with an emphasis on interpretation rather than rigor. We derive an indirect use of wavelets for the solution of integral equations based on techniques adapted from image processing. Examples for resistive strips are given, illustrating the effect of these techniques as well as their promise in dramatically reducing the computational requirements for solving an integral equation for large bodies. We also present a direct implementation of wavelets to solve an integral equation. Both methods suggest future research topics and may hold promise for a variety of uses in computational electromagnetics.
Entanglement Renormalization and Wavelets.
Evenbly, Glen; White, Steven R
2016-04-01
We establish a precise connection between discrete wavelet transforms and entanglement renormalization, a real-space renormalization group transformation for quantum systems on the lattice, in the context of free particle systems. Specifically, we employ Daubechies wavelets to build approximations to the ground state of the critical Ising model, then demonstrate that these states correspond to instances of the multiscale entanglement renormalization ansatz (MERA), producing the first known analytic MERA for critical systems. PMID:27104687
Lagrange wavelets for signal processing.
Shi, Z; Wei, G W; Kouri, D J; Hoffman, D K; Bao, Z
2001-01-01
This paper deals with the design of interpolating wavelets based on a variety of Lagrange functions, combined with novel signal processing techniques for digital imaging. Halfband Lagrange wavelets, B-spline Lagrange wavelets and Gaussian Lagrange (Lagrange distributed approximating functional (DAF)) wavelets are presented as specific examples of the generalized Lagrange wavelets. Our approach combines the perceptually dependent visual group normalization (VGN) technique and a softer logic masking (SLM) method. These are utilized to rescale the wavelet coefficients, remove perceptual redundancy and obtain good visual performance for digital image processing. PMID:18255493
Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram
Anant, K.S.
1997-06-01
In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P as well as the S phase using only information from single station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction in order to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. The methods developed in this thesis; the
Wavelet analysis of atmospheric turbulence
Hudgins, L.H.
1992-12-31
After a brief review of the elementary properties of Fourier Transforms, the Wavelet Transform is defined in Part I. Basic results are given for admissible wavelets. The Multiresolution Analysis, or MRA (a mathematical structure which unifies a large class of wavelets with Quadrature Mirror Filters) is then introduced. Some fundamental aspects of wavelet design are then explored. The Discrete Wavelet Transform is discussed and, in the context of an MRA, is seen to supply a Fast Wavelet Transform which competes with the Fast Fourier Transform for efficiency. In Part II, the Wavelet Transform is developed in terms of the scale number variable s instead of the scale length variable a, where a = 1/s. Basic results such as the admissibility condition, conservation of energy, and the reconstruction theorem are proven in this context. After reviewing some motivation for the usual Fourier power spectrum, a definition is given for the wavelet power spectrum. This 'spectral density' is then interpreted in the context of spectral estimation theory. Parseval's theorem for wavelets then leads naturally to the Wavelet Cross Spectrum, Wavelet Cospectrum, and Wavelet Quadrature Spectrum. Wavelet Transforms are then applied in Part III to the analysis of atmospheric turbulence. Data collected over the ocean are examined in the wavelet transform domain for underlying structure. A brief overview of atmospheric turbulence is provided. Then the overall method of applying Wavelet Transform techniques to time series data is described. A trace study is included, showing some of the aspects of choosing the computational algorithm and selecting a specific analyzing wavelet. A model for generating synthetic turbulence data is developed, and is seen to yield useful results in comparisons with real data for structural transitions. Results from the theory of Wavelet Spectral Estimation and Wavelet Cross-Transforms are applied to studying the momentum transport and the heat flux.
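The conservation-of-energy result that underlies the wavelet power spectrum in Part II can be checked numerically with a plain orthonormal Haar decomposition; the white-noise input here is just a convenient test signal, not turbulence data.

```python
import numpy as np

def haar_energy_by_scale(x):
    """Return the detail-band energy per level plus the final approximation:
    a crude discrete 'wavelet power spectrum' in an orthonormal Haar basis."""
    a = np.asarray(x, float)
    energies = []
    while a.size > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        energies.append(float((d ** 2).sum()))
    return energies, a

rng = np.random.default_rng(3)
x = rng.standard_normal(1024)
energies, approx = haar_energy_by_scale(x)
total = sum(energies) + float((approx ** 2).sum())
# Conservation of energy: the scale-wise energies sum to the signal energy.
```

Because the transform is orthonormal, the per-scale energies partition the total signal energy exactly, which is what makes the wavelet power spectrum a meaningful spectral estimate.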
Integrated system for image storage, retrieval, and transmission using wavelet transform
NASA Astrophysics Data System (ADS)
Yu, Dan; Liu, Yawen; Mu, Ray Y.; Yang, Shi-Qiang
1998-12-01
Currently, much work has been done in the area of image storage and retrieval. However, the overall performance has been far from practical. A highly integrated wavelet-based image management system is proposed in this paper. By integrating wavelet-based solutions for image compression and decompression, content-based retrieval and progressive transmission, much higher performance can be achieved. The multiresolution nature of the wavelet transform has been proven to be a powerful tool to represent images. The wavelet transform decomposes the image into a set of subimages with different resolutions, from which three solutions for key aspects of image management are reached. For content-based image retrieval (CBIR), the features of our system include the color, contour, texture, sample, keyword and topic information of images. The first four features can be naturally extracted from the wavelet transform coefficients. By scoring the similarity of users' requests with images in the database, those which have higher scores are noted and the user receives feedback. For image compression and decompression, assuming that details at high resolution and in diagonal directions are less visible to the human eye, a good compression ratio can be achieved. In each subimage, the wavelet coefficients are vector quantized (VQ) using the LBG algorithm, which is improved in our approach to accelerate the process. A higher compression ratio can be achieved with DPCM and an entropy coding method applied together. With a YIQ representation, color images can also be effectively compressed. There is a very low load on the network bandwidth when transmitting compressed image data across the network. Progressive transmission is made possible by the multiresolution nature of the wavelet, which makes the system respond faster and the user interface more friendly. The system shows high overall performance by exploiting the excellent features of wavelets and integrating key aspects of image management. An
Electromagnetic spatial coherence wavelets.
Castaneda, Roman; Garcia-Sucerquia, Jorge
2006-01-01
The recently introduced concept of spatial coherence wavelets is generalized to describe the propagation of electromagnetic fields in free space. To this end, the spatial coherence wavelet tensor is introduced as an elementary quantity, in terms of which the formerly known quantities for this domain can be expressed. It allows for the analysis of the relationship between the spatial coherence properties and the polarization state of the electromagnetic wave. This approach is completely consistent with the recently introduced unified theory of coherence and polarization for random electromagnetic beams, but it provides further insight into the causal relationship between the polarization states at different planes along the propagation path. PMID:16478063
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schafer, Christoph
2004-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An ASIC implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm.
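Progressive bit-plane coding of transformed coefficients can be sketched as follows. This toy sign-magnitude coder illustrates only the progressive-fidelity idea (truncating the stream halves the worst-case error with every extra plane); it is not the CCSDS recommendation's actual coder.

```python
def encode_bitplanes(coeffs, nbits):
    """Split integer coefficients into a sign list and magnitude bit-planes,
    most significant plane first."""
    signs = [c < 0 for c in coeffs]
    mags = [abs(c) for c in coeffs]
    planes = [[(m >> b) & 1 for m in mags] for b in range(nbits - 1, -1, -1)]
    return signs, planes

def decode_bitplanes(signs, planes, nbits):
    """Reconstruct from however many planes were received (truncation = lossy)."""
    mags = [0] * len(signs)
    for p, plane in enumerate(planes):
        b = nbits - 1 - p
        for i, bit in enumerate(plane):
            mags[i] |= bit << b
    return [-m if s else m for s, m in zip(signs, mags)]

coeffs = [37, -120, 5, 0, 255, -3]
signs, planes = encode_bitplanes(coeffs, nbits=8)
exact = decode_bitplanes(signs, planes, nbits=8)        # all 8 planes: lossless
coarse = decode_bitplanes(signs, planes[:4], nbits=8)   # truncated stream: lossy
```

Decoding all planes is lossless; stopping after the top four planes bounds every coefficient error below 2^4, which is how the user trades compressed data volume against fidelity.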
Spectral Data Reduction via Wavelet Decomposition
NASA Technical Reports Server (NTRS)
Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)
2002-01-01
The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the new larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the actual identification of each ground surface pixel by its corresponding reflecting spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition could be useful, as it not only reduces the data volume, but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, therefore preserving the peaks and valleys found in typical spectra. Comparing with the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same level of compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classification such as a maximum likelihood method.
Compressive rendering: a rendering application of compressed sensing.
Sen, Pradeep; Darabi, Soheil
2011-04-01
Recently, there has been growing interest in compressed sensing (CS), the new theory that shows how a small set of linear measurements can be used to reconstruct a signal if it is sparse in a transform domain. Although CS has been applied to many problems in other fields, in computer graphics it has so far been used only to accelerate the acquisition of light transport. In this paper, we propose a novel application of compressed sensing by using it to accelerate ray-traced rendering in a manner that exploits the sparsity of the final image in the wavelet basis. To do this, we raytrace only a subset of the pixel samples in the spatial domain and use a simple, greedy CS-based algorithm to estimate the wavelet transform of the image during rendering. Since the energy of the image is concentrated more compactly in the wavelet domain, fewer samples are required for a result of given quality than with conventional spatial-domain rendering. By taking the inverse wavelet transform of the result, we compute an accurate reconstruction of the desired final image. Our results show that our framework can achieve high-quality images with approximately 75 percent of the pixel samples using a nonadaptive sampling scheme. In addition, we also perform better than other algorithms that might be used to fill in the missing pixel data, such as interpolation or inpainting. Furthermore, since the algorithm works in image space, it is completely independent of scene complexity. PMID:21311092
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R.
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
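The keep-the-significant-coefficients idea behind such spatial compression can be illustrated with a one-level 2-D Haar transform (an illustrative toy, not the patented algorithm): transform, zero all but the largest coefficients, and invert.

```python
# Toy 2-D Haar sketch of "retain only significant wavelet coefficients".
def haar1d(x):
    s = 0.5 ** 0.5
    n = len(x) // 2
    return ([(x[2*i] + x[2*i+1]) * s for i in range(n)] +
            [(x[2*i] - x[2*i+1]) * s for i in range(n)])

def ihaar1d(c):
    s = 0.5 ** 0.5
    n = len(c) // 2
    out = []
    for i in range(n):
        out += [(c[i] + c[n+i]) * s, (c[i] - c[n+i]) * s]
    return out

def haar2d(img):
    rows = [haar1d(r) for r in img]
    cols = [haar1d(c) for c in map(list, zip(*rows))]
    return list(map(list, zip(*cols)))

def ihaar2d(coef):
    cols = [ihaar1d(c) for c in map(list, zip(*coef))]
    rows = list(map(list, zip(*cols)))
    return [ihaar1d(r) for r in rows]

img = [[8, 8, 2, 2],
       [8, 8, 2, 2],
       [4, 4, 6, 6],
       [4, 4, 6, 6]]
coef = haar2d(img)
flat = sorted((abs(v) for r in coef for v in r), reverse=True)
thresh = flat[3]                      # keep the 4 largest coefficients
kept = [[v if abs(v) >= thresh else 0.0 for v in r] for r in coef]
recon = ihaar2d(kept)                 # analysis then runs on `kept`
```

For this piecewise-constant image, 4 of the 16 coefficients carry all the information, so the reconstruction from the compressed representation is (numerically) exact; real images need more coefficients but show the same concentration of energy.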
ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.
2005-01-01
ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
Wavelet Domain Radiofrequency Pulse Design Applied to Magnetic Resonance Imaging
Huettner, Andrew M.; Mickevicius, Nikolai J.; Ersoz, Ali; Koch, Kevin M.; Muftuler, L. Tugan; Nencka, Andrew S.
2015-01-01
A new method for designing radiofrequency (RF) pulses with numerical optimization in the wavelet domain is presented. Numerical optimization may yield solutions that might otherwise have not been discovered with analytic techniques alone. Further, processing in the wavelet domain reduces the number of unknowns through compression properties inherent in wavelet transforms, providing a more tractable optimization problem. This algorithm is demonstrated with simultaneous multi-slice (SMS) spin echo refocusing pulses because reduced peak RF power is necessary for SMS diffusion imaging with high acceleration factors. An iterative, nonlinear, constrained numerical minimization algorithm was developed to generate an optimized RF pulse waveform. Wavelet domain coefficients were modulated while iteratively running a Bloch equation simulator to generate the intermediate slice profile of the net magnetization. The algorithm minimizes the L2-norm of the slice profile with additional terms to penalize rejection band ripple and maximize the net transverse magnetization across each slice. Simulations and human brain imaging were used to demonstrate a new RF pulse design that yields an optimized slice profile and reduced peak energy deposition when applied to a multiband single-shot echo planar diffusion acquisition. This method may be used to optimize factors such as magnitude and phase spectral profiles and peak RF pulse power for multiband simultaneous multi-slice (SMS) acquisitions. Wavelet-based RF pulse optimization provides a useful design method to achieve a pulse waveform with beneficial amplitude reduction while preserving appropriate magnetization response for magnetic resonance imaging. PMID:26517262
Multiple wavelet-tree-based image coding and robust transmission
NASA Astrophysics Data System (ADS)
Cao, Lei; Chen, Chang Wen
2004-10-01
In this paper, we present techniques based on multiple wavelet-tree coding for robust image transmission. The set partitioning in hierarchical trees (SPIHT) algorithm is a state-of-the-art technique for image compression. This variable length coding (VLC) technique, however, is extremely sensitive to channel errors. To improve error resilience while keeping the high source coding efficiency of VLC, we propose to encode each wavelet tree, or group of wavelet trees, independently with the SPIHT algorithm. Instead of encoding the entire image as one bitstream, multiple bitstreams are generated, so error propagation is confined to an individual bitstream. Two methods, based on subsampling and on human visual sensitivity, are proposed for grouping the wavelet trees. The multiple bitstreams are further protected by rate-compatible punctured convolutional (RCPC) codes. Unequal error protection is provided both for different bitstreams and for different bit segments inside each bitstream. We also investigate the improvement of error resilience through error resilient entropy coding (EREC) and wavelet tree coding when channels are only slightly corruptive. A simple post-processing technique is also proposed to alleviate the effect of residual errors. We demonstrate through simulations that systems with these techniques achieve much better performance than systems transmitting a single bitstream in noisy environments.
Correlative weighted stacking for seismic data in the wavelet domain
Zhang, S.; Xu, Y.; Xia, J.
2004-01-01
Horizontal stacking plays a crucial role in modern seismic data processing: it not only suppresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Average stacking and weighted stacking based on the conventional correlation function both produce false events caused by such noise. Wavelet transforms and higher-order statistics are very useful tools for modern signal processing: multiresolution analysis in wavelet theory can decompose a signal on different scales, and higher-order correlation functions can suppress correlated noise for which the conventional correlation function is of no use. Based on the theory of wavelet transforms and higher-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common midpoint gathers after normal moveout correction using weights calculated from higher-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.
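A hedged sketch of correlation-weighted stacking (using ordinary second-order correlation in the time domain, not the paper's higher-order wavelet-domain statistics): each NMO-corrected trace is weighted by its correlation with the pilot stack, so an anticorrelated noise trace contributes nothing.

```python
# Toy correlation-weighted stack; second-order weights stand in for
# the paper's high-order wavelet-domain statistics.
def correlate(a, b):
    return sum(x * y for x, y in zip(a, b))

def weighted_stack(traces):
    """Weight each NMO-corrected trace by its (clipped non-negative)
    correlation with the pilot stack, then renormalize."""
    n = len(traces[0])
    pilot = [sum(t[i] for t in traces) / len(traces) for i in range(n)]
    w = [max(correlate(t, pilot), 0.0) for t in traces]
    total = sum(w) or 1.0
    return [sum(wi * t[i] for wi, t in zip(w, traces)) / total
            for i in range(n)]

# Two clean traces carrying the reflection plus one anticorrelated
# noise trace; the noise trace receives zero weight.
traces = [[0, 1, 2, 1, 0],
          [0, 1, 2, 1, 0],
          [1, -1, 0, -1, 1]]
print(weighted_stack(traces))
```

Here the recovered stack equals the clean reflection, whereas a plain average would be biased by the noise trace; the paper's contribution is computing such weights where second-order correlation fails.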
Optical asymmetric image encryption using gyrator wavelet transform
NASA Astrophysics Data System (ADS)
Mehra, Isha; Nishchal, Naveen K.
2015-11-01
In this paper, we propose a new optical information processing tool, termed the gyrator wavelet transform, to secure a fully phase image based on an amplitude- and phase-truncation approach. The gyrator wavelet transform involves four basic parameters: gyrator transform order, type and level of the mother wavelet, and position of the different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. The tool has also been applied to simultaneous compression and encryption of an image. The system's performance, its sensitivity to encryption parameters such as the gyrator transform order, and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. Computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool for various optical information processing applications, including image encryption and image compression. The tool can also be applied to securing color, multispectral, and three-dimensional images.
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
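A toy version of threshold selection by cross-validation (one-level Haar, hard thresholding, and a held-out noisy replicate; the paper's procedure also selects the basis itself and incorporates smoothness priors):

```python
# Illustrative cross-validated threshold selection for wavelet regression.
import math

def haar(x):
    s = 1 / math.sqrt(2)
    n = len(x) // 2
    return ([(x[2*i] + x[2*i+1]) * s for i in range(n)],
            [(x[2*i] - x[2*i+1]) * s for i in range(n)])

def ihaar(a, d):
    s = 1 / math.sqrt(2)
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) * s, (ai - di) * s]
    return out

def denoise(x, t):
    a, d = haar(x)
    return ihaar(a, [v if abs(v) > t else 0.0 for v in d])

def cv_threshold(train, valid, candidates):
    """Pick the hard threshold whose denoised signal is closest to an
    independent noisy replicate of the same underlying function."""
    def mse(t):
        y = denoise(train, t)
        return sum((a - b) ** 2 for a, b in zip(y, valid)) / len(y)
    return min(candidates, key=mse)

# Step signal with opposite-sign perturbations in the two replicates.
clean = [0.0, 0.0, 0.0, 0.0, 4.0, 4.0, 4.0, 4.0]
train = [c + 0.2 * (-1) ** i for i, c in enumerate(clean)]
valid = [c - 0.2 * (-1) ** i for i, c in enumerate(clean)]
print(cv_threshold(train, valid, [0.0, 0.1, 0.5]))   # 0.5
```

Because the perturbations disagree between replicates while the step agrees, cross-validation correctly prefers the threshold that removes the noise-level detail coefficients.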
Discrete wavelet analysis of power system transients
Wilkinson, W.A.; Cox, M.D.
1996-11-01
Wavelet analysis is a new method for studying power system transients. Through wavelet analysis, transients are decomposed into a series of wavelet components, each of which is a time-domain signal covering a specific octave frequency band. This paper presents the basic ideas of discrete wavelet analysis. A variety of actual and simulated transient signals are then analyzed using the discrete wavelet transform, helping to demonstrate the power of wavelet analysis.
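The octave-band decomposition described above can be sketched with a multilevel Haar DWT (the simplest discrete wavelet, used here purely for illustration):

```python
# Multilevel Haar DWT: each detail band covers one octave of frequency.
import math

def haar_dwt(x, levels):
    """Return [approx, detail_L, ..., detail_1] (coarsest first)."""
    s = 1.0 / math.sqrt(2.0)
    bands = []
    a = list(x)
    for _ in range(levels):
        approx = [(a[2*i] + a[2*i+1]) * s for i in range(len(a) // 2)]
        detail = [(a[2*i] - a[2*i+1]) * s for i in range(len(a) // 2)]
        bands.insert(0, detail)
        a = approx
    bands.insert(0, a)
    return bands

# A step "transient" riding on a DC level: the jump produces a single
# large coefficient in the finest detail band, localizing it in time.
signal = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 5.0]
bands = haar_dwt(signal, 3)
print([len(b) for b in bands])   # [1, 1, 2, 4]
```

The band lengths halve per level, mirroring the octave frequency coverage, and the single large fine-scale coefficient shows how a transient is localized in both time and frequency band.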
Hyperspectral image data compression based on DSP
NASA Astrophysics Data System (ADS)
Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin
2010-11-01
The huge data volume of hyperspectral images challenges their transmission and storage, so it is necessary to find an effective method to compress them. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, the integer wavelet transform, and the embedded zero-tree wavelet (EZW) is proposed in this paper. We adopt a high-powered Digital Signal Processor (DSP), the TMS320DM642, to implement the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm runs much faster on the DSP than on a personal computer. The proposed method achieves nearly real-time compression with excellent image quality and compression performance.
Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.
2012-07-17
The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray, and mammography. Interpretation of these signals and images is quite important. Nowadays wavelet methods have a significant impact on the science of medical imaging and on diagnosis and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of the EEG, one of the most important biomedical signals, are carried out by applying the Fourier transform and the wavelet transform. The presence of rhythm, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals with the SWT.
Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach
NASA Technical Reports Server (NTRS)
Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.
2000-01-01
The objective of this study is to investigate the use of the second generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performance of the bi-orthogonal second generation wavelet transform is compared with that of the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments, in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256^3 and 512^3 DNS of forced homogeneous turbulence (Re_lambda = 168) and 256^3 and 512^3 DNS of decaying homogeneous turbulence (Re_lambda = 55). It is found that bi-orthogonal second generation wavelets can be used for coherent vortex extraction. The results of a priori tests indicate that second generation wavelets have better compression and that the residual field is closer to Gaussian. However, the use of second generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.
Sangeetha, S; Sujatha, C M; Manamalli, D
2014-01-01
In this work, anisotropy of the compressive and tensile strength regions of femur trabecular bone is analyzed using quaternion wavelet transforms. Normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms, and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Anisotropy is also calculated from the transformed images and analyzed. Results show that the anisotropy values derived from the second and third phase components of the quaternion wavelet transform are distinct for normal and abnormal samples, with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples. PMID:25571265
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
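As a toy illustration of backward adaptation (1-D "frames" and two candidate predictors standing in for the paper's temporal/spatial/spectral set; this is not the authors' coder), both encoder and decoder track past predictor errors, so no side information is needed to signal the predictor choice:

```python
# Backward-adaptive lossless predictive coding sketch.
def encode(frames):
    """For each sample, pick the predictor (previous sample = spatial,
    same position in previous frame = temporal) that performed better
    on already-coded data; transmit only the residual."""
    residuals = []
    err_s = err_t = 0
    for f, frame in enumerate(frames):
        res = []
        for i, x in enumerate(frame):
            spatial = frame[i - 1] if i > 0 else 0
            temporal = frames[f - 1][i] if f > 0 else 0
            pred = spatial if err_s <= err_t else temporal
            res.append(x - pred)
            err_s += abs(x - spatial)
            err_t += abs(x - temporal)
        residuals.append(res)
    return residuals

def decode(residuals):
    """Mirror of encode(): the decoder repeats the same predictor
    decisions from the data it has already reconstructed."""
    frames = []
    err_s = err_t = 0
    for f, res in enumerate(residuals):
        frame = []
        for i, r in enumerate(res):
            spatial = frame[i - 1] if i > 0 else 0
            temporal = frames[f - 1][i] if f > 0 else 0
            pred = spatial if err_s <= err_t else temporal
            x = r + pred
            frame.append(x)
            err_s += abs(x - spatial)
            err_t += abs(x - temporal)
        frames.append(frame)
    return frames

frames = [[3, 3, 4, 4], [3, 3, 4, 5], [3, 4, 4, 5]]
print(decode(encode(frames)) == frames)   # True: lossless round trip
```

The round trip is exact by construction, since encoder and decoder update identical error statistics from identical data.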
Compression of gray-scale fingerprint images
NASA Astrophysics Data System (ADS)
Hopper, Thomas
1994-03-01
The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.
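The quantize-then-run-length stage of such a pipeline might be sketched as follows (a simplified illustration of the chain; the actual DWT and the Huffman stage are omitted):

```python
# Sketch of scalar quantization followed by zero-run encoding, the
# middle stages of a WSQ-style pipeline (Huffman coding would follow).
def quantize(coeffs, step):
    return [round(c / step) for c in coeffs]

def zero_run_encode(q):
    """Emit (run_of_zeros, value) pairs; long zero runs collapse."""
    out, run = [], 0
    for v in q:
        if v == 0:
            run += 1
        else:
            out.append((run, v))
            run = 0
    if run:
        out.append((run, 0))  # trailing zeros
    return out

coeffs = [12.1, 0.2, -0.1, 0.3, -7.8, 0.05, 0.0, 0.1, 3.9]
q = quantize(coeffs, 1.0)
print(zero_run_encode(q))   # [(0, 12), (3, -8), (3, 4)]
```

Nine coefficients collapse to three pairs because quantization drives the small wavelet coefficients to zero, which is exactly the sparsity the entropy coder then exploits.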
Digital watermarking algorithm based on HVS in wavelet domain
NASA Astrophysics Data System (ADS)
Zhang, Qiuhong; Xia, Ping; Liu, Xiaomei
2013-10-01
As a new technique for protecting the copyright of digital productions, digital watermarking has drawn extensive attention. A digital watermarking algorithm based on the discrete wavelet transform (DWT) and human visual properties is presented in this paper, followed by several attack analyses. Experimental results show that the proposed watermarking scheme is invisible and has good robustness to cropping, compression, filtering, and noise addition.
An Evolved Wavelet Library Based on Genetic Algorithm
Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.
2014-01-01
As the size of captured images increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of the transmission channels and preserves image resolution without considerable loss in image quality. Many conventional image compression algorithms use a wavelet transform, which can significantly reduce the number of bits needed to represent a pixel, and the processes of quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA): one for the whole image except the edge areas and the other for the portions near the edges (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling in a local maximum, a new shuffling operator is introduced to prevent this. The GA used to evolve the filter coefficients primarily focuses on maximizing the peak signal-to-noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform existing methods by 0.31 dB in average PSNR and 0.39 dB in maximum PSNR. PMID:25405225
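A heavily simplified sketch of evolving filter taps with a GA (the fitness here is energy compaction on a training signal rather than post-quantization PSNR, the shuffling operator is omitted, and the filter bank is a crude circular two-band split, so this is only a caricature of the paper's method):

```python
# Toy GA evolving 4 filter taps for energy compaction (illustration only).
import math, random

def analysis(x, h):
    """One-level two-band split with taps h and their alternating-sign
    mirror, downsampled by 2 (circular boundary handling)."""
    g = [((-1) ** k) * h[len(h) - 1 - k] for k in range(len(h))]
    def conv_down(f):
        return [sum(f[k] * x[(2 * i + k) % len(x)] for k in range(len(f)))
                for i in range(len(x) // 2)]
    return conv_down(h), conv_down(g)

def fitness(h, x):
    """Energy compaction: fraction of signal energy in the low band."""
    lo, hi = analysis(x, h)
    e_lo = sum(v * v for v in lo)
    e_hi = sum(v * v for v in hi)
    return e_lo / (e_lo + e_hi + 1e-12)

random.seed(0)
x = [math.sin(2 * math.pi * i / 16) for i in range(64)]
pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
for gen in range(30):
    pop.sort(key=lambda h: -fitness(h, x))    # elitist selection
    elite = pop[:5]
    pop = elite + [[c + random.gauss(0, 0.1) for c in random.choice(elite)]
                   for _ in range(15)]
best = max(pop, key=lambda h: fitness(h, x))
print(round(fitness(best, x), 3))
```

For a smooth training signal, the evolved taps push most of the energy into the low band, the same compaction objective that (via quantization) drives PSNR in the paper.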
Neural network wavelet technology: A frontier of automation
NASA Technical Reports Server (NTRS)
Szu, Harold
1994-01-01
Neural networks are an outgrowth of interdisciplinary studies concerning the brain. These studies are guiding the field of Artificial Intelligence towards the so-called 6th Generation Computer, and enormous amounts of resources have been poured into R&D. Wavelet transforms have been replacing Fourier transforms for wideband transient signals since their discovery in 1985. The list of successful applications includes earthquake prediction, radar identification, speech recognition, stock market forecasting, FBI fingerprint image compression, and telecommunication ISDN data compression.
The decoding method based on wavelet image En vector quantization
NASA Astrophysics Data System (ADS)
Liu, Chun-yang; Li, Hui; Wang, Tao
2013-12-01
With the rapid progress of internet technology, large-scale integrated circuits, and computer technology, digital image processing has developed greatly. Vector quantization plays a very important role in digital image compression. It has advantages over scalar quantization, including a higher compression ratio and a simple image decoding algorithm, and has therefore been widely used in many practical fields. This paper efficiently combines the wavelet analysis method with vector quantization encoding and tests the approach on standard images. The experimental results show a considerable improvement in PSNR compared with the LBG algorithm.
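A minimal sketch of LBG-style codebook design, the baseline the paper compares against (toy 2-D vectors and a fixed initial codebook; real codecs run this on blocks of wavelet coefficients):

```python
# Lloyd/LBG iterations on toy 2-D training vectors.
def nearest(v, codebook):
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])))

def lbg(vectors, codebook, iters=10):
    """Repeat: assign vectors to nearest codeword, then move each
    codeword to the centroid of its cell."""
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for v in vectors:
            cells[nearest(v, codebook)].append(v)
        codebook = [
            [sum(x) / len(cell) for x in zip(*cell)] if cell else codebook[k]
            for k, cell in enumerate(cells)
        ]
    return codebook

vectors = [[0, 0], [1, 0], [0, 1], [9, 9], [10, 9], [9, 10]]
cb = lbg(vectors, [[0.0, 0.0], [1.0, 1.0]])
print(cb)
```

Each input vector is then transmitted as the index of its nearest codeword, which is where the compression comes from; the decoder's job is just a table lookup, matching the abstract's point about simple decoding.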
NASA Astrophysics Data System (ADS)
Chevrot, Sébastien; Martin, Roland; Komatitsch, Dimitri
2012-12-01
Wavelets are extremely powerful to compress the information contained in finite-frequency sensitivity kernels and tomographic models. This interesting property opens the perspective of reducing the size of global tomographic inverse problems by one to two orders of magnitude. However, introducing wavelets into global tomographic problems raises the problem of computing fast wavelet transforms in spherical geometry. Using a Cartesian cubed sphere mapping, which grids the surface of the sphere with six blocks or 'chunks', we define a new algorithm to implement fast wavelet transforms with the lifting scheme. This algorithm is simple and flexible, and can handle any family of discrete orthogonal or bi-orthogonal wavelets. Since wavelet coefficients are local in space and scale, aliasing effects resulting from a parametrization with global functions such as spherical harmonics are avoided. The sparsity of tomographic models expanded in wavelet bases implies that it is possible to exploit the power of compressed sensing to retrieve Earth's internal structures optimally. This approach involves minimizing a combination of a ℓ2 norm for data residuals and a ℓ1 norm for model wavelet coefficients, which can be achieved through relatively minor modifications of the algorithms that are currently used to solve the tomographic inverse problem.
NASA Technical Reports Server (NTRS)
Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.
1996-01-01
Wavelet transforms, when combined with quantization and a suitable encoding, can be used to compress images effectively. In order to use them for image library systems, a compact storage scheme for quantized coefficient wavelet data must be developed with a support for fast subregion retrieval. We have designed such a scheme and in this paper we provide experimental studies to demonstrate that it achieves good image compression ratios, while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.
NASA Technical Reports Server (NTRS)
Jameson, Leland
1996-01-01
Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
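A toy sketch of the idea (not the paper's released source code): sample the function on a coarse grid, and insert extra grid points wherever a finest-scale Haar detail coefficient signals rapid variation.

```python
# Wavelet-guided grid refinement sketch: refine where detail is large.
import math

def refine_points(f_values, xs, thresh):
    """Insert a midpoint wherever the Haar detail coefficient of a
    sample pair exceeds `thresh`, i.e. where the function varies fast."""
    s = 1 / math.sqrt(2)
    new_xs = list(xs)
    for i in range(len(xs) // 2):
        d = (f_values[2 * i] - f_values[2 * i + 1]) * s
        if abs(d) > thresh:
            new_xs.append((xs[2 * i] + xs[2 * i + 1]) / 2)
    return sorted(new_xs)

# A step function: only the pair straddling the jump triggers refinement.
xs = [i / 8 for i in range(8)]
f = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]
print(refine_points(f, xs, 0.5))
```

The translation of the high-pass filter across the domain is what lets a single pass locate every region needing refinement, which is the property the paper exploits for grid generation.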
A generalized wavelet extrema representation
Lu, Jian; Lades, M.
1995-10-01
The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, and its peaks and valleys, at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in the reconstructed signal.
NASA Astrophysics Data System (ADS)
Ng, J.; Kingsbury, N. G.
2004-02-01
wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this and subsequent chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. The treatment on wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. The
Fan, Hong-Yi; Lu, Hai-Liang
2007-03-01
The Einstein-Podolsky-Rosen entangled state representation is applied to studying the admissibility condition of mother wavelets for complex wavelet transforms, which leads to a family of new mother wavelets. Mother wavelets thus are classified as the Hermite-Gaussian type for real wavelet transforms and the Laguerre-Gaussian type for the complex case. PMID:17392919
Wavelet periodicity detection algorithms
NASA Astrophysics Data System (ADS)
Benedetto, John J.; Pfander, Goetz E.
1998-10-01
This paper deals with the analysis of time series with respect to certain known periodicities. In particular, we present a fast method for detecting periodic behavior in noisy data. The method is composed of three steps: (1) non-noisy data are analyzed through spectral and wavelet methods to extract specific periodic patterns of interest; (2) using these patterns, we construct an optimal piecewise constant wavelet designed to detect the underlying periodicities; (3) we introduce a fast discretized version of the continuous wavelet transform, as well as waveletgram averaging techniques, to detect the occurrence and period of these periodicities. The algorithm is formulated for real-time implementation. Our procedure is generally applicable for detecting locally periodic components in signals s that can be modeled as s(t) = A(t)F(h(t)) + N(t) for t in I, where F is a periodic signal, A is a non-negative slowly varying function, h is strictly increasing with h' slowly varying, and N denotes background activity. For example, the method can be applied in the context of epileptic seizure detection, where we try to detect seizure periodicities in EEG and ECoG data. In the case of ECoG data, N is essentially 1/f noise; in the case of EEG data, for t in I, N includes noise due to cranial geometry and densities. In both cases N also includes standard low-frequency rhythms. Periodicity detection has other applications, including ocean wave prediction, cockpit motion sickness prediction, and minefield detection.
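Step (3) can be caricatured with a piecewise-constant template and a brute-force search over candidate periods (a crude stand-in for the discretized wavelet transform and waveletgram averaging of the paper):

```python
# Toy period detection via correlation with piecewise-constant templates.
def square_wave(period, length):
    """Zero-mean piecewise-constant template: +1 on the first half of
    each cycle, -1 on the second (a crude Haar-like pattern)."""
    return [1.0 if (i % period) < period / 2 else -1.0 for i in range(length)]

def detect_period(signal, candidates):
    """Return the candidate period whose template correlates best
    (in absolute value) with the signal."""
    def score(p):
        w = square_wave(p, len(signal))
        return abs(sum(a * b for a, b in zip(signal, w)))
    return max(candidates, key=score)

# Period-8 square wave plus a small alternating disturbance.
sig = [(1.0 if i % 8 < 4 else -1.0) + 0.05 * (-1) ** i for i in range(64)]
print(detect_period(sig, [4, 6, 8, 10]))   # 8
```

Mismatched periods average to near-zero correlation over the record, while the matched template accumulates the full signal energy, which is the detection principle behind the waveletgram.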
Wavelets and spacetime squeeze
NASA Technical Reports Server (NTRS)
Han, D.; Kim, Y. S.; Noz, Marilyn E.
1993-01-01
It is shown that the wavelet is the natural language for the Lorentz covariant description of localized light waves. A model for covariant superposition is constructed for light waves with different frequencies. It is therefore possible to construct a wave function for light waves carrying a covariant probability interpretation. It is shown that the time-energy uncertainty relation (Δt)(Δω) ≈ 1 for light waves is a Lorentz-invariant relation. The connection between photons and localized light waves is examined critically.
Perceptual compression of magnitude-detected synthetic aperture radar imagery
NASA Technical Reports Server (NTRS)
Gorman, John D.; Werness, Susan A.
1994-01-01
A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
An Introduction to Wavelet Theory and Analysis
Miner, N.E.
1998-10-01
This report reviews the history, theory and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. This overview is intended to provide readers with a basic understanding of wavelet analysis, define common wavelet terminology and describe wavelet analysis algorithms. The most common algorithms for performing efficient, discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. This report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.
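The analysis/reconstruction pair the report describes can be illustrated with the simplest wavelet, the Haar. This is a generic textbook sketch, not code from the report:

```python
import math

SQRT2_INV = 1.0 / math.sqrt(2.0)

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: split an even-length
    signal into low-pass (approximation) and high-pass (detail) halves."""
    approx = [SQRT2_INV * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [SQRT2_INV * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse Haar step: perfect reconstruction of the original signal."""
    x = []
    for a, d in zip(approx, detail):
        x.append(SQRT2_INV * (a + d))
        x.append(SQRT2_INV * (a - d))
    return x
```

Note how a constant pair such as (5, 5) yields a zero detail coefficient; this is the sparsity that compression schemes later exploit.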
NASA Astrophysics Data System (ADS)
Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua
1997-04-01
Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelets (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significant-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significant link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and ties with SPIHT. Furthermore, SLCCA is observed to perform best on images with a large portion of texture. When applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.
Spatially adaptive bases in wavelet-based coding of semi-regular meshes
NASA Astrophysics Data System (ADS)
Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter
2010-05-01
In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
Wavelet networks for face processing
NASA Astrophysics Data System (ADS)
Krüger, V.; Sommer, G.
2002-06-01
Wavelet networks (WNs) were introduced in 1992 as a combination of artificial neural radial basis function (RBF) networks and wavelet decomposition. Since then, however, WNs have received relatively little attention. We believe that the potential of WNs has been generally underestimated. WNs have the advantage that the wavelet coefficients are directly related to the image data through the wavelet transform. In addition, the parameters of the wavelets in the WNs are subject to optimization, which results in a direct relation between the represented function and the optimized wavelets, leading to considerable data reduction (thus making subsequent algorithms much more efficient) as well as to wavelets that can be used as an optimized filter bank. In our study we analyze some WN properties and highlight their advantages for object representation purposes. We then present a series of results of experiments in which we used WNs for face tracking. We exploit the efficiency that is due to data reduction for face recognition and face-pose estimation by applying the optimized-filter-bank principle of the WNs.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
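The wavelet/scalar quantization idea can be sketched in miniature: uniformly quantize subband coefficients (small values collapse to zero) and then collapse zero runs, which is what the downstream entropy coder exploits. This is a simplified illustration, not the actual WSQ specification:

```python
def uniform_quantize(coeffs, step):
    """Uniform scalar quantizer; int() truncates toward zero, giving a
    dead zone around 0 so small coefficients are discarded."""
    return [int(c / step) for c in coeffs]

def run_length_zeros(symbols):
    """Collapse runs of zeros into (0, run_length) pairs, the structure
    that entropy coders exploit in wavelet compression."""
    out, i = [], 0
    while i < len(symbols):
        if symbols[i] == 0:
            j = i
            while j < len(symbols) and symbols[j] == 0:
                j += 1
            out.append((0, j - i))
            i = j
        else:
            out.append((symbols[i], 1))
            i += 1
    return out
```

For example, `uniform_quantize([0.2, -0.4, 3.7, -2.3], 1.0)` yields `[0, 0, 3, -2]`, and the zero run then costs a single symbol.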
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that sparse approximate inverses could be a potential alternative that is readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for matrices of this kind, the entries of the inverse typically vary in a piecewise smooth manner. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for such matrices.
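The observation that piecewise-smooth inverse entries compress well under a wavelet transform can be illustrated on a stand-in vector. This is not Wan's preconditioner construction; the decaying row below is a hypothetical surrogate for one row of a dense inverse:

```python
import math

def haar_full(x):
    """Full multilevel Haar transform of a length-2^k vector: the
    running approximation is split repeatedly; all detail coefficients
    plus the final approximation are returned as one list."""
    s = 1.0 / math.sqrt(2.0)
    details, approx = [], list(x)
    while len(approx) > 1:
        a = [s * (approx[i] + approx[i + 1]) for i in range(0, len(approx), 2)]
        d = [s * (approx[i] - approx[i + 1]) for i in range(0, len(approx), 2)]
        details.extend(d)
        approx = a
    return details + approx

n = 256
# Hypothetical stand-in for one row of a dense inverse with piecewise
# smooth, Green's-function-like decay away from the diagonal.
row = [math.exp(-abs(i - n // 2) / 32.0) for i in range(n)]
coeffs = haar_full(row)
significant = sum(1 for c in coeffs if abs(c) > 0.02)
```

Most of the 256 coefficients fall below the threshold, so the row admits a sparse wavelet-domain representation even though it is dense in the standard basis.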
Large Scale Isosurface Bicubic Subdivision-Surface Wavelets for Representation and Visualization
Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.
2000-01-05
We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally-skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.
An overview of the quantum wavelet transform, focused on earth science applications
NASA Astrophysics Data System (ADS)
Shehab, O.; LeMoigne, J.; Lomonaco, S.; Halem, M.
2015-12-01
Registering the images from the MODIS system and the OCO-2 satellite is currently done by classical image registration techniques. One such technique is wavelet transformation. Besides image registration, wavelet transformation is also used in other areas of earth science, for example, processing and compressing signal variation. In this talk, we investigate the applicability of a few quantum wavelet transformation algorithms to perform image registration on the MODIS and OCO-2 data. Most of the known quantum wavelet transformation algorithms are data agnostic. We investigate their applicability in transforming the Flexible Representation for Quantum Images. Similarly, we investigate the applicability of the algorithms in signal variation analysis. We also investigate the transformation of the models into pseudo-Boolean functions to implement them on commercially available quantum annealing computers, such as the D-Wave computer located at NASA Ames.
Yue, Yong; Croitoru, Mihai M; Bidani, Akhil; Zwischenberger, Joseph B; Clark, John W
2006-03-01
This paper introduces a novel nonlinear multiscale wavelet diffusion method for ultrasound speckle suppression and edge enhancement. This method is designed to utilize the favorable denoising properties of two frequently used techniques: the sparsity and multiresolution properties of the wavelet, and the iterative edge enhancement feature of nonlinear diffusion. With fully exploited knowledge of speckle image models, the edges of images are detected using normalized wavelet modulus. Relying on this feature, both the envelope-detected speckle image and the log-compressed ultrasonic image can be directly processed by the algorithm without need for additional preprocessing. Speckle is suppressed by employing the iterative multiscale diffusion on the wavelet coefficients. With a tuning diffusion threshold strategy, the proposed method can improve the image quality for both visualization and auto-segmentation applications. We validate our method using synthetic speckle images and real ultrasonic images. Performance improvement over other despeckling filters is quantified in terms of noise suppression and edge preservation indices. PMID:16524086
Peak finding using biorthogonal wavelets
Tan, C.Y.
2000-02-01
The authors show in this paper how they can find the peaks in the input data if the underlying signal is a sum of Lorentzians. In order to project the data into a space of Lorentzian like functions, they show explicitly the construction of scaling functions which look like Lorentzians. From this construction, they can calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets to the FBI (Federal Bureau of Investigations) wavelets when used for peak finding in noisy data. They will show that in this instance, their filters perform much better than the FBI wavelets.
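The projection onto Lorentzian-like functions can be approximated, in a much cruder form than the authors' biorthogonal construction, by a matched-filter correlation with a Lorentzian template followed by a local-maximum search. The kernel width and the noiseless demo data below are illustrative assumptions:

```python
def lorentzian(x, gamma=2.0):
    """Unit-height Lorentzian line shape with half-width gamma."""
    return gamma * gamma / (x * x + gamma * gamma)

def find_peaks(data, gamma=2.0, half_width=10):
    """Correlate the data with a Lorentzian template and return the
    indices of local maxima of the response."""
    kernel = [lorentzian(k, gamma) for k in range(-half_width, half_width + 1)]
    n = len(data)
    resp = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - half_width
            if 0 <= idx < n:
                acc += w * data[idx]
        resp.append(acc)
    return [i for i in range(1, n - 1) if resp[i - 1] < resp[i] >= resp[i + 1]]
```

On a noiseless sum of two Lorentzians the response peaks land at the true line centers; the biorthogonal filter bank of the paper plays the analogous role for noisy data.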
Generalized orthogonal wavelet phase reconstruction.
Axtell, Travis W; Cristi, Roberto
2013-05-01
Phase reconstruction is used for feedback control in adaptive optics systems. To achieve performance metrics for high actuator density or with limited processing capabilities on spacecraft, a wavelet signal processing technique is advantageous. Previous derivations of this technique have been limited to the Haar wavelet. This paper derives the relationship and algorithms to reconstruct phase with O(n) computational complexity for wavelets with the orthogonal property. This has additional benefits for performance with noise in the measurements. We also provide details on how to handle the boundary condition for telescope apertures. PMID:23695316
Birdsong Denoising Using Wavelets.
Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal
2016-01-01
Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391
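The wavelet packet decomposition named in the abstract differs from the plain DWT in that both the low-pass and the high-pass branches are split at every level. A minimal Haar-based sketch (the filter choice is an assumption; the paper's actual wavelet and band-pass stages are not reproduced here):

```python
import math

S = 1.0 / math.sqrt(2.0)

def wp_decompose(x, depth):
    """Full Haar wavelet packet tree: unlike the plain DWT, *both* the
    low-pass and high-pass branches are split at every level."""
    if depth == 0 or len(x) < 2:
        return x
    lo = [S * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    hi = [S * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return [wp_decompose(lo, depth - 1), wp_decompose(hi, depth - 1)]

def wp_threshold(node, t):
    """Zero out packet coefficients below magnitude t (denoising step)."""
    if not isinstance(node[0], list):
        return [c if abs(c) > t else 0.0 for c in node]
    return [wp_threshold(node[0], t), wp_threshold(node[1], t)]

def wp_reconstruct(node):
    """Invert the packet tree back to a time-domain signal."""
    if not isinstance(node[0], list):
        return node
    lo = wp_reconstruct(node[0])
    hi = wp_reconstruct(node[1])
    out = []
    for a, d in zip(lo, hi):
        out.append(S * (a + d))
        out.append(S * (a - d))
    return out
```

Denoising then amounts to `wp_reconstruct(wp_threshold(wp_decompose(x, depth), t))`; with t = 0 the round trip reconstructs the signal exactly.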
Joint wavelet-based coding and packetization for video transport over packet-switched networks
NASA Astrophysics Data System (ADS)
Lee, Hung-ju
1996-02-01
In recent years, wavelet theory applied to image, audio, and video compression has been extensively studied. However, pursuing compression ratio alone without considering the underlying networking systems is unrealistic, especially for multimedia applications over networks. In this paper, we present an integrated approach that attempts to preserve the advantages of wavelet-based image coding and to provide a degree of robustness to lost packets over packet-switched networks. Two different packetization schemes, called the intrablock-oriented (IAB) and interblock-oriented (IRB) schemes, in conjunction with wavelet-based coding, are presented. Our approach is evaluated under two different packet loss models with various packet loss probabilities through simulations driven by real video sequences.
Wavelet packets feasibility study for the design of an ECG compressor.
Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Godino-Llorente, Juan Ignacio; Barner, Kenneth E
2007-04-01
Most of the recent electrocardiogram (ECG) compression approaches developed with the wavelet transform are implemented using the discrete wavelet transform. Conversely, wavelet packets (WP) are not extensively used, although they provide an adaptive decomposition for representing signals. In this paper, we present a thresholding-based method to encode ECG signals using WP. The design of the compressor has been carried out according to two main goals: (1) the scheme should be simple enough to allow real-time implementation; (2) quality, i.e., the reconstructed signal should be as similar as possible to the original signal. The proposed scheme is versatile in that neither QRS detection nor a priori signal information is required; it can thus be applied to any ECG. Results show that WP perform efficiently and can now be considered as an alternative in ECG compression applications. PMID:17405386
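Reconstruction quality in ECG compression studies is commonly reported as the percent RMS difference (PRD); the abstract does not name its measure, so this is a generic sketch of that standard metric rather than the paper's exact evaluation code:

```python
import math

def prd(original, reconstructed):
    """Percent RMS difference between a signal and its reconstruction:
    100 * sqrt(sum((x - x_hat)^2) / sum(x^2)). Lower is better."""
    num = sum((a - b) ** 2 for a, b in zip(original, reconstructed))
    den = sum(a * a for a in original)
    return 100.0 * math.sqrt(num / den)
```

A perfect reconstruction gives PRD = 0, and replacing the signal with zeros gives PRD = 100, which brackets the useful operating range of a thresholding-based compressor.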
A wavelet phase filter for emission tomography
Olsen, E.T.; Lin, B.
1995-07-01
The presence of a high level of noise is characteristic of some tomographic imaging techniques such as positron emission tomography (PET). Wavelet methods can smooth out noise while preserving significant features of images. Mallat et al. proposed a wavelet-based denoising scheme exploiting wavelet modulus maxima, but the scheme is sensitive to noise. In this study, the authors explore the properties of the wavelet phase, with a focus on reconstruction of emission tomography images. Specifically, they show that the wavelet phase of regular Poisson noise under a Haar-type wavelet transform converges in distribution to a random variable uniformly distributed on [0, 2π). They then propose three wavelet-phase-based denoising schemes which exploit this property: edge tracking, local phase variance thresholding, and scale phase variation thresholding. Some numerical results are also presented. The numerical experiments indicate that wavelet phase techniques show promise for wavelet-based denoising methods.
A signal invariant wavelet function selection algorithm.
Garg, Girisha
2016-04-01
This paper addresses the problem of mother wavelet selection for wavelet signal processing in feature extraction and pattern recognition. The problem is formulated as an optimization criterion, where a wavelet library is defined using a set of parameters to find the best mother wavelet function. For estimating the fitness function, adopted to evaluate the performance of the wavelet function, analysis of variance is used. Genetic algorithm is exploited to optimize the determination of the best mother wavelet function. For experimental evaluation, solutions for best mother wavelet selection are evaluated on various biomedical signal classification problems, where the solutions of the proposed algorithm are assessed and compared with manual hit-and-trial methods. The results show that the solutions of automated mother wavelet selection algorithm are consistent with the manual selection of wavelet functions. The algorithm is found to be invariant to the type of signals used for classification. PMID:26253283
Heart Disease Detection Using Wavelets
NASA Astrophysics Data System (ADS)
González S., A.; Acosta P., J. L.; Sandoval M., M.
2004-09-01
We develop a wavelet-based method to obtain standardized gray-scale charts of both healthy hearts and hearts suffering left ventricular hypertrophy. The hypothesis that early malfunctioning of the heart can be detected must be tested by comparing the wavelet analysis of the corresponding ECG with the limit cases. Several important parameters shall be taken into account, such as age, sex, and electrolytic changes.
Wavelet analysis in virtual colonoscopy
NASA Astrophysics Data System (ADS)
Greenblum, Sharon; Li, Jiang; Huang, Adam; Summers, Ronald M.
2006-03-01
The computed tomographic colonography (CTC) computer aided detection (CAD) program is a new method in development to detect colon polyps in virtual colonoscopy. While high sensitivity is consistently achieved, additional features are desired to increase specificity. In this paper, a wavelet analysis was applied to CTCCAD outputs in an attempt to filter out false positive detections. 52 CTCCAD detection images were obtained using a screen capture application. 26 of these images were real polyps, confirmed by optical colonoscopy, and 26 were false positive detections. A discrete wavelet transform of each image was computed with the MATLAB wavelet toolbox using the Haar wavelet at levels 1-5 in the horizontal, vertical and diagonal directions. From the resulting wavelet coefficients at levels 1-3 for all directions, a 72-element feature vector was obtained for each image, consisting of descriptive statistics such as mean, variance, skew, and kurtosis at each level and orientation, as well as error statistics based on a linear predictor of neighboring wavelet coefficients. The vectors for each of the 52 images were then run through a support vector machine (SVM) classifier using ten-fold cross-validation training to determine its efficiency in distinguishing polyps from false positives. The SVM results showed 100% sensitivity and 51% specificity in correctly identifying the status of detections. If this technique were added to the filtering process of the CTCCAD polyp detection scheme, the number of false positive results could be reduced significantly.
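The per-subband descriptive statistics named in the abstract (mean, variance, skew, kurtosis) are straightforward to compute; a generic sketch, not the paper's MATLAB code:

```python
import math

def moments(coeffs):
    """Descriptive statistics of one wavelet subband, of the kind used
    as texture features: mean, variance, skewness, kurtosis."""
    n = len(coeffs)
    mean = sum(coeffs) / n
    var = sum((c - mean) ** 2 for c in coeffs) / n
    sd = math.sqrt(var) or 1.0  # guard against a constant subband
    skew = sum(((c - mean) / sd) ** 3 for c in coeffs) / n
    kurt = sum(((c - mean) / sd) ** 4 for c in coeffs) / n
    return mean, var, skew, kurt
```

Applying this to the coefficients of 3 levels x 3 orientations, with two extra error statistics per subband, yields a feature vector of the size the abstract describes.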
Wavelet-based polarimetry analysis
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik
2014-06-01
Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes polarization parameters, which are calculated from 0°, 45°, 90°, 135°, right-circular, and left-circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared polarimetry imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show that wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
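The Stokes parameters follow directly from the six intensity measurements listed in the abstract; this is the standard textbook combination, shown per pixel:

```python
import math

def stokes(i0, i45, i90, i135, irc, ilc):
    """Stokes vector from intensities through polarizers at 0°, 45°,
    90°, 135°, and right-/left-circular analyzers."""
    s0 = i0 + i90      # total intensity
    s1 = i0 - i90      # horizontal vs vertical preference
    s2 = i45 - i135    # +45° vs -45° preference
    s3 = irc - ilc     # right vs left circular preference
    return s0, s1, s2, s3

def degree_of_linear_polarization(s0, s1, s2):
    """DoLP, a common per-pixel feature for suppressing natural clutter."""
    return math.sqrt(s1 * s1 + s2 * s2) / s0
```

A fully horizontally polarized pixel (all intensity at 0°, split evenly at the oblique and circular analyzers) gives the Stokes vector (1, 1, 0, 0) and DoLP = 1; the wavelet analysis of the paper would then operate on such per-pixel parameter maps.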
Tests for Wavelets as a Basis Set
NASA Astrophysics Data System (ADS)
Baker, Thomas; Evenbly, Glen; White, Steven
A wavelet transformation is a special type of filter usually reserved for image processing and other applications. We develop metrics to evaluate wavelets for general problems on one-dimensional test systems. The goal is to eventually use a wavelet basis in electronic structure calculations. We compare a variety of orthogonal wavelets such as coiflets, symlets, and Daubechies wavelets. We also evaluate a new type of orthogonal wavelet with dilation factor three, which is both symmetric and compact in real space. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC008696.
General inversion formulas for wavelet transforms
NASA Astrophysics Data System (ADS)
Holschneider, Matthias
1993-09-01
This article is the continuation of a series of articles about group theory and wavelet analysis [A. Grossmann, J. Morlet, and T. Paul, J. Math. Phys. 26, 2473 (1985)]. As is well known in the case of the affine group, the reconstruction wavelet and the analyzing wavelet need not be identical. In this article it is shown that this holds for arbitrary groups. In addition, it is shown that even for nonadmissible analyzing wavelets the wavelet transform may be inverted. Accordingly, the image of the wavelet transform can be characterized by many different reproducing kernels.
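In standard one-dimensional notation (conventions vary across the literature; this is a generic form consistent with the abstract's claim, not a quotation from the article), the cross-wavelet reconstruction identity for an analyzing wavelet g and a different reconstruction wavelet h reads:

```latex
% Wavelet transform of f with analyzing wavelet g:
% (W_g f)(a,b) = \frac{1}{\sqrt{a}} \int f(x)\,
%                \overline{g\!\left(\tfrac{x-b}{a}\right)}\, dx
f(x) \;=\; \frac{1}{C_{g,h}} \int_{0}^{\infty}\!\!\int_{-\infty}^{\infty}
  (W_g f)(a,b)\, \frac{1}{\sqrt{a}}\,
  h\!\left(\frac{x-b}{a}\right) \frac{db\, da}{a^{2}},
\qquad
C_{g,h} \;=\; \int_{0}^{\infty}
  \overline{\hat g(\xi)}\, \hat h(\xi)\, \frac{d\xi}{\xi}.
```

Reconstruction succeeds whenever the cross-admissibility constant C_{g,h} is finite and nonzero, which is why h need not equal g and why even a nonadmissible analyzing wavelet can be paired with a suitable reconstruction wavelet.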
An Attack on Wavelet Tree Shuffling Encryption Schemes
NASA Astrophysics Data System (ADS)
Assegie, Samuel; Salama, Paul; King, Brian
With the ubiquity of the internet and advances in technology, especially digital consumer electronics, demand for online multimedia services is ever increasing. While it is possible to achieve a great reduction in the bandwidth consumed by multimedia data such as images and video through compression, security remains a great concern. Traditional cryptographic algorithms/systems for data security are often not fast enough to process the vast amounts of data generated by multimedia applications under real-time constraints. Selective encryption is a new scheme for multimedia content protection. It involves encrypting only a portion of the data to reduce computational complexity (the amount of data to encrypt) while preserving a sufficient level of security. To achieve this, many selective encryption schemes have been presented in the literature. One of them is wavelet tree shuffling. In this paper we assess the security of a wavelet tree shuffling encryption scheme.
Hierarchical structure analysis of interstellar clouds using nonorthogonal wavelets
NASA Technical Reports Server (NTRS)
Langer, William D.; Wilson, Robert W.; Anderson, Charles H.
1993-01-01
We introduce the use of Laplacian pyramid transforms, a form of nonorthogonal wavelets, to analyze the structure of interstellar clouds. These transforms are generally better suited for analyzing structure than orthogonal wavelets because they provide more flexibility in the structure of the encoding functions - here circularly symmetric bandpass filters. This technique is applied to CO maps of Barnard 5. In the (C-13)O maps, for example, we identify 60 different fragments and clumps, as well as several cavities, or bubbles. Many features show evidence of hierarchical structure, with most of the power in the largest wavelengths. The clumps have a more chaotic structure at small wavelengths than expected for Kolmogorov turbulence, and a mass distribution proportional to M^(-5/3). The structure analysis is consistent with a picture where gravity, energy injection, compressible turbulence, and coalescence play an important role in the dynamics of B5.
Adaptive wavelet collocation method simulations of Rayleigh-Taylor instability
NASA Astrophysics Data System (ADS)
Reckinger, S. J.; Livescu, D.; Vasilyev, O. V.
2010-12-01
Numerical simulations of single-mode, compressible Rayleigh-Taylor instability are performed using the adaptive wavelet collocation method (AWCM), which utilizes wavelets for dynamic grid adaptation. Due to the physics-based adaptivity and direct error control of the method, AWCM is ideal for resolving the wide range of scales present in the development of the instability. The problem is initialized consistent with the solutions from linear stability theory. Non-reflecting boundary conditions are applied to prevent the contamination of the instability growth by pressure waves created at the interface. AWCM is used to perform direct numerical simulations that match the early-time linear growth, the terminal bubble velocity and a reacceleration region.
A New Approach for Fingerprint Image Compression
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this represents about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that is better justified theoretically. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.
Interactive Display of Surfaces Using Subdivision Surfaces and Wavelets
Duchaineau, M A; Bertram, M; Porumbescu, S; Hamann, B; Joy, K I
2001-10-03
Complex surfaces and solids are produced by large-scale modeling and simulation activities in a variety of disciplines. Productive interaction with these simulations requires that these surfaces or solids be viewable at interactive rates; yet many of these surfaces/solids can contain hundreds of millions of polygons/polyhedra. Interactive display of these objects requires compression techniques to minimize storage, and fast view-dependent triangulation techniques to drive the graphics hardware. In this paper, we review recent advances in subdivision-surface wavelet compression and optimization that can be used to provide a framework for both compression and triangulation. These techniques can be used to produce suitable approximations of complex surfaces of arbitrary topology, and can be used to determine suitable triangulations for display. The techniques can be used in a variety of applications in computer graphics, computer animation and visualization.
Spherical wavelet transform: linking global seismic tomography and imaging
NASA Astrophysics Data System (ADS)
Pan, J.
2001-12-01
Each year, numerous seismic tomographic images are published based on new parameterizations, damping schemes or datasets. Though researchers agree generally on the longer-wavelength seismic structures, large discrepancies still exist among the various models. The data are normally noisy, so the inverse problem is often ill-conditioned. The sampling rate may suffice to resolve long-wavelength structures when we parameterize the earth to a low harmonic order. However, higher-order signals (slabs, plume-like structures, and local seismic velocity anomalies (SVA)) on a global scale remain under-sampled. Finer discretization of the model space increases the problem size dramatically but does not change the nature of the problem. The main challenge is thus to find an efficient representation of the model space that solves for the lower- and higher-degree SVAs simultaneously. Spherical wavelets are a good choice because of their compact support (localization) in both the spatial and frequency domains. If SVAs are viewed as an image, they consist of smoothly varying signals superposed with small-scale local changes, and can be greatly compressed and better represented using spherical wavelets. By mapping the model parameters into a nested multi-resolution analysis (MRA) space, the signals become comparable in size, so stable solutions can be achieved at every level of resolution without introducing subjective damping. The efficiency of wavelets and MRA in denoising and compressing signals can be used to reduce the problem size and eliminate the effects of noisy data. This new algorithm can achieve better resolving power for 2D and 3D seismic tomography by linking image processing with inverse theory. Advances in spherical wavelets enable the introduction of wavelet analysis and a new MRA parameterization into global tomography studies. In this paper, we present the new inversion method based on the spherical wavelet transform. An application to 2D surface wave
Al-Ajlouni, A F; Abo-Zahhad, M; Ahmed, S M; Schilling, R J
2008-01-01
Compression of electrocardiography (ECG) signals is necessary for efficient storage and transmission of digitized ECG data. The discrete wavelet transform (DWT) has recently emerged as a powerful technique for ECG signal compression due to its multi-resolution signal decomposition and locality properties. This paper presents an ECG compressor based on the selection of optimum threshold levels of DWT coefficients in different subbands, chosen to achieve maximum data volume reduction while preserving the significant signal morphology features upon reconstruction. First, the ECG is wavelet transformed into m subbands and the wavelet coefficients of each subband are thresholded using an optimal threshold level. Thresholding removes excessively small features and replaces them with zeroes. The threshold levels are defined for each signal so that the bit rate is minimized for a target distortion or, alternatively, the distortion is minimized for a target compression ratio. After thresholding, the resulting significant wavelet coefficients are coded using the multi-embedded zero tree (MEZW) coding technique. In order to assess the performance of the proposed compressor, records from the MIT-BIH Arrhythmia Database were compressed at different distortion levels, measured by the percentage rms difference (PRD), and compression ratios (CR). The method achieves good CR values with excellent reconstruction quality that compares favourably with various classical and state-of-the-art ECG compressors. Finally, it should be noted that the proposed method is flexible in controlling the quality of the reconstructed signals and the volume of the compressed signals by establishing a target PRD or a target CR a priori. PMID:19005960
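The distortion-targeted threshold selection can be sketched in a hedged, simplified form: this is not the authors' optimizer, and a one-level Haar transform stands in for the m-subband DWT. Bisection finds the largest single threshold whose reconstruction still meets a target PRD.

```python
import math

def haar_fwd(x):
    """One-level Haar analysis: averages a and details d."""
    a = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x)//2)]
    return a, d

def haar_inv(a, d):
    """Exact inverse of haar_fwd."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def prd(x, xr):
    """Percentage rms difference between original and reconstruction."""
    num = sum((xi - ri) ** 2 for xi, ri in zip(x, xr))
    den = sum(xi ** 2 for xi in x)
    return 100.0 * math.sqrt(num / den)

def threshold_for_target_prd(x, target_prd, iters=40):
    """Bisect for the largest threshold whose reconstruction meets target_prd."""
    a, d = haar_fwd(x)
    lo, hi = 0.0, max(abs(c) for c in d)
    for _ in range(iters):
        mid = (lo + hi) / 2
        dq = [c if abs(c) > mid else 0.0 for c in d]
        if prd(x, haar_inv(a, dq)) <= target_prd:
            lo = mid      # still within the distortion budget: raise threshold
        else:
            hi = mid
    return lo
```

A larger threshold zeroes more coefficients (more compression) at the cost of a higher PRD; the returned value is always feasible because the lower bisection bound only moves when the target is met.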
Multiresolution Distance Volumes for Progressive Surface Compression
Laney, D E; Bertram, M; Duchaineau, M A; Max, N L
2002-04-18
We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high-genus surfaces.
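An O(n) distance transform in the spirit of the classic lower-envelope algorithm of Felzenszwalb and Huttenlocher is sketched below for the 1D squared-distance case; this is an illustrative stand-in, not necessarily the authors' transform. Feature cells are seeded with 0 and all other cells with a large finite constant (rather than infinity, to avoid inf-minus-inf arithmetic).

```python
def squared_distance_transform_1d(f):
    """O(n) squared Euclidean distance transform of a sampled cost f."""
    n = len(f)
    d = [0.0] * n
    v = [0] * n            # indices of parabolas forming the lower envelope
    z = [0.0] * (n + 1)    # boundaries between envelope parabolas
    k = 0
    z[0], z[1] = -1e20, 1e20
    for q in range(1, n):
        # intersection of parabola rooted at q with the rightmost envelope one
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:
            k -= 1         # new parabola hides the previous one: pop it
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, 1e20
    k = 0
    for q in range(n):     # read off the envelope left to right
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d
```

Seeding with `BIG = 1e12` at non-feature cells yields the squared distance to the nearest feature cell; the 2D/3D volume case applies the same pass along each axis in turn.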
Group-normalized wavelet packet signal processing
NASA Astrophysics Data System (ADS)
Shi, Zhuoer; Bao, Zheng
1997-04-01
Since traditional wavelet and wavelet packet coefficients do not exactly represent the strength of signal components at each time(space)-frequency tiling, the group-normalized wavelet packet transform (GNWPT) is presented for nonlinear signal filtering and extraction from clutter or noise, together with a space(time)-frequency masking technique. An extended F-entropy improves the performance of the GNWPT. For perception-based image processing, soft-logic masking is emphasized to remove aliasing while preserving edges. Lawton's method for constructing complex-valued wavelets is extended to generate complex-valued compactly supported wavelet packets for radar signal extraction. These wavelet packets are symmetric and unitarily orthogonal. Well-defined wavelet packets are chosen by analyzing their time-frequency characteristics. For real-valued signal processing, such as images and ECG signals, compactly supported spline or biorthogonal wavelet packets are preferred for their excellent denoising and filtering qualities.
A Mellin transform approach to wavelet analysis
NASA Astrophysics Data System (ADS)
Alotta, Gioacchino; Di Paola, Mario; Failla, Giuseppe
2015-11-01
The paper proposes a fractional calculus approach to continuous wavelet analysis. Upon introducing a Mellin transform expression of the mother wavelet, it is shown that the wavelet transform of an arbitrary function f(t) can be given a fractional representation involving a suitable number of Riesz integrals of f(t), and corresponding fractional moments of the mother wavelet. This result serves as a basis for an original approach to wavelet analysis of linear systems under arbitrary excitations. In particular, using the proposed fractional representation for the wavelet transform of the excitation, it is found that the wavelet transform of the response can readily be computed by a Mellin transform expression, with fractional moments obtained from a set of algebraic equations whose coefficient matrix applies for any scale a of the wavelet transform. The robustness and computational efficiency of the proposed approach are demonstrated in the paper.
Wavelet correlations in the p model
Greiner, M. (Institut fuer Theoretische Physik, Justus Liebig Universitaet, 35392 Giessen); Lipa, P.; Carruthers, P.
1995-03-01
We suggest applying the concept of wavelet transforms to the study of correlations in multiparticle physics. Both the usual correlation functions as well as the wavelet transformed ones are calculated for the p model, which is a simple but tractable random cascade model. For this model, the wavelet transform decouples correlations between fluctuations defined on different scales. The advantageous properties of factorial moments are also shared by properly defined factorial wavelet correlations.
Adaptive Multilinear Tensor Product Wavelets.
Weiss, Kenneth; Lindstrom, Peter
2016-01-01
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells. PMID:26529742
NASA Technical Reports Server (NTRS)
Sjoegreen, B.; Yee, H. C.
2001-01-01
The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aims at minimizing numerical dissipation for high-speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed the artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor depends on two tuning parameters and is highly problem dependent. To minimize parameter tuning and problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converted to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods, leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these
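The Lipschitz-exponent idea admits a crude illustration. The sketch below is a hedged stand-in, not the paper's sensor: Haar details replace the B-spline or Harten-based redundant wavelets, and the cutoff of 0.5 on the decay exponent is an arbitrary illustrative choice. Where detail coefficients fail to decay from coarse to fine scale, the local regularity estimate drops and extra dissipation would be switched on.

```python
import math

def detect_shocks(x, smooth_cutoff=0.5, eps=1e-12):
    """Flag coarse cells whose Haar details fail to decay across scales."""
    half = len(x) // 2
    d1 = [(x[2*i] - x[2*i+1]) / 2 for i in range(half)]          # level-1 details
    a1 = [(x[2*i] + x[2*i+1]) / 2 for i in range(half)]          # level-1 averages
    d2 = [(a1[2*i] - a1[2*i+1]) / 2 for i in range(half // 2)]   # level-2 details
    flags = []
    for i, c2 in enumerate(d2):
        c1 = max(abs(d1[2*i]), abs(d1[2*i+1]))
        if c1 < eps and abs(c2) < eps:
            flags.append(False)        # locally constant: certainly smooth
            continue
        # crude regularity estimate: coarse/fine magnitude ratio in log2
        alpha = math.log2((abs(c2) + eps) / (c1 + eps))
        flags.append(alpha < smooth_cutoff)  # slow decay: likely a shock
    return flags
```

On a linear ramp the coarse detail is twice the fine one (exponent near 1, smooth); across a jump both scales see the same magnitude (exponent near 0 or below, flagged).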
Discrete wavelet transform core for image processing applications
NASA Astrophysics Data System (ADS)
Savakis, Andreas E.; Carbone, Richard
2005-02-01
This paper presents a flexible hardware architecture for performing the Discrete Wavelet Transform (DWT) on a digital image. The proposed architecture uses a variation of the lifting scheme and provides advantages that include small memory requirements, fixed-point arithmetic implementation, and a small number of arithmetic computations. The DWT core may be used for image processing operations such as denoising and image compression. For example, the JPEG2000 still image compression standard uses the Cohen-Daubechies-Feauveau (CDF) 5/3 and CDF 9/7 DWT for lossless and lossy image compression, respectively. Simple wavelet image denoising techniques resulted in improved images of up to 27 dB PSNR. The DWT core is modeled using MATLAB and VHDL. The VHDL model is synthesized to a Xilinx FPGA to demonstrate hardware functionality. The CDF 5/3 and CDF 9/7 versions of the DWT are both modeled and used as comparisons. The execution time for performing both DWTs is nearly identical, at approximately 14 clock cycles per image pixel for one level of DWT decomposition. The hardware area generated for the CDF 5/3 is around 15,000 gates, using only 5% of the Xilinx FPGA hardware area, at a 2.185 MHz maximum clock speed and 24 mW power consumption.
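The reversible CDF 5/3 lifting steps used by JPEG2000 are compact enough to sketch in full. The version below assumes an even-length 1D signal, a single decomposition level, and symmetric extension at the boundaries; it uses only integer adds and shifts, which is what makes the fixed-point hardware implementation attractive.

```python
def fwd53(x):
    """One level of the reversible CDF 5/3 lifting transform (JPEG2000 style)."""
    n = len(x)                    # assumed even
    half = n // 2
    d = []                        # predict step: details from odd samples
    for i in range(half):
        left = x[2*i]
        right = x[2*i+2] if 2*i+2 < n else x[n-2]   # symmetric extension
        d.append(x[2*i+1] - ((left + right) >> 1))
    s = []                        # update step: smoothed even samples
    for i in range(half):
        dp = d[i-1] if i > 0 else d[0]              # extended detail at left edge
        s.append(x[2*i] + ((dp + d[i] + 2) >> 2))
    return s, d

def inv53(s, d):
    """Exact integer inverse: undo update, then undo predict."""
    half = len(s)
    n = 2 * half
    x = [0] * n
    for i in range(half):
        dp = d[i-1] if i > 0 else d[0]
        x[2*i] = s[i] - ((dp + d[i] + 2) >> 2)
    for i in range(half):
        left = x[2*i]
        right = x[2*i+2] if 2*i+2 < n else x[n-2]
        x[2*i+1] = d[i] + ((left + right) >> 1)
    return x
```

Because each lifting step is inverted exactly in integer arithmetic, reconstruction is bit-exact, which is why this filter supports the lossless mode.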
A 1D wavelet filtering for ultrasound images despeckling
NASA Astrophysics Data System (ADS)
Dahdouh, Sonia; Dubois, Mathieu; Frenoux, Emmanuelle; Osorio, Angel
2010-03-01
Ultrasound image appearance is characterized by speckle, shadows, signal dropout and low contrast, which make such images difficult to process and lead to a very poor signal-to-noise ratio. Therefore, for most imaging applications, a denoising step is necessary before medical imaging algorithms can be applied successfully to such images. However, due to speckle statistics, denoising and enhancing edges in these images without inducing additional blurring is a real challenge on which usual filters often fail. To deal with such problems, many papers work on B-mode images under the assumption that the noise is purely multiplicative. Such an assertion can be misleading because of internal pre-processing, such as log compression, performed in the ultrasound device. To address these questions, we designed a novel filtering method based on the 1D radiofrequency signal. Indeed, since B-mode images are initially composed of 1D signals, and since the log compression made by ultrasound devices modifies the noise statistics, we decided to filter the 1D radiofrequency signal envelope directly, before log compression and image reconstitution, in order to conserve as much information as possible. A biorthogonal wavelet transform is applied to the log transform of each signal, and an adaptive 1D split-and-merge-like algorithm is used to denoise the wavelet coefficients. Experiments were carried out on synthetic data sets simulated with the Field II simulator, and the results show that our filter outperforms classical speckle filtering methods such as the Lee, non-local means and SRAD filters.
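The coefficient-denoising step can be illustrated with a minimal stand-in: a Haar transform and a single global soft threshold, rather than the paper's biorthogonal transform and adaptive split-and-merge rule.

```python
def haar_analysis(x):
    """One-level Haar analysis into averages and details."""
    a = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x)//2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x)//2)]
    return a, d

def haar_synthesis(a, d):
    """Exact inverse of haar_analysis."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def soft_threshold(c, t):
    """Shrink coefficients toward zero; kill anything below the threshold."""
    if abs(c) <= t:
        return 0.0
    return c - t if c > 0 else c + t

def denoise(x, t):
    """Threshold only the detail band, keeping the smooth trend."""
    a, d = haar_analysis(x)
    return haar_synthesis(a, [soft_threshold(c, t) for c in d])
```

Small alternating perturbations live entirely in the detail band, so thresholding them out recovers the underlying level while leaving genuine low-frequency structure untouched.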
Wavelet approach to accelerator problems. 2: Metaplectic wavelets
Fedorova, A.; Zeitlin, M.; Parsa, Z.
1997-05-01
This is the second part of a series of talks in which the authors present applications of wavelet analysis to polynomial approximations for a number of accelerator physics problems. According to the orbit method and by using construction from the geometric quantization theory they construct the symplectic and Poisson structures associated with generalized wavelets by using metaplectic structure and corresponding polarization. The key point is a consideration of semidirect product of Heisenberg group and metaplectic group as subgroup of automorphisms group of dual to symplectic space, which consists of elements acting by affine transformations.
A Progressive Image Compression Method Based on EZW Algorithm
NASA Astrophysics Data System (ADS)
Du, Ke; Lu, Jianming; Yahagi, Takashi
A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) DWT coefficients' degeneration from high-scale subbands to low-scale subbands. In this paper, we have improved the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.
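Concept (2), the zerotree prediction, reduces to a recursive significance test: a coefficient is a zerotree root if it and every descendant in the subband tree are insignificant against the current threshold. The sketch below assumes a hypothetical parent-to-children map rather than EZW's actual quadtree subband indexing.

```python
def is_zerotree_root(coeffs, children, node, T):
    """True if node and all its descendants are insignificant w.r.t. T."""
    if abs(coeffs[node]) >= T:
        return False
    return all(is_zerotree_root(coeffs, children, c, T)
               for c in children.get(node, []))

# Toy coefficient tree: node 0 is the coarse-scale parent of nodes 1 and 4,
# and node 1 is the parent of nodes 2 and 3 (illustrative values).
coeffs = {0: 30, 1: 2, 2: 3, 3: 1, 4: 40}
children = {0: [1, 4], 1: [2, 3]}
```

With threshold 8, the subtree rooted at node 1 is entirely insignificant and would be coded with a single zerotree symbol, while nodes 0 and 4 are significant and coded individually.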
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding stage is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
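The core idea can be sketched in a hedged, much-simplified form (not the patented method): if adjacent index values are interchangeable to within the compressor's one-unit uncertainty, the choice within each pair (2k, 2k+1) can carry one hidden bit.

```python
def embed(indices, bits):
    """Hide one bit per index by selecting a member of each adjacent pair."""
    out = []
    b = iter(bits)
    for idx in indices:
        bit = next(b, None)
        if bit is None:
            out.append(idx)             # no more payload: leave index alone
        else:
            out.append((idx & ~1) | bit)  # force LSB to the payload bit
    return out

def extract(indices, nbits):
    """Recover the payload by reading back the pair selections."""
    return [idx & 1 for idx in indices[:nbits]]
```

Each index moves by at most one unit, so the perturbation stays within the quantizer's existing uncertainty; the entropy-coded size changes only marginally.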
Recent advances in wavelet technology
NASA Technical Reports Server (NTRS)
Wells, R. O., Jr.
1994-01-01
Wavelet research has been developing rapidly over the past five years, with significant activity at numerous universities in the academic world. In the industrial world, there have been developments at Aware, Inc., Lockheed, Martin-Marietta, TRW, Kodak, Exxon, and many others. Government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many other agencies. The recent literature includes a book indexing citations on this subject over the past decade, containing over 1,000 references and abstracts.
Adaptive wavelets and relativistic magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo
2016-03-01
We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets. These wavelets provide an adaptive implementation for simulations in multiple dimensions. The wavelet coefficients provide a measure of the local approximation error of the solution, and they place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests, including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.
Visual masking in wavelet compression for JPEG-2000
NASA Astrophysics Data System (ADS)
Daly, Scott J.; Zeng, Wenjun; Li, Jin; Lei, Shawmin
2000-04-01
We describe a nonuniform quantization scheme for JPEG2000 that leverages the masking properties of the visual system, in which the visibility of distortions declines as image energy increases. Derivatives of contrast transducer functions convey visual threshold changes due to local image content (i.e., the mask). For any frequency region, these functions have approximately the same shape once the threshold and mask contrast axes are normalized to the frequency's threshold. We have developed two methods that can work together to take advantage of masking. One uses a nonlinearity interposed between the visual weighting and uniform quantization stages at the encoder. In the decoder, the inverse nonlinearity is applied before the inverse transform. The resulting image-adaptive behavior is achieved with only a small overhead (the masking table), and without adding image assessment computations. This approach, however, underestimates masking near zero crossings within a frequency band, so an additional technique pools coefficient energy in a small local neighborhood around each coefficient within a frequency band. It does this in a causal manner to avoid overhead. The first effect of these techniques is to improve image quality as the image becomes more complex, and they allow image quality increases in applications where using the visual system's frequency response provides little advantage. A key area of improvement is in low-amplitude textures, in areas such as facial skin. The second effect relates to operational attributes, since for a given bitrate the image quality is more robust against variations in image complexity.
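The first method, a point nonlinearity applied before uniform quantization and inverted at the decoder, can be sketched as follows; the exponent 0.7 and unit step are illustrative stand-ins, not transducer-derived values from the paper.

```python
def encode_coeff(c, p=0.7, step=1.0):
    """Compressive point nonlinearity, then uniform quantization."""
    m = abs(c) ** p                 # large coefficients are squeezed together
    q = int(round(m / step))
    return q if c >= 0 else -q

def decode_coeff(q, p=0.7, step=1.0):
    """Inverse nonlinearity applied at the decoder."""
    m = abs(q) * step
    val = m ** (1.0 / p)
    return val if q >= 0 else -val
```

Because the nonlinearity is compressive, equal quantizer bins map back to reconstruction intervals that widen with amplitude: coarse quantization exactly where masking hides the error, fine quantization near threshold.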
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding stage is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
Wavelets based on Hermite cubic splines
NASA Astrophysics Data System (ADS)
Cvejnová, Daniela; Černá, Dana; Finěk, Václav
2016-06-01
In 2000, W. Dahmen et al. designed biorthogonal multi-wavelets adapted to the interval [0,1] on the basis of Hermite cubic splines. In recent years, several simpler constructions of wavelet bases based on Hermite cubic splines have been proposed. We focus here on wavelet bases with respect to which both the mass and stiffness matrices are sparse, in the sense that the number of nonzero elements in any column is bounded by a constant. A matrix-vector multiplication in adaptive wavelet methods can then be performed exactly with linear complexity for any second-order differential equation with constant coefficients. In this contribution, we briefly review these constructions and propose a new wavelet that leads to improved Riesz constants. The wavelets have four vanishing moments.
Wavelet-based face verification for constrained platforms
NASA Astrophysics Data System (ADS)
Sellahewa, Harin; Jassim, Sabah A.
2005-03-01
Human identification based on facial images is one of the most challenging tasks in comparison to identification based on other biometric features such as fingerprints, palm prints or iris. Facial recognition is the most natural and suitable method of identification for security-related applications. This paper is concerned with wavelet-based schemes for efficient face verification suitable for implementation on devices that are constrained in memory size and computational power, such as PDAs and smartcards. Besides minimal storage requirements, we should apply as few pre-processing procedures as possible, though such procedures are often needed to deal with variation in recording conditions. We propose the LL coefficients of wavelet-transformed face images as the feature vectors for face verification, and compare their performance with that of PCA applied in the LL-subband at levels 3, 4 and 5. We also compare the performance of various versions of our scheme with those of well-established PCA face verification schemes on the BANCA database as well as the ORL database. In many cases, the wavelet-only feature vector scheme has the best performance while maintaining efficiency and requiring minimal pre-processing steps. The significance of these results lies in their efficiency and suitability for platforms of constrained computational power and storage capacity (e.g. smartcards). Moreover, working at or beyond the level-3 LL-subband results in robustness against high-rate compression and noise interference.
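The LL-subband feature extraction can be approximated by repeated 2x2 averaging (the Haar LL band). The sketch below, including the Euclidean distance threshold, is illustrative rather than the authors' exact pipeline.

```python
def ll_subband(img, levels):
    """Approximate the level-k LL band by repeated 2x2 block averaging."""
    for _ in range(levels):
        img = [[(img[2*r][2*c] + img[2*r][2*c+1] +
                 img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
                for c in range(len(img[0]) // 2)]
               for r in range(len(img) // 2)]
    return [v for row in img for v in row]   # flatten to a feature vector

def verify(feat, template, threshold):
    """Accept if the Euclidean distance to the enrolled template is small."""
    dist = sum((a - b) ** 2 for a, b in zip(feat, template)) ** 0.5
    return dist <= threshold
```

Each level quarters the feature length, which is what makes the scheme attractive for smartcard-sized storage: a level-3 LL band of a 128x128 face image is only 16x16 values.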
Iterative image coding with overcomplete complex wavelet transforms
NASA Astrophysics Data System (ADS)
Kingsbury, Nick G.; Reeves, Tanya
2003-06-01
Overcomplete transforms, such as the Dual-Tree Complex Wavelet Transform, can offer more flexible signal representations than critically-sampled transforms such as the Discrete Wavelet Transform. However the process of selecting the optimal set of coefficients to code is much more difficult because many different sets of transform coefficients can represent the same decoded image. We show that large numbers of transform coefficients can be set to zero without much reconstruction quality loss by forcing compensatory changes in the remaining coefficients. We develop a system for achieving these coding aims of coefficient elimination and compensation, based on iterative projection of signals between the image domain and transform domain with a non-linear process (e.g. centre-clipping or quantization) applied in the transform domain. The convergence properties of such non-linear feedback loops are discussed and several types of non-linearity are proposed and analyzed. The compression performance of the overcomplete scheme is compared with that of the standard Discrete Wavelet Transform, both objectively and subjectively, and is found to offer advantages of up to 0.65 dB in PSNR and significant reduction in visibility of some types of coding artifacts.
NASA Astrophysics Data System (ADS)
Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah A.
2014-05-01
Video compression and encryption are essential for secure real-time video transmission. Applying both techniques simultaneously is a challenge when both size and quality matter in multimedia transmission. In this paper we propose a new technique for video compression and encryption. Both are based on edges extracted from the high-frequency sub-bands of a wavelet decomposition. The compression algorithm is based on a hybrid of discrete wavelet transforms, the discrete cosine transform, vector quantization, wavelet-based edge detection, and phase sensing. The compression encoding algorithm treats video reference and non-reference frames in two different ways. The encryption algorithm utilizes the A5 cipher combined with a chaotic logistic map to encrypt the significant parameters and wavelet coefficients. Both algorithms can be applied simultaneously after applying the discrete wavelet transform to each individual frame. Experimental results show that the proposed algorithms have the following features: high compression, acceptable quality, and resistance to statistical and brute-force attacks at low computational cost.
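The chaotic half of the keystream generation can be illustrated with a logistic map driving a simple XOR cipher. This is a hedged stand-in: the paper combines A5 with the logistic map, whereas here the logistic map alone produces the key bytes, and the seed x0 and parameter r are arbitrary illustrative key values.

```python
def logistic_keystream(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x) and quantize states to bytes."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        ks.append(int(x * 256) % 256)
    return ks

def xor_cipher(data, x0=0.3579, r=3.99):
    """XOR data bytes with the chaotic keystream; self-inverse by symmetry."""
    ks = logistic_keystream(x0, r, len(data))
    return [b ^ k for b, k in zip(data, ks)]
```

Because XOR is its own inverse, applying the cipher twice with the same (x0, r) key recovers the plaintext; with r near 4 the map is in its chaotic regime, so small key changes yield entirely different keystreams.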
A Wavelet Perspective on the Allan Variance.
Percival, Donald B
2016-04-01
The origins of the Allan variance trace back 50 years to two seminal papers, one by Allan (1966) and the other by Barnes (1966). Since then, the Allan variance has played a leading role in the characterization of high-performance time and frequency standards. Wavelets first arose in the early 1980s in the geophysical literature, and the discrete wavelet transform (DWT) became prominent in the late 1980s in the signal processing literature. Flandrin (1992) briefly documented a connection between the Allan variance and a wavelet transform based upon the Haar wavelet. Percival and Guttorp (1994) noted that one popular estimator of the Allan variance, the maximal overlap estimator, can be interpreted in terms of a version of the DWT now widely referred to as the maximal overlap DWT (MODWT). In particular, when the MODWT is based on the Haar wavelet, the variance of the resulting wavelet coefficients (the wavelet variance) is identical to the Allan variance when the latter is multiplied by one-half. The theory behind the wavelet variance can thus deepen our understanding of the Allan variance. In this paper, we review basic wavelet variance theory with an emphasis on the Haar-based wavelet variance and its connection to the Allan variance. We then note that estimation theory for the wavelet variance offers a means of constructing asymptotically correct confidence intervals (CIs) for the Allan variance without reverting to the common practice of specifying a power-law noise type a priori. We also review recent work on specialized estimators of the wavelet variance that are of interest when some observations are missing (gappy data) or in the presence of contamination (rogue observations or outliers). It is a simple matter to adapt these estimators to become estimators of the Allan variance. Finally, we note that wavelet variances based upon wavelets other than the Haar offer interesting generalizations of the Allan variance. PMID:26529757
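The stated relation can be checked numerically at unit scale. The MODWT Haar coefficient at level 1 is (y_t - y_{t-1})/2, so its mean square (the wavelet variance) equals one-half of the overlapping Allan variance at unit averaging time; the biased sample versions below are a minimal sketch of that identity.

```python
def allan_variance_tau1(y):
    """Overlapping Allan variance at unit averaging time."""
    diffs = [y[k+1] - y[k] for k in range(len(y) - 1)]
    return 0.5 * sum(d * d for d in diffs) / len(diffs)

def haar_modwt_variance_level1(y):
    """Mean square of level-1 Haar MODWT coefficients (y_t - y_{t-1})/2."""
    w = [(y[t] - y[t-1]) / 2.0 for t in range(1, len(y))]
    return sum(c * c for c in w) / len(w)
```

Both quantities reduce to a scaled mean of squared first differences, so the factor of one-half drops out algebraically rather than only asymptotically.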
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2005-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
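The progressive bit-plane coding stage mentioned above can be sketched as follows; this toy Python example shows only the plane-by-plane ordering idea (most significant plane first), not the CCSDS entropy coder or wavelet stage:

```python
# Encode non-negative transform coefficients bit plane by bit plane, most
# significant plane first. Decoding any prefix of the planes yields a
# progressively refined approximation; decoding all planes is lossless.
def bitplanes(coeffs, nplanes):
    return [[(c >> p) & 1 for c in coeffs] for p in range(nplanes - 1, -1, -1)]

def decode(planes, nplanes):
    vals = [0] * len(planes[0])
    for i, plane in enumerate(planes):
        p = nplanes - 1 - i
        vals = [v | (b << p) for v, b in zip(vals, plane)]
    return vals

coeffs = [13, 2, 7, 0, 31]
planes = bitplanes(coeffs, 5)
assert decode(planes, 5) == coeffs        # all planes: lossless
coarse = decode(planes[:2], 5)            # first two planes: lossy preview
assert coarse == [8, 0, 0, 0, 24]
assert all(a <= b for a, b in zip(coarse, coeffs))
```

Truncating the plane stream anywhere gives direct control over compressed data volume, which is the user-controllable rate/fidelity trade-off the recommendation provides.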
Foveated wavelet image quality index
NASA Astrophysics Data System (ADS)
Wang, Zhou; Bovik, Alan C.; Lu, Ligang; Kouloheris, Jack L.
2001-12-01
The human visual system (HVS) is highly non-uniform in sampling, coding, processing and understanding. The spatial resolution of the HVS is highest around the point of fixation (foveation point) and decreases rapidly with increasing eccentricity. Currently, most image quality measurement methods are designed for uniform resolution images. These methods do not correlate well with the perceived foveated image quality. Wavelet analysis delivers a convenient way to simultaneously examine localized spatial as well as frequency information. We developed a new image quality metric called foveated wavelet image quality index (FWQI) in the wavelet transform domain. FWQI considers multiple factors of the HVS, including the spatial variance of the contrast sensitivity function, the spatial variance of the local visual cut-off frequency, the variance of human visual sensitivity in different wavelet subbands, and the influence of the viewing distance on the display resolution and the HVS features. FWQI can be employed for foveated region of interest (ROI) image coding and quality enhancement. We show its effectiveness by using it as a guide for optimal bit assignment of an embedded foveated image coding system. The coding system demonstrates very good coding performance and scalability in terms of foveated objective as well as subjective quality measurement.
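The falloff of visual resolution with eccentricity that FWQI exploits is often captured by a cutoff-frequency model. The Python sketch below uses placeholder constants (e2, alpha, ct0) in the spirit of published foveation models; they are assumptions for illustration, not the FWQI parameters:

```python
import math

# Illustrative foveation model: the local cutoff frequency (cycles/degree) is
# highest at the fixation point and decreases with eccentricity e (degrees).
# The constants below are placeholders, not the paper's values.
def cutoff_frequency(e_deg, e2=2.3, alpha=0.106, ct0=1.0 / 64.0):
    return e2 * math.log(1.0 / ct0) / (alpha * (e_deg + e2))

fc = [cutoff_frequency(e) for e in (0.0, 2.0, 5.0, 10.0, 20.0)]
# Resolution decreases monotonically away from the foveation point.
assert all(a > b for a, b in zip(fc, fc[1:]))
```

A foveated coder would allocate fewer bits to wavelet coefficients whose spatial frequency exceeds the local cutoff at their image location.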
Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.
ERIC Educational Resources Information Center
Culik, Karel II; Kari, Jarkko
1994-01-01
Presents an inference algorithm that produces a weighted finite automaton (WFA) computing, in particular, the grayness functions of gray-tone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, both alone and in combination with wavelets, are discussed.
NASA Astrophysics Data System (ADS)
Lim, Se Hoon
Compressive holography estimates images from incomplete data by using sparsity priors. Compressive holography combines digital holography and compressive sensing. Digital holography consists of computational image estimation from data captured by an electronic focal plane array. Compressive sensing enables accurate reconstruction from such data using prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector. Specifically, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posed problem by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so an incoherent image estimation is designed that preserves sparsity in an incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution and wide field-of-view imaging. Coherent aperture synthesis is one method of increasing the aperture size of a detector. Scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors. A hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of the significant field
Uncertainty Principle and Elementary Wavelet
NASA Astrophysics Data System (ADS)
Bliznetsov, M.
This paper aims to define the time-and-spectrum characteristics of the elementary wavelet. An uncertainty relation between the width of a pulse amplitude spectrum and its time duration and extension in space is investigated. The analysis of the uncertainty relation is carried out for causal pulses with a minimum-phase spectrum. Amplitude spectra of elementary pulses are calculated using a modified Fourier spectral analysis. The modification of Fourier analysis is justified by the necessity of solving the zero-frequency paradox in amplitude spectra calculated with standard Fourier analysis. The modified Fourier spectral analysis has the same resolution along the frequency axis, excludes physically unobservable values from time-and-spectral presentations, and determines that the Heaviside unit step function has an infinitely wide spectrum equal to 1 along the whole frequency range, while the Dirac delta function has an infinitely wide spectrum in the infinitely high frequency scope. A difference in the propagation of wave and quasi-wave forms of energy motion is established from the analysis of the uncertainty relation. Unidirectional pulse velocity depends on the relative width of the pulse spectrum. Oscillating pulse velocity is constant in a given nondispersive medium. The elementary wavelet has the maximum relative spectrum width and minimum time duration among all the oscillating pulses whose velocity is equal to the velocity of the causal harmonic components of the pulse spectrum. The relative width of the elementary wavelet spectrum with regard to the resonance frequency is the square root of 4/3, approximately 1.1547..., and its relative width with regard to the center frequency is equal to 1. The more the relative width of a unidirectional pulse spectrum exceeds the relative width of the elementary wavelet spectrum, the higher the velocity of unidirectional pulse propagation. The concept of a velocity exceeding coefficient is introduced for pulses presenting the quasi-wave form of energy
Embedded wavelet-based face recognition under variable position
NASA Astrophysics Data System (ADS)
Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi
2015-02-01
For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains such as CCTV and the unlocking of electronic devices. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that subject position in a 3D space can vary up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed; that is the reason why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, from a standard x86-based computer to nanocomputers such as the RaspberryPi and SECO boards. For K = 3 and a database of 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a RaspberryPi (model B).
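The 2^(2K) storage reduction from keeping only the K-level approximation band can be illustrated with plain 2x2 block averaging (the Haar low-pass), a simplification of the actual filter bank; the synthetic image is an assumption for the demo:

```python
# Keeping only the approximation band of a 2-D wavelet decomposition shrinks a
# face template by 2^(2K): for K = 3, a factor of 64. Block averaging stands in
# for the Haar low-pass filter here.
def approx_level(img):
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c+1] + img[r+1][c] + img[r+1][c+1]) / 4.0
             for c in range(0, w, 2)] for r in range(0, h, 2)]

img = [[float((r * 17 + c * 5) % 256) for c in range(64)] for r in range(64)]
out = img
for _ in range(3):                  # K = 3 decomposition levels
    out = approx_level(out)

orig_px = 64 * 64
kept_px = len(out) * len(out[0])
assert orig_px // kept_px == 64     # 2^(2*3) = 64-fold reduction
```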
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding
NASA Astrophysics Data System (ADS)
Alesanco, Álvaro; García, José
2007-12-01
Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for clinical acceptability of reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality measured using the new distortion index wavelet weighted PRD (WWPRD), which reflects in a more accurate way the real clinical distortion of the compressed signal. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, enabling clinically useful reconstructed signals. Because of its computational efficiency, the method is suitable for real-time operation, making it very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. Results led to an excellent conclusion: the method controls quality very accurately, not only in mean value but also with a low standard deviation. The effects of ECG baseline wandering as well as noise on compression are also discussed. Baseline wandering provokes negative effects when using the WWPRD index to guarantee quality because this index is normalized by the signal energy; therefore, it is better to remove it before compression. On the other hand, noise causes an increase in signal energy, provoking an artificial increase in the coded signal bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10 preserves the signal quality, and thus they recommend this value to be used in the compression system.
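The plain PRD underlying indices such as WWPRD can be sketched in a few lines of Python; the paper's WWPRD additionally weights the error per wavelet subband, which this sketch omits, and the test signal is synthetic:

```python
import math

# Percentage root-mean-square difference: reconstruction error normalized by
# signal energy (which is why baseline wander inflates the denominator and
# distorts the index, as the paper notes).
def prd(original, reconstructed):
    num = sum((a - b) ** 2 for a, b in zip(original, reconstructed))
    den = sum(a * a for a in original)
    return 100.0 * math.sqrt(num / den)

x = [math.sin(0.1 * n) for n in range(500)]
noisy = [v + 0.01 for v in x]      # a crude stand-in for reconstruction error

assert prd(x, x) == 0.0
assert 0.0 < prd(x, noisy) < 100.0
```

A quality-guaranteeing coder in this spirit keeps refining the coded bit stream until the index falls below the chosen threshold.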
Next gen wavelets down-sampling preserving statistics
NASA Astrophysics Data System (ADS)
Szu, Harold; Miao, Lidan; Chanyagon, Pornchai; Cader, Masud
2007-04-01
We extend the second-generation discrete wavelet transform (DWT) of Sweldens to a next-generation (NG) DWT that preserves statistically salient features. The lossless NG_DWT accomplishes data compression of the "wellness baseline profiles (WBP)" of an aging population at home. For a medical monitoring system on the home front, we translate military experience to dual use for veterans and civilians alike, with the following three requirements: (i) Data compression: the necessary down-sampling reduces the immense amount of data of an individual WBP from hours to days and to weeks for primary caretakers in terms of moments, e.g. mean value, variance, etc., without the artifacts caused by arbitrary FFT windowing. (ii) Lossless: the NG_DWT must preserve the original data sets. (iii) Phase transition: the NG_DWT must capture the critical phase transition from wellness toward sickness with a simultaneous display of local statistical moments. According to the Nyquist sampling theory, assuming a band-limited wellness physiology, we must sample the WBP at least twice per day since it changes diurnally and seasonally. Since the NG_DWT, like the second generation, is lossless, we can reconstruct the original time series for a physician's second look. This NG_DWT technique can also help stock-market day traders monitor the volatility of multiple portfolios without artificial horizon artifacts.
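The second-generation (lifting-scheme) transform referenced above can be sketched with the integer Haar lifting steps; this is the generic Sweldens construction, not the paper's NG extension:

```python
# Lifting-scheme integer Haar transform: predict the odd samples from the even
# ones, then update the evens. Floor shifts keep everything integer, so the
# transform is exactly invertible (lossless), as requirement (ii) demands.
def haar_lift_forward(x):
    even, odd = x[0::2], x[1::2]
    d = [o - e for o, e in zip(odd, even)]          # predict step: detail
    s = [e + (di >> 1) for e, di in zip(even, d)]   # update step: smooth
    return s, d

def haar_lift_inverse(s, d):
    even = [si - (di >> 1) for si, di in zip(s, d)]
    odd = [di + e for di, e in zip(d, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend((e, o))
    return out

x = [5, 9, 3, 3, 12, 0, 7, 8]
s, d = haar_lift_forward(x)
assert haar_lift_inverse(s, d) == x   # perfect (lossless) reconstruction
```

The smooth band `s` carries the local means that a caretaker's summary display would use, while the detail band `d` preserves the information needed to reconstruct the raw series exactly.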
Lossless Compression on MRI Images Using SWT.
Anusuya, V; Raghavan, V Srinivasa; Kavitha, G
2014-10-01
Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices. PMID:24848945
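One level of the undecimated (stationary) Haar transform that SWT codecs build on can be sketched in 1-D; the paper's system uses the 2-D SWT on image slices plus embedded block coding, both omitted here, and circular boundary handling is an assumption:

```python
# One level of the 1-D stationary (undecimated) Haar wavelet transform: no
# downsampling, so both output bands have the input's length, and the
# transform is trivially invertible since a[t] + d[t] = x[t].
def swt_haar(x):
    n = len(x)
    a = [(x[t] + x[(t + 1) % n]) / 2.0 for t in range(n)]  # approximation
    d = [(x[t] - x[(t + 1) % n]) / 2.0 for t in range(n)]  # detail
    return a, d

x = [4.0, 8.0, 6.0, 2.0, 0.0, 10.0]
a, d = swt_haar(x)
assert [ai + di for ai, di in zip(a, d)] == x   # exact (lossless) reconstruction
assert len(a) == len(x)                          # undecimated: no subsampling
```

The redundancy (each band keeps full length) is the price of shift invariance; lossless recovery is what makes the transform usable in a medical codec where every pixel is valuable.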
Wavelet analysis of internal gravity waves
NASA Astrophysics Data System (ADS)
Hawkins, J.; Warn-Varnas, A.; Chin-Bing, S.; King, D.; Smolarkiewicsz, P.
2005-05-01
A series of model studies of internal gravity waves (igw) have been conducted for several regions of interest. Dispersion relations from the results have been computed using wavelet analysis as described by Meyers (1993). The wavelet transform is repeatedly applied over time and the components are evaluated with respect to their amplitude and peak position (Torrence and Compo, 1998). In this sense we have been able to compute dispersion relations from model results and from measured data. Qualitative agreement has been obtained in some cases. The results from wavelet analysis must be carefully interpreted because the igw models are fully nonlinear and wavelet analysis is fundamentally a linear technique. Nevertheless, a great deal of information describing igw propagation can be obtained from the wavelet transform. We address the domains over which wavelet analysis techniques can be applied and discuss the limits of their applicability.
On the wavelet optimized finite difference method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1994-01-01
When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
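The grid-refinement idea above, that a large wavelet coefficient at a location is essentially a request for extra grid points there, can be sketched in 1-D Python; the function, threshold, and grid are illustrative, not the paper's scheme:

```python
import math

# Compute Haar-like detail coefficients on a coarse 1-D grid and insert
# midpoints only in cells where the detail is large, i.e. where small-scale
# structure exists.
def refine(xs, f, tol):
    out = [xs[0]]
    for a, b in zip(xs, xs[1:]):
        detail = abs(f(b) - f(a)) / 2.0      # Haar-like detail across the cell
        if detail > tol:
            out.append((a + b) / 2.0)        # add a midpoint: local refinement
        out.append(b)
    return out

g = lambda x: math.tanh(20.0 * x)            # sharp transition near x = 0
xs = [-1.0 + 0.25 * i for i in range(9)]     # coarse uniform grid on [-1, 1]
fine = refine(xs, g, 0.2)

added = [p for p in fine if p not in xs]
assert len(added) > 0                        # points were added...
assert all(abs(p) <= 0.5 for p in added)     # ...only near the sharp transition
```

Since all work happens in physical space, nonlinear terms and boundary conditions pose no special difficulty, which is the method's stated advantage over working in wavelet coefficient space.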
NASA Astrophysics Data System (ADS)
Preda, Radu O.; Vizireanu, Dragos Nicolae
2011-01-01
The development of the information technology and computer networks facilitates easy duplication, manipulation, and distribution of digital data. Digital watermarking is one of the proposed solutions for effectively safeguarding the rightful ownership of digital images and video. We propose a public digital watermarking technique for video copyright protection in the discrete wavelet transform domain. The scheme uses binary images as watermarks. These are embedded in the detail wavelet coefficients of the middle wavelet subbands. The method is a combination of spread spectrum and quantization-based watermarking. Every bit of the watermark is spread over a number of wavelet coefficients with the use of a secret key by means of quantization. The selected wavelet detail coefficients from different subbands are quantized using an optimal quantization model, based on the characteristics of the human visual system (HVS). Our HVS-based scheme is compared to a non-HVS approach. The resilience of the watermarking algorithm is tested against a series of different spatial, temporal, and compression attacks. To improve the robustness of the algorithm, we use error correction codes and embed the watermark with spatial and temporal redundancy. The proposed method achieves a good perceptual quality and high resistance to a large spectrum of attacks.
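The quantization-based embedding the scheme combines with spread spectrum can be illustrated with quantization index modulation (QIM); the step size and coefficients below are illustrative, and the paper's HVS-adapted quantization model and key-driven spreading are omitted:

```python
# QIM embedding: quantize a wavelet coefficient to an even or odd multiple of
# the step delta depending on the watermark bit; extraction reads the parity.
def embed_bit(coeff, bit, delta=4.0):
    q = round(coeff / delta)
    if q % 2 != bit:
        q += 1 if coeff / delta >= q else -1   # nearest lattice point of right parity
    return q * delta

def extract_bit(coeff, delta=4.0):
    return int(round(coeff / delta)) % 2

coeffs = [10.3, -7.2, 0.4, 15.9, -2.6]
bits = [1, 0, 1, 1, 0]
marked = [embed_bit(c, b) for c, b in zip(coeffs, bits)]

assert [extract_bit(c) for c in marked] == bits
# Embedding distortion stays within one quantization step per coefficient.
assert all(abs(m - c) <= 4.0 for m, c in zip(marked, coeffs))
```

Choosing delta per subband from an HVS model, as the paper does, keeps the embedding distortion below the visibility threshold while maximizing robustness.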
Principal component analysis in the wavelet domain: new features for underwater object recognition
NASA Astrophysics Data System (ADS)
Okimoto, Gordon S.; Lemonds, David W.
1999-08-01
Principal component analysis (PCA) in the wavelet domain provides powerful features for underwater object recognition applications. The multiresolution analysis of the Morlet wavelet transform (MWT) is used to pre-process echo returns from targets ensonified by biologically motivated broadband signal. PCA is then used to compress and denoise the resulting time-scale signal representation for presentation to a hierarchical neural network for object classification. Wavelet/PCA features combined with multi-aspect data fusion and neural networks have resulted in impressive underwater object recognition performance using backscatter data generated by simulate dolphin echolocation clicks and bat- like linear frequency modulated upsweeps. For example, wavelet/PCA features extracted from LFM echo returns have resulted in correct classification rates of 98.6 percent over a six target suite, which includes two mine simulators and four clutter objects. For the same data, ROC analysis of the two-class mine-like versus non-mine-like problem resulted in a probability of detection of 0.981 and a probability of false alarm of 0.032 at the 'optimal' operating point. The wavelet/PCA feature extraction algorithm is currently being implemented in VLSI for use in small, unmanned underwater vehicles designed for mine- hunting operations in shallow water environments.
Image-based scene representation using wavelet-based interval morphing
NASA Astrophysics Data System (ADS)
Bao, Paul; Xu, Dan
1999-07-01
Scene appearance for a continuous range of viewpoint can be represented by a discrete set of images via image morphing. In this paper, we present a new robust image morphing scheme based on 2D wavelet transform and interval field interpolation. Traditional mesh-base and field-based morphing algorithms, usually designed in the spatial image space, suffer from very high time complexity and therefore make themselves impractical in real-time virtual environment applications. Compared with traditional morphing methods, the proposed wavelet-based interval morphing scheme performs interval interpolation in both the frequency and spatial spaces. First, the images of the scene can be significantly compressed in the frequency domain with little degradation in visual quality and therefore the complexity of the scene can be significantly reduced. Second, since a feature point in the image may correspond to a neighborhood in a subband image in the wavelet domain, we define feature interval for the wavelet-transformed images for an accurate feature matching between the morphing images. Based on the feature intervals, we employ the interval field interpolation to morph the images progressively in a coarse-to-fine process. Finally, we use a post-warping procedure to transform the interpolated views to its desired position. A nice future of using wavelet transformation is its multiresolution representation mode, which enables the progressive morphing of scene.
Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements
NASA Astrophysics Data System (ADS)
McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.
2012-12-01
We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database; 35 sensor sites are chosen over the USA. ffCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.

Wavelet analysis in two-dimensional tomography
NASA Astrophysics Data System (ADS)
Burkovets, Dimitry N.
2002-02-01
The diagnostic possibilities of wavelet analysis of coherent images of connective tissue for diagnosing pathological changes are considered. The effectiveness of polarization selection in obtaining images of wavelet coefficients is also shown. The wavelet structures characterizing the processes of skin psoriasis and bone-tissue osteoporosis have been analyzed. Histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue have been analyzed.
NASA Astrophysics Data System (ADS)
Anderson, Peter G.; Liu, Changmeng
2003-01-01
We present a technique for converting continuous gray-scale images to halftone (black and white) images that lend themselves to lossless data compression with a compression factor of three or better. Our method involves novel halftone mask structures that consist of non-repeated threshold values. We have versions of both dispersed-dot and clustered-dot masks, which produce acceptable images for a variety of printers. Using the mask as a sort key allows us to reversibly rearrange the image pixels and partition them into groups with a highly skewed distribution, allowing Huffman compression coding techniques to be applied. This gives compression ratios in the range 3:1 to 10:1.
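The sort-key rearrangement described above can be sketched in Python for a flat gray region; the random dispersed-dot ordering is illustrative, and the Huffman stage is omitted:

```python
import random

# Halftone with a mask of non-repeated thresholds, then use the mask values as
# a sort key to reversibly rearrange the halftone bits. For a flat region the
# rearranged stream collapses to one run of 1s and one run of 0s, the highly
# skewed distribution that makes entropy coding effective.
random.seed(1)
n = 256
mask = list(range(n))            # non-repeated threshold values 0..255
random.shuffle(mask)             # dispersed-dot ordering (illustrative)

gray = 100                       # a flat gray region
halftone = [1 if gray > t else 0 for t in mask]

order = sorted(range(n), key=lambda i: mask[i])   # sort key = mask value
rearranged = [halftone[i] for i in order]
assert rearranged == [1] * 100 + [0] * 156        # one skewed run pair

# The permutation is invertible, so the halftone is recovered exactly.
restored = [0] * n
for pos, i in enumerate(order):
    restored[i] = rearranged[pos]
assert restored == halftone
```

Because the mask is known to both encoder and decoder, the permutation costs no side information, and the rearrangement is fully reversible, preserving losslessness.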
Wavelet analysis of epileptic spikes
NASA Astrophysics Data System (ADS)
Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.
2003-05-01
Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been a long-standing problem in EEG analysis, especially since long-term monitoring became common in the investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use the wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of the wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
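The across-scales behavior that the detection idea rests on can be sketched in Python: a sharp spike keeps a large wavelet response over several scales, while slow background activity does not. The Mexican-hat wavelet, scales, and margin are illustrative choices, not the paper's detector:

```python
import math

# Mexican-hat (Ricker) wavelet at scale s.
def mexican_hat(t, s):
    u = t / s
    return (1.0 - u * u) * math.exp(-u * u / 2.0)

# Wavelet response of a signal at a given center position and scale.
def response(signal, center, s):
    return sum(signal[k] * mexican_hat(k - center, s)
               for k in range(len(signal)))

n = 400
background = [0.3 * math.sin(2 * math.pi * k / 100.0) for k in range(n)]
signal = background[:]
signal[200] += 5.0                      # an interictal "spike" at sample 200

# The spike dominates the response at every scale; slow background does not.
for s in (1.0, 2.0, 4.0):
    assert abs(response(signal, 200, s)) > abs(response(background, 200, s)) + 1.0
```

A detector in this spirit flags a location as a spike only when the response exceeds threshold at multiple scales simultaneously, which is what suppresses isolated transients and artifacts.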
Wavelet transforms for optical pulse analysis.
Vázquez, Javier Molina; Mazilu, Michael; Miller, Alan; Galbraith, Ian
2005-12-01
An exploration of wavelet transforms for ultrashort optical pulse characterization is given. Some of the most common wavelets are examined to determine the advantages of using the causal quasi-wavelet suggested in Proceedings of the LEOS 15th Annual Meeting (IEEE, 2002), Vol. 2, p. 592, in terms of pulse analysis and, in particular, chirp extraction. Owing to its ability to distinguish between past and future pulse information, the causal quasi-wavelet is found to be highly suitable for optical pulse characterization. PMID:16396051
Entangled Husimi Distribution and Complex Wavelet Transformation
NASA Astrophysics Data System (ADS)
Hu, Li-Yun; Fan, Hong-Yi
2010-05-01
Similar in spirit to the preceding work (Int. J. Theor. Phys. 48:1539, 2009), where the relationship between the wavelet transformation and the Husimi distribution function is revealed, we extend this relationship to the entangled case. We find that the optical complex wavelet transformation can be used to study the entangled Husimi distribution function in the phase space theory of quantum optics. We prove that, up to a Gaussian function, the entangled Husimi distribution function of a two-mode quantum state |ψ⟩ is just the modulus square of the complex wavelet transform of e^{-|η|^2/2} with ψ(η) as the mother wavelet.
Integrated wavelets for medical image analysis
NASA Astrophysics Data System (ADS)
Heinlein, Peter; Schneider, Wilfried
2003-11-01
Integrated wavelets are a new method for discretizing the continuous wavelet transform (CWT). Independent of the choice of discrete scale and orientation parameters they yield tight families of convolution operators. Thus these families can easily be adapted to specific problems. After presenting the fundamental ideas, we focus primarily on the construction of directional integrated wavelets and their application to medical images. We state an exact algorithm for implementing this transform and present applications from the field of digital mammography. The first application covers the enhancement of microcalcifications in digital mammograms. Further, we exploit the directional information provided by integrated wavelets for better separation of microcalcifications from similar structures.
Investigation into the geometric consequences of processing substantially compressed images
NASA Astrophysics Data System (ADS)
Tempelmann, Udo; Nwosu, Zubbi; Zumbrunn, Roland M.
1995-07-01
One of the major driving forces behind digital photogrammetric systems is the continued drop in the cost of digital storage systems. However, terrestrial remote sensing systems continue to generate enormous volumes of data due to smaller pixels, larger coverage, and increased multispectral and multitemporal possibilities. Sophisticated compression algorithms have been developed, but the reduced visual quality of their output, which impedes object identification, and the resultant geometric deformation have been limiting factors in employing compression. Compression and decompression time is also an issue, but of less importance due to off-line possibilities. Two typical image blocks have been selected: one sub-block from a SPOT image, the other an image of industrial targets taken with an off-the-shelf CCD. Three common compression algorithms have been chosen: JPEG, wavelet, and fractal. The images are run through the compression/decompression cycle, with parameters chosen to cover the whole range of available compression ratios. Points are identified on these images and their locations are compared against those in the originals. The results are presented to assist in choosing compression schemes, weighing metric quality against storage availability. Fractals offer the best visual quality, but JPEG, closely followed by wavelets, imposes fewer geometric defects. JPEG seems to offer the best all-around performance when geometric quality, visual quality, and compression/decompression speed are all considered.
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collection. Nonetheless, a drawback is that the method requires rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential; we have not yet developed a highly refined and efficient algorithm.
The FBI compression standard for digitized fingerprint images
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
Optimization and implementation of the integer wavelet transform for image coding.
Grangetto, Marco; Magli, Enrico; Martina, Maurizio; Olmo, Gabriella
2002-01-01
This paper deals with the design and implementation of an image transform coding algorithm based on the integer wavelet transform (IWT). First, criteria are proposed for the selection of optimal factorizations of the wavelet filter polyphase matrix to be employed within the lifting scheme. The results obtained lead to IWT implementations with very satisfactory lossless and lossy compression performance. Then, the effects of finite-precision representation of the lifting coefficients on compression performance are analyzed, showing that, in most cases, a very small number of bits can be employed for the mantissa while keeping the performance degradation very limited. Stemming from these results, a VLSI architecture is proposed for the IWT implementation, capable of achieving very high frame rates with moderate gate complexity. PMID:18244658
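The lifting structure the abstract refers to is what makes integer transforms reversible no matter how coarsely the lifting coefficients are represented; only compression performance depends on their precision. The sketch below is a hypothetical minimal predict/update pair, not the authors' optimized factorization, and the coefficient value 1.586134 is merely an illustrative constant:

```python
# A minimal lifting pair with a quantized predict coefficient p. The round
# trip is exact for ANY p, because the inverse replays the same (quantized)
# steps in reverse order with the sign flipped.
def forward(x, p):
    even, odd = x[0::2], x[1::2]
    d = [o - round(p * e) for o, e in zip(odd, even)]    # predict step
    s = [e + di // 2 for e, di in zip(even, d)]          # update step
    return s, d

def inverse(s, d, p):
    even = [si - di // 2 for si, di in zip(s, d)]        # undo update
    odd = [di + round(p * e) for di, e in zip(d, even)]  # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [3, 7, 1, 8, 2, 2, 9, 0]
for bits in (2, 4, 8):
    p = round(1.586134 * 2**bits) / 2**bits  # coefficient kept to `bits` fractional bits
    s, d = forward(x, p)
    assert inverse(s, d, p) == x  # perfect reconstruction at every precision
```

The paper's analysis thus concerns the coding performance of quantized factorizations, not invertibility, which the lifting structure guarantees by construction.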
A 64-channel neural signal processor/ compressor based on Haar wavelet transform.
Shaeri, Mohammad Ali; Sodagar, Amir M; Abrishami-Moghaddam, Hamid
2011-01-01
A signal processor/compressor dedicated to implantable neural recording microsystems is presented. Signal compression is performed based on the Haar wavelet. It is shown in this paper that, compared to other mathematical transforms already used for this purpose, compression of neural signals using this type of wavelet transform can be of almost the same quality, while demanding less circuit complexity and a smaller silicon area. Designed in a 0.13-μm standard CMOS process, the 64-channel 8-bit signal processor reported in this paper occupies 113 μm × 110 μm of silicon area. It operates under a 1.8-V supply voltage at a master clock frequency of 3.2 MHz. PMID:22255805
On the use of the Stockwell transform for image compression
NASA Astrophysics Data System (ADS)
Wang, Yanwei; Orchard, Jeff
2009-02-01
In this paper, we investigate the use of the Stockwell transform for image compression. The proposed technique uses the Discrete Orthogonal Stockwell Transform (DOST), an orthogonal version of the Discrete Stockwell Transform (DST). These mathematical transforms provide a multiresolution spatial-frequency representation of a signal or image. First, we give a brief introduction to the Stockwell transform and the DOST. Then we outline a simple compression method based on setting the smallest coefficients to zero. In an experiment, we use this compression strategy on three different transforms: the fast Fourier transform, the Daubechies wavelet transform, and the DOST. The results show that the DOST outperforms the two other methods.
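The keep-the-largest-coefficients strategy described above can be sketched for two of the three transforms compared (the FFT, and a multi-level orthonormal Haar transform standing in for the Daubechies wavelet; the DOST is omitted). The test signal, level structure, and kept-coefficient budget are illustrative choices, not the paper's:

```python
import numpy as np

def keep_largest(coeffs, k):
    """Zero all but the k largest-magnitude transform coefficients."""
    out = np.zeros_like(coeffs)
    idx = np.argsort(np.abs(coeffs).ravel())[-k:]
    out.ravel()[idx] = coeffs.ravel()[idx]
    return out

def haar_1d(x):
    """Full multi-level orthonormal Haar transform (length a power of 2)."""
    x = x.astype(float).copy()
    n = len(x)
    while n > 1:
        a = (x[:n:2] + x[1:n:2]) / np.sqrt(2)
        d = (x[:n:2] - x[1:n:2]) / np.sqrt(2)
        x[:n // 2], x[n // 2:n] = a, d
        n //= 2
    return x

def ihaar_1d(c):
    """Inverse of haar_1d, rebuilding level by level."""
    c = c.copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2)
        c[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(256))  # piecewise-smooth-ish test signal
k = 32                                        # coefficients kept (8:1 reduction)

fft_rec = np.fft.ifft(keep_largest(np.fft.fft(signal), k)).real
haar_rec = ihaar_1d(keep_largest(haar_1d(signal), k))

print("FFT  MSE:", np.mean((signal - fft_rec) ** 2))
print("Haar MSE:", np.mean((signal - haar_rec) ** 2))
```

For real coding one would also quantize and entropy-code the surviving coefficients; this sketch isolates only the thresholding step that the paper's experiment varies across transforms.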
Multiresolution Distance Volumes for Progressive Surface Compression
Laney, D; Bertram, M; Duchaineau, M; Max, N
2002-01-14
Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level of detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
3D steerable wavelets in practice.
Chenouard, Nicolas; Unser, Michael
2012-11-01
We introduce a systematic and practical design for steerable wavelet frames in 3D. Our steerable wavelets are obtained by applying a 3D version of the generalized Riesz transform to a primary isotropic wavelet frame. The novel transform is self-reversible (tight frame) and its elementary constituents (Riesz wavelets) can be efficiently rotated in any 3D direction by forming appropriate linear combinations. Moreover, the basis functions at a given location can be linearly combined to design custom (and adaptive) steerable wavelets. The features of the proposed method are illustrated with the processing and analysis of 3D biomedical data. In particular, we show how those wavelets can be used to characterize directional patterns and to detect edges by means of a 3D monogenic analysis. We also propose a new inverse-problem formalism along with an optimization algorithm for reconstructing 3D images from a sparse set of wavelet-domain edges. The scheme results in high-quality image reconstructions which demonstrate the feature-reduction ability of the steerable wavelets as well as their potential for solving inverse problems. PMID:22752138
Image registration using redundant wavelet transforms
NASA Astrophysics Data System (ADS)
Brown, Richard K.; Claypoole, Roger L., Jr.
2001-12-01
Imagery is collected much faster and in significantly greater quantities today compared to a few years ago. Accurate registration of this imagery is vital for comparing the similarities and differences between multiple images. Image registration is a significant component in computer vision and other pattern recognition problems, medical applications such as magnetic resonance imaging (MRI) and positron emission tomography (PET), remotely sensed data for target location and identification, and super-resolution algorithms. Since human analysis is tedious and error prone for large data sets, we require an automatic, efficient, robust, and accurate method to register images. Wavelet transforms have proven useful for a variety of signal and image processing tasks. In our research, we present a fundamentally new wavelet-based registration algorithm utilizing redundant transforms and a masking process to suppress the adverse effects of noise and improve processing efficiency. The shift-invariant wavelet transform is applied in translation estimation and a new rotation-invariant polar wavelet transform is effectively utilized in rotation estimation. We demonstrate the robustness of these redundant wavelet transforms for the registration of two images (i.e., translating or rotating an input image to a reference image), but extensions to larger data sets are feasible. We compare the registration accuracy of our redundant wavelet transforms to the critically sampled discrete wavelet transform using the Daubechies wavelet to illustrate the power of our algorithm in the presence of significant additive white Gaussian noise and strongly translated or rotated images.
2-D wavelet with position controlled resolution
NASA Astrophysics Data System (ADS)
Walczak, Andrzej; Puzio, Leszek
2005-09-01
The wavelet transform localizes all irregularities in a scene. It is most effective when the intensities in the scene contain no sharp details, a situation often encountered in medical imaging. To identify a shape, one has to extract it from the scene as a typical irregularity. When the scene does not contain sharp changes, common differential filters are not an efficient tool for shape extraction. A new 2-D wavelet for this task is proposed. The described wavelet transform is axially symmetric and has a scale that varies with the distance from the center of wavelet symmetry. The analytical form of the wavelet is presented, together with its application to the extraction of details in a scene. The most important feature of the wavelet transform is that it is multi-scale and, under zooming, the wavelet selectivity varies proportionally to the zoom step. As a result, the extracted shape does not change during the zoom operation. Moreover, the wavelet selectivity can be fitted to the local intensity gradient to obtain the best extraction of the irregularities.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
[An improved motion estimation of medical image series via wavelet transform].
Zhang, Ying; Rao, Nini; Wang, Gang
2006-10-01
The compression of medical image series is very important in telemedicine, and motion estimation plays a key role in video sequence compression. In this paper, an improved square-diamond search (SDS) algorithm is proposed for the motion estimation of medical image series. The improved SDS algorithm reduces the number of search points. It is applied in the wavelet transform domain to estimate the motion of medical image series. A simulation experiment on digital subtraction angiography (DSA) data was performed. The experimental results show that the accuracy of the algorithm is higher than that of other algorithms for motion estimation of medical image series. PMID:17121333
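Diamond-style block matching of the kind the authors refine can be sketched as follows. This is only a classic two-stage diamond-search baseline on a toy frame pair; the square-diamond refinements and the wavelet-domain formulation of the paper are omitted, and all names and patterns here are standard textbook choices rather than the paper's:

```python
def sad(cur, ref, bx, by, dx, dy, B):
    """Sum of absolute differences between the BxB block of `cur` at (bx, by)
    and the block of `ref` displaced by (dx, dy)."""
    h, w = len(ref), len(ref[0])
    if not (0 <= by + dy and by + dy + B <= h and 0 <= bx + dx and bx + dx + B <= w):
        return float('inf')  # displacement leaves the frame
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(B) for i in range(B))

def diamond_search(cur, ref, bx, by, B=8):
    """Large-diamond steps until the best point is the centre, then one
    small-diamond refinement; far fewer SAD evaluations than full search."""
    large = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2), (1, 1), (1, -1), (-1, 1), (-1, -1)]
    small = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    mv = (0, 0)
    while True:
        best = min(large, key=lambda p: sad(cur, ref, bx, by, mv[0] + p[0], mv[1] + p[1], B))
        if best == (0, 0):
            break
        mv = (mv[0] + best[0], mv[1] + best[1])
    best = min(small, key=lambda p: sad(cur, ref, bx, by, mv[0] + p[0], mv[1] + p[1], B))
    return (mv[0] + best[0], mv[1] + best[1])

# Toy frames: `cur` is `ref` shifted right by 3 pixels and down by 1.
ref = [[(x * 3 + y * 5) % 256 for x in range(32)] for y in range(32)]
cur = [[ref[y - 1][x - 3] if y >= 1 and x >= 3 else 0 for x in range(32)]
       for y in range(32)]
print(diamond_search(cur, ref, bx=8, by=8))  # → (-3, -1)
```

The motion vector points from the current block to its best match in the reference frame, which is why the recovered displacement is the negative of the applied shift.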
NASA Technical Reports Server (NTRS)
1996-01-01
Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for real-time video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.
Adapting overcomplete wavelet models to natural images
NASA Astrophysics Data System (ADS)
Sallee, Phil; Olshausen, Bruno A.
2003-11-01
Overcomplete wavelet representations have become increasingly popular for their ability to provide highly sparse and robust descriptions of natural signals. We describe a method for incorporating an overcomplete wavelet representation as part of a statistical model of images which includes a sparse prior distribution over the wavelet coefficients. The wavelet basis functions are parameterized by a small set of 2-D functions. These functions are adapted to maximize the average log-likelihood of the model for a large database of natural images. When adapted to natural images, these functions become selective to different spatial orientations, and they achieve a superior degree of sparsity on natural images as compared with traditional wavelet bases. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Inference with the learned model is demonstrated for applications such as denoising, with results that compare favorably with other methods.
Using wavelets to learn pattern templates
NASA Astrophysics Data System (ADS)
Scott, Clayton D.; Nowak, Robert D.
2002-07-01
Despite the success of wavelet decompositions in other areas of statistical signal and image processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown translation and rotation. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR (Template Learning from Atomic Representations), a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain. We discuss several applications, including template learning, pattern classification, and image registration.
Critically sampled wavelets with composite dilations.
Easley, Glenn R; Labate, Demetrio
2012-02-01
Wavelets with composite dilations provide a general framework for the construction of waveforms defined not only at various scales and locations, as traditional wavelets, but also at various orientations and with different scaling factors in each coordinate. As a result, they are useful to analyze the geometric information that often dominates multidimensional data much more efficiently than traditional wavelets. The shearlet system, for example, is a particularly well-known realization of this framework, which provides optimally sparse representations of images with edges. In this paper, we further investigate the constructions derived from this approach to develop critically sampled wavelets with composite dilations for the purpose of image coding. Not only do we show that many nonredundant directional constructions recently introduced in the literature can be derived within this setting, but we also introduce new critically sampled discrete transforms that achieve much better nonlinear approximation rates than traditional discrete wavelet transforms and outperform the other critically sampled multiscale transforms recently proposed. PMID:21843993
Wavelet Analysis of Umbral Oscillations
NASA Astrophysics Data System (ADS)
Christopoulou, E. B.; Skodras, A.; Georgakilas, A. A.; Koutchmy, S.
2003-07-01
We study the temporal behavior of the intensity and velocity chromospheric umbral oscillations, applying wavelet analysis techniques to four sets of observations in the Hα line and one set of simultaneous observations in the Hα and the nonmagnetic Fe I (5576.099 Å) line. The wavelet and Fourier power spectra of the intensity and the velocity at chromospheric levels show both 3 and 5 minute oscillations. Oscillations in the 5 minute band are prominent in the intensity power spectra; they are significantly reduced in the velocity power spectra. We observe multiple peaks of closely spaced cospatial frequencies in the 3 minute band (5-8 mHz). Typically, there are three oscillating modes present: (1) a major one near 5.5 mHz, (2) a secondary near 6.3 mHz, and (3) oscillations with time-varying frequencies around 7.5 mHz that are present for limited time intervals. In the frame of current theories, the oscillating mode near 5.5 mHz should be considered as a fingerprint of the photospheric resonator, while the other two modes can be better explained by the chromospheric resonator. The wavelet spectra show a dynamic temporal behavior of the 3 minute oscillations. We observed (1) frequency drifts, (2) modes that are stable over a long time and then fade away or split up into two oscillation modes, and (3) suppression of frequencies for short time intervals. This behavior can be explained by the coupling between modes closely spaced in frequency and/or by long-term variations of the driving source of the resonators. Based on observations performed on the National Solar Observatory/Sacramento Peak Observatory Richard B. Dunn Solar Telescope (DST) and on the Big Bear Solar Observatory Harold Zirin Telescope.
Wavelet Algorithms for Illumination Computations
NASA Astrophysics Data System (ADS)
Schroder, Peter
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. (32), have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k^2 + n) versus the usual O(n^2) (k is the number of input surfaces, n the number of finite elements into which the input surfaces are meshed). Similarly a hierarchical technique has been introduced for the more general radiance problem (which allows glossy reflectors) by Aupperle et al. (6). In this dissertation we show the equivalence of these hierarchical techniques to the use of a Haar wavelet basis in a general Galerkin framework. By so doing, we come to a deeper understanding of the properties of the numerical approximations used and are able to extend the hierarchical techniques to higher orders. In particular, we show the correspondence of the geometric arguments underlying hierarchical methods to the theory of Calderon-Zygmund operators and their sparse realization in wavelet bases. The resulting wavelet algorithms for radiosity and radiance are analyzed and numerical results achieved with our implementation are reported. We find that the resulting algorithms achieve smaller and smoother errors at equivalent work.
Reservoir characterization using wavelet transforms
NASA Astrophysics Data System (ADS)
Rivera Vega, Nestor
Automated detection of geological boundaries and determination of cyclic events controlling deposition can facilitate stratigraphic analysis and reservoir characterization. This study applies the wavelet transform, a recent advance in signal analysis techniques, to interpret cyclicity, determine its controlling factors, and detect zone boundaries. We tested the cyclostratigraphic assessments using well log and core data from a well in a fluvio-eolian sequence in the Ormskirk Sandstone, Irish Sea. The boundary detection technique was tested using log data from 10 wells in the Apiay field, Colombia. We processed the wavelet coefficients for each zone of the Ormskirk Formation and determined the wavelengths of the strongest cyclicities. Comparing these periodicities with Milankovitch cycles, we found a strong correspondence between the two. This suggests that climate exercised an important control on depositional cyclicity, as had been concluded in previous studies of the Ormskirk Sandstone. The wavelet coefficients from the log data in the Apiay field were combined to form feature vectors, which were used in conjunction with pattern recognition techniques to detect 7 boundaries. For the upper two units, the boundary was detected within 10 feet of its actual depth in 90% of the wells. The mean detection performance in the Apiay field is 50%. We compared our method with other traditional techniques which do not focus on selecting optimal features for boundary identification. Those methods resulted in detection performances of 40% for the uppermost boundary, which lag behind the 90% performance of our method. Automated determination of geologic boundaries will expedite studies, and knowledge of the controlling deposition factors will enhance stratigraphic and reservoir characterization models. We expect that automated boundary detection and cyclicity analysis will prove to be valuable and time-saving methods for establishing correlations and their
NASA Astrophysics Data System (ADS)
Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda
2016-03-01
Magnetic Resonance Imaging (MRI) offers noninvasive high resolution, high contrast cross-sectional anatomic images through the body. The data of conventional MRI are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) in MR imaging has been proposed to exploit the sparsity of MR images, showing great potential to reduce the scan time significantly; however, it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for these disadvantages, this paper first introduces an undersampling scheme named the significance map for sparse wavelet-encoded k-space, which speeds up data acquisition and allows for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, desirable for medical applications. Simulation and experimental results are presented, showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high image quality.
Wavelet Regularization Per Nullspace Shuttle
NASA Astrophysics Data System (ADS)
Charléty, J.; Nolet, G.; Sigloch, K.; Voronin, S.; Loris, I.; Simons, F. J.; Daubechies, I.; Judd, S.
2010-12-01
Wavelet decomposition of models in an over-parameterized Earth and L1-norm minimization in wavelet space is a promising strategy to deal with the very heterogeneous data coverage in the Earth without sacrificing detail in the solution where this is resolved (see Loris et al., abstract this session). However, L1-norm minimizations are nonlinear, and pose problems of convergence speed when applied to large data sets. In an effort to speed up computations we investigate the application of the nullspace shuttle (Deal and Nolet, GJI 1996). The nullspace shuttle is a filter that adds components from the nullspace to the minimum norm solution so as to have the model satisfy additional conditions not imposed by the data. In our case, the nullspace shuttle projects the model on a truncated basis of wavelets. The convergence of this strategy is unproven, in contrast to algorithms using Landweber iteration or one of its variants, but initial computations using a very large data base give reason for optimism. We invert 430,554 P delay times measured by cross-correlation in different frequency windows. The data are dominated by observations with US Array, leading to a major discrepancy in the resolution beneath North America and the rest of the world. This is a subset of the data set inverted by Sigloch et al (Nature Geosci, 2008), excluding only a small number of ISC delays at short distance and all amplitude data. The model is a cubed Earth model with 3,637,248 voxels spanning mantle and crust, with a resolution everywhere better than 70 km, to which 1912 event corrections are added. In each iteration we determine the optimal solution by a least squares inversion with minimal damping, after which we regularize the model in wavelet space. We then compute the residual data vector (after an intermediate scaling step), and solve for a model correction until a satisfactory chi-square fit for the truncated model is obtained. We present our final results on convergence as well as a
Seamless multiresolution isosurfaces using wavelets
Udeshi, T.; Hudson, R.; Papka, M. E.
2000-04-11
Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The solution combines existing isosurface generation algorithms and wavelet theory to produce a real-time solution to multiple-resolution isosurfaces.
Perceau, Géraldine; Faure, Christine
2012-01-01
The compression of a venous ulcer is carried out with the use of bandages and, for less exudative ulcers, with socks, stockings or tights. Bandaging systems are complex: different degrees of stretch, and therefore different types of models, exist. PMID:22489428
NASA Astrophysics Data System (ADS)
Maiolo, M.; Vancheri, A.; Krause, R.; Danani, A.
2015-11-01
In this paper, we apply Multiresolution Analysis (MRA) to develop sparse but accurate representations for the Multiscale Coarse-Graining (MSCG) approximation to the many-body potential of mean force. We rigorously frame the MSCG method within MRA so that all the instruments of this theory become available, together with a multitude of new basis functions, namely the wavelets. The coarse-grained (CG) force field is hierarchically decomposed at different resolution levels, making it possible to choose the most appropriate wavelet family for each physical interaction without requiring a priori knowledge of where the details are localized. The representation of the CG potential in this new efficient orthonormal basis leads to a compression of the signal information into a few large expansion coefficients. The multiresolution property of the wavelet transform makes it possible to isolate and remove the noise from the CG force-field reconstruction by thresholding the basis function coefficients in each frequency band independently. We discuss the implementation of our wavelet-based MSCG approach and demonstrate its accuracy using two different condensed-phase systems, i.e. liquid water and methanol. Simulations of liquid argon have also been performed using a one-to-one mapping between atomistic and CG sites. The latter model allows us to verify the accuracy of the method and to test different choices of wavelet families. Furthermore, the results of the computer simulations show that the efficiency and sparsity of the representation of the CG force field can be traced back to the mathematical properties of the chosen family of wavelets. This result is in agreement with what is known from the theory of multiresolution analysis of signals.
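Band-wise thresholding of wavelet coefficients, the denoising step described above, can be illustrated in one dimension. This is a minimal sketch with an orthonormal Haar decomposition standing in for the paper's wavelet families, and an arbitrary fixed threshold rather than a per-band noise estimate:

```python
import numpy as np

def haar_level(x):
    """Single orthonormal Haar split into (approximation, detail)."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_merge(a, d):
    """Exact inverse of haar_level."""
    x = np.empty(2 * len(a))
    x[0::2], x[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
    return x

def denoise(x, levels=4, thresh=0.5):
    """Decompose, zero small detail coefficients in each frequency band
    independently (hard thresholding), and reconstruct."""
    a, details = x, []
    for _ in range(levels):
        a, d = haar_level(a)
        details.append(np.where(np.abs(d) > thresh, d, 0.0))
    for d in reversed(details):
        a = haar_merge(a, d)
    return a

t = np.linspace(0, 1, 256)
noisy = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.default_rng(1).standard_normal(256)
clean = denoise(noisy)
print("residual RMS:", np.sqrt(np.mean((clean - np.sin(2 * np.pi * 3 * t)) ** 2)))
```

In the paper's setting, the role of the signal is played by the sampled CG force field, and the threshold for each band would be chosen from the statistics of that band rather than fixed in advance.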
On the use of lossless integer wavelet transforms in medical image segmentation
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Mallya, Yogish
2005-04-01
Recent trends in medical image processing involve computationally intensive processing techniques on large data sets, especially for 3D applications such as segmentation, registration, volume rendering etc. Multi-resolution image processing techniques have been used in order to speed up these methods. However, all well-known techniques currently used in multi-resolution medical image processing rely on Gaussian-based or other equivalent floating point representations that are lossy and irreversible. In this paper, we study the use of Integer Wavelet Transforms (IWT) to address the issue of lossless representation and reversible reconstruction for such medical image processing applications while still retaining all the benefits which floating-point transforms offer, such as high speed and efficient memory usage. In particular, we consider three low-complexity reversible wavelet transforms, namely the Lazy wavelet, the Haar or (1,1) wavelet, and the S+P transform, as against the Gaussian filter, for multi-resolution speed-up of an automatic bone removal algorithm for abdomen CT Angiography. Perfect-reconstruction integer wavelet filters have the ability to perfectly recover the original data set at any step in the application. An additional advantage of the reversible wavelet representation is that it is suitable for lossless compression for purposes of storage, archiving and fast retrieval. Given the fact that even a slight loss of information in medical image processing can be detrimental to diagnostic accuracy, IWTs seem to be the ideal choice for multi-resolution based medical image segmentation algorithms. These could also be useful for other medical image processing methods.
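The perfect-reconstruction property the authors rely on is easy to demonstrate for the (1,1) transform applied separably to a small integer image. This sketch uses integer lifting only; the Lazy wavelet is just the trivial even/odd split, and the S+P predictor is omitted:

```python
def s_fwd(row):
    """1-D (1,1) (integer Haar / S) transform via lifting: integers in, integers out."""
    d = [row[2 * i] - row[2 * i + 1] for i in range(len(row) // 2)]   # detail
    s = [row[2 * i + 1] + d[i] // 2 for i in range(len(row) // 2)]    # approximation
    return s + d

def s_inv(row):
    """Exact inverse: undo the lifting steps in reverse order."""
    half = len(row) // 2
    out = []
    for si, di in zip(row[:half], row[half:]):
        b = si - di // 2
        out += [b + di, b]
    return out

def iwt2_forward(img):
    """Separable 2-D transform: rows first, then columns."""
    rows = [s_fwd(r) for r in img]
    cols = [s_fwd(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def iwt2_inverse(coeffs):
    """Undo columns first, then rows; recovery is bit-exact."""
    cols = [s_inv(list(c)) for c in zip(*coeffs)]
    rows = [list(r) for r in zip(*cols)]
    return [s_inv(r) for r in rows]

img = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
assert iwt2_inverse(iwt2_forward(img)) == img  # lossless round trip
```

Because every intermediate value is an integer, the coarse approximation band can feed a multi-resolution segmentation step while the original data remain exactly recoverable, which is the point made in the abstract.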
Wavelet Neural Network Using Multiple Wavelet Functions in Target Threat Assessment
Guo, Lihong; Duan, Hong
2013-01-01
Target threat assessment is a key issue in collaborative attack. To improve the accuracy and usefulness of target threat assessment in aerial combat, we propose a variant of wavelet neural networks, the MWFWNN network, to solve the threat assessment problem. Selecting an appropriate wavelet function is difficult when constructing a wavelet neural network, so this paper proposes a mother wavelet selection algorithm based on minimum mean squared error and then constructs the MWFWNN network using this algorithm. First, a wavelet function library is established; second, a wavelet neural network is constructed with each mother wavelet in the library, and the wavelet parameters and network weights are updated according to the corresponding update formulas. Each constructed wavelet neural network is evaluated on the training set, and the optimal wavelet function, with minimum mean squared error, is chosen to build the MWFWNN network. Experimental results show that the mean squared error is 1.23 × 10−3, which is better than WNN, BP, and PSO_SVM. The target threat assessment model based on the MWFWNN has good predictive ability, so it can quickly and accurately complete target threat assessment. PMID:23509436
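The selection loop can be sketched in a few lines. This is an illustrative toy only, not the MWFWNN training procedure: the candidate library, the fixed dilation/translation grid, and the target signal are all assumptions, and the network weights are solved by linear least squares rather than iterative training.

```python
import numpy as np

# Candidate mother wavelets (illustrative two-entry library).
def mexican_hat(t):
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def morlet(t):
    return np.cos(5.0 * t) * np.exp(-t**2 / 2.0)

def fit_mse(psi, x, y):
    """Fit y with wavelets psi((x-b)/a) on a fixed scale/shift grid,
    solving the output weights by least squares; return the MSE."""
    scales = [0.5, 1.0, 2.0]
    shifts = np.linspace(x.min(), x.max(), 9)
    Phi = np.column_stack([psi((x - b) / a) for a in scales for b in shifts])
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.mean((Phi @ w - y) ** 2)

x = np.linspace(-4, 4, 200)
y = np.sin(3 * x) * np.exp(-x**2 / 4)                    # toy target signal
library = {"mexican_hat": mexican_hat, "morlet": morlet}
mse = {name: fit_mse(psi, x, y) for name, psi in library.items()}
best = min(mse, key=mse.get)                             # family with minimum MSE
```

The paper's point survives the simplification: the "best" family is whichever one reconstructs the training data with the smallest mean squared error, so the choice is data-driven rather than fixed in advance.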
Improved satellite image compression and reconstruction via genetic algorithms
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary
2008-10-01
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research demonstrating how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained on representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
Robust retrieval from compressed medical image archives
NASA Astrophysics Data System (ADS)
Sidorov, Denis N.; Lerallut, Jean F.; Cocquerez, Jean-Pierre; Azpiroz, Joaquin
2005-04-01
This paper addresses the computational aspects of extracting important features directly from compressed images for the purpose of aiding content-based biomedical image retrieval. The proposed method for treating compressed medical archives follows the JPEG compression standard and exploits an algorithm based on spatial analysis of the amplitude and location of the image cosine-spectrum coefficients. Experiments on a modality-specific archive of osteoarticular images show the robustness of the method based on measured spectral spatial statistics. The features, based on the cosine-spectrum coefficients' values, could satisfy queries across different modalities (MRI, US, etc.) that emphasize texture and edge properties. In particular, it has been shown that there is a wealth of information in the AC coefficients of the DCT, which can be utilized to support fast content-based image retrieval. The computational cost of the proposed signature generation algorithm is low. The influence of conventional and state-of-the-art compression techniques based on cosine and wavelet integral transforms on the performance of content-based medical image retrieval has also been studied. We found no significant differences in retrieval efficiency between non-compressed and JPEG2000-compressed images, even at the lowest bit rate tested.
NASA Astrophysics Data System (ADS)
Liu, Hong; Mo, Yu L.
1998-08-01
Many textures, such as woven fabrics, have repeating textons. In order to handle the textural characteristics of images with defects, this paper proposes a new method based on the 2D wavelet transform. The method introduces the concept of distinct adaptive wavelet bases matched to the texture pattern: the 2D wavelet transform uses two different adaptive orthonormal wavelet bases for rows and columns, which differ from the Daubechies wavelet bases and are generated by a genetic algorithm. The experimental results demonstrate the ability of the different adaptive wavelet bases to characterize the texture and locate defects in it.
Wavelet analysis of electron-density maps.
Main, P; Wilson, J
2000-05-01
The wavelet transform is a powerful technique in signal processing and image analysis and it is shown here that wavelet analysis of low-resolution electron-density maps has the potential to increase their resolution. Like Fourier analysis, wavelet analysis expresses the image (electron density) in terms of a set of orthogonal functions. In the case of the Fourier transform, these functions are sines and cosines and each one contributes to the whole of the image. In contrast, the wavelet functions (simply called wavelets) can be quite localized and may only contribute to a small part of the image. This gives control over the amount of detail added to the map as the resolution increases. The mathematical details are outlined and an algorithm which achieves a resolution increase from 10 to 7 Å using a knowledge of the wavelet-coefficient histograms, electron-density histogram and the observed structure amplitudes is described. These histograms are calculated from the electron density of known structures, but it seems likely that the histograms can be predicted, just as electron-density histograms are at high resolution. The results show that the wavelet coefficients contain the information necessary to increase the resolution of electron-density maps. PMID:10771431
Application of wavelets to automatic target recognition
NASA Astrophysics Data System (ADS)
Stirman, Charles
1995-03-01
'Application of Wavelets to Automatic Target Recognition' is the second phase of a multiphase project to insert compactly supported wavelets into an existing or near-term Department of Defense system such as the Longbow fire control radar for the Apache attack helicopter. In this contract, we have concentrated mainly on the classifier function. During the first phase of the program ('Application of Wavelets to Radar Data Processing'), the feasibility of using wavelets to process high range resolution profile (HRRP) amplitude returns from a wide-bandwidth radar system was demonstrated. This phase obtained fully polarized wide-bandwidth radar HRRP amplitude returns and processed them with wavelet and wavelet-packet (best-basis) transforms. Then, by mathematically defined nonlinear feature selection, we showed that significant improvements in the probability of correct classification are possible: up to 14 percentage points maximum (4 percentage points average) improvement compared to the current classifier performance. In addition, we addressed the feasibility of using wavelet-packet best bases for target registration, man-made object rejection, clutter discrimination, and synthetic aperture radar scene speckle removal and object registration.
Applications of a fast, continuous wavelet transform
Dress, W.B.
1997-02-01
A fast, continuous wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heartbeats. Audio reconstruction is aided by selection of desired regions in the 2D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.
Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.
2011-01-01
Over the past three decades we have steadily increased our knowledge of the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers of severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high-throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address the biological and technical constraints of our setting. PMID:21451737
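The group-testing intuition behind pooled genotyping can be shown with a toy example. This is not the authors' pooled-sequencing protocol; it is the textbook binary-splitting special case, assuming exactly one carrier among n samples and error-free pool tests.

```python
# Toy binary group testing: with a single rare carrier among n samples,
# log2(n) pools suffice. Pool b contains every sample whose id has bit b
# set, so pool b tests positive iff bit b of the carrier's id is set.
n = 256
carrier = 173                                    # unknown to the decoder
pools = [(carrier >> b) & 1 for b in range(8)]   # 8 pool-level test results
decoded = sum(p << b for b, p in enumerate(pools))
# decoded recovers the carrier index from 8 tests instead of 256 individual assays
```

Real designs must handle multiple carriers and noisy measurements, which is where the compressed-sensing machinery described in the abstract comes in, but the sparsity-driven savings are already visible here: 8 tests replace 256.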
On alternative wavelet reconstruction formula: a case study of approximate wavelets.
Lebedeva, Elena A; Postnikov, Eugene B
2014-10-01
The application of the continuous wavelet transform to the study of a wide class of physical processes with oscillatory dynamics is restricted by large central frequencies owing to the admissibility condition. We propose an alternative reconstruction formula for the continuous wavelet transform, which is applicable even if the admissibility condition is violated. The case of the transform with the standard reduced Morlet wavelet, which is an important example of such analysing functions, is discussed. PMID:26064533
Wavelet Applications for Flight Flutter Testing
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.
1999-01-01
Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.
FOPEN ultrawideband SAR imaging by wavelet interpolation
NASA Astrophysics Data System (ADS)
Guo, Hanwei; Liang, Diannong; Wang, Yan; Huang, Xiaotao; Dong, Zhen
2003-09-01
The wavenumber-domain imaging algorithm can deal with the problem of foliage-penetrating ultra-wideband synthetic aperture radar (FOPEN UWB SAR) imaging. Stolt interpolation plays a key role in the imaging algorithm and is an unevenly spaced interpolation problem for which no fast algorithm exists. In this paper, a novel 4-4 tap integer wavelet filter is used as the Stolt interpolation basis function, and a fast interpolation algorithm is put forward. The wavelet interpolation requires only addition and shift operations, which are easy to realize in hardware. Real data are processed to show that wavelet interpolation is valid for FOPEN UWB SAR imaging.
Wavelet frames and admissibility in higher dimensions
Fuehr, H.
1996-12-01
This paper is concerned with the relations between discrete and continuous wavelet transforms on k-dimensional Euclidean space. We start with the construction of continuous wavelet transforms with the help of square-integrable representations of certain semidirect products, thereby generalizing results of Bernier and Taylor. We then turn to frames of L^2(R^k) and to the question of when the functions occurring in a given frame are admissible for a given continuous wavelet transform. For certain frames we give a characterization which generalizes a result of Daubechies to higher dimensions. © 1996 American Institute of Physics.
Transionospheric signal detection with chirped wavelets
Doser, A.B.; Dunham, M.E.
1997-11-01
Chirped wavelets are utilized to detect dispersed signals in the joint time scale domain. Specifically, pulses that become dispersed by transmission through the ionosphere and are received by satellites as nonlinear chirps are investigated. Since the dispersion greatly lowers the signal to noise ratios, it is difficult to isolate the signals in the time domain. Satellite data are examined with discrete wavelet expansions. Detection is accomplished via a template matching threshold scheme. Quantitative experimental results demonstrate that the chirped wavelet detection scheme is successful in detecting the transionospheric pulses at very low signal to noise ratios.
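The template-matching threshold idea can be illustrated numerically. The sketch below is not the satellite processing chain: the quadratic chirp, amplitude, and threshold are all illustrative assumptions. It shows the core effect the abstract describes, namely that a dispersed pulse invisible sample-by-sample in the time domain stands out after correlation with a matched chirp template.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048
t = np.arange(n) / n
template = np.sin(2 * np.pi * (20 * t + 80 * t**2))   # nonlinear (quadratic) chirp
template /= np.linalg.norm(template)                  # unit-energy template

record = rng.normal(0.0, 1.0, 4 * n)                  # noise-only background
record[1000:1000 + n] += 10.0 * template              # buried pulse; per-sample SNR well below 0 dB

# Slide the template over the record and threshold the correlation score.
scores = np.correlate(record, template, mode="valid")
hits = np.flatnonzero(scores > 5.0)                   # detections above threshold
# hits cluster around offset 1000, where the pulse was injected
```

Per sample the pulse amplitude is about 0.2 against unit-variance noise, so no time-domain threshold would find it; the correlator concentrates the pulse energy into a single score of about 10 against unit-variance score noise.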
Evaluation of the tactical utility of compressed imagery
NASA Astrophysics Data System (ADS)
Irvine, John M.; Eckstein, Barbara A.; Hummel, Robert A.; Peters, Richard J.; Ritzel, Rhonda L.
2002-06-01
The effects of compression on image utility are assessed based on manual exploitation performed by military imagery analysts (IAs). The original, uncompressed synthetic aperture radar imagery and compressed products are rated for the Radar National Imagery Interpretability Rating Scale (NIIRS), image features and sensor artifacts, and target detection and recognition. Images were compressed via standard JPEG compression, single-scale intelligent bandwidth compression (IBC), and wavelet/trellis-coded quantization (W/TCQ) at 50-to-1 and 100-to-1 ratios. We find that the utility of the compressed imagery differs only slightly from the uncompressed imagery, with the exception of the JPEG products. Otherwise, both the 50-to-1 and 100-to-1 compressed imagery appear similar in terms of image quality. Radar NIIRS indicates that even 100-to-1 compression using IBC or W/TCQ has minimal impact on imagery intelligence value. A slight loss in performance occurs for vehicle counting and identification tasks. These findings suggest that both single-scale IBC and W/TCQ compression techniques have matured to a point that they could provide value to the tactical user. Additional assessments may verify the practical limits of compression for synthetic aperture radar (SAR) data and address the transition to a field environment.
Wavelet differential neural network observer.
Chairez, Isaac
2009-09-01
State estimation for uncertain systems affected by external noises is an important problem in control theory. This paper deals with the state observation problem when the dynamic model of a plant contains uncertainties or is completely unknown. A differential neural network (NN) approach is applied in this uninformative situation, but with activation functions described by wavelets. A new learning law, containing an adaptive adjustment rate, is suggested to imply the stability condition for the free parameters of the observer. Nominal weights are adjusted during the preliminary training process using the least mean square (LMS) method. Lyapunov theory is used to obtain upper bounds for the weight dynamics as well as for the mean squared estimation error. Two numerical examples illustrate this approach: first, a nonlinear electric system governed by Chua's equation, and second, the Lorenz oscillator. Both systems are assumed to be affected by external perturbations, and their parameters are unknown. PMID:19674951
Wavelets: the Key to Intermittent Information?
NASA Astrophysics Data System (ADS)
Silverman, B. W.; Vassilicos, J. C.
2000-08-01
In recent years there has been an explosion of interest in wavelets, in a wide range of fields in science and engineering and beyond. This book brings together contributions from researchers from disparate fields, both in order to demonstrate to a wide readership the current breadth of work in wavelets, and to encourage cross-fertilization of ideas. It demonstrates the genuinely interdisciplinary nature of wavelet research and applications. Particular areas covered include turbulence, statistics, time series analysis, signal and image processing, the physiology of vision, astronomy, economics and acoustics. Some of the work uses standard wavelet approaches and in other cases new methodology is developed. The papers were originally presented at a Royal Society Discussion Meeting, to a large and enthusiastic audience of specialists and non-specialists.
Wavelet based recognition for pulsar signals
NASA Astrophysics Data System (ADS)
Shan, H.; Wang, X.; Chen, X.; Yuan, J.; Nie, J.; Zhang, H.; Liu, N.; Wang, N.
2015-06-01
A signal from a pulsar can be decomposed into a set of features. This set is a unique signature for a given pulsar and can be used to decide whether a pulsar is newly discovered or not. Features can be constructed from the coefficients of a wavelet decomposition, and two types of wavelet-based pulsar features are proposed. The energy-based features reflect the multiscale distribution of the energy of the coefficients. The singularity-based features first classify the signals into a class with one peak and a class with two peaks, by counting the straight wavelet modulus maxima lines perpendicular to the abscissa, and then implement further classification according to skewness and kurtosis features. Experimental results show that the wavelet-based features achieve better performance than the shape-parameter-based features, both in clustering and classification and in the error rates of the recognition tasks.
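The energy-based feature idea can be sketched directly. In the toy below (illustrative code; the Gaussian profiles stand in for pulse profiles, and the orthonormal Haar wavelet stands in for whatever family the authors use), the feature vector is simply the energy in each decomposition band.

```python
import numpy as np

def haar_level_energies(x, levels):
    """Per-band energies of an orthonormal Haar decomposition:
    one detail energy per level, plus the final approximation energy."""
    a = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        d = (odd - even) / np.sqrt(2.0)     # detail band at this scale
        energies.append(np.sum(d ** 2))
        a = (even + odd) / np.sqrt(2.0)     # approximation carried to next level
    energies.append(np.sum(a ** 2))
    return np.array(energies)

t = np.linspace(0.0, 1.0, 512)
narrow = np.exp(-((t - 0.5) / 0.01) ** 2)   # sharp single-peak profile
broad = np.exp(-((t - 0.5) / 0.10) ** 2)    # smooth broad profile
f_narrow = haar_level_energies(narrow, 5)
f_broad = haar_level_energies(broad, 5)
# the narrow profile concentrates relatively more energy at fine scales
```

Because the transform is orthonormal, the energies sum to the signal energy, and the relative split across bands is exactly the "multiscale distribution of energy" the abstract uses as a signature: sharp profiles load the fine-scale bands, smooth ones load the coarse bands.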
Wavelet Analysis for Acoustic Phased Array
NASA Astrophysics Data System (ADS)
Kozlov, Inna; Zlotnick, Zvi
2003-03-01
Wavelet spectrum analysis is known to be one of the most powerful tools for exploring quasistationary signals. In this paper we use wavelet techniques to develop a new Direction Finding (DF) algorithm for Acoustic Phased Array (APA) systems. Utilising multi-scale analysis of libraries of wavelets allows us to work with frequency bands instead of individual frequencies of an acoustic source. These frequency bands can be regarded as features extracted from quasistationary signals emitted by a noisy object. For the detection, tracing, and identification of a sound source in a noisy environment we develop a smart algorithm, whose essential part is a special interacting procedure between the above-mentioned DF algorithm and the wavelet-based identification (ID) algorithm developed in [4]. Significant improvement of the basic properties of the receiving APA pattern is achieved.
Wavelet-based acoustic recognition of aircraft
Dress, W.B.; Kercel, S.W.
1994-09-01
We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.
Velocity and Object Detection Using Quaternion Wavelets
Traversoni, Leonardo; Xu Yi
2007-09-06
Starting from stereoscopic films, we detect corresponding objects in both views and establish an epipolar geometry; moving objects are likewise detected and their movement described, all using quaternion wavelets and quaternion phase-space decomposition.
The wavelet response as a multiscale NDT method.
Le Gonidec, Y; Conil, F; Gibert, D
2003-08-01
We analyze interfaces by using reflected waves in the framework of the wavelet transform. First, we introduce the wavelet transform as an efficient method to detect and characterize a discontinuity in the acoustical impedance profile of a material. Synthetic examples are shown for both an isolated reflector and multiscale clusters of nearby defects. In the second part of the paper we present the wavelet response method as a natural extension of the wavelet transform when the velocity profile to be analyzed can only be remotely probed by propagating wavelets through the medium (instead of being directly convolved as in the wavelet transform). The wavelet response is constituted by the reflections of the incident wavelets on the discontinuities and we show that both transforms are equivalent when multiple scattering is neglected. We end this paper by experimentally applying the wavelet response in an acoustic tank to characterize planar reflectors with finite thicknesses. PMID:12853084
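The first part of the paper's idea, using the wavelet transform to detect and localize a discontinuity, can be shown with a toy profile. This sketch (illustrative, not the authors' acoustic setup) uses a single Haar analysis step on a step-like "impedance profile"; the step position is an assumption chosen for the example.

```python
import numpy as np

# A discontinuity in the profile produces a large detail coefficient
# exactly at its location; constant regions produce zero details.
x = np.ones(256)
x[161:] = 2.0                               # step between samples 160 and 161
even, odd = x[0::2], x[1::2]
detail = (odd - even) / np.sqrt(2.0)        # one level of the Haar DWT
loc = 2 * int(np.argmax(np.abs(detail)))    # map back to the original sample index
# loc == 160: the only nonzero detail coefficient straddles the step
```

In the wavelet response method described next, the medium cannot be convolved directly, so the same localization information is instead read off from the reflections of propagated wavelets, but the detection principle is the one shown here.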
Applications of a fast continuous wavelet transform
NASA Astrophysics Data System (ADS)
Dress, William B.
1997-04-01
A fast, continuous, wavelet transform, justified by appealing to Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and from the standard treatment of the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as representing the continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Although more computationally costly and not represented by an orthogonal basis, the inherent flexibility and shift invariance of the frequency-space wavelets are advantageous for certain applications. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heart beats. Audio reconstruction is aided by selection of desired regions in the 2D representation of the magnitude of the transformed signals. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, different features may be extracted from voice
Implementation of modified SPIHT algorithm for Compression of images
NASA Astrophysics Data System (ADS)
Kurume, A. V.; Yana, D. M.
2011-12-01
We present a throughput-efficient FPGA implementation of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm for image compression. SPIHT exploits the inherent redundancy among wavelet coefficients and is suited to both grey-scale and color images. The SPIHT algorithm uses dynamic data structures, which hinder hardware realization. We have modified basic SPIHT in two ways: first, by using static (fixed) mappings to represent significant information, and second, by interchanging the sorting and refinement passes.
Trabecular bone texture classification using wavelet leaders
NASA Astrophysics Data System (ADS)
Zou, Zilong; Yang, Jie; Megalooikonomou, Vasileios; Jennane, Rachid; Cheng, Erkang; Ling, Haibin
2016-03-01
In this paper we propose to use the Wavelet Leader (WL) transformation for studying trabecular bone patterns. Given an input image, its WL transformation is defined as the cross-channel-layer maximum pooling of an underlying wavelet transformation. WL inherits the advantage of the original wavelet transformation in capturing the spatial-frequency statistics of texture images, while being more robust to scale and orientation thanks to the maximum pooling strategy. These properties make WL an attractive alternative to the wavelet transformations used for trabecular analysis in previous studies. In particular, after extracting wavelet leader descriptors from a trabecular texture patch, we feed them into two existing statistical texture characterization methods, namely the Gray Level Co-occurrence Matrix (GLCM) and the Gray Level Run Length Matrix (GLRLM). The most discriminative features, Energy of GLCM and Gray Level Non-Uniformity of GLRLM, are retained to distinguish between two populations: osteoporotic patients and control subjects. Receiver Operating Characteristic (ROC) curves are used to measure classification performance. Experimental results on a recently released benchmark dataset show that WL significantly boosts the performance of baseline wavelet transformations by 5% on average.
The Continuous wavelet in airborne gravimetry
NASA Astrophysics Data System (ADS)
Liang, X.; Liu, L.
2013-12-01
Airborne gravimetry is an efficient method for recovering the medium- and high-frequency bands of the Earth's gravity field over any region, especially inaccessible areas. It can measure gravity with high accuracy, high resolution, and broad coverage in a rapid and economical way, and it will play an important role in geoid determination and geophysical exploration. Filtering to reduce high-frequency errors is critical to the success of airborne gravimetry, because aircraft accelerations are determined from GPS. Traditional filters used in airborne gravimetry are FIR and IIR filters. This study recommends an improved continuous wavelet approach to processing airborne gravity data. We focus on how to construct the continuous wavelet filters and show their working principle; in particular, the technical parameters of the filters (the window-width and scale parameters) are tested. The raw airborne gravity data from the first Chinese airborne gravimetry campaign are then filtered using both an FIR low-pass filter and the continuous wavelet filters to remove noise. Comparison with reference data is performed to determine the external accuracy, and shows that the continuous wavelet filters applied here perform well. The advantages of the continuous wavelet filters over digital filters are also introduced, and their effectiveness for airborne gravimetry is demonstrated through real-data computation.
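The traditional FIR low-pass baseline the wavelet filters are compared against can be sketched as a windowed sinc. All parameters below (sampling rate, cutoff, tap count, signal and error frequencies) are illustrative assumptions, not the campaign's actual processing settings.

```python
import numpy as np

def lowpass_fir(cutoff, numtaps):
    """Hamming-windowed sinc low-pass FIR; cutoff in cycles/sample,
    normalized to unit DC gain."""
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    h = np.sinc(2.0 * cutoff * n) * np.hamming(numtaps)
    return h / h.sum()

fs = 10.0                                        # sampling rate, Hz (illustrative)
t = np.arange(0.0, 120.0, 1.0 / fs)
gravity = np.sin(2 * np.pi * 0.01 * t)           # slow gravity signal of interest
noise = 0.5 * np.sin(2 * np.pi * 1.5 * t)        # high-frequency GPS-acceleration error
h = lowpass_fir(cutoff=0.1 / fs, numtaps=201)    # ~0.1 Hz cutoff
filtered = np.convolve(gravity + noise, h, mode="same")
# away from the record edges, `filtered` tracks `gravity` and suppresses the noise
```

This is the digital-filter reference point; the paper's argument is that continuous wavelet filters offer extra tuning freedom (window width and scale) over this kind of fixed low-pass design.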
Optimal wavelet denoising for smart biomonitor systems
NASA Astrophysics Data System (ADS)
Messer, Sheila R.; Agzarian, John; Abbott, Derek
2001-03-01
Future smart systems promise many benefits for biomedical diagnostics. The ideal is for simple portable systems that display and interpret information from smart integrated probes or MEMS-based devices. In this paper, we discuss a step towards this vision with a heart bio-monitor case study. An electronic stethoscope is used to record heart sounds, and the problem of extracting noise from the signal is addressed via the use of wavelets and averaging. In our example of heartbeat analysis, phonocardiograms (PCGs) have many advantages in that they may be replayed and analysed for spectral and frequency information. Many sources of noise may pollute a PCG, including foetal breath sounds if the subject is pregnant, lung and breath sounds, environmental noise, and noise from contact between the recording device and the skin. Wavelets can be employed to denoise the PCG. The signal is decomposed by a discrete wavelet transform. Due to the efficient decomposition of heart signals, their wavelet coefficients tend to be much larger than those due to noise. Thus, coefficients below a certain level are regarded as noise and are thresholded out. The signal can then be reconstructed without significant loss of information. The questions that this study attempts to answer are which wavelet families, levels of decomposition, and thresholding techniques best remove the noise in a PCG. The use of averaging in combination with wavelet denoising is also addressed. Possible applications of the Hilbert transform to heart sound analysis are discussed.
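The decompose / threshold / reconstruct loop described above can be sketched end to end. This is a minimal illustration, not the study's pipeline: it fixes one family (orthonormal Haar), one thresholding rule (soft thresholding at the universal threshold), and a synthetic stand-in for a heart sound, precisely the choices the study treats as open questions.

```python
import numpy as np

def haar_dwt(x, levels):
    """Multilevel orthonormal Haar DWT; returns [d1, d2, ..., dL, aL]."""
    coeffs, a = [], np.asarray(x, float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((odd - even) / np.sqrt(2.0))   # details, finest first
        a = (even + odd) / np.sqrt(2.0)
    coeffs.append(a)                                 # final approximation
    return coeffs

def haar_idwt(coeffs):
    """Exact inverse of haar_dwt."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        even, odd = (a - d) / np.sqrt(2.0), (a + d) / np.sqrt(2.0)
        a = np.empty(2 * d.size)
        a[0::2], a[1::2] = even, odd
    return a

rng = np.random.default_rng(1)
n = 1024
t = np.linspace(0.0, 1.0, n)
clean = np.sin(2 * np.pi * 5 * t)                    # synthetic stand-in for a heart sound
noisy = clean + 0.3 * rng.normal(size=n)

coeffs = haar_dwt(noisy, 4)
sigma = np.median(np.abs(coeffs[0])) / 0.6745        # noise level from finest details
thr = sigma * np.sqrt(2.0 * np.log(n))               # universal threshold
den = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in coeffs[:-1]]
denoised = haar_idwt(den + [coeffs[-1]])             # approximation kept as-is
```

Because the clean signal concentrates in a few large coefficients while the noise spreads thinly across all of them, soft-thresholding the details removes most of the noise energy while leaving the signal largely intact.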
KLT-based quality controlled compression of single-lead ECG.
Blanchett, T; Kember, G C; Fenton, G A
1998-07-01
An electrocardiogram (ECG) compression algorithm based on a combination of the Karhunen-Loeve transform (KLT) and multirate sampling is introduced. The use of multirate sampling reduces KLT computational times to those reported for wavelet-packet-based compression techniques. A beat-by-beat quality controlled compression criterion is shown to be necessary to ensure clinically adequate reconstruction of each beat. The resulting quality controlled algorithm efficiently achieves compression rates of approximately 30-40:1 for the MIT-BIH database. PMID:9644904
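The KLT step can be shown numerically on synthetic "beats" (this is a hypothetical sketch, not the paper's multirate algorithm or MIT-BIH data): stack aligned beats as rows, keep the top-k principal directions, and store only k coefficients per beat plus the shared basis.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
# Illustrative beat prototype: a sharp spike plus a broad dip.
proto = np.exp(-((t - 0.3) / 0.02) ** 2) - 0.2 * np.exp(-((t - 0.5) / 0.08) ** 2)
beats = np.array([a * proto + 0.01 * rng.normal(size=t.size)
                  for a in rng.uniform(0.8, 1.2, 50)])        # 50 similar beats

mean = beats.mean(axis=0)
X = beats - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)              # KLT basis from the data
k = 3
codes = X @ Vt[:k].T                                          # 3 numbers per beat, not 200
recon = codes @ Vt[:k] + mean
rel_err = np.linalg.norm(recon - beats) / np.linalg.norm(beats)
# small rel_err at a ~66:1 per-beat coefficient reduction (basis storage not counted)
```

Because consecutive beats are highly similar, the data-derived basis captures almost all the variance in a handful of components, which is why KLT-based schemes reach the 30-40:1 range; the paper's quality-controlled criterion then adapts the retained detail beat by beat.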
Dictionary Approaches to Image Compression and Reconstruction
NASA Technical Reports Server (NTRS)
Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.
1998-01-01
This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted φ_γ, are discrete-time signals, where γ represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain an image representation based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
Adaptive prediction trees for image compression.
Robinson, John A
2006-08-01
This paper presents a complete general-purpose method for still-image compression called adaptive prediction trees. Efficient lossy and lossless compression of photographs, graphics, textual, and mixed images is achieved by ordering the data in a multicomponent binary pyramid, applying an empirically optimized nonlinear predictor, exploiting structural redundancies between color components, then coding with hex-trees and adaptive runlength/Huffman coders. Color palettization and order statistics prefiltering are applied adaptively as appropriate. Over a diverse image test set, the method outperforms standard lossless and lossy alternatives. The competing lossy alternatives use block transforms and wavelets in well-studied configurations. A major result of this paper is that predictive coding is a viable and sometimes preferable alternative to these methods. PMID:16900671
Liu, Guoyan; Liu, Hongjun; Kadir, Abdurahman
2012-01-01
This paper proposes a new dynamic and robust blind watermarking scheme for color pathological image based on discrete wavelet transform (DWT). The binary watermark image is preprocessed before embedding; firstly it is scrambled by Arnold cat map and then encrypted by pseudorandom sequence generated by robust chaotic map. The host image is divided into n × n blocks, and the encrypted watermark is embedded into the higher frequency domain of blue component. The mean and variance of the subbands are calculated, to dynamically modify the wavelet coefficient of a block according to the embedded 0 or 1, so as to generate the detection threshold. We research the relationship between embedding intensity and threshold and give the effective range of the threshold to extract the watermark. Experimental results show that the scheme can resist against common distortions, especially getting advantage over JPEG compression, additive noise, brightening, rotation, and cropping. PMID:23243463
Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations
Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.
2005-05-13
We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, the existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of applying the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.
Storing digital data using zero-compression method
NASA Astrophysics Data System (ADS)
Al-Qawasmi, Abdel-Rahman; Al-Lawama, Aiman
2008-01-01
The Zero-Compression Method (ZCM) is a simple and effective algorithm for compressing digital data that contain a significant number of zeros. The method can encode and decode data at high processing speed and recover the stored data with minimum error and minimum storage area. The ZCM may also have wide practical value in storing data extracted from ECG signals. The method is implemented for various types of signals, such as textual data, wavelet subsignals, randomly generated signals, and speech signals. In this paper the coding and decoding algorithm is presented.
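The abstract does not spell out the ZCM encoding format, so the sketch below is only an assumed stand-in in the same spirit: a run-length scheme in which each run of zeros collapses to a single (0, run_length) pair and non-zero samples pass through unchanged.

```python
def zcm_encode(data):
    """Collapse runs of zeros into (0, run_length) pairs (illustrative format)."""
    out, i = [], 0
    while i < len(data):
        if data[i] == 0:
            run = 0
            while i < len(data) and data[i] == 0:
                run += 1
                i += 1
            out.append((0, run))
        else:
            out.append(data[i])
            i += 1
    return out

def zcm_decode(encoded):
    """Expand (0, run_length) pairs back into runs of zeros."""
    out = []
    for item in encoded:
        if isinstance(item, tuple):
            out.extend([0] * item[1])
        else:
            out.append(item)
    return out
```

The longer the zero runs, as in thresholded wavelet subsignals, the better such a scheme compresses.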
Compression and venous ulcers.
Stücker, M; Link, K; Reich-Schupke, S; Altmeyer, P; Doerler, M
2013-03-01
Compression therapy is considered to be the most important conservative treatment of venous leg ulcers. Until a few years ago, compression bandages were regarded as first-line therapy of venous leg ulcers. However, to date medical compression stockings are the first choice of treatment. With respect to compression therapy of venous leg ulcers the following statements are widely accepted: 1. Compression improves the healing of ulcers when compared with no compression; 2. Multicomponent compression systems are more effective than single-component compression systems; 3. High compression is more effective than lower compression; 4. Medical compression stockings are more effective than compression with short stretch bandages. Healed venous leg ulcers show a high relapse rate without ongoing treatment. The use of medical stockings significantly reduces the amount of recurrent ulcers. Furthermore, the relapse rate of venous leg ulcers can be significantly reduced by a combination of compression therapy and surgery of varicose veins compared with compression therapy alone. PMID:23482538
NASA Astrophysics Data System (ADS)
Yang, Shuyu; Zamora, Gilberto; Wilson, Mark; Mitra, Sunanda
2000-06-01
Existing lossless coding models yield only up to 3:1 compression. However, much higher lossless compression can be achieved for certain medical images when the images are segmented prior to applying an integer-to-integer wavelet transform and lossless coding. The methodology used in this research is to apply a contour detection scheme to segment the image first. The segmented image is then wavelet transformed with an integer-to-integer mapping to obtain a lower weighted entropy than the original. An adaptive arithmetic model is then applied to code the transformed image losslessly. For the male Visible Human color image set, the overall average lossless compression using the above scheme is around 10:1, whereas the compression ratio of an individual slice can be as high as 16:1. The achievable compression ratio depends on the actual bit rate of the segmented images attained by lossless coding as well as the compression obtainable from segmentation alone. The entire process is fast enough for application to large medical images.
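A minimal sketch of an integer-to-integer wavelet step of the kind applied before lossless coding: the S-transform (Haar via lifting with floored averages), which maps integers to integers fully reversibly. This illustrates the general technique, not the specific transform the authors used.

```python
def s_transform(x):
    """Forward S-transform on an even-length list of integers."""
    detail = [x[i] - x[i + 1] for i in range(0, len(x), 2)]
    # a = x1 + floor(d/2) equals floor((x0 + x1)/2): an integer average.
    approx = [x[i + 1] + (d >> 1)
              for i, d in zip(range(0, len(x), 2), detail)]
    return approx, detail

def s_inverse(approx, detail):
    """Exactly invert s_transform (lossless by construction)."""
    x = []
    for a, d in zip(approx, detail):
        x1 = a - (d >> 1)   # recover the second sample of the pair
        x0 = x1 + d         # then the first
        x.extend([x0, x1])
    return x
```

Because every intermediate value is an integer, no rounding error accumulates, which is what makes the subsequent entropy coding truly lossless.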
Mining wavelet transformed boiler data sets
NASA Astrophysics Data System (ADS)
Letsche, Terry Lee
Accurate combustion models provide information that allows increased boiler efficiency optimization, saving money and resources while reducing waste. Boiler combustion processes are noted for being complex, nonstationary, and nonlinear. While numerous methods have been used to model boiler processes, data-driven approaches reflect actual operating conditions within a particular boiler and do not depend on idealized, complex, or expensive empirical models. Boiler and combustion processes vary in time, requiring a denoising technique that preserves the temporal and frequency nature of the data. The moving average, a common technique, smoothes data but does not remove low-frequency noise. This dissertation examines models built with wavelet denoising techniques that remove low- and high-frequency noise in both the time and frequency domains. The denoising process has a number of parameters, including the choice of wavelet, threshold value, level of wavelet decomposition, and disposition of attributes that appear to be significant at multiple thresholds. A process is developed to experimentally evaluate the predictive accuracy of these models and compare the results against two benchmarks. The first research hypothesis compares the performance of the wavelet-denoised models to the model generated from the original data. The second research hypothesis compares the performance of the models generated with this denoising approach to the most effective model generated from a moving-average process. In both experiments it was determined that the Daubechies 4 wavelet is a better choice than the more typically chosen Haar wavelet, wavelet-packet decomposition outperforms other levels of wavelet decomposition, and discarding all but the lowest-threshold repeating attributes produces superior results. The third research hypothesis examined using a two-dimensional wavelet transform on the data. Another parameter for handling the boundary condition was introduced. In the two-dimensional case
Low bit-rate efficient compression for seismic data.
Averbuch, A Z; Meyer, R; Stromberg, J O; Coifman, R; Vassiliou, A
2001-01-01
Compression is a relatively recently introduced technique for seismic data operations. The main drive behind the use of data compression in seismic data is the very large size of the seismic data acquired. Some of the most recently acquired marine seismic data sets exceed 10 Tbytes, and there are currently seismic surveys planned with a volume of around 120 Tbytes. Thus, the need to compress these very large seismic data files is imperative. Nevertheless, seismic data are quite different from the typical images used in image processing and multimedia applications. Major differences include a dynamic range exceeding 100 dB in theory, an often extensively oscillatory nature, x and y directions that represent different physical meanings, and a significant amount of coherent noise. Up to now, algorithms used for seismic data compression have typically been based on some form of wavelet or local cosine transform with a uniform or quasiuniform quantization scheme, followed by Huffman coding. Using this family of compression algorithms, we achieve compression results that are acceptable to geophysicists only at low to moderate compression ratios. For higher compression ratios or higher decibel quality, significant compression artifacts are introduced in the reconstructed images, even with high-dimensional transforms. The objective of this paper is to achieve a higher compression ratio than the wavelet/uniform quantization/Huffman coding family of compression schemes, with a comparable level of residual noise. The goal is to achieve above 40 dB in the decompressed seismic data sets. Several established compression algorithms are reviewed, and some new compression algorithms are introduced. All of these compression techniques are applied to a good representation of seismic data sets, and their results are documented in this paper. One of the conclusions is that
Multichannel Compressive Sensing MRI Using Noiselet Encoding
Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin
2015-01-01
The incoherence between the measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis comparing the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
Techniques for region coding in object-based image compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2004-01-01
Object-based compression (OBC) is an emerging technology that combines region segmentation and coding to produce a compact representation of a digital image or video sequence. Previous research has focused on a variety of segmentation and representation techniques for regions that comprise an image. The author has previously suggested [1] partitioning of the OBC problem into three steps: (1) region segmentation, (2) region boundary extraction and compression, and (3) region contents compression. A companion paper [2] surveys implementationally feasible techniques for boundary compression. In this paper, we analyze several strategies for region contents compression, including lossless compression, lossy VPIC, EPIC, and EBLAST compression, wavelet-based coding (e.g., JPEG-2000), as well as texture matching approaches. This paper is part of a larger study that seeks to develop highly efficient compression algorithms for still and video imagery, which would eventually support automated object recognition (AOR) and semantic lookup of images in large databases or high-volume OBC-format datastreams. Example applications include querying journalistic archives, scientific or medical imaging, surveillance image processing and target tracking, as well as compression of video for transmission over the Internet. Analysis emphasizes time and space complexity, as well as sources of reconstruction error in decompressed imagery.
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2014-07-01
Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex optimization. The DOA estimation problem is formulated in the CS framework and it is shown that CS has superior performance compared to traditional DOA estimation methods especially under challenging scenarios such as coherent arrivals and single-snapshot data. An offset and resolution analysis is performed to indicate the limitations of CS. It is shown that the limitations are related to the beampattern, thus can be predicted. The high-resolution capabilities and the robustness of CS are demonstrated on experimental array data from ocean acoustic measurements for source tracking with single-snapshot data. PMID:24993212
Multiparameter radar analysis using wavelets
NASA Astrophysics Data System (ADS)
Tawfik, Ben Bella Sayed
Multiparameter radars have been used in the interpretation of many meteorological phenomena. Rainfall estimates can be obtained from multiparameter radar measurements. Studying and analyzing the spatial variability of different rainfall algorithms, namely R(ZH), based on reflectivity; R(ZH, ZDR), based on reflectivity and differential reflectivity; R(KDP), based on specific differential phase; and R(KDP, ZDR), based on specific differential phase and differential reflectivity, is important for radar applications. The data used in this research were collected using the CSU-CHILL, CP-2, and S-POL radars. In this research multiple objectives are addressed using wavelet analysis, namely: (1) space/time variability of various rainfall algorithms; (2) separation of convective and stratiform storms based on reflectivity measurements; and (3) detection of features such as bright bands. The bright band is a multiscale edge detection problem. In this research, the technique of multiscale edge detection is applied to the radar data collected using the CP-2 radar on August 23, 1991 to detect the melting layer. In the analysis of space/time variability of rainfall algorithms, the wavelet variance gives an idea of the statistics of the radar field. In addition, multiresolution analyses of rainfall estimates based on the four algorithms R(ZH), R(ZH, ZDR), R(KDP), and R(KDP, ZDR) are carried out. The flood data of July 29, 1997 collected by the CSU-CHILL radar were used for this analysis. Another set of S-POL radar data collected on May 2, 1997 at Wichita, Kansas was used as well. At each level of approximation, the detail and approximation components are analyzed. Based on this analysis, the rainfall algorithms can be judged. From this analysis an important result was obtained: the Z-R algorithms that are widely used do not show the full spatial variability of rainfall. In addition another intuitively obvious result
Hyperspectral images lossless compression using the 3D binary EZW algorithm
NASA Astrophysics Data System (ADS)
Cheng, Kai-jen; Dill, Jeffrey
2013-02-01
This paper presents a transform-based lossless compression method for hyperspectral images inspired by Shapiro's (1993) EZW algorithm. The proposed compression method uses a hybrid transform that includes an integer Karhunen-Loeve transform (KLT) and an integer discrete wavelet transform (DWT). The integer KLT is employed to eliminate correlations among the bands of the hyperspectral image. The integer 2D DWT is applied to eliminate correlations in the spatial dimensions and produce wavelet coefficients. These coefficients are then coded by the proposed binary EZW algorithm. The binary EZW eliminates the subordinate pass of conventional EZW by coding residual values, and produces binary sequences. The binary EZW algorithm combines the merits of the well-known EZW and SPIHT algorithms and is computationally simpler for lossless compression. The proposed method was applied to AVIRIS images and compared to other state-of-the-art image compression techniques. The results show that the proposed lossless image compression is more efficient and also has a higher compression ratio than the other algorithms.
The simulation of far-field wavelets using frequency-domain air-gun array near-field wavelets
NASA Astrophysics Data System (ADS)
Song, Jian-Guo; Deng, Yong; Tong, Xin-Xin
2013-12-01
Air-gun arrays are used in marine-seismic exploration. Far-field wavelets in subsurface media represent the stacking of single air-gun ideal wavelets. We derived single air-gun ideal wavelets using near-field wavelets recorded by near-field geophones and then synthesized them into far-field wavelets, which is critical for processing wavelets in marine-seismic exploration. For this purpose, several algorithms are currently used to decompose and synthesize wavelets in the time domain. If the traveltime of single air-gun wavelets is not an integral multiple of the sampling interval, complex and error-prone resampling of the seismic signals is necessary with the time-domain method. Based on the relation between the frequency-domain phase and the time-domain time delay, we propose a method that first transforms the real near-field wavelet to the frequency domain via the Fourier transform, then decomposes and recomposes the wavelet spectrum in the frequency domain, and finally transforms it back to the time domain. Thus, the resampling problem is avoided and single air-gun wavelets and far-field wavelets can be reliably derived. The effect of ghost reflections is also considered, and they are removed during wavelet decomposition. Modeling and real-data processing demonstrate the feasibility of the proposed method.
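The core relation this method exploits, that a time delay corresponds to a linear phase ramp in the frequency domain, can be sketched as follows. The naive O(N^2) DFT and the `delay` helper are illustrative assumptions, not the authors' implementation; a fractional `tau` gives exactly the subsample delays that are awkward to handle by time-domain resampling, and for brevity the sketch simply takes the real part after the inverse transform.

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, enough for a small demonstration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def delay(x, tau):
    """Circularly delay x by tau samples; tau need not be an integer."""
    N = len(x)
    X = dft(x)
    # Map bin k to a signed frequency index so real inputs stay (nearly) real,
    # then apply the linear phase exp(-2*pi*i*f*tau/N).
    shifted = [Xk * cmath.exp(-2j * cmath.pi * ((k + N // 2) % N - N // 2) * tau / N)
               for k, Xk in enumerate(X)]
    return [c.real for c in idft(shifted)]
```

With an integer delay this reduces to a circular shift, which makes the equivalence with time-domain shifting easy to check.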
Wavelet phase estimation using ant colony optimization algorithm
NASA Astrophysics Data System (ADS)
Wang, Shangxu; Yuan, Sanyi; Ma, Ming; Zhang, Rui; Luo, Chunmei
2015-11-01
Eliminating the seismic wavelet is important in high-resolution seismic processing. However, artifacts may arise in seismic interpretation when the wavelet phase is inaccurately estimated. Therefore, we propose a frequency-dependent wavelet phase estimation method based on the ant colony optimization (ACO) algorithm, which has global optimization capacity. The wavelet phase can be optimized with the ACO algorithm by fitting nearby-well seismic traces with well-log data. Our proposed method can rapidly produce a frequency-dependent wavelet phase and optimize the seismic-to-well tie, particularly for weak signals. Synthetic examples demonstrate the effectiveness of the proposed ACO-based wavelet phase estimation method, even in the presence of colored noise. A real-data example illustrates that seismic deconvolution using an optimum mixed-phase wavelet can provide more information than using an optimum constant-phase wavelet.
Wavelet transforms as solutions of partial differential equations
Zweig, G.
1997-10-01
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Wavelet transforms are useful in representing transients whose time and frequency structure reflect the dynamics of an underlying physical system. Speech sound, pressure in turbulent fluid flow, or engine sound in automobiles are excellent candidates for wavelet analysis. This project focused on (1) methods for choosing the parent wavelet for a continuous wavelet transform in pattern recognition applications and (2) the more efficient computation of continuous wavelet transforms by understanding the relationship between discrete wavelet transforms and discretized continuous wavelet transforms. The most interesting result of this research is the finding that the generalized wave equation, on which the continuous wavelet transform is based, can be used to understand phenomena that relate to the process of hearing.
Image wavelet decomposition and applications
NASA Technical Reports Server (NTRS)
Treil, N.; Mallat, S.; Bajcsy, R.
1989-01-01
The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, a look at the human visual system gives an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process that detects features independently in different orientation sectors. Thus, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to higher-level applications such as edge and corner finding, which in turn provide two basic steps to image segmentation. The possibilities of feedback between different levels of processing are also discussed.
Displaying radiologic images on personal computers: image storage and compression--Part 2.
Gillespy, T; Rowberg, A H
1994-02-01
This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. The LZW compression after the DPCM image transformation performed the best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and discrete wavelet transformation. In both methods, most of the image information is contained in a relatively few of the transformation coefficients. The quantization step reduces many of the lower order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution. PMID:8172973
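The DPCM step mentioned in this abstract is easy to make concrete: each sample is replaced by its difference from the previous one, concentrating values near zero so the subsequent Huffman or LZW stage compresses better. A minimal 1-D sketch (DPCM for images would normally predict from 2-D neighbours):

```python
def dpcm_encode(samples):
    """Replace each sample by its difference from the previous sample."""
    prev, out = 0, []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def dpcm_decode(diffs):
    """Undo dpcm_encode by running-summing the differences."""
    prev, out = 0, []
    for d in diffs:
        prev += d
        out.append(prev)
    return out
```

On smooth image rows the differences are small integers clustered near zero, which is exactly the skewed distribution entropy coders exploit.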
Generalizing Lifted Tensor-Product Wavelets to Irregular Polygonal Domains
Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.
2002-04-11
We present a new construction approach for symmetric lifted B-spline wavelets on irregular polygonal control meshes defining two-manifold topologies. Polygonal control meshes are recursively refined by stationary subdivision rules and converge to piecewise polynomial limit surfaces. At every subdivision level, our wavelet transforms provide an efficient way to add geometric details that are expanded from wavelet coefficients. Both wavelet decomposition and reconstruction operations are based on local lifting steps and have linear-time complexity.
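The local lifting steps such a construction is built from can be illustrated in 1-D. The sketch below uses the classic predict/update pair of the CDF(2,2) wavelet with mirrored boundaries; it is an analogy for the mesh-based construction, not the paper's actual scheme. Both directions cost a constant number of local operations per sample, hence the linear-time complexity the abstract cites.

```python
def lift_forward(x):
    """One level of CDF(2,2) lifting on an even-length signal."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # Predict: estimate each odd sample from its even neighbours
    # (the right neighbour is mirrored at the boundary).
    d = [odd[i] - (even[i] + even[min(i + 1, n - 1)]) / 2 for i in range(n)]
    # Update: adjust the evens so local averages are preserved.
    a = [even[i] + (d[max(i - 1, 0)] + d[i]) / 4 for i in range(n)]
    return a, d

def lift_inverse(a, d):
    """Undo the update, then the predict: perfect reconstruction."""
    n = len(d)
    even = [a[i] - (d[max(i - 1, 0)] + d[i]) / 4 for i in range(n)]
    odd = [d[i] + (even[i] + even[min(i + 1, n - 1)]) / 2 for i in range(n)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x
```

Because the inverse simply replays each lifting step with the opposite sign, reconstruction is exact regardless of the predict/update filters chosen, which is what makes lifting attractive on irregular meshes.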
Analysis of autostereoscopic three-dimensional images using multiview wavelets.
Saveljev, Vladimir; Palchikova, Irina
2016-08-10
We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. Symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to multiview, integral, and plenoptic images. PMID:27534470
Composite wavelet representations for reconstruction of missing data
NASA Astrophysics Data System (ADS)
Czaja, Wojciech; Dobrosotskaya, Julia; Manning, Benjamin
2013-05-01
We shall introduce a novel methodology for data reconstruction and recovery, based on composite wavelet representations. These representations include shearlets and crystallographic wavelets, among others, and they allow for an increased directional sensitivity in comparison with the standard multiscale techniques. Our new approach allows us to recover missing data, due to sparsity of composite wavelet representations, especially when compared to inpainting algorithms induced by traditional wavelet representations, and also due to the flexibility of our variational approach.
Undecimated Wavelet Transforms for Image De-noising
Gyaourova, A; Kamath, C; Fodor, I K
2002-11-19
A few different approaches exist for computing the undecimated wavelet transform. In this work we construct three undecimated schemes and evaluate their performance for image noise reduction. We use standard wavelet-based de-noising techniques and compare the performance of our algorithms with the original undecimated wavelet transform, as well as with the decimated wavelet transform. Our experiments show that our algorithms have a better noise-removal/blurring ratio.
Wavelet processing techniques for digital mammography
NASA Astrophysics Data System (ADS)
Laine, Andrew F.; Song, Shuwu
1992-09-01
This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse to fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency, results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).
Pseudo-Gabor wavelet for face recognition
NASA Astrophysics Data System (ADS)
Xie, Xudong; Liu, Wentao; Lam, Kin-Man
2013-04-01
An efficient face-recognition algorithm is proposed, which not only possesses the advantages of linear subspace analysis approaches, such as low computational complexity, but also has the high recognition performance of wavelet-based algorithms. Based on the linearity of Gabor-wavelet transformation and some basic assumptions on face images, we can extract pseudo-Gabor features from the face images without performing any complex Gabor-wavelet transformations. The computational complexity can therefore be reduced while a high recognition performance is still maintained by using the principal component analysis (PCA) method. The proposed algorithm is evaluated on the Yale database, the Caltech database, the ORL database, the AR database, and the Facial Recognition Technology database, and is compared with several different face recognition methods such as PCA, Gabor wavelets plus PCA, kernel PCA, locality preserving projection, and dual-tree complex wavelet transformation plus PCA. Experiments show that consistent and promising results are obtained.
Lifting wavelet method of target detection
NASA Astrophysics Data System (ADS)
Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin
2009-11-01
Image target recognition plays an important role in scientific exploration, aeronautics, space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference in complex environments have always affected the stability of recognition algorithms. To address the real-time, accuracy and anti-interference requirements of target detection, this paper applies a lifting-wavelet method. First, histogram equalization and frame differencing are used to obtain candidate target regions, followed by adaptive thresholding and mathematical morphology operations to eliminate background errors. Second, a multi-channel wavelet filter is used to denoise and enhance the original image, overcoming the noise sensitivity of general algorithms and reducing the false-alarm rate. The multiresolution character of the wavelet and the lifting framework, which can be designed directly in the spatial domain, are exploited for target detection and feature extraction. The experimental results show that the designed lifting wavelet overcomes the detection difficulties caused by target motion against complex backgrounds: it effectively suppresses noise and improves the efficiency and speed of detection.
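The first-stage preprocessing described in the abstract (difference against a background, adaptive threshold, morphological clean-up) can be sketched with plain NumPy as below. The 3x3 structuring element and the mean-plus-k-sigma threshold are my own illustrative choices, not details from the paper.

```python
import numpy as np

def erode(m):
    """3x3 binary erosion via shifted logical ANDs."""
    p = np.pad(m, 1, constant_values=False)
    out = np.ones_like(m, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy: 1 + dy + m.shape[0], 1 + dx: 1 + dx + m.shape[1]]
    return out

def dilate(m):
    """3x3 binary dilation via shifted logical ORs."""
    p = np.pad(m, 1, constant_values=False)
    out = np.zeros_like(m, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy: 1 + dy + m.shape[0], 1 + dx: 1 + dx + m.shape[1]]
    return out

def opening(m):
    """Erosion followed by dilation: removes isolated noise pixels."""
    return dilate(erode(m))

def detect_targets(frame, background, k=2.0):
    """Difference the frame against the background, threshold adaptively
    at mean + k*std of the difference, then morphologically open."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    mask = diff > diff.mean() + k * diff.std()
    return opening(mask)

bg = np.zeros((32, 32))
fr = bg.copy()
fr[10:14, 10:14] = 10.0          # a 4x4 bright synthetic target
mask = detect_targets(fr, bg)
```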
Segmentation of dermoscopy images using wavelet networks.
Sadri, Amir Reza; Zekri, Maryam; Sadri, Saeed; Gheissari, Niloofar; Mokhtari, Mojgan; Kolahdouzan, Farzaneh
2013-04-01
This paper introduces a new approach for the segmentation of skin lesions in dermoscopic images based on a wavelet network (WN). The WN presented here is a member of the fixed-grid WNs, which are formed with no need for training. In this WN, after formation of the wavelet lattice, the shift and scale parameters of the wavelets are determined in two screening stages and the effective wavelets are selected; the orthogonal least squares algorithm is then used to calculate the network weights and to optimize the network structure. The two screening stages increase the globality of the wavelet lattice and provide a better estimation of the function, especially at larger scales. The R, G, and B values of a dermoscopy image serve as the network inputs for forming the network structure. The image is then segmented and the exact boundary of the skin lesion is determined accordingly. The segmentation algorithm was applied to 30 dermoscopic images and evaluated with 11 different metrics, using the segmentation result obtained by a skilled pathologist as the ground truth. Experimental results show that our method acts more effectively in comparison with some modern techniques that have been successfully used in many medical imaging problems. PMID:23193305
Wavelet based detection of manatee vocalizations
NASA Astrophysics Data System (ADS)
Gur, Berke M.; Niezrecki, Christopher
2005-04-01
The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.
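A minimal stand-in for the detection criterion described above: compare the windowed energy of Haar detail coefficients against a multiple of the median window energy, flagging windows that exceed it. The window length, threshold factor, and synthetic "call" are hypothetical, not the study's actual data or criterion.

```python
import numpy as np

def haar_details(x):
    """Level-1 Haar detail coefficients (high-frequency content)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // 2 * 2
    return (x[0:n:2] - x[1:n:2]) / np.sqrt(2)

def detect_call(x, win=32, factor=5.0):
    """Flag windows whose detail energy exceeds `factor` times the
    median window energy -- a simple stand-in detection criterion."""
    d = haar_details(x)
    n = len(d) // win * win
    e = (d[:n] ** 2).reshape(-1, win).sum(axis=1)
    return e > factor * np.median(e)

rng = np.random.default_rng(0)
t = np.arange(4096)
x = 0.1 * rng.standard_normal(4096)
x[2000:2300] += np.sin(2 * np.pi * 0.3 * t[2000:2300])  # synthetic "call"
hits = detect_call(x)
```

The median-based threshold makes the criterion robust to the overall noise level, since the median is dominated by call-free windows.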
White, M.A.; Schmidt, J.C.; Topping, D.J.
2005-01-01
Wavelet analysis is a powerful tool with which to analyse the hydrologic effects of dam construction and operation on river systems. Using continuous records of instantaneous discharge from the Lees Ferry gauging station and records of daily mean discharge from upstream tributaries, we conducted wavelet analyses of the hydrologic structure of the Colorado River in Grand Canyon. The wavelet power spectrum (WPS) of daily mean discharge provided a highly compressed and integrative picture of the post-dam elimination of pronounced annual and sub-annual flow features. The WPS of the continuous record showed the influence of diurnal and weekly power generation cycles, shifts in discharge management, and the 1996 experimental flood in the post-dam period. Normalization of the WPS by local wavelet spectra revealed the fine structure of modulation in discharge scale and amplitude and provides an extremely efficient tool with which to assess the relationships among hydrologic cycles and ecological and geomorphic systems. We extended our analysis to sections of the Snake River and showed how wavelet analysis can be used as a data mining technique. The wavelet approach is an especially promising tool with which to assess dam operation in less well-studied regions and to evaluate management attempts to reconstruct desired flow characteristics. Copyright © 2005 John Wiley & Sons, Ltd.
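The wavelet power spectrum used above can be sketched with a Morlet continuous wavelet transform implemented directly by convolution. The Morlet wavelet with ω0 = 6 and the scale grid below are common conventions in this literature, assumed here rather than taken from the paper; the synthetic discharge series is likewise illustrative.

```python
import numpy as np

def morlet_wps(x, scales, w0=6.0):
    """Wavelet power spectrum |W(s, t)|^2 from convolution of the
    (demeaned) series with scaled Morlet wavelets."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    wps = np.empty((len(scales), len(x)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)       # truncate at +-4 scales
        psi = (np.pi ** -0.25 * np.exp(1j * w0 * t / s)
               * np.exp(-(t / s) ** 2 / 2)) / np.sqrt(s)
        w = np.convolve(x, np.conj(psi[::-1]), mode="same")
        wps[i] = np.abs(w) ** 2
    return wps

# synthetic daily discharge with annual and weekly cycles
days = np.arange(3 * 365)
flow = 100 + 50 * np.sin(2 * np.pi * days / 365) + 5 * np.sin(2 * np.pi * days / 7)
scales = np.array([4, 8, 16, 32, 64, 128])
power = morlet_wps(flow, scales)
```

A ridge of power at a given scale marks a persistent cycle of the corresponding period; its disappearance after a change in dam operation is exactly the kind of feature the WPS makes visible.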
NASA Astrophysics Data System (ADS)
Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas
2002-08-01
We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region, to a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost to benefit ratio for the evaluation of matrix elements.
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java, and is thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
Wavelet Analysis for Wind Fields Estimation
Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.
2010-01-01
Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches by including the à trous transform with a B3 spline scaling function, in addition to other wavelet bases such as Gabor and Mexican hat. The purpose is to extract more reliable directional information when wind speed values range from 5 to 10 ms−1. Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699
Image encryption using the fractional wavelet transform
NASA Astrophysics Data System (ADS)
Vilardy, Juan M.; Useche, J.; Torres, C. O.; Mattos, L.
2011-01-01
In this paper a technique for the coding of digital images is developed using the Fractional Wavelet Transform (FWT) and random phase masks (RPMs). The digital image to be encrypted is transformed with the FWT; the resulting coefficients (approximation and horizontal, vertical and diagonal details) are each multiplied by a different, statistically independent RPM, and an inverse wavelet transform (IWT) is applied to the result, yielding the encrypted digital image. Decryption is the encryption procedure applied in reverse. This technique offers immediate security advantages over conventional techniques: the mother wavelet family and the fractional orders associated with the FWT act as additional keys (besides the RPMs used) that make the information difficult to access for an unauthorized person, so the level of encryption security is greatly increased. The mathematical support for the use of the FWT in the computational encryption algorithm is also developed in this work.
Nature's statistical symmetries, a characterization by wavelets.
Davis, A. B.
2001-01-01
Wavelets are the mathematical equivalent of a microscope, a means of looking at more or less detail in data. By applying wavelet transforms to remote sensing data (satellite images, atmospheric profiles, etc.), we can discover symmetries in Nature's ways of changing in time and displaying a highly variable environment at any given time. These symmetries are not exact but statistical. The most intriguing one is 'scale-invariance', which describes how spatial statistics collected over a wide range of scales (using wavelets) follow simple power laws with respect to the scale parameter. The geometrical counterparts of statistical scale-invariance are the random fractals so often observed in Nature. This wavelet-based exploration of natural symmetry will be illustrated with clouds.
Wavelet-assisted volume ray casting.
He, T
1998-01-01
Volume rendering is an important technique for computational biology. In this paper we propose a new wavelet-assisted volume ray casting algorithm. The main idea is to use the wavelet coefficients to detect the local frequency, and to decide the appropriate sampling rate along the ray according to the maximum frequency. Our algorithm first applies the 3D discrete wavelet transform to the volume, then creates an index volume to indicate the necessary sampling distance at each voxel. During ray casting, the original volume is traversed in the spatial domain, while the index volume is used to decide the appropriate sampling distance. We demonstrate that our algorithm provides a framework for approximating the volume rendering at different levels of quality in a rapid and controlled way. PMID:9697179
Characterization and simulation of gunfire with wavelets
Smallwood, D.O.
1998-09-01
Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The response of a structure to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The methods all used some form of the discrete Fourier transform. The current paper will explore a simpler method to describe the nonstationary random process in terms of a wavelet transform. As was done previously, the gunfire record is broken up into a sequence of transient waveforms, each representing the response to the firing of a single round. The wavelet transform is performed on each of these records. The mean and standard deviation of the resulting wavelet coefficients describe the composite characteristics of the entire waveform. It is shown that the distribution of the wavelet coefficients is approximately Gaussian with a nonzero mean and that the standard deviations of the coefficients at different times and levels are approximately independent. The gunfire is simulated by generating realizations of records of a single-round firing by computing the inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously discussed gunfire record. The individual realizations are then assembled into a realization of a time history of many rounds firing. A second-order correction of the probability density function (pdf) is accomplished with a zero memory nonlinear (ZMNL) function. The method is straightforward, easy to implement, and produces a simulated record very much like the original measured gunfire record.
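The simulation recipe above (transform an ensemble, estimate coefficient means and standard deviations, draw Gaussian coefficients, invert) can be sketched with a Haar transform standing in for the paper's wavelet. The windowed-noise "rounds" below are synthetic placeholders for the measured single-round records.

```python
import numpy as np

def haar_dwt_full(x, levels):
    """Flattened multi-level Haar coefficients [a_L, d_L, ..., d_1]."""
    a, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return np.concatenate([a] + details[::-1])

def haar_idwt_full(c, levels):
    """Invert haar_dwt_full."""
    n = len(c) // 2 ** levels
    a, rest = c[:n], c[n:]
    for _ in range(levels):
        d, rest = rest[:len(a)], rest[len(a):]
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

rng = np.random.default_rng(1)
# stand-in ensemble of 50 single-round transient records, 256 samples each
rounds = rng.standard_normal((50, 256)) * np.hanning(256)
levels = 4
coeffs = np.array([haar_dwt_full(r, levels) for r in rounds])
mu, sigma = coeffs.mean(axis=0), coeffs.std(axis=0)
# draw new Gaussian coefficients with the estimated statistics, then
# invert the transform to synthesize one simulated round
synthetic = haar_idwt_full(mu + sigma * rng.standard_normal(coeffs.shape[1]), levels)
```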
Wavelet analysis applied to the IRAS cirrus
NASA Technical Reports Server (NTRS)
Langer, William D.; Wilson, Robert W.; Anderson, Charles H.
1994-01-01
The structure of infrared cirrus clouds is analyzed with Laplacian pyramid transforms, a form of non-orthogonal wavelets. Pyramid and wavelet transforms provide a means to decompose images into their spatial frequency components such that all spatial scales are treated in an equivalent manner. The multiscale transform analysis is applied to IRAS 100 micrometer maps of cirrus emission in the north Galactic pole region to extract features on different scales. In the maps we identify filaments, fragments and clumps by separating all connected regions. These structures are analyzed with respect to their Hausdorff dimension for evidence of the scaling relationships in the cirrus clouds.
Analysis of wavelet technology for NASA applications
NASA Technical Reports Server (NTRS)
Wells, R. O., Jr.
1994-01-01
The purpose of this grant was to introduce a broad group of NASA researchers and administrators to wavelet technology and to determine its future role in research and development at NASA JSC. The activities of several briefings held between NASA JSC scientists and Rice University researchers are discussed. An attached paper, 'Recent Advances in Wavelet Technology', summarizes some aspects of these briefings. Two proposals submitted to NASA reflect the primary areas of common interest. They are image analysis and numerical solutions of partial differential equations arising in computational fluid dynamics and structural mechanics.
Numerical Algorithms Based on Biorthogonal Wavelets
NASA Technical Reports Server (NTRS)
Ponenti, Pj.; Liandrat, J.
1996-01-01
Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.
Wavelet analysis of 'double quasar' flux data
NASA Astrophysics Data System (ADS)
Hjorth, P. G.; Villemoes, L. F.; Teuber, J.; Florentin-Nielsen, R.
1992-02-01
We have used a wavelet transform method to extract time delay information from the light curves of the gravitationally lensed quasar 0957+561 A,B. The time-frequency performance of wavelet transforms is different from that of, e.g., windowed Fourier transforms in allowing a better temporal resolution and localization of the multiple scales of the signal. It is shown that the discrepancies between the time delays derived by different authors may in part be ascribed to the choice of reduction method.
An Energy Efficient Compressed Sensing Framework for the Compression of Electroencephalogram Signals
Fauvel, Simon; Ward, Rabab K.
2014-01-01
The use of wireless body sensor networks is gaining popularity in monitoring and communicating information about a person's health. In such applications, the amount of data transmitted by the sensor node should be minimized. This is because the energy available in these battery-powered sensors is limited. In this paper, we study the wireless transmission of electroencephalogram (EEG) signals. We propose the use of a compressed sensing (CS) framework to efficiently compress these signals at the sensor node. Our framework exploits both the temporal correlation within EEG signals and the spatial correlations amongst the EEG channels. We show that our framework is up to eight times more energy efficient than the typical wavelet compression method in terms of compression and encoding computations and wireless transmission. We also show that for a fixed compression ratio, our method achieves a better reconstruction quality than the state-of-the-art CS-based method. We finally demonstrate that our method is robust to measurement noise and to packet loss and that it is applicable to a wide range of EEG signal types. PMID:24434840
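A generic CS sketch, not the authors' method: a sparse signal is sensed with far fewer random measurements than samples (y = Φx), and recovered with orthogonal matching pursuit, one standard CS solver. The dimensions and sparsity level are illustrative.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the k columns of Phi
    most correlated with the residual, re-fitting by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
n, m, k = 128, 48, 4                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                          # 48 measurements instead of 128 samples
x_hat = omp(Phi, y, k)
```

The energy argument in the abstract follows from the encoder side: the sensor only computes the cheap matrix product y = Φx, while the expensive sparse recovery runs on the receiver.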
Reconfigurable Hardware for Compressing Hyperspectral Image Data
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua
2010-01-01
High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry- standard, commercially available FPGAs. The implementation targets the Xilinx Virtex II pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of offchip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of
Mass spectrometry cancer data classification using wavelets and genetic algorithm.
Nguyen, Thanh; Nahavandi, Saeid; Creighton, Douglas; Khosravi, Abbas
2015-12-21
This paper introduces a hybrid feature extraction method applied to mass spectrometry (MS) data for cancer classification. Haar wavelets are employed to transform MS data into orthogonal wavelet coefficients. The most prominent discriminant wavelets are then selected by genetic algorithm (GA) to form feature sets. The combination of wavelets and GA yields highly distinct feature sets that serve as inputs to classification algorithms. Experimental results show the robustness and significant dominance of the wavelet-GA against competitive methods. The proposed method therefore can be applied to cancer classification models that are useful as real clinical decision support systems for medical practitioners. PMID:26611346
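A toy version of the wavelet-plus-GA pipeline: Haar coefficients serve as candidate features, and a small genetic algorithm selects the coefficient subset maximizing the mean Fisher ratio between two synthetic classes. The GA operators, fitness function, and data are my own simplifications, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar(x):
    """Full Haar transform (length must be a power of two)."""
    a, out = np.asarray(x, dtype=float), []
    while len(a) > 1:
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        out.append(d)
    return np.concatenate([a] + out[::-1])

def fisher(mask, X0, X1):
    """Fitness: mean Fisher ratio of the selected coefficients."""
    if mask.sum() == 0:
        return 0.0
    m0, m1 = X0[:, mask].mean(0), X1[:, mask].mean(0)
    v = X0[:, mask].var(0) + X1[:, mask].var(0) + 1e-9
    return float((((m0 - m1) ** 2) / v).mean())

def ga_select(X0, X1, pop=30, gens=40, p_mut=0.02):
    """Tiny GA: keep the fittest half, breed by one-point crossover,
    flip bits with probability p_mut."""
    n = X0.shape[1]
    population = rng.random((pop, n)) < 0.1
    for _ in range(gens):
        fit = np.array([fisher(ind, X0, X1) for ind in population])
        parents = population[np.argsort(fit)[::-1][: pop // 2]]
        children = parents.copy()
        for i, c in enumerate(rng.integers(1, n, size=len(children))):
            children[i, c:] = parents[rng.integers(len(parents)), c:]
        children ^= rng.random(children.shape) < p_mut
        population = np.vstack([parents, children])
    fit = np.array([fisher(ind, X0, X1) for ind in population])
    return population[int(np.argmax(fit))]

# toy "spectra": class 1 carries an extra bump, visible in a few coefficients
base = np.sin(np.linspace(0, 6, 64))
bump = np.exp(-((np.arange(64) - 20) ** 2) / 8)
X0 = np.array([haar(base + 0.05 * rng.standard_normal(64)) for _ in range(40)])
X1 = np.array([haar(base + bump + 0.05 * rng.standard_normal(64)) for _ in range(40)])
best = ga_select(X0, X1)
```

The mean (rather than sum) of Fisher ratios is used deliberately: a summed ratio is maximized trivially by selecting everything, while the mean rewards compact, discriminative subsets.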
Wavelet-based detection of transients in biological signals
NASA Astrophysics Data System (ADS)
Mzaik, Tahsin; Jagadeesh, Jogikal M.
1994-10-01
This paper presents two multiresolution algorithms for detection and separation of mixed signals using the wavelet transform. The first algorithm allows one to design a mother wavelet and its associated wavelet grid that guarantees the separation of signal components if information about the expected minimum signal time and frequency separation of the individual components is known. The second algorithm expands this idea to design two mother wavelets which are then combined to achieve the required separation otherwise impossible with a single wavelet. Potential applications include many biological signals such as ECG, EKG, and retinal signals.
Parallel object-oriented, denoising system using wavelet multiresolution analysis
Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.
2005-04-12
The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements among the processors are determined according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
EEG analysis using wavelet-based information tools.
Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A
2006-06-15
Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizures, triggers a self-organized brain state characterized by both order and maximal complexity. PMID:16675027
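The relative wavelet energies and the wavelet (Shannon) entropy reviewed above can be computed as follows, here with a Haar decomposition standing in for the authors' wavelet. A narrowband signal concentrates its energy in few bands (low entropy) while broadband noise spreads it across bands (high entropy).

```python
import numpy as np

def relative_wavelet_energies(x, levels=4):
    """Energy fraction p_j of each Haar resolution level (plus the
    final approximation), normalized to sum to one."""
    a, energies = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    p = np.array(energies)
    return p / p.sum()

def wavelet_entropy(p):
    """Shannon entropy of the relative energies: 0 when all energy sits
    in one band, log(len(p)) when evenly spread."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(4)
noise = rng.standard_normal(1024)                # broadband: high entropy
tone = np.sin(2 * np.pi * np.arange(1024) / 4)   # narrowband: low entropy
p_noise = relative_wavelet_energies(noise)
p_tone = relative_wavelet_energies(tone)
```

In the EEG application, a drop in this entropy during seizure development quantifies the signal becoming more ordered (energy recruiting into fewer bands).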
A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring
NASA Astrophysics Data System (ADS)
Bin, Zheng; Qingfeng, Meng; Nan, Wang; Zhi, Li
2011-07-01
The energy consumption of wireless sensor networks (WSNs) is always an important problem in the application of wireless sensor networks. This paper proposes a data compression algorithm to reduce the amount of data and the energy consumed during the data transmission process in an on-line WSNs-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, zerotree coding and Huffman coding. The 5/3 lifting wavelet is used to divide the data into different frequency bands to extract signal characteristics. Zerotree coding is applied to calculate dynamic thresholds to retain the attribute data. The attribute data are then encoded by Huffman coding to further enhance the compression ratio. In order to validate the algorithm, simulation is carried out using Matlab. The simulation result shows that the proposed algorithm is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in an online WSNs-based bearing monitoring system, in which a TI DSP TMS320F2812 is used to realize the algorithm.
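The 5/3 lifting wavelet mentioned above is the reversible LeGall filter used in JPEG 2000; one level can be written as two integer lifting steps (predict, then update) that reconstruct exactly. The symmetric boundary handling below is one common choice.

```python
import numpy as np

def lift53_forward(x):
    """One level of the integer 5/3 (LeGall) lifting transform;
    returns (approximation, detail). Length must be even."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict step: detail = odd - floor((left + right neighbors) / 2)
    right = np.append(even[1:], even[-1])
    d = odd - (even + right) // 2
    # update step: approx = even + floor((d_left + d_right + 2) / 4)
    left = np.insert(d[:-1], 0, d[0])
    a = even + (left + d + 2) // 4
    return a, d

def lift53_inverse(a, d):
    """Undo the lifting steps in reverse order."""
    left = np.insert(d[:-1], 0, d[0])
    even = a - (left + d + 2) // 4
    right = np.append(even[1:], even[-1])
    odd = d + (even + right) // 2
    out = np.empty(2 * len(a), dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([3, 7, 1, 8, 2, 9, 4, 6], dtype=np.int64)
a, d = lift53_forward(x)
assert np.array_equal(lift53_inverse(a, d), x)  # perfect integer reconstruction
```

Because every step is an integer operation undone exactly by its inverse, the transform is lossless, which is why it suits low-power DSP implementations like the one described.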
NASA Astrophysics Data System (ADS)
Arvind, Pratul
2012-11-01
The ability to identify and classify all ten types of faults in a distribution system is an important task for protection engineers. Unlike transmission systems, distribution systems have a complex configuration and are subjected to frequent faults. In the present work, an algorithm has been developed for identifying all ten types of faults in a distribution system by collecting current samples at the substation end. The samples are subjected to the wavelet packet transform and an artificial neural network in order to yield better classification results. A comparison of results between the wavelet transform and the wavelet packet transform is also presented, justifying that the features extracted from the wavelet packet transform yield promising results. It should also be noted that the current samples are collected after simulating a 25 kV distribution system in PSCAD software.
Quantum dynamics and electronic spectroscopy within the framework of wavelets
NASA Astrophysics Data System (ADS)
Toutounji, Mohamad
2013-03-01
This paper serves as a first-time report on formulating important aspects of electronic spectroscopy and quantum dynamics in condensed harmonic systems using the framework of wavelets, and a stepping stone to our future work on developing anharmonic wavelets. The Morlet wavelet is taken to be the mother wavelet for the initial state of the system of interest. This work reports daughter wavelets that may be used to study spectroscopy and dynamics of harmonic systems. These wavelets are shown to arise naturally upon optical electronic transition of the system of interest. The natural birth of basis (daughter) wavelets emerging on exciting an electronic two-level system coupled, both linearly and quadratically, to harmonic phonons is discussed. It is shown that this takes place through using the unitary dilation and translation operators, which happen to be part of the time evolution operator of the final electronic state. The corresponding optical autocorrelation function and linear absorption spectra are calculated to test the applicability and correctness of the results presented herein. The link between basis wavelets and the Liouville space generating function is established. An anharmonic mother wavelet is also proposed in the case of anharmonic electron-phonon coupling. A brief description of deriving anharmonic wavelets and the corresponding anharmonic Liouville space generating function is explored. In conclusion, a mother wavelet (be it harmonic or anharmonic) which accounts for Duschinsky mixing is suggested.
The analysis of unsteady wind turbine data using wavelet techniques
Slepski, J.E.; Kirchhoff, R.H.
1995-09-01
Wavelet analysis employs a relatively new technique which decomposes a signal into wavelets of finite length. A wavelet map is generated showing the distribution of signal variance in both the time and frequency domain. The first section of this paper begins with an introduction to wavelet theory, contrasting it to standard Fourier analysis. Some simple applications to the processing of harmonic signals are then given. Since wind turbines operate under unsteady stochastic loads, the time series of most machine parameters are non-stationary; wavelet analysis can be applied to this problem. In the second section of this paper, wavelet methods are used to examine data from Phase 2 of the NREL Combined Experiment. Data analyzed includes airfoil surface pressure, and low speed shaft torque. In each case the wavelet map offers valuable insight that could not be made without it.
Research of Gear Fault Detection in Morphological Wavelet Domain
NASA Astrophysics Data System (ADS)
Hong, Shi; Fang-jian, Shan; Bo, Cong; Wei, Qiu
2016-02-01
For extracting mutation information from gear fault signals and achieving a valid fault diagnosis, a gear fault diagnosis method based on the morphological mean wavelet transform was designed. The morphological mean wavelet transform is a linear wavelet in the framework of morphological wavelets. Decomposing the gear fault signal with this transform produces signal synthesis operators and detail synthesis operators. The signal synthesis operators stay close to the original signal, while the detail synthesis operators contain the fault impact signal or interference signal, which can thus be captured. The simulation results indicate that, compared with the Fourier transform, the morphological mean wavelet transform can perform time-frequency analysis of the original signal and effectively locate where the impact signal appears; and compared with the traditional linear wavelet transform, it has a simple structure, is easy to implement, is sensitive to local signal extrema and has high denoising ability, so it is better adapted to real-time gear fault detection.
Best tree wavelet packet transform based copyright protection scheme for digital images
NASA Astrophysics Data System (ADS)
Rawat, Sanjay; Raman, Balasubramanian
2012-05-01
In this paper, a dual watermarking scheme based on the discrete wavelet transform (DWT), the wavelet packet transform (WPT) with best tree, and singular value decomposition (SVD) is proposed. In our algorithm, the cover image is sub-sampled into four sub-images, and the two sub-images having the highest sum of singular values are selected. Two different grayscale images are embedded in the selected sub-images. To embed the first watermark, one of the selected sub-images is decomposed via the WPT; an entropy-based algorithm is adopted to find the best tree of the WPT, and the watermark is embedded in all frequency sub-bands of the best tree. To embed the second watermark, an l-level DWT is performed on the second selected sub-image, and the watermark is embedded by modifying the singular values of the transformed image. To enhance the security of the scheme, a zig-zag scan is applied to the second watermark before embedding. The robustness of the proposed scheme is demonstrated through a series of attack simulations. Experimental results demonstrate that the proposed scheme has good perceptual invisibility and is also robust against various image processing operations, geometric attacks, and JPEG compression.
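The zig-zag scan applied to the second watermark can be sketched as below. The exact scan order in the paper is not specified, so this traversal (anti-diagonals in alternating directions) is an assumption; it simply reorders pixels reversibly to decorrelate the watermark before embedding.

```python
def zigzag(m):
    """Zig-zag scan of a square matrix given as a list of rows:
    walk each anti-diagonal, alternating the traversal direction."""
    n = len(m)
    out = []
    for s in range(2 * n - 1):                      # s indexes anti-diagonals
        idx = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 1:
            idx.reverse()                           # alternate direction
        out.extend(m[i][j] for i, j in idx)
    return out
```

Because the scan is a fixed permutation, the extractor inverts it by writing recovered values back in the same order.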
A real-time wavelet-based video decoder using SIMD technology
NASA Astrophysics Data System (ADS)
Klepko, Robert; Wang, Demin
2008-02-01
This paper presents a fast implementation of a wavelet-based video codec. The codec consists of motion-compensated temporal filtering (MCTF), a 2-D spatial wavelet transform, and SPIHT for wavelet coefficient coding. It offers compression efficiency that is competitive with H.264. The codec is implemented in software running on a general-purpose PC, using the C programming language and streaming SIMD extensions intrinsics, without assembly language. This high-level software implementation allows the codec to be portable to other general-purpose computing platforms. Testing with a Pentium 4 HT at 3.6 GHz (running under Linux and using the GCC compiler, version 4) shows that the software decoder is able to decode 4CIF video in real time, over 2 times faster than software written only in the C language. This paper describes the structure of the codec, the fast algorithms chosen for the most computationally intensive elements of the codec, and the use of SIMD to implement these algorithms.
Shape L’Âne Rouge: Sliding Wavelets for Indexing and Retrieval
Peter, Adrian; Rangarajan, Anand; Ho, Jeffrey
2010-01-01
Shape representation and retrieval of stored shape models are becoming increasingly more prominent in fields such as medical imaging, molecular biology and remote sensing. We present a novel framework that directly addresses the necessity for a rich and compressible shape representation, while simultaneously providing an accurate method to index stored shapes. The core idea is to represent point-set shapes as the square root of probability densities expanded in a wavelet basis. We then use this representation to develop a natural similarity metric that respects the geometry of these probability distributions, i.e. under the wavelet expansion, densities are points on a unit hypersphere and the distance between densities is given by the separating arc length. The process uses a linear assignment solver for non-rigid alignment between densities prior to matching; this has the connotation of “sliding” wavelet coefficients akin to the sliding block puzzle L’Âne Rouge. We illustrate the utility of this framework by matching shapes from the MPEG-7 data set and provide comparisons to other similarity measures, such as Euclidean distance shape distributions. PMID:20717478
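The similarity metric above reduces to a simple computation: because each density integrates to one, its square root has unit L2 norm, so the wavelet coefficient vectors lie on a unit hypersphere and the geodesic distance is the angle between them. A minimal sketch (the function name is ours):

```python
import math

def sphere_distance(c1, c2):
    """Arc-length (geodesic) distance between two unit-norm wavelet
    coefficient vectors of square-root densities: the angle whose cosine
    is their inner product, clamped for numerical safety."""
    dot = sum(a * b for a, b in zip(c1, c2))
    return math.acos(max(-1.0, min(1.0, dot)))
```

Identical densities give distance 0; orthogonal coefficient vectors give the maximal separating arc of pi/2 on the positive orthant.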
Understanding wavelet analysis and filters for engineering applications
NASA Astrophysics Data System (ADS)
Parameswariah, Chethan Bangalore
Wavelets are signal-processing tools that have attracted interest due to their characteristics and properties. A clear understanding of wavelets and their properties is key to successful applications. Many theoretical and application-oriented papers have been written, yet the choice of the right wavelet for a given application is an ongoing quest that has not been satisfactorily answered. This research has successfully identified certain issues, and an effort has been made to provide an understanding of wavelets by studying the wavelet filters in terms of their pole-zero and magnitude-phase characteristics. The magnitude characteristics of these filters have flat responses in both the pass band and the stop band. The phase characteristics are almost linear. It is interesting to observe that some wavelets have exactly the same magnitude characteristics but their phase responses vary in their linear slopes. An application of wavelets to fast detection of the fault current in a transformer, and to distinguishing it from the inrush current, clearly shows the advantage of the lower slope and fewer coefficients of the Daubechies wavelet D4 over D20. This research has been published in the IEEE Transactions on Power Systems and is also proposed as an innovative method for protective relaying techniques. For detecting the frequency composition of the signal being analyzed, an understanding of the energy distribution in the output wavelet decompositions is presented for different wavelet families. Wavelets with fewer coefficients in their filters have more energy leakage into adjacent bands. The frequency bandwidth characteristics display flatness in the middle of the pass band, confirming that the frequency of interest should be in the middle of the frequency band when performing a wavelet transform. Symlets exhibit good flatness with minimal ripple, but their transition regions do not have a sharp cutoff. The number of wavelet levels and their frequency ranges are dependent on the two
Turbulence in Compressible Flows
NASA Technical Reports Server (NTRS)
1997-01-01
Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.
Parallel adaptive wavelet collocation method for PDEs
Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.
2015-10-01
A parallel adaptive wavelet collocation method for solving a large class of partial differential equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure, with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between processes. This allows fully automated and efficient handling of non-simply connected partitionings of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048^3, using as many as 2048 CPU cores.
Information retrieval system utilizing wavelet transform
Brewster, Mary E.; Miller, Nancy E.
2000-01-01
A method for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.
Nonlinear adaptive wavelet analysis of electrocardiogram signals
NASA Astrophysics Data System (ADS)
Yang, H.; Bukkapatnam, S. T.; Komanduri, R.
2007-08-01
Wavelet representation can provide an effective time-frequency analysis for nonstationary signals, such as electrocardiogram (EKG) signals, which contain both steady and transient parts. In recent years, wavelet representation has been emerging as a powerful time-frequency tool for the analysis and measurement of EKG signals. EKG signals contain recurring, near-periodic patterns of P, QRS, T, and U waveforms, each of which can have multiple manifestations. Identification and extraction of a compact set of features from these patterns is critical for effective detection and diagnosis of various disorders. This paper presents an approach to extract a fiducial pattern of EKG based on consideration of the underlying nonlinear dynamics. The pattern, in a nutshell, is a combination of eigenfunctions of the ensembles created from a Poincaré section of the EKG dynamics. The adaptation of wavelet functions to the fiducial pattern thus extracted yields a representation that is two orders of magnitude (some 95%) more compact, measured in terms of Shannon signal entropy. Such a compact representation can facilitate the extraction of features that are less sensitive to extraneous noise and other variations. The adaptive wavelet can also lead to more efficient algorithms for beat detection and QRS cancellation, as well as for the extraction of multiple classical EKG signal events, such as widths of QRS complexes and QT intervals.
Wavelet based image quality self measurements
NASA Astrophysics Data System (ADS)
Al-Jawad, Naseer; Jassim, Sabah
2010-04-01
Noise is generally considered a degradation in image quality, and image quality is often judged by the appearance and clarity of image edges. The performance of most applications is affected by image quality and by the level of the various types of degradation. Measuring image quality and identifying the type of noise or degradation is therefore a key factor in raising application performance, but the task can be very challenging. The wavelet transform is nowadays widely used in many applications, which mostly benefit from its localisation in the frequency domain. The coefficients of the high-frequency sub-bands in the wavelet domain are well represented by a Laplace histogram. In this paper we propose using the Laplace distribution histogram to measure image quality and also to identify the type of degradation affecting a given image. Image quality and the level of degradation are usually measured against a reference image of reasonable quality; the Laplace distribution histogram discussed here instead provides a self-testing measurement of image quality. The measurement is based on constructing the theoretical Laplace distribution histogram of a high-frequency wavelet sub-band from the sub-band's actual standard deviation, and then comparing it with the empirical Laplace distribution histogram. The comparison is performed using the histogram intersection method. All experiments are performed on the extended Yale database.
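The self-measurement idea can be sketched as below, under our own simplifications (the Laplace scale is fitted by maximum likelihood as the mean absolute coefficient rather than from the standard deviation, and the binning is fixed); the function names are ours.

```python
import math

def laplace_pdf(x, b):
    """Zero-mean Laplace density with scale b."""
    return math.exp(-abs(x) / b) / (2.0 * b)

def histogram_intersection(h1, h2):
    """Overlap of two histograms with (approximately) unit total mass."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def self_quality_score(coeffs, bins=21, lo=-5.0, hi=5.0):
    """Fit a Laplace scale to high-frequency wavelet coefficients, then
    compare the empirical histogram against the theoretical Laplace
    histogram; values near 1 indicate a clean, Laplace-like sub-band."""
    b = sum(abs(c) for c in coeffs) / len(coeffs)   # ML estimate of scale
    width = (hi - lo) / bins
    emp = [0.0] * bins
    for c in coeffs:
        i = int((c - lo) / width)
        if 0 <= i < bins:
            emp[i] += 1.0 / len(coeffs)
    centers = [lo + (i + 0.5) * width for i in range(bins)]
    theo = [laplace_pdf(x, b) * width for x in centers]
    return histogram_intersection(emp, theo)
```

A degraded image distorts the empirical histogram away from the fitted Laplace shape, lowering the intersection score without needing any reference image.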
Characterization and Simulation of Gunfire with Wavelets
Smallwood, David O.
1999-01-01
Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing-rate gun has been characterized in several ways as a nonstationary random process. The current paper explores a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms, each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero-memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.
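The simulation loop described above can be sketched with a one-level Haar transform standing in for the paper's wavelet transform (an assumption; the actual transform, record lengths, and the second-order PDF correction are omitted here). The function names are ours.

```python
import random

def haar_analysis(x):
    """One-level Haar split: pairwise means followed by half-differences."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return a + d

def haar_synthesis(c):
    """Inverse of haar_analysis."""
    h = len(c) // 2
    out = []
    for a, d in zip(c[:h], c[h:]):
        out.extend([a + d, a - d])
    return out

def simulate_round(ensemble, rng=random):
    """Transform each measured single-round record, estimate per-coefficient
    mean and standard deviation across the ensemble, then draw Gaussian
    coefficients with those statistics and invert the transform."""
    coeffs = [haar_analysis(rec) for rec in ensemble]
    n = len(coeffs[0])
    mean = [sum(c[k] for c in coeffs) / len(coeffs) for k in range(n)]
    std = [(sum((c[k] - mean[k]) ** 2 for c in coeffs) / len(coeffs)) ** 0.5
           for k in range(n)]
    synth = [rng.gauss(mean[k], std[k]) for k in range(n)]
    return haar_synthesis(synth)
```

Repeated calls yield independent synthetic rounds that can be concatenated into a many-round realization.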
Wavelet transforms for detecting microcalcifications in mammograms
Strickland, R.N.; Hahn, H.I.
1996-04-01
Clusters of fine, granular microcalcifications in mammograms may be an early sign of disease. Individual grains are difficult to detect and segment due to size and shape variability and because the background mammogram texture is typically inhomogeneous. The authors develop a two-stage method based on wavelet transforms for detecting and segmenting calcifications. The first stage is based on an undecimated wavelet transform, which is simply the conventional filter bank implementation without downsampling, so that the low-low (LL), low-high (LH), high-low (HL), and high-high (HH) sub-bands remain at full size. Detection takes place in HH and the combination LH + HL. Four octaves are compared with two inter-octave voices for finer scale resolution. By appropriate selection of the wavelet basis the detection of microcalcifications in the relevant size range can be nearly optimized. In fact, the filters which transform the input image into HH and LH + HL are closely related to prewhitening matched filters for detecting Gaussian objects (idealized microcalcifications) in two common forms of Markov (background) noise. The second stage is designed to overcome the limitations of the simplistic Gaussian assumption and provides an accurate segmentation of calcification boundaries. Detected pixel sites in HH and LH + HL are dilated then weighted before computing the inverse wavelet transform. Individual microcalcifications are greatly enhanced in the output image, to the point where straightforward thresholding can be applied to segment them. FROC curves are computed from tests using a freely distributed database of digitized mammograms.
Fan, Hong-Yi; Lu, Hai-Liang
2006-02-01
The admissibility condition of a mother wavelet is explored in the context of quantum optics theory. By virtue of Dirac's representation theory and the coherent state property we derive a general formula for finding qualified mother wavelets. A comparison between a wavelet transform computed with the newly found mother wavelet and one computed with a Mexican hat wavelet is presented. PMID:16480224
Classification of mammographic microcalcifications using wavelets
NASA Astrophysics Data System (ADS)
Chitre, Yateen S.; Dhawan, Atam P.; Moskowitz, Myron; Sarwal, Alok; Bonasso, Christine; Narayan, Suresh B.
1995-05-01
Breast cancer is the leading cause of death among women. Breast cancer can be detected earlier by mammography than by any other non-invasive examination. About 30% to 50% of breast cancers demonstrate tiny granule-like deposits of calcium called microcalcifications. It is difficult to distinguish between benign and malignant cases based on an examination of calcification regions, especially in hard-to-diagnose cases. We investigate the potential of using energy and entropy features computed from wavelet packets for their correlation with malignancy. Two types of Daubechies discrete filters were used as prototype wavelets. The energy and entropy features were computed for 128 benign and 63 malignant cases and analyzed using a multivariate cluster analysis and a univariate statistical analysis to reduce the feature set to a 'five best' set of features. The efficacy of the reduced feature set to discriminate between the malignant and benign categories was evaluated using different multilayer perceptron architectures. The multilayer perceptron was trained using the backpropagation algorithm for various training and test set sizes. For each case 40 partitions of the data set were used to set up the training and test sets. The performance of the features was evaluated by computing the best area under the relative operating characteristic (ROC) curve and the average area under the ROC curve. The performance of the features computed from the wavelet packets was compared to a second set of features consisting of the wavelet packet features, image structure features and cluster features. The classification results are encouraging and indicate the potential of using features derived from wavelet packets in discriminating microcalcification regions into benign and malignant categories.
A wavelet-based neural model to optimize and read out a temporal population code
Luvizotto, Andre; Rennó-Costa, César; Verschure, Paul F. M. J.
2012-01-01
It has been proposed that the dense excitatory local connectivity of the neo-cortex plays a specific role in the transformation of spatial stimulus information into a temporal representation or a temporal population code (TPC). TPC provides for a rapid, robust, and high-capacity encoding of salient stimulus features with respect to position, rotation, and distortion. The TPC hypothesis gives a functional interpretation to a core feature of the cortical anatomy: its dense local and sparse long-range connectivity. Thus far, the question of how the TPC encoding can be decoded in downstream areas has not been addressed. Here, we present a neural circuit that decodes the spectral properties of the TPC using a biologically plausible implementation of a Haar transform. We perform a systematic investigation of our model in a recognition task using a standardized stimulus set. We consider alternative implementations using either regular spiking or bursting neurons and a range of spectral bands. Our results show that our wavelet readout circuit provides for the robust decoding of the TPC and further compresses the code without losing speed or quality of decoding. We show that in the TPC signal the relevant stimulus information is present in the frequencies around 100 Hz. Our results show that the TPC is constructed around a small number of coding components that can be well decoded by wavelet coefficients in a neuronal implementation. The solution to the TPC decoding problem proposed here suggests that cortical processing streams might well consist of sequential operations in which spatio-temporal transformations at lower levels form a compact stimulus encoding (the TPC) that is subsequently decoded back to a spatial representation using wavelet transforms. In addition, the results presented here show that different properties of the stimulus might be transmitted to further processing stages using different frequency components that are captured by appropriately tuned
Implementation of Compressed Sensing in Telecardiology Sensor Networks
Correia Pinheiro, Eduardo; Postolache, Octavian Adrian; Silva Girão, Pedro
2010-01-01
Mobile solutions for patient cardiac monitoring are viewed with growing interest, and improvements on current implementations are frequently reported, with wireless, and in particular wearable, devices promising to achieve ubiquity. However, due to unavoidable power consumption limitations, the amount of data acquired, processed, and transmitted needs to be diminished, which is counterproductive with regard to the quality of the information produced. Implementing compressed sensing in wireless sensor networks (WSNs) promises power savings for the devices with only a minor impact on signal quality. Several cardiac signals have a sparse representation in some wavelet transforms. The compressed sensing paradigm states that signals can be recovered from a few projections onto another basis, incoherent with the first. This paper evaluates the impact of the compressed sensing paradigm in a cardiac monitoring WSN, discussing the implications for data reliability and energy management, and the improvements accomplished by in-network processing. PMID:20885973
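Recovery from a few incoherent projections is usually posed as convex optimization; as a much simpler illustrative stand-in, a greedy matching-pursuit decoder against a known dictionary can be sketched as follows (the dictionary, sizes, and names here are our own assumptions, not the paper's reconstruction method).

```python
import math

def matching_pursuit(y, D, iters=3):
    """Greedy sparse recovery: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its projection.
    y is the measurement vector; D is an m x n matrix (list of rows)
    whose columns are the atoms."""
    m, n = len(D), len(D[0])
    cols = [[D[i][j] for i in range(m)] for j in range(n)]
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    x = [0.0] * n
    r = list(y)                                   # residual
    for _ in range(iters):
        corr = [sum(r[i] * cols[j][i] for i in range(m)) / norms[j]
                for j in range(n)]                # normalized correlations
        j = max(range(n), key=lambda k: abs(corr[k]))
        a = corr[j] / norms[j]                    # projection coefficient
        x[j] += a
        r = [r[i] - a * cols[j][i] for i in range(m)]
    return x
```

For a sensor node the appeal is that only the cheap random projections run on the device; this (comparatively expensive) decoding runs at the sink.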
Hyperspectral trace gas detection using the wavelet packet transform
NASA Astrophysics Data System (ADS)
Salvador, Mark Z.; Resmini, Ronald G.; Gomez, Richard B.
2008-04-01
A method for trace gas detection in hyperspectral data is demonstrated using the wavelet packet transform. This new method, the Wavelet Packet Subspace (WPS), applies the wavelet packet transform and selects a best basis for pattern matching. The wavelet packet transform is an extension of the wavelet transform that fully decomposes a signal into a library of wavelet packet bases. Applying the wavelet packet transform to hyperspectral data for the detection of trace gases takes advantage of the wavelet transform's ability to locate spectral features in both scale and location. By analyzing the wavelet packet tree of a specific gas, nodes of the tree are selected which represent an orthogonal best basis. The best basis represents the significant spectral features of that gas and is then used to identify pixels in the scene using existing matching algorithms such as the spectral angle or matched filter. Using data from the Airborne Hyperspectral Imager (AHI), this method is compared to traditional matched filter detection methods. Initial results demonstrate a promising wavelet packet subspace technique for hyperspectral trace gas detection applications.
Multimode waveguide speckle patterns for compressive sensing.
Valley, George C; Sefler, George A; Justin Shaw, T
2016-06-01
Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performances with smaller size, weight, and power than electronic CS or conventional Nyquist rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS. PMID:27244406
Robust facial expression recognition via compressive sensing.
Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng
2012-01-01
Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method is investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, Gabor wavelets representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion on robust facial expression recognition tasks. PMID:22737035
Image compression using the W-transform
Reynolds, W.D. Jr.
1995-12-31
The authors present the W-transform for multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to a power of two, and it does not call for extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.
A symmetrical image encryption scheme in wavelet and time domain
NASA Astrophysics Data System (ADS)
Luo, Yuling; Du, Minghui; Liu, Junxiu
2015-02-01
There has been increasing concern about effective storage and secure transmission of multimedia information over the Internet. A great variety of encryption schemes have been proposed to ensure information security during transmission, but most current approaches diffuse the data only in the spatial domain, which reduces storage efficiency. A lightweight chaos-based image encryption strategy is proposed in this paper, with the encryption process designed in the transform domain. The original image is decomposed into approximation and detail components using the integer wavelet transform (IWT); then the approximation coefficients, as the more important component of the image, are diffused by secret keys generated from a spatiotemporal chaotic system, followed by the inverse IWT to construct the diffused image; finally, a plain permutation driven by the Logistic map is performed on the diffused image to further reduce the correlation between adjacent pixels. Experimental results and performance analysis demonstrate that the proposed scheme is an efficient, secure and robust encryption mechanism that also realizes effective coding compression to satisfy storage requirements.
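The diffusion step can be sketched with a plain logistic map generating a keystream that is XOR-ed with integer wavelet coefficients. This is a deliberate simplification: the paper uses a spatiotemporal chaotic system and a separate permutation stage, and the key values below are arbitrary examples.

```python
def logistic_keystream(x0, r, n):
    """Byte keystream from the logistic map x -> r*x*(1-x); the chaotic
    orbit is quantised to 0..255 (illustrative key derivation)."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

def diffuse(coeffs, key=(0.3, 3.99)):
    """XOR integer coefficients with the chaotic keystream. Applying the
    same function twice with the same key recovers the original values."""
    ks = logistic_keystream(key[0], key[1], len(coeffs))
    return [c ^ k for c, k in zip(coeffs, ks)]
```

Because XOR is an involution, encryption and decryption share the same code path, which keeps the scheme lightweight.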
Wavelet-enabled progressive data Access and Storage Protocol (WASP)
NASA Astrophysics Data System (ADS)
Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.
2015-12-01
Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.
Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification
NASA Astrophysics Data System (ADS)
Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.
1995-04-01
In this paper, we present (1) a design concept for a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear the less blurred prints. The de-smearing algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology that uses a 2D STAR mother wavelet to efficiently locate the fork feature anywhere on a fingerprint in parallel, independent of its scale, shift, and rotation. Such a combined system can achieve high data compression for transmission through a binary facsimile machine which, when combined with a tabletop computer, can realize automatic fingerprint identification systems (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given on how to reduce the crime rate by upgrading today's police office technology in light of military expertise in ATR.
Feature selection using Haar wavelet power spectrum
Subramani, Prabakaran; Sahu, Rajendra; Verma, Shekhar
2006-01-01
Background Feature selection is an approach to overcoming the 'curse of dimensionality' in complex research areas such as disease classification using microarrays. Statistical methods are the most utilized in this domain, but most of them do not fit a wide range of datasets. Transform-oriented signal processing has not been explored much here, even though fields such as image and video processing utilize it well. Wavelets, one such technique, have the potential to be utilized in feature selection methods. The aim of this paper is to assess the capability of the Haar wavelet power spectrum for the problems of clustering and gene selection based on expression data, in the context of disease classification, and to propose a method based on the Haar wavelet power spectrum. Results The Haar wavelet power spectra of genes were analysed and observed to differ across diagnostic categories. This difference in the trend and magnitude of the spectrum may be utilized in gene selection. Most of the genes selected by earlier, more complex methods were also selected by the very simple present method. Earlier works have shown that only a few genes are quite enough to approach the classification problem [1]; hence the present method may be tried in conjunction with other classification methods. The technique was applied without removing noise from the data, to validate the robustness of the method against noise and outliers. No special software or complex implementation is needed. The quality of the genes selected by the present method was analysed through their gene expression data; most were observed to be relevant to the classification issue, since they were dominant in the diagnostic category of the dataset for which they were selected as features. Conclusion In the present paper, the problem of feature selection for microarray gene expression data was considered. We analyzed the wavelet power spectrum of genes and proposed a clustering and feature selection method useful for
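The Haar wavelet power spectrum of an expression profile can be computed as the per-scale energy of the detail coefficients; a minimal sketch for power-of-two lengths (our own simplification of the feature, with names of our choosing):

```python
def haar_power_spectrum(x):
    """Per-level energy of Haar detail coefficients, finest scale first.
    At each level the signal is split into pairwise means (carried to the
    next level) and half-differences (whose energy is recorded)."""
    spectrum = []
    while len(x) > 1:
        a = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
        d = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
        spectrum.append(sum(v * v for v in d))    # energy at this scale
        x = a
    return spectrum
```

Genes whose spectra differ consistently in trend or magnitude between diagnostic categories are the candidates the selection method keeps.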
Fabisch, Alexander; Kassahun, Yohannes; Wöhrle, Hendrik; Kirchner, Frank
2013-06-01
We examine two methods which are used to deal with complex machine learning problems: compressed sensing and model compression. We discuss both methods in the context of feed-forward artificial neural networks and develop the backpropagation method in compressed parameter space. We further show that compressing the weights of a layer of a multilayer perceptron is equivalent to compressing the input of the layer. Based on this theoretical framework, we will use orthogonal functions and especially random projections for compression and perform experiments in supervised and reinforcement learning to demonstrate that the presented methods reduce training time significantly. PMID:23501172
Adaptive wavelets for visual object detection and classification
NASA Astrophysics Data System (ADS)
Aghdasi, Farzin
1997-10-01
We investigate the application of adaptive wavelets for the representation and classification of signals in digitized speech and medical images. A class of wavelet basis functions is used to extract features from the regions of interest. These features are then used in an artificial neural network to classify each region as containing the desired object or belonging to the background clutter. The dilation and shift parameters of the wavelet functions are not fixed; these parameters are included in the training scheme. In this way the wavelets adapt to the expected shape and size of the signals. The results indicate that adaptive wavelet functions may outperform classical fixed wavelet analysis in the detection of subtle objects.
Analysis of scanning probe microscope images using wavelets.
Gackenheimer, C; Cayon, L; Reifenberger, R
2006-03-01
The utility of wavelet transforms for analysis of scanning probe images is investigated. Simulated scanning probe images are analyzed using wavelet transforms and compared to a parallel analysis using more conventional Fourier transform techniques. The wavelet method introduced in this paper is particularly useful as an image recognition algorithm to enhance nanoscale objects of a specific scale that may be present in scanning probe images. In its present form, the applied wavelet is optimal for detecting objects with rotational symmetry. The wavelet scheme is applied to the analysis of scanning probe data to better illustrate the advantages that this new analysis tool offers. The wavelet algorithm developed for analysis of scanning probe microscope (SPM) images has been incorporated into the WSxM software which is a versatile freeware SPM analysis package. PMID:16439061
Comparative study of wavelet denoising in myoelectric control applications.
Sharma, Tanu; Veer, Karan
2016-04-01
Here, wavelet analysis is investigated as a means of improving the quality of the myoelectric signal before its use in prosthetic design. Effective surface electromyogram (SEMG) signals were estimated by first decomposing the acquired signal using the wavelet transform and then processing the decomposed coefficients with threshold methods. With an appropriate choice of wavelet, interference noise in the SEMG signal can be reduced effectively. The most effective wavelet for SEMG denoising was chosen by calculating the root mean square value and signal power. The combined results of root mean square value and signal power show that wavelet db4 performs the best denoising among the wavelets considered. Furthermore, time-domain and frequency-domain methods were applied to SEMG signal analysis to investigate the effect of muscle-force contraction on the signal. It was found that, during sustained contractions, the mean frequency (MNF) and median frequency (MDF) increase as muscle force levels increase. PMID:26887581
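The decompose-threshold-reconstruct pipeline described here can be sketched with a hand-rolled orthonormal Haar transform and soft thresholding; db4 and the paper's actual threshold selection are replaced by assumed values (threshold 0.5, a synthetic signal and noise level), so this shows only the shape of the method.

```python
import numpy as np

def haar_decompose(x, levels):
    # orthonormal Haar analysis: returns coarse part and per-level details
    details = []
    for _ in range(levels):
        s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
        details.append(d)
        x = s
    return x, details

def haar_reconstruct(s, details):
    # exact inverse of haar_decompose
    for d in reversed(details):
        out = np.empty(2 * s.size)
        out[0::2] = (s + d) / np.sqrt(2.0)
        out[1::2] = (s - d) / np.sqrt(2.0)
        s = out
    return s

def soft_threshold(d, t):
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * np.arange(256) / 128)      # stand-in "signal"
noisy = clean + 0.3 * rng.standard_normal(256)        # assumed noise level
s, det = haar_decompose(noisy, levels=4)
denoised = haar_reconstruct(s, [soft_threshold(d, 0.5) for d in det])
rmse = lambda a: float(np.sqrt(np.mean((a - clean) ** 2)))
print(rmse(noisy), rmse(denoised))   # thresholding lowers the RMS error
```

The same RMS-error comparison is what the abstract uses to rank candidate wavelets.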
An optimized hybrid encode based compression algorithm for hyperspectral image
NASA Astrophysics Data System (ADS)
Wang, Cheng; Miao, Zhuang; Feng, Weiyi; He, Weiji; Chen, Qian; Gu, Guohua
2013-12-01
Compression is a key procedure in hyperspectral image processing because of the massive data volumes, which create great difficulty in data storage and transmission. In this paper, a novel hyperspectral compression algorithm based on hybrid encoding is proposed, combining band-optimized grouping with the wavelet transform. Given the correlation coefficients between adjacent spectral bands, an optimized band grouping and reference-frame selection method is first used to group bands adaptively. Then, according to the number of bands in each group, redundancy in the spatial and spectral domains is removed through spatial-domain entropy coding and a minimum-residual-based linear prediction method. Embedded code streams are then obtained by encoding the residual images using an improved embedded-zerotree-wavelet-based SPIHT method. In the experiments, hyperspectral images collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were used to validate the performance of the proposed algorithm. The results show that the proposed approach achieves good reconstructed image quality and computational complexity. The average peak signal-to-noise ratio (PSNR) is increased by 0.21~0.81 dB compared with other off-the-shelf algorithms at the same compression ratio.
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
Wavelets in the solution of nongray radiative heat transfer equation
Bayazitoglu, Y.; Wang, B.Y.
1996-12-31
The wavelet basis functions are introduced into the radiative transfer equation in the frequency domain. The intensity of radiation is expanded in terms of Daubechies' wrapped-around wavelet functions. It is shown that the wavelet basis approach to modeling nongrayness can be incorporated into any solution method for the equation of transfer. In this paper the resulting system of equations is solved for the one-dimensional radiative equilibrium problem using the P-N approximation.
NASA Astrophysics Data System (ADS)
Futatani, Shimpei; Bos, Wouter J. T.; del-Castillo-Negrete, Diego; Schneider, Kai; Benkadda, Sadruddin; Farge, Marie
2011-03-01
We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part which is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher order statistics.
NASA Astrophysics Data System (ADS)
Bakhouche, A.; Doghmane, N.
2008-06-01
In this paper, a new adaptive watermarking algorithm for still images based on the wavelet transform is proposed. The two major applications of watermarking are protecting copyrights and authenticating photographs. Our robust watermarking [3][22] is used for copyright protection. The main reason for protecting copyrights is to prevent image piracy when the provider distributes the image on the Internet. Embedding the watermark in a low-frequency band is most resistant to JPEG compression, blurring, additive Gaussian noise, rescaling, rotation, cropping and sharpening, whereas embedding in a high-frequency band is most resistant to histogram equalization, intensity adjustment and gamma correction. In this paper, we extend the idea to embed the same watermark in two bands (LL and HH, or LH and HL) at the second level of the Discrete Wavelet Transform (DWT) decomposition. Our generalization includes all four bands (LL, HL, LH, and HH), modifying the coefficients of all four bands to compromise between an acceptable imperceptibility level and resistance to attacks.
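A minimal sketch of DWT-domain embedding may help fix ideas. For brevity it uses a single-level Haar transform and embeds only in the HH band (the paper works at the second decomposition level and in band pairs), and the strength `alpha` is an assumed parameter.

```python
import numpy as np

def haar2(img):
    # one level of the 2-D orthonormal Haar transform -> (LL, LH, HL, HH)
    a = (img[0::2, :] + img[1::2, :]) / np.sqrt(2.0)
    d = (img[0::2, :] - img[1::2, :]) / np.sqrt(2.0)
    return ((a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0),
            (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0),
            (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2.0),
            (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2.0))

def ihaar2(LL, LH, HL, HH):
    # exact inverse of haar2
    a = np.empty((LL.shape[0], 2 * LL.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2] = (LL + LH) / np.sqrt(2.0)
    a[:, 1::2] = (LL - LH) / np.sqrt(2.0)
    d[:, 0::2] = (HL + HH) / np.sqrt(2.0)
    d[:, 1::2] = (HL - HH) / np.sqrt(2.0)
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2, :] = (a + d) / np.sqrt(2.0)
    img[1::2, :] = (a - d) / np.sqrt(2.0)
    return img

rng = np.random.default_rng(1)
host = rng.uniform(0.0, 255.0, size=(64, 64))   # stand-in host image
wm = rng.choice([-1.0, 1.0], size=(32, 32))     # +/-1 pseudorandom watermark
alpha = 2.0                                     # embedding strength (assumed)

LL, LH, HL, HH = haar2(host)
marked = ihaar2(LL, LH, HL, HH + alpha * wm)    # embed in the HH band

# detection: correlate the difference of HH bands with the known watermark
_, _, _, HH_rx = haar2(marked)
score = float(np.mean((HH_rx - HH) * wm)) / alpha
print(round(score, 3))   # -> 1.0
```

Detection here is informed (it uses the original HH band); the score is ~1 when the watermark is present and ~0 for an unmarked image.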
Variability of Solar Irradiances Using Wavelet Analysis
NASA Technical Reports Server (NTRS)
Pesnell, William D.
2007-01-01
We have used wavelets to analyze the sunspot number, F10.7 (the solar irradiance at a wavelength of approx. 10.7 cm), and Ap (a geomagnetic activity index). Three different wavelets are compared, showing how each trades temporal resolution against scale resolution. Our goal is an envelope of solar activity that better bounds the large-amplitude fluctuations from solar minimum to maximum. We show that the 11-year cycle does not disappear at solar minimum; minimum is simply the other part of the solar cycle. Power in the fluctuations of solar-activity-related indices may peak during solar maximum, but the solar cycle itself is always present. The Ap index has a peak after solar maximum that appears to be better correlated with the current solar cycle than with the following cycle.
Wavelets for full reconfigurable ECG acquisition system
NASA Astrophysics Data System (ADS)
Morales, D. P.; García, A.; Castillo, E.; Meyer-Baese, U.; Palma, A. J.
2011-06-01
This paper presents the use of wavelet cores for a fully reconfigurable electrocardiogram (ECG) acquisition system. The system is composed of two reconfigurable devices, an FPGA and an FPAA. The FPAA is in charge of ECG signal acquisition, since this device is a versatile and reconfigurable analog front-end for biosignals. The FPGA is in charge of FPAA configuration, digital signal processing, and information extraction such as the heart beat rate. Wavelet analysis has become a powerful tool for ECG signal processing since it fits the ECG signal shape well. These cores have been integrated into the LabVIEW FPGA module development tool, which makes it possible to employ VHDL cores within the usual LabVIEW graphical programming environment, thus freeing the designer from the tedious and time-consuming design of communication interfaces. This enables rapid testing and graphical representation of results.
Wavelet Denoising of Mobile Radiation Data
Campbell, D; Lanier, R
2007-10-29
The investigation of wavelet analysis techniques as a means of filtering the gross-count signal obtained from radiation detectors has shown promise. These signals are contaminated with high frequency statistical noise and significantly varying background radiation levels. Wavelet transforms allow a signal to be split into its constituent frequency components without losing relative timing information. Initial simulations and an injection study have been performed. Additionally, acquisition and analysis software has been written which allowed the technique to be evaluated in real-time under more realistic operating conditions. The technique performed well when compared to more traditional triggering techniques with its performance primarily limited by false alarms due to prominent features in the signal. An initial investigation into the potential rejection and classification of these false alarms has also shown promise.
Gabor wavelet associative memory for face recognition.
Zhang, Haihong; Zhang, Bailing; Huang, Weimin; Tian, Qi
2005-01-01
This letter describes a high-performance face recognition system that combines two recently proposed neural network models, namely the Gabor wavelet network (GWN) and the kernel associative memory (KAM), into a unified structure called the Gabor wavelet associative memory (GWAM). GWAM has superior representation capability inherited from GWN and consequently demonstrates much better recognition performance than KAM. Extensive experiments have been conducted to evaluate a GWAM-based recognition scheme using three popular face databases, i.e., the FERET database, the Olivetti-Oracle Research Lab (ORL) database and the AR face database. The experimental results consistently show our scheme's superiority and demonstrate very high performance, comparing favorably with some recent face recognition methods: it achieves 99.3% and 100% accuracy, respectively, on the former two databases and exhibits very robust performance on the last database against varying illumination conditions. PMID:15732406
Adaptive wavelet methods - Matrix-vector multiplication
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-12-01
The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into a well-conditioned infinite-dimensional l2 problem, finding a convergent iteration process for the l2 problem, and finally derivation of its finite-dimensional version, which works with an inexact right-hand side and approximate matrix-vector multiplications. In our contribution, we briefly review all these parts and mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by the off-diagonal decay of the entries of the wavelet stiffness matrix. We propose a new approach which better utilizes the actual decay of the matrix entries.
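The compression effect of off-diagonal decay is easy to demonstrate on a toy matrix. The polynomial decay law below is an assumption for illustration only, not the actual wavelet stiffness matrix of the cited works.

```python
import numpy as np

# model matrix with off-diagonal decay (assumed law, for illustration only)
n = 64
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = 1.0 / (1.0 + np.abs(i - j)) ** 3

# approximate matrix-vector product: drop entries below a tolerance
A_sparse = np.where(np.abs(A) > 1e-3, A, 0.0)
x = np.random.default_rng(3).standard_normal(n)
err = np.linalg.norm(A @ x - A_sparse @ x) / np.linalg.norm(A @ x)
kept = np.count_nonzero(A_sparse) / A.size
print(kept, err)   # a small fraction of entries reproduces the product accurately
```

The faster the entries decay away from the diagonal, the fewer entries the approximate multiplication needs for a given accuracy, which is the effect the adaptive schemes exploit.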
Wavelet excited measurement of system transfer function.
Olkkonen, H; Olkkonen, J T
2007-02-01
This article introduces a new method, which is referred to as the wavelet excitation method (WEM), for the measurement of the system transfer function. Instead of commonly used impulse or sine wave excitations, the method uses a sequential excitation by biorthogonal symmetric wavelets. The system transfer function is reconstructed from the output measurements. In the WEM the signals can be designed so that if N different excitation sequences are used and the excitation rate is f, the sampling rate of the analog-to-digital converter can be reduced to f/N. The WEM is especially advantageous in testing systems, where high quality impulse excitation cannot be applied. The WEM gave consistent results in transfer function measurements of various multistage amplifiers with the linear circuit analysis (SPICE) and the sine wave excitation methods. The WEM makes available new high speed sensor applications, where the sampling rate of the sensor may be considerably lower compared with the system bandwidth. PMID:17578145
Wavelet analysis of the impedance cardiogram waveforms
NASA Astrophysics Data System (ADS)
Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.
2012-12-01
Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z, which are related to distinct physiological events in the cardiac cycle. The objective of this work is the validation of a new method for processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registration. An original wavelet differentiation algorithm allows filtering to be combined with calculation of the derivatives of the rheocardiogram. The proposed approach can be used in clinical practice for early diagnostics of cardiovascular system remodelling in the course of different pathologies.
Wavelets and their applications past and future
NASA Astrophysics Data System (ADS)
Coifman, Ronald R.
2009-04-01
As this is a conference on mathematical tools for defense, I would like to dedicate this talk to the memory of Louis Auslander, who, through his insights and visionary leadership, brought powerful new mathematics into DARPA and provided the main impetus for the development and insertion of wavelet-based processing in defense. My goal here is to describe the evolution of a stream of ideas in Harmonic Analysis, ideas which in the past have mostly been applied to the analysis and extraction of information from physical data, and which are now increasingly applied to organize and extract information and knowledge from any set of digital documents, from text to music to questionnaires. This form of signal processing on digital data is part of the future of wavelet analysis.
Development of wavelet analysis tools for turbulence
NASA Technical Reports Server (NTRS)
Bertelrud, A.; Erlebacher, G.; Dussouillez, PH.; Liandrat, M. P.; Liandrat, J.; Bailly, F. Moret; Tchamitchian, PH.
1992-01-01
Presented here is the general framework and the initial results of a joint effort to derive novel research tools and easy to use software to analyze and model turbulence and transition. Given here is a brief review of the issues, a summary of some basic properties of wavelets, and preliminary results. Technical aspects of the implementation, the physical conclusions reached at this time, and current developments are discussed.
Wavelet features in motion data classification
NASA Astrophysics Data System (ADS)
Szczesna, Agnieszka; Świtoński, Adam; Słupik, Janusz; Josiński, Henryk; Wojciechowski, Konrad
2016-06-01
The paper deals with the problem of motion data classification based on the results of multiresolution analysis implemented in the form of a quaternion lifting scheme. The scheme operates directly on time series of rotations coded as a unit quaternion signal. In this work, new features derived from wavelet energy and entropy are proposed. To validate the approach, a gait database containing data from 30 different humans is used. The obtained results are satisfactory: the classification accuracy exceeds 91%.
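Wavelet energy and entropy features of the kind mentioned here can be sketched for a plain 1-D signal. This stdlib-only toy does not reproduce the paper's quaternion lifting scheme; it only shows how per-level energy and entropy features are formed from detail coefficients.

```python
import math

def haar_details(x):
    # per-level Haar detail coefficients (finest level first)
    levels = []
    while len(x) > 1:
        levels.append([(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)])
        x = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return levels

def energy_entropy_features(x):
    # (energy, entropy) of the detail coefficients at each level
    feats = []
    for d in haar_details(x):
        e = sum(c * c for c in d)                        # wavelet energy
        p = [c * c / e for c in d] if e > 0 else []      # coefficient energy shares
        h = -sum(q * math.log(q) for q in p if q > 0)    # wavelet (Shannon) entropy
        feats.append((e, h))
    return feats

print(energy_entropy_features([1.0, 3.0, 1.0, 3.0, 1.0, 3.0, 1.0, 3.0]))
```

The resulting per-level (energy, entropy) pairs form a compact feature vector for a downstream classifier.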
Multiscale peak detection in wavelet space.
Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng
2015-12-01
Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, which can increase the accuracy and reliability by identifying peaks across scales in wavelet space and implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate in the case of peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD) by taking full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It can achieve a high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset as well as the Romanian database of Raman spectra, which is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than MassSpecWavelet and MALDIquant methods. Superior results in Raman spectra suggest that MSPD seems to be a more universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at . PMID:26514234
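The ridge idea behind this kind of detector can be approximated in a few lines: a position counts as a peak if it is a local maximum of the wavelet response at every probed scale. This voting scheme is a crude stand-in for the ridge/valley/zero-crossing machinery of MSPD, and the widths, tolerance and test signal are all assumed.

```python
import numpy as np

def ricker(n, a):
    # Mexican-hat ("Ricker") wavelet, the usual CWT kernel for peak picking
    t = np.arange(n) - (n - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-(t ** 2) / (2.0 * a ** 2))

def multiscale_peaks(x, widths, min_votes):
    # keep positions that are local maxima of the response at >= min_votes scales
    votes = np.zeros(len(x), dtype=int)
    for a in widths:
        r = np.convolve(x, ricker(10 * a, a), mode="same")
        lm = np.zeros(len(x), dtype=bool)
        lm[1:-1] = (r[1:-1] > r[:-2]) & (r[1:-1] > r[2:])
        lm = lm | np.roll(lm, 1) | np.roll(lm, -1)   # +/-1 sample tolerance
        votes += lm
    return np.flatnonzero(votes >= min_votes)

# two Gaussian peaks (near indices 80 and 210) plus mild noise
t = np.arange(300)
x = np.exp(-((t - 80) ** 2) / 30.0) + 0.7 * np.exp(-((t - 210) ** 2) / 60.0)
x = x + 0.02 * np.random.default_rng(2).standard_normal(300)
peaks = multiscale_peaks(x, widths=[2, 4, 8, 16], min_votes=4)
print(peaks)   # indices clustered around the true peak positions
```

Requiring agreement across scales is what implicitly suppresses noise and baseline, as the abstract notes; MSPD additionally thresholds each peak by the maximum of its ridge.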
Scope and applications of translation invariant wavelets to image registration
NASA Technical Reports Server (NTRS)
Chettri, Samir; LeMoigne, Jacqueline; Campbell, William
1997-01-01
The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we conclude that translation invariance is not really necessary; what is needed is affine invariance, and one way to obtain it is via the method of moment invariants. Wavelets or, more generally, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.
The 2D large deformation analysis using Daubechies wavelet
NASA Astrophysics Data System (ADS)
Liu, Yanan; Qin, Fei; Liu, Yinghua; Cen, Zhangzhi
2010-01-01
In this paper, the Daubechies (DB) wavelet is used for the solution of 2D large deformation problems. Because the DB wavelet scaling functions are used directly as basis functions, no meshes are needed for function approximation. Using the DB wavelet, solution formulations based on the total Lagrangian approach for two-dimensional large deformation problems are established. Because wavelet scaling functions lack the Kronecker delta property, Lagrange multipliers are used for the imposition of boundary conditions. Numerical examples of 2D large deformation problems illustrate that this method is effective and stable.
Wavelet-based moment invariants for pattern recognition
NASA Astrophysics Data System (ADS)
Chen, Guangyi; Xie, Wenfang
2011-07-01
Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.
Correlation Filtering of Modal Dynamics using the Laplace Wavelet
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Lind, Rick; Brenner, Martin J.
1997-01-01
Wavelet analysis allows processing of transient response data commonly encountered in vibration health monitoring tasks such as aircraft flutter testing. The Laplace wavelet is formulated as an impulse response of a single mode system to be similar to data features commonly encountered in these health monitoring tasks. A correlation filtering approach is introduced using the Laplace wavelet to decompose a signal into impulse responses of single mode subsystems. Applications using responses from flutter testing of aeroelastic systems demonstrate modal parameters and stability estimates can be estimated by correlation filtering free decay data with a set of Laplace wavelets.
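Correlation filtering with a damped-oscillator template can be sketched as follows. The wavelet form, the parameter grid, and the test signal are illustrative assumptions, not the flight-test processing itself.

```python
import numpy as np

def laplace_wavelet(t, f, zeta):
    # normalized damped sinusoid: the impulse-response shape of a single mode
    w = 2.0 * np.pi * f
    psi = np.exp(-zeta * w * t) * np.sin(w * t)
    return psi / np.linalg.norm(psi)

# free-decay record of a single 5 Hz mode with 5% damping
t = np.arange(0.0, 2.0, 1e-3)
y = np.exp(-0.05 * 2.0 * np.pi * 5.0 * t) * np.sin(2.0 * np.pi * 5.0 * t)

# correlation filtering: scan a (frequency, damping) grid, keep the best match
best = max(
    ((f, z, abs(float(np.dot(y, laplace_wavelet(t, f, z)))))
     for f in np.arange(2.0, 9.0, 0.5)
     for z in (0.01, 0.02, 0.05, 0.1)),
    key=lambda r: r[2],
)
print(best[0], best[1])   # -> 5.0 0.05
```

The grid point whose template correlates best with the free-decay data yields the modal frequency and damping estimate, which is the essence of the correlation-filtering approach described above.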
Image denoising with the dual-tree complex wavelet transform
NASA Astrophysics Data System (ADS)
Yaseen, Alauldeen S.; Pavlova, Olga N.; Pavlov, Alexey N.; Hramov, Alexander E.
2016-04-01
The purpose of this study is to compare image denoising techniques based on real and complex wavelet-transforms. Possibilities provided by the classical discrete wavelet transform (DWT) with hard and soft thresholding are considered, and influences of the wavelet basis and image resizing are discussed. The quality of image denoising for the standard 2-D DWT and the dual-tree complex wavelet transform (DT-CWT) is studied. It is shown that DT-CWT outperforms 2-D DWT at the appropriate selection of the threshold level.
Theory and application of frequency-selective wavelets
Tomas, B.
1992-01-01
Orthonormal compactly supported wavelets have been successfully applied to generate sparse representations of piecewise-smooth functions, yielding fast numerical algorithms. The authors consider the case of piecewise oscillatory functions, and construct a variation of the original Daubechies family of wavelets which efficiently represents the oscillations. This new family is constructed by moving some of the zeros of the underlying symbol away from [pi], shifting the approximation properties of the wavelets. The zeros may be chosen to give a sparse representation of an oscillatory function whose spectrum is known. In this sense, these wavelets are frequency-selective. Existence, uniqueness, and regularity results are proved for this family of wavelets. A natural application is the numerical solution of the electric field integral equation in two spatial dimensions: the kernel is singular on the diagonal, and oscillatory within a narrow frequency spectrum away from the diagonal. Applying frequency-selective wavelets with the discrete wavelet transform, the discrete equations are transformed into a sparse linear system which is economically solved by a multigrid scheme based upon the discrete wavelet transform. Substantial computational savings are obtained over the same method using the original Daubechies family of wavelets, and a factor of 10 savings is obtained over standard LU factorization.
Wavelet-based verification of the quantitative precipitation forecast
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi; Jakubiak, Bogumil
2016-06-01
This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using the two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localizations and associated intermittency of the precipitation field make verification of QPF difficult using standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by the two indices for the scale and the localization. Thus, these indices can simply be employed for characterizing the performance of QPF in scale and localization without any further elaboration or tunable parameters. Furthermore, spatially-localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. Such a feature may be considered an advantage of the wavelet-based method over more conventional "object" oriented verification methods, as the latter tend to exhibit strong threshold sensitivities. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further developments of the wavelet-based methods, especially towards a goal of identifying a weak physical process contributing to forecast error, are also pointed out.
Doppler ultrasound signal denoising based on wavelet frames.
Zhang, Y; Wang, Y; Wang, W; Liu, B
2001-05-01
A novel approach was proposed to denoise the Doppler ultrasound signal. Using this method, wavelet coefficients of the Doppler signal at multiple scales were first obtained using the discrete wavelet frame analysis. Then, a soft thresholding-based denoising algorithm was employed to deal with these coefficients to get the denoised signal. In the simulation experiments, the SNR improvements and the maximum frequency estimation precision were studied for the denoised signal. From the simulation and clinical studies, it was concluded that the performance of this discrete wavelet frame (DWF) approach is higher than that of the standard (critically sampled) wavelet transform (DWT) for the Doppler ultrasound signal denoising. PMID:11381694
Microbunching and RF Compression
Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.
2010-05-23
Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.
Bayesian Wavelet Shrinkage of the Haar-Fisz Transformed Wavelet Periodogram.
Nason, Guy; Stevens, Kara
2015-01-01
It is increasingly being realised that many real world time series are not stationary and exhibit evolving second-order autocovariance or spectral structure. This article introduces a Bayesian approach for modelling the evolving wavelet spectrum of a locally stationary wavelet time series. Our new method works by combining the advantages of a Haar-Fisz transformed spectrum with a simple, but powerful, Bayesian wavelet shrinkage method. Our new method produces excellent and stable spectral estimates and this is demonstrated via simulated data and on differenced infant electrocardiogram data. A major additional benefit of the Bayesian paradigm is that we obtain rigorous and useful credible intervals of the evolving spectral structure. We show how the Bayesian credible intervals provide extra insight into the infant electrocardiogram data. PMID:26381141
Multispectral multisensor image fusion using wavelet transforms
Lemeshewsky, George P.
1999-01-01
Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift variant, discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was made by blurring higher resolution color-infrared photography with the TM sensors' point spread function. The SIDWT based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and SPOT 10-m panchromatic illustrate the reduction in artifacts due to the SIDWT based fusion.
Denoising solar radiation data using coiflet wavelets
Karim, Samsul Ariffin Abdul; Janier, Josefina B.; Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir
2014-10-24
Signal denoising and smoothing play an important role in processing a given signal, whether from experiment or from data collection through observations. Collected data are usually a mixture of true data and some error or noise. This noise may come from the apparatus used to measure or collect the data, or from human error in handling the data. Normally, before the data are used for further processing, the unwanted noise needs to be filtered out. One efficient method for filtering the data is the wavelet transform. Because received solar radiation data fluctuate over time, they contain unwanted oscillations, namely noise, which must be filtered out before the data are used to develop a mathematical model. In order to apply denoising using the wavelet transform (WT), thresholding values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet with variation diminishing 4 is utilized for our purpose. Numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global thresholding value.
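The abstract does not specify the new thresholding rule, but the baseline it is compared against — a global (universal) threshold applied by soft shrinkage to the detail coefficients — is classical and can be sketched as follows. The MAD-based noise estimate is a standard assumption, not a detail taken from the paper.

```python
import math

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink detail coefficients toward zero by t,
    zeroing anything smaller than t in magnitude."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def universal_threshold(detail, n):
    """Global (universal) threshold sigma * sqrt(2 ln n), with sigma
    estimated from the median absolute deviation of the finest-level
    detail coefficients (0.6745 is the Gaussian MAD constant)."""
    mad = sorted(abs(c) for c in detail)[len(detail) // 2]
    sigma = mad / 0.6745
    return sigma * math.sqrt(2 * math.log(n))
```

Denoising then amounts to transforming, shrinking each detail subband with the threshold, and inverting the transform.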
Multiscale medical image fusion in wavelet domain.
Singh, Rajiv; Khare, Ashish
2013-01-01
Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in the wavelet domain. Fusion of medical images has been performed at multiple scales, varying from the minimum to the maximum level, using the maximum selection rule, which provides more flexibility in choosing the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively against existing state-of-the-art fusion methods, which include several pyramid- and wavelet-transform-based fusion methods and the principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness of the proposed approach. PMID:24453868
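One of the objective metrics listed above, entropy (E), has a compact standard definition: the Shannon entropy of the fused image's grey-level histogram, in bits per pixel. A minimal sketch, assuming 8-bit grey levels (the other metrics follow similarly standard formulas):

```python
import math

def entropy(pixels, levels=256):
    """Shannon entropy (bits/pixel) of an image's grey-level
    histogram; higher values indicate more information content
    in the fused image."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((h / n) * math.log2(h / n) for h in hist if h)
```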
Reducing the complexity of the CCSDS standard for image compression decreasing the DWT filter order
NASA Astrophysics Data System (ADS)
Ito, Leandro H.; Pinho, Marcelo S.
2014-10-01
The goal of this work is to evaluate the impact of using shorter wavelet filters in the CCSDS standard for lossy and lossless image compression. Another constraint considered was the existence of symmetry in the filters, desired to maintain the symmetric extension compatibility of the filter banks. Even though this strategy works well for floating-point wavelets, it is not always the case for their integer approximations. The periodic extension was used whenever symmetric extension was not applicable. Even though the latter outperforms the former, for fair comparison the symmetric-extension-compatible integer-to-integer wavelet approximations were evaluated under both extensions. The evaluation metrics adopted were bit rate (bpp), PSNR, and the number of operations required by each wavelet transform. All these results were compared against the ones obtained using the standard CCSDS with 9/7 filter banks, for lossy and lossless compression. The tests were performed over tiles (512x512) of raw remote sensing images from CBERS-2B (China-Brazil Earth Resources Satellites) captured by its high-resolution CCD camera. These images were cordially made available by INPE (National Institute for Space Research) in Brazil. For the CCSDS implementation, the source code developed by Hongqiang Wang of the Electrical Engineering Department at the University of Nebraska-Lincoln was used, applying the appropriate changes to the wavelet transform. For lossy compression, the results have shown that the filter bank built from the Deslauriers-Dubuc scaling function, with respectively 2 and 4 vanishing moments on the synthesis and analysis banks, presented not only a reduction of 21% in the number of operations required, but also a performance on par with the 9/7 filter bank. In the lossless case, the biorthogonal Cohen-Daubechies-Feauveau with 2 vanishing moments presented a performance close to the 9/7 integer approximation of the CCSDS, with the number of operations
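The PSNR metric used in the evaluation above has a standard definition; a minimal sketch (this is the textbook formula, not code from the paper), assuming 8-bit samples with peak value 255:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences; infinite when the reconstruction is exact."""
    n = len(original)
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / n
    if mse == 0:
        return float('inf')
    return 10.0 * math.log10(peak * peak / mse)
```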
Hildebrand, Richard J.; Wozniak, John J.
2001-01-01
A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.
Compressible turbulent mixing: Effects of compressibility
NASA Astrophysics Data System (ADS)
Ni, Qionglin
2016-04-01
We studied by numerical simulations the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity. The driving forcings differed in the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^(-5/3) scaling and suffered negligible influence from the compressibility. The growth of the Mach number showed (1) first a reduction and then an enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) first stronger and then weaker intermittency of the scalar relative to that of the velocity; and (4) an increase in the intermittency parameter, which measures the intermittency of the scalar in the dissipative range. Furthermore, the growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. The visualization of scalar dissipation showed that, in the solenoidally forced flow, the field was filled with small-scale, highly convoluted structures, while in the compressively forced flow, the field exhibited regions dominated by the large-scale motions of rarefaction and compression.
Multispectral Image Compression Based on DSC Combined with CCSDS-IDC
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
Wavelet based free-form deformations for nonrigid registration
NASA Astrophysics Data System (ADS)
Sun, Wei; Niessen, Wiro J.; Klein, Stefan
2014-03-01
In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang.1 This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems,2 but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.
Lossy Text Compression Techniques
NASA Astrophysics Data System (ADS)
Palaniappan, Venka; Latifi, Shahram
Most text documents contain a large amount of redundancy. Data compression can be used to minimize this redundancy and increase transmission efficiency or save storage space. Several text compression algorithms have been introduced for lossless text compression in critical application areas. For non-critical applications, lossy text compression can be used to improve compression efficiency. In this paper, we propose three different source models for character-based lossy text compression: Dropped Vowels (DOV), Letter Mapping (LMP), and Replacement of Characters (ROC). The working principles and transformation methods associated with these methods are presented. Compression ratios obtained are included and compared. Comparisons of performance with those of the Huffman Coding and Arithmetic Coding algorithms are also made. Finally, some ideas for further improving the performance already obtained are proposed.
Efficient seismic volume compression using the lifting scheme
NASA Astrophysics Data System (ADS)
Khene, Faouzi M.; Abdul-Jauwad, Samir H.
2000-12-01
An advanced seismic compression technique is proposed to manage seismic data in a world of ever-increasing data volumes, in order to maintain productivity without compromising interpretation results. A separable 3D discrete wavelet transform using long biorthogonal filters is used. The computational efficiency of the DWT is improved by factoring the wavelet filters using the lifting scheme. In addition, the lifting scheme offers: 1) a dramatic reduction of the required auxiliary memory, 2) an efficient combination with parallel rendering algorithms to perform arbitrary surface and volume rendering for interactive visualization, and 3) an easy integration into the parallel I/O seismic data loading routines. The proposed technique is tested on a seismic volume from the Stratton field in South Texas. The resulting 3-level multiresolution decomposition yields 21 detail sub-volumes and a unique low-resolution sub-volume. The detail wavelet coefficients are quantized with an adaptive-threshold uniform scalar quantizer. The scale-dependent thresholds are determined with the Stein unbiased risk estimate principle. As the approximation coefficients represent a smooth low-resolution version of the input data, they are quantized with a uniform scalar quantizer only. Finally, run-length coding followed by Huffman coding is applied for binary coding of the quantized coefficients.
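The lifting scheme mentioned above factors a wavelet filter bank into alternating "predict" and "update" steps that can run in place, which is where the auxiliary-memory saving comes from. A minimal sketch using the Haar wavelet (the paper's long biorthogonal filters factor into more such steps, but the structure is the same):

```python
def haar_lift_forward(x):
    """Haar transform via lifting: split into even/odd samples,
    predict the odd from the even (detail), then update the even
    with the detail (approximation)."""
    even = x[0::2]
    odd = x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Invert by undoing the steps in reverse order with flipped signs."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [d + e for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Because each step overwrites one half of the signal using only the other half, the transform is trivially invertible and needs no scratch buffers proportional to the volume size.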
High-Frequency Subband Compressed Sensing MRI Using Quadruplet Sampling
Sung, Kyunghyun; Hargreaves, Brian A
2013-01-01
Purpose To present and validate a new method that formalizes a direct link between the k-space and wavelet domains in order to apply separate undersampling and reconstruction to high- and low-spatial-frequency k-space data. Theory and Methods High- and low-spatial-frequency regions are defined in k-space based on the separation of wavelet subbands, and the conventional compressed sensing (CS) problem is transformed into one of localized k-space estimation. To better exploit wavelet-domain sparsity, CS can be used for high-spatial-frequency regions while parallel imaging can be used for low-spatial-frequency regions. Fourier undersampling is also customized to better accommodate each reconstruction method: random undersampling for CS and regular undersampling for parallel imaging. Results Examples using the proposed method demonstrate successful reconstruction of both low-spatial-frequency content and fine structures in high-resolution 3D breast imaging with a net acceleration of 11 to 12. Conclusion The proposed method improves the reconstruction accuracy of high-spatial-frequency signal content and avoids incoherent artifacts in low-spatial-frequency regions. The new formulation also reduces the reconstruction time due to the smaller problem size. PMID:23280540
Prediction of coefficients for lossless compression of multispectral images
NASA Astrophysics Data System (ADS)
Ruedin, Ana M. C.; Acevedo, Daniel G.
2005-08-01
We present a lossless compressor for multispectral Landsat images that exploits interband and intraband correlations. The compressor operates on blocks of 256 x 256 pixels and performs two kinds of predictions. For bands 1, 2, 3, 4, 5, 6.2 and 7, the compressor performs an integer-to-integer wavelet transform, which is applied to each block separately. The wavelet coefficients that have not yet been encoded are predicted by means of a linear combination of already coded coefficients that belong to the same orientation and spatial location in the same band, and coefficients of the same location from other spectral bands. A fast block classification is performed in order to use the best weights for each landscape. The prediction errors or differences are finally coded with an entropy-based coder. For band 6.1, we do not use wavelet transforms; instead, a median edge detector is applied to predict a pixel from the information of the neighbouring pixels and the equalized pixel from band 6.2. This technique better exploits the great similarity between the histograms of bands 6.1 and 6.2. The prediction differences are finally coded with a context-based entropy coder. The two kinds of predictions used reduce both spatial and spectral correlations, increasing the compression rates. Our compressor has been shown to be superior to the lossless compressors WinZip, LOCO-I, PNG and JPEG2000.
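The median edge detector mentioned for band 6.1 is the MED predictor known from LOCO-I/JPEG-LS: it predicts a pixel from its left (a), upper (b) and upper-left (c) causal neighbours, switching between edge-following and planar prediction. A minimal sketch of that standard predictor (how the paper additionally folds in the equalized band-6.2 pixel is not specified in the abstract):

```python
def med_predict(a, b, c):
    """Median edge detector (LOCO-I/JPEG-LS): predict pixel x from
    left (a), above (b) and upper-left (c) neighbours."""
    if c >= max(a, b):
        return min(a, b)   # edge detected above or to the left
    if c <= min(a, b):
        return max(a, b)
    return a + b - c       # smooth region: planar prediction
```

The encoder then stores only the prediction residual x - med_predict(a, b, c), which is what the context-based entropy coder compresses.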
Radiological Image Compression
NASA Astrophysics Data System (ADS)
Lo, Shih-Chung Benedict
The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) of the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is presented in the last chapter.
A Load Balanced Domain Decomposition Method Using Wavelet Analysis
Jameson, L; Johnson, J; Hesthaven, J
2001-05-31
Wavelet Analysis provides an orthogonal basis set which is localized in both the physical space and the Fourier transform space. We present here a domain decomposition method that uses wavelet analysis to maintain roughly uniform error throughout the computation domain while keeping the computational work balanced in a parallel computing environment.
Wavelet spectrum analysis approach to model validation of dynamic systems
NASA Astrophysics Data System (ADS)
Jiang, Xiaomo; Mahadevan, Sankaran
2011-02-01
Feature-based validation techniques for dynamic system models could be unreliable for nonlinear, stochastic, and transient dynamic behavior, where the time series is usually non-stationary. This paper presents a wavelet spectral analysis approach to validate a computational model for a dynamic system. Continuous wavelet transform is performed on the time series data for both model prediction and experimental observation using a Morlet wavelet function. The wavelet cross-spectrum is calculated for the two sets of data to construct a time-frequency phase difference map. The Box-plot, an exploratory data analysis technique, is applied to interpret the phase difference for validation purposes. In addition, wavelet time-frequency coherence is calculated using the locally and globally smoothed wavelet power spectra of the two data sets. Significance tests are performed to quantitatively verify whether the wavelet time-varying coherence is significant at a specific time and frequency point, considering uncertainties in both predicted and observed time series data. The proposed wavelet spectrum analysis approach is illustrated with a dynamics validation challenge problem developed at the Sandia National Laboratories. A comparison study is conducted to demonstrate the advantages of the proposed methodologies over classical frequency-independent cross-correlation analysis and time-independent cross-coherence analysis for the validation of dynamic systems.
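The building blocks of the analysis above are the Morlet wavelet and the continuous wavelet transform evaluated on a grid of scales and translations; the cross-spectrum is then W_x * conj(W_y), whose argument is the phase difference. A minimal direct-summation sketch (real analyses use FFT-based convolution, and the small admissibility correction of the Morlet wavelet is omitted here, which is customary for center frequency omega0 ≈ 6):

```python
import cmath
import math

def morlet(t, omega0=6.0):
    """Complex Morlet mother wavelet (correction term omitted,
    acceptable for omega0 around 6)."""
    return (math.pi ** -0.25) * cmath.exp(1j * omega0 * t) * math.exp(-t * t / 2)

def cwt_point(signal, dt, scale, b):
    """Continuous wavelet transform of a sampled signal at a single
    (scale, translation b) point, by direct summation of the
    L2-normalized wavelet inner product."""
    total = 0j
    for n, x in enumerate(signal):
        t = (n * dt - b) / scale
        total += x * morlet(t).conjugate()
    return total * dt / math.sqrt(scale)
```

Given transforms Wx and Wy of the predicted and observed series at the same (scale, b), the local phase difference used for the time-frequency map is cmath.phase(Wx * Wy.conjugate()).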
Experimental wavelet based denoising for indoor infrared wireless communications.
Rajbhandari, Sujan; Ghassemlooy, Zabih; Angelova, Maia
2013-06-01
This paper reports experimental wavelet denoising techniques, carried out for the first time for a number of modulation schemes in indoor optical wireless communications in the presence of fluorescent light interference. The experimental results are verified using computer simulations, clearly illustrating the advantage of the wavelet denoising technique over high-pass filtering for all baseband modulation schemes. PMID:23736631
Modified wavelet kernel methods for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Hsu, Pai-Hui; Huang, Xiu-Man
2015-10-01
Hyperspectral images have the capability of acquiring images of the earth's surface with several hundred spectral bands. Such abundant spectral data should increase the ability to classify land use/cover types. However, due to the high dimensionality of hyperspectral data, traditional classification methods are not suitable for hyperspectral data classification. The common solution to this problem is dimensionality reduction, using feature extraction before classification. Kernel methods such as the support vector machine (SVM) and multiple kernel learning (MKL) have been successfully applied to hyperspectral image classification. In kernel method applications, the selection of the kernel function plays an important role. The wavelet kernel, built from multidimensional wavelet functions, can find the optimal approximation of data in the feature space for classification. The SVM with wavelet kernels (called WSVM) has also been applied to hyperspectral data and improves classification accuracy. In this study, a wavelet kernel method combining the multiple kernel learning algorithm with wavelet kernels is proposed for hyperspectral image classification. After the appropriate selection of a linear combination of kernel functions, the hyperspectral data are transformed to the wavelet feature space, which should have the optimal data distribution for kernel learning and classification. Finally, the proposed methods were compared with existing methods. A real hyperspectral data set was used to analyze the performance of the wavelet kernel method. According to the results, the proposed wavelet kernel methods perform well and would be an appropriate tool for hyperspectral image classification.
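A wavelet kernel for SVMs is commonly built as a product of 1-D mother-wavelet evaluations over the feature dimensions. The sketch below uses the Morlet-type translation-invariant wavelet kernel in the style of Zhang et al.'s wavelet SVM; the abstract does not state which mother wavelet this paper adopts, so treat the specific h(u) as an illustrative assumption.

```python
import math

def wavelet_kernel(x, y, a=1.0):
    """Translation-invariant wavelet kernel
    k(x, y) = prod_i h((x_i - y_i) / a), with the Morlet-type
    mother wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2).
    'a' is the dilation (scale) parameter."""
    k = 1.0
    for xi, yi in zip(x, y):
        u = (xi - yi) / a
        k *= math.cos(1.75 * u) * math.exp(-u * u / 2)
    return k
```

In an MKL setting, several such kernels at different dilations a would be combined as a learned convex combination.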
A Wavelet Based Dissipation Method for ALE Schemes
Cabot, B; Eliason, D.; Jameson, L.
2000-07-01
Wavelet analysis is a natural tool to detect the presence of numerical noise, shocks, and other features which might drive a calculation to become unstable. Here we suggest ways in which wavelets can be used effectively to define a dissipation flag to replace the dissipation flags traditionally used in ALE numerical schemes.
On precision of wavelet phase synchronization of chaotic systems
Postnikov, E. B.
2007-10-15
It is shown that time-scale synchronization of chaotic systems with ill-defined conventional phase is achieved by using wavelet transforms with center frequencies above a certain threshold value. It is found that the possibility of synchronization detection by introducing a wavelet phase is related to diffusion averaging of the analyzed signals.
Schrödinger like equation for wavelets
NASA Astrophysics Data System (ADS)
Zúñiga-Segundo, A.; Moya-Cessa, H. M.; Soto-Eguibar, F.
2016-01-01
An explicit phase-space representation of the wave function is built based on a wavelet transformation. The wavelet transformation allows us to understand the relationship between the s-ordered Wigner function (or the Wigner function when s = 0) and the Torres-Vega-Frederick wave functions. This relationship is necessary to find a general solution of the Schrödinger equation in phase space.
Face recognition by using optical correlator with wavelet preprocessing
NASA Astrophysics Data System (ADS)
Strzelecki, Jacek; Chalasinska-Macukow, Katarzyna
2004-08-01
A method of face recognition using an optical correlator with wavelet preprocessing is presented. The wavelet transform is used to improve the performance of a standard Vander Lugt correlator with a phase-only filter (POF). The influence of various wavelet transforms of images of human faces on the recognition results has been analyzed. The quality of the face recognition process was tested according to two criteria: the peak-to-correlation energy ratio (PCE) and the discrimination capability (DC). Additionally, proper localization of the correlation peak has been controlled. During the preprocessing step, a set of three wavelets (Mexican hat, Haar, and Gabor wavelets) with various scales was used. In addition, Gabor wavelets were tested for various orientation angles. During the recognition procedure, the input scene and POF are transformed by the same wavelet. We show the results of the computer simulation for a variety of images of human faces: original images without any distortions, noisy images, and images with non-uniform illumination. A comparison of recognition results obtained with and without wavelet preprocessing is given.
Wavelet based feature extraction and visualization in hyperspectral tissue characterization
Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes
2014-01-01
Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally effective, and can be implemented in real time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom, and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed, and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a significant correlation (p <0.02) between the chosen tissue parameters and the selected wavelet components. The tissue parameters could be mapped using a subset of the calculated components due to redundancy in spectral information. Vessel structures are well visualized. Wavelet analysis appears to be a promising tool for extraction of spectral features in skin. Future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition. PMID:25574437
Directional wavelet based features for colonic polyp classification.
Wimmer, Georg; Tamaki, Toru; Tischendorf, J J W; Häfner, Michael; Yoshida, Shigeto; Tanaka, Shinji; Uhl, Andreas
2016-07-01
In this work, various wavelet-based methods like the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied to the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet-based methods with respect to the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already often been successfully applied to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information of the subbands of the wavelet-based methods. Most of the 25 approaches in total were already published in different texture classification contexts, so the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel; these three extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non-wavelet-based methods are applied to our databases so that we can compare their results with those of the wavelet-based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the shearlet transform. These three wavelet-based transforms in combination with Weibull features even outperform the state
Wavelet Analysis of Satellite Images for Coastal Watch
NASA Technical Reports Server (NTRS)
Liu, Antony K.; Peng, Chich Y.; Chang, Steve Y.-S.
1997-01-01
The two-dimensional wavelet transform is a very efficient bandpass filter, which can be used to separate various scales of processes and show their relative phase/location. In this paper, algorithms and techniques for automated detection and tracking of mesoscale features from satellite imagery employing wavelet analysis are developed. The wavelet transform has been applied to satellite images, such as those from synthetic aperture radar (SAR), the advanced very-high-resolution radiometer (AVHRR), and the coastal zone color scanner (CZCS), for feature extraction. The evolution of mesoscale features such as oil slicks, fronts, eddies, and ship wakes can be tracked by wavelet analysis using satellite data from repeating paths. Several examples of the wavelet analysis applied to various satellite images demonstrate the feasibility of this technique for coastal monitoring.
Combining Wavelet Transform and Hidden Markov Models for ECG Segmentation
NASA Astrophysics Data System (ADS)
Andreão, Rodrigo Varejão; Boudy, Jérôme
2006-12-01
This work aims at providing new insights on the electrocardiogram (ECG) segmentation problem using wavelets. The wavelet transform has been combined with a hidden Markov model (HMM) framework in order to carry out beat segmentation and classification. A group of five continuous wavelet functions commonly used in ECG analysis has been implemented and compared using the same framework. All experiments were realized on the QT database, which is composed of a representative number of ambulatory recordings of several individuals and is supplied with manual labels made by a physician. Our main contribution relies on the consistent set of experiments performed. Moreover, the results obtained in terms of beat segmentation and premature ventricular contraction (PVC) detection are comparable to other works reported in the literature, independently of the type of wavelet. Finally, through an original concept of combining two wavelet functions in the segmentation stage, we achieve our best performances.
A New Adaptive Mother Wavelet for Electromagnetic Transient Analysis
NASA Astrophysics Data System (ADS)
Guillén, Daniel; Idárraga-Ospina, Gina; Cortes, Camilo
2016-01-01
The Wavelet Transform (WT) is a powerful signal processing technique whose applications in power systems have been increasing, for evaluating power system conditions such as faults, switching transients, and power quality issues, among others. Electromagnetic transients in power systems are due to changes in the network configuration and produce non-periodic signals, which have to be identified to avoid power outages in normal operation or transient conditions. In this paper a methodology to develop a new adaptive mother wavelet for electromagnetic transient analysis is proposed. Classification is carried out with an innovative technique based on adaptive wavelets, where filter bank coefficients are adapted until a discriminant criterion is optimized. The corresponding filter coefficients are then used to obtain the new mother wavelet, named wavelet ET, which makes it possible to identify and distinguish the high-frequency information produced by different electromagnetic transients.
Wavelet approach to accelerator problems. 1: Polynomial dynamics
Fedorova, A.; Zeitlin, M.; Parsa, Z.
1997-05-01
This is the first part of a series of talks in which the authors present applications of methods from wavelet analysis to polynomial approximations for a number of accelerator physics problems. In the general case, they obtain the solution as a multiresolution expansion in a compactly supported wavelet basis. The solution is parameterized by the solutions of two reduced algebraic problems: one nonlinear, and a second, linear problem obtained from one of the following wavelet constructions: the Fast Wavelet Transform, Stationary Subdivision Schemes, or the method of Connection Coefficients. In this paper the authors consider the problem of calculating orbital motion in storage rings. The key point in the solution of this problem is the use of the methods of wavelet analysis, a relatively novel set of mathematical methods which makes it possible to work with well-localized bases in functional spaces and with general types of operators (including pseudodifferential ones) in such bases.
[CCD spectrum processing of automobile lamp testing with wavelet analysis].
Zheng, Yong-mei; Gao, Chun-ge; Jiang, Yong-heng
2003-06-01
According to wavelet multi-resolution analysis, the optical spectrum signal received by a CCD was processed and analyzed using the wavelet transform. Noise in the optical spectrum signal can be removed by the wavelet multi-resolution analysis technique, the spectral curve can be smoothed, and the signal-to-noise ratio is thereby enhanced. The CCD spectrum of an A source was smoothed with wavelet multi-resolution analysis, and the processing results with different wavelets at different orders are discussed. The result is an effective data-processing method for spectrum analysis that addresses the difficulty of analyzing and processing real-time signals, which is significant in the color-testing field. PMID:12953544
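As a rough illustration of the kind of multi-resolution denoising described above (not the authors' exact procedure), the sketch below performs a single-level Haar decomposition of a noisy 1-D spectrum and soft-thresholds the detail coefficients; the synthetic signal, the Haar wavelet, and the threshold value are all illustrative assumptions:

```python
import numpy as np

def haar_fwd(x):
    """Single-level Haar transform: approximation and detail halves."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_inv(a, d):
    """Exact inverse of haar_fwd."""
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, thr):
    """Soft-threshold the detail coefficients, keep the approximation."""
    a, d = haar_fwd(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    return haar_inv(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.exp(-((t - 0.5) / 0.1) ** 2)            # a smooth spectral "line"
noisy = clean + 0.05 * rng.standard_normal(t.size)  # additive sensor noise
smoothed = denoise(noisy, thr=0.1)
```

In practice the decomposition is iterated over several levels and the threshold is estimated from the noise level rather than fixed by hand.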
Centralized and interactive compression of multiview images
NASA Astrophysics Data System (ADS)
Gelman, Andriy; Dragotti, Pier Luigi; Velisavljević, Vladan
2011-09-01
In this paper, we propose two multiview image compression methods. The basic concept of both schemes is the layer-based representation, in which the captured three-dimensional (3D) scene is partitioned into layers each related to a constant depth in the scene. The first algorithm is a centralized scheme where each layer is de-correlated using a separable multi-dimensional wavelet transform applied across the viewpoint and spatial dimensions. The transform is modified to efficiently deal with occlusions and disparity variations for different depths. Although the method achieves a high compression rate, the joint encoding approach requires the transmission of all data to the users. By contrast, in an interactive setting, the users request only a subset of the captured images, but in an unknown order a priori. We address this scenario in the second algorithm using Distributed Source Coding (DSC) principles which reduces the inter-view redundancy and facilitates random access at the image level. We demonstrate that the proposed centralized and interactive methods outperform H.264/MVC and JPEG 2000, respectively.
Imaging industry expectations for compressed sensing in MRI
NASA Astrophysics Data System (ADS)
King, Kevin F.; Kanwischer, Adriana; Peters, Rob
2015-09-01
Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm
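The nonlinear reconstruction step mentioned above is often an iterative soft-thresholding scheme. The toy sketch below is a generic ISTA solver on a synthetic sparse-recovery problem, not any vendor's algorithm; the problem sizes, seed, and regularization weight are illustrative assumptions. It shows the basic mechanism of enforcing sparsity while staying consistent with the measurements:

```python
import numpy as np

def ista(A, y, lam=0.01, steps=3000):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L             # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 100)) / np.sqrt(50)  # "incoherent" measurement matrix
x0 = np.zeros(100)
x0[[5, 17, 42, 60, 88]] = [1.0, -1.0, 1.0, 1.0, -1.0]  # sparse ground truth
y = A @ x0                                        # noiseless measurements
x_hat = ista(A, y)
```

In MRI the dense matrix `A` is replaced by an undersampled Fourier operator composed with a sparsifying transform, but the shrinkage mechanism is the same.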
Image Compression Algorithm Altered to Improve Stereo Ranging
NASA Technical Reports Server (NTRS)
Kiely, Aaron
2008-01-01
A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.
An image fusion method based on biorthogonal wavelet
NASA Astrophysics Data System (ADS)
Li, Jianlin; Yu, Jiancheng; Sun, Shengli
2008-03-01
Image fusion processes and combines source images, with their complementary image information, to achieve a more objective and essential understanding of the same object. Recently, image fusion has been applied extensively in fields such as medical imaging, micro-photographic imaging, remote sensing, and computer vision and robotics. Various methods have been proposed over the years, such as pyramid decomposition and wavelet transform algorithms. By virtue of its multi-resolution property, the wavelet transform has been applied successfully in image processing. Another advantage of the wavelet transform is that it is much easier to realize in hardware, because its data format is very simple, so it saves considerable resources and, to some extent, addresses the real-time problem of fusing very large images. However, since the orthogonal filters of the wavelet transform do not have linear phase, phase distortion leads to distortion of image edges. To make up for this shortcoming, the biorthogonal wavelet is introduced here, and a novel image fusion scheme based on biorthogonal wavelet decomposition is presented in this paper. For the low-frequency and high-frequency wavelet decomposition coefficients, a local-area-energy-weighted-coefficient fusion rule is adopted, with different thresholds set for the low-frequency and high-frequency bands. Based on the biorthogonal wavelet transform and the traditional pyramid decomposition algorithm, an MMW image and a visible image are fused in the experiment. Compared with traditional pyramid decomposition, the biorthogonal-wavelet-based fusion scheme is better able to retain and extract image information and to compensate for distortion of image edges, so it has wide application potential.
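A minimal sketch of wavelet-domain fusion follows. It substitutes a single-level Haar transform for the paper's biorthogonal filters and a simple choose-max detail rule for the local-area-energy weighting; both substitutions, and the random test images, are ours:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar transform: rows then columns -> (LL, LH, HL, HH)."""
    def step(a, axis):
        a = np.moveaxis(a, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        hi = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)
    L, H = step(img, 0)
    LL, LH = step(L, 1)
    HL, HH = step(H, 1)
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    def istep(lo, hi, axis):
        lo = np.moveaxis(lo, axis, 0)
        hi = np.moveaxis(hi, axis, 0)
        out = np.empty((lo.shape[0] * 2,) + lo.shape[1:])
        out[0::2] = (lo + hi) / np.sqrt(2.0)
        out[1::2] = (lo - hi) / np.sqrt(2.0)
        return np.moveaxis(out, 0, axis)
    L = istep(LL, LH, 1)
    H = istep(HL, HH, 1)
    return istep(L, H, 0)

def fuse(a, b):
    """Average the approximations, keep the larger-magnitude detail coefficient."""
    A, B = haar2d(a), haar2d(b)
    LL = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(A[1:], B[1:])]
    return ihaar2d(LL, *details)

rng = np.random.default_rng(4)
a = rng.random((8, 8))      # stand-ins for the MMW and visible images
b = rng.random((8, 8))
fused = fuse(a, b)
```

The paper's local-area-energy rule would replace the element-wise `np.where` with a comparison of detail energy over small neighborhoods.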
Volumetric Rendering of Geophysical Data on Adaptive Wavelet Grid
NASA Astrophysics Data System (ADS)
Vezolainen, A.; Erlebacher, G.; Vasilyev, O.; Yuen, D. A.
2005-12-01
Numerical modeling of geological phenomena frequently involves processes across a wide range of spatial and temporal scales. In the last several years, transport phenomena governed by the Navier-Stokes equations have been simulated in wavelet space using second-generation wavelets [1], and most recently on fully adaptive meshes. Our objective is to visualize this time-dependent data using volume rendering while capitalizing on the available sparse data representation. We present a technique for volumetric ray casting of multi-scale datasets in wavelet space. Rather than working with the wavelets at the finest possible resolution, we perform a partial inverse wavelet transform as a preprocessing step to obtain scaling functions on a uniform grid at a user-prescribed resolution. As a result, a function in physical space is represented by a superposition of scaling functions on a coarse regular grid and wavelets on an adaptive mesh. An efficient and accurate ray-casting algorithm is based on these scaling functions alone. Additional detail is added during ray tracing by taking an appropriate number of wavelets into account, based on support overlap with the interpolation point, wavelet amplitude, and other characteristics such as opacity accumulation (front-to-back ordering) and deviation from the frontal viewing direction. Strategies for hardware implementation, inspired by the work in [2], will be presented where available. We will present error measures as a function of the number of scaling and wavelet functions used for interpolation. Data from mantle convection will be used to illustrate the method. [1] Vasilyev, O.V. and Bowman, C., Second Generation Wavelet Collocation Method for the Solution of Partial Differential Equations. J. Comp. Phys., 165, pp. 660-693, 2000. [2] Guthe, S., Wand, M., Gonser, J., and Straßer, W. Interactive rendering of large volume data sets. In Proceedings of the Conference on Visualization '02 (Boston, Massachusetts, October 27 - November
Lung tissue classification using wavelet frames.
Depeursinge, Adrien; Sage, Daniel; Hidki, Asmâa; Platon, Alexandra; Poletti, Pierre-Alexandre; Unser, Michael; Müller, Henning
2007-01-01
We describe a texture classification system that identifies lung tissue patterns from high-resolution computed tomography (HRCT) images of patients affected by interstitial lung diseases (ILD). This pattern recognition task is part of an image-based diagnostic aid system for ILDs. Five lung tissue patterns (healthy, emphysema, ground glass, fibrosis, and micronodules) selected from a multimedia database are classified using an overcomplete discrete wavelet frame decomposition combined with grey-level histogram features. The overall multiclass accuracy reaches 92.5% correct matches when the two types of features, which are found to be complementary, are combined. PMID:18003452
Spike detection using the continuous wavelet transform.
Nenadic, Zoran; Burdick, Joel W
2005-01-01
This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method requires neither the construction of templates nor the supervised setting of thresholds. We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods under a wide variety of recording conditions. We further demonstrate that the falsely detected spikes produced by our method resemble actual spikes more closely than the false positives of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution. PMID:15651566
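A bare-bones version of the idea can be sketched as a Ricker ("Mexican hat") continuous wavelet transform followed by a simple median-based threshold. The actual paper derives a more principled detection-theoretic rule; the wavelet, scales, threshold factor, and synthetic recording below are our illustrative choices:

```python
import numpy as np

def ricker(n, s):
    """Ricker ('Mexican hat') wavelet sampled at n points, width parameter s."""
    t = np.arange(n) - (n - 1) / 2.0
    return (1.0 - (t / s) ** 2) * np.exp(-t ** 2 / (2.0 * s ** 2))

def cwt_detect(x, scales, k=5.0):
    """Max wavelet response over scales, thresholded at k times its median."""
    coefs = [np.convolve(x, ricker(10 * int(s) + 1, s), mode='same') for s in scales]
    energy = np.max(np.abs(np.array(coefs)), axis=0)
    return np.where(energy > k * np.median(energy))[0], energy

rng = np.random.default_rng(2)
x = 0.02 * rng.standard_normal(1000)   # background noise
x[90:131] += ricker(41, 4.0)           # one embedded spike centered at sample 110
hits, energy = cwt_detect(x, scales=[2, 4, 8])
```

Matching the spike against several scales at once is what makes the detector robust to variation in spike width, without any template library or hand-set amplitude threshold.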
Characteristic Extraction of Speech Signal Using Wavelet
NASA Astrophysics Data System (ADS)
Moriai, Shogo; Hanazaki, Izumi
In analysis-synthesis coding of speech signals, achieving high quality at low bit rates depends on extracting the signal's characteristic parameters in the pre-processing stage. Precise extraction of the fundamental frequency, one of the parameters of the source information, guarantees quality in speech synthesis. Its extraction is difficult, however, because of the influence of consonants, the non-periodicity of vocal-cord vibration, the wide range of the fundamental frequency, etc. In this paper, we propose a new fundamental-frequency extraction method for speech signals using the wavelet transform, with a criterion based on the signal's harmonic structure.
Image encryption in the wavelet domain
NASA Astrophysics Data System (ADS)
Bao, Long; Zhou, Yicong; Chen, C. L. Philip
2013-05-01
Most existing image encryption algorithms transform the original image into a noise-like image, which is an obvious visual sign of the presence of encrypted content. Motivated by data-hiding technologies, this paper proposes a novel concept of image encryption: the encrypted original image is transformed into another meaningful image, which is the final encrypted image and is visually the same as a cover image, overcoming the problem just mentioned. Using this concept, we introduce a new image encryption algorithm based on wavelet decomposition. Simulations and security analysis are given to show the excellent performance of the proposed concept and algorithm.
Holographic features of spatial coherence wavelets.
Castaneda, Roman; Betancur, Rafael; Hincapie, Diego
2008-08-01
The behavior of the marginal power spectrum as a two-channel-multiplexed hologram is analyzed. Its "negative energies" make it quite different from the conventional holograms, i.e., it is not recordable in general and the objects to be reconstructed (the cross-spectral densities at both the aperture and the observation planes) are virtual. The holographic reconstruction results from the superposition of the spatial coherence wavelets that carry the marginal power spectrum. These features make the marginal power spectrum a powerful tool for analysis and synthesis of optical fields, for instance, in optical information processing (signal encryption) and beam shaping for microlithography. PMID:18677351
Young's experiment with electromagnetic spatial coherence wavelets.
Castaneda, Roman; Carrasquilla, Juan; Garcia-Sucerquia, Jorge
2006-10-01
We discuss Young's experiment with electromagnetic random fields at arbitrary states of coherence and polarization within the framework of the electric spatial coherence wavelets. The use of this approach for the electromagnetic spatial coherence theory allows us to envisage the existence of polarization domains inside the observation plane. We show that it is possible to locally control those polarization domains by means of the correlation properties of the electromagnetic wave. To show the validity of this alternative approach, we derive by means of numerical modeling the classical Fresnel-Arago interference laws. PMID:16985537
Musculoskeletal ultrasound image denoising using Daubechies wavelets
NASA Astrophysics Data System (ADS)
Gupta, Rishu; Elamvazuthi, I.; Vasant, P.
2012-11-01
Among existing medical imaging modalities, ultrasound promises a bright future because of its ready availability and its use of non-ionizing radiation. In this paper we denoise ultrasound images using Daubechies wavelets and analyze the results with peak signal-to-noise ratio (PSNR) and correlation coefficient as performance indices. Daubechies wavelets of orders 1 to 6 are applied to four different ultrasound bone-fracture images at three decomposition levels, from 1 to 3. The resulting images are shown for visual inspection, and the PSNR and correlation-coefficient values are plotted for quantitative analysis.
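The two quality indices used above are simple to compute; the following sketch uses their standard definitions (nothing specific to the paper):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def corr_coef(a, b):
    """Pearson correlation coefficient between two images, flattened."""
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])
```

Higher PSNR and a correlation coefficient near 1 both indicate that the denoised image stays close to the reference.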
Compressed Sensing Based Fingerprint Identification for Wireless Transmitters
Zhao, Caidan; Wu, Xiongpeng; Huang, Lianfen; Yao, Yan; Chang, Yao-Chung
2014-01-01
Most existing fingerprint identification techniques are unable to distinguish between different wireless transmitters whose emitted signals are highly attenuated by long-distance propagation and whose transient waveforms are strongly similar. Therefore, this paper proposes a new method to identify different wireless transmitters based on compressed sensing. A data acquisition system is designed to capture the wireless transmitter signals. The complex analytical wavelet transform is used to obtain the envelope of the transient signal, and the corresponding features are extracted using compressed sensing theory. Feature selection using minimum redundancy maximum relevance (mRMR) is employed to obtain the optimal feature subsets for identification. The results show that the proposed method is more efficient for the identification of wireless transmitters with similar transient waveforms. PMID:24892053
Compressing Image Data While Limiting the Effects of Data Losses
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Klimesh, Matthew
2006-01-01
ICER is computer software that can perform both lossless and lossy compression and decompression of gray-scale-image data using discrete wavelet transforms. Designed for primary use in transmitting scientific image data from distant spacecraft to Earth, ICER incorporates an error-containment scheme that limits the adverse effects of loss of data and is well suited to the data packets transmitted by deep-space probes. The error-containment scheme includes utilization of the algorithm described in "Partitioning a Gridded Rectangle Into Smaller Rectangles " (NPO-30479), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 56. ICER has performed well in onboard compression of thousands of images transmitted from the Mars Exploration Rovers.
On ECG reconstruction using weighted-compressive sensing.
Zonoobi, Dornoosh; Kassim, Ashraf A
2014-06-01
The potential of the new weighted-compressive sensing approach for efficient reconstruction of electrocardiograph (ECG) signals is investigated. This is motivated by the observation that ECG signals are hugely sparse in the frequency domain and the sparsity changes slowly over time. The underlying idea of this approach is to extract an estimated probability model for the signal of interest, and then use this model to guide the reconstruction process. The authors show that the weighted-compressive sensing approach is able to achieve reconstruction performance comparable with the current state-of-the-art discrete wavelet transform-based method, but with substantially less computational cost to enable it to be considered for use in the next generation of miniaturised wearable ECG monitoring devices. PMID:26609381
NASA Astrophysics Data System (ADS)
Riel, B.; Simons, M.; Agram, P.
2012-12-01
Transients are a class of deformation signals on the Earth's surface that can be described as non-periodic accumulation of strain in the crust. Over seismically and volcanically active regions, these signals are often challenging to detect due to noise and other modes of deformation. Geodetic datasets that provide precise measurements of surface displacement over wide areas are ideal for exploiting both the spatial and temporal coherence of transient signals. We present an extension to the Multiscale InSAR Time Series (MInTS) approach for analyzing geodetic data by combining the localization benefits of wavelet transforms (localizing signals in space) with sparse optimization techniques (localizing signals in time). Our time parameterization approach allows us to reduce geodetic time series to sparse, compressible signals with very few non-zero coefficients corresponding to transient events. We first demonstrate the temporal transient detection by analyzing GPS data over the Long Valley caldera in California and along the San Andreas fault near Parkfield, CA. For Long Valley, we are able to resolve the documented 2002-2003 uplift event with greater temporal precision. Similarly for Parkfield, we model the postseismic deformation by specific integrated basis splines characterized by timescales that are largely consistent with postseismic relaxation times. We then apply our method to ERS and Envisat InSAR datasets consisting of over 200 interferograms for Long Valley and over 100 interferograms for Parkfield. The wavelet transforms reduce the impact of spatially correlated atmospheric noise common in InSAR data since the wavelet coefficients themselves are essentially uncorrelated. The spatial density and extended temporal coverage of the InSAR data allows us to effectively localize ground deformation events in both space and time with greater precision than has been previously accomplished.
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, along with how they are combined to form a new strategy for performing dynamic on-line lossy compression. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
NASA Astrophysics Data System (ADS)
Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek
2009-02-01
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on the interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital-photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; these optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques in compressing 3D images. Herein we report our work on a new low-bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in the high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands
An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform
NASA Astrophysics Data System (ADS)
Huang, Xiaosheng; Zhao, Sujuan
At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms, and fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. The watermark information is then adaptively embedded into the original image at different resolutions, taking into account the features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.
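For reference, one common form of the morphological Haar analysis/synthesis pair (following the lifting construction of Heijmans and Goutsias; the paper's exact multi-scale variant may differ) replaces the linear average with a maximum:

```python
import numpy as np

def morph_haar_fwd(x):
    """Morphological Haar: max of each pair as approximation, difference as detail."""
    x0, x1 = x[0::2], x[1::2]
    a = np.maximum(x0, x1)
    d = x0 - x1
    return a, d

def morph_haar_inv(a, d):
    """Exact inverse of morph_haar_fwd."""
    x = np.empty(a.size * 2, dtype=a.dtype)
    x[0::2] = a + np.minimum(d, 0)
    x[1::2] = a - np.maximum(d, 0)
    return x

x = np.array([3, 7, 2, 2, -5, 1, 0, 9])
a, d = morph_haar_fwd(x)   # a = [7, 2, 1, 9], d = [-4, 0, -6, -9]
```

Because max/min and integer subtraction introduce no rounding, the transform is perfectly invertible, which is why non-linear wavelets of this kind are attractive for watermarking.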
A constrained two-layer compression technique for ECG waves.
Byun, Kyungguen; Song, Eunwoo; Shim, Hwan; Lim, Hyungjoon; Kang, Hong-Goo
2015-08-01
This paper proposes a constrained two-layer compression technique for electrocardiogram (ECG) waves whose encoded parameters can be used directly for the diagnosis of arrhythmia. In the first layer, a single ECG beat is represented by one of the registered templates in a codebook. Since the only coding parameter required in this layer is the codebook index of the selected template, its compression ratio (CR) is very high. Note that the distribution of registered templates is also related to the characteristics of the ECG waves, so it can be used as a metric to detect various types of arrhythmia. The residual error between the input and the selected template is encoded by wavelet-based transform coding in the second layer, with the number of wavelet coefficients constrained by a pre-defined maximum allowable distortion. The MIT-BIH arrhythmia database is used to evaluate the performance of the proposed algorithm, which achieves a CR of around 7.18 when the target percentage root-mean-square difference (PRD) is set to ten. PMID:26737691
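The two-layer idea can be sketched as follows. This is our simplified stand-in: a nearest-template search for layer 1, and a greedy "keep the largest Haar coefficients until the PRD constraint is met" residual coder for layer 2; the real coder entropy-codes the retained coefficients, and the templates and test beat here are synthetic:

```python
import numpy as np

def haar_fwd(x):
    """Single-level orthonormal Haar transform, coefficients concatenated."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def haar_inv(c):
    """Exact inverse of haar_fwd."""
    h = c.size // 2
    a, d = c[:h], c[h:]
    x = np.empty(c.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def encode_beat(beat, codebook, max_prd=10.0):
    """Layer 1: best template index; layer 2: largest residual coefficients."""
    idx = int(np.argmin([np.sum((beat - t) ** 2) for t in codebook]))
    coeffs = haar_fwd(beat - codebook[idx])
    kept = np.zeros_like(coeffs)
    for i in np.argsort(-np.abs(coeffs)):     # add coefficients, largest first
        kept[i] = coeffs[i]
        rec = codebook[idx] + haar_inv(kept)
        prd = 100.0 * np.linalg.norm(beat - rec) / np.linalg.norm(beat)
        if prd <= max_prd:
            break
    return idx, kept

rng = np.random.default_rng(3)
n = np.arange(64)
codebook = [np.sin(2 * np.pi * n / 64), np.cos(2 * np.pi * n / 64)]  # toy templates
beat = codebook[0] + 0.1 * rng.standard_normal(64)   # a noisy "sine-like" beat
idx, kept = encode_beat(beat, codebook)
```

Only the template index and the handful of non-zero residual coefficients need to be transmitted, which is where the high compression ratio comes from.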
Block-based conditional entropy coding for medical image compression
NASA Astrophysics Data System (ADS)
Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng
2003-05-01
In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation for pursuing conditional entropy coding is that the first-order conditional entropy is theoretically never greater than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size for performing conditional entropy coding for various modalities, and note that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The approach is motivated by a desire to outperform JPEG2000 in terms of compression ratio, and we point towards a block-based conditional entropy coder that has the potential to do so. Although we do not give a method that attains the first-order conditional entropy exactly, a conditional adaptive arithmetic coder would come arbitrarily close to it. All the results in this paper are based on medical image data sets of various bit-depths and modalities.
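The reversible integer transform underlying such schemes can be sketched with the standard S-transform lifting steps. This is a hedged reconstruction of "the 2-D integer Haar" in its 1-D form; the 2-D version applies it along rows and then columns:

```python
import numpy as np

def int_haar_fwd(x):
    """Integer Haar (S-transform): integer average and difference, losslessly invertible."""
    x = np.asarray(x, dtype=np.int64)
    d = x[1::2] - x[0::2]
    a = x[0::2] + (d >> 1)          # arithmetic right shift == floor division by 2
    return a, d

def int_haar_inv(a, d):
    """Exact inverse: undo the lifting steps in reverse order."""
    x0 = a - (d >> 1)
    x1 = x0 + d
    out = np.empty(a.size * 2, dtype=np.int64)
    out[0::2], out[1::2] = x0, x1
    return out

x = np.array([5, 3, -2, 7, 100, 101, 0, -9])
a, d = int_haar_fwd(x)
```

Because every step maps integers to integers and is exactly undone by the inverse, the transform supports lossless medical image coding, unlike the floating-point Haar transform.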
NASA Astrophysics Data System (ADS)
Reckinger, Scott J.; Livescu, Daniel; Vasilyev, Oleg V.
2016-05-01
An investigation of compressible Rayleigh-Taylor instability (RTI) using Direct Numerical Simulations (DNS) requires efficient numerical methods, advanced boundary conditions, and consistent initialization in order to capture the wide range of scales and vortex dynamics present in the system, while reducing the computational impact associated with acoustic wave generation and the subsequent interaction with the flow. An advanced computational framework is presented that handles the challenges introduced by considering the compressive nature of RTI systems, which include sharp interfacial density gradients on strongly stratified background states, acoustic wave generation and removal at computational boundaries, and stratification dependent vorticity production. The foundation of the numerical methodology described here is the wavelet-based grid adaptivity of the Parallel Adaptive Wavelet Collocation Method (PAWCM) that maintains symmetry in single-mode RTI systems to extreme late-times. PAWCM is combined with a consistent initialization, which reduces the generation of acoustic disturbances, and effective boundary treatments, which prevent acoustic reflections. A dynamic time integration scheme that can handle highly nonlinear and potentially stiff systems, such as compressible RTI, completes the computational framework. The numerical methodology is used to simulate two-dimensional single-mode RTI to extreme late-times for a wide range of flow compressibility and variable density effects. The results show that flow compressibility acts to reduce the growth of RTI for low Atwood numbers, as predicted from linear stability analysis.
Wavelet Denoising of Mobile Radiation Data
Campbell, D B
2008-10-31
The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.
Continuous wavelet transform in quantum field theory
NASA Astrophysics Data System (ADS)
Altaisky, M. V.; Kaputkina, N. E.
2013-07-01
We describe the application of the continuous wavelet transform to calculation of the Green functions in quantum field theory: scalar ϕ4 theory, quantum electrodynamics, and quantum chromodynamics. The method of continuous wavelet transform in quantum field theory, presented by Altaisky [Phys. Rev. D 81, 125003 (2010)] for the scalar ϕ4 theory, consists in substitution of the local fields ϕ(x) by those dependent on both the position x and the resolution a. The substitution of the action S[ϕ(x)] by the action S[ϕa(x)] makes the local theory into a nonlocal one and implies the causality conditions related to the scale a, the region causality [J. D. Christensen and L. Crane, J. Math. Phys. (N.Y.) 46, 122502 (2005)]. These conditions make the Green functions G(x1,a1,…,xn,an)=⟨ϕa1(x1)…ϕan(xn)⟩ finite for any given set of regions by means of an effective cutoff scale A=min(a1,…,an).
NASA Astrophysics Data System (ADS)
Hu, Li-Yun; Fan, Hong-Yi
2010-07-01
In a preceding Letter (2007 Opt. Lett. 32 554) we proposed the complex continuous wavelet transform and found the Laguerre-Gaussian mother wavelet family. In this work we present the inversion formula and Parseval theorem for the complex continuous wavelet transform by virtue of the entangled state representation, which makes the complex continuous wavelet transform theory complete. A new orthogonality property of the mother wavelet in parameter space is also revealed.
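For orientation, the scalar (non-entangled) versions of these results are standard. With $\psi_{a,b}(t)=|a|^{-1/2}\,\psi\!\left(\frac{t-b}{a}\right)$ and $W_f(a,b)=\int f(t)\,\overline{\psi_{a,b}(t)}\,dt$, the classical admissibility condition, inversion formula, and Parseval relation read as below; the paper's contribution is the analogue of these for the complex transform in the entangled state representation:

```latex
C_\psi = \int_{-\infty}^{\infty} \frac{|\hat{\psi}(\omega)|^2}{|\omega|}\, d\omega < \infty
\qquad \text{(admissibility)}

f(t) = \frac{1}{C_\psi} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
       W_f(a,b)\, \psi_{a,b}(t)\, \frac{db\, da}{a^2}
\qquad \text{(inversion)}

\int_{-\infty}^{\infty} |f(t)|^2\, dt
     = \frac{1}{C_\psi} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty}
       |W_f(a,b)|^2\, \frac{db\, da}{a^2}
\qquad \text{(Parseval)}
```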
Wavelet Methods Developed to Detect and Control Compressor Stall
NASA Technical Reports Server (NTRS)
Le, Dzu K.
1997-01-01
A "wavelet" is, by definition, an amplitude-varying, short waveform with a finite bandwidth (e.g., that shown in the first two graphs). Naturally, wavelets are more effective than the sinusoids of Fourier analysis for matching and reconstructing signal features. In wavelet transformation and inversion, all transient or periodic data features (as in compressor-inlet pressures) can be detected and reconstructed by stretching or contracting a single wavelet to generate the matching building blocks. Consequently, wavelet analysis provides many flexible and effective ways to reduce noise and extract signals in ways that surpass classical techniques, making it very attractive for data analysis, modeling, and active control of stall and surge in high-speed turbojet compressors. Therefore, fast and practical wavelet methods are being developed in-house at the NASA Lewis Research Center to assist in these tasks. This includes establishing user-friendly links between some fundamental wavelet analysis ideas and the classical theories (or practices) of system identification, data analysis, and processing.
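The noise-reduction idea described above can be sketched in a few lines. This is an illustrative toy, not the NASA Lewis in-house method: it uses a single-level orthonormal Haar transform and soft thresholding on a hypothetical noisy sinusoid, with a universal-style threshold assuming the noise standard deviation is known.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform: smooth + detail bands."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (smooth)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
    return s, d

def haar_idwt(s, d):
    """Exact inverse of haar_dwt."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def soft_threshold(d, t):
    """Shrink coefficients toward zero; small (noise-dominated) ones vanish."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

rng = np.random.default_rng(0)
tt = np.linspace(0.0, 1.0, 256, endpoint=False)
clean = np.sin(2 * np.pi * 5 * tt)
noisy = clean + 0.3 * rng.standard_normal(256)

s, d = haar_dwt(noisy)
thresh = 0.3 * np.sqrt(2.0 * np.log(len(d)))  # universal threshold, sigma assumed known
denoised = haar_idwt(s, soft_threshold(d, thresh))
```

Because the smooth sinusoid produces only small detail coefficients while the noise spreads evenly across bands, thresholding the detail band lowers the reconstruction error relative to the noisy input.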
Space-based RF signal classification using adaptive wavelet features
Caffrey, M.; Briles, S.
1995-04-01
RF signals are dispersed in frequency as they propagate through the ionosphere. For wide-band signals, this results in nonlinearly chirped, transient signals in the VHF portion of the spectrum. This ionospheric dispersion provides a means of discriminating wide-band transients from other signals (e.g., continuous-wave carriers, burst communications, chirped-radar signals, etc.). The transient nature of these dispersed signals makes them candidates for wavelet feature selection. Rather than choosing a wavelet ad hoc, we adaptively compute an optimal mother wavelet via a neural network. Gaussian-weighted, linear frequency modulated (GLFM) wavelets are linearly combined by the network to generate our application-specific mother wavelet, which is optimized for its capacity to select features that discriminate between the dispersed signals and clutter (e.g., multiple continuous-wave carriers), not for its ability to represent the dispersed signal. The resulting mother wavelet is then used to extract features for a neural network classifier. The performance of the adaptive wavelet classifier is then compared to that of an FFT-based neural network classifier.
Wavelet analysis as a nonstationary plasma fluctuation diagnostic tool
Santoso, S.; Powers, E.J.; Ouroua, A.; Heard, J.W.; Bengtson, R.D.
1996-12-31
Analysis of nonstationary plasma fluctuation data has been a long-time challenge for the plasma diagnostic community. For this reason, in this paper the authors present and apply wavelet transforms as a new diagnostic tool to analyze nonstationary plasma fluctuation data. Unlike the Fourier transform, which represents a given signal globally without temporal resolution, the wavelet transform provides a local representation of the given signal in the time-scale domain. The fundamental concepts and multiresolution properties of wavelet transforms, along with a brief comparison with the short-time Fourier transform, are presented in this paper. The selection of a prototype wavelet, or mother wavelet, is also discussed. The digital implementation of wavelet spectral analysis, which includes time-scale power spectra and scale power spectra, is described. The efficacy of the wavelet approach is demonstrated by analyzing transient broadband electrostatic potential fluctuations inside the inversion radius of sawtoothing TEXT-U plasmas during electron cyclotron resonance heating. The potential signals are collected using a 2 MeV heavy ion beam probe.
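A time-scale power spectrum of the kind described above can be sketched with a direct-convolution Morlet scalogram. This is a minimal illustration on a hypothetical two-tone signal, not the authors' diagnostic code: the frequency of the test signal switches halfway through, and the dominant scale tracks the change, which a global Fourier spectrum cannot localize in time.

```python
import numpy as np

def morlet(scale, w0=6.0):
    """Real Morlet wavelet sampled at unit rate, 1/sqrt(scale) normalized."""
    n = int(10 * scale)
    t = (np.arange(n) - n / 2.0) / scale
    return np.cos(w0 * t) * np.exp(-t ** 2 / 2.0) / np.sqrt(scale)

def scalogram(x, scales):
    """Time-scale power |W(a,b)|^2 via direct convolution at each scale."""
    return np.array([np.convolve(x, morlet(s), mode="same") ** 2
                     for s in scales])

fs = 100.0
t = np.arange(0.0, 4.0, 1.0 / fs)
# Nonstationary test signal: 5 Hz for the first 2 s, 20 Hz afterwards.
x = np.where(t < 2.0, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))
scales = np.arange(2, 40)
P = scalogram(x, scales)

# Dominant scale in an early vs. a late time window.
scale_early = scales[np.argmax(P[:, 50:150].mean(axis=1))]
scale_late = scales[np.argmax(P[:, 250:350].mean(axis=1))]
```

The lower-frequency early segment peaks at a larger scale than the higher-frequency late segment, which is exactly the time-scale localization the abstract contrasts with the Fourier transform.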
Application of wavelet transforms to reservoir data analysis and scaling
Panda, M.N.; Mosher, C.; Chopra, A.K.
1996-12-31
General characterization of physical systems uses two aspects of data analysis methods: decomposition of empirical data to determine model parameters and reconstruction of the image using these characteristic parameters. Spectral methods, involving a frequency based representation of data, usually assume stationarity. These methods, therefore, extract only the average information and hence are not suitable for analyzing data with isolated or deterministic discontinuities, such as faults or fractures in reservoir rocks or image edges in computer vision. Wavelet transforms provide a multiresolution framework for data representation. They are a family of orthogonal basis functions that separate a function or a signal into distinct frequency packets that are localized in the time domain. Thus, wavelets are well suited for analyzing non-stationary data. In other words, a projection of a function or a discrete data set onto a time-frequency space using wavelets shows how the function behaves at different scales of measurement. Because wavelets have compact support, it is easy to apply this transform to large data sets with minimal computations. We apply the wavelet transforms to one-dimensional and two-dimensional permeability data to determine the locations of layer boundaries and other discontinuities. By binning in the time-frequency plane with wavelet packets, permeability structures of arbitrary size are analyzed. We also apply orthogonal wavelets for scaling up of spatially correlated heterogeneous permeability fields.
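The boundary-detection use of wavelets described above can be sketched with a first-level Haar detail band. The permeability values below are hypothetical: a sharp layer boundary in a synthetic 1-D log produces one dominant detail coefficient, localizing the discontinuity that a stationary spectral method would smear across all frequencies.

```python
import numpy as np

# Synthetic permeability log (values in millidarcy, hypothetical):
# 63 samples of a 100 mD layer followed by 65 samples of a 450 mD layer.
rng = np.random.default_rng(1)
perm = np.concatenate([np.full(63, 100.0), np.full(65, 450.0)])
perm += rng.normal(0.0, 5.0, perm.size)   # measurement noise

# First-level Haar detail: large where adjacent samples jump.
detail = (perm[0::2] - perm[1::2]) / np.sqrt(2.0)
boundary = 2 * int(np.argmax(np.abs(detail)))   # map back to sample index
```

Because wavelets have compact support, the jump of ~350 mD shows up in a single coefficient near sample 63, while the noise (std 5 mD) stays small everywhere else.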
On-Line Loss of Control Detection Using Wavelets
NASA Technical Reports Server (NTRS)
Brenner, Martin J. (Technical Monitor); Thompson, Peter M.; Klyde, David H.; Bachelder, Edward N.; Rosenthal, Theodore J.
2005-01-01
Wavelet transforms are used for on-line detection of aircraft loss of control. Wavelet transforms are compared with Fourier transform methods and shown to more rapidly detect changes in the vehicle dynamics. This faster response is due to a time window that decreases in length as the frequency increases. New wavelets are defined that further decrease the detection time by skewing the shape of the envelope. The wavelets are used for power spectrum and transfer function estimation. Smoothing is used to trade off the variance of the estimate against detection time. Wavelets are also used as a front end to the eigensystem reconstruction algorithm. Stability metrics are estimated from the frequency response and models, and it is these metrics that are used for loss of control detection. A Matlab toolbox was developed for post-processing simulation and flight data using the wavelet analysis methods. A subset of these methods was implemented in real time and named the Loss of Control Analysis Tool Set, or LOCATS. A manual control experiment was conducted using a hardware-in-the-loop simulator for a large transport aircraft, in which the real time performance of LOCATS was demonstrated. The next step is to use these wavelet analysis tools for flight test support.
Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data
NASA Technical Reports Server (NTRS)
Bose, Tamal
2000-01-01
A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.
Uma Vetri Selvi, G; Nadarajan, R
2015-12-01
Compression techniques are vital for efficient storage and fast transfer of medical image data. The existing compression techniques take a significant amount of time for encoding and decoding, and hence the purpose of compression is not fully satisfied. In this paper a rapid 4-D lossy compression method constructed using data rearrangement, wavelet-based contourlet transformation and a modified binary array technique has been proposed for functional magnetic resonance imaging (fMRI) images. In the proposed method, the image slices of fMRI data are rearranged so that the redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT). In WBCT, the high frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. The relationship between the coefficients has been changed in WBCT as it has more directions. The differences in parent–child relationships are handled by a repositioning algorithm. The repositioned coefficients are then subjected to quantization. The quantized coefficients are further compressed by a modified binary array technique in which the most frequently occurring value of a sequence is coded only once. The proposed method was tested on fMRI images; the results indicated that the processing time of the proposed method is less than that of the existing wavelet-based set partitioning in hierarchical trees (SPIHT) and set partitioning embedded block coder (SPECK) compression schemes [1]. The proposed method could also yield better compression performance compared to the wavelet-based SPECK coder. The objective results showed that the proposed method achieves a good compression ratio while maintaining a peak signal-to-noise ratio (PSNR) above 70 for all the experimented sequences. The SSIM value is equal to 1 and the value of CC is greater than 0.9 for all
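The quantization-then-entropy-coding step that makes transform compression work can be sketched with a toy run-length coder. This is a hypothetical stand-in for the paper's modified binary array technique (which is not reproduced here): coarsely quantizing peaky, Laplacian-like wavelet coefficients collapses most of them to zero, and the resulting zero runs encode in far fewer symbols.

```python
import numpy as np

def run_length_encode(seq):
    """Encode a sequence as (value, run length) pairs."""
    runs, i = [], 0
    while i < len(seq):
        j = i
        while j < len(seq) and seq[j] == seq[i]:
            j += 1
        runs.append((seq[i], j - i))
        i = j
    return runs

# Detail-band wavelet coefficients are typically sparse/peaky;
# a Laplacian draw is a common model for them.
rng = np.random.default_rng(2)
coeffs = rng.laplace(0.0, 1.0, 512)
quantized = np.round(coeffs / 4.0).astype(int)   # coarse uniform quantizer
runs = run_length_encode(quantized.tolist())
```

With step size 4, most coefficients land in the zero bin, so the run list is much shorter than the coefficient array; this is the "long runs of zero" effect the head abstract attributes to the wavelet transform's sparse representation.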
CVS Filtering of 3D Turbulent Mixing Layers Using Orthogonal Wavelets
NASA Technical Reports Server (NTRS)
Schneider, Kai; Farge, Marie; Pellegrino, Giulio; Rogers, Michael
2000-01-01
Coherent Vortex Simulation (CVS) filtering has been applied to Direct Numerical Simulation (DNS) data of forced and unforced time-developing turbulent mixing layers. CVS filtering splits the turbulent flow into two orthogonal parts, one corresponding to coherent vortices and the other to incoherent background flow. We have shown that the coherent vortices can be represented by a few wavelet modes and that these modes are sufficient to reproduce the vorticity probability distribution function (PDF) and the energy spectrum over the entire inertial range. The remaining incoherent background flow is homogeneous, has small amplitude, and is uncorrelated. These results are compared with those obtained for the same compression rate using large eddy simulation (LES) filtering. In contrast to the incoherent background flow of CVS filtering, the LES subgrid scales have a much larger amplitude and are correlated, which makes their statistical modeling more difficult.
Remacha, Clément; Coëtmellec, Sébastien; Brunel, Marc; Lebrun, Denis
2013-02-01
Wavelet analysis provides an efficient tool for numerous signal processing problems and has been implemented in optical processing techniques, such as in-line holography. This paper proposes an improvement of this tool for the case of an astigmatic elliptical Gaussian (AEG) beam. We show that this mathematical operator allows reconstructing an image of a spherical particle without compression of the reconstructed image, which increases the accuracy of the 3D location of particles and of their size measurement. To validate the performance of this operator we studied the diffraction pattern produced by a particle illuminated by an AEG beam. This study used mutual intensity propagation, and the particle was modeled as a sum of chirped Gaussian functions. The proposed technique was applied and the experimental results are presented. PMID:23385926
NASA Technical Reports Server (NTRS)
Barnsley, Michael F.; Sloan, Alan D.
1989-01-01
Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
A wavelet approach to binary blackholes with asynchronous multitasking
NASA Astrophysics Data System (ADS)
Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo
2016-03-01
Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice in combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.
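The adaptivity idea behind interpolating wavelets can be sketched in one dimension. This is a toy illustration, not the authors' solver: the "wavelet coefficient" at each odd grid point is the error of predicting it from the coarser grid (linear interpolation here; the paper uses higher-order iterated interpolation), so thresholding the coefficients concentrates collocation points near the sharp feature of a hypothetical tanh profile.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 257)
u = np.tanh(30.0 * x)                        # steep feature near x = 0

coarse = u[0::2]                             # even (coarse-grid) samples
fine = u[1::2]                               # odd (fine-grid) samples
pred = 0.5 * (coarse[:-1] + coarse[1:])      # predict odd points from coarse grid
coeff = fine - pred                          # interpolating-wavelet detail

keep = np.abs(coeff) > 1e-3                  # adapted collocation point set
kept_x = x[1::2][keep]                       # surviving fine-grid locations
```

Only a small fraction of fine-grid points survive the threshold, and all of them cluster around the steep region, mirroring how the coefficients "place collocation points that naturally adapt to features of the solution."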
Medical image fusion by wavelet transform modulus maxima
NASA Astrophysics Data System (ADS)
Guihong, Qu; Dali, Zhang; Pingfan, Yan
2001-08-01
Medical image fusion has been used to derive useful information from multimodality medical image data. In this research, we propose a novel method for multimodality medical image fusion. Using the wavelet transform, we achieved a fusion scheme. A fusion rule is proposed and used for calculating the wavelet transform modulus maxima of input images at different bandwidths and levels. To evaluate the fusion result, a metric based on mutual information (MI) is presented for measuring the fusion effect. The performances of two other wavelet-transform-based image fusion methods are briefly described for comparison. The experimental results demonstrate the effectiveness of the fusion scheme.
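A minimal wavelet-domain fusion sketch, under simplifying assumptions: one-level 2-D Haar transform and a choose-max-modulus rule applied per subband. The paper's actual rule operates on transform modulus maxima across bandwidths and levels; this toy on hypothetical images only illustrates the transform-select-invert pipeline.

```python
import numpy as np

def haar2(img):
    """One-level 2-D orthonormal Haar transform: (LL, LH, HL, HH)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def fuse(im1, im2):
    """Keep the larger-modulus coefficient from each input, then invert."""
    fused = [np.where(np.abs(p) >= np.abs(q), p, q)
             for p, q in zip(haar2(im1), haar2(im2))]
    return ihaar2(*fused)

rng = np.random.default_rng(5)
img_a = rng.random((8, 8))     # stand-ins for two registered modalities
img_b = rng.random((8, 8))
merged = fuse(img_a, img_b)
```

Selecting the larger-modulus coefficient favors whichever modality carries the stronger local edge or texture at each scale and orientation, which is the intuition behind modulus-maxima fusion.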
EEG seizure identification by using optimized wavelet decomposition.
Pinzon-Morales, R D; Orozco-Gutierrez, A; Castellanos-Dominguez, G
2011-01-01
A methodology for wavelet synthesis based on the lifting scheme and genetic algorithms is presented. Wavelet synthesis is often posed as the problem of properly choosing a wavelet function from an existing library, which may not be specially designed for the application at hand. The task under consideration is the identification of epileptic seizures in electroencephalogram recordings. Although basic classifiers are employed, the results showed that the proposed methodology is successful in the considered study, achieving classification rates similar to those reported in the literature. PMID:22254892
Wavelet analysis and scaling properties of time series
NASA Astrophysics Data System (ADS)
Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.
2005-10-01
We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.
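The trend-capturing step underlying this detrending approach can be sketched with a multilevel Haar transform: reconstructing from the approximation band alone yields a local trend, and subtracting it leaves the fluctuations to be analyzed for scaling. This is an illustrative Haar sketch on a hypothetical trend-plus-oscillation series; the paper uses higher-order Daubechies wavelets for smoother trends.

```python
import numpy as np

def haar_step(x):
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return s, d

def haar_inv(s, d):
    out = np.empty(2 * len(s))
    out[0::2] = (s + d) / np.sqrt(2.0)
    out[1::2] = (s - d) / np.sqrt(2.0)
    return out

def wavelet_trend(x, levels):
    """Discard all detail bands down to `levels`; keep only the trend."""
    details, s = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        s, d = haar_step(s)
        details.append(np.zeros_like(d))     # zeroed details = trend only
    for d in reversed(details):
        s = haar_inv(s, d)
    return s

t = np.linspace(0.0, 1.0, 256)
series = 3.0 * t + np.sin(2 * np.pi * 20 * t)   # linear trend + oscillation
trend = wavelet_trend(series, 4)                # window of 2^4 = 16 samples
detrended = series - trend
```

The level parameter sets the window size of the extracted trend (16 samples here), which is the "variable window sizes" property the abstract exploits: after subtraction, the residual series has essentially no net slope.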
On the Daubechies-based wavelet differentiation matrix
NASA Technical Reports Server (NTRS)
Jameson, Leland
1993-01-01
The differentiation matrix for a Daubechies-based wavelet basis is constructed and superconvergence is proven. That is, it is proven that, under the assumption of periodic boundary conditions, the differentiation matrix is accurate to order 2M, even though the approximation subspace can represent exactly only polynomials up to degree M-1, where M is the number of vanishing moments of the associated wavelet. It is illustrated that Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small-scale structure is present.
Multiresolution local tomography in dental radiology using wavelets.
Niinimäki, K; Siltanen, S; Kolehmainen, V
2007-01-01
A Bayesian multiresolution model for local tomography in dental radiology is proposed. In this model a wavelet basis is used to represent dental structures and the prior information is modeled in terms of a Besov norm penalty. The proposed wavelet-based multiresolution method is used to reduce the number of unknowns in the reconstruction problem by abandoning fine-scale wavelets outside the region of interest (ROI). This multiresolution model allows a significant reduction in the number of unknowns without loss of reconstruction accuracy inside the ROI. The feasibility of the proposed method is tested with two-dimensional (2D) examples using simulated and experimental projection data from dental specimens. PMID:18002604
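The "abandon fine-scale wavelets outside the ROI" idea can be sketched in one dimension: zero the detail coefficients outside a region of interest and reconstruct. This toy (a single-level Haar transform on a hypothetical random-walk profile, not tomographic data) shows that reconstruction inside the ROI stays exact while the unknown count drops.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(256).cumsum()        # rough synthetic 1-D profile

s = (x[0::2] + x[1::2]) / np.sqrt(2.0)       # approximation band (always kept)
d = (x[0::2] - x[1::2]) / np.sqrt(2.0)       # fine-scale detail band

roi = slice(64, 96)                          # detail indices kept -> samples 128..191
d_roi = np.zeros_like(d)
d_roi[roi] = d[roi]                          # discard fine scales outside ROI

rec = np.empty_like(x)
rec[0::2] = (s + d_roi) / np.sqrt(2.0)
rec[1::2] = (s - d_roi) / np.sqrt(2.0)

err_in = np.abs(rec[128:192] - x[128:192]).max()   # ROI error
err_out = np.abs(rec - x).max()                    # worst-case global error
```

Half of the detail unknowns are simply dropped, yet the ROI reconstructs to machine precision; all the error is pushed outside the region of interest, mirroring the paper's claim of no accuracy loss inside the ROI.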
Using Wavelets to Search Swift GRBs for QPOs
Morris, David; Battista, Fred; Dhuga, Kalvir; MacLachlan, Glen
2010-10-15
Motivated by discussion of possible quasi-periodic oscillations in GRB090709A, we examine the wavelet transform of GRB090709A for evidence of the purported QPO. Our approach accounts for both white noise and red noise, with the red noise based on the auto-correlation function of the GRB090709A lightcurve. Encouraged by the robustness of the result, we repeat the analysis on all Swift-triggered GRBs, treating the red noise properties of each GRB individually. We find that time-integrated wavelet peak power agrees well with peak power derived from FFT techniques but the wavelet technique suggests the power is often localized early in the burst.
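The consistency claim, that time-integrated wavelet peak power agrees with FFT-derived peak power, can be checked on a toy signal. This is a hypothetical sinusoid with white noise only (real GRB light curves need the red-noise treatment described above): both the FFT spectrum and the time-summed Morlet scalogram should peak at the same frequency.

```python
import numpy as np

fs = 100.0
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(4)
x = np.sin(2 * np.pi * 8 * t) + 0.1 * rng.standard_normal(len(t))

# FFT peak frequency (skip the DC bin).
spec = np.abs(np.fft.rfft(x))
f_fft = np.fft.rfftfreq(len(x), 1.0 / fs)[np.argmax(spec[1:]) + 1]

# Time-integrated Morlet scalogram power on a grid of trial frequencies.
w0 = 6.0
freqs = np.arange(4.0, 16.0, 0.5)
power = []
for f in freqs:
    s = w0 * fs / (2 * np.pi * f)            # scale matched to frequency f
    n = int(10 * s)
    tt = (np.arange(n) - n / 2.0) / s
    psi = np.exp(1j * w0 * tt) * np.exp(-tt ** 2 / 2.0) / np.sqrt(s)
    power.append(np.sum(np.abs(np.convolve(x, psi, mode="same")) ** 2))
f_wav = freqs[int(np.argmax(power))]
```

On this stationary test signal the two estimates coincide at 8 Hz; the wavelet version additionally localizes the power in time, which is what lets it show power "localized early in the burst."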
EEG Artifact Removal Using a Wavelet Neural Network
NASA Technical Reports Server (NTRS)
Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom
2011-01-01
In this paper we developed a wavelet neural network (WNN) algorithm for Electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks and the time/frequency properties of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.
Template-free wavelet-based detection of local symmetries.
Puspoki, Zsuzsanna; Unser, Michael
2015-10-01
Our goal is to detect and group different kinds of local symmetries in images in a scale- and rotation-invariant way. We propose an efficient wavelet-based method to determine the order of local symmetry at each location. Our algorithm relies on circular harmonic wavelets which are used to generate steerable wavelet channels corresponding to different symmetry orders. To give a measure of local symmetry, we use the F-test to examine the distribution of the energy across different channels. We provide experimental results on synthetic images, biological micrographs, and electron-microscopy images to demonstrate the performance of the algorithm. PMID:26011883
Li, Jingsong; Yu, Benli; Fischer, Horst
2015-04-01
This paper presents a novel methodology based on the discrete wavelet transform (DWT) and the choice of optimal wavelet pairs to adaptively process tunable diode laser absorption spectroscopy (TDLAS) spectra for quantitative analysis, such as molecular spectroscopy and trace gas detection. The proposed methodology aims to construct an optimal calibration model for a TDLAS spectrum, regardless of its background structural characteristics, thus facilitating the application of TDLAS as a powerful tool for analytical chemistry. The performance of the proposed method is verified by analysis of both synthetic and observed signals, characterized by different noise levels and baseline drift. Both the fitting precision and the signal-to-noise ratio have been improved significantly using the proposed method. PMID:25741689
NASA Astrophysics Data System (ADS)
Corona, Enrique; Nutter, Brian; Mitra, Sunanda; Guo, Jiangling; Karp, Tanja
2008-03-01
Efficient retrieval of high quality Regions-Of-Interest (ROI) from high resolution medical images is essential for reliable interpretation and accurate diagnosis. Random access to high quality ROI from codestreams is becoming an essential feature in many still image compression applications, particularly in viewing diseased areas from large medical images. This feature is easier to implement in block-based codecs because of the inherent spatial independence of the code blocks. This independence implies that the decoding order of the blocks is unimportant as long as the position of each is properly identified. In contrast, wavelet-tree based codecs naturally use some interdependency that exploits the decaying spectrum model of the wavelet coefficients. Thus one must keep track of the decoding order from level to level with such codecs. We have developed an innovative multi-rate image subband coding scheme using "Backward Coding of Wavelet Trees (BCWT)" which is fast, memory efficient, and resolution scalable. It offers far lower complexity than many other existing codecs, including both wavelet-tree and block-based algorithms. The ROI feature in BCWT is implemented through a transcoder stage that generates a new BCWT codestream containing only the information associated with the user-defined ROI. This paper presents an efficient technique that locates a particular ROI within the BCWT coded domain and decodes it back to the spatial domain. This technique allows better access and proper identification of pathologies in high resolution images, since only a small fraction of the codestream is required to be transmitted and analyzed.
Addison, Paul S
2015-08-01
A novel method of identifying stable phase coupling behavior of two signals within the wavelet transform time-frequency plane is presented. The technique employs the cross-wavelet transform to provide a map of phase coupling followed by synchrosqueezing to collect the stable phase regime information. The resulting synchrosqueezed cross-wavelet transform method (Synchro-CrWT) is illustrated using a synthetic signal and then applied to the analysis of the relationship between biosignals used in the analysis of cerebral autoregulation function. PMID:26737649
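The cross-wavelet phase map at the heart of this method can be sketched on toy signals: the product W_x · conj(W_y) at a matched scale has a stable phase angle when two signals are phase-locked. The example below uses two hypothetical phase-shifted cosines and a complex Morlet wavelet; the synchrosqueezing step that reassigns and collects the stable phase regime is omitted.

```python
import numpy as np

def cmorlet(scale, w0=6.0):
    """Complex Morlet wavelet sampled at unit rate."""
    n = int(10 * scale)
    t = (np.arange(n) - n / 2.0) / scale
    return np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2.0) / np.sqrt(scale)

fs = 100.0
t = np.arange(0.0, 4.0, 1.0 / fs)
x = np.cos(2 * np.pi * 5 * t)
y = np.cos(2 * np.pi * 5 * t - np.pi / 4)    # pi/4 phase lag, stand-in biosignals

scale = 6.0 * fs / (2 * np.pi * 5.0)         # scale matched to 5 Hz
Wx = np.convolve(x, np.conj(cmorlet(scale)), mode="same")
Wy = np.convolve(y, np.conj(cmorlet(scale)), mode="same")

cross = Wx * np.conj(Wy)                     # cross-wavelet at this scale
dphi = np.angle(cross[150:250])              # interior samples, away from edges
```

Away from the edge-effect regions, the phase difference is constant at magnitude pi/4, which is the "stable phase regime" the synchrosqueezed cross-wavelet transform is designed to collect across the full time-frequency plane.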