Science.gov

Sample records for non-perfect wavelet compression

  1. Perceptually Lossless Wavelet Compression

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John

    1996-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
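
    The level-to-frequency relation above can be made concrete; a minimal sketch (the function name is illustrative):

```python
def wavelet_spatial_frequency(r, L):
    """Spatial frequency (cycles/degree) of a level-L wavelet on a
    display with visual resolution r in pixels/degree: f = r * 2**-L."""
    return r * 2.0 ** (-L)

# Example: on a 32 pixels/degree display, levels 1..4 correspond to
# 16, 8, 4, and 2 cycles/degree.
frequencies = [wavelet_spatial_frequency(32, L) for L in range(1, 5)]
```

    Consistent with the abstract, amplitude thresholds would then be looked up as a function of this frequency, orientation, and color channel.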

  2. Fast fractal image compression with triangulation wavelets

    NASA Astrophysics Data System (ADS)

    Hebert, D. J.; Soundararajan, Ezekiel

    1998-10-01

    We address the problem of improving the performance of wavelet-based fractal image compression by applying efficient triangulation methods. We construct iterated function systems (IFS) in the tradition of Barnsley and Jacquin, using non-uniform triangular range and domain blocks instead of uniform rectangular ones. We search for matching domain blocks in the manner of Zhang and Chen, performing a fast wavelet transform on the blocks and eliminating low-resolution mismatches to gain speed. We obtain further improvements from the efficiencies of binary triangulations (including the elimination of affine and symmetry calculations and reduced parameter storage), and by pruning the binary tree before construction of the IFS. Our wavelets are triangular Haar wavelets and 'second generation' interpolation wavelets as suggested by Sweldens' recent work.

  3. Compression of echocardiographic scan line data using wavelet packet transform

    NASA Technical Reports Server (NTRS)

    Hang, X.; Greenberg, N. L.; Qin, J.; Thomas, J. D.

    2001-01-01

    An efficient compression strategy is indispensable for digital echocardiography. Previous work has suggested improved results utilizing wavelet transforms in the compression of 2D echocardiographic images. Set partitioning in hierarchical trees (SPIHT) was modified to compress echocardiographic scan line data based on the wavelet packet transform. Compression ratios of at least 94:1 were achieved while preserving image quality.

  4. Compressive sensing exploiting wavelet-domain dependencies for ECG compression

    NASA Astrophysics Data System (ADS)

    Polania, Luisa F.; Carrillo, Rafael E.; Blanco-Velasco, Manuel; Barner, Kenneth E.

    2012-06-01

    Compressive sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist sampling of sparse signals. Extensive previous work has exploited the sparse representation of ECG signals in compression applications. In this paper, we propose the use of wavelet domain dependencies to further reduce the number of samples in compressive sensing-based ECG compression while decreasing the computational complexity. R wave events manifest themselves as chains of large coefficients propagating across scales to form a connected subtree of the wavelet coefficient tree. We show that the incorporation of this connectedness as additional prior information into a modified version of the CoSaMP algorithm can significantly reduce the required number of samples to achieve good quality in the reconstruction. This approach also allows more control over the ECG signal reconstruction, in particular, the QRS complex, which is typically distorted when prior information is not included in the recovery. The compression algorithm was tested upon records selected from the MIT-BIH arrhythmia database. Simulation results show that the proposed algorithm leads to high compression ratios associated with low distortion levels relative to state-of-the-art compression algorithms.
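
    A minimal sketch of the connectedness prior described above, assuming the coefficient tree is given as a parent map (names and structure are illustrative; this is the support-pruning idea, not the modified CoSaMP algorithm itself):

```python
def prune_to_connected_subtree(support, parent):
    """Keep only the indices whose entire ancestor chain is also in
    the support, so the retained set forms a rooted connected subtree.
    parent[i] is the parent index of node i (None for the root)."""
    support = set(support)
    kept = set()
    for i in support:
        chain, ok, j = [], True, i
        while j is not None:
            if j not in support:
                ok = False
                break
            chain.append(j)
            j = parent[j]
        if ok:
            kept.update(chain)
    return kept

# Binary tree over 7 nodes: parent of node i is (i - 1) // 2, root = 0.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
print(sorted(prune_to_connected_subtree({0, 1, 4, 6}, parent)))  # → [0, 1, 4]
```

    Node 6 is dropped because its parent (node 2) is not in the support, so keeping it would break the connected-subtree structure that R-wave chains induce across scales.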

  5. Wavelet compression techniques for hyperspectral data

    NASA Technical Reports Server (NTRS)

    Evans, Bruce; Ringer, Brian; Yeates, Mathew

    1994-01-01

    Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. 
In the first two cases, a wavelet

  6. Improved Compression of Wavelet-Transformed Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Klimesh, Matthew

    2005-01-01

    A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. 
The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
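
    The parsing and coding steps described above can be sketched as follows. This is a non-adaptive illustration with fixed code parameters (the article selects them adaptively), an order-0 exponential-Golomb code, a power-of-two Golomb parameter (the Rice special case), and no flush symbol for a trailing all-zero run:

```python
def unary(q):
    """q ones followed by a terminating zero."""
    return "1" * q + "0"

def golomb(n, m):
    """Golomb code of nonnegative n with parameter m. m is restricted
    to a power of two here, so the remainder is plain binary."""
    q, r = divmod(n, m)
    bits = m.bit_length() - 1
    if bits == 0:          # m == 1: pure unary
        return unary(q)
    return unary(q) + format(r, "0%db" % bits)

def exp_golomb(n):
    """Order-0 exponential-Golomb code of nonnegative n."""
    x = n + 1
    return "0" * (x.bit_length() - 1) + format(x, "b")

def encode(seq, m=4):
    """Parse seq into (zero-run, nonzero) strings; exp-Golomb the run
    lengths, Golomb-code the magnitudes, and append a sign bit."""
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
            continue
        sign = "1" if v < 0 else "0"
        out.append(exp_golomb(run) + golomb(abs(v) - 1, m) + sign)
        run = 0
    return "".join(out)

print(encode([0, 0, 3, 0, -1]))  # → 01101000100001
```

    Each parsed string costs few bits when zeros dominate, which is why this beats coding every symbol with one variable-length code on sparse quantized subbands.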

  7. Embedded wavelet packet transform technique for texture compression

    NASA Astrophysics Data System (ADS)

    Li, Jin; Cheng, Po-Yuen; Kuo, C.-C. Jay

    1995-09-01

    A highly efficient texture compression scheme is proposed in this research. With this scheme, energy compaction of texture images is first achieved by the wavelet packet transform, and an embedding approach is then adopted for the coding of the wavelet packet transform coefficients. By comparing the proposed algorithm with the JPEG standard, FBI wavelet/scalar quantization standard and the EZW scheme with extensive experimental results, we observe a significant improvement in the rate-distortion performance and visual quality.

  8. Wavelet-based compression of multichannel climate data

    NASA Astrophysics Data System (ADS)

    Sharifahmadian, Ershad; Choi, Yoonsuk; Latifi, Shahram; Dascalu, Sergiu; Harris, Frederick C.

    2014-05-01

    To simultaneously compress multichannel climate data, the Wavelet Subbands Arranging Technique (WSAT) is studied. The proposed technique is based on the wavelet transform and has been designed to improve the transmission of voluminous climate data. The WSAT method significantly reduces the number of transmitted or stored bits in a bit stream while preserving the required quality. In the proposed technique, arranging the wavelet subbands of the input channels yields more efficient compression of multichannel climate data by building appropriate parent-offspring relations among wavelet coefficients. To test and evaluate the proposed technique, data from the Nevada climate change database are utilized. Based on the results, the proposed technique is an appropriate choice for compressing multichannel climate data, achieving high compression ratios at low error.

  9. Coresident sensor fusion and compression using the wavelet transform

    SciTech Connect

    Yocky, D.A.

    1996-03-11

    Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image. This can be done in either an ad hoc or a model-based approach. We compare results from commercial "fusion" software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.
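
    One common ad hoc wavelet fusion rule, assumed here purely for illustration (the abstract does not specify which rule was used), keeps the larger-magnitude coefficient at each position of the two decompositions:

```python
def fuse_coefficients(a, b):
    """At each wavelet-coefficient position, keep whichever sensor's
    coefficient has the larger magnitude (max-abs selection rule)."""
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]

fused = fuse_coefficients([3, -5, 0.5], [-1, 2, 4])  # → [3, -5, 4]
```

    The fused coefficient set is then inverse-transformed to produce the fused image, and, being a single wavelet decomposition, it can be quantized and entropy-coded directly for compression.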

  10. Context Modeler for Wavelet Compression of Spectral Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.

  11. The effects of wavelet compression on Digital Elevation Models (DEMs)

    USGS Publications Warehouse

    Oimoen, M.J.

    2004-01-01

    This paper investigates the effects of lossy compression on floating-point digital elevation models using the discrete wavelet transform. The compression of elevation data poses a different set of problems and concerns than does the compression of images. Most notably, the usefulness of DEMs depends largely on the quality of their derivatives, such as slope and aspect. Three areas extracted from the U.S. Geological Survey's National Elevation Dataset were transformed to the wavelet domain using the third-order filters of the Daubechies family (DAUB6), and were made sparse by setting the smallest 95 percent of the wavelet coefficients to zero. The resulting raster is compressible to a corresponding degree. The effects of the nulled coefficients on the reconstructed DEM are noted as residuals in elevation, derived slope and aspect, and delineation of drainage basins and streamlines. A simple masking technique is also presented that maintains the integrity and flatness of water bodies in the reconstructed DEM.
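
    The sparsification step, zeroing the smallest 95 percent of the wavelet coefficients by magnitude, can be sketched as (ties at the threshold may retain a few extra coefficients):

```python
def sparsify(coeffs, keep_fraction=0.05):
    """Zero all but the largest-magnitude keep_fraction of the
    coefficients (the paper zeroes the smallest 95 percent)."""
    n_keep = max(1, int(len(coeffs) * keep_fraction))
    threshold = sorted(abs(c) for c in coeffs)[-n_keep]
    return [c if abs(c) >= threshold else 0 for c in coeffs]

sparse = sparsify(list(range(1, 21)))  # keeps only the coefficient 20
```

    The long runs of zeros this produces are what make the transformed raster compressible; the residuals studied in the paper come from inverting the transform on this truncated coefficient set.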

  12. Adaptive video compressed sampling in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Dai, Hui-dong; Gu, Guo-hua; He, Wei-ji; Chen, Qian; Mao, Tian-yi

    2016-07-01

    In this work, we propose a multiscale video acquisition framework called adaptive video compressed sampling (AVCS) that involves sparse sampling and motion estimation in the wavelet domain. Implementing a combination of a binary DMD and a single-pixel detector, AVCS acquires successively finer resolution sparse wavelet representations in moving regions directly based on extended wavelet trees, and alternately uses these representations to estimate the motion in the wavelet domain. Then, we can remove the spatial and temporal redundancies and provide a method to reconstruct video sequences from compressed measurements in real time. In addition, the proposed method allows adaptive control over the reconstructed video quality. The numerical simulation and experimental results indicate that AVCS performs better than the conventional CS-based methods at the same sampling rate even under the influence of noise, and the reconstruction time and measurements required can be significantly reduced.

  13. Image compression using wavelet transform and multiresolution decomposition.

    PubMed

    Averbuch, A; Lazar, D; Israeli, M

    1996-01-01

    Schemes for image compression of black-and-white images based on the wavelet transform are presented. The multiresolution nature of the discrete wavelet transform is shown to be a powerful tool for representing images decomposed along the vertical and horizontal directions using the pyramidal multiresolution scheme. The wavelet transform decomposes the image into a set of subimages called shapes with different resolutions corresponding to different frequency bands. Hence, different bit allocations are tested, assuming that details at high resolution and diagonal directions are less visible to the human eye. The resulting coefficients are vector quantized (VQ) using the LBG algorithm. By using an error correction method that approximates the reconstructed coefficients' quantization error, we minimize distortion for a given compression rate at low computational cost. Several compression techniques are tested. In the first experiment, several 512x512 images are trained together and common code tables are created. Using these tables, the black-and-white images in the training sequence achieve compression ratios of 60-65 and PSNRs of 30-33 dB. To investigate the compression of images outside the training set, many 480x480 images of uncalibrated faces are trained together to yield global code tables. Images of faces outside the training set are compressed and reconstructed using the resulting tables. The compression ratio is 40; PSNRs are 30-36 dB. Images from the training set have similar compression values and quality. Finally, another compression method based on the end vector bit allocation is examined.
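
    A minimal sketch of one LBG (Linde-Buzo-Gray) design iteration, with scalars standing in for the coefficient vectors actually quantized (illustrative only, not the paper's implementation):

```python
def lbg_iteration(samples, codebook):
    """One Lloyd iteration of LBG codebook design: assign each training
    sample to its nearest codeword, then move each codeword to the
    centroid of its cell (unchanged if the cell is empty)."""
    cells = {i: [] for i in range(len(codebook))}
    for v in samples:
        nearest = min(range(len(codebook)),
                      key=lambda i: (v - codebook[i]) ** 2)
        cells[nearest].append(v)
    return [sum(c) / len(c) if c else codebook[i]
            for i, c in cells.items()]

data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
cb = [0.0, 1.0]
for _ in range(3):
    cb = lbg_iteration(data, cb)
# codewords converge near the two cluster means, about 0.15 and 0.95
```

    Iterating until the distortion stops decreasing yields the code tables that, in the paper, are shared between training images or generalized to faces outside the training set.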

  14. Wavelet/scalar quantization compression standard for fingerprint images

    SciTech Connect

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
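
    The per-subband quantizer has the general dead-zone uniform scalar form; a sketch with illustrative bin widths (not the bin widths or adaptation rules from the FBI standard):

```python
def deadzone_quantize(x, step, deadzone):
    """Uniform scalar quantizer with a dead zone around zero, the
    general form used per subband in wavelet/scalar quantization."""
    if abs(x) <= deadzone / 2:
        return 0
    sign = 1 if x > 0 else -1
    return sign * (int((abs(x) - deadzone / 2) / step) + 1)
```

    The wide zero bin maps the many near-zero wavelet coefficients to 0, which is what makes the subsequent entropy coding effective; "adaptive" in the abstract refers to choosing the step sizes per subband and per image.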

  15. Low-complexity wavelet filter design for image compression

    NASA Technical Reports Server (NTRS)

    Majani, E.

    1994-01-01

    Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.

  16. Medical image compression by using three-dimensional wavelet transformation.

    PubMed

    Wang, J; Huang, K

    1996-01-01

    This paper proposes a three-dimensional (3-D) medical image compression method for computed tomography (CT) and magnetic resonance (MR) that uses a separable nonuniform 3-D wavelet transform. The separable wavelet transform employs one filter bank within two-dimensional (2-D) slices and then a second filter bank on the slice direction. CT and MR image sets normally have different resolutions within a slice and between slices. The pixel distances within a slice are normally less than 1 mm and the distance between slices can vary from 1 mm to 10 mm. To find the best filter bank in the slice direction, the authors apply various filter banks in the slice direction and compare the compression results. The results from the 12 selected MR and CT image sets at various slice thicknesses show that the Haar transform in the slice direction gives the optimum performance for most image sets, except for a CT image set with 1 mm slice distance. Compared with 2-D wavelet compression, compression ratios of the 3-D method are about 70% higher for CT and 35% higher for MR image sets at a peak signal-to-noise ratio (PSNR) of 50 dB. In general, the smaller the slice distance, the better the 3-D compression performance. PMID:18215935

  17. Application of wavelet packet transform to compressing Raman spectra data

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    The wavelet transform has become established, alongside the Fourier transform, as a data-processing method in analytical fields. Its main applications are de-noising, compression, variable reduction, and signal suppression. Raman spectroscopy (RS) is characterized by frequency shifts that carry molecular information. Every substance has its own characteristic Raman spectrum, from which the structure, components, concentrations, and other properties of a sample can be analyzed easily. RS is a powerful analytical tool for detection and identification, and many RS databases exist, but Raman spectral data require large storage space and long search times. In this paper, the wavelet packet transform is chosen to compress Raman spectra data of some benzene-series compounds. The obtained results show that the energy retained is as high as 99.9% after compression, while the percentage of zeroed coefficients is 87.50%. It is concluded that the wavelet packet transform is of significant value for compressing RS data.
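
    The two figures of merit quoted, energy retained and fraction of zeroed coefficients, can be computed as follows (the coefficient values are hypothetical, for illustration only):

```python
def compression_stats(original, thresholded):
    """Energy retained and fraction of zeroed coefficients after
    thresholding -- the two figures of merit quoted above."""
    energy = sum(c * c for c in thresholded) / sum(c * c for c in original)
    zeros = thresholded.count(0) / len(thresholded)
    return energy, zeros

coeffs = [10.0, -8.0, 0.3, -0.2, 0.1, 0.05, 0.02, 0.01]
kept = [c if abs(c) >= 1.0 else 0 for c in coeffs]
energy, zeros = compression_stats(coeffs, kept)  # energy > 0.999, zeros == 0.75
```

    Because spectral energy concentrates in a few large wavelet packet coefficients, zeroing the many small ones costs almost no energy, which is how 87.50% zeros can coexist with 99.9% energy retention.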

  18. Wavelet-based pavement image compression and noise reduction

    NASA Astrophysics Data System (ADS)

    Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen

    2005-08-01

    For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.

  19. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
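
    The mean-subtraction step can be sketched with nested lists standing in for a 3D subband (planes x rows x columns; names are illustrative):

```python
def subtract_plane_means(subband):
    """For each spatial plane of a spatially low-pass subband, compute
    and subtract its mean so the data passed to the encoder is
    zero-mean; return the means, which the decoder needs to invert
    the step."""
    means = []
    for plane in subband:
        m = sum(sum(row) for row in plane) / sum(len(row) for row in plane)
        means.append(m)
        for row in plane:
            row[:] = [v - m for v in row]
    return means

sb = [[[1.0, 3.0], [5.0, 7.0]]]
means = subtract_plane_means(sb)  # means == [4.0]; plane is now zero-mean
```

    Removing these per-plane offsets addresses point (a) in the abstract: spatial planes of low-pass subbands often sit far from zero, and 2D-oriented coders handle zero-mean data better.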

  20. Solution of Reactive Compressible Flows Using an Adaptive Wavelet Method

    NASA Astrophysics Data System (ADS)

    Zikoski, Zachary; Paolucci, Samuel; Powers, Joseph

    2008-11-01

    This work presents numerical simulations of reactive compressible flow, including detailed multicomponent transport, using an adaptive wavelet algorithm. The algorithm allows for dynamic grid adaptation which enhances our ability to fully resolve all physically relevant scales. The thermodynamic properties, equation of state, and multicomponent transport properties are provided by CHEMKIN and TRANSPORT libraries. Results for viscous detonation in a H2:O2:Ar mixture, and other problems in multiple dimensions, are included.

  1. Improved wavelet packet compression of electrocardiogram data: 1. noise filtering

    NASA Astrophysics Data System (ADS)

    Bradie, Brian D.

    1995-09-01

    The improvement in the performance of a wavelet packet based compression scheme for single lead electrocardiogram (ECG) data, obtained by prefiltering noise from the ECG signals, is investigated. The removal of powerline interference and the attenuation of high-frequency muscle noise are considered. Selected records from the MIT-BIH Arrhythmia Database are used as test signals. After both types of noise artifact were filtered, an average data rate of 167.6 bits per second (corresponding to a compression ratio of 23.62), with an average root mean-square (rms) error of 15.886 μV, was achieved. These figures represent better than a 9% improvement in data rate and a 13.5% reduction in rms error over compressing the unfiltered signals.
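
    The quoted figures are mutually consistent given the MIT-BIH Arrhythmia Database sampling parameters of 360 samples/s at 11 bits per sample (parameters assumed here; they are not stated in the abstract):

```python
# MIT-BIH Arrhythmia Database records: 360 samples/s, 11 bits/sample.
raw_rate = 360 * 11               # 3960 bits per second uncompressed
compressed_rate = 167.6           # bits per second, from the abstract
ratio = raw_rate / compressed_rate
print(round(ratio, 2))            # → 23.63, close to the quoted 23.62
```
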

  2. Adaptive segmentation of wavelet transform coefficients for video compression

    NASA Astrophysics Data System (ADS)

    Wasilewski, Piotr

    2000-04-01

    This paper presents a video compression algorithm suitable for inexpensive real-time hardware implementation. The algorithm utilizes the Discrete Wavelet Transform (DWT) with a new Adaptive Spatial Segmentation Algorithm (ASSA). The algorithm was designed to obtain decompressed video quality better than or similar to the H.263 recommendation and the MPEG standard at lower computational effort, especially at high compression rates. The algorithm was optimized for hardware implementation in low-cost Field Programmable Gate Array (FPGA) devices. The luminance and chrominance components of every frame are encoded with a 3-level wavelet transform using a biorthogonal filter bank. The low-frequency subimage is encoded with an ADPCM algorithm. For the high-frequency subimages, the new Adaptive Spatial Segmentation Algorithm is applied. It divides images into rectangular blocks that may overlap each other. The width and height of the blocks are set independently. There are two kinds of blocks: Low Variance Blocks (LVB) and High Variance Blocks (HVB). The positions of the blocks and the values of the WT coefficients belonging to the HVB are encoded with modified zero-tree algorithms. LVB are encoded with their mean value. The results obtained show that the presented algorithm gives decompressed image quality similar to or better than H.263, by up to 5 dB in PSNR.

  3. Wavelet Compression of Satellite-Transmitted Digital Mammograms

    NASA Technical Reports Server (NTRS)

    Zheng, Yuan F.

    2001-01-01

    Breast cancer is one of the major causes of cancer death in women in the United States. The most effective way to treat breast cancer is to detect it at an early stage by screening patients periodically. Conventional film-screening mammography uses X-ray films which are effective in detecting early abnormalities of the breast. Direct digital mammography has the potential to improve the image quality and to take advantage of convenient storage, efficient transmission, powerful computer-aided diagnosis, etc. One effective alternative to direct digital imaging is secondary digitization of X-ray films. This technique may not provide as high an image quality as the direct digital approach, but it definitely has other advantages inherent to digital images. One of them is the use of satellite transmission for transferring digital mammograms between a remote image-acquisition site and a central image-reading site. This technique can benefit a large population of women who reside in remote areas where major screening and diagnosing facilities are not available. The NASA-Lewis Research Center (LeRC), in collaboration with the Cleveland Clinic Foundation (CCF), has begun a pilot study to investigate the application of the Advanced Communications Technology Satellite (ACTS) network to telemammography. The bandwidth of the T1 transmission is limited (1.544 Mbps) while the size of a mammographic image is huge, so it takes a long time to transmit a single mammogram. For example, a mammogram of 4k by 4k pixels with 16 bits per pixel needs more than 4 minutes to transmit. Four images for a typical screening exam would take more than 16 minutes. This is too long a time period for convenient screening. Consequently, compression is necessary for making satellite transmission of mammographic images practically possible. The Wavelet Research Group of the Department of Electrical Engineering at The Ohio State University (OSU) participated in the LeRC-CCF collaboration by

  4. Methods of compression of digital holograms, based on 1-level wavelet transform

    NASA Astrophysics Data System (ADS)

    Kurbatova, E. A.; Cheremkhin, P. A.; Evtikhiev, N. N.

    2016-08-01

    To reduce the memory required for storing information about 3D scenes and to decrease the required hologram transmission rate, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In this paper the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained values of reconstruction quality and hologram diffraction efficiency are compared.

  5. Haar wavelet processor for adaptive on-line image compression

    NASA Astrophysics Data System (ADS)

    Diaz, F. Javier; Buron, Angel M.; Solana, Jose M.

    2005-06-01

    An image coding processing scheme based on a variant of the Haar Wavelet Transform that uses only addition and subtraction is presented. After computing the transform, the selection and coding of the coefficients is performed using a methodology optimized to attain the lowest hardware implementation complexity. Coefficients are sorted in groups according to the number of pixels used in their computation. The idea behind this is to use a different threshold for each group of coefficients; these thresholds are obtained recurrently from an initial one. Parameter values used to achieve the desired compression level are established "on-line", adapting their values to each image, which leads to an improvement in the quality obtained for a preset compression level. Despite its adaptive characteristic, the coding scheme presented leads to a hardware implementation of markedly low circuit complexity. The compression reached for images of 512x512 pixels (256 grey levels) is over 22:1 (~0.4 bits/pixel) with an rmse of 8-10%. An image processor (excluding memory) prototype designed to compute the proposed transform has been implemented using FPGA chips. The processor for images of 256x256 pixels has been implemented using only one general-purpose low-cost FPGA chip, thus proving the design reliability and its relative simplicity.
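
A minimal sketch of an addition/subtraction-only Haar-style transform on 2x2 blocks (my own illustration, not the authors' exact coder): each block yields one coefficient built from all four pixels plus three detail coefficients, and grouping coefficients by the number of contributing pixels is what makes the per-group thresholds natural:

```python
# Unnormalized 2x2 Haar transform: forward pass uses only + and -.
def haar_2x2(block):
    (a, b), (c, d) = block
    ll = a + b + c + d          # 4-pixel sum (low-pass)
    lh = a + b - c - d          # horizontal detail
    hl = a - b + c - d          # vertical detail
    hh = a - b - c + d          # diagonal detail
    return ll, lh, hl, hh

def inv_haar_2x2(ll, lh, hl, hh):
    # Exact integer inverse: each pixel is a signed sum of the four
    # coefficients divided by 4 (the division is always exact here).
    a = (ll + lh + hl + hh) // 4
    b = (ll + lh - hl - hh) // 4
    c = (ll - lh + hl - hh) // 4
    d = (ll - lh - hl + hh) // 4
    return (a, b), (c, d)

block = ((10, 12), (9, 200))
assert inv_haar_2x2(*haar_2x2(block)) == block  # perfectly invertible
```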

  6. Remotely sensed image compression based on wavelet transform

    NASA Technical Reports Server (NTRS)

    Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.

    1995-01-01

    In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal-to-noise ratio) and classification capability.
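
The two redundancy-removal ideas can be illustrated with a toy sketch (made-up band data; the paper's Hilbert-curve scan and Huffman stage are omitted, with run-length encoding standing in for the spatial stage):

```python
# Spectral redundancy: highly correlated bands collapse to a small-valued
# difference signal. Spatial redundancy: runs of equal values compress
# well with run-length encoding.
def band_difference(ref, band):
    return [b - r for r, b in zip(ref, band)]

def rle(values):
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

band1 = [50, 50, 51, 52, 52, 52]
band2 = [52, 52, 53, 54, 54, 54]   # highly correlated with band1
diff = band_difference(band1, band2)
print(rle(diff))  # the difference collapses to a single short run
```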

  7. Optimization of orthonormal wavelet decomposition: implication of data accuracy, feature preservation, and compression effects

    NASA Astrophysics Data System (ADS)

    Lo, Shih-Chung B.; Li, Huai; Wang, Yue J.; Freedman, Matthew T.; Mun, Seong K.

    1996-04-01

    A neural network based framework has been developed to search for an optimal wavelet kernel that is most suitable for a specific image processing task. In this paper, we demonstrate that only the low-pass filter, h, is needed for orthonormal wavelet decomposition. A convolution neural network can be trained to obtain a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications on mammograms. We have used this method to evaluate the performance of tap-4 orthonormal wavelets on mammograms, CTs, MRIs, and the Lena image. We found that Daubechies' wavelet (or wavelets possessing similar filtering characteristics) produces satisfactory compression efficiency with the smallest error under a global measure (e.g., mean-square error). However, we found that Haar's wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet, whose low-pass filter coefficients are (0.32252136, 0.85258927, 0.38458542, -0.14548269), can greatly preserve microcalcification features such as signal-to-noise ratio during the course of compression. Several interesting wavelet filters (i.e., the g filters) were reviewed and explanations of the results are provided. We believe that this newly developed optimization method can be generalized to other image analysis applications where a wavelet decomposition is employed.
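
The quoted filter coefficients can be checked against the standard orthonormal tap-4 constraints. This sketch also derives a high-pass filter g from h via the usual quadrature-mirror relation, which is an assumption on my part, since the abstract only alludes to "the g filters":

```python
# Orthonormal scaling-filter sanity checks for the "special wavelet"
# coefficients quoted in the abstract: the taps sum to sqrt(2) (DC gain)
# and have unit energy; g[n] = (-1)^n * h[L-1-n] is the standard
# quadrature-mirror construction of the wavelet filter.
import math

h = [0.32252136, 0.85258927, 0.38458542, -0.14548269]

assert abs(sum(h) - math.sqrt(2)) < 1e-4          # DC gain sqrt(2)
assert abs(sum(c * c for c in h) - 1.0) < 1e-4    # unit energy

g = [(-1) ** n * h[len(h) - 1 - n] for n in range(len(h))]
print(g)
# h and g are orthogonal to each other
assert abs(sum(a * b for a, b in zip(h, g))) < 1e-12
```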

  8. Wavelet-based compression of medical images: filter-bank selection and evaluation.

    PubMed

    Saffor, A; bin Ramli, A R; Ng, K H

    2003-06-01

    Wavelet-based image coding algorithms (lossy and lossless) use a fixed perfect-reconstruction filter bank built into the algorithm for coding and decoding of images. However, no systematic study has been performed to evaluate the coding performance of wavelet filters on medical images. We evaluated which types of filters are best suited to medical images in providing low bit rate and low computational complexity. In this study a variety of wavelet filters were used to compress and decompress computed tomography (CT) brain and abdomen images. We applied two-dimensional wavelet decomposition, quantization and reconstruction using several families of filter banks to a set of CT images. The Discrete Wavelet Transform (DWT), which provides an efficient framework for multi-resolution frequency analysis, was used. Compression was accomplished by applying threshold values to the wavelet coefficients. Statistical indices such as mean square error (MSE), maximum absolute error (MAE) and peak signal-to-noise ratio (PSNR) were used to quantify the effect of wavelet compression on the selected images. The code was written using the wavelet and image processing toolboxes of MATLAB (version 6.1). The results show that no specific wavelet filter performs uniformly better than the others, except for the Daubechies and biorthogonal filters, which are the best among all. MAE values achieved by these filters were 5 x 10(-14) to 12 x 10(-14) for both CT brain and abdomen images at different decomposition levels, indicating that with these filters a very small error (approximately 7 x 10(-14)) can be achieved between the original and the filtered image. The PSNR values obtained were higher for the brain than for the abdomen images. For both lossy and lossless compression, the 'most appropriate' wavelet filter should be chosen adaptively, depending on the statistical properties of the image being coded, to achieve a higher compression ratio. PMID:12956184
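
The three indices used in the study are easy to state exactly; a sketch for 8-bit data (toy values, with MAE read as maximum absolute error per the abstract):

```python
# MSE, MAE (maximum absolute error) and PSNR for 8-bit images,
# computed here on flattened toy "images".
import math

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mae(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def psnr(a, b, peak=255):
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

orig = [100, 102, 98, 101]
recon = [101, 102, 97, 101]
print(mse(orig, recon), mae(orig, recon), round(psnr(orig, recon), 2))
```

A lossless coder reproduces the input exactly, so its MSE is 0 and its PSNR is infinite.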

  9. The wavelet/scalar quantization compression standard for digital fingerprint images

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  10. [Detection of reducing sugar content of potato granules based on wavelet compression by near infrared spectroscopy].

    PubMed

    Dong, Xiao-Ling; Sun, Xu-Dong

    2013-12-01

    The feasibility of determining the reducing sugar content of potato granules by a wavelet compression algorithm combined with near-infrared spectroscopy was explored. The spectra of 250 potato granule samples were recorded by a Fourier transform near-infrared spectrometer in the range of 4000-10000 cm-1. Three parameters, the vanishing moments, the number of wavelet coefficients and the number of principal component factors, were optimized; the optimal values were 10, 100 and 20, respectively. The original spectra of 1501 spectral variables were transformed into 100 wavelet coefficients using the db wavelet function. Partial least squares (PLS) calibration models were developed on the 1501 spectral variables and on the 100 wavelet coefficients. Sixty-two unknown samples of the prediction set were applied to evaluate the performance of the PLS models. By comparison, the optimal result was obtained by wavelet compression combined with the PLS calibration model: the correlation coefficient of prediction and the root mean square error of prediction were 0.98 and 0.181%, respectively. The experimental results show that the dimensionality of the spectral data was reduced while scarcely losing effective information. The PLS model is simplified, and its predictive ability is improved. PMID:24611373

  11. Performance evaluation of integer to integer wavelet transform for synthetic aperture radar image compression

    NASA Astrophysics Data System (ADS)

    Xue, Wentong; Song, Jianshe; Yuan, Lihai; Shen, Tao

    2005-11-01

    An efficient and novel image compression system for Synthetic Aperture Radar (SAR) imagery, which uses an integer-to-integer wavelet transform and a Modified Set Partitioning Embedded Block Coder (M-SPECK), is presented in this paper. The speckle noise, detailed texture, high dynamic range and vast data volume of SAR images set them apart from natural images. An integer-to-integer wavelet transform is invertible in finite-precision arithmetic: it maps integers to integers and approximates the linear wavelet transform from which it is derived. Several filter banks are compared in terms of computational load, compression ratio and subjective visual quality, the factors affecting the compression performance of the integer-to-integer wavelet transform are discussed in detail, and the filter banks most appropriate for SAR image compression are identified. High-frequency information accounts for a relatively larger proportion of SAR images than of natural images, so the quantizing thresholds of traditional SPECK are modified to suit the content of SAR imagery. Both the integer-to-integer wavelet transform and the modified SPECK have the desirable feature of low computational complexity. Experimental results show the superiority of the proposed approach over traditional ones in trading off compression efficiency against computational complexity.
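
The simplest integer-to-integer wavelet, the Haar/S-transform written in lifting form, shows the key property the abstract relies on: exact invertibility in finite-precision integer arithmetic. A minimal sketch; the paper evaluates several such filter banks, not only Haar:

```python
# Integer Haar (S-transform) via lifting: maps integers to integers and
# inverts exactly, because the inverse replays the same integer steps.
def s_transform(x0, x1):
    d = x1 - x0            # detail (high-pass)
    s = x0 + (d >> 1)      # approximation: floor of the pair's mean
    return s, d

def inv_s_transform(s, d):
    x0 = s - (d >> 1)      # undo the update step
    x1 = x0 + d            # undo the predict step
    return x0, x1

for pair in [(7, 10), (255, 0), (-3, 8)]:
    assert inv_s_transform(*s_transform(*pair)) == pair  # lossless round trip
```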

  13. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm. PMID:26890900
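
The residual trick that makes the hybrid coder lossless can be sketched with any lossy stage; here a coarse quantizer stands in for the paper's wavelet-fractal coder:

```python
# Lossless coding via residuals: run a lossy coder, store its output plus
# the losslessly coded residual; adding the residual back recovers the
# original exactly, i.e. infinite PSNR.
def lossy_roundtrip(pixels, step=8):
    return [(p // step) * step for p in pixels]   # stand-in lossy stage

original = [12, 37, 200, 41, 99]
lossy = lossy_roundtrip(original)
residual = [o - l for o, l in zip(original, lossy)]   # Huffman-code this
restored = [l + r for l, r in zip(lossy, residual)]
assert restored == original   # exact recovery
```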

  14. Compression of the electrocardiogram (ECG) using an adaptive orthonormal wavelet basis architecture

    NASA Astrophysics Data System (ADS)

    Anandkumar, Janavikulam; Szu, Harold H.

    1995-04-01

    This paper deals with the compression of electrocardiogram (ECG) signals using a large library of orthonormal basis functions that are translated and dilated versions of Daubechies wavelets. The wavelet transform has been implemented using quadrature mirror filters (QMF) employed in a sub-band coding scheme. Interesting transients and notable frequencies of the ECG are captured by appropriately scaled waveforms chosen in a parallel fashion from this collection of wavelets. Since there is a choice of orthonormal basis functions for the efficient transcription of the ECG, it is possible to choose the best one by various criteria. We have imposed very stringent threshold conditions on the wavelet expansion coefficients, such as maintaining a very large percentage of the energy of the current signal segment, and this has resulted in reconstructed waveforms with negligible distortion relative to the source signal. Even without the use of any specialized quantizers and encoders, the compression ratios look encouraging, with preliminary results ranging from 40:1 to 15:1 at percentage rms distortions of about 22% to 2.3%, respectively. Irrespective of the ECG lead chosen, or of signal deviations that may occur due to noise or arrhythmias, only the one wavelet family that correlates best with that particular portion of the signal is chosen. The main reason for the compression is that the chosen mother wavelet and its variations match the shape of the ECG and are able to efficiently transcribe the source with few wavelet coefficients. The adaptive template-matching architecture that carries out a parallel search of the transform domain is described, and preliminary simulation results are discussed. The adaptivity of the architecture comes from the fine tuning of the wavelet selection process, which is based on localized constraints such as the shape of the signal and its energy.
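
The energy-threshold selection described above amounts to keeping the largest-magnitude coefficients until a preset fraction of the segment's energy is retained; a minimal sketch:

```python
# Keep wavelet coefficients in decreasing magnitude order until the
# retained energy reaches the target fraction; zero the rest.
def keep_energy(coeffs, fraction=0.99):
    total = sum(c * c for c in coeffs)
    kept = [0.0] * len(coeffs)
    acc = 0.0
    for i in sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i])):
        if acc >= fraction * total:
            break
        kept[i] = coeffs[i]
        acc += coeffs[i] * coeffs[i]
    return kept

c = [9.0, 0.1, -4.0, 0.2, 1.0, -0.1]
print(keep_energy(c))  # small coefficients are zeroed
```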

  15. JPEG2000 vs. full frame wavelet packet compression for smart card medical records.

    PubMed

    Leehan, Joaquín Azpirox; Lerallut, Jean-Francois

    2006-01-01

    This paper describes a comparison among different compression methods to be used in the context of electronic health records in the newer version of "smart cards". The JPEG2000 standard is compared to a full-frame wavelet packet compression method at high (33:1 and 50:1) compression rates. Results show that the full-frame method outperforms the JPEG2K standard qualitatively and quantitatively.

  16. Clinical utility of wavelet compression for resolution-enhanced chest radiography

    NASA Astrophysics Data System (ADS)

    Andriole, Katherine P.; Hovanes, Michael E.; Rowberg, Alan H.

    2000-05-01

    This study evaluates the usefulness of wavelet compression for resolution-enhanced storage phosphor chest radiographs in the detection of subtle interstitial disease, pneumothorax and other abnormalities. A wavelet compression technique, MrSID (LizardTech, Inc., Seattle, WA), is implemented which compresses the images from their original 2,000 by 2,000 (2K) matrix size, and then decompresses the image data for display at optimal resolution by matching the spatial frequency characteristics of image objects using a 4,000-square matrix. The 2K-matrix computed radiography (CR) chest images are magnified to a 4K matrix using wavelet series expansion. The magnified images are compared with the original uncompressed 2K radiographs and with two-times magnification of the original images. Preliminary results show radiologist preference for MrSID wavelet-based magnification over magnification of the original data, and suggest that the compressed/decompressed images may provide an enhancement of the original. Data collection for clinical trials of 100 chest radiographs, including subtle interstitial abnormalities and/or subtle pneumothoraces as well as normal cases, is in progress. Three experienced thoracic radiologists will view images side-by-side on calibrated softcopy workstations under controlled viewing conditions, and rank-order preference tests will be performed. This technique combines image compression with image enhancement, and suggests that compressed/decompressed images can actually improve on the originals.

  17. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression have been done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: the wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The coarse mesh is marked using a robust mesh watermarking scheme; inserting into the coarse mesh yields high robustness to several attacks. Finally, topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh, and allows protected 3D meshes to be transferred at the minimum size. Experiments and evaluations show that the proposed approach yields efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.

  18. ECG compression using non-recursive wavelet transform with quality control

    NASA Astrophysics Data System (ADS)

    Liu, Je-Hung; Hung, King-Chu; Wu, Tsung-Ching

    2016-09-01

    While wavelet-based electrocardiogram (ECG) data compression using scalar quantisation (SQ) yields excellent compression performance, a wavelet SQ scheme must select a set of multilevel quantisers for each quantisation process. Because of the properties of multiple-to-one mapping, such a scheme is not conducive to reconstruction error control. In order to address this problem, this paper presents a single-variable-control SQ scheme able to guarantee the reconstruction quality of wavelet-based ECG data compression. Based on the reversible round-off non-recursive discrete periodised wavelet transform (RRO-NRDPWT), the SQ scheme is derived with a three-stage design process: the first stage uses a genetic algorithm (GA) for high compression ratio (CR), the second a quadratic curve fitting for linear distortion control, and the third a fuzzy decision-making procedure for minimising the data dependency effect and selecting the optimal SQ. Two databases, the Physikalisch-Technische Bundesanstalt (PTB) and Massachusetts Institute of Technology (MIT) arrhythmia databases, are used to evaluate quality control performance. Experimental results show that the design method guarantees a high-compression-performance SQ scheme with statistically linear distortion. This property can be independent of the training data and can facilitate rapid error control.

  19. Compression and Encryption of ECG Signal Using Wavelet and Chaotically Huffman Code in Telemedicine Application.

    PubMed

    Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz

    2016-03-01

    In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems; the important issue is being able to recover the original signal from the compressed one. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal to keep it confidential from everyone except physicians. In this paper, mobile processors are used and no computers are needed to serve this purpose. After initial preprocessing, such as removal of baseline noise and Gaussian noise, peak detection and determination of heart rate, the ECG signal is compressed. In the compression stage, after 3 steps of wavelet transform (db04), thresholding techniques are applied. Then, Huffman coding with chaos is used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72%. Finally, the ECG signals are sent to a telemedicine center over the TCP/IP protocol to obtain a specialist diagnosis.

  20. Wavelet-based ECG data compression system with linear quality control scheme.

    PubMed

    Ku, Cheng-Tung; Hung, King-Chu; Wu, Tsung-Ching; Wang, Huan-Sheng

    2010-06-01

    Maintaining reconstructed signals at a desired level of quality is crucial for lossy ECG data compression. Wavelet-based approaches using a recursive decomposition process are unsuitable for real-time ECG signal recording and commonly exhibit a nonlinear compression performance whose distortion is sensitive to quantization error. This sensitivity is caused by the word-length-growth (WLG) effect and is unfavorable for reconstruction quality control in ECG data compression. In this paper, the 1-D reversible round-off nonrecursive discrete periodic wavelet transform is applied to overcome the WLG magnification effect through two mechanisms: error propagation resistance and significant normalization of octave coefficients. These two mechanisms enable the design of a multivariable quantization scheme that obtains a compression performance with approximately linear distortion characteristics and can be controlled with a single variable. Based on this linear compression performance, a linear quantization scale prediction model is presented for guaranteeing reconstruction quality. Using the MIT-BIH arrhythmia database, experimental results show that the proposed system, with lower computational complexity, obtains much better reconstruction quality control than other wavelet-based methods.

  1. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    NASA Astrophysics Data System (ADS)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, which separates low- and high-frequency components; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding filtering and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are around 1:8 and 7%, respectively. Beyond the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.
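
The Huffman stage's figure of merit, average bits per sample, can be computed from symbol frequencies. A sketch using heapq with made-up counts, not the paper's ECG statistics:

```python
# Build Huffman code lengths by repeatedly merging the two lightest
# subtrees; report the frequency-weighted average code length.
import heapq

def huffman_lengths(freqs):
    # each heap item: (weight, tie-breaker, {symbol: code_length})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, d1 = heapq.heappop(heap)
        w2, _, d2 = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
lengths = huffman_lengths(freqs)
total = sum(freqs.values())
avg = sum(freqs[s] * lengths[s] for s in freqs) / total
print(lengths, f"{avg:.3f} bits/sample")
```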

  2. DSP accelerator for the wavelet compression/decompression of high-resolution images

    SciTech Connect

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed; spatial/frequency regions are then automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a SUN SPARCstation 2 with a 1280 x 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  3. Compression of multispectral Landsat imagery using the Embedded Zerotree Wavelet (EZW) algorithm

    NASA Technical Reports Server (NTRS)

    Shapiro, Jerome M.; Martucci, Stephen A.; Czigler, Martin

    1994-01-01

    The Embedded Zerotree Wavelet (EZW) algorithm has proven to be an extremely efficient and flexible compression algorithm for low bit rate image coding. The embedding algorithm attempts to order the bits in the bit stream in numerical importance and thus a given code contains all lower rate encodings of the same algorithm. Therefore, precise bit rate control is achievable and a target rate or distortion metric can be met exactly. Furthermore, the technique is fully image adaptive. An algorithm for multispectral image compression which combines the spectral redundancy removal properties of the image-dependent Karhunen-Loeve Transform (KLT) with the efficiency, controllability, and adaptivity of the embedded zerotree wavelet algorithm is presented. Results are shown which illustrate the advantage of jointly encoding spectral components using the KLT and EZW.

  4. Accelerating patch-based directional wavelets with multicore parallel computing in compressed sensing MRI.

    PubMed

    Li, Qiyue; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Lai, Zongying; Ye, Jing; Chen, Zhong

    2015-06-01

    Compressed sensing MRI (CS-MRI) is a promising technology for accelerating magnetic resonance imaging. Both improving the image quality and reducing the computation time are important for this technology. Recently, a patch-based directional wavelet (PBDW) has been applied in CS-MRI to improve edge reconstruction. However, this method is time consuming since it involves extensive computations, including geometric direction estimation and numerous iterations of wavelet transforms. To accelerate the computations of PBDW, we propose a general parallelization of patch-based processing that takes advantage of multicore processors. Additionally, two pertinent optimizations that exploit the sparsity of MR images are proposed: excluding smooth patches and a pre-arranged insertion sort. Simulation results demonstrate that the acceleration factor of the parallel PBDW architecture approaches the number of central processing unit cores, and that the pertinent optimizations provide further acceleration. The proposed approaches allow compressed sensing MRI reconstruction to be accomplished within several seconds.
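
The parallelization and the smooth-patch exclusion can be sketched together. A thread pool stands in here for the paper's multicore (process-level) setup, and the per-patch transform is a placeholder, not the PBDW direction estimation:

```python
# Skip low-variance (smooth) patches, then process the remaining patches
# in parallel across a pool of workers.
from concurrent.futures import ThreadPoolExecutor

def variance(patch):
    m = sum(patch) / len(patch)
    return sum((p - m) ** 2 for p in patch) / len(patch)

def process_patch(patch):
    return sorted(patch)   # placeholder for direction estimation + DWT

def process_all(patches, smooth_threshold=1.0, workers=4):
    busy = [p for p in patches if variance(p) > smooth_threshold]  # exclusion
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_patch, busy))

patches = [[5, 5, 5, 5], [1, 9, 2, 8], [7, 7, 7, 7], [0, 3, 9, 1]]
print(process_all(patches))  # only the two non-smooth patches are processed
```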

  5. Review of digital fingerprint acquisition systems and wavelet compression

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    2003-04-01

    Over the last decade many criminal justice agencies have replaced their fingerprint card based systems with electronic processing. We examine these new systems and find that image acquisition to support the identification application is consistently a challenge. Image capture and compression are widely dispersed and relatively new technologies within criminal justice information systems. Image quality assurance programs are just beginning to mature.

  6. The wavelet transform and the suppression theory of binocular vision for stereo image compression

    SciTech Connect

    Reynolds, W.D. Jr; Kenyon, R.V.

    1996-08-01

    In this paper a method for the compression of stereo images is presented. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using the information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair, in which the subbands convey the necessary frequency-domain information.

  7. Applications of wavelet-based compression to multidimensional earth science data

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
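
The VQ side of WVQ can be sketched with a tiny 1-D k-means codebook (illustrative data and k; the paper clusters 7-band spectral vectors into 256 clusters):

```python
# Tiny 1-D k-means: build a codebook for a set of coefficients, then
# encode each value as the index of its nearest codeword.
def kmeans_1d(values, k, iters=20):
    srt = sorted(values)
    # spread the initial codewords across the sorted data
    codebook = [srt[(2 * i + 1) * len(srt) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in codebook]
        for v in values:
            i = min(range(len(codebook)), key=lambda i: abs(v - codebook[i]))
            clusters[i].append(v)
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(clusters)]
    return codebook

def encode(values, codebook):
    return [min(range(len(codebook)), key=lambda i: abs(v - codebook[i]))
            for v in values]

data = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9, 9.8, 10.1]
cb = kmeans_1d(data, 3)
print(sorted(cb), encode(data, cb))
```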

  9. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  10. SPECTRUM analysis of multispectral imagery in conjunction with wavelet/KLT data compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.

    1993-12-01

    The data analysis program, SPECTRUM, is used for fusion, visualization, and classification of multi-spectral imagery. The raw data used in this study is Landsat Thematic Mapper (TM) 7-channel imagery, with 8 bits of dynamic range per channel. To facilitate data transmission and storage, a compression algorithm is proposed based on spatial wavelet transform coding and KLT decomposition of interchannel spectral vectors, followed by adaptive optimal multiband scalar quantization. The performance of SPECTRUM clustering and visualization is evaluated on compressed multispectral data. 8-bit visualizations of 56-bit data show little visible distortion at 50:1 compression and graceful degradation at higher compression ratios. Two TM images were processed in this experiment: a 1024 x 1024-pixel scene of the region surrounding the Chernobyl power plant, taken a few months before the reactor malfunction, and a 2048 x 2048 image of Moscow and surrounding countryside.

  11. Texture characterization for joint compression and classification based on human perception in the wavelet domain.

    PubMed

    Fahmy, Gamal; Black, John; Panchanathan, Sethuraman

    2006-06-01

    Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented. PMID:16764265

  12. Study on the application of embedded zero-tree wavelet algorithm in still images compression

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Lu, Yanhe; Li, Taifu; Lei, Gang

    2005-12-01

    Through wavelet transformation, the high-frequency content of an image acquires directional selectivity, which is consistent with the visual characteristics of the human eye. The most important of these characteristics is the visual covering (masking) effect. The Embedded Zerotree Wavelet (EZW) coding method applies the same level of coding to a whole image: important regions (regions of interest) and background regions (regions of indifference) are coded at the same levels. Building on a study of this visual covering effect, this paper employs an image-compression method with regions of interest, i.e., an Embedded Zerotree Wavelet with Regions of Interest (EZW-ROI) algorithm, to encode the regions of interest and regions of non-interest separately. In this way, far less important information in the image is lost, channel resources and memory space are used fully, and image quality in the regions of interest is improved. An experimental study showed that, at high compression ratios, an image reconstructed with the EZW-ROI algorithm has better visual quality than one reconstructed with EZW.

  13. Compression and Encryption of ECG Signal Using Wavelet and Chaotically Huffman Code in Telemedicine Application.

    PubMed

    Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz

    2016-03-01

    In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems, and the key issue is being able to recover the original signal from the compressed one. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal to keep it confidential from everyone except physicians. In this paper, mobile processors are used, with no need for any computers to serve this purpose. After initial preprocessing, such as removal of baseline noise and Gaussian noise, peak detection, and determination of heart rate, the ECG signal is compressed. In the compression stage, thresholding techniques are applied after three levels of wavelet transform (db4). Then, Huffman coding with chaos is used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72 %. The ECG signals are then sent to a telemedicine center via the TCP/IP protocol to obtain a specialist diagnosis. PMID:26779641
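    The pipeline described in this record (multi-level wavelet decomposition, thresholding of small coefficients, then Huffman coding) can be sketched in a few lines. The sketch below is illustrative only: it substitutes a Haar wavelet for db4, uses plain Huffman coding without the chaotic encryption step, and all function names are assumptions.

    ```python
    import heapq
    from collections import Counter

    def haar_step(x):
        """One level of the (unnormalized) Haar transform: averages + differences."""
        approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        return approx, detail

    def wavelet_compress(signal, levels=3, threshold=0.1):
        """Decompose `levels` times, zeroing small detail coefficients
        (hard thresholding), as in the compression stage described above."""
        details = []
        approx = list(signal)
        for _ in range(levels):
            approx, d = haar_step(approx)
            details.append([c if abs(c) > threshold else 0.0 for c in d])
        return approx, details

    def huffman_code_lengths(symbols):
        """Code length per symbol from a Huffman tree built with a min-heap.
        Unique integer tiebreakers keep tuple comparison away from the dicts."""
        freq = Counter(symbols)
        if len(freq) == 1:
            return {next(iter(freq)): 1}
        heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            f1, _, m1 = heapq.heappop(heap)
            f2, _, m2 = heapq.heappop(heap)
            merged = {s: depth + 1 for s, depth in {**m1, **m2}.items()}
            counter += 1
            heapq.heappush(heap, (f1 + f2, counter, merged))
        return heap[0][2]
    ```

    The zeroed detail coefficients compress well because the entropy coder assigns short codes to the now-dominant zero symbol.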

  14. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
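    The scalar quantization step in a scheme like this is typically a uniform quantizer with a widened dead zone around zero, applied per subband. The sketch below illustrates that idea only; the step size, dead-zone factor, and names are assumptions, not the actual WSQ specification.

    ```python
    def deadzone_quantize(coeffs, step, zone=1.2):
        """Uniform scalar quantizer with a widened dead zone around zero.
        Coefficients inside the dead zone map to index 0."""
        half = zone * step / 2
        out = []
        for c in coeffs:
            if abs(c) <= half:
                out.append(0)
            elif c > 0:
                out.append(int((c - half) // step) + 1)
            else:
                out.append(-(int((-c - half) // step) + 1))
        return out

    def dequantize(indices, step, zone=1.2):
        """Reconstruct at the midpoint of each quantization bin."""
        half = zone * step / 2
        return [0.0 if q == 0 else
                (half + (abs(q) - 0.5) * step) * (1 if q > 0 else -1)
                for q in indices]
    ```

    The dead zone is what drives many near-zero wavelet coefficients to exactly zero, which the subsequent Huffman coder then exploits.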

  15. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Hopper, T.

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  16. A comparison of spectral decorrelation techniques and performance evaluation metrics for a wavelet-based, multispectral data compression algorithm

    NASA Technical Reports Server (NTRS)

    Matic, Roy M.; Mosley, Judith I.

    1994-01-01

    Future space-based remote sensing systems will have data transmission requirements that exceed available downlinks, necessitating the use of lossy compression techniques for multispectral data. In this paper, we describe several algorithms for lossy compression of multispectral data which combine spectral decorrelation techniques with an adaptive, wavelet-based image compression algorithm to exploit both spectral and spatial correlation. We compare the performance of several different spectral decorrelation techniques, including wavelet transformation in the spectral dimension. The performance of each technique is evaluated at compression ratios ranging from 4:1 to 16:1. Performance measures used are visual examination, conventional distortion measures, and multispectral classification results. We also introduce a family of distortion metrics that are designed to quantify and predict the effect of compression artifacts on multispectral classification of the reconstructed data.

  17. Accelerating patch-based directional wavelets with multicore parallel computing in compressed sensing MRI.

    PubMed

    Li, Qiyue; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Lai, Zongying; Ye, Jing; Chen, Zhong

    2015-06-01

    Compressed sensing MRI (CS-MRI) is a promising technology for accelerating magnetic resonance imaging. Both improving the image quality and reducing the computation time are important for this technology. Recently, a patch-based directional wavelet (PBDW) has been applied in CS-MRI to improve edge reconstruction. However, this method is time consuming since it involves extensive computations, including geometric direction estimation and numerous iterations of the wavelet transform. To accelerate the computations of PBDW, we propose a general parallelization of patch-based processing that takes advantage of multicore processors. Additionally, two pertinent optimizations (excluding smooth patches and pre-arranged insertion sort) that make use of the sparsity of MR images are proposed. Simulation results demonstrate that the acceleration factor of the parallel architecture of PBDW approaches the number of central processing unit cores, and that the pertinent optimizations are effective in making further accelerations. The proposed approaches allow compressed sensing MRI reconstruction to be accomplished within several seconds. PMID:25620521
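    A minimal sketch of the parallel patch-processing idea, including the "exclude smooth patches" optimization, is shown below. It uses a thread pool for brevity (CPU-bound transforms would typically use a process pool to sidestep the GIL), and the per-patch "transform" is a stand-in; all names are assumptions.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def process_patch(patch):
        """Per-patch work standing in for direction estimation + wavelet transform.
        Smooth (near-constant) patches are returned untouched, mirroring the
        'excluding smooth patches' optimization described in the abstract."""
        mean = sum(patch) / len(patch)
        var = sum((p - mean) ** 2 for p in patch) / len(patch)
        if var < 1e-6:                      # smooth patch: skip the transform
            return list(patch)
        return [p - mean for p in patch]    # toy "transform": remove the mean

    def process_all(patches, workers=4):
        """Dispatch patches across a pool; map preserves input order."""
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(process_patch, patches))
    ```

    Because patches are independent, the speedup scales with the number of workers until the pool saturates the available cores, which matches the abstract's observation that the acceleration factor approaches the core count.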

  18. Dataflow and remapping for wavelet compression and realtime view-dependent optimization of billion-triangle isosurfaces

    SciTech Connect

    Duchaineau, M A; Porumbescu, S D; Bertram, M; Hamann, B; Joy, K I

    2000-10-06

    Currently, large physics simulations produce 3D fields whose individual surfaces, after conventional extraction processes, contain upwards of hundreds of millions of triangles. Detailed interactive viewing of these surfaces requires powerful compression to minimize storage, and fast view-dependent optimization of display triangulations to drive high-performance graphics hardware. In this work we provide an overview of an end-to-end multiresolution dataflow strategy whose goal is to increase efficiencies in practice by several orders of magnitude. Given recent advancements in subdivision-surface wavelet compression and view-dependent optimization, we present algorithms here that provide the "glue" that makes this strategy hold together. Shrink-wrapping converts highly detailed unstructured surfaces of arbitrary topology to the semi-structured form needed for wavelet compression. Remapping to triangle bintrees minimizes disturbing "pops" during real-time display-triangulation optimization and provides effective selective-transmission compression for out-of-core and remote access to these huge surfaces.

  19. R-D optimized tree-structured compression algorithms with discrete directional wavelet transform

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Ma, Siliang

    2008-09-01

    A new image coding method based on the discrete directional wavelet transform (S-WT) and quad-tree decomposition is proposed here. The S-WT is a transform proposed in [V. Velisavljevic, B. Beferull-Lozano, M. Vetterli, P.L. Dragotti, Directionlets: anisotropic multidirectional representation with separable filtering, IEEE Trans. Image Process. 15(7) (2006)], which is based on lattice theory; its difference from the standard wavelet transform is that it allows more transform directions. Because the directional property of a small region is generally more regular than that of a big block, in order to make full use of the multidirectionality and directional vanishing moments (DVM) of the S-WT, the input image is divided into many small regions by means of the popular quad-tree segmentation, with the splitting criterion chosen in the rate-distortion sense. After the optimal quad-tree is obtained, by means of the embedded property of SPECK, a resource bit allocation algorithm is quickly implemented using the model proposed in [M. Rajpoot, Model based optimal bit allocation, in: IEEE Data Compression Conference, Proceedings, DCC 2004]. Experimental results indicate that our algorithms perform better than some state-of-the-art image coders.

  20. Adaptive variable-fidelity wavelet-based eddy-capturing approaches for compressible turbulence

    NASA Astrophysics Data System (ADS)

    Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-11-01

    Multiresolution wavelet methods have been developed for efficient simulation of compressible turbulence. They rely upon a filter to identify dynamically important coherent flow structures and adapt the mesh to resolve them. The filter threshold parameter, which can be specified globally or locally, allows for a continuous tradeoff between computational cost and fidelity, ranging seamlessly between DNS and adaptive LES. There are two main approaches to specifying the adaptive threshold parameter. It can be imposed as a numerical error bound, or alternatively, derived from real-time flow phenomena to ensure correct simulation of desired turbulent physics. As LES relies on often imprecise model formulations that require a high-quality mesh, this variable-fidelity approach offers a further tool for improving simulation by targeting deficiencies and locally increasing the resolution. Simultaneous physical and numerical criteria, derived from compressible flow physics and the governing equations, are used to identify turbulent regions and evaluate the fidelity. Several benchmark cases are considered to demonstrate the ability to capture variable density and thermodynamic effects in compressible turbulence. This work was supported by NSF under grant No. CBET-1236505.

  1. Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.

    PubMed

    Lai, Zongying; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Ye, Jing; Zhan, Zhifang; Chen, Zhong

    2016-01-01

    Compressed sensing magnetic resonance imaging has shown great capacity for accelerating magnetic resonance imaging if an image can be sparsely represented. How the image is sparsified seriously affects its reconstruction quality. In the present study, a graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstructions. With this transform, image patches are viewed as vertices and their differences as edges, and the shortest path on the graph minimizes the total difference of all image patches. The problem is solved under an l1-norm regularized formulation using an alternating-direction minimization with continuation algorithm. Experimental results demonstrate that the proposed method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves fewer reconstruction errors on the tested datasets.

  2. ECG compression using Slantlet and lifting wavelet transform with and without normalisation

    NASA Astrophysics Data System (ADS)

    Aggarwal, Vibha; Singh Patterh, Manjeet

    2013-05-01

    This article analyses the performance of (i) a linear transform, the Slantlet transform (SLT); (ii) a nonlinear transform, the lifting wavelet transform (LWT); and (iii) the nonlinear transform (LWT) with normalisation, for electrocardiogram (ECG) compression. First, an ECG signal is transformed using the linear transform and the nonlinear transform. The transformed coefficients (TC) are then thresholded using a bisection algorithm in order to match the predefined user-specified percentage root mean square difference (UPRD) within tolerance. A binary lookup table is then made to store the position map for zero and nonzero coefficients (NZCs). The NZCs are quantised by a Max-Lloyd quantiser, followed by arithmetic coding, and the lookup table is encoded by Huffman coding. The results show that the LWT gives the best results among the transforms evaluated in this article. This transform is then used to evaluate the effect of normalisation before thresholding. In the case of normalisation, the TC are normalised by dividing them by ? (where ? is the number of samples) to reduce their range. The normalised coefficients (NC) are then thresholded, after which the procedure is the same as for coefficients without normalisation. The results show that the compression ratio (CR) of the LWT with normalisation is improved compared to that without normalisation.
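    The bisection search for a threshold meeting a user-specified PRD can be sketched as follows. This is a minimal illustration only: the PRD is computed on the coefficients themselves (which matches signal-domain PRD only for an orthonormal transform), the achievable PRD values are discrete so an exact match is not always possible, and all names are assumptions.

    ```python
    import math

    def prd(x, xr):
        """Percentage root-mean-square difference between x and reconstruction xr."""
        num = sum((a - b) ** 2 for a, b in zip(x, xr))
        den = sum(a ** 2 for a in x)
        return 100.0 * math.sqrt(num / den)

    def threshold_for_target_prd(coeffs, target, tol=0.01, iters=60):
        """Bisect on the hard-threshold value until the resulting PRD is within
        `tol` of `target`, or the bracket is exhausted."""
        lo, hi = 0.0, max(abs(c) for c in coeffs)
        for _ in range(iters):
            mid = (lo + hi) / 2
            rec = [c if abs(c) > mid else 0.0 for c in coeffs]
            err = prd(coeffs, rec)
            if abs(err - target) <= tol:
                return mid
            if err > target:
                hi = mid    # too much distortion: lower the threshold
            else:
                lo = mid    # room for more compression: raise the threshold
        return (lo + hi) / 2
    ```

    Bisection works here because the PRD is monotonically non-decreasing in the threshold: raising the threshold can only zero more coefficients.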

  3. Comparison of wavelet scalar quantization and JPEG for fingerprint image compression

    NASA Astrophysics Data System (ADS)

    Kidd, Robert C.

    1995-01-01

    An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of existing international standard JPEG, it appears likely that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Taken together, these facts suggest a decision different from the one that was made by the FBI with regard to its fingerprint image compression standard. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.

  4. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. 
The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at

  5. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    The air-coupled ultrasonic testing (ACUT) technique has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal, so signal-processing techniques are highly valuable in non-destructive testing. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is used to filter insignificant components from the noisy ultrasonic signal, and a pulse compression process improves the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different families of wavelets (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are analysed to acquire a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a very promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials using ACUT.
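    The pulse-compression step amounts to cross-correlating the received signal with the transmitted Barker code; the main-to-side-lobe ratio discussed above falls out of the code's autocorrelation, which for Barker codes has side lobes of magnitude at most 1. A minimal pure-Python sketch (function names are assumptions):

    ```python
    # The 13-bit Barker code used as the transmitted phase code.
    BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

    def correlate(sig, code):
        """Full cross-correlation of a received signal with the transmitted
        code: the matched-filter core of pulse compression."""
        n, m = len(sig), len(code)
        return [sum(sig[i + j] * code[j] for j in range(m) if 0 <= i + j < n)
                for i in range(-(m - 1), n)]

    def main_to_side_lobe_ratio(code):
        """Peak of the autocorrelation divided by its largest side lobe."""
        ac = correlate(code, code)
        peak = max(ac)
        side = max(abs(v) for v in ac if v != peak)
        return peak / side
    ```

    For the 13-bit code the compressed peak is 13 against side lobes of 1, which is why longer Barker codes buy both output power and side-lobe suppression.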

  6. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift.

    PubMed

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) have been proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method is proposed, named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) an exponential wavelet transform, (ii) an iterative shrinkage-thresholding algorithm, and (iii) a random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the lowest mean absolute error, the lowest mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068
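    Of the three EWISTARS components, the iterative shrinkage-thresholding step is the easiest to illustrate. The sketch below runs plain ISTA on a toy least-squares problem: a gradient step on the data-fit term followed by soft-thresholding (the proximal operator of the l1 norm). The exponential wavelet transform and random shift are omitted, and all names are assumptions.

    ```python
    def soft(x, t):
        """Soft-thresholding: shrink each entry toward zero by t."""
        return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

    def matvec(A, x):
        return [sum(a * b for a, b in zip(row, x)) for row in A]

    def transpose(A):
        return [list(col) for col in zip(*A)]

    def ista(A, y, lam=0.1, step=0.5, iters=200):
        """Minimize ||Ax - y||^2 / 2 + lam * ||x||_1 by iterative
        shrinkage-thresholding, starting from x = 0."""
        x = [0.0] * len(A[0])
        At = transpose(A)
        for _ in range(iters):
            r = [v - w for v, w in zip(matvec(A, x), y)]   # residual Ax - y
            g = matvec(At, r)                              # gradient A^T r
            x = soft([xi - step * gi for xi, gi in zip(x, g)], step * lam)
        return x
    ```

    For an identity A this converges to the soft-thresholded data, which is the expected closed-form solution of the l1 problem.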

  7. A Novel Compressed Sensing Method for Magnetic Resonance Imaging: Exponential Wavelet Iterative Shrinkage-Thresholding Algorithm with Random Shift

    PubMed Central

    Zhang, Yudong; Yang, Jiquan; Yang, Jianfei; Liu, Aijun; Sun, Ping

    2016-01-01

    Aim. Accelerating magnetic resonance imaging (MRI) scanning can help improve hospital throughput, and patients benefit from shorter waiting times. Task. In the last decade, various rapid MRI techniques based on compressed sensing (CS) have been proposed. However, neither the computation time nor the reconstruction quality of traditional CS-MRI met the requirements of clinical use. Method. In this study, a novel method is proposed, named the exponential wavelet iterative shrinkage-thresholding algorithm with random shift (abbreviated as EWISTARS). It is composed of three successful components: (i) an exponential wavelet transform, (ii) an iterative shrinkage-thresholding algorithm, and (iii) a random shift. Results. Experimental results validated that, compared to state-of-the-art approaches, EWISTARS obtained the lowest mean absolute error, the lowest mean-squared error, and the highest peak signal-to-noise ratio. Conclusion. EWISTARS is superior to state-of-the-art approaches. PMID:27066068

  8. Compressed sensing with wavelet domain dependencies for coronary MRI: a retrospective study.

    PubMed

    Akçakaya, Mehmet; Nam, Seunghoon; Hu, Peng; Moghari, Mehdi H; Ngo, Long H; Tarokh, Vahid; Manning, Warren J; Nezafat, Reza

    2011-05-01

    Coronary magnetic resonance imaging (MRI) is a noninvasive imaging modality for diagnosis of coronary artery disease. One of the limitations of coronary MRI is its long acquisition time, due to the need for imaging with high spatial resolution and constraints on respiratory and cardiac motion. Compressed sensing (CS) has recently been utilized to accelerate image acquisition in MRI. In this paper, we develop an improved CS reconstruction method, Bayesian least squares-Gaussian scale mixture (BLS-GSM), that uses dependencies of wavelet domain coefficients to reduce the blurring and reconstruction artifacts observed in coronary MRI with traditional l(1) regularization. Left and right coronary MRI images were acquired in 7 healthy subjects with fully-sampled k-space data. The data were retrospectively undersampled using acceleration rates of 2, 4, 6, and 8 and reconstructed using l(1) thresholding, l(1) minimization, and BLS-GSM thresholding. Reconstructed right and left coronary images were compared with fully-sampled reconstructions in terms of vessel sharpness and subjective image quality (1-4, poor to excellent). Mean square error (MSE) was also calculated for each reconstruction. There were no significant differences between the fully sampled image score and rates 2, 4, or 6 for BLS-GSM for both right and left coronaries (P = NS). However, for l(1) thresholding, significant differences were observed for rates higher than 2 and 4 for the right and left coronaries, respectively. l(1) minimization also yields images with lower scores compared to the reference for rates higher than 4 for both coronaries. These results were consistent with the quantitative vessel sharpness readings. BLS-GSM allows acceleration of coronary MRI with acceleration rates beyond what can be achieved with l(1) regularization. PMID:21536523

  9. Compression of ECG signals using variable-length classified vector sets and wavelet transforms

    NASA Astrophysics Data System (ADS)

    Gurkan, Hakan

    2012-12-01

    In this article, an improved and more efficient algorithm for the compression of electrocardiogram (ECG) signals is presented, which combines modeling of the ECG signal by variable-length classified signature and envelope vector sets (VL-CSEVS) with residual error coding via the wavelet transform. In particular, we form the VL-CSEVS from the ECG signals, exploiting the relationship between energy variation and clinical information. The VL-CSEVS are unique patterns generated from many thousands of ECG segments of two different lengths, obtained by an energy-based segmentation method; they are then made available to both the transmitter and the receiver in our proposed compression system. The proposed algorithm is tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Compression Test Database, and its performance is evaluated using metrics such as the percentage root-mean-square difference (PRD), modified PRD (MPRD), maximum error, and clinical evaluation. Our experimental results imply that the proposed algorithm achieves high compression ratios with a low reconstruction error while preserving the diagnostic information in the reconstructed ECG signal, which has been supported by the clinical tests we have carried out.

  10. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    NASA Technical Reports Server (NTRS)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.; Moore, Thomas E.

    2015-01-01

    Plasma measurements in space are becoming increasingly fast, high resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high-performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study explores how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error, the latter indicating that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
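
    A DWT can be made fully reversible on integer count data by using a lifting ("S-transform") formulation, which is the standard route to lossless wavelet coding; a sketch (integer Haar for illustration, not the CCSDS-standard filter):

```python
import numpy as np

def int_haar_fwd(x):
    # integer-to-integer Haar ("S-transform") via lifting: exactly reversible,
    # so a coder built on it can be fully lossless
    x = np.asarray(x, dtype=np.int64)
    d = x[0::2] - x[1::2]
    a = x[1::2] + d // 2        # floor division keeps everything integer
    return a, d

def int_haar_inv(a, d):
    x1 = a - d // 2
    out = np.empty(a.size * 2, dtype=np.int64)
    out[0::2], out[1::2] = x1 + d, x1
    return out

counts = np.array([3, 4, 4, 5, 120, 118, 2, 3])    # toy plasma count spectrum
a, d = int_haar_fwd(counts)
assert np.array_equal(int_haar_inv(a, d), counts)  # perfect round trip
```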

  11. Numerical solution of multi-dimensional compressible reactive flow using a parallel wavelet adaptive multi-resolution method

    NASA Astrophysics Data System (ADS)

    Grenga, Temistocle

    The aim of this research is to further develop a dynamically adaptive algorithm based on wavelets that is able to efficiently solve multi-dimensional compressible reactive flow problems. This work demonstrates the great potential of the method to perform direct numerical simulation (DNS) of combustion with detailed chemistry and multi-component diffusion. In particular, it addresses the performance obtained using a massively parallel implementation and demonstrates important savings in memory storage and computational time over conventional methods. In addition, fully-resolved simulations of challenging three-dimensional problems involving mixing and combustion processes are performed. These problems are particularly challenging due to their strong multiscale characteristics. Such solutions require combining advanced numerical techniques with modern computational resources.

  12. Gravity inversion using wavelet-based compression on parallel hybrid CPU/GPU systems: application to southwest Ghana

    NASA Astrophysics Data System (ADS)

    Martin, Roland; Monteiller, Vadim; Komatitsch, Dimitri; Perrouty, Stéphane; Jessell, Mark; Bonvalot, Sylvain; Lindsay, Mark

    2013-12-01

    We solve the 3-D gravity inverse problem using a massively parallel voxel (or finite element) implementation on a hybrid cluster of multiple CPUs and GPUs (graphics processing units). This allows us to obtain information on density distributions in heterogeneous media at an efficient computational cost. In a new software package called TOMOFAST3D, the inversion is solved with an iterative least-squares or gradient technique, which minimizes a hybrid L1-/L2-norm-based misfit function. It is drastically accelerated using either Haar or fourth-order Daubechies wavelet compression operators, which are applied to the sensitivity matrix kernels involved in the misfit minimization. The compression process behaves like a pre-conditioning of the huge linear system to be solved, and a reduction of two to three orders of magnitude in computational time can be obtained for a given number of CPU processor cores. The memory storage required is also reduced by a similar factor. Finally, we show how this CPU parallel inversion code can be accelerated further by a factor between 3.5 and 10 using GPU computing. Performance levels are given for an application to Ghana, and physical information obtained after 3-D inversion using a sensitivity matrix with around 5.37 trillion elements is discussed. Using compression, the whole inversion process can last from a few minutes to less than an hour for a given number of processor cores, instead of tens of hours for a similar number of processor cores when compression is not used.
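
    The storage/accuracy trade-off behind compressing sensitivity kernels can be illustrated in a few lines: wavelet-transform each kernel row, zero the small coefficients, and measure how little is lost. This sketch uses synthetic Gaussian kernels and a Haar transform only; the paper's actual kernels, thresholds, and Daubechies operators are not reproduced:

```python
import numpy as np

def haar_matrix(n):
    # orthonormal Haar transform matrix; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        m = H.shape[0]
        H = np.vstack([np.kron(H, [1.0, 1.0]),
                       np.kron(np.eye(m), [1.0, -1.0])]) / np.sqrt(2)
    return H

t = np.linspace(0, 1, 64)
# smooth synthetic "sensitivity kernel" rows (placeholders for gravity kernels)
K = np.array([np.exp(-((t - c) ** 2) / 0.01) for c in t[::4]])

H = haar_matrix(64)
W = K @ H.T                              # wavelet-transform each row
thr = 0.01 * np.abs(W).max()
W_sparse = np.where(np.abs(W) > thr, W, 0.0)
K_approx = W_sparse @ H                  # reconstruct from kept coefficients
kept = np.count_nonzero(W_sparse) / W_sparse.size
err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(f"kept {kept:.0%} of coefficients, relative error {err:.1%}")
```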

  13. Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression.

    PubMed

    Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander

    2016-05-01

    By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143
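
    The core idea, letting Haar coefficients decide which consecutive observations can be treated as one block, can be illustrated in a few lines (a crude sketch, not HaMMLET's dynamic data structure or sampler):

```python
import numpy as np

def haar_fwd_full(x):
    # full Haar decomposition (length must be a power of two)
    coeffs, a = [], np.asarray(x, dtype=float)
    while a.size > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
    return a, coeffs

def haar_inv_full(a, coeffs):
    for d in reversed(coeffs):
        out = np.empty(a.size * 2)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def blocks(y, thresh):
    # zero sub-threshold details -> piecewise-constant approximation;
    # runs of equal values are the blocks over which an HMM can reuse
    # one emission computation instead of recomputing per observation
    a, coeffs = haar_fwd_full(y)
    z = haar_inv_full(a, [np.where(np.abs(d) > thresh, d, 0.0) for d in coeffs])
    cuts = np.flatnonzero(np.diff(z) != 0) + 1
    return np.split(z, cuts)

y = np.concatenate([np.full(8, 0.0), np.full(8, 3.0)])   # one copy-number step
print(len(blocks(y, 0.5)))
```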

  14. Fast Bayesian Inference of Copy Number Variants using Hidden Markov Models with Wavelet Compression

    PubMed Central

    Wiedenhoeft, John; Brugel, Eric; Schliep, Alexander

    2016-01-01

    By integrating Haar wavelets with Hidden Markov Models, we achieve drastically reduced running times for Bayesian inference using Forward-Backward Gibbs sampling. We show that this improves detection of genomic copy number variants (CNV) in array CGH experiments compared to the state-of-the-art, including standard Gibbs sampling. The method concentrates computational effort on chromosomal segments which are difficult to call, by dynamically and adaptively recomputing consecutive blocks of observations likely to share a copy number. This makes routine diagnostic use and re-analysis of legacy data collections feasible; to this end, we also propose an effective automatic prior. An open source software implementation of our method is available at http://schlieplab.org/Software/HaMMLET/ (DOI: 10.5281/zenodo.46262). This paper was selected for oral presentation at RECOMB 2016, and an abstract is published in the conference proceedings. PMID:27177143

  15. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    SciTech Connect

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
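
    The wavelet-smoothing scheme itself is not reproduced here, but the idea of smoothly extending defined samples into masked holes before encoding can be illustrated with a simple Jacobi-style relaxation (a hypothetical helper, not the paper's transform-based method):

```python
import numpy as np

def fill_masked(field, mask, iters=200):
    # not the paper's wavelet-smoothing: a simple Jacobi-style relaxation that
    # smoothly extends defined samples into masked holes, producing a fully
    # defined rectangle suitable for a JPEG 2000 encoder
    f = np.where(mask, 0.0, field).astype(float)
    for _ in range(iters):
        avg = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        f = np.where(mask, avg, field)   # relax masked samples only
    return f

field = np.full((16, 16), 5.0)           # toy "ocean" field
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True                  # a hole of undefined samples
filled = fill_masked(field, mask)
```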

  16. Wavelet-based watermarking and compression for ECG signals with verification evaluation.

    PubMed

    Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan

    2014-01-01

    In the current open society and with the growth of human rights, people are increasingly concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new approach that integrates electrocardiogram watermarking and compression, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal to noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible. PMID:24566636
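
    One way watermarking can coexist with a reversible wavelet decomposition is to hide the mark in detail-coefficient LSBs. A toy scheme for illustration only (not the authors' algorithm; helper names are hypothetical):

```python
import numpy as np

def embed(signal, bits):
    # toy scheme (not the paper's): hide watermark bits in the LSBs of
    # integer Haar detail coefficients; the lifting steps stay reversible
    x = np.asarray(signal, dtype=np.int64)
    d = x[0::2] - x[1::2]
    a = x[1::2] + d // 2
    d = (d & ~1) | np.asarray(bits, dtype=np.int64)   # overwrite detail LSBs
    x1 = a - d // 2
    out = np.empty_like(x)
    out[0::2], out[1::2] = x1 + d, x1
    return out

def extract(watermarked):
    x = np.asarray(watermarked, dtype=np.int64)
    return (x[0::2] - x[1::2]) & 1

rng = np.random.default_rng(7)
ecg = rng.integers(0, 4096, 128)            # stand-in for 12-bit ECG samples
bits = rng.integers(0, 2, 64)
marked = embed(ecg, bits)
assert np.array_equal(extract(marked), bits)
assert np.abs(marked - ecg).max() <= 1      # distortion of at most one LSB
```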

  17. Wavelet-Based Watermarking and Compression for ECG Signals with Verification Evaluation

    PubMed Central

    Tseng, Kuo-Kun; He, Xialong; Kung, Woon-Man; Chen, Shuo-Tsung; Liao, Minghong; Huang, Huang-Nan

    2014-01-01

    In the current open society and with the growth of human rights, people are increasingly concerned about the privacy of their information and other important data. This study makes use of electrocardiography (ECG) data in order to protect individual information. An ECG signal can not only be used to analyze disease, but also to provide crucial biometric information for identification and authentication. In this study, we propose a new approach that integrates electrocardiogram watermarking and compression, which has not been researched before. ECG watermarking can ensure the confidentiality and reliability of a user's data while reducing the amount of data. In the evaluation, we apply the embedding capacity, bit error rate (BER), signal-to-noise ratio (SNR), compression ratio (CR), and compressed-signal to noise ratio (CNR) methods to assess the proposed algorithm. After comprehensive evaluation, the final results show that our algorithm is robust and feasible. PMID:24566636

  19. Psychophysical evaluation of the effect of JPEG, full-frame discrete cosine transform (DCT) and wavelet image compression on signal detection in medical image noise

    NASA Astrophysics Data System (ADS)

    Eckstein, Miguel P.; Morioka, Craig A.; Whiting, James S.; Eigler, Neal L.

    1995-04-01

    Image quality associated with image compression has been either arbitrarily evaluated through visual inspection, loosely defined in terms of subjective criteria such as image sharpness or blockiness, or measured by arbitrary measures such as the mean square error between the uncompressed and compressed image. The present paper psychophysically evaluated the effect of three different compression algorithms (JPEG, full-frame, and wavelet) on human visual detection of computer-simulated low-contrast lesions embedded in real medical image noise from patient coronary angiograms. Performance identifying the signal-present location, as measured by the d' index of detectability, decreased for all three algorithms by approximately 30% and 62% for the 16:1 and 30:1 compression ratios, respectively. We evaluated the ability of two previously proposed measures of image quality, mean square error (MSE) and normalized nearest neighbor difference (NNND), to determine the best compression algorithm. The MSE predicted significantly higher image quality for the JPEG algorithm at the 16:1 compression ratio and for both JPEG and full-frame at the 30:1 compression ratio. The NNND predicted significantly higher image quality for the full-frame algorithm at both compression ratios. These findings suggest that these two measures of image quality may lead to erroneous conclusions in evaluations and/or optimizations of image compression algorithms.
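
    The d' index of detectability can be computed from hit and false-alarm rates with the inverse normal CDF; the yes/no formulation is shown below (forced-choice variants, as likely used in a location-identification task, differ by a scale factor):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # d' = z(H) - z(FA), the signal detection theory index of detectability
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# e.g. 84% hits with 16% false alarms gives d' close to 2
print(round(d_prime(0.84, 0.16), 2))
```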

  20. Scalable medical data compression and transmission using wavelet transform for telemedicine applications.

    PubMed

    Hwang, Wen-Jyi; Chine, Ching-Fung; Li, Kuo-Jung

    2003-03-01

    In this paper, a novel medical data compression algorithm, termed layered set partitioning in hierarchical trees (LSPIHT), is presented for telemedicine applications. In the LSPIHT, the encoded bit streams are divided into a number of layers for transmission and reconstruction. Starting from the base layer, by accumulating bit streams up to different enhancement layers, we can reconstruct medical data with various signal-to-noise ratios (SNRs) and/or resolutions. Receivers with distinct specifications can then share the same source encoder to reduce the complexity of telecommunication networks for telemedicine applications. Numerical results show that, besides having low network complexity, the LSPIHT attains better rate-distortion performance as compared with other algorithms for encoding medical data. PMID:12670019
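
    The layering idea can be mimicked crudely: order coefficients by magnitude and emit them in layers, so that each accumulated layer raises reconstruction fidelity (a sketch; real SPIHT-style coders encode bit planes of zerotrees, not raw indices):

```python
import numpy as np

def layered_streams(coeffs, n_layers):
    # crude stand-in for LSPIHT layering: rank coefficients by magnitude and
    # split the index order into layers
    order = np.argsort(-np.abs(coeffs))
    return np.array_split(order, n_layers)

def reconstruct(coeffs, layers, upto):
    # accumulate bit streams up to a chosen enhancement layer
    rec = np.zeros_like(coeffs)
    for layer in layers[:upto]:
        rec[layer] = coeffs[layer]
    return rec

rng = np.random.default_rng(0)
c = rng.laplace(size=256)                # sparse-ish "wavelet coefficients"
layers = layered_streams(c, 4)
errs = [np.linalg.norm(c - reconstruct(c, layers, k)) for k in range(1, 5)]
# each additional layer strictly reduces the reconstruction error
```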

  2. Study on Optimization Method of Quantization Step and the Image Quality Evaluation for Medical Ultrasonic Echo Image Compression by Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Khieovongphachanh, Vimontha; Hamamoto, Kazuhiko; Kondo, Shozo

    In this paper, we investigate an optimized quantization method in JPEG2000 applications for medical ultrasonic echo images. JPEG2000 has been issued as the new standard image compression technique; it is based on the Wavelet Transform (WT) and has been incorporated into DICOM (Digital Imaging and Communications in Medicine). There are two quantization methods. One is scalar derived quantization (SDQ), which is usually used in standard JPEG2000. The other is scalar expounded quantization (SEQ), which can be optimized by the user. This paper therefore presents an optimization of the quantization step, determined by a Genetic Algorithm (GA), and the results are compared with SDQ and with SEQ determined by an arithmetic average method. The purpose of this paper is to improve image quality and compression ratio for medical ultrasonic echo images. Image quality is evaluated by objective assessment, PSNR (Peak Signal to Noise Ratio), and by subjective assessment performed by ultrasonographers from Tokai University Hospital and Tokai University Hachioji Hospital. The results show that SEQ determined by GA provides better image quality than SDQ and than SEQ determined by the arithmetic average method. Additionally, the three optimization methods of the quantization step are applied to a thin wire target image for analysis of the point spread function.
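
    A quantization step can be searched by a GA much as the abstract describes; here is a toy version on synthetic Laplacian-distributed coefficients, minimizing a hypothetical MSE-plus-rate cost (the paper's actual fitness function, GA settings, and subband structure are not given here):

```python
import numpy as np

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=2.0, size=4096)    # stand-in for wavelet coefficients

def cost(step, lam=0.05):
    q = np.round(coeffs / step)
    mse = np.mean((coeffs - q * step) ** 2)
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))            # empirical entropy, bits/sample
    return mse + lam * rate                   # Lagrangian rate-distortion cost

# tiny GA: elitist selection plus multiplicative mutation
pop = np.concatenate([[0.05, 4.0], rng.uniform(0.05, 4.0, 18)])
for _ in range(30):
    fit = np.array([cost(s) for s in pop])
    parents = pop[np.argsort(fit)[:10]]       # keep the 10 fittest (elitism)
    children = parents[rng.integers(0, 10, 10)] * rng.normal(1.0, 0.1, 10)
    pop = np.concatenate([parents, np.clip(children, 0.01, 8.0)])
best = pop[np.argmin([cost(s) for s in pop])]
```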

  3. Wavelet multiscale processing of remote sensing data

    NASA Astrophysics Data System (ADS)

    Bagmanov, Valeriy H.; Kharitonov, Svyatoslav V.; Meshkov, Ivan K.; Sultanov, Albert H.

    2008-12-01

    A comparative analysis of methods for estimating the Hurst index (index of self-similarity) and of wavelet types used for image decomposition is offered. Five types of wavelets are compared: Haar wavelets, Daubechies wavelets, discrete Meyer wavelets, symlets, and coiflets. Meyer and Haar wavelets demonstrate the best quality of the restored image, because they are characterized by minimal reconstruction errors, but the compression index for these types is smaller than for Daubechies wavelets, symlets, and coiflets. Conversely, the latter attain lower decompression precision. Since the complexity of implementing wavelet transforms on digital signal processors (DSPs) must also be taken into consideration, the Haar wavelet transform is the simplest method.
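
    The Hurst (self-similarity) index mentioned above can be estimated, for example, by the aggregated-variance method (one of several estimators; not necessarily among those compared in the paper):

```python
import numpy as np

def hurst_aggvar(x, scales=(1, 2, 4, 8, 16)):
    # aggregated-variance estimator: the variance of block means of size m
    # scales as m^(2H - 2), so H comes from a log-log regression slope
    x = np.asarray(x, dtype=float)
    v = []
    for m in scales:
        n = x.size // m
        v.append(x[:n * m].reshape(n, m).mean(axis=1).var())
    slope = np.polyfit(np.log(scales), np.log(v), 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)   # uncorrelated noise has H = 0.5
print(round(hurst_aggvar(white), 2))
```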

  4. Using PACS and wavelet-based image compression in a wide-area network to support radiation therapy imaging applications for satellite hospitals

    NASA Astrophysics Data System (ADS)

    Smith, Charles L.; Chu, Wei-Kom; Wobig, Randy; Chao, Hong-Yang; Enke, Charles

    1999-07-01

    An ongoing PACS project at our facility has been expanded to include providing and managing images used for routine clinical operation of the department of radiation oncology. The intent of our investigation has been to enable our clinical radiotherapy service to enter the tele-medicine environment through the use of a PACS system initially implemented in the department of radiology. The backbone for the imaging network includes five CT and three MR scanners located across three imaging centers. A PC workstation in the department of radiation oncology was used to transmit CT images to a satellite facility located approximately 60 miles from the primary center. Chest CT images were used to analyze network transmission performance. Connectivity established between the primary department and the satellite has fulfilled all image criteria required by the oncologist. Establishing the link to the oncologist at the satellite diminished bottlenecking of imaging-related tasks at the primary facility due to physician absence. A 30:1 compression ratio using a wavelet-based algorithm provided clinically acceptable images for treatment planning. Clinical radiotherapy images can be effectively managed in a wide-area network to link satellite facilities to larger clinical centers.

  5. Wavelets meet genetic imaging

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Ping

    2005-08-01

    Genetic image analysis is an interdisciplinary area, which combines microscope image processing techniques with the use of biochemical probes for the detection of genetic aberrations responsible for cancers and genetic diseases. Recent years have witnessed parallel and significant progress in both image processing and genetics. On one hand, revolutionary multiscale wavelet techniques have been developed in signal processing and applied mathematics in the last decade, providing sophisticated tools for genetic image analysis. On the other hand, reaping the fruit of genome sequencing, high resolution genetic probes have been developed to facilitate accurate detection of subtle and cryptic genetic aberrations. In the meantime, however, they bring about computational challenges for image analysis. In this paper, we review the fruitful interaction between wavelets and genetic imaging. We show how wavelets offer a perfect tool to address a variety of chromosome image analysis problems. In fact, the same word "subband" has been used in the nomenclature of cytogenetics to describe the multiresolution banding structure of the chromosome, even before its appearance in the wavelet literature. The application of wavelets to chromosome analysis holds great promise in addressing several computational challenges in genetics. A variety of real world examples such as the chromosome image enhancement, compression, registration and classification will be demonstrated. These examples are drawn from fluorescence in situ hybridization (FISH) and microarray (gene chip) imaging experiments, which indicate the impact of wavelets on the diagnosis, treatments and prognosis of cancers and genetic diseases.

  6. Why are wavelets so effective

    SciTech Connect

Resnikoff, H.L.

    1993-01-01

    The theory of compactly supported wavelets is now 4 yr old. In that short period, it has stimulated significant research in pure mathematics; has been the source of new numerical methods for the solution of nonlinear partial differential equations, including Navier-Stokes; and has been applied to digital signal-processing problems, ranging from signal detection and classification to signal compression for speech, audio, images, seismic signals, and sonar. Wavelet channel coding has even been proposed for code division multiple access digital telephony. In each of these applications, prototype wavelet solutions have proved to be competitive with established methods, and in many cases they are already superior.

  7. Wavelet Representation of Contour Sets

    SciTech Connect

    Bertram, M; Laney, D E; Duchaineau, M A; Hansen, C D; Hamann, B; Joy, K I

    2001-07-19

    We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression, introducing high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields.

  8. Low-Oscillation Complex Wavelets

    NASA Astrophysics Data System (ADS)

    ADDISON, P. S.; WATSON, J. N.; FENG, T.

    2002-07-01

    In this paper we explore the use of two low-oscillation complex wavelets—Mexican hat and Morlet—as powerful feature detection tools for data analysis. These wavelets, which have been largely ignored to date in the scientific literature, allow for a decomposition which is more “temporal than spectral” in wavelet space. This is shown to be useful for the detection of small amplitude, short duration signal features which are masked by much larger fluctuations. Wavelet transform-based methods employing these wavelets (based on both wavelet ridges and modulus maxima) are developed and applied to sonic echo NDT signals used for the analysis of structural elements. A new mobility scalogram and associated reflectogram is defined for analysis of impulse response characteristics of structural elements and a novel signal compression technique is described in which the pertinent signal information is contained within a few modulus maxima coefficients. As an example of its usefulness, the signal compression method is employed as a pre-processor for a neural network classifier. The authors believe that low oscillation complex wavelets have wide applicability to other practical signal analysis problems. Their possible application to two such problems is discussed briefly—the interrogation of arrhythmic ECG signals and the detection and characterization of coherent structures in turbulent flow fields.
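
    The small-feature detection property described here is easy to demonstrate: a unit spike hidden under a much larger, slower oscillation still dominates the Mexican hat response at a fine scale (a minimal sketch, not the paper's ridge/modulus-maxima machinery):

```python
import numpy as np

def mexican_hat(t, s):
    # Mexican hat (Ricker) wavelet at scale s (unnormalised)
    u = t / s
    return (1.0 - u ** 2) * np.exp(-u ** 2 / 2.0)

def cwt_row(signal, s):
    # one scale of a continuous wavelet transform via direct convolution
    t = np.arange(-4 * s, 4 * s + 1)
    return np.convolve(signal, mexican_hat(t, s), mode="same")

n = 512
x = 5.0 * np.sin(np.linspace(0, 2 * np.pi, n))   # large slow fluctuation
x[256] += 1.0                                    # small short-duration feature
row = cwt_row(x, s=2.0)
# the spike, not the big oscillation, gives the largest fine-scale response
print(np.argmax(np.abs(row)))
```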

  9. Wavelet theory and its applications

    SciTech Connect

    Faber, V.; Bradley, JJ.; Brislawn, C.; Dougherty, R.; Hawrylycz, M.

    1996-07-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We investigated the theory of wavelet transforms and their relation to Laboratory applications. The investigators have had considerable success in the past applying wavelet techniques to the numerical solution of optimal control problems for distributed-parameter systems, nonlinear signal estimation, and compression of digital imagery and multidimensional data. Wavelet theory involves ideas from the fields of harmonic analysis, numerical linear algebra, digital signal processing, approximation theory, and numerical analysis, and the new computational tools arising from wavelet theory are proving to be ideal for many Laboratory applications. 10 refs.

  10. Optical wavelet transform for fingerprint identification

    NASA Astrophysics Data System (ADS)

    MacDonald, Robert P.; Rogers, Steven K.; Burns, Thomas J.; Fielding, Kenneth H.; Warhola, Gregory T.; Ruck, Dennis W.

    1994-03-01

    The Federal Bureau of Investigation (FBI) has recently sanctioned a wavelet fingerprint image compression algorithm developed for reducing storage requirements of digitized fingerprints. This research implements an optical wavelet transform of a fingerprint image, as the first step in an optical fingerprint identification process. Wavelet filters are created from computer- generated holograms of biorthogonal wavelets, the same wavelets implemented in the FBI algorithm. Using a detour phase holographic technique, a complex binary filter mask is created with both symmetry and linear phase. The wavelet transform is implemented with continuous shift using an optical correlation between binarized fingerprints written on a Magneto-Optic Spatial Light Modulator and the biorthogonal wavelet filters. A telescopic lens combination scales the transformed fingerprint onto the filters, providing a means of adjusting the biorthogonal wavelet filter dilation continuously. The wavelet transformed fingerprint is then applied to an optical fingerprint identification process. Comparison between normal fingerprints and wavelet transformed fingerprints shows improvement in the optical identification process, in terms of rotational invariance.

  11. Wavelets for sign language translation

    NASA Astrophysics Data System (ADS)

    Wilson, Beth J.; Anspach, Gretel

    1993-10-01

    Wavelet techniques are applied to help extract the relevant parameters of sign language from video images of a person communicating in American Sign Language or Signed English. The compression and edge detection features of two-dimensional wavelet analysis are exploited to enhance the algorithms under development to classify the hand motion, hand location with respect to the body, and handshape. These three parameters have different processing requirements and complexity issues. The results are described for applying various quadrature mirror filter designs to a filterbank implementation of the desired wavelet transform. The overall project is to develop a system that will translate sign language to English to facilitate communication between deaf and hearing people.

  12. Visibility of wavelet quantization noise

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Yang, G. Y.; Solomon, J. A.; Villasenor, J.

    1997-01-01

    The discrete wavelet transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that we call DWT uniform quantization noise; it is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-λ), where r is display visual resolution in pixels/degree, and λ is the wavelet level. Thresholds increase rapidly with wavelet spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from lowpass to horizontal/vertical to diagonal. We construct a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
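
    The model's functional form, a detection threshold that is parabolic in log spatial frequency with wavelet frequency r·2^(-λ), can be sketched as follows. The constants k, f0, and t_min below are illustrative placeholders, not the fitted per-channel, per-orientation values from the paper:

```python
import math

def dwt_threshold(level, r, k=0.5, f0=3.0, t_min=0.1):
    # parabola in log spatial frequency, matching the abstract's functional
    # form; k, f0 and t_min are hypothetical, NOT the paper's fitted constants
    f = r * 2.0 ** (-level)               # wavelet spatial frequency, cycles/deg
    return t_min * 10.0 ** (k * math.log10(f / f0) ** 2)

# a 'perceptually lossless' step keeps quantization error (at most step/2)
# below threshold, hence step = 2 * threshold per DWT level
qsteps = [round(2.0 * dwt_threshold(lev, r=32.0), 3) for lev in range(1, 6)]
print(qsteps)
```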

  13. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.

  14. Multiresolution With Super-Compact Wavelets

    NASA Technical Reports Server (NTRS)

    Lee, Dohyung

    2000-01-01

    The solution data computed from large-scale simulations are sometimes too big for main memory, for local disks, and possibly even for a remote storage disk, creating tremendous processing times as well as technical difficulties in analyzing the data. The excessive storage exacts a corresponding huge penalty in I/O time, rendering time, and transmission time between different computer systems. In this paper, a multiresolution scheme is proposed to compress field simulation or experimental data without much loss of important information in the representation. Originally, the wavelet-based multiresolution scheme was introduced in image processing for the purposes of data compression and feature extraction. Unlike photographic image data, which has a rather simple setting, computational field simulation data needs more careful treatment when applying the multiresolution technique. While image data sits on a regularly spaced grid, simulation data usually resides on a structured curvilinear grid or an unstructured grid. In addition to the irregularity in grid spacing, the other difficulty is that the solutions consist of vectors instead of scalar values. These data characteristics demand more restrictive conditions. In general, photographic images have very little inherent smoothness, with discontinuities almost everywhere. Numerical solutions, on the other hand, are smooth almost everywhere, with discontinuities in local areas (shocks, vortices, and shear layers). The wavelet bases should be amenable to the solution of the problem at hand and applicable to constraints such as numerical accuracy and boundary conditions. In choosing a suitable wavelet basis for simulation data among a variety of wavelet families, the supercompact wavelets designed by Beam and Warming provide one of the most effective multiresolution schemes. Supercompact multi-wavelets retain the compactness of Haar wavelets, are piecewise polynomial and orthogonal, and can have arbitrary order of

  15. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and of reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  16. Interframe vector wavelet coding technique

    NASA Astrophysics Data System (ADS)

    Wus, John P.; Li, Weiping

    1997-01-01

    Wavelet coding is often used to divide an image into multiresolution wavelet coefficients, which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ scheme, in which the current state is determined by the previous channel symbol only, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.

  17. Visibility of Wavelet Quantization Noise

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is display visual resolution in pixels/degree, and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We describe a mathematical model to predict DWT noise detection thresholds as a function of level, orientation, and display visual resolution. This allows calculation of a "perceptually lossless" quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  18. Wavelet Analysis of Space Solar Telescope Images

    NASA Astrophysics Data System (ADS)

    Zhu, Xi-An; Jin, Sheng-Zhen; Wang, Jing-Yu; Ning, Shu-Nian

    2003-12-01

    The scientific satellite SST (Space Solar Telescope) is an important research project strongly supported by the Chinese Academy of Sciences. Every day, SST acquires 50 GB of data (after processing), but only 10 GB can be transmitted to the ground because of the limited time of satellite passage and limited channel volume. Therefore, the data must be compressed before transmission. Wavelet analysis is a new technique developed over the last 10 years, with great application potential. We start with a brief introduction to the essential principles of wavelet analysis, and then describe the main idea of embedded zerotree wavelet coding, used for compressing the SST images. The results show that this coding is adequate for the job.

  19. Application specific compression : final report.

    SciTech Connect

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
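    The zero-the-small-coefficients idea described above can be sketched with a one-level Haar transform. This is a hypothetical toy, not the report's actual pipeline; the signal and threshold are made up:

```python
def haar_forward(x):
    # One level of the orthonormal Haar transform: low-pass (averages)
    # and high-pass (details), the two-filter view described above.
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x += [s * (a + d), s * (a - d)]
    return x

def compress(x, thresh):
    # Zero the small, noise-like detail coefficients before reconstruction;
    # the runs of zeros then make lossless entropy coding cheap.
    approx, detail = haar_forward(x)
    detail = [d if abs(d) > thresh else 0.0 for d in detail]
    return haar_inverse(approx, detail)

signal = [4.0, 4.1, 8.0, 7.9, 1.0, 1.2, 5.0, 4.8]
reconstructed = compress(signal, thresh=0.5)
# Small pairwise fluctuations are discarded; the large steps survive.
```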

  20. Progressive Compression of Volumetric Subdivision Meshes

    SciTech Connect

    Laney, D; Pascucci, V

    2004-04-16

    We present a progressive compression technique for volumetric subdivision meshes based on the slow growing refinement algorithm. The system comprises a wavelet transform followed by a progressive encoding of the resulting wavelet coefficients. We compare the efficiency of two wavelet transforms. The first transform is based on the smoothing rules used in the slow growing subdivision technique. The second transform is a generalization of lifted linear B-spline wavelets to the same multi-tier refinement structure. Direct coupling with a hierarchical coder produces progressive bit streams. Rate-distortion metrics are evaluated for both wavelet transforms. We tested the practical performance of the scheme on synthetic data as well as data from laser indirect-drive fusion simulations with multiple fields per vertex. Both wavelet transforms result in high-quality trade-off curves and produce qualitatively good coarse representations.

  1. Real-time video codec using reversible wavelets

    NASA Astrophysics Data System (ADS)

    Huang, Gen Dow; Chiang, David J.; Huang, Yi-En; Cheng, Allen

    2003-04-01

    This paper describes the hardware implementation of a real-time video codec using reversible wavelets. The TechSoft (TS) real-time video system employs wavelet differencing for inter-frame compression, based on independent Embedded Block Coding with Optimized Truncation (EBCOT) of the embedded bit stream. This high-performance scalable image compression using EBCOT has been selected as part of the new ISO image compression standard, JPEG2000. The TS real-time video system can process up to 30 frames per second (fps) of the DVD format. In addition, audio signals are processed by the same design for cost reduction. Reversible wavelets are used not only for cost reduction, but also for lossless applications. Design and implementation issues of the TS real-time video system are discussed.
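    The reversibility that makes lossless coding possible can be illustrated with the integer-to-integer Haar ("S") transform built from lifting steps. This is a generic sketch under the assumption of even-length input, not the TS codec's actual filter:

```python
def s_transform(x):
    # Integer-to-integer Haar ("S") transform via lifting: floor-average
    # low band plus difference high band. Assumes len(x) is even.
    lo = [(x[i] + x[i + 1]) >> 1 for i in range(0, len(x), 2)]
    hi = [x[i] - x[i + 1] for i in range(0, len(x), 2)]
    return lo, hi

def s_inverse(lo, hi):
    # Exactly undoes s_transform, which is what makes lossless coding possible.
    x = []
    for a, d in zip(lo, hi):
        first = a + ((d + 1) >> 1)  # recover x[i] from floor-average + diff
        x += [first, first - d]
    return x

pixels = [5, 3, 2, 5, 250, 4]
assert s_inverse(*s_transform(pixels)) == pixels  # bit-exact round trip
```

Because every step is integer arithmetic, the round trip is exact, unlike a floating-point wavelet transform.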

  2. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  3. Wavelet image coding with parametric thresholding: application to JPEG2000

    NASA Astrophysics Data System (ADS)

    Zaid, Azza O.; Olivier, Christian; Marmoiton, Francois

    2003-05-01

    With the increasing use of multimedia technologies, image compression requires higher performance as well as new features. To address this need in the specific area of image coding, the latest ISO/IEC image compression standard, JPEG2000, has been developed. In Part II of the standard, the Wavelet Trellis Coded Quantization (WTCQ) algorithm was adopted. It has been shown that this quantization design provides subjective image quality superior to other existing quantization techniques. In this paper we aim to improve the rate-distortion performance of WTCQ by incorporating a thresholding process in the JPEG2000 coding chain. The threshold decisions are derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD). The threshold value depends on the parametric model estimation of the subband wavelet coefficient distribution. Our algorithm approaches the lowest possible memory usage by using a line-based wavelet transform and a scan-based bit allocation technique. In our work, we investigate an efficient way to apply TCQ to wavelet image coding with regard to both computational complexity and compression performance. Experimental results show that the proposed algorithm performs competitively with the best coding algorithms reported in the literature in terms of quality performance.

  4. Wavelet analysis of electric adjustable speed drive waveforms

    SciTech Connect

    Czarkowski, D.; Domijan, A. Jr.

    1998-10-01

    The three most common adjustable speed drives (ASDs) used in HVAC equipment, namely the pulse-width modulated (PWM) induction drive, the brushless-dc drive, and the switched-reluctance drive, generate non-periodic and nonstationary electric waveforms with sharp edges and transients. Deficiencies of Fourier transform methods in the analysis of such ASD waveforms prompted an application of the wavelet transform. Results of discrete wavelet transform (DWT) analysis of PWM inverter-fed motor waveforms are presented. The best mother wavelet for analysis of the recorded waveforms is selected. Data compression properties of the selected mother wavelet are compared to those of the fast Fourier transform (FFT). Multilevel feature detection of ASD waveforms using the DWT is shown.

  5. Periodized Daubechies wavelets

    SciTech Connect

    Restrepo, J.M.; Leaf, G.K.; Schlossnagle, G.

    1996-03-01

    The properties of periodized Daubechies wavelets on [0,1] are detailed, as are counterparts which form a basis for L^2(R). Numerical examples illustrate the analytical estimates for convergence and demonstrate, by comparison with Fourier spectral methods, the superiority of wavelet projection methods for approximations. The analytical solution to inner products of periodized wavelets and their derivatives, which are known as connection coefficients, is presented, and their use is illustrated in the approximation of two commonly used differential operators. The periodization of the connection coefficients in Galerkin schemes is presented in detail.

  6. Sparse imaging of cortical electrical current densities via wavelet transforms

    NASA Astrophysics Data System (ADS)

    Liao, Ke; Zhu, Min; Ding, Lei; Valette, Sébastien; Zhang, Wenbo; Dickens, Deanna

    2012-11-01

    While the cerebral cortex in the human brain is of functional importance, functions defined on this structure are difficult to analyze spatially due to its highly convoluted, irregular geometry. This study developed a novel L1-norm regularization method using a newly proposed multi-resolution face-based wavelet method to estimate cortical electrical activities in electroencephalography (EEG) and magnetoencephalography (MEG) inverse problems. The proposed wavelets were developed from multi-resolution models built on irregular cortical surface meshes, which were also realized in this study. The multi-resolution wavelet analysis was used to seek sparse representations of cortical current densities in transformed domains, expected due to the compressibility of wavelets, and was evaluated using Monte Carlo simulations. The EEG/MEG inverse problems were solved with the novel L1-norm regularization method, exploiting the sparseness in the wavelet domain. The inverse solutions obtained from the new method using MEG data were likewise evaluated by Monte Carlo simulations. The present results indicated that cortical current densities can be efficiently compressed using the proposed face-based wavelet method, which exhibited better performance than the vertex-based wavelet method. In both simulations and auditory experimental data analysis, the proposed L1-norm regularization method showed better source detection accuracy and smaller estimation errors than two other classic methods, i.e., weighted minimum norm (wMNE) and cortical low-resolution electromagnetic tomography (cLORETA). This study suggests that the L1-norm regularization method with face-based wavelets is a promising tool for studying functional activations of the human brain.

  7. Global and Local Distortion Inference During Embedded Zerotree Wavelet Decompression

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.

    1996-01-01

    This paper presents algorithms for inferring global and spatially local estimates of the squared-error distortion measures for the Embedded Zerotree Wavelet (EZW) image compression algorithm. All distortion estimates are obtained at the decoder without significantly compromising EZW's rate-distortion performance. Two methods are given for propagating distortion estimates from the wavelet domain to the spatial domain, thus giving individual estimates of distortion for each pixel of the decompressed image. These local distortion estimates seem to provide only slight improvement in the statistical characterization of EZW compression error relative to the global measure, unless actual squared errors are propagated. However, they provide qualitative information about the asymptotic nature of the error that may be helpful in wavelet filter selection for low bit rate applications.

  8. Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data

    SciTech Connect

    Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin; Clyne, John; Childs, Hank

    2015-10-25

    I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors across wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.

  9. Wavelets and electromagnetics

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.

    1992-01-01

    Wavelets are an exciting new topic in applied mathematics and signal processing. This paper provides a brief review of wavelets, which are families of functions, with an emphasis on interpretation rather than rigor. We derive an indirect use of wavelets for the solution of integral equations, based on techniques adapted from image processing. Examples for resistive strips are given, illustrating the effect of these techniques as well as their promise in dramatically reducing the requirements for solving an integral equation for large bodies. We also present a direct implementation of wavelets to solve an integral equation. Both methods suggest future research topics and may hold promise for a variety of uses in computational electromagnetics.

  10. Wavelet transform analysis of transient signals: the seismogram and the electrocardiogram

    SciTech Connect

    Anant, K.S.

    1997-06-01

    In this dissertation I quantitatively demonstrate how the wavelet transform can be an effective mathematical tool for the analysis of transient signals. The two key signal processing applications of the wavelet transform, namely feature identification and representation (i.e., compression), are shown by solving important problems involving the seismogram and the electrocardiogram. The seismic feature identification problem involved locating in time the P and S phase arrivals. Locating these arrivals accurately (particularly the S phase) has been a constant issue in seismic signal processing. In Chapter 3, I show that the wavelet transform can be used to locate both the P and the S phase using only information from single-station three-component seismograms. This is accomplished by using the basis function (wavelet) of the wavelet transform as a matching filter and by processing information across scales of the wavelet domain decomposition. The 'pick' time results are quite promising as compared to analyst picks. The representation application involved the compression of the electrocardiogram, which is a recording of the electrical activity of the heart. Compression of the electrocardiogram is an important problem in biomedical signal processing due to transmission and storage limitations. In Chapter 4, I develop an electrocardiogram compression method that applies vector quantization to the wavelet transform coefficients. The best compression results were obtained by using orthogonal wavelets, due to their ability to represent a signal efficiently. Throughout this thesis the importance of choosing wavelets based on the problem at hand is stressed. In Chapter 5, I introduce a wavelet design method that uses linear prediction to design wavelets that are geared to the signal or feature being analyzed. The use of these designed wavelets in a test feature identification application led to positive results. The methods developed in this thesis; the

  11. Design of a wavelet slave processor for audio and video decompression

    NASA Astrophysics Data System (ADS)

    Lu, Yan; Zhao, Debin; Chan, Yiu K.

    2001-03-01

    In terms of image and video compression, it is well known that the wavelet transform (WT) can achieve higher compression efficiency than the discrete cosine transform (DCT) when a post-transform coding scheme of similar computational complexity is used. On the other hand, it is also well known that the wavelet approach has a higher computational complexity than DCT, both in software and in hardware. When both audio and video compression are required, as in the case of video recording, it is desirable to achieve higher compression efficiency using WT and to share the same hardware based on WT technology. It is the intention of this paper to present an architecture for a WT slave processor. In this paper, our own results for image and audio compression are presented to show the effectiveness of the wavelet transform. We then show that, based on our own experience, an integer-based wavelet transform has enough accuracy for both audio and video. We then present decompression executable code, which is an intermediate step before the hardware architecture, and finally an architectural design for an integer Wavelet Slave Processor (WSP) for decompression. This proposed WSP can also be designed, as a variation on a theme, for the compression of audio and video data.

  12. Research on the technique of public watermarking system based on wavelet transform and neural network

    NASA Astrophysics Data System (ADS)

    Xu, Li; Tao, Gu

    2007-04-01

    A hybrid algorithm using a wavelet transform and a neural network is presented which solves the problems confronted in public watermarking systems. First, to get the wavelet coefficients, the db1 wavelet is used to decompose the selected image. Second, to ensure better quality of the watermarked image, some wavelet coefficients and their closely related wavelet coefficients are randomly selected from the wavelet coefficients produced by the low-pass filter and used to establish a relational model with a neural network. Third, the bit information of the watermark is enlarged by increasing the number of zeros or ones, and then one bit of the result is embedded by adjusting the polarity between a chosen wavelet coefficient and the output value of the model. Finally, a new image with the watermark information is reconstructed using the modified wavelet coefficients and the other, unmodified wavelet coefficients. The process of retrieving the watermark is the inverse of the embedding process. The embedded watermark can be retrieved by using the hybrid algorithm and the restore function without knowing the original image or watermark. Experimental results show that the proposed technique is very robust against common image processing operations and JPEG lossy compression. Meanwhile, the validity of the extracted watermark can be verified by the proposed method. Because of the neural network, the proposed method is also robust against false-authentication attacks. Therefore, the hybrid algorithm can be used to protect the copyright of an important image.

  13. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground-surface pixel by its corresponding reflectance spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Therefore, conventional classification methods require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition can be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic is related to the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, thereby preserving the peaks and valleys found in typical spectra. When compared with the most widespread dimension reduction technique, Principal Component Analysis (PCA), at the same compression rate, wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as a maximum likelihood method.
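    As a toy illustration of wavelet-based dimension reduction (not the paper's implementation; the spectrum below is invented), repeatedly keeping only the Haar low-pass band shrinks a spectrum while preserving its coarse shape:

```python
def reduce_spectrum(spectrum, levels):
    # Keep only the low-pass (approximation) band after `levels` Haar steps,
    # shrinking the spectrum by a factor of 2**levels while preserving its
    # coarse shape (peaks and valleys). Length must be divisible by 2**levels.
    s = 2 ** -0.5
    out = list(spectrum)
    for _ in range(levels):
        out = [s * (out[i] + out[i + 1]) for i in range(0, len(out), 2)]
    return out

band = [0.1, 0.1, 0.2, 0.9, 1.0, 0.8, 0.2, 0.1]  # toy spectrum with one peak
reduced = reduce_spectrum(band, levels=2)
print(len(reduced))  # 2
```

Even at 4x reduction, the half of the spectrum containing the peak still carries the larger coefficient, which is the distinction a classifier needs.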

  14. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received

  15. Spatial compression algorithm for the analysis of very large multivariate images

    DOEpatents

    Keenan, Michael R.

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.

  16. Raman spectral data denoising based on wavelet analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chen; Peng, Fei; Cheng, Qinghua; Xu, Dahai

    2008-12-01

    As one kind of molecular scattering spectroscopy, Raman spectroscopy (RS) is characterized by a frequency shift that carries information about the molecule. RS has broad applications in the biological, chemical, environmental, and industrial fields. But signals in Raman spectral analysis often carry noise, which greatly influences the achievement of accurate analytical results. The de-noising of RS signals is an important part of spectral analysis. The wavelet transform has been established, alongside the Fourier transform, as a data-processing method in analytical fields. The main fields of application are related to de-noising, compression, variable reduction, and signal suppression. In de-noising of Raman spectroscopy, wavelets are chosen to construct the de-noising function because of their excellent properties. In this paper, a bior (biorthogonal) wavelet is adopted to remove the noise in the Raman spectra. It eliminates noise markedly, and the result is satisfying. This method can provide a basis for practical de-noising of Raman spectra.
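    Wavelet de-noising of a spectrum follows the pattern: decompose, shrink the detail coefficients, reconstruct. The sketch below uses a one-level Haar transform with soft thresholding for brevity, where the paper uses a biorthogonal basis; the signal and threshold are invented:

```python
import random

def denoise_haar_soft(signal, thresh):
    # One-level Haar decomposition, soft-threshold the detail (noise) band,
    # then reconstruct. Assumes len(signal) is even.
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    # Soft thresholding: zero small coefficients, shrink the rest toward zero.
    detail = [(abs(d) - thresh) * (1 if d > 0 else -1) if abs(d) > thresh else 0.0
              for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

random.seed(0)
clean = [5.0] * 16                                   # flat toy baseline
noisy = [v + random.uniform(-0.1, 0.1) for v in clean]
smoothed = denoise_haar_soft(noisy, thresh=0.2)
# The smoothed signal lies at least as close to the baseline as the noisy one.
```

A real application would iterate over several decomposition levels and pick the threshold from a noise estimate rather than by hand.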

  17. Stationary wavelet transform for under-sampled MRI reconstruction.

    PubMed

    Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M

    2014-12-01

    In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional lp-penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data with particular emphasis on multiple-channel acquisitions.
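    The translation invariance that distinguishes the SWT from the decimated DWT can be demonstrated directly: with no downsampling, shifting the input simply shifts the coefficients. A minimal one-level undecimated Haar sketch with invented values (circular boundary handling assumed):

```python
def swt_haar_level1(x):
    # One level of the undecimated (stationary) Haar transform: no
    # downsampling, so each output band keeps the length of the input and
    # the transform commutes with circular shifts (translation invariance).
    n = len(x)
    s = 2 ** -0.5
    approx = [s * (x[i] + x[(i + 1) % n]) for i in range(n)]
    detail = [s * (x[i] - x[(i + 1) % n]) for i in range(n)]
    return approx, detail

x = [1.0, 3.0, 2.0, 5.0]
shifted = x[1:] + x[:1]          # circular shift by one sample
a1, d1 = swt_haar_level1(x)
a2, d2 = swt_haar_level1(shifted)
# Shifting the signal just shifts the coefficients; a decimated DWT
# would instead mix the coefficients across positions.
print(d2 == d1[1:] + d1[:1])  # True
```

This shift-invariance is why penalizing SWT coefficients avoids the pseudo-Gibbs artifacts attributed to the DWT's lack of translation invariance.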

  18. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  19. Wavelet Domain Radiofrequency Pulse Design Applied to Magnetic Resonance Imaging.

    PubMed

    Huettner, Andrew M; Mickevicius, Nikolai J; Ersoz, Ali; Koch, Kevin M; Muftuler, L Tugan; Nencka, Andrew S

    2015-01-01

    A new method for designing radiofrequency (RF) pulses with numerical optimization in the wavelet domain is presented. Numerical optimization may yield solutions that might otherwise have not been discovered with analytic techniques alone. Further, processing in the wavelet domain reduces the number of unknowns through compression properties inherent in wavelet transforms, providing a more tractable optimization problem. This algorithm is demonstrated with simultaneous multi-slice (SMS) spin echo refocusing pulses because reduced peak RF power is necessary for SMS diffusion imaging with high acceleration factors. An iterative, nonlinear, constrained numerical minimization algorithm was developed to generate an optimized RF pulse waveform. Wavelet domain coefficients were modulated while iteratively running a Bloch equation simulator to generate the intermediate slice profile of the net magnetization. The algorithm minimizes the L2-norm of the slice profile with additional terms to penalize rejection band ripple and maximize the net transverse magnetization across each slice. Simulations and human brain imaging were used to demonstrate a new RF pulse design that yields an optimized slice profile and reduced peak energy deposition when applied to a multiband single-shot echo planar diffusion acquisition. This method may be used to optimize factors such as magnitude and phase spectral profiles and peak RF pulse power for multiband simultaneous multi-slice (SMS) acquisitions. Wavelet-based RF pulse optimization provides a useful design method to achieve a pulse waveform with beneficial amplitude reduction while preserving appropriate magnetization response for magnetic resonance imaging. PMID:26517262

  20. Wavelet Domain Radiofrequency Pulse Design Applied to Magnetic Resonance Imaging

    PubMed Central

    Huettner, Andrew M.; Mickevicius, Nikolai J.; Ersoz, Ali; Koch, Kevin M.; Muftuler, L. Tugan; Nencka, Andrew S.

    2015-01-01

    A new method for designing radiofrequency (RF) pulses with numerical optimization in the wavelet domain is presented. Numerical optimization may yield solutions that might otherwise have not been discovered with analytic techniques alone. Further, processing in the wavelet domain reduces the number of unknowns through compression properties inherent in wavelet transforms, providing a more tractable optimization problem. This algorithm is demonstrated with simultaneous multi-slice (SMS) spin echo refocusing pulses because reduced peak RF power is necessary for SMS diffusion imaging with high acceleration factors. An iterative, nonlinear, constrained numerical minimization algorithm was developed to generate an optimized RF pulse waveform. Wavelet domain coefficients were modulated while iteratively running a Bloch equation simulator to generate the intermediate slice profile of the net magnetization. The algorithm minimizes the L2-norm of the slice profile with additional terms to penalize rejection band ripple and maximize the net transverse magnetization across each slice. Simulations and human brain imaging were used to demonstrate a new RF pulse design that yields an optimized slice profile and reduced peak energy deposition when applied to a multiband single-shot echo planar diffusion acquisition. This method may be used to optimize factors such as magnitude and phase spectral profiles and peak RF pulse power for multiband simultaneous multi-slice (SMS) acquisitions. Wavelet-based RF pulse optimization provides a useful design method to achieve a pulse waveform with beneficial amplitude reduction while preserving appropriate magnetization response for magnetic resonance imaging. PMID:26517262

  1. Correlative weighted stacking for seismic data in the wavelet domain

    USGS Publications Warehouse

    Zhang, S.; Xu, Y.; Xia, J.; ,

    2004-01-01

    Horizontal stacking plays a crucial role in modern seismic data processing: it not only suppresses random noise and multiple reflections, but also provides foundational data for subsequent migration and inversion. However, a number of examples have shown that random noise in adjacent traces exhibits correlation and coherence. Both average stacking and weighted stacking based on the conventional correlation function then produce false events caused by noise. Wavelet transforms and higher-order statistics are very useful tools in modern signal processing: multiresolution analysis in wavelet theory can decompose a signal across scales, and higher-order correlation functions can suppress correlated noise against which the conventional correlation function is of no use. Based on the theory of wavelet transforms and higher-order statistics, a high-order correlative weighted stacking (HOCWS) technique is presented in this paper. Its essence is to stack common-midpoint gathers after normal-moveout correction using weights calculated from higher-order correlative statistics in the wavelet domain. Synthetic examples demonstrate its advantages in improving the signal-to-noise (S/N) ratio and suppressing correlated random noise.

  2. Optical asymmetric image encryption using gyrator wavelet transform

    NASA Astrophysics Data System (ADS)

    Mehra, Isha; Nishchal, Naveen K.

    2015-11-01

    In this paper, we propose a new optical information processing tool, termed the gyrator wavelet transform, to secure a fully phase image based on an amplitude- and phase-truncation approach. The gyrator wavelet transform has four basic parameters: the gyrator transform order, the type and level of the mother wavelet, and the position of the different frequency bands. These parameters are used as encryption keys in addition to the random phase codes of the optical cryptosystem. The tool has also been applied to simultaneous compression and encryption of an image. The system's performance, its sensitivity to the encryption parameters (such as the gyrator transform order), and its robustness have also been analyzed. It is expected that this tool will not only update current optical security systems, but may also shed some light on future developments. Computer simulation results demonstrate the abilities of the gyrator wavelet transform as an effective tool for various optical information processing applications, including image encryption and image compression. The tool can also be applied to securing color, multispectral, and three-dimensional images.

  3. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.

  4. Wavelets in medical imaging

    NASA Astrophysics Data System (ADS)

    Zahra, Noor e.; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-01

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray, and mammography. Interpretation of these signals and images is quite important. Wavelet methods now have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment of and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of an important biomedical signal, the EEG, is carried out by applying the Fourier transform and the wavelet transform. The presence of rhythms, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals with the stationary wavelet transform (SWT).

  5. Wavelets in medical imaging

    SciTech Connect

    Zahra, Noor e; Sevindir, Huliya A.; Aslan, Zafar; Siddiqi, A. H.

    2012-07-17

    The aim of this study is to present emerging applications of wavelet methods to medical signals and images, such as the electrocardiogram, electroencephalogram, functional magnetic resonance imaging, computed tomography, X-ray, and mammography. Interpretation of these signals and images is quite important. Wavelet methods now have a significant impact on the science of medical imaging and on the diagnosis of disease and screening protocols. Based on our initial investigations, future directions include neurosurgical planning and improved assessment of risk for individual patients, improved assessment of and strategies for the treatment of chronic pain, improved seizure localization, and improved understanding of the physiology of neurological disorders. We look ahead to these and other emerging applications as the benefits of this technology become incorporated into current and future patient care. In this chapter, analysis and denoising of an important biomedical signal, the EEG, is carried out by applying the Fourier transform and the wavelet transform. The presence of rhythms, template matching, and correlation is discussed using various methods. The energy of the EEG signal is used to detect seizures in an epileptic patient. We have also performed denoising of EEG signals with the stationary wavelet transform (SWT).

  6. Lossless Video Sequence Compression Using Adaptive Prediction

    NASA Technical Reports Server (NTRS)

    Li, Ying; Sayood, Khalid

    2007-01-01

    We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.

  7. Evaluation of the Use of Second Generation Wavelets in the Coherent Vortex Simulation Approach

    NASA Technical Reports Server (NTRS)

    Goldstein, D. E.; Vasilyev, O. V.; Wray, A. A.; Rogallo, R. S.

    2000-01-01

    The objective of this study is to investigate the use of the second-generation bi-orthogonal wavelet transform for the field decomposition in the Coherent Vortex Simulation of turbulent flows. The performance of the bi-orthogonal second-generation wavelet transform and of the orthogonal wavelet transform using Daubechies wavelets with the same number of vanishing moments are compared in a priori tests using a spectral direct numerical simulation (DNS) database of isotropic turbulence fields: 256^3 and 512^3 DNS of forced homogeneous turbulence (Re_lambda = 168) and 256^3 and 512^3 DNS of decaying homogeneous turbulence (Re_lambda = 55). It is found that bi-orthogonal second-generation wavelets can be used for coherent vortex extraction. The results of the a priori tests indicate that second-generation wavelets give better compression and a residual field that is closer to Gaussian. However, the use of second-generation wavelets results in an integral length scale for the incoherent part that is larger than that derived from orthogonal wavelets. A way of dealing with this difficulty is suggested.

  8. Anisotropic analysis of trabecular architecture in human femur bone radiographs using quaternion wavelet transforms.

    PubMed

    Sangeetha, S; Sujatha, C M; Manamalli, D

    2014-01-01

    In this work, anisotropy of compressive and tensile strength regions of femur trabecular bone are analysed using quaternion wavelet transforms. The normal and abnormal femur trabecular bone radiographic images are considered for this study. The sub-anatomic regions, which include compressive and tensile regions, are delineated using pre-processing procedures. These delineated regions are subjected to quaternion wavelet transforms and statistical parameters are derived from the transformed images. These parameters are correlated with apparent porosity, which is derived from the strength regions. Further, anisotropy is also calculated from the transformed images and is analyzed. Results show that the anisotropy values derived from second and third phase components of quaternion wavelet transform are found to be distinct for normal and abnormal samples with high statistical significance for both compressive and tensile regions. These investigations demonstrate that architectural anisotropy derived from QWT analysis is able to differentiate normal and abnormal samples. PMID:25571265

  9. Compression of gray-scale fingerprint images

    NASA Astrophysics Data System (ADS)

    Hopper, Thomas

    1994-03-01

    The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.
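    The two middle stages named here, scalar quantization of wavelet coefficients followed by zero-run encoding, can be sketched in miniature. This is a toy illustration of those stages only, not the FBI specification: the subband values, the step size, and the run representation are all invented for the example, and the Huffman stage is omitted.

    ```python
    import numpy as np

    def quantize(band, step):
        """Uniform scalar quantization of one subband of DWT coefficients."""
        return np.round(np.asarray(band, dtype=float) / step).astype(int)

    def zero_runs(q):
        """Simplified zero-run encoding: (zeros-before, value) pairs, with a
        final (trailing-zeros, None) marker."""
        out, run = [], 0
        for v in q:
            if v == 0:
                run += 1
            else:
                out.append((run, int(v)))
                run = 0
        out.append((run, None))
        return out

    band = np.array([0.04, -1.3, 0.02, 0.0, 2.7, 0.01, -0.03, 0.9])
    step = 0.5
    q = quantize(band, step)
    print(q.tolist())                 # small coefficients collapse to zero
    print(zero_runs(q))               # zeros become cheap run lengths
    rec = q * step                    # dequantization
    print(np.max(np.abs(rec - band)) <= step / 2)   # True: bounded error
    ```

    Uniform quantization bounds the per-coefficient error by half the step size, and the many zeroed small coefficients are what make the run/entropy coding stage effective.
    
    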

  10. Digital watermarking algorithm based on HVS in wavelet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Qiuhong; Xia, Ping; Liu, Xiaomei

    2013-10-01

    As a new technique for protecting the copyright of digital productions, digital watermarking has drawn extensive attention. A digital watermarking algorithm based on the discrete wavelet transform (DWT) and human visual properties is presented in this paper, together with several attack analyses. Experimental results show that the proposed watermarking scheme is invisible and robust to cropping, and also has good robustness to cutting, compression, filtering, and noise addition.

  11. An Evolved Wavelet Library Based on Genetic Algorithm

    PubMed Central

    Vaithiyanathan, D.; Seshasayanan, R.; Kunaraj, K.; Keerthiga, J.

    2014-01-01

    As the size of captured images increases, there is a need for a robust image compression algorithm that satisfies the bandwidth limitations of transmission channels and preserves image resolution without considerable loss in image quality. Many conventional image compression algorithms use a wavelet transform, which can significantly reduce the number of bits needed to represent a pixel; quantization and thresholding further increase the compression. In this paper the authors evolve two sets of wavelet filter coefficients using a genetic algorithm (GA), one for the whole image except the edge areas and the other for the portions near the edges (i.e., global and local filters). Images are initially separated into several groups based on their frequency content, edges, and textures, and the wavelet filter coefficients are evolved separately for each group. As there is a possibility of the GA settling into a local maximum, we introduce a new shuffling operator to prevent this effect. The GA used to evolve the filter coefficients primarily focuses on maximizing the peak signal-to-noise ratio (PSNR). The filter coefficients evolved by the proposed method outperform existing methods by 0.31 dB in average PSNR and 0.39 dB in maximum PSNR. PMID:25405225
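    The PSNR used as the GA's fitness measure has a standard definition; a minimal implementation for 8-bit images follows (generic formula, not code from the paper):

    ```python
    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio in dB; `peak` is the maximum
        possible pixel value (255 for 8-bit images)."""
        mse = np.mean((np.asarray(original, dtype=float)
                       - np.asarray(reconstructed, dtype=float)) ** 2)
        if mse == 0:
            return float("inf")        # identical images
        return 10.0 * np.log10(peak ** 2 / mse)

    # A uniform error of 1 gray level gives MSE = 1, i.e. ~48.13 dB:
    print(round(psnr(np.zeros((8, 8)), np.ones((8, 8))), 2))
    ```

    Higher PSNR means the reconstruction is closer to the original, which is why it is a natural single-number fitness for the evolved filters.
    
    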

  12. Neural network wavelet technology: A frontier of automation

    NASA Technical Reports Server (NTRS)

    Szu, Harold

    1994-01-01

    Neural networks are an outgrowth of interdisciplinary studies concerning the brain. These studies are guiding the field of Artificial Intelligence towards the so-called 6th Generation Computer, and enormous amounts of resources have been poured into R&D. Wavelet Transforms (WT) have replaced Fourier Transforms (FT) in wideband transient cases since the discovery of the WT in 1985. The list of successful applications includes: earthquake prediction; radar identification; speech recognition; stock market forecasting; FBI fingerprint image compression; and telecommunication ISDN data compression.

  13. The decoding method based on wavelet image En vector quantization

    NASA Astrophysics Data System (ADS)

    Liu, Chun-yang; Li, Hui; Wang, Tao

    2013-12-01

    With the rapid progress of internet technology, large-scale integrated circuits, and computer technology, digital image processing has developed greatly. Vector quantization plays a very important role in digital image compression. Compared with scalar quantization, it offers a higher compression ratio and a simple image decoding algorithm, and it has therefore been widely used in many practical fields. This paper combines the wavelet analysis method with vector quantization encoding efficiently and tests the result on standard images. The experimental results show a considerable improvement in PSNR compared with the LBG algorithm.

  14. Experimental Studies on a Compact Storage Scheme for Wavelet-based Multiresolution Subregion Retrieval

    NASA Technical Reports Server (NTRS)

    Poulakidas, A.; Srinivasan, A.; Egecioglu, O.; Ibarra, O.; Yang, T.

    1996-01-01

    Wavelet transforms, when combined with quantization and a suitable encoding, can be used to compress images effectively. In order to use them for image library systems, a compact storage scheme for quantized coefficient wavelet data must be developed with a support for fast subregion retrieval. We have designed such a scheme and in this paper we provide experimental studies to demonstrate that it achieves good image compression ratios, while providing a natural indexing mechanism that facilitates fast retrieval of portions of the image at various resolutions.

  15. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high-pass filter in the frequency domain. Dilating this high-pass filter alters the frequency range that is 'passed' or detected, while translating it moves the filter throughout the domain, providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
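    The core idea, that large wavelet detail coefficients mark locations needing grid points, can be sketched with a one-level Haar detector on a synthetic internal layer. This is an illustrative toy, not the paper's source code; the tanh test function and threshold are assumptions.

    ```python
    import numpy as np

    def refinement_flags(f_samples, thresh):
        """Flag the sample pairs whose Haar detail coefficient exceeds
        `thresh`; these mark where a grid should be refined."""
        x = np.asarray(f_samples, dtype=float)
        d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # one-level Haar details
        return np.abs(d) > thresh

    t = np.linspace(0.0, 1.0, 256)
    f = np.tanh((t - 0.5) / 0.01)       # sharp internal layer at t = 0.5
    flags = refinement_flags(f, 0.05)
    # Only the few intervals straddling the layer are flagged:
    print(flags.sum(), "of", flags.size, "intervals flagged")
    ```

    A grid generator would then insert points only in the flagged intervals, concentrating resolution where the solution changes rapidly and leaving the smooth regions coarse.
    
    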

  16. A generalized wavelet extrema representation

    SciTech Connect

    Lu, Jian; Lades, M.

    1995-10-01

    The wavelet extrema representation originated by Stephane Mallat is a unique framework for low-level and intermediate-level (feature) processing. In this paper, we present a new form of wavelet extrema representation generalizing Mallat's original work. The generalized wavelet extrema representation is a feature-based multiscale representation. For a particular choice of wavelet, our scheme can be interpreted as representing a signal or image by its edges, peaks, and valleys at multiple scales. Such a representation is shown to be stable -- the original signal or image can be reconstructed with very good quality. It is further shown that a signal or image can be modeled as piecewise monotonic, with all turning points between monotonic segments given by the wavelet extrema. A new projection operator is introduced to enforce piecewise monotonicity of a signal in its reconstruction. This leads to an enhancement of previously developed algorithms in preventing artifacts in the reconstructed signal.

  17. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.

  18. ECG compression: evaluation of FFT, DCT, and WT performance.

    PubMed

    GholamHosseini, H; Nazeran, H; Moran, B

    1998-12-01

    This work investigates a set of ECG data compression schemes, comparing their performance in compressing and preparing ECG signals for automatic cardiac arrhythmia classification. The schemes are based on transform methods such as the fast Fourier transform (FFT), discrete cosine transform (DCT), wavelet transform (WT), and their combinations. Each transform is applied to a pre-selected data segment from the MIT-BIH database, and compression is then performed in the new domain. These transform methods form an important class of ECG compression techniques. The WT has been shown to be the most efficient of these methods: a compression ratio of 7.98:1 was achieved with a percent root-mean-square difference (PRD) of 0.25%, indicating that the wavelet compression technique offers the best performance among the evaluated methods.
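    The PRD figure quoted above has a standard definition as the reconstruction error energy relative to the signal energy; a minimal sketch follows (generic formula, not the authors' code):

    ```python
    import numpy as np

    def prd(x, x_rec):
        """Percent root-mean-square difference between an original signal
        and its reconstruction after compression."""
        x = np.asarray(x, dtype=float)
        x_rec = np.asarray(x_rec, dtype=float)
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

    # A unit error on a signal of norm 5 gives PRD = 100 * (1/5)^... i.e. 20%:
    print(round(prd([3.0, 4.0], [3.0, 3.0]), 6))
    ```

    Lower PRD means a more faithful reconstruction, so a scheme is judged by the PRD it achieves at a given compression ratio.
    
    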

  19. An Introduction to Wavelet Theory and Analysis

    SciTech Connect

    Miner, N.E.

    1998-10-01

    This report reviews the history, theory, and mathematics of wavelet analysis. Examination of the Fourier Transform and Short-Time Fourier Transform methods provides information about the evolution of the wavelet analysis technique. The overview is intended to give readers a basic understanding of wavelet analysis, define common wavelet terminology, and describe wavelet analysis algorithms. The most common algorithms for performing efficient discrete wavelet transforms for signal analysis and inverse discrete wavelet transforms for signal reconstruction are presented. The report is intended to be approachable by non-mathematicians, although a basic understanding of engineering mathematics is necessary.

  20. Highly efficient codec based on significance-linked connected-component analysis of wavelet coefficients

    NASA Astrophysics Data System (ADS)

    Chai, Bing-Bing; Vass, Jozsef; Zhuang, Xinhua

    1997-04-01

    Recent success in wavelet coding is mainly attributed to the recognition of the importance of data organization. Several very competitive wavelet codecs have been developed, namely Shapiro's Embedded Zerotree Wavelet (EZW), Servetto et al.'s Morphological Representation of Wavelet Data (MRWD), and Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT). In this paper, we propose a new image compression algorithm called Significance-Linked Connected Component Analysis (SLCCA) of wavelet coefficients. SLCCA exploits both within-subband clustering of significant coefficients and cross-subband dependency in significance fields. A so-called significance link between connected components is designed to reduce the positional overhead of MRWD. In addition, the magnitudes of the significant coefficients are encoded in bit-plane order to match the probability model of the adaptive arithmetic coder. Experiments show that SLCCA outperforms both EZW and MRWD, and is on par with SPIHT. Furthermore, SLCCA generally performs best on images with a large proportion of texture; when applied to fingerprint image compression, it outperforms the FBI's wavelet scalar quantization by about 1 dB.

  1. FBI compression standard for digitized fingerprint images

    NASA Astrophysics Data System (ADS)

    Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas

    1996-11-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  2. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but it is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal, in the sense that convergence is not independent of the mesh size. One reason may be that no good sparse approximation exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically exhibit piecewise smooth variation. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.

  3. Large Scale Isosurface Bicubic Subdivision-Surface Wavelets for Representation and Visualization

    SciTech Connect

    Bertram, M.; Duchaineau, M.A.; Hamann, B.; Joy, K.I.

    2000-01-05

    We introduce a new subdivision-surface wavelet transform for arbitrary two-manifolds with boundary that is the first to use simple lifting-style filtering operations with bicubic precision. We also describe a conversion process for re-mapping large-scale isosurfaces to have subdivision connectivity and fair parameterizations so that the new wavelet transform can be used for compression and visualization. The main idea enabling our wavelet transform is the circular symmetrization of the filters in irregular neighborhoods, which replaces the traditional separation of filters into two 1-D passes. Our wavelet transform uses polygonal base meshes to represent surface topology, from which a Catmull-Clark-style subdivision hierarchy is generated. The details between these levels of resolution are quickly computed and compactly stored as wavelet coefficients. The isosurface conversion process begins with a contour triangulation computed using conventional techniques, which we subsequently simplify with a variant edge-collapse procedure, followed by an edge-removal process. This provides a coarse initial base mesh, which is subsequently refined, relaxed and attracted in phases to converge to the contour. The conversion is designed to produce smooth, untangled and minimally-skewed parameterizations, which improves the subsequent compression after applying the transform. We have demonstrated our conversion and transform for an isosurface obtained from a high-resolution turbulent-mixing hydrodynamics simulation, showing the potential for compression and level-of-detail visualization.

  4. An overview of the quantum wavelet transform, focused on earth science applications

    NASA Astrophysics Data System (ADS)

    Shehab, O.; LeMoigne, J.; Lomonaco, S.; Halem, M.

    2015-12-01

    Registering images from the MODIS system and the OCO-2 satellite is currently done with classical image registration techniques, one of which is the wavelet transform. Besides image registration, wavelet transforms are also used in other areas of earth science, for example in processing and compressing signal variations. In this talk, we investigate the applicability of a few quantum wavelet transform algorithms to image registration of the MODIS and OCO-2 data. Most known quantum wavelet transform algorithms are data agnostic; we investigate their applicability to transforming the Flexible Representation for Quantum Images, and likewise their applicability to signal variation analysis. We also investigate transforming the models into pseudo-Boolean functions in order to implement them on commercially available quantum annealing computers, such as the D-Wave computer located at NASA Ames.

  5. Peak finding using biorthogonal wavelets

    SciTech Connect

    Tan, C.Y.

    2000-02-01

    The authors show in this paper how to find the peaks in input data when the underlying signal is a sum of Lorentzians. In order to project the data onto a space of Lorentzian-like functions, they show explicitly the construction of scaling functions that look like Lorentzians. From this construction, they calculate the biorthogonal filter coefficients for both the analysis and synthesis functions. They then compare their biorthogonal wavelets with the FBI (Federal Bureau of Investigation) wavelets when used for peak finding in noisy data, and show that in this instance their filters perform much better than the FBI wavelets.

  6. Birdsong Denoising Using Wavelets

    PubMed Central

    Priyadarshani, Nirosha; Marsland, Stephen; Castro, Isabel; Punchihewa, Amal

    2016-01-01

    Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is low signal to noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order of magnitude improvement in noise reduction over natural noisy bird recordings. PMID:26812391
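
    The decompose/threshold/reconstruct pattern that such denoisers build on can be sketched in a few lines. This is a minimal illustration, assuming a Haar transform and the universal soft threshold, not the authors' wavelet-packet pipeline or their band-pass stage; all function names here are hypothetical.

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass)
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(signal, levels=3):
    """Multi-level Haar decomposition with universal soft thresholding.

    len(signal) must be divisible by 2**levels.
    """
    coeffs, a = [], np.asarray(signal, float)
    for _ in range(levels):
        a, d = haar_forward(a)
        coeffs.append(d)          # coeffs[0] is the finest detail band
    # noise scale from the finest band (median absolute deviation estimate)
    sigma = np.median(np.abs(coeffs[0])) / 0.6745
    t = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - t, 0.0) for d in coeffs]
    for d in reversed(coeffs):
        a = haar_inverse(a, d)
    return a
```

On a noisy sinusoid this shrinks the noise-dominated detail coefficients to zero while the signal, concentrated in the approximation band, survives largely intact.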

  7. Wavelet entropy of stochastic processes

    NASA Astrophysics Data System (ADS)

    Zunino, L.; Pérez, D. G.; Garavaglia, M.; Rosso, O. A.

    2007-06-01

    We compare two different definitions for the wavelet entropy associated with stochastic processes. The first one, the normalized total wavelet entropy (NTWS) family [S. Blanco, A. Figliola, R.Q. Quiroga, O.A. Rosso, E. Serrano, Time-frequency analysis of electroencephalogram series, III. Wavelet packets and information cost function, Phys. Rev. E 57 (1998) 932-940; O.A. Rosso, S. Blanco, J. Yordanova, V. Kolev, A. Figliola, M. Schürmann, E. Başar, Wavelet entropy: a new tool for analysis of short duration brain electrical signals, J. Neurosci. Method 105 (2001) 65-75] and a second one introduced by Tavares and Lucena [Physica A 357(1) (2005) 71-78]. In order to understand their advantages and disadvantages, exact results obtained for fractional Gaussian noise (-1 < α < 1) and fractional Brownian motion (1 < α < 3) are assessed. We find that the NTWS family performs better as a characterization method for these stochastic processes.
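
    The NTWS idea can be illustrated with a rough numerical sketch: compute the energy of each detail band of a wavelet decomposition, normalize the band energies into a probability distribution, and feed that into a normalized Shannon entropy. The sketch below uses a plain Haar decomposition and hypothetical function names; it is not the exact estimator of either cited paper.

```python
import numpy as np

def wavelet_entropy(x, levels=6):
    """Normalized total wavelet entropy via a Haar decomposition.

    p_j = E_j / E_tot over the detail bands; S = -sum p_j ln p_j / ln(levels).
    len(x) must be divisible by 2**levels.
    """
    a = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail band at this level
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation carried down
        energies.append(np.sum(d ** 2))
    e = np.array(energies)
    tot = e.sum()
    if tot == 0.0:                 # e.g. a constant signal has no detail energy
        return 0.0
    p = e / tot
    p = p[p > 0]                   # skip empty bands (0 * log 0 = 0)
    return float(-np.sum(p * np.log(p)) / np.log(levels))
```

A signal concentrated in one frequency band gives entropy near 0, while broadband noise spreads energy across levels and gives a larger value.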

  8. Compression of surface myoelectric signals using MP3 encoding.

    PubMed

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios). PMID:22255464

  9. Wavelet-based polarimetry analysis

    NASA Astrophysics Data System (ADS)

    Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik

    2014-06-01

    Wavelet transformation has become a cutting edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes Polarization Parameters, which are calculated from 0°, 45°, 90°, 135° right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared Polarimetry Imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
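
    The Stokes construction mentioned above is simple arithmetic on the six intensity measurements. A sketch, with a degree-of-linear-polarization map added as one example of a derived feature (the function names are illustrative, not from the paper):

```python
import numpy as np

def stokes(i0, i45, i90, i135, irc, ilc):
    """Stokes parameters from the six polarimetric intensity measurements."""
    s0 = i0 + i90                 # total intensity
    s1 = i0 - i90                 # horizontal vs. vertical linear
    s2 = i45 - i135               # +45 deg vs. -45 deg linear
    s3 = irc - ilc                # right vs. left circular
    return s0, s1, s2, s3

def dolp(s0, s1, s2):
    """Degree of linear polarization, often used to suppress natural clutter."""
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
```

Applied pixel-wise to registered intensity images, these arrays can then feed the thresholding and segmentation steps described in the abstract.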

  10. A New Approach for Fingerprint Image Compression

    SciTech Connect

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 MB card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published standard specifies only the decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that is better grounded in theory. We then discuss some implementation aspects, particularly the new entropy coder and the features that enable applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
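
    The scalar quantization stage of such a wavelet / quantization / entropy-coding pipeline can be sketched as a uniform quantizer with an enlarged zero bin. The parameters and function names below are illustrative only, not the values or interfaces of the WSQ specification.

```python
import numpy as np

def quantize(coeffs, step, deadzone=1.2):
    """Uniform scalar quantizer with an enlarged zero bin.

    Coefficients with magnitude below deadzone*step/2 map to index 0;
    the rest are quantized uniformly with bin width `step`.
    """
    z = deadzone * step / 2.0
    mag = np.abs(coeffs)
    idx = np.where(mag < z, 0, np.floor((mag - z) / step) + 1)
    return (np.sign(coeffs) * idx).astype(int)

def dequantize(idx, step, deadzone=1.2):
    """Reconstruct each nonzero coefficient at the midpoint of its bin."""
    z = deadzone * step / 2.0
    mag = np.where(idx == 0, 0.0, z + (np.abs(idx) - 0.5) * step)
    return np.sign(idx) * mag
```

The wider zero bin sends more small wavelet coefficients to index 0, which is exactly what makes the subsequent entropy coding effective.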

  11. DNABIT Compress - Genome compression algorithm.

    PubMed

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences, particularly for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
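
    For context on those bits/base figures, the naive baseline every DNA compressor must beat is fixed 2-bits/base packing of the four-letter alphabet. The sketch below shows that baseline only; it is not DNABIT Compress's variable-length repeat coding, and its names are hypothetical.

```python
# Fixed 2-bit code for the four DNA bases (baseline, not DNABIT's codebook).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Pack a DNA string into bytes at 2 bits/base; returns (length, bytes)."""
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]
    nbytes = (2 * len(seq) + 7) // 8
    return len(seq), bits.to_bytes(nbytes, "big")

def unpack(n, data):
    """Recover the original string from the packed representation."""
    bits = int.from_bytes(data, "big")
    out = []
    for i in range(n):
        shift = 2 * (n - 1 - i)
        out.append(BASE[(bits >> shift) & 0b11])
    return "".join(out)
```

Schemes like DNABIT Compress get below 2 bits/base by replacing repeated fragments (exact and reverse repeats) with shorter codes instead of spending 2 bits on every base.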

  12. Tests for Wavelets as a Basis Set

    NASA Astrophysics Data System (ADS)

    Baker, Thomas; Evenbly, Glen; White, Steven

    A wavelet transformation is a special type of filter usually reserved for image processing and other applications. We develop metrics to evaluate wavelets for general problems on one-dimensional test systems. The goal is to eventually use a wavelet basis in electronic structure calculations. We compare a variety of orthogonal wavelets such as coiflets, symlets, and Daubechies wavelets. We also evaluate a new type of orthogonal wavelet with dilation factor three which is both symmetric and compact in real space. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award #DE-SC008696.

  13. Independent component analysis (ICA) using wavelet subband orthogonality

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Yamakawa, Takeshi

    1998-03-01

    There are two kinds of RRP: (1) invertible ones, such as the global Fourier transform (FT), local wavelet transform (WT), and adaptive wavelet transform (AWT); and (2) non-invertible ones, e.g. ICA, including the global principal component analysis (PCA). The invertible FT and WT can be related to the non-invertible ICA when the continuous transforms are approximated in discrete matrix-vector operations. The landmark accomplishment of ICA is to obtain, by an unsupervised learning algorithm, the edge map as the image feature, as shown by Helsinki researchers using the fourth-order statistic (the kurtosis K(u)) derived from an information-theoretic first principle; this is augmented by the orthogonality property of the DWT subbands necessarily used for usual image compression. If we take advantage of the subband decorrelation, we can potentially make efficient use of a pair of communication channels by sending several more mixed subband images through them.

  14. Interactive Display of Surfaces Using Subdivision Surfaces and Wavelets

    SciTech Connect

    Duchaineau, M A; Bertram, M; Porumbescu, S; Hamann, B; Joy, K I

    2001-10-03

    Complex surfaces and solids are produced by large-scale modeling and simulation activities in a variety of disciplines. Productive interaction with these simulations requires that these surfaces or solids be viewable at interactive rates--yet many of these surfaces and solids can contain hundreds of millions of polygons/polyhedra. Interactive display of these objects requires compression techniques to minimize storage, and fast view-dependent triangulation techniques to drive the graphics hardware. In this paper, we review recent advances in subdivision-surface wavelet compression and optimization that can be used to provide a framework for both compression and triangulation. These techniques can be used to produce suitable approximations of complex surfaces of arbitrary topology, and can be used to determine suitable triangulations for display. The techniques can be used in a variety of applications in computer graphics, computer animation and visualization.

  15. Image quality (IQ) guided multispectral image compression

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
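
    Of the IQ metrics named above, RMSE and PSNR are one-liners; a sketch for 8-bit imagery (SSIM is more involved and omitted here, and the function names are illustrative):

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference and a decompressed image."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for `peak`-valued imagery."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```

Sweeping a codec's quality parameter and recording (parameter, PSNR) pairs produces exactly the data the abstract's regression models are fit to.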

  16. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
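
    The flavor of a linear-time distance transform can be shown in one dimension: a forward and a backward pass over the grid each propagate distances from the nearest feature cell, giving O(n) total work. This is a 1D illustration under that classic two-pass scheme, not the paper's volumetric signed-distance algorithm.

```python
import numpy as np

def distance_transform_1d(mask):
    """O(n) two-pass distance transform: distance to the nearest True cell."""
    n = len(mask)
    INF = n + 1                       # larger than any achievable distance
    d = np.where(mask, 0, INF).astype(float)
    for i in range(1, n):             # forward pass: propagate left-to-right
        d[i] = min(d[i], d[i - 1] + 1)
    for i in range(n - 2, -1, -1):    # backward pass: propagate right-to-left
        d[i] = min(d[i], d[i + 1] + 1)
    return d
```

In 3D the same idea runs separable passes along each axis; attaching a sign (inside/outside) to the result yields the signed-distance volumes the paper compresses.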

  17. Microscopic off-axis holographic image compression with JPEG 2000

    NASA Astrophysics Data System (ADS)

    Bruylants, Tim; Blinder, David; Ottevaere, Heidi; Munteanu, Adrian; Schelkens, Peter

    2014-05-01

    With the advent of modern computing and imaging technologies, the use of digital holography became practical in many applications such as microscopy, interferometry, non-destructive testing, data encoding, and certification. In this respect the need for an efficient representation technology becomes imminent. However, microscopic holographic off-axis recordings have characteristics that differ significantly from that of regular natural imagery, because they represent a recorded interference pattern that mainly manifests itself in the high-frequency bands. Since regular image compression schemes are typically based on a Laplace frequency distribution, they are unable to optimally represent such holographic data. However, unlike most image codecs, the JPEG 2000 standard can be modified to efficiently cope with images containing such alternative frequency distributions by applying the arbitrary wavelet decomposition of Part 2. As such, employing packet decompositions already significantly improves the compression performance for off-axis holographic images over that of regular image compression schemes. Moreover, extending JPEG 2000 with directional wavelet transforms shows even higher compression efficiency improvements. Such an extension to the standard would only require signaling the applied directions, and would not impact any other existing functionality. In this paper, we show that wavelet packet decomposition combined with directional wavelet transforms provides efficient lossy-to-lossless compression of microscopic off-axis holographic imagery.

  18. Group-normalized wavelet packet signal processing

    NASA Astrophysics Data System (ADS)

    Shi, Zhuoer; Bao, Zheng

    1997-04-01

    Since the traditional wavelet and wavelet packet coefficients do not exactly represent the strength of signal components at each time(space)-frequency tiling, the group-normalized wavelet packet transform (GNWPT) is presented for nonlinear signal filtering and extraction from clutter or noise, together with the space(time)-frequency masking technique. The extended F-entropy improves the performance of the GNWPT. For perception-based images, soft-logic masking is emphasized to remove aliasing while preserving edges. Lawton's method for constructing complex-valued wavelets is extended to generate complex-valued, compactly supported wavelet packets for radar signal extraction. These wavelet packets are symmetric and unitarily orthogonal. Well-suited wavelet packets are chosen based on analysis of their time-frequency characteristics. For real-valued signal processing, such as images and ECG signals, the compactly supported spline or biorthogonal wavelet packets are preferred for their excellent de-noising and filtering qualities.

  19. A Mellin transform approach to wavelet analysis

    NASA Astrophysics Data System (ADS)

    Alotta, Gioacchino; Di Paola, Mario; Failla, Giuseppe

    2015-11-01

    The paper proposes a fractional calculus approach to continuous wavelet analysis. Upon introducing a Mellin transform expression of the mother wavelet, it is shown that the wavelet transform of an arbitrary function f(t) can be given a fractional representation involving a suitable number of Riesz integrals of f(t), and corresponding fractional moments of the mother wavelet. This result serves as a basis for an original approach to wavelet analysis of linear systems under arbitrary excitations. In particular, using the proposed fractional representation for the wavelet transform of the excitation, it is found that the wavelet transform of the response can readily be computed by a Mellin transform expression, with fractional moments obtained from a set of algebraic equations whose coefficient matrix applies for any scale a of the wavelet transform. The robustness and computational efficiency of the proposed approach are demonstrated in the paper.

  20. Adaptive Multilinear Tensor Product Wavelets.

    PubMed

    Weiss, Kenneth; Lindstrom, Peter

    2016-01-01

    Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.

  1. Wavelet-based Evapotranspiration Forecasts

    NASA Astrophysics Data System (ADS)

    Bachour, R.; Maslova, I.; Ticlavilca, A. M.; McKee, M.; Walker, W.

    2012-12-01

    Providing a reliable short-term forecast of evapotranspiration (ET) could be a valuable element for improving the efficiency of irrigation water delivery systems. In the last decade, the wavelet transform has become a useful technique for analyzing the frequency domain of hydrological time series. This study shows how the wavelet transform can be used to assess statistical properties of evapotranspiration. The objective of the research reported here is to use wavelet-based techniques to forecast ET up to 16 days ahead, which corresponds to the LANDSAT 7 overpass cycle. The properties of the ET time series, both physical and statistical, are examined in the time and frequency domains. We use the information about the energy decomposition in the wavelet domain to extract meaningful components that are used as inputs for ET forecasting models. Seasonal autoregressive integrated moving average (SARIMA) and multivariate relevance vector machine (MVRVM) models are coupled with the wavelet-based multiresolution analysis (MRA) results and used to generate short-term ET forecasts. Accuracy of the models is estimated and model robustness is evaluated using the bootstrap approach.

  2. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978) but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize the tuning of parameters and the problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion.
In addition, these

  3. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
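
    The core trick, manipulating indices that are adjacent in value so they carry auxiliary bits, can be sketched with a parity convention: nudge each selected index by at most one unit so its parity encodes the payload bit. This is a hypothetical simplification for illustration, not the patented method itself.

```python
def embed(indices, bits):
    """Embed one bit per index by moving it at most one unit in value
    so that its parity matches the payload bit (hypothetical sketch)."""
    out = list(indices)
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            # move toward zero when possible to limit the added distortion
            out[i] += -1 if out[i] > 0 else 1
    return out

def extract(indices, n):
    """Recover n embedded bits from the index parities."""
    return [indices[i] % 2 for i in range(n)]
```

Because each index changes by at most one unit, which is within the quantizer's own uncertainty, the host data remains a valid output of the lossy compressor.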

  4. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  5. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  6. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.

  7. Robust rate-control for wavelet-based image coding via conditional probability models.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-03-01

    Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented using two invocations of the coder that estimates the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution, as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease down to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.
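
    The rate side of an R-Q curve can be approximated by the first-order entropy of the quantized coefficient indices at a given step size. The sketch below computes one such point; it is a generic entropy estimate under an assumed plain dead-zone quantizer, not the paper's conditional probability model.

```python
import numpy as np
from collections import Counter

def rate_estimate(coeffs, step):
    """Empirical first-order entropy (bits/coefficient) of quantized
    wavelet coefficients: one point on the rate vs. step-size (R-Q) curve."""
    idx = np.sign(coeffs) * np.floor(np.abs(coeffs) / step)
    counts = Counter(idx.astype(int).tolist())
    n = sum(counts.values())
    p = np.array([c / n for c in counts.values()])
    return float(-np.sum(p * np.log2(p)))
```

Evaluating this over a range of step sizes traces out an R-Q curve whose slope is what a rate-control loop needs to hit a target bitrate.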

  8. Wavelet-based Poisson solver for use in particle-in-cell simulations.

    PubMed

    Terzić, Balsa; Pogorelov, Ilya V

    2005-06-01

    We report on a successful implementation of a wavelet-based Poisson solver for use in three-dimensional particle-in-cell simulations. Our method harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability to simultaneously remove numerical noise and further compress relevant data sets. We present and discuss preliminary results relating to the application of the new solver to test problems in accelerator physics and astrophysics. PMID:15980304

  9. Recent advances in wavelet technology

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    Wavelet research has been developing rapidly over the past five years, and in particular in the academic world there has been significant activity at numerous universities. In the industrial world, there has been developments at Aware, Inc., Lockheed, Martin-Marietta, TRW, Kodak, Exxon, and many others. The government agencies supporting wavelet research and development include ARPA, ONR, AFOSR, NASA, and many other agencies. The recent literature in the past five years includes a recent book which is an index of citations in the past decade on this subject, and it contains over 1,000 references and abstracts.

  10. Adaptive wavelets and relativistic magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    We present a method for integrating the relativistic magnetohydrodynamics equations using iterated interpolating wavelets, which provide an adaptive implementation for simulations in multiple dimensions. The wavelet coefficients provide a measure of the local approximation error for the solution; they place collocation points in locations naturally adapted to the flow while providing the expected conservation. We present demanding 1D and 2D tests, including the Kelvin-Helmholtz instability and the Rayleigh-Taylor instability. Finally, we consider an outgoing blast wave that models a GRB outflow.

  11. The New CCSDS Image Compression Recommendation

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph

    2005-01-01

    The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
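The progressive bit-plane idea behind the recommendation can be illustrated in miniature: decoding one more bit-plane of a coefficient magnitude halves the worst-case reconstruction error, which is what lets a user trade compressed volume against fidelity. This sketch is a simplified stand-in, not the CCSDS 122.0 coder itself, and the helper names are mine.

```python
def keep_planes(mag, total_bits, kept):
    # Zero out all but the `kept` most significant bit-planes of a magnitude
    shift = total_bits - kept
    return (mag >> shift) << shift

def progressive_error(values, total_bits):
    # Worst-case reconstruction error after decoding 1, 2, ..., total_bits planes
    return [max(abs(v - keep_planes(v, total_bits, k)) for v in values)
            for k in range(1, total_bits + 1)]
```

Truncating the embedded bit stream after any plane yields a valid (coarser) reconstruction, which is the mechanism behind both the rate control and the lossy-to-lossless progression.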

  12. Exploiting prior knowledge in compressed sensing wireless ECG systems.

    PubMed

    Polanía, Luisa F; Carrillo, Rafael E; Blanco-Velasco, Manuel; Barner, Kenneth E

    2015-03-01

    Recent results in telecardiology show that compressed sensing (CS) is a promising tool to lower energy consumption in wireless body area networks for electrocardiogram (ECG) monitoring. However, the performance of current CS-based algorithms, in terms of compression rate and reconstruction quality of the ECG, still falls short of the performance attained by state-of-the-art wavelet-based algorithms. In this paper, we propose to exploit the structure of the wavelet representation of the ECG signal to boost the performance of CS-based methods for compression and reconstruction of ECG signals. More precisely, we incorporate prior information about the wavelet dependencies across scales into the reconstruction algorithms and exploit the high fraction of common support of the wavelet coefficients of consecutive ECG segments. Experimental results utilizing the MIT-BIH Arrhythmia Database show that significant performance gains, in terms of compression rate and reconstruction quality, can be obtained by the proposed algorithms compared to current CS-based methods. PMID:24846672
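The "high fraction of common support" prior can be quantified with a small sketch. The helper names are hypothetical, and selecting a fixed fraction of largest-magnitude coefficients is my assumption, not the paper's exact significance rule.

```python
def significant_support(coeffs, frac=0.1):
    # Indices of the largest-magnitude wavelet coefficients (assumed sparse support)
    k = max(1, int(len(coeffs) * frac))
    order = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]), reverse=True)
    return set(order[:k])

def common_support_fraction(prev, curr, frac=0.1):
    # Fraction of the current segment's support already present in the previous one
    s_prev = significant_support(prev, frac)
    s_curr = significant_support(curr, frac)
    return len(s_prev & s_curr) / len(s_curr)
```

A reconstruction algorithm can seed its estimate of the current segment's support with the previous segment's, reducing the number of measurements needed.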

  13. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results are discussed for the algorithm alone and in combination with wavelets.

  14. Wavelet library for constrained devices

    NASA Astrophysics Data System (ADS)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a certain fast wavelet transform (FWT) implementation and several wavelet filters suitable for constrained devices. Such constraints are typically found on mobile (cell) phones and personal digital assistants (PDAs), and can be a combination of limited memory, slow floating-point operations (compared to integer operations, most often as a result of no hardware support), and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands, and present experimental results to substantiate these claims. Finally, since this library is intended for real use, we considered several well-known differences between common embedded operating-system platforms, such as missing common routines or functions and stack limitations. This makes HeatWave suitable for a range of applications and research projects.

  15. EEG data compression to monitor DoA in telemedicine.

    PubMed

    Palendeng, Mario E; Zhang, Qing; Pang, Chaoyi; Li, Yan

    2012-01-01

    Data compression techniques have been widely used to process and transmit huge amounts of EEG data in real-time and remote EEG signal processing systems. In this paper we propose a lossy compression technique, F-shift, to compress EEG signals for remote depth-of-anaesthesia (DoA) monitoring. Compared with traditional wavelet compression techniques, our method not only preserves valuable clinical information at high compression ratios, but also reduces high-frequency noise in EEG signals. Moreover, our method has negligible compression overhead (less than 0.1 seconds), which can greatly benefit real-time EEG signal monitoring systems. Our extensive experiments demonstrate the efficiency and effectiveness of the proposed compression method.

  16. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together, these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists.
In this presentation I will describe some of our preliminary explorations of the applications

  17. Embedded wavelet-based face recognition under variable position

    NASA Astrophysics Data System (ADS)

    Cotret, Pascal; Chevobbe, Stéphane; Darouich, Mehdi

    2015-02-01

    For several years, face recognition has been a hot topic in the image processing field: the technique is applied in several domains, such as CCTV, electronic device unlocking, and so on. In this context, this work studies the efficiency of a wavelet-based face recognition method in terms of subject-position robustness and performance on various systems. The use of the wavelet transform has a limited impact on the position robustness of PCA-based face recognition. This work shows, for a well-known database (Yale face database B*), that the subject position in 3D space can vary by up to 10% of the original ROI size without decreasing recognition rates. Face recognition is performed on the approximation coefficients of the image wavelet transform: results are still satisfying after 3 levels of decomposition. Furthermore, the face database size can be divided by a factor of 64 (2^(2K) with K = 3). In the context of ultra-embedded vision systems, memory footprint is one of the key points to be addressed, which is why compression techniques such as the wavelet transform are interesting. Furthermore, it leads to a low-complexity face detection stage compliant with the limited computation resources available on such systems. The approach described in this work is tested on three platforms, ranging from a standard x86-based computer to nanocomputers such as the Raspberry Pi and SECO boards. For K = 3 and a database of 40 faces, the mean execution time for one frame is 0.64 ms on an x86-based computer, 9 ms on a SECO board, and 26 ms on a Raspberry Pi (model B).
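The factor-64 storage reduction follows directly from 2-D DWT decimation: after K levels, the approximation band holds 1/2^(2K) of the original pixels. A one-line check (illustrative function name, assuming dimensions divisible by 2^K):

```python
def approx_size(width, height, k):
    # After k levels of 2-D DWT decimation the approximation band is
    # (width / 2^k) x (height / 2^k), so storage shrinks by 2^(2k)
    return (width >> k) * (height >> k)
```

For a 128x128 face image and K = 3, the approximation band is 16x16, giving the factor of 64 quoted above.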

  18. A Simple Method for Guaranteeing ECG Quality in Real-Time Wavelet Lossy Coding

    NASA Astrophysics Data System (ADS)

    Alesanco, Álvaro; García, José

    2007-12-01

    Guaranteeing ECG signal quality in wavelet lossy compression methods is essential for the clinical acceptability of reconstructed signals. In this paper, we present a simple and efficient method for guaranteeing reconstruction quality measured with the new distortion index wavelet weighted PRD (WWPRD), which reflects the real clinical distortion of the compressed signal more accurately. The method is based on the wavelet transform and its subsequent coding using the set partitioning in hierarchical trees (SPIHT) algorithm. By thresholding the WWPRD in the wavelet transform domain, a very precise reconstruction error can be achieved, enabling clinically useful reconstructed signals to be obtained. Because of its computational efficiency, the method is suitable for real-time operation and is thus very useful for real-time telecardiology systems. The method is extensively tested using two different ECG databases. The results lead to an excellent conclusion: the method controls quality very accurately, not only in mean value but also with a low standard deviation. The effects of ECG baseline wandering and of noise on compression are also discussed. Baseline wandering has a negative effect when the WWPRD index is used to guarantee quality, because this index is normalized by the signal energy; it is therefore better to remove it before compression. Noise, on the other hand, increases the signal energy, provoking an artificial increase in the coded signal bit rate. Clinical validation by cardiologists showed that a WWPRD value of 10% preserves signal quality, and they therefore recommend this value for use in the compression system.
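The plain PRD, of which WWPRD is a wavelet-subband-weighted refinement, is computed as below (the function name is mine; the WWPRD weighting itself is omitted). The sketch also makes the baseline-wandering caveat concrete: the denominator is the signal energy, so an un-removed baseline inflates it and understates the true distortion.

```python
import math

def prd_percent(original, reconstructed):
    # Percentage RMS difference, normalized by the original signal's energy.
    # A DC baseline adds to the denominator only, artificially lowering the index.
    num = sum((a - b) ** 2 for a, b in zip(original, reconstructed))
    den = sum(a ** 2 for a in original)
    return 100.0 * math.sqrt(num / den)
```

Thresholding such an index in the transform domain, as the paper does, lets the coder stop refining coefficients exactly when the quality target is met.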

  19. Lossless Compression on MRI Images Using SWT.

    PubMed

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G

    2014-10-01

    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed losslessly, as the information in every pixel is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both the storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The proposed system implements a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to a 2D stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using the inverse SWT. Finally, the compression ratio (CR) is evaluated to demonstrate the efficiency of the proposed method. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing at the arithmetic coding stage, as it deals with multiple subslices.
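A one-level undecimated (stationary) wavelet transform can be sketched with Haar filters: unlike the decimated DWT, every subband keeps the full image size, and the four subbands sum back exactly to the original pixel, which is what makes lossless round-tripping straightforward. This is a simplified Haar stand-in, not the paper's SWT filter bank; all names are mine.

```python
def haar_swt1d(x):
    # One level of the undecimated Haar transform, periodic boundary
    n = len(x)
    low = [(x[i] + x[(i + 1) % n]) / 2 for i in range(n)]
    high = [(x[i] - x[(i + 1) % n]) / 2 for i in range(n)]
    return low, high

def transpose(m):
    return [list(r) for r in zip(*m)]

def haar_swt2d(img):
    # Row pass then column pass; no downsampling, so LL/LH/HL/HH keep full size
    row_pairs = [haar_swt1d(row) for row in img]
    rows_l = [p[0] for p in row_pairs]
    rows_h = [p[1] for p in row_pairs]
    def column_pass(mat):
        pairs = [haar_swt1d(col) for col in transpose(mat)]
        return transpose([p[0] for p in pairs]), transpose([p[1] for p in pairs])
    ll, lh = column_pass(rows_l)
    hl, hh = column_pass(rows_h)
    return ll, lh, hl, hh
```

Because LL + LH + HL + HH reproduces each pixel exactly, each subband can be entropy-coded independently (and in parallel, as in the paper) without compromising losslessness.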

  20. Next gen wavelets down-sampling preserving statistics

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Miao, Lidan; Chanyagon, Pornchai; Cader, Masud

    2007-04-01

    We extend the 2nd-generation discrete wavelet transform (DWT) of Sweldens to a next-generation (NG) DWT that preserves salient statistical features. The lossless NG_DWT accomplishes data compression of the "wellness baseline profiles (WBP)" of an aging population at home. For a medical monitoring system on the home front, we translate military experience to dual use for veterans and civilians alike, with the following three requirements: (i) Data compression: the necessary down-sampling reduces the immense amount of data in an individual WBP, over hours to days to weeks, for primary caretakers, in terms of moments (e.g., mean value, variance, etc.) without the artifacts caused by arbitrary FFT windowing. (ii) Lossless: our new NG_DWT must preserve the original data sets. (iii) Phase transition: NG_DWT must capture the critical phase transition from wellness toward sickness with a simultaneous display of local statistical moments. According to the Nyquist sampling theory, assuming a band-limited wellness physiology, we must sample the WBP at least twice per day, since it changes diurnally and seasonally. Since NG_DWT, like the 2nd-generation DWT, is lossless, we can reconstruct the original time series for the physicians' second look. This technique can also help stock-market day-traders monitor the volatility of multiple portfolios without artificial horizon artifacts.
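The lossless integer-to-integer property required above can be illustrated with the lifting-based Haar (S-) transform, the basic building block that Sweldens' 2nd-generation framework generalizes. This is a minimal sketch of the lifting idea, not the authors' NG_DWT; function names are mine.

```python
def s_transform(x):
    # Lossless integer Haar via lifting: d = difference of each pair,
    # s = floor-average (a local mean, i.e. a preserved first moment)
    d = [a - b for a, b in zip(x[::2], x[1::2])]
    s = [b + (di >> 1) for b, di in zip(x[1::2], d)]
    return s, d

def s_transform_inverse(s, d):
    # Undo the lifting steps in reverse order; exact for all integers
    x = []
    for si, di in zip(s, d):
        b = si - (di >> 1)
        x += [di + b, b]
    return x
```

The s sequence is a down-sampled running mean (so local statistics survive the down-sampling), while the transform remains exactly invertible, which is the lossless requirement (ii).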

  1. Simultaneous edge sensing compression and encryption for real-time video transmission

    NASA Astrophysics Data System (ADS)

    Al-Hayani, Nazar; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    Video compression and encryption have become essential parts of multimedia applications, video conferencing in particular. Applying both techniques simultaneously is a challenge when both size and quality are important. In this paper we suggest using the wavelet transform so that the low-frequency coefficients drive the compression while encryption is applied to the wavelet high-frequency coefficients. Applying both methods simultaneously is not new; here we suggest a way to improve the security level of the encryption with better computational performance in both encryption and compression. Both the encryption and the compression in this paper are based on edge extraction from the wavelet high-frequency sub-bands. Although some research performs edge detection in the spatial domain, the number of edges produced in the wavelet domain can be dynamic, which in turn affects the compression ratio dynamically. Moreover, this kind of wavelet-domain edge detection enables different levels of selective encryption.

  2. Robust wavelet-based video watermarking scheme for copyright protection using the human visual system

    NASA Astrophysics Data System (ADS)

    Preda, Radu O.; Vizireanu, Dragos Nicolae

    2011-01-01

    The development of the information technology and computer networks facilitates easy duplication, manipulation, and distribution of digital data. Digital watermarking is one of the proposed solutions for effectively safeguarding the rightful ownership of digital images and video. We propose a public digital watermarking technique for video copyright protection in the discrete wavelet transform domain. The scheme uses binary images as watermarks. These are embedded in the detail wavelet coefficients of the middle wavelet subbands. The method is a combination of spread spectrum and quantization-based watermarking. Every bit of the watermark is spread over a number of wavelet coefficients with the use of a secret key by means of quantization. The selected wavelet detail coefficients from different subbands are quantized using an optimal quantization model, based on the characteristics of the human visual system (HVS). Our HVS-based scheme is compared to a non-HVS approach. The resilience of the watermarking algorithm is tested against a series of different spatial, temporal, and compression attacks. To improve the robustness of the algorithm, we use error correction codes and embed the watermark with spatial and temporal redundancy. The proposed method achieves a good perceptual quality and high resistance to a large spectrum of attacks.
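The quantization-based embedding component mentioned above can be sketched with basic quantization index modulation (QIM): a coefficient is forced to an even or odd multiple of the quantization step to encode one watermark bit, and any distortion smaller than half a step leaves the bit recoverable. The names are illustrative, and the paper's HVS-driven step selection and spread-spectrum spreading are omitted.

```python
def qim_embed(coeff, bit, step):
    # Quantization index modulation: even quantizer indices carry a 0 bit,
    # odd indices carry a 1 bit
    q = round(coeff / step)
    if q % 2 != bit:
        q += 1
    return q * step

def qim_extract(coeff, step):
    # Any perturbation smaller than step/2 leaves the decoded bit unchanged
    return round(coeff / step) % 2
```

Choosing the step from an HVS model, as the paper does, keeps the embedding distortion below the visibility threshold while maximizing this step/2 robustness margin.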

  3. Principal component analysis in the wavelet domain: new features for underwater object recognition

    NASA Astrophysics Data System (ADS)

    Okimoto, Gordon S.; Lemonds, David W.

    1999-08-01

    Principal component analysis (PCA) in the wavelet domain provides powerful features for underwater object recognition applications. The multiresolution analysis of the Morlet wavelet transform (MWT) is used to pre-process echo returns from targets ensonified by biologically motivated broadband signals. PCA is then used to compress and denoise the resulting time-scale signal representation for presentation to a hierarchical neural network for object classification. Wavelet/PCA features combined with multi-aspect data fusion and neural networks have resulted in impressive underwater object recognition performance using backscatter data generated by simulated dolphin echolocation clicks and bat-like linear frequency modulated upsweeps. For example, wavelet/PCA features extracted from LFM echo returns have resulted in correct classification rates of 98.6 percent over a six-target suite, which includes two mine simulators and four clutter objects. For the same data, ROC analysis of the two-class mine-like versus non-mine-like problem resulted in a probability of detection of 0.981 and a probability of false alarm of 0.032 at the 'optimal' operating point. The wavelet/PCA feature extraction algorithm is currently being implemented in VLSI for use in small, unmanned underwater vehicles designed for mine-hunting operations in shallow water environments.

  4. Iterated oversampled filter banks and wavelet frames

    NASA Astrophysics Data System (ADS)

    Selesnick, Ivan W.; Sendur, Levent

    2000-12-01

    This paper takes up the design of wavelet tight frames that are analogous to Daubechies orthonormal wavelets - that is, the design of minimal-length wavelet filters satisfying certain polynomial properties, but now in the oversampled case. The oversampled dyadic DWT considered in this paper is based on a single scaling function and two distinct wavelets. Having more wavelets than necessary gives a closer spacing between adjacent wavelets within the same scale. As a result, the transform is nearly shift-invariant and can be used to improve denoising. Because the associated time-frequency lattice preserves the dyadic structure of the critically sampled DWT, it can be used with tree-based denoising algorithms that exploit parent-child correlation.

  5. Wavelet-based reconstruction of fossil-fuel CO2 emissions from sparse measurements

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Ray, J.; Yadav, V.; Van Bloemen Waanders, B.; Michalak, A. M.

    2012-12-01

    We present a method to estimate spatially resolved fossil-fuel CO2 (ffCO2) emissions from sparse measurements of time-varying CO2 concentrations. It is based on wavelet modeling of the strongly non-stationary spatial distribution of ffCO2 emissions. The dimensionality of the wavelet model is first reduced using images of nightlights, which identify regions of human habitation. Since wavelets are a multiresolution basis set, most of the reduction is accomplished by removing fine-scale wavelets in regions with low nightlight radiances. The (reduced) wavelet model of emissions is propagated through an atmospheric transport model (WRF) to predict CO2 concentrations at a handful of measurement sites. The estimation of the wavelet model of emissions, i.e., inferring the wavelet weights, is performed by fitting to observations at the measurement sites. This is done using Stagewise Orthogonal Matching Pursuit (StOMP), which first identifies (and sets to zero) the wavelet coefficients that cannot be estimated from the observations, before estimating the remaining coefficients. This model sparsification and fitting is performed simultaneously, allowing us to explore multiple wavelet models of differing complexity. The technique is borrowed from the field of compressive sensing, and is generally used in image and video processing. We test this approach using synthetic observations generated from emissions from the Vulcan database. 35 sensor sites are chosen over the USA. FfCO2 emissions, averaged over 8-day periods, are estimated at a 1-degree spatial resolution. We find that only about 40% of the wavelets in the emission model can be estimated from the data; however, the mix of coefficients that are estimated changes with time. Total US emissions can be reconstructed with errors of about 5%. The inferred emissions, if aggregated monthly, have a correlation of 0.9 with Vulcan fluxes. We find that the estimated emissions in the Northeast US are the most accurate.
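StOMP is a stagewise variant of orthogonal matching pursuit (OMP). A minimal plain-OMP sketch (my construction, not the paper's solver or its wavelet dictionary) conveys the core greedy select-then-refit loop used to infer a sparse set of coefficients from few observations:

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily pick the dictionary column most
    # correlated with the residual, then least-squares refit on the support
    residual = y.astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

StOMP differs by selecting, at each stage, all columns whose correlation exceeds a threshold rather than a single column per iteration, which is what allows the simultaneous sparsification-and-fitting described above.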

  6. Wavelet analysis in two-dimensional tomography

    NASA Astrophysics Data System (ADS)

    Burkovets, Dimitry N.

    2002-02-01

    The diagnostic possibilities of wavelet analysis of coherent images of connective tissue are demonstrated for the diagnostics of its pathological changes. The effectiveness of polarization selection in obtaining images of wavelet coefficients is also shown. The wavelet structures characterizing skin psoriasis and bone-tissue osteoporosis have been analyzed, using histological sections of physiologically normal and pathologically changed samples of connective tissue of human skin and spongy bone tissue.

  7. Wavelet analysis of epileptic spikes

    NASA Astrophysics Data System (ADS)

    Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.

    2003-05-01

    Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been a long-standing problem in EEG analysis, especially since long-term monitoring became common in the investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable, owing to the presence of numerous transients and artifacts. We use the wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of the wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
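The across-scales idea can be sketched with a crude multiscale detector: a genuine spike produces a large wavelet response simultaneously at several scales, while narrow noise transients and slow background activity do not. The Haar-like response, the scales, and the threshold rule here are illustrative assumptions, not the paper's algorithm.

```python
def haar_detail(x, scale):
    # Difference of adjacent block means: a crude Haar wavelet response at one scale
    n = len(x)
    out = [0.0] * n
    for i in range(scale, n - scale):
        left = sum(x[i - scale:i]) / scale
        right = sum(x[i:i + scale]) / scale
        out[i] = right - left
    return out

def detect_spikes(x, scales=(1, 2, 4), k=3.0):
    # Keep only samples whose response exceeds k times that scale's RMS
    # at every scale simultaneously
    flagged = None
    for s in scales:
        d = haar_detail(x, s)
        rms = (sum(v * v for v in d) / len(d)) ** 0.5 or 1.0
        cur = {i for i, v in enumerate(d) if abs(v) > k * rms}
        flagged = cur if flagged is None else flagged & cur
    return sorted(flagged)
```

Requiring agreement across scales is what suppresses single-sample artifacts that would fool an amplitude-only criterion.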

  8. The FBI compression standard for digitized fingerprint images

    SciTech Connect

    Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
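The scalar quantization step of the wavelet/scalar quantization method can be sketched as a dead-zone uniform quantizer: a wider zero bin suppresses near-zero subband coefficients, and reconstruction returns a point inside the selected bin. This is an illustrative sketch with my own names; midpoint reconstruction is used here for simplicity, and per-subband adaptation of the bin widths is omitted.

```python
def dz_quantize(c, q, z):
    # Dead-zone uniform quantizer: zero bin of width z, outer bins of width q
    if c > z / 2:
        return int((c - z / 2) // q) + 1
    if c < -z / 2:
        return -(int((-c - z / 2) // q) + 1)
    return 0

def dz_dequantize(p, q, z):
    # Midpoint reconstruction within the selected bin
    if p == 0:
        return 0.0
    sign = 1 if p > 0 else -1
    return sign * (z / 2 + (abs(p) - 0.5) * q)
```

Outside the dead zone the round-trip error is bounded by q/2; inside it, coefficients are reconstructed as zero, which is what makes the subband data compressible.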

  9. Haar Wavelet Analysis of Climatic Time Series

    NASA Astrophysics Data System (ADS)

    Zhang, Zhihua; Moore, John; Grinsted, Aslak

    2014-05-01

    In order to extract the intrinsic information of climatic time series from background red noise, we first give an analytic formula for the distribution of Haar wavelet power spectra of red noise in a rigorous statistical framework. The relation between scale a and Fourier period T for the Morlet wavelet is a = 0.97T; for the Haar wavelet, the corresponding formula is a = 0.37T. Since for any time series of time step δt and total length Nδt the range of scales in wavelet-based time series analysis runs from the smallest resolvable scale 2δt to the largest scale Nδt, Haar wavelet analysis can extract more low-frequency intrinsic information. Finally, we use our method to analyze the Arctic Oscillation (AO), a key aspect of climate variability in the Northern Hemisphere, and discover a great change in its fundamental properties, commonly called a regime shift or tipping point. Our partial results have been published as follows: [1] Z. Zhang, J.C. Moore and A. Grinsted, Haar wavelet analysis of climatic time series, Int. J. Wavelets, Multiresol. & Inf. Process., in press, 2013. [2] Z. Zhang, J.C. Moore, Comment on "Significance tests for the wavelet power and the wavelet power spectrum", Ann. Geophys., 30:12, 2012.

  10. Entangled Husimi Distribution and Complex Wavelet Transformation

    NASA Astrophysics Data System (ADS)

    Hu, Li-Yun; Fan, Hong-Yi

    2010-05-01

    Similar in spirit to the preceding work (Int. J. Theor. Phys. 48:1539, 2009), where the relationship between the wavelet transformation and the Husimi distribution function is revealed, we extend this kind of relationship to the entangled case. We find that the optical complex wavelet transformation can be used to study the entangled Husimi distribution function in the phase space theory of quantum optics. We prove that, up to a Gaussian function, the entangled Husimi distribution function of a two-mode quantum state |ψ⟩ is just the modulus square of the complex wavelet transform of e^{-|η|²/2} with ψ(η) being the mother wavelet.

  11. Wavelet Sparse Approximate Inverse Preconditioners

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Tang, W.-P.; Wan, W. L.

    1996-01-01

    There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that rapid decay of the inverse entries is required for a sparse approximate inverse to be possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, in which a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverses. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.

  12. A 64-channel neural signal processor/ compressor based on Haar wavelet transform.

    PubMed

    Shaeri, Mohammad Ali; Sodagar, Amir M; Abrishami-Moghaddam, Hamid

    2011-01-01

    A signal processor/compressor dedicated to implantable neural recording microsystems is presented. Signal compression is performed based on Haar wavelet. It is shown in this paper that, compared to other mathematical transforms already used for this purpose, compression of neural signals using this type of wavelet transform can be of almost the same quality, while demanding less circuit complexity and smaller silicon area. Designed in a 0.13-μm standard CMOS process, the 64-channel 8-bit signal processor reported in this paper occupies 113 μm x 110 μm of silicon area. It operates under a 1.8-V supply voltage at a master clock frequency of 3.2 MHz.

  13. Science-based Region-of-Interest Image Compression

    NASA Technical Reports Server (NTRS)

    Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.

    2004-01-01

    As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.
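The priority idea can be illustrated in miniature: give high-priority regions a fine quantization step and the background a coarse one, so ROI fidelity is tightly bounded while the background consumes few bits. This is a spatial-domain stand-in for the wavelet-domain priority coding described above; the function name and parameters are mine.

```python
def roi_quantize(img, mask, fine=1.0, coarse=8.0):
    # Finer quantization step inside the science ROI, coarser elsewhere:
    # a minimal stand-in for priority-driven bit allocation
    out = []
    for row_v, row_m in zip(img, mask):
        row = []
        for v, m in zip(row_v, row_m):
            step = fine if m else coarse
            row.append(round(v / step) * step)
        out.append(row)
    return out
```

The round-trip error is bounded by half the step in each region, so scientifically valuable pixels are guaranteed higher fidelity.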

  14. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D; Bertram, M; Duchaineau, M; Max, N

    2002-01-14

    Surfaces generated by scientific simulation and range scanning can reach into the billions of polygons. Such surfaces must be aggressively compressed, but at the same time should provide for level-of-detail queries. Progressive compression techniques based on subdivision surfaces produce impressive results on range-scanned models. However, these methods require the construction of a base mesh which parameterizes the surface to be compressed and encodes the topology of the surface. For complex surfaces with high genus and/or a large number of components, the computation of an appropriate base mesh is difficult and often infeasible. We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our method avoids the costly base-mesh construction step and offers several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a new zero-set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high-genus surfaces.

  15. BOOK REVIEW: The Illustrated Wavelet Transform Handbook: Introductory Theory and Applications in Science, Engineering, Medicine and Finance

    NASA Astrophysics Data System (ADS)

    Ng, J.; Kingsbury, N. G.

    2004-02-01

    wavelet. The second half of the chapter groups together miscellaneous points about the discrete wavelet transform, including coefficient manipulation for signal denoising and smoothing, a description of Daubechies’ wavelets, the properties of translation invariance and biorthogonality, the two-dimensional discrete wavelet transforms and wavelet packets. The fourth chapter is dedicated to wavelet transform methods in the author’s own specialty, fluid mechanics. Beginning with a definition of wavelet-based statistical measures for turbulence, the text proceeds to describe wavelet thresholding in the analysis of fluid flows. The remainder of the chapter describes wavelet analysis of engineering flows, in particular jets, wakes, turbulence and coherent structures, and geophysical flows, including atmospheric and oceanic processes. The fifth chapter describes the application of wavelet methods in various branches of engineering, including machining, materials, dynamics and information engineering. Unlike previous chapters, this (and subsequent) chapters are styled more as literature reviews that describe the findings of other authors. The areas addressed in this chapter include: the monitoring of machining processes, the monitoring of rotating machinery, dynamical systems, chaotic systems, non-destructive testing, surface characterization and data compression. The sixth chapter continues in this vein with the attention now turned to wavelets in the analysis of medical signals. Most of the chapter is devoted to the analysis of one-dimensional signals (electrocardiogram, neural waveforms, acoustic signals etc.), although there is a small section on the analysis of two-dimensional medical images. The seventh and final chapter of the book focuses on the application of wavelets in three seemingly unrelated application areas: fractals, finance and geophysics. The treatment on wavelet methods in fractals focuses on stochastic fractals with a short section on multifractals. 

  16. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-12-30

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.

  17. Image compression/decompression based on mathematical transform, reduction/expansion, and image sharpening

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.

  18. Video Compression

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.

  19. A new wavelet-based thin plate element using B-spline wavelet on the interval

    NASA Astrophysics Data System (ADS)

    Jiawei, Xiang; Xuefeng, Chen; Zhengjia, He; Yinghong, Zhang

    2008-01-01

    By combining wavelet theory from mathematics with the variational principle of the finite element method, a class of wavelet-based plate elements is constructed. In the construction of the wavelet-based plate element, the element displacement field, represented by the coefficients of wavelet expansions in wavelet space, is transformed into physical degrees of freedom in finite element space via the corresponding two-dimensional C1-type transformation matrix. Then, based on the associated generalized potential energy functional of thin plate bending and vibration problems, the scaling functions of the B-spline wavelet on the interval (BSWI) at different scales are employed directly to form the multi-scale finite element approximation basis, so as to construct the BSWI plate element via the variational principle. The BSWI plate element combines the accuracy of B-spline function approximation with the advantages of various wavelet-based elements for structural analysis. Some static and dynamic numerical examples are studied to demonstrate the performance of the present element.

  20. Real-time nondestructive structural health monitoring using support vector machines and wavelets

    NASA Astrophysics Data System (ADS)

    Bulut, Ahmet; Singh, Ambuj K.; Shin, Peter; Fountain, Tony; Jasso, Hector; Yan, Linjun; Elgamal, Ahmed

    2005-05-01

    We present an alternative to visual inspection for detecting damage to civil infrastructure. We describe a real-time decision support system for nondestructive health monitoring. The system is instrumented by an integrated network of wireless sensors mounted on civil infrastructures such as bridges, highways, and commercial and industrial facilities. To address scalability and power consumption issues related to sensor networks, we propose a three-tier system that uses wavelets to adaptively reduce the streaming data spatially and temporally. At the sensor level, measurement data is temporally compressed before being sent upstream to intermediate communication nodes. There, correlated data from multiple sensors is combined and sent to the operation center for further reduction and interpretation. At each level, the compression ratio can be adaptively changed via wavelets. This multi-resolution approach is useful in optimizing total resources in the system. At the operation center, Support Vector Machines (SVMs) are used to detect the location of potential damage from the reduced data. We demonstrate that the SVM is a robust classifier in the presence of noise and that wavelet-based compression gracefully degrades its classification accuracy. We validate the effectiveness of our approach using a finite element model of the Humboldt Bay Bridge. We envision that our approach will prove novel and useful in the design of scalable nondestructive health monitoring systems.

  1. A symbol-map wavelet zero-tree image coding algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Liu, Wenyao; Peng, Xiang; Liu, Xiaoli

    2008-03-01

    An improved SPIHT image compression algorithm, called the symbol-map zero-tree coding algorithm (SMZTC), is proposed in this paper based on the wavelet transform. SPIHT is a highly efficient wavelet coefficient coding method with good image compression performance, but it is complex and requires too much memory. The algorithm presented here uses two small symbol maps, Mark and FC, to store the status of coefficients and zero-tree sets during the coding procedure, so as to reduce the memory requirement. This strategy cuts the memory cost markedly and also speeds up the scanning of coefficients. Comparison experiments on 512 by 512 images were carried out against other zero-tree coding algorithms, such as SPIHT and the NLS method, using the biorthogonal 9/7 lifting wavelet transform for the image transform. The results show that the codec speed of the proposed algorithm is improved significantly, while its compression ratio is nearly identical to that of SPIHT.

  2. [An improved motion estimation of medical image series via wavelet transform].

    PubMed

    Zhang, Ying; Rao, Nini; Wang, Gang

    2006-10-01

    The compression of medical image series is very important in telemedicine, and motion estimation plays a key role in video sequence compression. In this paper, an improved square-diamond search (SDS) algorithm is proposed for the motion estimation of medical image series; it reduces the number of searched points. The improved SDS algorithm is applied in the wavelet transform domain to estimate the motion of medical image series. A simulation experiment on digital subtraction angiography (DSA) was performed. The results show that the algorithm's accuracy is higher than that of other algorithms for the motion estimation of medical image series. PMID:17121333

  3. Variable density compressed image sampling.

    PubMed

    Wang, Zhongmin; Arce, Gonzalo R

    2010-01-01

    Compressed sensing (CS) provides an efficient way to acquire and reconstruct natural images from a limited number of linear projection measurements leading to sub-Nyquist sampling rates. A key to the success of CS is the design of the measurement ensemble. This correspondence focuses on the design of a novel variable density sampling strategy, where the a priori information of the statistical distributions that natural images exhibit in the wavelet domain is exploited. The proposed variable density sampling has the following advantages: 1) the generation of the measurement ensemble is computationally efficient and requires less memory; 2) the necessary number of measurements for image reconstruction is reduced; 3) the proposed sampling method can be applied to several transform domains and leads to simple implementations. Extensive simulations show the effectiveness of the proposed sampling method.
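    A minimal sketch of the variable-density idea follows. The radial power-law density below is an assumption for illustration, not the paper's actual measurement ensemble: each transform-domain location is sampled with a Bernoulli probability that decays away from the origin, where natural images concentrate most of their energy.

```python
import numpy as np

def variable_density_mask(n, m, frac, decay=2.0, seed=0):
    """Bernoulli sampling mask whose density decays away from the origin of
    the transform domain, so low-frequency content is sampled more densely."""
    rng = np.random.default_rng(seed)
    yy, xx = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, m),
                         indexing="ij")
    r = np.sqrt(xx**2 + yy**2)
    pdf = (1.0 - r / r.max()) ** decay        # high density at the centre
    pdf *= frac * n * m / pdf.sum()           # scale to the sampling budget
    pdf = np.clip(pdf, 0.0, 1.0)
    return rng.random((n, m)) < pdf

mask = variable_density_mask(64, 64, frac=0.3)
print(mask.mean())  # close to the requested 0.3 budget, densest at the centre
```

    Generating such a mask is cheap and memory-light, which is the first advantage claimed in the abstract; the reconstruction itself would still require a CS solver.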

  4. An adaptive undersampling scheme of wavelet-encoded parallel MR imaging for more efficient MR data acquisition

    NASA Astrophysics Data System (ADS)

    Xie, Hua; Bosshard, John C.; Hill, Jason E.; Wright, Steven M.; Mitra, Sunanda

    2016-03-01

    Magnetic Resonance Imaging (MRI) offers noninvasive, high-resolution, high-contrast cross-sectional anatomic images through the body. Conventional MRI data are collected in the spatial frequency (Fourier) domain, also known as k-space. Because there is still a great need to improve the temporal resolution of MRI, Compressed Sensing (CS) has been proposed for MR imaging to exploit the sparsity of MR images; it shows great potential to reduce scan time significantly, but it poses its own unique problems. This paper revisits wavelet-encoded MR imaging, which replaces phase encoding in conventional MRI data acquisition with wavelet encoding by applying wavelet-shaped, spatially selective radiofrequency (RF) excitation, and keeps the readout direction as frequency encoding. The practicality of wavelet-encoded MRI by itself is limited due to SNR penalties and poor time resolution compared to conventional Fourier-based MRI. To compensate for those disadvantages, this paper first introduces an undersampling scheme, named the significance map, for sparse wavelet-encoded k-space to speed up data acquisition as well as allow for various adaptive imaging strategies. The proposed adaptive wavelet-encoded undersampling scheme does not require prior knowledge of the subject to be scanned. Multiband (MB) parallel imaging is also incorporated with wavelet-encoded MRI by exciting multiple regions simultaneously for further reduction in scan time, desirable for medical applications. Simulation and experimental results are presented showing the feasibility of the proposed approach in further reducing the redundancy of the wavelet k-space data while maintaining relatively high quality.

  5. Adapted waveform analysis, wavelet packets, and local cosine libraries as a tool for image processing

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.; Woog, Lionel J.

    1995-09-01

    Adapted waveform analysis refers to a collection of FFT-like adapted transform algorithms. Given an image, these methods provide specially matched collections of templates (orthonormal bases) enabling an efficient coding of the image. Perhaps the closest well-known example of such a coding method is musical notation, where each segment of music is represented by a musical score made up of notes (templates) characterized by their duration, pitch, location, and amplitude; our method corresponds to transcribing the music in as few notes as possible. The extension to images and video is straightforward: we describe the image by collections of oscillatory patterns (paint-brush strokes) of various sizes, locations, and amplitudes using a variety of orthogonal bases. These basis functions are chosen from predefined libraries of localized oscillatory functions (trigonometric and wavelet-packet waveforms) so as to optimize the number of parameters needed to describe the object. The algorithms are of complexity N log N, opening the door to a wide range of applications in signal and image processing, such as compression, feature extraction, denoising, and enhancement. In particular, we describe a class of special-purpose compressions for fingerprint images, as well as denoising tools for texture and noise extraction. We start by relating traditional Fourier methods to wavelet and wavelet-packet based algorithms using a recent refinement of the windowed sine and cosine transforms. We then derive an adapted local sine transform, show its relation to wavelet and wavelet-packet analysis, and describe an analysis toolkit illustrating the merits of different adaptive and nonadaptive schemes.
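    The "fewest notes" selection can be sketched as a best-basis search: recursively split a band and keep the children only if they describe the signal more cheaply. The sketch below is a numpy-only illustration that uses an orthonormal Haar split with an l1 cost as a stand-in for the full trigonometric and wavelet-packet libraries discussed in the abstract.

```python
import numpy as np

def cost(c):
    """Additive proxy for 'number of notes': the l1 norm of the coefficients."""
    return np.abs(c).sum()

def haar_split(x):
    """One orthonormal Haar step: smooth and detail half-bands."""
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return s, d

def best_basis(x, depth):
    """Keep splitting a band only while the children describe it more cheaply."""
    if depth == 0 or len(x) < 2:
        return [x]
    s, d = haar_split(x)
    children = best_basis(s, depth - 1) + best_basis(d, depth - 1)
    if sum(cost(c) for c in children) < cost(x):
        return children
    return [x]

x = np.sin(2 * np.pi * np.arange(64) / 8.0)   # oscillatory input: splitting pays off
leaves = best_basis(x, depth=3)
total = sum(cost(c) for c in leaves)
print(total <= cost(x))  # True: the chosen basis never costs more than the input
```

    Because every split is orthonormal, the chosen leaves still carry all of the signal's energy; only the description cost changes.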

  6. Improved satellite image compression and reconstruction via genetic algorithms

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary

    2008-10-01

    A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.

  7. On the use of lossless integer wavelet transforms in medical image segmentation

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Mallya, Yogish

    2005-04-01

    Recent trends in medical image processing involve computationally intensive processing techniques on large data sets, especially for 3D applications such as segmentation, registration, volume rendering etc. Multi-resolution image processing techniques have been used in order to speed-up these methods. However, all well-known techniques currently used in multi-resolution medical image processing rely on using Gaussian-based or other equivalent floating point representations that are lossy and irreversible. In this paper, we study the use of Integer Wavelet Transforms (IWT) to address the issue of lossless representation and reversible reconstruction for such medical image processing applications while still retaining all the benefits which floating-point transforms offer such as high speed and efficient memory usage. In particular, we consider three low-complexity reversible wavelet transforms, namely the Lazy wavelet, the Haar or (1,1) wavelet, and the S+P transform, as against the Gaussian filter for multi-resolution speed-up of an automatic bone removal algorithm for abdomen CT Angiography. Perfect-reconstruction integer wavelet filters have the ability to perfectly recover the original data set at any step in the application. An additional advantage with the reversible wavelet representation is that it is suitable for lossless compression for purposes of storage, archiving and fast retrieval. Given the fact that even a slight loss of information in medical image processing can be detrimental to diagnostic accuracy, IWTs seem to be the ideal choice for multi-resolution based medical image segmentation algorithms. These could also be useful for other medical image processing methods.
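    The perfect-reconstruction property of integer wavelet transforms is easy to verify for the simplest case, the Haar/(1,1) S-transform computed by integer lifting. This is a generic sketch of that transform, not the paper's CT processing pipeline:

```python
import numpy as np

def int_haar_forward(x):
    """S-transform: integer-to-integer Haar via lifting (perfectly reversible)."""
    x = np.asarray(x, dtype=np.int64)
    a, b = x[0::2], x[1::2]
    d = b - a                # detail: pairwise difference
    s = a + (d >> 1)         # approximation: floor of the pair mean
    return s, d

def int_haar_inverse(s, d):
    a = s - (d >> 1)         # undo the lifting steps in reverse order
    b = d + a
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x

x = np.array([12, 7, 3, 200, 41, 41, 0, 255])
s, d = int_haar_forward(x)
print(np.array_equal(int_haar_inverse(s, d), x))  # True: bit-exact reconstruction
```

    Because every lifting step is an integer operation with an exact inverse, rounding never destroys information, which is precisely why such transforms can be applied at any stage of a lossless pipeline.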

  8. Compressed Genotyping

    PubMed Central

    Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.

    2011-01-01

    Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the ’traditional’ compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737

  9. Wavelet analysis application for remote control room operation

    NASA Astrophysics Data System (ADS)

    Semenov, Oleg I.; Semenov, Igor B.

    2003-03-01

    A compression algorithm for data transfer from a tokamak installation to a remote control room was developed on the basis of wavelet analysis. The algorithm is useful in the case of a low-speed Internet channel (˜20 kbytes/s) for real-time express analysis of raw noisy data between shots (i.e., ˜5-10 times compression of the initial raw data array (˜50-100 Mbytes), transmission, restoration, and analysis in a time interval of ˜15 min). The algorithm allows some loss of data, such that the amplitude and phase difference between the initial and restored data is less than 5% of the signal amplitude. The algorithm was tested on Mirnov signal transmission in the case of disruption instability. It was shown that the restoration error does not depend on the form of the signal, i.e., the method performs well both for spikes and for smooth functions. Experiments show that a compression factor of 5-15 can be achieved if the errors are within 0.5%-5%.
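    A compress-restore loop of this kind can be sketched with a multi-level orthonormal Haar transform and hard thresholding of small coefficients. The paper's actual wavelet and error-control scheme are not specified here, so the threshold, level count, and test signal below are illustrative only.

```python
import numpy as np

def haar_forward(x, levels):
    """Multi-level orthonormal Haar transform; layout [s_L, d_L, ..., d_1]."""
    c = x.astype(float).copy()
    n = len(c)
    for _ in range(levels):
        s = (c[0:n:2] + c[1:n:2]) / np.sqrt(2.0)
        d = (c[0:n:2] - c[1:n:2]) / np.sqrt(2.0)
        c[: n // 2], c[n // 2 : n] = s, d
        n //= 2
    return c

def haar_inverse(c, levels):
    c = c.copy()
    n = len(c) >> levels
    for _ in range(levels):
        s, d = c[:n].copy(), c[n : 2 * n].copy()
        c[0 : 2 * n : 2] = (s + d) / np.sqrt(2.0)
        c[1 : 2 * n : 2] = (s - d) / np.sqrt(2.0)
        n *= 2
    return c

t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
coeffs = haar_forward(signal, levels=6)
kept = np.abs(coeffs) >= 0.05 * np.abs(coeffs).max()   # drop small coefficients
restored = haar_inverse(np.where(kept, coeffs, 0.0), levels=6)
ratio = coeffs.size / kept.sum()
rel_err = np.abs(restored - signal).max() / np.abs(signal).max()
print(f"compression factor {ratio:.1f}, max relative error {rel_err:.3f}")
```

    Only the surviving coefficients (and their positions) would be transmitted; raising the threshold trades a higher compression factor for a larger restoration error, which is the knob the abstract's 0.5%-5% error budget turns.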

  10. Imaging system of wavelet optics described by the Gaussian linear frequency-modulated complex wavelet.

    PubMed

    Tan, Liying; Ma, Jing; Wang, Guangming

    2005-12-01

    The image formation and the point-spread function of an optical system are analyzed by use of the wavelet basis function. The image described by a wavelet is no longer an indivisible whole image. It is, rather, a complex image consisting of many wavelet subimages, which come from the changes of the different (scale) parameters a and c, while parameters b and d give the positions of the wavelet subimages under different scales. A Gaussian frequency-modulated complex-valued wavelet function is introduced to express the point-spread function of an optical system and used to describe the image formation. The analysis, for the case of illumination by a monochromatic plane light wave, shows that using the theory of wavelet optics to describe the image formation of an optical system is feasible.

  11. Imaging system of wavelet optics described by the Gaussian linear frequency-modulated complex wavelet

    NASA Astrophysics Data System (ADS)

    Tan, Liying; Ma, Jing; Wang, Guangming

    2005-12-01

    The image formation and the point-spread function of an optical system are analyzed by use of the wavelet basis function. The image described by a wavelet is no longer an indivisible whole image. It is, rather, a complex image consisting of many wavelet subimages, which come from the changes of the different (scale) parameters a and c, while parameters b and d give the positions of the wavelet subimages under different scales. A Gaussian frequency-modulated complex-valued wavelet function is introduced to express the point-spread function of an optical system and used to describe the image formation. The analysis, for the case of illumination by a monochromatic plane light wave, shows that using the theory of wavelet optics to describe the image formation of an optical system is feasible.

  12. Storage and compression design of high speed CCD

    NASA Astrophysics Data System (ADS)

    Cai, Xichang; Zhai, LinPei

    2009-05-01

    In the current field of CCD measurement, large-area, high-resolution CCDs are used to obtain large measurement images, so the speed and capacity of the CCD demand high performance from the subsequent storage and processing system. This paper discusses how to construct the storage system from SCSI hard disks and how to realize image compression with DSPs and an FPGA. For the storage subsystem, because the CCD is divided into multiplexed outputs, a SCSI array is used in RAID 0 mode. The storage system is composed of a high-speed buffer, a DMA controller, a control MCU, a SCSI protocol controller, and SCSI hard disks. For the compression subsystem, according to the requirements of the communication and monitoring system, the output is a fixed-resolution image and an analog PAL signal. The compression method is the JPEG 2000 standard, using the 9/7 wavelet in lifting form. Two DSPs and an FPGA compose the parallel compression system, which consists of an FPGA pre-processing module, a DSP compression module, a video decoder module, a data buffer module, and a communication module. First, the discrete wavelet transform and quantization are realized in the FPGA. Second, entropy coding and stream adaptation are realized in the DSPs. Finally, the analog PAL signal is output by the video decoder. The data buffer is realized in synchronous dual-port RAM, and the subsystem state is transferred to the controller. Subjective and objective evaluation shows that the storage and compression system satisfies the system requirements.

  13. Applications of a fast, continuous wavelet transform

    SciTech Connect

    Dress, W.B.

    1997-02-01

    A fast, continuous wavelet transform, based on Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as a continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Computational cost and nonorthogonality aside, the inherent flexibility and shift invariance of the frequency-space wavelets have advantages. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heartbeats. Audio reconstruction is aided by selection of desired regions in the 2-D representation of the magnitude of the transformed signal. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system (e.g., a vehicle) by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions. By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, features such as the glottal closing rate and word and phrase segmentation may be extracted from voice data.

  14. Wavelet-based multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Liu, Dian-Ting; Zhou, Xiao-Dan; Wang, Cheng-Wen

    2008-09-01

    This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies the combination of Gabor and the Fisherfaces method to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to variation in expression and in illumination. The classification performance is improved by combining the multispectral information coming from the subbands that individually attain a low equal error rate. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms previous multispectral image fusion methods as well as monospectral methods.

  15. Wavelet Applications for Flight Flutter Testing

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Freudinger, Lawrence C.

    1999-01-01

    Wavelets present a method for signal processing that may be useful for analyzing responses of dynamical systems. This paper describes several wavelet-based tools that have been developed to improve the efficiency of flight flutter testing. One of the tools uses correlation filtering to identify properties of several modes throughout a flight test for envelope expansion. Another tool uses features in time-frequency representations of responses to characterize nonlinearities in the system dynamics. A third tool uses modulus and phase information from a wavelet transform to estimate modal parameters that can be used to update a linear model and reduce conservatism in robust stability margins.

  16. [Multispectral image compression algorithms for color reproduction].

    PubMed

    Liang, Wei; Zeng, Ping; Luo, Xue-mei; Wang, Yi-feng; Xie, Kun

    2015-01-01

    In order to improve the compression efficiency of multispectral images and further facilitate their storage and transmission for applications such as color reproduction, where high color accuracy is desired, the WF serial methods are proposed and the APWS_RA algorithm is designed. The WF_APWS_RA algorithm, which has the advantages of low complexity, good illuminant stability, and support for consistent color reproduction across devices, is then presented. The conventional MSE-based wavelet embedded coding principle is first studied, and a color-perception distortion criterion and visual characteristic matrix W are proposed. Meanwhile, the APWS_RA algorithm is formed by optimizing the rate allocation strategy of APWS. Finally, combining the above technologies, a new coding method named WF_APWS_RA is designed. A colorimetric error criterion is used in the algorithm, and APWS_RA is applied to the visually weighted multispectral image. In WF_APWS_RA, affinity propagation clustering is utilized to exploit the spectral correlation of the weighted image. A two-dimensional wavelet transform is then used to remove the spatial redundancy. Subsequently, an error compensation mechanism and rate pre-allocation are combined to accomplish the embedded wavelet coding. Experimental results show that, at the same bit rate, the WF serial algorithms retain color better than classical coding algorithms; APWS_RA preserves the least spectral error, and the WF_APWS_RA algorithm has a clear advantage in color accuracy.

  17. Velocity and Object Detection Using Quaternion Wavelets

    SciTech Connect

    Traversoni, Leonardo; Xu Yi

    2007-09-06

    Starting from stereoscopic films, we detect corresponding objects in both views and establish an epipolar geometry; corresponding moving objects are likewise detected and their movement described, all using quaternion wavelets and quaternion phase-space decomposition.

  18. Wavelet analysis for characterizing human electroencephalogram signals

    NASA Astrophysics Data System (ADS)

    Li, Bai-Lian; Wu, Hsin-i.

    1995-04-01

    Wavelet analysis is a recently developed mathematical theory and computational method for decomposing a nonstationary signal into components that have good localization properties in both the time and frequency domains and hierarchical structures. The wavelet transform provides local information and a multiresolution decomposition of a signal that cannot be obtained using traditional methods such as Fourier transforms and distribution-based statistical methods. Hence changes in complex biological signals can be detected. In this paper we use wavelet analysis as an innovative method for identifying and characterizing multiscale electroencephalogram signals. We develop a wavelet-based stationary phase transition method to extract instantaneous frequencies of the signal that vary in time. The results under different clinical situations show that the brain triggers small bursts of either low or high frequency immediately prior to changing to that behavior on the global scale. This information could be used as a diagnostic for detecting the onset of an epileptic seizure.

  19. Wavelet-based acoustic recognition of aircraft

    SciTech Connect

    Dress, W.B.; Kercel, S.W.

    1994-09-01

    We describe a wavelet-based technique for identifying aircraft from acoustic emissions during take-off and landing. Tests show that the sensor can be a single, inexpensive hearing-aid microphone placed close to the ground. The paper describes data collection, analysis by various techniques, methods of event classification, and extraction of certain physical parameters from wavelet subspace projections. The primary goal of this paper is to show that wavelet analysis can be used as a divide-and-conquer first step in signal processing, providing both simplification and noise filtering. The idea is to project the original signal onto the orthogonal wavelet subspaces, both details and approximations. Subsequent analysis, such as system identification, nonlinear systems analysis, and feature extraction, is then carried out on the various signal subspaces.

  20. Applications of a fast continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Dress, William B.

    1997-04-01

    A fast, continuous, wavelet transform, justified by appealing to Shannon's sampling theorem in frequency space, has been developed for use with continuous mother wavelets and sampled data sets. The method differs from the usual discrete-wavelet approach and from the standard treatment of the continuous-wavelet transform in that, here, the wavelet is sampled in the frequency domain. Since Shannon's sampling theorem lets us view the Fourier transform of the data set as representing the continuous function in frequency space, the continuous nature of the functions is kept up to the point of sampling the scale-translation lattice, so the scale-translation grid used to represent the wavelet transform is independent of the time-domain sampling of the signal under analysis. Although more computationally costly and not represented by an orthogonal basis, the inherent flexibility and shift invariance of the frequency-space wavelets are advantageous for certain applications. The method has been applied to forensic audio reconstruction, speaker recognition/identification, and the detection of micromotions of heavy vehicles associated with ballistocardiac impulses originating from occupants' heartbeats. Audio reconstruction is aided by selection of desired regions in the 2D representation of the magnitude of the transformed signals. The inverse transform is applied to ridges and selected regions to reconstruct areas of interest, unencumbered by noise interference lying outside these regions. To separate micromotions imparted to a mass-spring system by an occupant's beating heart from gross mechanical motions due to wind and traffic vibrations, a continuous frequency-space wavelet, modeled on the frequency content of a canonical ballistocardiogram, was used to analyze time series taken from geophone measurements of vehicle micromotions.
By using a family of mother wavelets, such as a set of Gaussian derivatives of various orders, different features may be extracted from voice
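A minimal sketch of the frequency-space method described above, using an order-2 Gaussian-derivative mother wavelet (one member of the family mentioned): the wavelet is sampled directly in the frequency domain, so the scale grid is arbitrary and independent of the time-domain sampling. The scale grid and test signal here are hypothetical.

```python
import numpy as np

def gauss_deriv_cwt(x, scales, order=2):
    """CWT computed by sampling an order-m Gaussian-derivative wavelet
    in the frequency domain; the scale grid need not be dyadic."""
    n = len(x)
    X = np.fft.fft(x)
    omega = 2 * np.pi * np.fft.fftfreq(n)         # rad/sample
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # frequency response of the dilated Gaussian-derivative wavelet
        psi_hat = (1j * s * omega) ** order * np.exp(-0.5 * (s * omega) ** 2)
        out[i] = np.fft.ifft(X * np.conj(psi_hat)).real
    return out

scales = np.geomspace(2, 64, 25)        # arbitrary, non-dyadic scale grid
t = np.arange(512)
x = np.sin(2 * np.pi * t / 32)          # a period-32 oscillation
W = gauss_deriv_cwt(x, scales)
best = scales[np.argmax(np.max(np.abs(W), axis=1))]   # best-matching scale
```

For an order-2 wavelet the response peaks where the dilated wavelet's passband matches the signal's frequency, so `best` lands near the scale corresponding to the period-32 tone.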

  1. Image and video compression for HDR content

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.

    2012-10-01

    High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video, and many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on our previous work and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge, while masking is more consistent on the darker side of the edge.

  2. Wavelet neural networks for stock trading

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxing; Fataliyev, Kamaladdin; Wang, Lipo

    2013-05-01

    This paper explores the application of a wavelet neural network (WNN), whose hidden layer is comprised of neurons with adjustable wavelets as activation functions, to stock prediction. We discuss some basic rationales behind technical analysis, and based on which, inputs of the prediction system are carefully selected. This system is tested on Istanbul Stock Exchange National 100 Index and compared with traditional neural networks. The results show that the WNN can achieve very good prediction accuracy.

  3. Dictionary Approaches to Image Compression and Reconstruction

    NASA Technical Reports Server (NTRS)

    Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.

    1998-01-01

    This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
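As a hedged illustration of one of the methods compared above, Matching Pursuit, the sketch below greedily selects dictionary atoms; a random toy dictionary stands in for the paper's wavelet-packet dictionary, and all sizes are hypothetical.

```python
import numpy as np

def matching_pursuit(x, D, n_iter=30):
    """Greedy matching pursuit: at each step pick the dictionary atom
    most correlated with the residual and subtract its projection."""
    resid = x.astype(float).copy()
    coefs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ resid                 # correlations with unit-norm atoms
        k = np.argmax(np.abs(corr))
        coefs[k] += corr[k]
        resid -= corr[k] * D[:, k]
    return coefs, resid

rng = np.random.default_rng(1)
# Toy overcomplete dictionary: 64 unit-norm random atoms in R^32.
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)
# A signal built from three atoms.
x = 2 * D[:, 5] - 1.5 * D[:, 40] + D[:, 17]
coefs, resid = matching_pursuit(x, D, n_iter=30)
err = np.linalg.norm(resid) / np.linalg.norm(x)
```

The residual norm decreases monotonically at every step, and the coefficient vector gives the sparse representation used for compression.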

  5. Compression scheme for geophysical electromagnetic inversions

    NASA Astrophysics Data System (ADS)

    Abubakar, A.

    2014-12-01

    We have developed a model-compression scheme for improving the efficiency of the regularized Gauss-Newton inversion algorithm for geophysical electromagnetic applications. In this scheme, the unknown model parameters (the conductivity/resistivity distribution) are represented in terms of a basis such as Fourier or wavelet (Haar and Daubechies). By applying a truncation criterion, the model may then be approximated by a reduced number of basis functions, usually far fewer than the number of model parameters. Further, because geophysical electromagnetic measurements have low resolution, it is sufficient for inversion to keep only the low-spatial-frequency part of the image. This model-compression scheme reduces both the computation time and the memory usage of the Gauss-Newton method. We are able to significantly reduce the algorithm's computational complexity without compromising the quality of the inverted models.
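The truncation step can be sketched as follows, assuming a Haar basis and a hypothetical smooth 1-D resistivity profile (the model, threshold and sizes are illustrative, not the paper's): transform the model, discard small coefficients, and compare the surviving number of unknowns with the reconstruction error.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix of size n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # scaling (average) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest detail rows
    return np.vstack([top, bot]) / np.sqrt(2)

n = 256
z = np.linspace(0, 1, n)
model = 10 + 5 * np.tanh((z - 0.5) / 0.05)   # smooth two-layer "resistivity"
H = haar_matrix(n)
c = H @ model                                 # wavelet coefficients
keep = np.abs(c) > 0.01 * np.abs(c).max()     # truncation criterion (assumed)
approx = H.T @ (c * keep)                     # reduced-basis reconstruction
n_kept = int(keep.sum())
rel_err = np.linalg.norm(approx - model) / np.linalg.norm(model)
```

Only a small fraction of the 256 unknowns survives truncation, which is exactly the reduction that shrinks each Gauss-Newton update.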

  6. The Continuous wavelet in airborne gravimetry

    NASA Astrophysics Data System (ADS)

    Liang, X.; Liu, L.

    2013-12-01

    Airborne gravimetry is an efficient method for recovering the medium- and high-frequency bands of the Earth's gravity field over any region, including inaccessible areas. It can measure gravity with high accuracy, high resolution and broad coverage in a rapid and economical way, and it plays an important role in geoid determination and geophysical exploration. Because aircraft acceleration is determined from GPS, filtering to reduce high-frequency errors is critical to the success of airborne gravimetry. Traditional filters used in airborne gravimetry are FIR and IIR filters. This study recommends an improved continuous wavelet approach to processing airborne gravity data. Here we focus on how to construct the continuous wavelet filters and show their working principle. In particular, the technical parameters of the filters (window width and scale) are tested. The raw airborne gravity data from the first Chinese airborne gravimetry campaign are then filtered using an FIR low-pass filter and the continuous wavelet filters to remove noise. Comparison with reference data is performed to determine the external accuracy, and shows that the continuous wavelet filters applied to airborne gravity here perform well. The advantages of the continuous wavelet filters over digital filters are also introduced, and the effectiveness of the continuous wavelet filters for airborne gravimetry is demonstrated through real-data computation.

  7. Storing digital data using zero-compression method

    NASA Astrophysics Data System (ADS)

    Al-Qawasmi, Abdel-Rahman; Al-Lawama, Aiman

    2008-01-01

    The Zero-Compression Method (ZCM) is a simple and effective algorithm for compressing digital data that contain a significant number of zeros. The method can encode and decode the data at high processing speed and can recover the stored data with minimum error and minimum storage area. The ZCM also has wide practical value in storing data extracted from ECG signals. The method is implemented for various types of signals, such as textual data, wavelet subsignals, randomly generated signals and speech signals. In this paper the coding and decoding algorithms are presented.
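A minimal sketch of the zero-run idea, with a hypothetical codeword layout (the paper's exact format is not specified here): runs of zeros become (0, run-length) pairs and nonzero samples are stored literally, which pays off for data such as thresholded wavelet subsignals.

```python
def zcm_encode(data):
    """Replace each run of zeros by a (0, run_length) pair;
    keep nonzero samples as (1, value) literals."""
    out = []
    i = 0
    while i < len(data):
        if data[i] == 0:
            j = i
            while j < len(data) and data[j] == 0:
                j += 1
            out.append((0, j - i))      # a zero run and its length
            i = j
        else:
            out.append((1, data[i]))    # a literal nonzero value
            i += 1
    return out

def zcm_decode(encoded):
    """Exact inverse of zcm_encode."""
    out = []
    for tag, val in encoded:
        if tag == 0:
            out.extend([0] * val)
        else:
            out.append(val)
    return out

signal = [0, 0, 0, 5, 0, 0, 0, 0, -3, 7, 0, 0]
packed = zcm_encode(signal)
restored = zcm_decode(packed)
```

Here twelve samples pack into six codewords, and decoding recovers the signal exactly, matching the lossless round-trip the abstract describes.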

  8. Wavelet-based Poisson Solver for use in Particle-In-Cell Simulations

    SciTech Connect

    Terzic, B.; Mihalcea, D.; Bohn, C.L.; Pogorelov, I.V.

    2005-05-13

    We report on a successful implementation of a wavelet-based Poisson solver for use in 3D particle-in-cell (PIC) simulations. One new aspect of our algorithm is its ability to treat the general (inhomogeneous) Dirichlet boundary conditions (BCs). The solver harnesses advantages afforded by the wavelet formulation, such as sparsity of operators and data sets, existence of effective preconditioners, and the ability simultaneously to remove numerical noise and further compress relevant data sets. Having tested our method as a stand-alone solver on two model problems, we merged it into IMPACT-T to obtain a fully functional serial PIC code. We present and discuss preliminary results of application of the new code to the modeling of the Fermilab/NICADD and AES/JLab photoinjectors.

  9. Adaptive Wavelet Techniques, Wigner Distributions and the Direct Simulation of the Vlasov Equation

    NASA Astrophysics Data System (ADS)

    Afeyan, Bedros; Douglas, Melissa; Spielman, Rick

    2000-10-01

    The formal analogy between the quantum Liouville equation satisfied by the Wigner function in Quantum Mechanics, and the Vlasov equation satisfied by the single particle distribution function in plasma physics, is exploited in order to study the long term evolution of nonlinear electrostatic wave phenomena dictated by the Vlasov-Poisson equations. Adaptive wavelet techniques are used to tile phase space in an optimal manner so as to minimize computational domain sizes and simultaneously to retain accuracy over disparate scales. Traditional MHD calculations will also be analyzed with our wavelet techniques to show the favorable data compression and feature extraction capabilities of multiresolution analysis. Specifically, shots Z51 and Z179 will be compared to show the nature of the improvement of double wire array implosions on Z (Z179) over those obtained with a single wire array (Z51).

  10. Multichannel compressive sensing MRI using noiselet encoding.

    PubMed

    Pawar, Kamlesh; Egan, Gary; Zhang, Jingxin

    2015-01-01

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet-encoded MCS-MRI outperforms Fourier-encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding. PMID:25965548
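The incoherence argument can be illustrated numerically. The sketch below computes the mutual coherence mu(Phi, Psi) = sqrt(N) * max |&lt;phi_i, psi_j&gt;| between orthonormal bases; noiselets are omitted, but the Fourier-Haar pair shows the problem the paper addresses: both bases contain the constant vector, so their coherence is maximal, whereas the classic Fourier-spike pair attains the minimum mu = 1.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """mu = sqrt(N) * max |<phi_i, psi_j>| for two orthonormal bases;
    mu ranges from 1 (maximally incoherent) up to sqrt(N)."""
    n = Phi.shape[0]
    return np.sqrt(n) * np.abs(Phi.conj().T @ Psi).max()

def haar_matrix(n):
    """Orthonormal Haar transform matrix (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    return np.vstack([np.kron(h, [1.0, 1.0]),
                      np.kron(np.eye(n // 2), [1.0, -1.0])]) / np.sqrt(2)

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # orthonormal Fourier basis
H = haar_matrix(n).T                     # columns = Haar basis vectors
I = np.eye(n)                            # identity (spike) basis
mu_fourier_haar = mutual_coherence(F, H)
mu_fourier_spike = mutual_coherence(F, I)   # classic minimum: mu = 1
```

The Fourier-Haar coherence hits the sqrt(N) ceiling through the shared constant atom, one way to see why Fourier sampling is a poor match for wavelet sparsity and why a maximally incoherent basis like noiselets helps.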

  13. The likelihood term in restoration of transform-compressed imagery

    NASA Astrophysics Data System (ADS)

    Robertson, Mark A.

    2004-05-01

    Compression of imagery by quantization of the data's transform coefficients introduces an error in the imagery upon decompression. When processing compressed imagery, often a likelihood term is used to provide a statistical description of how the observed data are related to the original noise-free data. This work derives the statistical relationship between compressed imagery and the original imagery, which is found to be embodied in a (in general) non-diagonal covariance matrix. Although the derivations are valid for transform coding in general, the work is motivated by considering examples for the specific cases of compression using the discrete cosine transform and the discrete wavelet transform. An example application of motion-compensated temporal filtering is provided to show how the presented likelihood term might be used in a restoration scenario.

  14. Multiparameter radar analysis using wavelets

    NASA Astrophysics Data System (ADS)

    Tawfik, Ben Bella Sayed

    Multiparameter radars have been used in the interpretation of many meteorological phenomena, and rainfall estimates can be obtained from multiparameter radar measurements. Studying and analyzing the spatial variability of different rainfall algorithms, namely R(ZH), based on reflectivity; R(ZH, ZDR), based on reflectivity and differential reflectivity; R(KDP), based on specific differential phase; and R(KDP, ZDR), based on specific differential phase and differential reflectivity, is important for radar applications. The data used in this research were collected using the CSU-CHILL, CP-2, and S-POL radars. In this research multiple objectives are addressed using wavelet analysis, namely (1) space-time variability of various rainfall algorithms, (2) separation of convective and stratiform storms based on reflectivity measurements, and (3) detection of features such as bright bands. The bright band is a multiscale edge detection problem. In this research, the technique of multiscale edge detection is applied to radar data collected with the CP-2 radar on August 23, 1991 to detect the melting layer. In the analysis of space-time variability of the rainfall algorithms, the wavelet variance gives insight into the statistics of the radar field. In addition, multiresolution analyses of the rainfall estimates based on the four algorithms R(ZH), R(ZH, ZDR), R(KDP), and R(KDP, ZDR) are performed. The flood data of July 29, 1997 collected by the CSU-CHILL radar were used for this analysis, along with S-POL radar data collected on May 2, 1997 at Wichita, Kansas. At each level of approximation, the detail and approximation components are analyzed, and the rainfall algorithms can be judged on this basis. From this analysis an important result was obtained: the Z-R algorithms that are widely used do not show the full spatial variability of rainfall.
In addition another intuitively obvious result

  15. Wavelet transforms as solutions of partial differential equations

    SciTech Connect

    Zweig, G.

    1997-10-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). Wavelet transforms are useful in representing transients whose time and frequency structure reflect the dynamics of an underlying physical system. Speech sound, pressure in turbulent fluid flow, or engine sound in automobiles are excellent candidates for wavelet analysis. This project focused on (1) methods for choosing the parent wavelet for a continuous wavelet transform in pattern recognition applications and (2) the more efficient computation of continuous wavelet transforms by understanding the relationship between discrete wavelet transforms and discretized continuous wavelet transforms. The most interesting result of this research is the finding that the generalized wave equation, on which the continuous wavelet transform is based, can be used to understand phenomena that relate to the process of hearing.

  16. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, taking a look at the human visual system can give us an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low level signal and image processing using the theory of wavelets is introduced which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. So, images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher level applications such as edge and corner finding, which in turn provides two basic steps to image segmentation. The possibilities of feedback between different levels of processing are also discussed.

  17. Analysis of autostereoscopic three-dimensional images using multiview wavelets.

    PubMed

    Saveljev, Vladimir; Palchikova, Irina

    2016-08-10

    We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images. PMID:27534470

  18. Wavelet based detection of manatee vocalizations

    NASA Astrophysics Data System (ADS)

    Gur, Berke M.; Niezrecki, Christopher

    2005-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.
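A hedged sketch of coefficient-threshold detection in the spirit described above: a synthetic tone burst in noise stands in for a manatee call, and a median-based threshold on the smoothed energy of one Haar detail band flags the call. The criterion and all parameters are assumptions for illustration, not those of the study.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT (approximation, detail)."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

rng = np.random.default_rng(2)
fs = 8192
t = np.arange(fs) / fs                        # one second of "hydrophone" data
x = 0.3 * rng.normal(size=fs)                 # background noise
burst = (t > 0.4) & (t < 0.6)
x[burst] += np.sin(2 * np.pi * 2048 * t[burst])   # synthetic "call"

# Level-1 Haar details cover roughly the upper half-band of the signal.
a, d = haar_step(x)
energy = np.convolve(d**2, np.ones(64) / 64, mode="same")  # smoothed envelope
thresh = 5 * np.median(energy)                # assumed detection criterion
detected = energy > thresh
```

Coefficients inside the burst carry the tone's energy and exceed the noise-floor threshold, while quiet segments stay below it.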

  19. Review of wavelet transforms for pattern recognitions

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1996-03-01

    After relating the adaptive wavelet transform to the human visual and hearing systems, we exploit the synergism between such a smart sensor processing with brain-style neural network computing. The freedom of choosing an appropriate kernel of a linear transform, which is given to us by the recent mathematical foundation of the wavelet transform, is exploited fully and is generally called the adaptive wavelet transform (WT). However, there are several levels of adaptivity: (1) optimum coefficients: adjustable transform coefficients chosen with respect to a fixed mother kernel for better invariant signal representation, (2) super-mother: grouping different scales of daughter wavelets of same or different mother wavelets at different shift location into a new family called a superposition mother kernel for better speech signal classification, (3) variational calculus to determine ab initio a constraint optimization mother for a specific task. The tradeoff between the mathematical rigor of the complete orthonormality and the speed of order (N) with the adaptive flexibility is finally up to the user's needs. Then, to illustrate (1), a new invariant optoelectronic architecture of a wedge- shape filter in the WT domain is given for scale-invariant signal classification by neural networks.

  20. Lifting wavelet method of target detection

    NASA Astrophysics Data System (ADS)

    Han, Jun; Zhang, Chi; Jiang, Xu; Wang, Fang; Zhang, Jin

    2009-11-01

    Image target recognition plays a very important role in scientific exploration, aeronautics, space-to-ground observation, photography and topographic mapping. Image noise, blur and various kinds of interference in complex environments have always affected the stability of recognition algorithms. Addressing the real-time, accuracy and anti-interference problems of target detection, this paper proposes a lifting-wavelet method for image target detection. First, histogram equalization and frame differencing are used to obtain the target region, and adaptive thresholding and mathematical morphology operations are applied to eliminate background error. Second, a multi-channel wavelet filter is used to de-noise and enhance the original image, overcoming the noise sensitivity of general algorithms and reducing the false-alarm rate. The multi-resolution property of the lifting-wavelet framework, which can be designed directly in the space-time domain, is exploited for target detection and feature extraction. The experimental results show that the designed lifting wavelet overcomes the difficulties caused by target motion in complex backgrounds; it can effectively suppress noise and improve the efficiency and speed of detection.
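The lifting construction itself is simple to sketch (Haar flavour, as one common example; the paper's exact filters are not specified): split the samples into evens and odds, predict the odds from the evens, and update the evens so the running average is preserved. The transform is inverted exactly by running the steps backwards.

```python
import numpy as np

def lift_forward(x):
    """One lifting step: split -> predict -> update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even           # predict: odd ~ even, keep the residual (detail)
    a = even + d / 2         # update: a = (even + odd) / 2, the local average
    return a, d

def lift_inverse(a, d):
    """Undo the lifting steps in reverse order."""
    even = a - d / 2
    odd = d + even
    x = np.empty(2 * len(a))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([3.0, 5.0, 4.0, 4.0, 9.0, 1.0, 2.0, 6.0])
a, d = lift_forward(x)       # a: local averages, d: details
y = lift_inverse(a, d)       # perfect reconstruction
```

Because every step is computed in place with integer-friendly arithmetic and no extra buffers, lifting is well suited to the fast detection pipeline the abstract targets.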

  1. Application of wavelet analysis for monitoring the hydrologic effects of dam operation: Glen canyon dam and the Colorado River at lees ferry, Arizona

    USGS Publications Warehouse

    White, M.A.; Schmidt, J.C.; Topping, D.J.

    2005-01-01

    Wavelet analysis is a powerful tool with which to analyse the hydrologic effects of dam construction and operation on river systems. Using continuous records of instantaneous discharge from the Lees Ferry gauging station and records of daily mean discharge from upstream tributaries, we conducted wavelet analyses of the hydrologic structure of the Colorado River in Grand Canyon. The wavelet power spectrum (WPS) of daily mean discharge provided a highly compressed and integrative picture of the post-dam elimination of pronounced annual and sub-annual flow features. The WPS of the continuous record showed the influence of diurnal and weekly power generation cycles, shifts in discharge management, and the 1996 experimental flood in the post-dam period. Normalization of the WPS by local wavelet spectra revealed the fine structure of modulation in discharge scale and amplitude and provides an extremely efficient tool with which to assess the relationships among hydrologic cycles and ecological and geomorphic systems. We extended our analysis to sections of the Snake River and showed how wavelet analysis can be used as a data mining technique. The wavelet approach is an especially promising tool with which to assess dam operation in less well-studied regions and to evaluate management attempts to reconstruct desired flow characteristics. Copyright © 2005 John Wiley & Sons, Ltd.

  2. CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC

    NASA Astrophysics Data System (ADS)

    Poupat, Jean-Luc; Vitulli, Raffaele

    2013-08-01

    The space market is increasingly demanding in terms of image compression performance. The resolution, agility and swath of Earth-observation satellite instruments are continuously increasing, multiplying by ten the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass and power consumption. Astrium, a market leader in combined compression and memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of the ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth Observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image and very-high-speed image compression ASIC potentially relevant for the compression of any 2D image with bi-dimensional data correlation, such as Earth observation, scientific data compression… The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach and the status of the project.

  3. Wavelet approximation of correlated wave functions. II. Hyperbolic wavelets and adaptive approximation schemes

    NASA Astrophysics Data System (ADS)

    Luo, Hongjun; Kolb, Dietmar; Flad, Heinz-Jurgen; Hackbusch, Wolfgang; Koprucki, Thomas

    2002-08-01

    We have studied various aspects concerning the use of hyperbolic wavelets and adaptive approximation schemes for wavelet expansions of correlated wave functions. In order to analyze the consequences of reduced regularity of the wave function at the electron-electron cusp, we first considered a realistic exactly solvable many-particle model in one dimension. Convergence rates of wavelet expansions, with respect to L2 and H1 norms and the energy, were established for this model. We compare the performance of hyperbolic wavelets and their extensions through adaptive refinement in the cusp region with a fully adaptive treatment based on the energy contribution of individual wavelets. Although hyperbolic wavelets show an inferior convergence behavior, they can be easily refined in the cusp region, yielding an optimal convergence rate for the energy. Preliminary results for the helium atom are presented, which demonstrate the transferability of our observations to more realistic systems. We propose a contraction scheme for wavelets in the cusp region, which reduces the number of degrees of freedom and yields a favorable cost-to-benefit ratio for the evaluation of matrix elements.

  4. [Wavelet entropy analysis of spontaneous EEG signals in Alzheimer's disease].

    PubMed

    Zhang, Meiyun; Zhang, Benshu; Chen, Ying

    2014-08-01

    Wavelet entropy is a quantitative index to describe the complexity of signals. A continuous wavelet transform method was employed to analyze the spontaneous electroencephalogram (EEG) signals of mild, moderate and severe Alzheimer's disease (AD) patients and normal elderly control people in this study. Wavelet power spectra of the EEG signals were calculated from the wavelet coefficients. Wavelet entropies of mild, moderate and severe AD patients were compared with those of normal controls, and the correlation between wavelet entropy and MMSE score was analyzed. There were significant differences in wavelet entropy among mild, moderate and severe AD patients and normal controls (P<0.01). Group comparisons showed that wavelet entropy for mild, moderate and severe AD patients was significantly lower than that for normal controls, which was related to the narrow distribution of their wavelet power spectra; the statistical difference was significant (P<0.05). Further studies showed that the wavelet entropy of the EEG and the MMSE score were significantly correlated (r = 0.601-0.799, P<0.01). Wavelet entropy is a quantitative indicator describing the complexity of EEG signals and is likely to be an electrophysiological index for AD diagnosis and severity assessment.
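The wavelet entropy used above is the Shannon entropy of the relative wavelet energies, S = -Σ p_j ln p_j with p_j = E_j/Σ_k E_k, commonly normalized by ln N so that it lies in [0, 1]. A minimal sketch (the band energies below are made-up values, not EEG data):

```python
import math

def wavelet_entropy(level_energies):
    """Normalized Shannon entropy of the relative wavelet energies.

    p_j = E_j / sum(E); S = -sum(p_j * ln p_j) / ln N.
    Returns ~0 when energy is concentrated in one band (ordered signal)
    and 1 when energy is spread uniformly over all bands.
    """
    total = sum(level_energies)
    probs = [e / total for e in level_energies if e > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(level_energies))

# Hypothetical band energies: a narrow spectrum vs. a flat one.
narrow = wavelet_entropy([9.7, 0.1, 0.1, 0.1])
flat = wavelet_entropy([2.5, 2.5, 2.5, 2.5])
```

A narrow power spectrum, as reported for the AD patients above, yields the lower entropy.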

  5. An Energy Efficient Compressed Sensing Framework for the Compression of Electroencephalogram Signals

    PubMed Central

    Fauvel, Simon; Ward, Rabab K.

    2014-01-01

    The use of wireless body sensor networks is gaining popularity in monitoring and communicating information about a person's health. In such applications, the amount of data transmitted by the sensor node should be minimized. This is because the energy available in these battery-powered sensors is limited. In this paper, we study the wireless transmission of electroencephalogram (EEG) signals. We propose the use of a compressed sensing (CS) framework to efficiently compress these signals at the sensor node. Our framework exploits both the temporal correlation within EEG signals and the spatial correlations amongst the EEG channels. We show that our framework is up to eight times more energy efficient than the typical wavelet compression method in terms of compression and encoding computations and wireless transmission. We also show that for a fixed compression ratio, our method achieves a better reconstruction quality than the CS-based state-of-the-art method. We finally demonstrate that our method is robust to measurement noise and to packet loss and that it is applicable to a wide range of EEG signal types. PMID:24434840
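The core CS idea the paper relies on — that a sparse signal can be recovered from fewer random projections than samples — can be illustrated at toy scale. The sketch below recovers a 1-sparse signal by brute-force support search, which is not the authors' reconstruction algorithm; the dimensions and seed are illustrative assumptions:

```python
import random

random.seed(7)
n, m = 8, 4                                   # signal length, measurements
phi = [[random.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]

x = [0.0] * n
x[5] = 3.0                                    # 1-sparse signal
y = [sum(phi[i][j] * x[j] for j in range(n)) for i in range(m)]   # y = Phi x

def recover_1sparse(phi, y):
    """Brute-force l0 recovery: try each single-column support and keep
    the one whose least-squares fit best reproduces the measurements."""
    best = None
    for j in range(len(phi[0])):
        col = [row[j] for row in phi]
        amp = sum(c * v for c, v in zip(col, y)) / sum(c * c for c in col)
        resid = sum((v - amp * c) ** 2 for c, v in zip(col, y))
        if best is None or resid < best[0]:
            best = (resid, j, amp)
    return best[1], best[2]

pos, amp = recover_1sparse(phi, y)
```

Even with only 4 measurements of an 8-sample signal, the single nonzero sample and its amplitude are identified; practical CS replaces the brute-force search with solvers that scale to k-sparse signals.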

  6. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches, by including à trous with B(3) spline scaling function, in addition to other wavelet bases as Gabor and Mexican-hat. The purpose is to extract more reliable directional information, when wind speed values range from 5 to 10 ms(-1). Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms.

  7. Component identification of nonstationary signals using wavelets

    SciTech Connect

    Otaduy, P.J.; Georgevich, V.

    1993-01-01

    Fourier analysis is based on the decomposition of a signal into a linear combination of integral dilations of the base function e^(ix), i.e., of sinusoidal waves. The larger the dilation, the higher the frequency of the sinusoidal component. Each frequency component is of constant magnitude along the signal length. Localized features are averaged over the signal's length; thus, time localization is absent. Wavelet analysis is based on the decomposition of a signal into a linear combination of binary dilations and dyadic translations of a base function with compact support, i.e., a basic wavelet. A basic wavelet function can be, with basic restrictions, any function suitable to be a window in both the time and frequency domains.
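The localization contrast drawn above can be made concrete: an isolated spike spreads equal magnitude across every DFT bin, but touches only about log2(N) + 1 Haar wavelet coefficients. A minimal sketch (spike position and signal length are arbitrary choices):

```python
import cmath, math

N = 64
spike = [0.0] * N
spike[13] = 1.0                       # an isolated transient

# DFT: every bin has the same magnitude, so the spike's time position
# is invisible in the magnitude spectrum.
dft_mags = [abs(sum(spike[t] * cmath.exp(-2j * math.pi * k * t / N)
                    for t in range(N))) for k in range(N)]

def haar_dwt(x):
    """One level of the orthonormal Haar DWT."""
    s2 = math.sqrt(2.0)
    return ([(x[2*i] + x[2*i+1]) / s2 for i in range(len(x)//2)],
            [(x[2*i] - x[2*i+1]) / s2 for i in range(len(x)//2)])

# Full Haar decomposition: collect detail coefficients at every level.
coeffs, a = [], spike
while len(a) > 1:
    a, d = haar_dwt(a)
    coeffs.extend(d)
coeffs.extend(a)

nonzero = sum(1 for c in coeffs if abs(c) > 1e-12)
```

The compact support of the wavelet is exactly what confines the spike's energy to one coefficient per level.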

  8. Wavelet Analysis for Wind Fields Estimation

    PubMed Central

    Leite, Gladeston C.; Ushizima, Daniela M.; Medeiros, Fátima N. S.; de Lima, Gilson G.

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches, by including à trous with B3 spline scaling function, in addition to other wavelet bases as Gabor and Mexican-hat. The purpose is to extract more reliable directional information, when wind speed values range from 5 to 10 ms−1. Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699

  9. Wavelet analysis for wind fields estimation.

    PubMed

    Leite, Gladeston C; Ushizima, Daniela M; Medeiros, Fátima N S; de Lima, Gilson G

    2010-01-01

    Wind field analysis from synthetic aperture radar images allows the estimation of wind direction and speed based on image descriptors. In this paper, we propose a framework to automate wind direction retrieval based on wavelet decomposition associated with spectral processing. We extend existing undecimated wavelet transform approaches, by including à trous with B(3) spline scaling function, in addition to other wavelet bases as Gabor and Mexican-hat. The purpose is to extract more reliable directional information, when wind speed values range from 5 to 10 ms(-1). Using C-band empirical models, associated with the estimated directional information, we calculate local wind speed values and compare our results with QuikSCAT scatterometer data. The proposed approach has potential application in the evaluation of oil spills and wind farms. PMID:22219699

  10. Characterization and simulation of gunfire with wavelets

    SciTech Connect

    Smallwood, D.O.

    1998-09-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The response of a structure to nearby firing of a high-firing-rate gun has been characterized in several ways as a nonstationary random process. The methods all used some form of the discrete Fourier transform. The current paper will explore a simpler method to describe the nonstationary random process in terms of a wavelet transform. As was done previously, the gunfire record is broken up into a sequence of transient waveforms, each representing the response to the firing of a single round. The wavelet transform is performed on each of these records. The mean and standard deviation of the resulting wavelet coefficients describe the composite characteristics of the entire waveform. It is shown that the distribution of the wavelet coefficients is approximately Gaussian with a nonzero mean and that the coefficients at different times and levels are approximately independent. The gunfire is simulated by generating realizations of records of a single-round firing by computing the inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously discussed gunfire record. The individual realizations are then assembled into a realization of a time history of many rounds firing. A second-order correction of the probability density function (pdf) is accomplished with a zero memory nonlinear (ZMNL) function. The method is straightforward, easy to implement, and produces a simulated record very much like the original measured gunfire record.
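The ensemble procedure described above — estimate per-coefficient mean and standard deviation, draw Gaussian coefficients, invert the transform, and concatenate rounds — can be sketched with a synthetic stand-in ensemble and a one-level Haar transform in place of the full wavelet transform (both are illustrative assumptions, and the ZMNL correction is omitted):

```python
import math, random

random.seed(3)
S2 = math.sqrt(2.0)

def haar_fwd(x):
    """One-level orthonormal Haar transform: (approx, detail) halves."""
    return ([(x[2*i] + x[2*i+1]) / S2 for i in range(len(x)//2)],
            [(x[2*i] - x[2*i+1]) / S2 for i in range(len(x)//2)])

def haar_inv(a, d):
    out = []
    for s, t in zip(a, d):
        out += [(s + t) / S2, (s - t) / S2]
    return out

# Synthetic ensemble of single-round transients (decaying oscillation
# with random amplitude), standing in for measured gunfire records.
N, ROUNDS = 32, 20
ensemble = [[math.exp(-t / 8.0) * math.sin(2 * math.pi * t / 6) *
             random.gauss(1.0, 0.2) for t in range(N)] for _ in range(ROUNDS)]

# Per-coefficient mean and standard deviation across the ensemble.
coeffs = [af + df for af, df in (haar_fwd(r) for r in ensemble)]
mean = [sum(c[j] for c in coeffs) / ROUNDS for j in range(N)]
std = [math.sqrt(sum((c[j] - mean[j]) ** 2 for c in coeffs) / (ROUNDS - 1))
       for j in range(N)]

# Simulate a burst: draw Gaussian coefficients with the estimated
# statistics, invert the transform, and concatenate the rounds.
burst = []
for _ in range(5):
    synth = [random.gauss(mu, s) for mu, s in zip(mean, std)]
    burst += haar_inv(synth[:N//2], synth[N//2:])
```

Each simulated round shares the ensemble's coefficient statistics while remaining an independent realization.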

  11. A Compression Algorithm in Wireless Sensor Networks of Bearing Monitoring

    NASA Astrophysics Data System (ADS)

    Bin, Zheng; Qingfeng, Meng; Nan, Wang; Zhi, Li

    2011-07-01

    The energy consumption of wireless sensor networks (WSNs) is always an important problem in the application of wireless sensor networks. This paper proposes a data compression algorithm to reduce the amount of data and energy consumption during the data transmission process in the on-line WSNs-based bearing monitoring system. The proposed compression algorithm is based on lifting wavelets, Zerotree coding and Huffman coding. Among these, the 5/3 lifting wavelet is used for dividing data into different frequency bands to extract signal characteristics. Zerotree coding is applied to calculate the dynamic thresholds to retain the attribute data. The attribute data are then encoded by Huffman coding to further enhance the compression ratio. In order to validate the algorithm, simulation is carried out by using Matlab. The simulation results show that the proposed algorithm is very suitable for the compression of bearing monitoring data. The algorithm has been successfully used in an online WSNs-based bearing monitoring system, in which a TI TMS320F2812 DSP is used to realize the algorithm.
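A one-level version of the reversible LeGall 5/3 lifting transform used above (the integer lifting filter also found in JPEG 2000 lossless mode) can be sketched as follows; the boundary handling shown (simple symmetric extension) is one common choice, not necessarily the paper's:

```python
def lift53_forward(x):
    """One level of the reversible LeGall 5/3 integer lifting transform;
    len(x) must be even. Returns the low-pass (approximation) and
    high-pass (detail) halves."""
    n = len(x)
    assert n % 2 == 0
    h = n // 2
    d = []
    for k in range(h):                                # predict odd samples
        right = x[2*k+2] if 2*k+2 < n else x[n-2]     # symmetric extension
        d.append(x[2*k+1] - (x[2*k] + right) // 2)
    s = []
    for k in range(h):                                # update even samples
        dl = d[k-1] if k > 0 else d[0]                # symmetric extension
        s.append(x[2*k] + (dl + d[k] + 2) // 4)
    return s, d

def lift53_inverse(s, d):
    """Exact integer inverse: undo the lifting steps in reverse order."""
    h = len(s)
    x = [0] * (2 * h)
    for k in range(h):
        dl = d[k-1] if k > 0 else d[0]
        x[2*k] = s[k] - (dl + d[k] + 2) // 4
    for k in range(h):
        right = x[2*k+2] if 2*k+2 < 2*h else x[2*h-2]
        x[2*k+1] = d[k] + (x[2*k] + right) // 2
    return x

sig = [5, 2, 8, -3, 7, 7, 0, 4, 9, -2, 3, 3, 6, 1, 2, 0]
rec = lift53_inverse(*lift53_forward(sig))
ramp_s, ramp_d = lift53_forward(list(range(16)))      # smooth input
```

Because the lifting steps are exactly invertible on integers, the transform is bit-exact reversible, and smooth data (the ramp) produces near-zero details for the entropy coder to exploit.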

  12. Analysis of wavelet technology for NASA applications

    NASA Technical Reports Server (NTRS)

    Wells, R. O., Jr.

    1994-01-01

    The purpose of this grant was to introduce a broad group of NASA researchers and administrators to wavelet technology and to determine its future role in research and development at NASA JSC. The activities of several briefings held between NASA JSC scientists and Rice University researchers are discussed. An attached paper, 'Recent Advances in Wavelet Technology', summarizes some aspects of these briefings. Two proposals submitted to NASA reflect the primary areas of common interest. They are image analysis and numerical solutions of partial differential equations arising in computational fluid dynamics and structural mechanics.

  13. Numerical Algorithms Based on Biorthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Ponenti, Pj.; Liandrat, J.

    1996-01-01

    Wavelet bases are used to generate spaces of approximation for the resolution of bidimensional elliptic and parabolic problems. Under some specific hypotheses relating the properties of the wavelets to the order of the involved operators, it is shown that an approximate solution can be built. This approximation is then stable and converges towards the exact solution. It is designed such that fast algorithms involving biorthogonal multiresolution analyses can be used to resolve the corresponding numerical problems. Detailed algorithms are provided as well as the results of numerical tests on partial differential equations defined on the bidimensional torus.

  14. Wavelet analysis applied to the IRAS cirrus

    NASA Technical Reports Server (NTRS)

    Langer, William D.; Wilson, Robert W.; Anderson, Charles H.

    1994-01-01

    The structure of infrared cirrus clouds is analyzed with Laplacian pyramid transforms, a form of non-orthogonal wavelets. Pyramid and wavelet transforms provide a means to decompose images into their spatial frequency components such that all spatial scales are treated in an equivalent manner. The multiscale transform analysis is applied to IRAS 100 micrometer maps of cirrus emission in the north Galactic pole region to extract features on different scales. In the maps we identify filaments, fragments and clumps by separating all connected regions. These structures are analyzed with respect to their Hausdorff dimension for evidence of the scaling relationships in the cirrus clouds.

  15. Wavelet-based detection of transients in biological signals

    NASA Astrophysics Data System (ADS)

    Mzaik, Tahsin; Jagadeesh, Jogikal M.

    1994-10-01

    This paper presents two multiresolution algorithms for detection and separation of mixed signals using the wavelet transform. The first algorithm allows one to design a mother wavelet and its associated wavelet grid that guarantees the separation of signal components if information about the expected minimum signal time and frequency separation of the individual components is known. The second algorithm expands this idea to design two mother wavelets which are then combined to achieve the required separation otherwise impossible with a single wavelet. Potential applications include many biological signals such as ECG, EKG, and retinal signals.

  16. EEG analysis using wavelet-based information tools.

    PubMed

    Rosso, O A; Martin, M T; Figliola, A; Keller, K; Plastino, A

    2006-06-15

    Wavelet-based informational tools for quantitative electroencephalogram (EEG) record analysis are reviewed. Relative wavelet energies, wavelet entropies and wavelet statistical complexities are used in the characterization of scalp EEG records corresponding to secondary generalized tonic-clonic epileptic seizures. In particular, we show that the epileptic recruitment rhythm observed during seizure development is well described in terms of the relative wavelet energies. In addition, during the concomitant time-period the entropy diminishes while complexity grows. This is construed as evidence supporting the conjecture that an epileptic focus, for this kind of seizures, triggers a self-organized brain state characterized by both order and maximal complexity.

  17. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
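The wavelet denoising step such a system performs is commonly realized by soft-thresholding the detail coefficients. A minimal single-level Haar sketch (the threshold, test signal, and noise level are illustrative assumptions, and the patent's parallel partitioning is omitted):

```python
import math, random

S2 = math.sqrt(2.0)

def haar_fwd(x):
    """One-level orthonormal Haar transform: (approx, detail) halves."""
    return ([(x[2*i] + x[2*i+1]) / S2 for i in range(len(x)//2)],
            [(x[2*i] - x[2*i+1]) / S2 for i in range(len(x)//2)])

def haar_inv(a, d):
    out = []
    for s, t in zip(a, d):
        out += [(s + t) / S2, (s - t) / S2]
    return out

def soft(c, t):
    """Soft threshold: shrink toward zero, zeroing anything below t."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def denoise(x, thresh):
    a, d = haar_fwd(x)
    return haar_inv(a, [soft(c, thresh) for c in d])

random.seed(1)
clean = [1.0 if 16 <= i < 48 else 0.0 for i in range(64)]      # step signal
noisy = [c + random.gauss(0.0, 0.1) for c in clean]
out = denoise(noisy, 0.3)

err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_out = sum((a - b) ** 2 for a, b in zip(out, clean))
```

Because the Haar transform is orthonormal and the clean step's detail coefficients are zero here, shrinking the details toward zero can only reduce the squared error relative to the clean signal.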

  18. Turbulence in Compressible Flows

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.

  19. Best tree wavelet packet transform based copyright protection scheme for digital images

    NASA Astrophysics Data System (ADS)

    Rawat, Sanjay; Raman, Balasubramanian

    2012-05-01

    In this paper, a dual watermarking scheme based on discrete wavelet transform (DWT), wavelet packet transform (WPT) with best tree, and singular value decomposition (SVD) is proposed. In our algorithm, the cover image is sub-sampled into four sub-images and then the two sub-images having the highest sum of singular values are selected. Two different grayscale images are embedded in the selected sub-images. For embedding the first watermark, one of the selected sub-images is decomposed via WPT. An entropy-based algorithm is adopted to find the best tree of the WPT. The watermark is embedded in all frequency sub-bands of the best tree. For embedding the second watermark, an l-level DWT is performed on the second selected sub-image. The watermark is embedded by modifying the singular values of the transformed image. To enhance the security of the scheme, a Zig-Zag scan is applied to the second watermark before embedding. The robustness of the proposed scheme is demonstrated through a series of attack simulations. Experimental results demonstrate that the proposed scheme has good perceptual invisibility and is also robust against various image processing operations, geometric attacks and JPEG compression.

  20. Understanding wavelet analysis and filters for engineering applications

    NASA Astrophysics Data System (ADS)

    Parameswariah, Chethan Bangalore

    Wavelets are signal-processing tools that have been of interest due to their characteristics and properties. A clear understanding of wavelets and their properties is key to successful applications. Many theoretical and application-oriented papers have been written. Yet the choice of the right wavelet for a given application is an ongoing quest that has not been satisfactorily answered. This research has successfully identified certain issues, and an effort has been made to provide an understanding of wavelets by studying the wavelet filters in terms of their pole-zero and magnitude-phase characteristics. The magnitude characteristics of these filters have flat responses in both the pass band and stop band. The phase characteristics are almost linear. It is interesting to observe that some wavelets have exactly the same magnitude characteristics but their phase responses vary in their linear slopes. An application of wavelets for fast detection of the fault current in a transformer, distinguishing it from the inrush current, clearly shows the advantages of the lower slope and fewer coefficients of the Daubechies D4 wavelet over D20. This research has been published in the IEEE Transactions on Power Systems and is also proposed as an innovative method for protective relaying techniques. For detecting the frequency composition of the signal being analyzed, an understanding of the energy distribution in the output wavelet decompositions is presented for different wavelet families. The wavelets with fewer coefficients in their filters have more energy leakage into adjacent bands. The frequency bandwidth characteristics display flatness in the middle of the pass band, confirming that the frequency of interest should be in the middle of the frequency band when performing a wavelet transform. Symlets exhibit good flatness with minimum ripple, but the transition regions do not have a sharper cut-off. The number of wavelet levels and their frequency ranges are dependent on the two

  1. Information retrieval system utilizing wavelet transform

    DOEpatents

    Brewster, Mary E.; Miller, Nancy E.

    2000-01-01

    A method for automatically partitioning an unstructured electronically formatted natural language document into its sub-topic structure. Specifically, the document is converted to an electronic signal and a wavelet transform is then performed on the signal. The resultant signal may then be used to graphically display and interact with the sub-topic structure of the document.

  2. Nonlinear adaptive wavelet analysis of electrocardiogram signals

    NASA Astrophysics Data System (ADS)

    Yang, H.; Bukkapatnam, S. T.; Komanduri, R.

    2007-08-01

    Wavelet representation can provide an effective time-frequency analysis for nonstationary signals, such as the electrocardiogram (EKG) signals, which contain both steady and transient parts. In recent years, wavelet representation has been emerging as a powerful time-frequency tool for the analysis and measurement of EKG signals. The EKG signals contain recurring, near-periodic patterns of P, QRS, T, and U waveforms, each of which can have multiple manifestations. Identification and extraction of a compact set of features from these patterns is critical for effective detection and diagnosis of various disorders. This paper presents an approach to extract a fiducial pattern of EKG based on the consideration of the underlying nonlinear dynamics. The pattern, in a nutshell, is a combination of eigenfunctions of the ensembles created from a Poincare section of EKG dynamics. The adaptation of wavelet functions to the fiducial pattern thus extracted yields two orders of magnitude (some 95%) more compact representation (measured in terms of Shannon signal entropy). Such a compact representation can facilitate the extraction of features that are less sensitive to extraneous noise and other variations. The adaptive wavelet can also lead to more efficient algorithms for beat detection and QRS cancellation as well as for the extraction of multiple classical EKG signal events, such as widths of QRS complexes and QT intervals.

  3. Parallel adaptive wavelet collocation method for PDEs

    SciTech Connect

    Nejadmalayeri, Alireza; Vezolainen, Alexei; Brown-Dymkoski, Eric; Vasilyev, Oleg V.

    2015-10-01

    A parallel adaptive wavelet collocation method for solving a large class of Partial Differential Equations is presented. The parallelization is achieved by developing an asynchronous parallel wavelet transform, which allows one to perform parallel wavelet transform and derivative calculations with only one data synchronization at the highest level of resolution. The data are stored using a tree-like structure with tree roots starting at an a priori defined level of resolution. Both static and dynamic domain partitioning approaches are developed. For the dynamic domain partitioning, trees are considered to be the minimum quanta of data to be migrated between the processes. This allows fully automated and efficient handling of non-simply connected partitioning of a computational domain. Dynamic load balancing is achieved via domain repartitioning during the grid adaptation step and reassigning trees to the appropriate processes to ensure approximately the same number of grid points on each process. The parallel efficiency of the approach is discussed based on parallel adaptive wavelet-based Coherent Vortex Simulations of homogeneous turbulence with linear forcing at effective non-adaptive resolutions up to 2048³ using as many as 2048 CPU cores.

  4. Characterization and Simulation of Gunfire with Wavelets

    DOE PAGES

    Smallwood, David O.

    1999-01-01

    Gunfire is used as an example to show how the wavelet transform can be used to characterize and simulate nonstationary random events when an ensemble of events is available. The structural response to nearby firing of a high-firing rate gun has been characterized in several ways as a nonstationary random process. The current paper will explore a method to describe the nonstationary random process using a wavelet transform. The gunfire record is broken up into a sequence of transient waveforms each representing the response to the firing of a single round. A wavelet transform is performed on each of these records. The gunfire is simulated by generating realizations of records of a single-round firing by computing an inverse wavelet transform from Gaussian random coefficients with the same mean and standard deviation as those estimated from the previously analyzed gunfire record. The individual records are assembled into a realization of many rounds firing. A second-order correction of the probability density function is accomplished with a zero memory nonlinear function. The method is straightforward, easy to implement, and produces a simulated record much like the measured gunfire record.

  5. Cosmic Ray elimination using the Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Orozco-Aguilera, M. T.; Cruz, J.; Altamirano, L.; Serrano, A.

    2009-11-01

    In this work, we present a method for the automatic cosmic ray elimination in a single CCD exposure using the Wavelet Transform. The proposed method can eliminate cosmic rays of any shape or size. With this method we can eliminate over 95% of cosmic rays in a spectral image.

  6. Climate wavelet spectrum estimation under chronology uncertainties

    NASA Astrophysics Data System (ADS)

    Lenoir, G.; Crucifix, M.

    2012-04-01

    Several approaches to estimate the chronology of palaeoclimate records exist in the literature: simple interpolation between the tie points, orbital tuning, alignment on other data... These techniques generate a single estimate of the chronology. More recently, statistical generators of chronologies have appeared (e.g. OXCAL, BCHRON) allowing the construction of thousands of chronologies given the tie points and their uncertainties. These techniques are based on advanced statistical methods. They allow one to take into account the uncertainty of the timing of each climatic event recorded into the core. On the other hand, when interpreting the data, scientists often rely on time series analysis, and especially on spectral analysis. Given that paleo-data are composed of a large spectrum of frequencies, are non-stationary and are highly noisy, the continuous wavelet transform turns out to be a suitable tool to analyse them. The wavelet periodogram, in particular, is helpful to interpret visually the time-frequency behaviour of the data. Here, we combine statistical methods to generate chronologies with the power of continuous wavelet transform. Some interesting applications then come up: comparison of time-frequency patterns between two proxies (extracted from different cores), between a proxy and a statistical dynamical model, and statistical estimation of phase-lag between two filtered signals. All these applications consider explicitly the uncertainty in the chronology. The poster presents mathematical developments on the wavelet spectrum estimation under chronology uncertainties as well as some applications to Quaternary data based on marine and ice cores.

  7. Implementation of compressed sensing in telecardiology sensor networks.

    PubMed

    Correia Pinheiro, Eduardo; Postolache, Octavian Adrian; Silva Girão, Pedro

    2010-01-01

    Mobile solutions for patient cardiac monitoring are viewed with growing interest, and improvements on current implementations are frequently reported, with wireless, and in particular, wearable devices promising to achieve ubiquity. However, due to unavoidable power consumption limitations, the amount of data acquired, processed, and transmitted needs to be diminished, which is counterproductive regarding the quality of the information produced. Implementing compressed sensing in wireless sensor networks (WSNs) promises power savings for the devices with only a minor impact on signal quality. Several cardiac signals have a sparse representation in some wavelet transformations. The compressed sensing paradigm states that signals can be recovered from a few projections onto another basis, incoherent with the first. This paper evaluates the impact of the compressed sensing paradigm in a cardiac monitoring WSN, discussing the implications for data reliability and energy management, and the improvements accomplished by in-network processing. PMID:20885973

  8. Multimode waveguide speckle patterns for compressive sensing.

    PubMed

    Valley, George C; Sefler, George A; Justin Shaw, T

    2016-06-01

    Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performances with smaller size, weight, and power than electronic CS or conventional Nyquist rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS. PMID:27244406

  9. Multimode waveguide speckle patterns for compressive sensing.

    PubMed

    Valley, George C; Sefler, George A; Justin Shaw, T

    2016-06-01

    Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performances with smaller size, weight, and power than electronic CS or conventional Nyquist rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.

  10. Robust facial expression recognition via compressive sensing.

    PubMed

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., raw pixels, the Gabor wavelet representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS) classifiers, experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion in facial expression recognition tasks.

  11. Image compression using the W-transform

    SciTech Connect

    Reynolds, W.D. Jr.

    1995-12-31

    The authors present the W-transform for multiresolution signal decomposition. One difference between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to a power of two, nor does it call for extension of the signal; thus, the W-transform is a convenient tool for image compression. The authors present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  12. Wavelet-enabled progressive data Access and Storage Protocol (WASP)

    NASA Astrophysics Data System (ADS)

    Clyne, J.; Frank, L.; Lesperance, T.; Norton, A.

    2015-12-01

    Current practices for storing numerical simulation outputs hail from an era when the disparity between compute and I/O performance was not as great as it is today. The memory contents for every sample, computed at every grid point location, are simply saved at some prescribed temporal frequency. Though straightforward, this approach fails to take advantage of the coherency in neighboring grid points that invariably exists in numerical solutions to mathematical models. Exploiting such coherence is essential to digital multimedia; DVD-Video, digital cameras, streaming movies and audio are all possible today because of transform-based compression schemes that make substantial reductions in data possible by taking advantage of the strong correlation between adjacent samples in both space and time. Such methods can also be exploited to enable progressive data refinement in a manner akin to that used in ubiquitous digital mapping applications: views from far away are shown in coarsened detail to provide context, and can be progressively refined as the user zooms in on a localized region of interest. The NSF-funded WASP project aims to provide a common, NetCDF-compatible software framework for supporting wavelet-based, multi-scale, progressive data access, enabling interactive exploration of large data sets for the geoscience communities. This presentation will provide an overview of this work in progress to develop community cyber-infrastructure for the efficient analysis of very large data sets.
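The progressive-refinement idea described in this abstract can be sketched with a one-dimensional orthonormal Haar transform. This is a minimal stand-in, not WASP's actual scheme; the power-of-two length and the function names are assumptions of the sketch. Coarse views come from the approximation coefficients alone, and detail bands are added as the user "zooms in".

```python
import math

S = 1.0 / math.sqrt(2.0)

def haar_forward(x):
    """Full orthonormal Haar decomposition; returns (approx, [finest..coarsest])."""
    coeffs, approx = [], list(x)
    while len(approx) > 1:
        detail = [(a - b) * S for a, b in zip(approx[0::2], approx[1::2])]
        approx = [(a + b) * S for a, b in zip(approx[0::2], approx[1::2])]
        coeffs.append(detail)
    return approx, coeffs

def haar_reconstruct(approx, coeffs, levels):
    """Invert, keeping only the `levels` coarsest detail bands (rest zeroed)."""
    x = list(approx)
    for i, detail in enumerate(reversed(coeffs)):
        x_next = []
        for a, d in zip(x, detail if i < levels else [0.0] * len(detail)):
            x_next.append((a + d) * S)
            x_next.append((a - d) * S)
        x = x_next
    return x

field = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
approx, coeffs = haar_forward(field)
coarse = haar_reconstruct(approx, coeffs, 0)          # context view: block means
full = haar_reconstruct(approx, coeffs, len(coeffs))  # fully refined view
```

Storing `approx` plus the detail bands in coarse-to-fine order is what allows a reader to stop early for a context view and keep reading for full fidelity.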

  13. A symmetrical image encryption scheme in wavelet and time domain

    NASA Astrophysics Data System (ADS)

    Luo, Yuling; Du, Minghui; Liu, Junxiu

    2015-02-01

    There has been increasing concern over effective storage and secure transmission of multimedia information on the Internet. A great variety of encryption schemes have been proposed to ensure information security during transmission, but most current approaches diffuse the data only in the spatial domain, which reduces storage efficiency. A lightweight image encryption strategy based on chaos is proposed in this paper. The encryption process is designed in the transform domain. The original image is decomposed into approximation and detail components using the integer wavelet transform (IWT); then, as the more important component of the image, the approximation coefficients are diffused by secret keys generated from a spatiotemporal chaotic system, followed by an inverse IWT to construct the diffused image; finally, a plain permutation is performed on the diffused image using the Logistic map in order to further reduce the correlation between adjacent pixels. Experimental results and performance analysis demonstrate that the proposed scheme is an efficient, secure and robust encryption mechanism that realizes effective coding compression to satisfy storage requirements.

  14. Fingerprint data acquisition, desmearing, wavelet feature extraction, and identification

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.; Hsu, Charles C.; Garcia, Joseph P.; Telfer, Brian A.

    1995-04-01

    In this paper, we present (1) a design concept of a fingerprint scanning system that can reject severely blurred inputs for retakes and then de-smear the less blurred prints. The de-smear algorithm is new and is based on the digital filter theory of lossless QMF (quadrature mirror filter) subband coding. We then present (2) a new fingerprint minutia feature extraction methodology which uses a 2D STAR mother wavelet that can efficiently locate the fork feature anywhere on the fingerprint in parallel, independent of its scale, shift, and rotation. Such a combined system can achieve high data compression, allowing prints to be sent through a binary facsimile machine that, when combined with a tabletop computer, can support automatic fingerprint identification systems (AFIS) using today's technology in the office environment. An interim recommendation for the National Crime Information Center is given about how to reduce the crime rate by an upgrade of today's police office technology in the light of military expertise in ATR.

  15. Feature selection using Haar wavelet power spectrum

    PubMed Central

    Subramani, Prabakaran; Sahu, Rajendra; Verma, Shekhar

    2006-01-01

    Background Feature selection is an approach to overcome the 'curse of dimensionality' in complex research such as disease classification using microarrays. Statistical methods are used most in this domain, but most of them do not fit a wide range of datasets. The transform-oriented signal processing domains have not been probed much, even though fields like image and video processing utilize them well. Wavelets, one such technique, have the potential to be utilized in feature selection methods. The aim of this paper is to assess the capability of the Haar wavelet power spectrum in the problem of clustering and gene selection based on expression data in the context of disease classification, and to propose a method based on the Haar wavelet power spectrum. Results Haar wavelet power spectra of genes were analysed and observed to differ between diagnostic categories. This difference in the trend and magnitude of the spectrum may be utilized in gene selection. Most of the genes selected by earlier, more complex methods were also selected by the very simple present method. Earlier works proved that only a few genes are enough to approach the classification problem [1]. Hence the present method may be tried in conjunction with other classification methods. The technique was applied without removing noise from the data to validate the robustness of the method against noise or outliers. No special software or complex implementation is needed. The quality of the genes selected by the present method was analysed through their gene expression data. Most of them were observed to be relevant to the classification issue, since they were dominant in the diagnostic category of the dataset for which they were selected as features. Conclusion In the present paper, the problem of feature selection for microarray gene expression data was considered. We analyzed the wavelet power spectrum of genes and proposed a clustering and feature selection method useful for
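As a concrete illustration of the idea (a minimal sketch; the 8-sample "profiles" below are invented toy data, not microarray measurements), a Haar wavelet power spectrum can be computed as the detail-coefficient energy at each decomposition level, and profiles with different trends produce visibly different spectra:

```python
import math

def haar_power_spectrum(x):
    """Energy of Haar detail coefficients at each level (finest to coarsest)."""
    s = 1.0 / math.sqrt(2.0)
    approx, spectrum = list(x), []
    while len(approx) > 1:
        detail = [(a - b) * s for a, b in zip(approx[0::2], approx[1::2])]
        approx = [(a + b) * s for a, b in zip(approx[0::2], approx[1::2])]
        spectrum.append(sum(d * d for d in detail))
    return spectrum

flat = haar_power_spectrum([1.0] * 8)        # constant profile: no variation
spiky = haar_power_spectrum([1.0, 5.0] * 4)  # fast alternation
# `flat` has zero energy at every level; `spiky` concentrates all of its
# variation energy at the finest level.
```

The differing trend and magnitude of such spectra across diagnostic categories is what the paper exploits for gene selection.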

  16. Optimized satellite image compression and reconstruction via evolution strategies

    NASA Astrophysics Data System (ADS)

    Babb, Brendan; Moore, Frank; Peterson, Michael

    2009-05-01

    This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
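The paired percentage/decibel figures in this abstract are consistent with the standard conversion dB = 10 * log10(MSE_before / MSE_after). A quick check, for illustration only:

```python
import math

def mse_reduction_to_db(r):
    """A fractional MSE reduction r corresponds to a 10*log10(1/(1-r)) dB gain."""
    return 10.0 * math.log10(1.0 / (1.0 - r))

# (reduction, dB) pairs quoted in the abstract:
quoted = [(0.3378, 1.79), (0.4988, 3.00), (0.4235, 2.39)]
checks = [mse_reduction_to_db(r) for r, _ in quoted]
```

All three computed values agree with the quoted dB figures to within rounding.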

  17. Microbunching and RF Compression

    SciTech Connect

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-05-23

    Velocity bunching (or RF compression) represents a promising technique, complementary to magnetic compression, for achieving the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations, which together represent a useful tool in the evaluation of compression schemes for FEL sources.

  18. Compressed gas manifold

    SciTech Connect

    Hildebrand, Richard J.; Wozniak, John J.

    2001-01-01

    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  19. Wavelet analysis and applications to some dynamical systems

    NASA Astrophysics Data System (ADS)

    Bendjoya, Ph.; Slezak, E.

    1993-05-01

    The main properties of the wavelet transform as a new time-frequency method which is particularly well suited for detecting and localizing discontinuities and scaling behavior in signals are reviewed. Particular attention is given to first applications of the wavelet transform to dynamical systems including solution of partial differential equations, fractal and turbulence characterization, and asteroid family determination from cluster analysis. Advantages of the wavelet transform over classical analysis methods are summarized.

  20. Compressible turbulent mixing: Effects of compressibility

    NASA Astrophysics Data System (ADS)

    Ni, Qionglin

    2016-04-01

    We studied by numerical simulations the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity. The driving forcings differed in the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^(-5/3) scaling and suffered negligible influence from the compressibility. The growth of the Mach number showed (1) a first reduction and subsequent enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) a first stronger and subsequently weaker intermittency of the scalar relative to that of the velocity; and (4) an increase in the intermittency parameter, which measures the intermittency of the scalar in the dissipative range. Furthermore, the growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. The visualization of scalar dissipation showed that, in the solenoidal-forced flow, the field was filled with small-scale, highly convoluted structures, while in the compressive-forced flow, the field exhibited regions dominated by the large-scale motions of rarefaction and compression.

  1. Variability of Solar Irradiances Using Wavelet Analysis

    NASA Technical Reports Server (NTRS)

    Pesnell, William D.

    2007-01-01

    We have used wavelets to analyze the sunspot number, F10.7 (the solar irradiance at a wavelength of approx. 10.7 cm), and Ap (a geomagnetic activity index). Three different wavelets are compared, showing how each selects either temporal or scale resolution. Our goal is an envelope of solar activity that better bounds the large-amplitude fluctuations from solar minimum to maximum. We show that the 11-year cycle does not disappear at solar minimum; minimum is simply the other part of the solar cycle. Power in the fluctuations of solar-activity-related indices may peak during solar maximum, but the solar cycle itself is always present. The Ap index has a peak after solar maximum that appears to be better correlated with the current solar cycle than with the following cycle.

  2. Wavelets for full reconfigurable ECG acquisition system

    NASA Astrophysics Data System (ADS)

    Morales, D. P.; García, A.; Castillo, E.; Meyer-Baese, U.; Palma, A. J.

    2011-06-01

    This paper presents the use of wavelet cores for a fully reconfigurable electrocardiogram (ECG) acquisition system. The system is composed of two reconfigurable devices, an FPGA and an FPAA. The FPAA is in charge of ECG signal acquisition, since this device is a versatile and reconfigurable analog front-end for biosignals. The FPGA is in charge of FPAA configuration, digital signal processing, and information extraction such as heart beat rate. Wavelet analysis has become a powerful tool for ECG signal processing since it fits the ECG signal shape well. The use of these cores has been integrated into the LabVIEW FPGA module development tool, which makes it possible to employ VHDL cores within the usual LabVIEW graphical programming environment, thus freeing the designer from tedious and time-consuming design of communication interfaces. This enables rapid testing and graphical representation of results.

  3. Wavelet packet entropy for heart murmurs classification.

    PubMed

    Safara, Fatemeh; Doraisamy, Shyamala; Azman, Azreen; Jantan, Azrul; Ranga, Sri

    2012-01-01

    Heart murmurs are the first signs of cardiac valve disorders. Several studies have been conducted in recent years to automatically differentiate normal heart sounds from heart sounds with murmurs using various types of audio features. Entropy has been used successfully as a feature to distinguish different heart sounds. In this paper, a new entropy measure, previously introduced to analyze mammograms, is applied to heart sounds, and the feasibility of using it to classify five types of heart sounds and murmurs is shown. Four common murmurs were considered: aortic regurgitation, mitral regurgitation, aortic stenosis, and mitral stenosis. The wavelet packet transform was employed for heart sound analysis, and the entropy was calculated to derive feature vectors. Five classification experiments were performed to evaluate the discriminatory power of the generated features. The best results were achieved by BayesNet, with 96.94% accuracy. The promising results substantiate the effectiveness of the proposed wavelet packet entropy for heart sound classification.
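The pipeline described here, wavelet packet coefficients followed by an entropy feature, can be sketched in a few lines. This is a toy Haar packet version with a synthetic signal; the paper's actual entropy measure, borrowed from mammogram analysis, is not reproduced here.

```python
import math

def haar_packet(x):
    """Full Haar wavelet packet decomposition: recurse on both subbands."""
    if len(x) == 1:
        return list(x)
    s = 1.0 / math.sqrt(2.0)
    lo = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    hi = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return haar_packet(lo) + haar_packet(hi)

def shannon_entropy(coeffs):
    """Shannon entropy (bits) of the normalized coefficient energies."""
    energies = [c * c for c in coeffs]
    total = sum(energies)
    return -sum((e / total) * math.log2(e / total) for e in energies if e > 0.0)

tone = [math.sin(2.0 * math.pi * k / 8.0) for k in range(16)]  # periodic "sound"
feature = shannon_entropy(haar_packet(tone))  # one entry of a feature vector
```

Because the Haar packet transform is orthonormal, it preserves signal energy, so the entropy acts on a true energy distribution over subbands.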

  4. Wavelets and their applications past and future

    NASA Astrophysics Data System (ADS)

    Coifman, Ronald R.

    2009-04-01

    As this is a conference on mathematical tools for defense, I would like to dedicate this talk to the memory of Louis Auslander, who, through his insights and visionary leadership, brought powerful new mathematics into DARPA and provided the main impetus for the development and insertion of wavelet-based processing in defense. My goal here is to describe the evolution of a stream of ideas in Harmonic Analysis, ideas which in the past have mostly been applied for the analysis and extraction of information from physical data, and which now are increasingly applied to organize and extract information and knowledge from any set of digital documents, from text to music to questionnaires. This form of signal processing on digital data is part of the future of wavelet analysis.

  5. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D; Lanier, R

    2007-10-29

    The investigation of wavelet analysis techniques as a means of filtering the gross-count signal obtained from radiation detectors has shown promise. These signals are contaminated with high frequency statistical noise and significantly varying background radiation levels. Wavelet transforms allow a signal to be split into its constituent frequency components without losing relative timing information. Initial simulations and an injection study have been performed. Additionally, acquisition and analysis software has been written which allowed the technique to be evaluated in real-time under more realistic operating conditions. The technique performed well when compared to more traditional triggering techniques with its performance primarily limited by false alarms due to prominent features in the signal. An initial investigation into the potential rejection and classification of these false alarms has also shown promise.

  6. Wavelet analysis of the impedance cardiogram waveforms

    NASA Astrophysics Data System (ADS)

    Podtaev, S.; Stepanov, R.; Dumler, A.; Chugainov, S.; Tziberkin, K.

    2012-12-01

    Impedance cardiography has been used for diagnosing atrial and ventricular dysfunctions, valve disorders, aortic stenosis, and vascular diseases. Almost all applications of impedance cardiography require determination of some of the characteristic points of the ICG waveform. The ICG waveform has a set of characteristic points known as A, B, E ((dZ/dt)max), X, Y, O and Z, which are related to distinct physiological events in the cardiac cycle. The objective of this work is the validation of a new method for processing and interpreting impedance cardiogram waveforms using wavelet analysis. A method of computer thoracic tetrapolar polyrheocardiography is used for hemodynamic registration. The use of an original wavelet differentiation algorithm makes it possible to combine filtering with calculation of the derivatives of the rheocardiogram. The proposed approach can be used in clinical practice for early diagnosis of cardiovascular system remodelling in the course of different pathologies.

  7. Orthogonal wavelet moments and their multifractal invariants

    NASA Astrophysics Data System (ADS)

    Uchaev, Dm. V.; Uchaev, D. V.; Malinnikov, V. A.

    2015-02-01

    This paper introduces a new family of moments, namely orthogonal wavelet moments (OWMs), which are an orthogonal realization of wavelet moments (WMs). In contrast to WMs, whose kernel functions are nonorthogonal, these moments can be used for multiresolution image representation and image reconstruction. The paper also introduces multifractal invariants (MIs) of OWMs, which can be used instead of OWMs. Reconstruction tests performed with noise-free and noisy images demonstrate that MIs of OWMs can also be used for image smoothing, sharpening and denoising. It is established that the reconstruction quality for MIs of OWMs can be better than that for the corresponding orthogonal moments (OMs), and reduces to the reconstruction quality of the OMs if the zero scale level is used.

  8. Propagating unstable wavelets in cardiac tissue

    NASA Astrophysics Data System (ADS)

    Boyle, Patrick M.; Madhavan, Adarsh; Reid, Matthew P.; Vigmond, Edward J.

    2012-01-01

    Solitonlike propagating modes have been proposed for excitable tissue, but have never been measured in cardiac tissue. In this study, we simulate an experimental protocol to elicit these propagating unstable wavelets (PUWs) in a detailed three-dimensional ventricular wedge preparation. PUWs appear as fixed-shape wavelets that propagate only in the direction of cardiac fibers, with conduction velocity approximately 40% slower than normal action potential excitation. We investigate their properties, demonstrating that PUWs are not true solitons. The range of stimuli for which PUWs were elicited was very narrow (several orders of magnitude lower than the stimulus strength itself), but increased with reduced sodium conductance and reduced coupling in nonlongitudinal directions. We show that the phenomenon does not depend on the particular membrane representation used or the shape of the stimulating electrode.

  9. Development of wavelet analysis tools for turbulence

    NASA Technical Reports Server (NTRS)

    Bertelrud, A.; Erlebacher, G.; Dussouillez, PH.; Liandrat, M. P.; Liandrat, J.; Bailly, F. Moret; Tchamitchian, PH.

    1992-01-01

    Presented here is the general framework and the initial results of a joint effort to derive novel research tools and easy to use software to analyze and model turbulence and transition. Given here is a brief review of the issues, a summary of some basic properties of wavelets, and preliminary results. Technical aspects of the implementation, the physical conclusions reached at this time, and current developments are discussed.

  10. Multiscale peak detection in wavelet space.

    PubMed

    Zhang, Zhi-Min; Tong, Xia; Peng, Ying; Ma, Pan; Zhang, Ming-Jin; Lu, Hong-Mei; Chen, Xiao-Qing; Liang, Yi-Zeng

    2015-12-01

    Accurate peak detection is essential for analyzing high-throughput datasets generated by analytical instruments. Derivatives with noise reduction and matched filtration are frequently used, but they are sensitive to baseline variations, random noise and deviations in the peak shape. A continuous wavelet transform (CWT)-based method is more practical and popular in this situation, as it can increase accuracy and reliability by identifying peaks across scales in wavelet space while implicitly removing noise as well as the baseline. However, its computational load is relatively high and the estimated features of peaks may not be accurate for peaks that are overlapping, dense or weak. In this study, we present multi-scale peak detection (MSPD), which takes full advantage of additional information in wavelet space including ridges, valleys, and zero-crossings. It achieves high accuracy by thresholding each detected peak with the maximum of its ridge. It has been comprehensively evaluated with MALDI-TOF spectra in proteomics, the CAMDA 2006 SELDI dataset, and the Romanian database of Raman spectra, and is particularly suitable for detecting peaks in high-throughput analytical signals. Receiver operating characteristic (ROC) curves show that MSPD can detect more true peaks while keeping the false discovery rate lower than the MassSpecWavelet and MALDIquant methods. Superior results on Raman spectra suggest that MSPD is a fairly universal method for peak detection. MSPD has been designed and implemented efficiently in Python and Cython. It is available as an open source package at .
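A stripped-down version of the across-scale idea (an illustration only; MSPD's actual ridge, valley and zero-crossing machinery is far richer): convolve the signal with a zero-mean Ricker wavelet at several scales and check that the detected location is stable across scales, which also suppresses a linear baseline.

```python
import math

def ricker(scale, length):
    """Zero-mean Ricker ('Mexican hat') wavelet sampled at integer offsets."""
    out = []
    for i in range(length):
        t = (i - length // 2) / scale
        out.append((1.0 - t * t) * math.exp(-0.5 * t * t))
    return out

def cwt_row(signal, scale):
    """Same-length response of the signal to a Ricker wavelet at one scale."""
    kern = ricker(scale, 8 * scale + 1)
    half = len(kern) // 2
    row = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kern):
            j = i + k - half
            if 0 <= j < len(signal):
                acc += signal[j] * w
        row.append(acc)
    return row

# Toy trace: one Gaussian peak at index 50 riding on a linear baseline.
trace = [math.exp(-(((i - 50) / 5.0) ** 2)) + 0.01 * i for i in range(100)]
# The response maximum stays put across scales: a crude one-peak "ridge".
locations = [max(range(100), key=cwt_row(trace, s).__getitem__) for s in (3, 5, 8)]
```

Peaks whose locations persist across a range of scales are kept; spurious noise maxima appear only at isolated scales and are discarded.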

  11. Wavelet features in motion data classification

    NASA Astrophysics Data System (ADS)

    Szczesna, Agnieszka; Świtoński, Adam; Słupik, Janusz; Josiński, Henryk; Wojciechowski, Konrad

    2016-06-01

    The paper deals with the problem of motion data classification based on the results of multiresolution analysis implemented in the form of a quaternion lifting scheme. The scheme operates directly on time series of rotations coded as a unit quaternion signal. New features derived from wavelet energy and entropy are proposed. To validate the approach, a gait database containing data from 30 different humans is used. The obtained results are satisfactory: the classification achieves over 91% accuracy.

  12. Correlation Filtering of Modal Dynamics using the Laplace Wavelet

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Lind, Rick; Brenner, Martin J.

    1997-01-01

    Wavelet analysis allows processing of transient response data commonly encountered in vibration health monitoring tasks such as aircraft flutter testing. The Laplace wavelet is formulated as the impulse response of a single-mode system, making it similar to data features commonly encountered in these health monitoring tasks. A correlation filtering approach is introduced using the Laplace wavelet to decompose a signal into impulse responses of single-mode subsystems. Applications using responses from flutter testing of aeroelastic systems demonstrate that modal parameters and stability estimates can be obtained by correlation filtering free-decay data with a set of Laplace wavelets.
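The correlation-filtering idea can be sketched as matching a free-decay record against a small dictionary of damped-sinusoid atoms. This is a minimal illustration, not the paper's formulation: the 5 Hz / 5% damping record, the parameter grid, and all names are assumptions of the sketch.

```python
import math

def laplace_atom(freq, zeta, n, dt):
    """Unit-norm impulse response of a single damped mode (a 'Laplace wavelet')."""
    wn = 2.0 * math.pi * freq
    wd = wn * math.sqrt(1.0 - zeta * zeta)
    atom = [math.exp(-zeta * wn * i * dt) * math.sin(wd * i * dt) for i in range(n)]
    norm = math.sqrt(sum(a * a for a in atom))
    return [a / norm for a in atom]

n, dt = 512, 0.01
# Synthetic free-decay record: a single 5 Hz mode with 5% damping.
record = [math.exp(-0.05 * 2.0 * math.pi * 5.0 * i * dt)
          * math.sin(2.0 * math.pi * 5.0 * math.sqrt(1.0 - 0.05 ** 2) * i * dt)
          for i in range(n)]

def correlation(freq, zeta):
    """Match between the record and one Laplace wavelet from the dictionary."""
    atom = laplace_atom(freq, zeta, n, dt)
    return abs(sum(a * b for a, b in zip(atom, record)))

grid = [(f, z) for f in (3.0, 4.0, 5.0, 6.0) for z in (0.01, 0.05, 0.10)]
best_freq, best_zeta = max(grid, key=lambda p: correlation(*p))
```

By the Cauchy-Schwarz inequality, the matched atom attains the largest correlation, so the grid search recovers the mode's frequency and damping.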

  13. Wavelet variance analysis for random fields on a regular lattice.

    PubMed

    Mondal, Debashis; Percival, Donald B

    2012-02-01

    There has been considerable recent interest in using wavelets to analyze time series and images that can be regarded as realizations of certain 1-D and 2-D stochastic processes on a regular lattice. Wavelets give rise to the concept of the wavelet variance (or wavelet power spectrum), which decomposes the variance of a stochastic process on a scale-by-scale basis. The wavelet variance has been applied to a variety of time series, and a statistical theory for estimators of this variance has been developed. While there have been applications of the wavelet variance in the 2-D context (in particular, in works by Unser in 1995 on wavelet-based texture analysis for images and by Lark and Webster in 2004 on analysis of soil properties), a formal statistical theory for such analysis has been lacking. In this paper, we develop the statistical theory by generalizing and extending some of the approaches developed for time series, thus leading to a large-sample theory for estimators of 2-D wavelet variances. We apply our theory to simulated data from Gaussian random fields with exponential covariances and from fractional Brownian surfaces. We demonstrate that the wavelet variance is potentially useful for texture discrimination. We also use our methodology to analyze images of four types of clouds observed over the southeast Pacific Ocean.

  14. REVIEWS OF TOPICAL PROBLEMS: Wavelets and their uses

    NASA Astrophysics Data System (ADS)

    Dremin, Igor M.; Ivanov, Oleg V.; Nechitailo, Vladimir A.

    2001-05-01

    This review paper is intended to give a useful guide for those who want to apply the discrete wavelet transform in practice. The notion of wavelets and their use in practical computing and various applications are briefly described, but rigorous proofs of mathematical statements are omitted, and the reader is just referred to the corresponding literature. The multiresolution analysis and fast wavelet transform have become a standard procedure for dealing with discrete wavelets. The proper choice of a wavelet and use of nonstandard matrix multiplication are often crucial for the achievement of a goal. Analysis of various functions with the help of wavelets allows one to reveal fractal structures, singularities etc. The wavelet transform of operator expressions helps solve some equations. In practical applications one often deals with the discretized functions, and the problem of stability of the wavelet transform and corresponding numerical algorithms becomes important. After discussing all these topics we turn to practical applications of the wavelet machinery. They are so numerous that we have to limit ourselves to a few examples only. The authors would be grateful for any comments which would move us closer to the goal proclaimed in the first phrase of the abstract.

  15. Wavelet-based moment invariants for pattern recognition

    NASA Astrophysics Data System (ADS)

    Chen, Guangyi; Xie, Wenfang

    2011-07-01

    Moment invariants have received a lot of attention as features for identification and inspection of two-dimensional shapes. In this paper, two sets of novel moments are proposed by using the auto-correlation of wavelet functions and the dual-tree complex wavelet functions. It is well known that the wavelet transform lacks the property of shift invariance. A little shift in the input signal will cause very different output wavelet coefficients. The autocorrelation of wavelet functions and the dual-tree complex wavelet functions, on the other hand, are shift-invariant, which is very important in pattern recognition. Rotation invariance is the major concern in this paper, while translation invariance and scale invariance can be achieved by standard normalization techniques. The Gaussian white noise is added to the noise-free images and the noise levels vary with different signal-to-noise ratios. Experimental results conducted in this paper show that the proposed wavelet-based moments outperform Zernike's moments and the Fourier-wavelet descriptor for pattern recognition under different rotation angles and different noise levels. It can be seen that the proposed wavelet-based moments can do an excellent job even when the noise levels are very high.

  16. Improved total variation algorithms for wavelet-based denoising

    NASA Astrophysics Data System (ADS)

    Easley, Glenn R.; Colonna, Flavia

    2007-04-01

    Many improvements of wavelet-based restoration techniques suggest the use of the total variation (TV) algorithm. The combination of wavelet and total variation methods is effective, but the reasons for the success of this combination have so far been poorly understood. We propose a variation of the total variation method that is designed to avoid artifacts such as oil-painting effects and is better suited than standard TV techniques for implementation with wavelet-based estimates. We then illustrate the effectiveness of this new TV-based method using some of the latest wavelet transforms, such as contourlets and shearlets.

  17. A Wavelet Packets Approach to Electrocardiograph Baseline Drift Cancellation

    PubMed Central

    Mozaffary, Behzad

    2006-01-01

    Baseline wander elimination is considered a classical problem. In electrocardiography (ECG) signals, baseline drift can influence the accurate diagnosis of heart disease such as ischemia and arrhythmia. We present a wavelet-transform- (WT-) based search algorithm using the energy of the signal in different scales to isolate baseline wander from the ECG signal. The algorithm computes wavelet packet coefficients and then in each scale the energy of the signal is calculated. Comparison is made and the branch of the wavelet binary tree corresponding to higher energy wavelet spaces is chosen. This algorithm is tested using the data record from MIT/BIH database and excellent results are obtained. PMID:23165064
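The paper's energy-guided branch selection is more elaborate, but the core idea, isolating the low-frequency approximation and removing it, can be sketched as follows. The trace below is toy data (a slow sinusoidal drift plus sharp spikes), not an MIT/BIH record, and all names are assumptions of the sketch.

```python
import math

def estimate_baseline(x, levels):
    """Keep only the Haar approximation after `levels` splits: per-block mean."""
    s = 1.0 / math.sqrt(2.0)
    approx = list(x)
    for _ in range(levels):
        approx = [(a + b) * s for a, b in zip(approx[0::2], approx[1::2])]
    # Invert with every detail band zeroed: each coarse value spreads back out.
    for _ in range(levels):
        spread = []
        for a in approx:
            spread.extend([a * s, a * s])
        approx = spread
    return approx

n = 64
drift = [math.sin(2.0 * math.pi * i / n) for i in range(n)]  # baseline wander
beats = [1.0 if i % 16 == 8 else 0.0 for i in range(n)]      # sharp "R peaks"
ecg = [d + b for d, b in zip(drift, beats)]
corrected = [v - b for v, b in zip(ecg, estimate_baseline(ecg, 4))]
# The slow drift is removed while the sharp beats survive.
```

Subtracting the coarse approximation leaves each block of the corrected trace zero-mean, removing the wander without flattening the beats.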

  18. Scope and applications of translation invariant wavelets to image registration

    NASA Technical Reports Server (NTRS)

    Chettri, Samir; LeMoigne, Jacqueline; Campbell, William

    1997-01-01

    The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we conclude that translation invariance is not really necessary; what is needed is affine invariance, and one way to achieve this is via the method of moment invariants. Wavelets, or pyramid processing in general, can then be combined with the method of moment invariants to reduce the computational load.

  19. Wavelet-based verification of the quantitative precipitation forecast

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi; Jakubiak, Bogumil

    2016-06-01

    This paper explores the use of wavelets for spatial verification of quantitative precipitation forecasts (QPF), and especially the capacity of wavelets to provide both localization and scale information. Two 24-h forecast experiments using two versions of the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) on 22 August 2010 over Poland are used to illustrate the method. Strong spatial localization and the associated intermittency of the precipitation field make verification of QPF difficult with standard statistical methods. The wavelet becomes an attractive alternative, because it is specifically designed to extract spatially localized features. The wavelet modes are characterized by two indices, one for scale and one for localization, so these indices can be employed directly for characterizing the performance of QPF in scale and localization without further elaboration or tunable parameters. Furthermore, spatially localized features can be extracted in wavelet space in a relatively straightforward manner with only a weak dependence on a threshold. This may be considered an advantage of the wavelet-based method over more conventional "object"-oriented verification methods, as the latter tend to exhibit strong threshold sensitivity. The present paper also points out limits of the so-called "scale separation" methods based on wavelets. Our study demonstrates how these wavelet-based QPF verifications can be performed straightforwardly. Possibilities for further development of the wavelet-based methods, especially toward the goal of identifying weak physical processes contributing to forecast error, are also pointed out.
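    Because an orthonormal wavelet transform partitions energy exactly (Parseval), a forecast-error field can be split across dyadic scales without tunable parameters. A minimal 2D Haar sketch (not the COAMPS verification code; the error field here is a synthetic stand-in):

```python
import numpy as np

def haar2(x):
    """One level of the orthonormal 2D Haar transform."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def error_energy_by_scale(err, levels=3):
    """Split forecast-error energy across dyadic scales (exact, by Parseval)."""
    energies, ll = [], err
    for _ in range(levels):
        ll, hl, lh, hh = haar2(ll)
        energies.append(np.sum(hl ** 2) + np.sum(lh ** 2) + np.sum(hh ** 2))
    energies.append(np.sum(ll ** 2))  # remaining coarse-scale energy
    return energies

rng = np.random.default_rng(1)
err = rng.standard_normal((64, 64))   # stand-in for forecast minus observation
e = error_energy_by_scale(err)
assert np.isclose(sum(e), np.sum(err ** 2))  # energy is exactly partitioned
```

    Each detail coefficient is additionally indexed by its position, so the same decomposition yields the localization information the abstract emphasises.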

  20. Bayesian Wavelet Shrinkage of the Haar-Fisz Transformed Wavelet Periodogram

    PubMed Central

    2015-01-01

    It is increasingly being realised that many real world time series are not stationary and exhibit evolving second-order autocovariance or spectral structure. This article introduces a Bayesian approach for modelling the evolving wavelet spectrum of a locally stationary wavelet time series. Our new method works by combining the advantages of a Haar-Fisz transformed spectrum with a simple, but powerful, Bayesian wavelet shrinkage method. Our new method produces excellent and stable spectral estimates and this is demonstrated via simulated data and on differenced infant electrocardiogram data. A major additional benefit of the Bayesian paradigm is that we obtain rigorous and useful credible intervals of the evolving spectral structure. We show how the Bayesian credible intervals provide extra insight into the infant electrocardiogram data. PMID:26381141
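    The Haar-Fisz idea, dividing Haar details by the square root of the local mean so their variance is approximately stabilised, can be sketched at a single level as follows (the actual method is multiscale and is paired with Bayesian shrinkage; this one-level version only shows that the transform is exactly invertible):

```python
import numpy as np

def haar_fisz(x):
    """One level of a Haar-Fisz-style transform for nonnegative data:
    Haar details divided by the square root of the local mean."""
    s = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    f = np.where(s > 0, d / np.sqrt(np.where(s > 0, s, 1.0)), 0.0)
    return s, f

def inv_haar_fisz(s, f):
    """Exact inverse: rescale the stabilised details and interleave."""
    d = f * np.sqrt(s)
    x = np.empty(2 * s.size)
    x[0::2], x[1::2] = s + d, s - d
    return x

p = np.abs(np.random.default_rng(2).standard_normal(32)) ** 2  # periodogram-like
s, f = haar_fisz(p)
assert np.allclose(inv_haar_fisz(s, f), p)   # perfect reconstruction
```

    After stabilisation the coefficients are approximately Gaussian with constant variance, which is what makes standard wavelet shrinkage applicable to the periodogram.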

  1. Denoising solar radiation data using coiflet wavelets

    SciTech Connect

    Karim, Samsul Ariffin Abdul Janier, Josefina B. Muthuvalu, Mohana Sundaram; Hasan, Mohammad Khatim; Sulaiman, Jumat; Ismail, Mohd Tahir

    2014-10-24

    Signal denoising and smoothing play an important role in processing signals obtained from experiments or from data collection through observations. Collected data are usually a mixture of the true signal and noise, which may come from the measuring apparatus or from human error in handling the data. Before the data are used for further processing, this unwanted noise needs to be filtered out. One efficient method for filtering the data is the wavelet transform. Because the received solar radiation data fluctuate over time, they contain unwanted oscillations, i.e., noise, which must be filtered out before the data are used to develop a mathematical model. To apply denoising using the wavelet transform (WT), threshold values need to be calculated. In this paper a new thresholding approach is proposed. The coiflet2 wavelet, which has four vanishing moments, is utilized for our purpose. Numerical results show clearly that the new thresholding approach gives better results than the existing approach, namely the global threshold value.
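    A standard baseline for the global thresholding the abstract compares against is the universal threshold with a robust MAD noise estimate; a minimal sketch (this is the reference method, not the authors' new rule):

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def universal_denoise(d):
    """Shrink wavelet detail coefficients with the universal threshold
    T = sigma * sqrt(2 ln N); sigma is the robust MAD noise estimate."""
    sigma = np.median(np.abs(d)) / 0.6745
    return soft(d, sigma * np.sqrt(2.0 * np.log(d.size)))

noise = np.random.default_rng(0).standard_normal(1024)
den = universal_denoise(noise)
assert soft(3.0, 1.0) == 2.0 and soft(-0.5, 1.0) == 0.0
assert np.mean(den != 0) < 0.05   # pure-noise details are almost entirely zeroed
```

    Any improved rule must beat this simple global threshold on the reconstructed signal's error.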

  2. Wavelet Technique Applications in Planetary Nebulae Images

    NASA Astrophysics Data System (ADS)

    Leal Ferreira, M. L.; Rabaça, C. R.; Cuisinier, F.; Epitácio Pereira, D. N.

    2009-05-01

    Through the application of the wavelet technique to a planetary nebula image, we are able to identify structures of different scale sizes present in its wavelet coefficient decompositions. In a multiscale vision model, an object is defined as a hierarchical set of these structures. We can then use this model to independently reconstruct the different objects that compose the nebula. The result is the separation and identification of superposed objects, some of them with very low surface brightness, which makes them generally difficult to see in the original images due to the presence of noise. This allows a more detailed analysis of the brightness distribution in these sources. In this project, we use this method to perform a detailed morphological study of some planetary nebulae and to investigate whether one of them indeed shows internal temperature fluctuations. We have also conducted a series of tests concerning the reliability of the method and the confidence level of the detected objects. The wavelet code used in this project, called OV_WAV, was developed by the team of the UFRJ Astronomy Department.

  3. Wavelet analysis of radon time series

    NASA Astrophysics Data System (ADS)

    Barbosa, Susana; Pereira, Alcides; Neves, Luis

    2013-04-01

    Radon is a radioactive noble gas with a half-life of 3.8 days, ubiquitous in both natural and indoor environments. Being produced in uranium-bearing materials by decay from radium, radon can be easily and accurately measured by nuclear methods, making it an ideal proxy for time-varying geophysical processes. Radon time series exhibit a complex temporal structure and large variability on multiple scales. Wavelets are therefore particularly suitable for scale-by-scale analysis of time series of radon concentrations. In this study, continuous and discrete wavelet analysis is applied to describe the variability structure of hourly radon time series acquired both indoors and at a granite site in central Portugal. A multi-resolution decomposition is performed to extract sub-series associated with specific scales. The high-frequency components are modeled as stationary autoregressive / moving average (ARMA) processes. The amplitude and phase of the periodic components are estimated, and tidal features of the signals are assessed. Residual radon concentrations (after removal of periodic components) are further examined, and the wavelet spectrum is used to estimate the corresponding Hurst exponent. The results for the several radon time series considered in the present study are very heterogeneous in terms of both high-frequency and long-term temporal structure, indicating that radon concentrations are very site-specific and heavily influenced by local factors.

  4. Multispectral multisensor image fusion using wavelet transforms

    USGS Publications Warehouse

    Lemeshewsky, George P.

    1999-01-01

    Fusion techniques can be applied to multispectral and higher spatial resolution panchromatic images to create a composite image that is easier to interpret than the individual images. Wavelet transform-based multisensor, multiresolution fusion (a type of band sharpening) was applied to Landsat thematic mapper (TM) multispectral and coregistered higher resolution SPOT panchromatic images. The objective was to obtain increased spatial resolution, false color composite products to support the interpretation of land cover types, wherein the spectral characteristics of the imagery are preserved to provide the spectral clues needed for interpretation. Since the fusion process should not introduce artifacts, a shift-invariant implementation of the discrete wavelet transform (SIDWT) was used. These results were compared with those using the shift-variant discrete wavelet transform (DWT). Overall, the process includes a hue, saturation, and value color space transform to minimize color changes, and a previously reported point-wise maximum selection rule to combine transform coefficients. The performance of fusion based on the SIDWT and DWT was evaluated with a simulated TM 30-m spatial resolution test image and a higher resolution reference. Simulated imagery was produced by blurring higher resolution color-infrared photography with the TM sensor's point spread function. The SIDWT-based technique produced imagery with fewer artifacts and lower error between fused images and the full resolution reference. Image examples with TM and 10-m SPOT panchromatic data illustrate the reduction in artifacts due to SIDWT-based fusion.
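    The point-wise maximum selection rule combines detail coefficients by magnitude, keeping whichever sensor shows the stronger local feature, while the low-pass bands are typically averaged. A minimal sketch with illustrative arrays (the paper's exact combination rule for the approximation band is not specified here):

```python
import numpy as np

def fuse_details(dA, dB):
    """Point-wise maximum-absolute-value selection of detail coefficients."""
    return np.where(np.abs(dA) >= np.abs(dB), dA, dB)

def fuse_approx(aA, aB):
    """Average the low-pass (approximation) coefficients."""
    return 0.5 * (aA + aB)

dA = np.array([0.9, -0.1, 0.0])
dB = np.array([-0.2, 0.5, 0.3])
assert np.allclose(fuse_details(dA, dB), [0.9, 0.5, 0.3])
assert np.allclose(fuse_approx(np.array([2.0]), np.array([4.0])), [3.0])
```

    With a shift-invariant transform the same rule is applied at every pixel of every subband, which is why the SIDWT avoids the blocking artifacts the decimated DWT can introduce.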

  5. Adaptive wavelet Wiener filtering of ECG signals.

    PubMed

    Smital, Lukáš; Vítek, Martin; Kozumplík, Jiří; Provazník, Ivo

    2013-02-01

    In this study, we focused on the reduction of broadband myopotentials (EMG) in ECG signals using wavelet Wiener filtering with noise-free signal estimation. We used the dyadic stationary wavelet transform (SWT) both in the Wiener filter and in estimating the noise-free signal. Our goal was to find a suitable filter bank and to choose the other parameters of the Wiener filter with respect to the signal-to-noise ratio (SNR) obtained. Testing was performed on artificially noised signals from the standard CSE database sampled at 500 Hz. To create the artificial interference, we started from generated white Gaussian noise whose power spectrum was modified according to a model of the power spectrum of an EMG signal. To improve the filtering performance, we adaptively set the filtering parameters according to the level of interference in the input signal. We were able to increase the average SNR over the whole test database by about 10.6 dB. The proposed algorithm provides better results than the classic wavelet Wiener filter.
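    The core of wavelet-domain Wiener filtering is a per-coefficient gain computed from a pilot (noise-free) estimate of the coefficients; a minimal sketch, with the noise level and pilot values assumed for illustration:

```python
import numpy as np

def wiener_gain(pilot, sigma):
    """Per-coefficient Wiener shrinkage factor s^2 / (s^2 + sigma^2),
    computed from a pilot estimate of the noise-free coefficients."""
    return pilot ** 2 / (pilot ** 2 + sigma ** 2)

def wiener_filter(coeffs, pilot, sigma):
    """Apply the Wiener gains to the noisy wavelet coefficients."""
    return wiener_gain(pilot, sigma) * coeffs

g = wiener_gain(np.array([10.0, 0.1]), sigma=1.0)
assert g[0] > 0.99 and g[1] < 0.01      # strong coefficient kept, weak suppressed
assert np.all((g >= 0) & (g <= 1))      # gains are always in [0, 1]
```

    The quality of the pilot estimate drives the whole filter, which is why the study devotes so much attention to how the noise-free signal is estimated.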

  6. Adaptive wavelet Wiener filtering of ECG signals.

    PubMed

    Smital, Lukáš; Vítek, Martin; Kozumplík, Jiří; Provazník, Ivo

    2013-02-01

    In this study, we focused on the reduction of broadband myopotentials (EMG) in ECG signals using wavelet Wiener filtering with noise-free signal estimation. We used the dyadic stationary wavelet transform (SWT) both in the Wiener filter and in estimating the noise-free signal. Our goal was to find a suitable filter bank and to choose the other parameters of the Wiener filter with respect to the signal-to-noise ratio (SNR) obtained. Testing was performed on artificially noised signals from the standard CSE database sampled at 500 Hz. To create the artificial interference, we started from generated white Gaussian noise whose power spectrum was modified according to a model of the power spectrum of an EMG signal. To improve the filtering performance, we adaptively set the filtering parameters according to the level of interference in the input signal. We were able to increase the average SNR over the whole test database by about 10.6 dB. The proposed algorithm provides better results than the classic wavelet Wiener filter. PMID:23192472

  7. Two-dimensional integer wavelet transform with reduced influence of rounding operations

    NASA Astrophysics Data System (ADS)

    Strutz, Tilo; Rennert, Ines

    2012-12-01

    If a system for lossless compression of images applies a decorrelation step, this step must map integer input values to integer output values. This can be achieved, for example, using the integer wavelet transform (IWT). The non-linearity, introduced by the obligatory rounding steps, is the main drawback of the IWT, since it deteriorates the desired filter characteristic. This paper discusses different methods for reducing the influence of rounding in 5/3 and 9/7 filter banks. A novel combination of two-dimensional implementations of the JPEG2000 9/7 filter bank with new filter coefficients is proposed and the effects of the methods on lossless image compression are investigated. In addition, these filter banks are compared to the 9/7 Deslauriers-Dubuc filter bank (97DD). The analysed two-dimensional implementations generally perform better than their one-dimensional counterparts in terms of compression ratio for natural images. On average, the 2D 97DD filter bank performs best. In addition, it has been found that the compression results cannot be improved by simply reducing the number of lifting steps via 2D implementations of the JPEG2000 9/7 filter bank. Only the 2D implementation with a minimum number of lifting steps, in combination with modified lifting coefficients, leads to fewer bits per pixel than the separable implementation on average for a selected set of images.
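    The obligatory rounding the authors refer to can be seen in the reversible 5/3 (LeGall) lifting scheme, where integer floor division makes the transform nonlinear yet exactly invertible. A one-level 1D sketch (the boundary handling here is a simple mirror, not necessarily identical to JPEG2000's symmetric extension):

```python
import numpy as np

def fwd53(x):
    """Forward reversible 5/3 integer lifting, one level, even-length input."""
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    sn = np.append(s[1:], s[-1])          # mirror the right boundary
    d -= (s + sn) // 2                    # predict odd samples from even neighbours
    dp = np.append(d[:1], d[:-1])         # mirror the left boundary
    s += (dp + d + 2) // 4                # update even samples from details
    return s, d

def inv53(s, d):
    """Exact inverse: undo the update, then the prediction, in reverse order."""
    s, d = s.copy(), d.copy()
    dp = np.append(d[:1], d[:-1])
    s -= (dp + d + 2) // 4                # undo the update exactly
    sn = np.append(s[1:], s[-1])
    d += (s + sn) // 2                    # undo the prediction exactly
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x

x = np.random.default_rng(3).integers(-255, 256, size=64)
assert np.array_equal(inv53(*fwd53(x)), x)   # perfect reconstruction despite rounding
```

    Perfect reconstruction survives the rounding because each lifting step is undone with exactly the same rounded quantity; what the rounding does degrade is the linear filter characteristic, which is the effect the paper sets out to reduce.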

  8. Lossless compression algorithm for multispectral imagers

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth

    2008-08-01

    Multispectral imaging is becoming an increasingly important tool for monitoring the earth and its environment from space borne and airborne platforms. Multispectral imaging data consist of visible and IR measurements from a scene across space and spectrum. Growing data rates resulting from faster scanning and finer spatial and spectral resolution make compression an increasingly critical tool to reduce data volume for transmission and archiving. Research for NOAA NESDIS has been directed at determining, for the characteristics of satellite atmospheric Earth science imager sensor data, what lossless compression ratio can be obtained, as well as the types of mathematics and approaches that can come close to this data's entropy level. Conventional lossless methods do not achieve the theoretical limits for lossless compression on imager data as estimated from the Shannon entropy. In a previous paper, the authors introduced a lossless compression algorithm developed for MODIS as a proxy for future NOAA-NESDIS satellite-based Earth science multispectral imagers such as GOES-R. The algorithm is based on capturing spectral correlations using spectral prediction, and spatial correlations with a linear transform encoder. In decompression, the algorithm uses a statistically computed look-up table to iteratively predict each channel from a channel decompressed in the previous iteration. In this paper we present a new approach which differs fundamentally from our prior work: instead of having a single predictor for each pair of bands, we introduce a piecewise spatially varying predictor, which significantly improves the compression results. Our new algorithm also optimizes the sequence of channels used for prediction. Our results are evaluated by comparison with a state-of-the-art wavelet-based image compression scheme, JPEG2000. We present results on the 14-channel subset of the MODIS imager, which serves as a proxy for the GOES-R imager.
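    The benefit of spectral prediction is that an inter-band residual has much less energy, and hence lower entropy, than the band itself. A minimal least-squares sketch with synthetic bands (a single global affine predictor, not the authors' piecewise spatially varying one):

```python
import numpy as np

def band_predictor(ref, target):
    """Least-squares affine predictor of one spectral band from another."""
    A = np.stack([ref, np.ones_like(ref)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, target, rcond=None)
    return a, b

rng = np.random.default_rng(4)
ref = rng.standard_normal(1000)                              # reference band
target = 2.0 * ref + 5.0 + 0.1 * rng.standard_normal(1000)   # correlated band
a, b = band_predictor(ref, target)
residual = target - (a * ref + b)
# The residual has far less energy than the band itself, so it codes
# in fewer bits; only the residual and (a, b) need to be stored.
assert np.var(residual) < np.var(target)
```

    A piecewise version fits separate (a, b) pairs per spatial region, which is what lets the authors' predictor track spatially varying inter-band relationships.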

  9. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually operates on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged in a deeply coupled way into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches.

  10. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually operates on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS for multispectral images, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged in a deeply coupled way into the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches. PMID:25110741

  11. ECG compression using uniform scalar dead-zone quantization and conditional entropy coding.

    PubMed

    Chen, Jianhua; Wang, Fuyan; Zhang, Yufeng; Shi, Xinling

    2008-05-01

    A new wavelet-based method for the compression of electrocardiogram (ECG) data is presented. A discrete wavelet transform (DWT) is applied to the digitized ECG signal. The DWT coefficients are first quantized with a uniform scalar dead-zone quantizer, and then the quantized coefficients are decomposed into four symbol streams, representing a binary significance stream, the signs, the positions of the most significant bits, and the residual bits. An adaptive arithmetic coder with several different context models is employed for the entropy coding of these symbol streams. Simulation results on several records from the MIT-BIH arrhythmia database show that the proposed coding algorithm outperforms some recently developed ECG compression algorithms.
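    A uniform scalar dead-zone quantizer widens the zero bin so that small wavelet coefficients are discarded outright; a minimal sketch with a hypothetical step size:

```python
import numpy as np

DELTA = 1.0  # quantization step (hypothetical value)

def deadzone_quantize(c, delta=DELTA):
    """Uniform scalar dead-zone quantizer: the zero bin spans (-delta, delta),
    twice as wide as the other bins, zeroing out small coefficients."""
    return np.sign(c) * np.floor(np.abs(c) / delta)

def deadzone_dequantize(q, delta=DELTA):
    """Reconstruct at the midpoint of each nonzero bin."""
    return np.sign(q) * (np.abs(q) + 0.5) * delta

c = np.array([0.4, -0.9, 1.7, -2.3])
q = deadzone_quantize(c)
assert np.array_equal(q, [0.0, 0.0, 1.0, -2.0])      # small values fall in the dead zone
assert deadzone_dequantize(np.array([1.0]))[0] == 1.5
```

    The resulting indices are then split into the significance, sign, most-significant-bit, and residual streams the abstract describes, each with its own context model for the arithmetic coder.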

  12. Wavelet based image visibility enhancement of IR images

    NASA Astrophysics Data System (ADS)

    Jiang, Qin; Owechko, Yuri; Blanton, Brendan

    2016-05-01

    Enhancing the visibility of infrared images obtained in a degraded visibility environment is very important for many applications such as surveillance, visual navigation in bad weather, and helicopter landing in brownout conditions. In this paper, we present an IR image visibility enhancement system based on adaptively modifying the wavelet coefficients of the images. In our proposed system, input images are first filtered by a histogram-based dynamic range filter in order to remove sensor noise and convert the input images into 8-bit dynamic range for efficient processing and display. By utilizing a wavelet transformation, we modify the image intensity distribution and enhance image edges simultaneously. In the wavelet domain, low frequency wavelet coefficients contain original image intensity distribution while high frequency wavelet coefficients contain edge information for the original images. To modify the image intensity distribution, an adaptive histogram equalization technique is applied to the low frequency wavelet coefficients while to enhance image edges, an adaptive edge enhancement technique is applied to the high frequency wavelet coefficients. An inverse wavelet transformation is applied to the modified wavelet coefficients to obtain intensity images with enhanced visibility. Finally, a Gaussian filter is used to remove blocking artifacts introduced by the adaptive techniques. Since wavelet transformation uses down-sampling to obtain low frequency wavelet coefficients, histogram equalization of low-frequency coefficients is computationally more efficient than histogram equalization of the original images. We tested the proposed system with degraded IR images obtained from a helicopter landing in brownout conditions. Our experimental results show that the proposed system is effective for enhancing the visibility of degraded IR images.
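    Amplifying the high-frequency subbands while keeping the low-pass band can be sketched with a one-level orthonormal 2D Haar transform (the paper's adaptive histogram equalization and adaptive edge enhancement are not reproduced here; a single fixed gain stands in for them):

```python
import numpy as np

def haar2(x):
    """One level of the orthonormal 2D Haar transform (even-sized input)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(ll, hl, lh, hh):
    """Exact inverse of haar2: re-interleave the four quadrant reconstructions."""
    a = (ll + hl + lh + hh) / 2
    b = (ll - hl + lh - hh) / 2
    c = (ll + hl - lh - hh) / 2
    d = (ll - hl - lh + hh) / 2
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2], x[0::2, 1::2] = a, b
    x[1::2, 0::2], x[1::2, 1::2] = c, d
    return x

def enhance(img, gain=1.5):
    """Amplify the high-frequency (edge) subbands, keep the low-pass band."""
    ll, hl, lh, hh = haar2(img)
    return ihaar2(ll, gain * hl, gain * lh, gain * hh)

img = np.random.default_rng(5).standard_normal((32, 32))
assert np.allclose(enhance(img, gain=1.0), img)  # gain 1 reconstructs exactly
```

    In the paper the low-pass band is instead reshaped by adaptive histogram equalization, which is cheaper there than on the full image because the LL band is downsampled.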

  13. Wavelet based free-form deformations for nonrigid registration

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  14. Evaluation of irreversible JPEG compression for a clinical ultrasound practice.

    PubMed

    Persons, Kenneth R; Hangiandreou, Nicholas J; Charboneau, Nicholas T; Charboneau, J; James, E; Douglas, Bruce R; Salmon, Ann P; Knudsen, John M; Erickson, Bradley J

    2002-03-01

    A prior ultrasound study indicated that images with low to moderate levels of JPEG and wavelet compression were acceptable for diagnostic purposes. The purpose of this study is to validate this prior finding using the Joint Photographic Experts Group (JPEG) baseline compression algorithm, at a compression ratio of approximately 10:1, on a sufficiently large number of grayscale and color ultrasound images to attain a statistically significant result. The practical goal of this study is to determine whether it is feasible for radiologists to use irreversibly compressed images as an integral part of the day-to-day ultrasound practice (i.e., perform primary diagnosis with, and store in the ultrasound PACS archive, irreversibly compressed images). In this study, 5 radiologists were asked to review 300 grayscale and color static ultrasound images selected from 4 major anatomic groups. Each image was compressed and decompressed using the JPEG baseline compression algorithm at a fixed quality factor, resulting in an average compression ratio of approximately 9:1. The images were presented in pairs (original and compressed) in a blinded fashion on a PACS workstation in the ultrasound reading areas, and the radiologists were asked to pick which image they preferred in terms of diagnostic utility and their degree of certainty (on a scale from 1 to 4). Of the 1499 total readings, 50.17% (95% confidence interval: 47.6% to 52.7%) indicated a preference for the original image in the pair, and 49.83% (95% confidence interval: 47.3% to 52.0%) indicated a preference for the compressed image. These findings led the authors to conclude that static color and gray-scale ultrasound images compressed with JPEG at approximately 9:1 are statistically indistinguishable from the originals for primary diagnostic purposes. Based on the authors' laboratory experience with compression and the results of this and other prior studies, JPEG compression is now being applied to all ultrasound images in

  15. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. 

  16. Schrödinger like equation for wavelets

    NASA Astrophysics Data System (ADS)

    Zúñiga-Segundo, A.; Moya-Cessa, H. M.; Soto-Eguibar, F.

    2016-01-01

    An explicit phase-space representation of the wave function is built based on a wavelet transformation. The wavelet transformation allows us to understand the relationship between the s-ordered Wigner function (the Wigner function when s = 0) and the Torres-Vega-Frederick wave functions. This relationship is necessary to find a general solution of the Schrödinger equation in phase space.

  17. Wavelet based feature extraction and visualization in hyperspectral tissue characterization

    PubMed Central

    Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes

    2014-01-01

    Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally efficient, and can be implemented in real time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom, and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed, and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a significant correlation (p < 0.02) between the chosen tissue parameters and the selected wavelet components. The tissue parameters could be mapped using a subset of the calculated components due to redundancy in spectral information. Vessel structures are well visualized. Wavelet analysis appears to be a promising tool for extraction of spectral features in skin. Future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition. PMID:25574437

  18. Lossless grey image compression using a splitting binary tree

    NASA Astrophysics Data System (ADS)

    Li, Tao; Tian, Xin; Xiong, Cheng-Yi; Li, Yan-Sheng; Zhang, Yun; Tian, Jin-Wen

    2013-10-01

    A multi-layer coding algorithm is proposed for lossless grey image compression. We transform the original image with a set of bases (e.g., wavelets, DCT, and gradient spaces). Then, the transformed image is split into a set of sub-images with a binary tree. The set includes two parts, major sub-images and minor sub-images, which are coded separately. Experimental results over a common dataset show that the proposed algorithm performs close to JPEG-LS in terms of bitrate, while providing scalable image quality similar to JPEG2000: a suboptimal compressed image can still be obtained when the bitstream is truncated by unexpected factors. Our algorithm is quite suitable for image transmission, over the Internet or via satellite.

  19. On ECG reconstruction using weighted-compressive sensing.

    PubMed

    Zonoobi, Dornoosh; Kassim, Ashraf A

    2014-06-01

    The potential of the new weighted-compressive sensing approach for efficient reconstruction of electrocardiograph (ECG) signals is investigated. This is motivated by the observation that ECG signals are hugely sparse in the frequency domain and the sparsity changes slowly over time. The underlying idea of this approach is to extract an estimated probability model for the signal of interest, and then use this model to guide the reconstruction process. The authors show that the weighted-compressive sensing approach is able to achieve reconstruction performance comparable with the current state-of-the-art discrete wavelet transform-based method, but with substantially less computational cost to enable it to be considered for use in the next generation of miniaturised wearable ECG monitoring devices.
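    Guiding reconstruction with an estimated probability model can be expressed as weighted-l1 minimisation, where coefficients believed likely to be active are penalised less. A minimal weighted iterative soft-thresholding sketch with synthetic data (an illustration of the weighting idea, not the authors' algorithm):

```python
import numpy as np

def weighted_ista(A, y, lam, w, iters=100):
    """Iterative soft-thresholding for the weighted-l1 problem
    min_x 0.5*||y - A x||^2 + lam * sum(w * |x|)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x + A.T @ (y - A @ x) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted shrink
    return x

def objective(A, y, lam, w, x):
    return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.sum(w * np.abs(x))

rng = np.random.default_rng(6)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # random sensing matrix
x_true = np.zeros(100); x_true[[3, 30, 70]] = [1.0, -2.0, 1.5]
y = A @ x_true
w = np.ones(100); w[[3, 30, 70]] = 0.1     # prior model: likely support penalised less
x_hat = weighted_ista(A, y, lam=0.05, w=w)
assert objective(A, y, 0.05, w, x_hat) < objective(A, y, 0.05, w, np.zeros(100))
```

    Because the probability model of an ECG spectrum changes slowly over time, the weights can be reused across consecutive windows, which is the source of the computational savings reported.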

  20. On ECG reconstruction using weighted-compressive sensing

    PubMed Central

    Kassim, Ashraf A.

    2014-01-01

    The potential of the new weighted-compressive sensing approach for efficient reconstruction of electrocardiogram (ECG) signals is investigated. This is motivated by the observation that ECG signals are highly sparse in the frequency domain and that their sparsity changes slowly over time. The underlying idea of this approach is to extract an estimated probability model for the signal of interest, and then use this model to guide the reconstruction process. The authors show that the weighted-compressive sensing approach achieves reconstruction performance comparable with the current state-of-the-art discrete wavelet transform-based method, but with substantially less computational cost, making it a candidate for the next generation of miniaturised wearable ECG monitoring devices. PMID:26609381

  1. Compressed Sensing Based Fingerprint Identification for Wireless Transmitters

    PubMed Central

    Zhao, Caidan; Wu, Xiongpeng; Huang, Lianfen; Yao, Yan; Chang, Yao-Chung

    2014-01-01

    Most existing fingerprint identification techniques are unable to distinguish wireless transmitters whose emitted signals are highly attenuated by long-distance propagation and whose transient waveforms are strongly similar. Therefore, this paper proposes a new method to identify different wireless transmitters based on compressed sensing. A data acquisition system is designed to capture the wireless transmitter signals. The complex analytic wavelet transform is used to obtain the envelope of the transient signal, and the corresponding features are extracted using compressed sensing theory. Feature selection utilizing minimum redundancy maximum relevance (mRMR) is employed to obtain the optimal feature subsets for identification. The results show that the proposed method is more efficient for the identification of wireless transmitters with similar transient waveforms. PMID:24892053
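
    The envelope-extraction step can be approximated without a wavelet at all: the sketch below uses the FFT-based analytic signal (a Hilbert transform), which plays a role similar to the paper's complex analytic wavelet transform for a narrowband transient. The decaying 80 Hz tone is a made-up stand-in for a captured turn-on transient.

```python
import numpy as np

def envelope(signal):
    """Envelope via the analytic signal (FFT-based Hilbert transform),
    standing in here for the complex analytic wavelet transform."""
    n = len(signal)
    spec = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0        # keep positive frequencies, doubled
    h[n // 2] = 1.0          # n is even in this example
    return np.abs(np.fft.ifft(spec * h))

# Made-up transient: an exponentially decaying 80 Hz tone.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
transient = np.exp(-5 * t) * np.sin(2 * np.pi * 80 * t)
env = envelope(transient)
```

    Away from the record edges, the recovered envelope closely tracks the true exp(-5t) decay, which is the feature a transmitter fingerprint would be built from.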

  2. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, and the way they are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  3. Wavelet Analysis of Satellite Images for Coastal Watch

    NASA Technical Reports Server (NTRS)

    Liu, Antony K.; Peng, Chich Y.; Chang, Steve Y.-S.

    1997-01-01

    The two-dimensional wavelet transform is a very efficient bandpass filter, which can be used to separate various scales of processes and show their relative phase/location. In this paper, algorithms and techniques for automated detection and tracking of mesoscale features from satellite imagery employing wavelet analysis are developed. The wavelet transform has been applied to satellite images, such as those from synthetic aperture radar (SAR), the advanced very-high-resolution radiometer (AVHRR), and the coastal zone color scanner (CZCS), for feature extraction. The evolution of mesoscale features such as oil slicks, fronts, eddies, and ship wakes can be tracked by wavelet analysis using satellite data from repeating paths. Several examples of the wavelet analysis applied to various satellite images demonstrate the feasibility of this technique for coastal monitoring.

  4. Combining Wavelet Transform and Hidden Markov Models for ECG Segmentation

    NASA Astrophysics Data System (ADS)

    Andreão, Rodrigo Varejão; Boudy, Jérôme

    2006-12-01

    This work aims at providing new insights into the electrocardiogram (ECG) segmentation problem using wavelets. The wavelet transform is combined, for the first time, with a hidden Markov model (HMM) framework in order to carry out beat segmentation and classification. A group of five continuous wavelet functions commonly used in ECG analysis has been implemented and compared within the same framework. All experiments were carried out on the QT database, which is composed of a representative number of ambulatory recordings from several individuals and is supplied with manual labels made by a physician. Our main contribution relies on the consistent set of experiments performed. Moreover, the results obtained in terms of beat segmentation and premature ventricular contraction (PVC) detection are comparable to other works reported in the literature, independently of the wavelet type. Finally, through an original concept of combining two wavelet functions in the segmentation stage, we achieve our best performance.

  5. Application of harmonic wavelet to filtering of rockbolt detecting signal

    NASA Astrophysics Data System (ADS)

    Zhao, Yucheng; Liu, Hongyan; Wang, Jiyan; Miao, Xiexing

    2008-11-01

    The harmonic wavelet has an explicit functional expression, flexible time-frequency division, a simple transform algorithm, and finer frequency refinement than other wavelets. In this paper, based on the frequency distribution of nondestructive testing signals from rockbolt support systems, the discrete harmonic wavelet transform is used to remove the lower- and higher-frequency components from the initial signal. A reconstruction algorithm for the harmonic wavelet is then presented to recover the signal without the unwanted frequency bands. Finally, a numerical signal and a real signal are presented to demonstrate the superiority of the harmonic wavelet for filtering; the results show that it allows the quality of a rockbolt support system to be assessed more precisely and stably.
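
    Because the harmonic wavelet is a perfect box in the frequency domain, the band-rejection step described above reduces to masking FFT coefficients and transforming back. A minimal sketch with an invented test signal (a 50 Hz component of interest buried under low-frequency drift and high-frequency noise):

```python
import numpy as np

def harmonic_wavelet_bandpass(signal, fs, f_lo, f_hi):
    """Keep only the band [f_lo, f_hi]: zero all DFT bins outside it
    and invert, which is harmonic-wavelet filtering in its simplest form."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.fft.irfft(spectrum * mask, n=len(signal))

fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 50 * t)
noisy = clean + 2.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 400 * t)
filtered = harmonic_wavelet_bandpass(noisy, fs, 30.0, 80.0)
```

    With the drift at 5 Hz and the noise at 400 Hz both outside the 30-80 Hz band, the filtered output recovers the 50 Hz component essentially exactly.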

  6. A multiresolution wavelet representation in two or more dimensions

    NASA Technical Reports Server (NTRS)

    Bromley, B. C.

    1992-01-01

    In the multiresolution approximation, a signal is examined on a hierarchy of resolution scales by projection onto sets of smoothing functions. Wavelets are used to carry the detail information connecting adjacent sets in the resolution hierarchy. An algorithm has been implemented to perform a multiresolution decomposition in n greater than or equal to 2 dimensions based on wavelets generated from products of 1-D wavelets and smoothing functions. The functions are chosen so that an n-D wavelet may be associated with a single resolution scale and orientation. The algorithm enables complete reconstruction of a high resolution signal from decomposition coefficients. The signal may be oversampled to accommodate non-orthogonal wavelet systems, or to provide approximate translational invariance in the decomposition arrays.

  7. A New Adaptive Mother Wavelet for Electromagnetic Transient Analysis

    NASA Astrophysics Data System (ADS)

    Guillén, Daniel; Idárraga-Ospina, Gina; Cortes, Camilo

    2016-01-01

    The Wavelet Transform (WT) is a powerful signal-processing technique whose applications in power systems have been increasing for evaluating system conditions such as faults, switching transients, and power quality issues. Electromagnetic transients in power systems are caused by changes in network configuration and produce non-periodic signals, which must be identified to avoid power outages in normal operation or under transient conditions. In this paper a methodology to develop a new adaptive mother wavelet for electromagnetic transient analysis is proposed. Classification is carried out with an innovative technique based on adaptive wavelets, in which filter-bank coefficients are adapted until a discriminant criterion is optimized. The resulting filter coefficients are then used to obtain the new mother wavelet, named wavelet ET, which allows the high-frequency information produced by different electromagnetic transients to be identified and distinguished.

  8. A Multiscale Wavelet Solver with O(n) Complexity

    NASA Astrophysics Data System (ADS)

    Williams, John R.; Amaratunga, Kevin

    1995-11-01

    In this paper, we use the biorthogonal wavelets recently constructed by Dahlke and Weinreich to implement a highly efficient procedure for solving a certain class of one-dimensional problems, (∂^(2l)/∂x^(2l))u = f, l ∈ Z, l > 0. For these problems, the discrete biorthogonal wavelet transform allows us to set up a system of wavelet-Galerkin equations in which the scales are uncoupled, so that a true multiscale solution procedure may be formulated. We prove that the resulting stiffness matrix is in fact an almost perfectly diagonal matrix (the original aim of the construction was to achieve a block-diagonal structure) and we show that this leads to an algorithm whose cost is O(n). We also present numerical results which demonstrate that the multiscale biorthogonal wavelet algorithm is superior to the more conventional single-scale orthogonal wavelet approach both in terms of speed and in terms of convergence.

  9. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for lossless data compression. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which form the basis of the UNIX utilities "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.

  10. An image fusion method based on biorthogonal wavelet

    NASA Astrophysics Data System (ADS)

    Li, Jianlin; Yu, Jiancheng; Sun, Shengli

    2008-03-01

    Image fusion processes and combines source images, with their complementary information, to achieve a more objective and essential understanding of the same object. Recently, image fusion has been extensively applied in fields such as medical imaging, microscopic imaging, remote sensing, computer vision, and robotics. Various methods have been proposed over the past years, such as pyramid decomposition and wavelet-transform algorithms. Owing to its multi-resolution property, the wavelet transform has been applied successfully in image processing. Another advantage of the wavelet transform is that it can be realized much more easily in hardware, because its data format is very simple; this saves resources and, to some extent, addresses the real-time problem of fusing large images. However, since the orthogonal filters of the wavelet transform do not have linear phase, phase distortion leads to distortion of image edges. To make up for this shortcoming, the biorthogonal wavelet is introduced here, and a novel image fusion scheme based on biorthogonal wavelet decomposition is presented. For both the low-frequency and high-frequency wavelet decomposition coefficients, a local-area-energy-weighted-coefficient fusion rule is adopted, with different thresholds set for the low-frequency and high-frequency bands. In the experiment, an MMW image and a visible image are fused using the biorthogonal wavelet transform and the traditional pyramid decomposition algorithm. Compared with traditional pyramid decomposition, the biorthogonal-wavelet-based fusion scheme is more capable of retaining and picking up image information and of compensating for edge distortion, so it has wide application potential.
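
    A toy version of wavelet-domain fusion, with an orthogonal Haar transform standing in for the paper's biorthogonal filters and a simplified choose-max-local-energy rule applied in every subband; the two 8x8 source images are invented stand-ins for the MMW and visible inputs:

```python
import numpy as np

def haar2d(img):
    """One level of a separable 2-D Haar analysis: LL, LH, HL, HH subbands."""
    a = (img[0::2] + img[1::2]) / 2.0        # row averages
    d = (img[0::2] - img[1::2]) / 2.0        # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2], img[1::2] = a + d, a - d
    return img

def fuse(c1, c2):
    """Per position, keep the coefficient whose 3x3 neighbourhood has
    more energy -- a simplified local-area-energy fusion rule."""
    def local_energy(c):
        p = np.pad(c ** 2, 1, mode='edge')
        h, w = c.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return np.where(local_energy(c1) >= local_energy(c2), c1, c2)

img_a = np.zeros((8, 8)); img_a[:, :4] = 1.0     # invented source image A
img_b = np.zeros((8, 8)); img_b[4:, :] = 1.0     # invented source image B
fused = ihaar2d(*[fuse(x, y) for x, y in zip(haar2d(img_a), haar2d(img_b))])
```

    The real scheme additionally uses separate thresholds for the low- and high-frequency bands and biorthogonal (linear-phase) filters to avoid edge distortion.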

  11. Compression of multispectral fluorescence microscopic images based on a modified set partitioning in hierarchal trees

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Robinson, J. Paul; Rajwa, Bartek

    2009-02-01

    Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, these works have been carried out mostly in the fields of remote sensing and video processing. Compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital-photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1), generally adopted for multispectral images, optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques on 3D images. Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore a quad-tree concept such as Said et al.'s SPIHT, along with the correlation of insignificant wavelet coefficients, is proposed to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also brings forth a conditional "sibling" relationship by relating only the insignificant wavelet coefficients of subbands

  12. Image encryption in the wavelet domain

    NASA Astrophysics Data System (ADS)

    Bao, Long; Zhou, Yicong; Chen, C. L. Philip

    2013-05-01

    Most existing image encryption algorithms transform the original image into a noise-like image, whose appearance is an obvious visual sign that an encrypted image is present. Motivated by data-hiding technologies, this paper proposes a novel concept of image encryption: transforming the encrypted original image into another meaningful image that is the final encrypted result and is visually identical to a cover image, thereby overcoming the problem above. Using this concept, we introduce a new image encryption algorithm based on wavelet decomposition. Simulations and security analysis show the excellent performance of the proposed concept and algorithm.

  13. Lung tissue classification using wavelet frames.

    PubMed

    Depeursinge, Adrien; Sage, Daniel; Hidki, Asmâa; Platon, Alexandra; Poletti, Pierre-Alexandre; Unser, Michael; Müller, Henning

    2007-01-01

    We describe a texture classification system that identifies lung tissue patterns from high-resolution computed tomography (HRCT) images of patients affected by interstitial lung diseases (ILDs). This pattern recognition task is part of an image-based diagnostic aid system for ILDs. Five lung tissue patterns (healthy, emphysema, ground glass, fibrosis and micronodules) selected from a multimedia database are classified using the overcomplete discrete wavelet frame decomposition combined with grey-level histogram features. The overall multiclass accuracy reaches 92.5% of correct matches when combining the two types of features, which are found to be complementary. PMID:18003452

  14. Spike detection using the continuous wavelet transform.

    PubMed

    Nenadic, Zoran; Burdick, Joel W

    2005-01-01

    This paper combines wavelet transforms with basic detection theory to develop a new unsupervised method for robustly detecting and localizing spikes in noisy neural recordings. The method requires neither the construction of templates nor the supervised setting of thresholds. We present extensive Monte Carlo simulations, based on actual extracellular recordings, to show that this technique surpasses other commonly used methods in a wide variety of recording conditions. We further demonstrate that the false positives produced by our method resemble actual spikes more closely than those of other techniques such as amplitude thresholding. Moreover, the simplicity of the method allows for nearly real-time execution. PMID:15651566
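
    In spirit, one scale of the continuous wavelet transform is a correlation with a spike-shaped template, and detection theory supplies the threshold. The sketch below is a simplified, single-scale illustration with a Ricker (Mexican-hat) wavelet and a robust median-based noise estimate; the synthetic trace and all constants are invented for the example.

```python
import numpy as np

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet, a common template-free choice for
    spike-shaped events."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

# Synthetic recording: white noise with three injected spike-like events.
rng = np.random.default_rng(3)
trace = 0.1 * rng.standard_normal(2000)
w = ricker(21, 3.0)
for s in (300, 1100, 1700):
    trace[s - 10:s + 11] += w

# Single-scale CWT = correlation with the wavelet; threshold the
# response at 5 robust standard deviations of the background.
response = np.correlate(trace, w, mode='same')
thresh = 5.0 * np.median(np.abs(response)) / 0.6745
detected = np.where(response > thresh)[0]
```

    Samples exceeding the threshold cluster around the injected event times; a full detector would merge each cluster into a single spike time and repeat over several scales.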

  15. Lung tissue classification using wavelet frames.

    PubMed

    Depeursinge, Adrien; Sage, Daniel; Hidki, Asmâa; Platon, Alexandra; Poletti, Pierre-Alexandre; Unser, Michael; Müller, Henning

    2007-01-01

    We describe a texture classification system that identifies lung tissue patterns from high-resolution computed tomography (HRCT) images of patients affected by interstitial lung diseases (ILDs). This pattern recognition task is part of an image-based diagnostic aid system for ILDs. Five lung tissue patterns (healthy, emphysema, ground glass, fibrosis and micronodules) selected from a multimedia database are classified using the overcomplete discrete wavelet frame decomposition combined with grey-level histogram features. The overall multiclass accuracy reaches 92.5% of correct matches when combining the two types of features, which are found to be complementary.

  16. Simulation-based design using wavelets

    NASA Astrophysics Data System (ADS)

    Williams, John R.; Amaratunga, Kevin S.

    1994-03-01

    The design of large-scale systems requires methods of analysis which have the flexibility to provide a fast interactive simulation capability, while retaining the ability to provide high-order solution accuracy when required. This suggests that a hierarchical solution procedure is required that allows us to trade off accuracy for solution speed in a rational manner. In this paper, we examine the properties of the biorthogonal wavelets recently constructed by Dahlke and Weinreich and show how they can be used to implement a highly efficient multiscale solution procedure for solving a certain class of one-dimensional problems.

  17. Multiscale Transient Signal Detection: Localizing Transients in Geodetic Data Through Wavelet Transforms and Sparse Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Riel, B.; Simons, M.; Agram, P.

    2012-12-01

    Transients are a class of deformation signals on the Earth's surface that can be described as non-periodic accumulation of strain in the crust. Over seismically and volcanically active regions, these signals are often challenging to detect due to noise and other modes of deformation. Geodetic datasets that provide precise measurements of surface displacement over wide areas are ideal for exploiting both the spatial and temporal coherence of transient signals. We present an extension to the Multiscale InSAR Time Series (MInTS) approach for analyzing geodetic data by combining the localization benefits of wavelet transforms (localizing signals in space) with sparse optimization techniques (localizing signals in time). Our time parameterization approach allows us to reduce geodetic time series to sparse, compressible signals with very few non-zero coefficients corresponding to transient events. We first demonstrate the temporal transient detection by analyzing GPS data over the Long Valley caldera in California and along the San Andreas fault near Parkfield, CA. For Long Valley, we are able to resolve the documented 2002-2003 uplift event with greater temporal precision. Similarly for Parkfield, we model the postseismic deformation by specific integrated basis splines characterized by timescales that are largely consistent with postseismic relaxation times. We then apply our method to ERS and Envisat InSAR datasets consisting of over 200 interferograms for Long Valley and over 100 interferograms for Parkfield. The wavelet transforms reduce the impact of spatially correlated atmospheric noise common in InSAR data since the wavelet coefficients themselves are essentially uncorrelated. The spatial density and extended temporal coverage of the InSAR data allows us to effectively localize ground deformation events in both space and time with greater precision than has been previously accomplished.

  18. On the Use of Adaptive Wavelet-based Methods for Ocean Modeling and Data Assimilation Problems

    NASA Astrophysics Data System (ADS)

    Vasilyev, Oleg V.; Yousuff Hussaini, M.; Souopgui, Innocent

    2014-05-01

    Latest advancements in parallel wavelet-based numerical methodologies for the solution of partial differential equations, combined with the unique ability of wavelet analysis to unambiguously identify and isolate localized, dynamically dominant flow structures, make it feasible to start developing integrated approaches for ocean modeling and data assimilation problems that take advantage of temporally and spatially varying meshes. In this talk the Parallel Adaptive Wavelet Collocation Method with spatially and temporally varying thresholding is presented, and the feasibility and potential advantages of its use for ocean modeling are discussed. The second half of the talk focuses on the recently developed Simultaneous Space-time Adaptive approach that addresses one of the main challenges of variational data assimilation, namely the requirement to have a forward solution available when solving the adjoint problem. The issue is addressed by concurrently solving the forward and adjoint problems in the entire space-time domain on a near-optimal adaptive computational mesh that automatically adapts to the spatio-temporal structures of the solution. The compressed space-time form of the solution eliminates the need to save or recompute the forward solution for every time slice, as is typically done in traditional time-marching variational data assimilation approaches. The simultaneous spatio-temporal discretization of both the forward and the adjoint problems makes it possible to solve them concurrently on the same space-time adaptive computational mesh, reducing the amount of saved data to the strict minimum for a given a priori controlled accuracy of the solution. The simultaneous space-time adaptive approach to variational data assimilation is demonstrated for the advection-diffusion problem in 1D-t and 2D-t dimensions.

  19. Block-based conditional entropy coding for medical image compression

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng

    2003-05-01

    In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation to pursue conditional entropy coding is that the first-order conditional entropy is always theoretically less than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size for performing conditional entropy coding for various modalities, and suggest that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The proposed approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio, and we point toward a block-based conditional entropy coder with the potential to do so. Though we do not describe a coder that achieves the first-order conditional entropy, a conditional adaptive arithmetic coder would come arbitrarily close to the theoretical conditional entropy. All the results in this paper are based on medical image data sets of various bit-depths and modalities.
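
    The 2-D integer Haar transform such a scheme relies on can be built from a 1-D reversible lifting step. A minimal sketch of that step (the block size, scan order, and entropy coder are out of scope here):

```python
import numpy as np

def int_haar_fwd(x):
    """Reversible integer Haar (S-transform) via lifting:
    detail d = odd - even, summary s = even + floor(d / 2)."""
    even = x[0::2].astype(np.int64)
    odd = x[1::2].astype(np.int64)
    d = odd - even
    s = even + (d >> 1)              # arithmetic shift == floor division
    return s, d

def int_haar_inv(s, d):
    """Exact inverse of the lifting steps, so the coding can be lossless."""
    even = s - (d >> 1)
    odd = even + d
    out = np.empty(s.size + d.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

row = np.array([12, 14, 200, 198, 50, 53, 7, 7])
s, d = int_haar_fwd(row)
restored = int_haar_inv(s, d)
```

    Applying the same step along rows and then columns yields the 2-D integer Haar transform; because every step maps integers to integers reversibly, the round trip is bit-exact, which is what makes lossless medical-image coding possible on top of it.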

  20. A constrained two-layer compression technique for ECG waves.

    PubMed

    Byun, Kyungguen; Song, Eunwoo; Shim, Hwan; Lim, Hyungjoon; Kang, Hong-Goo

    2015-08-01

    This paper proposes a constrained two-layer compression technique for electrocardiogram (ECG) waves whose encoded parameters can be used directly for the diagnosis of arrhythmia. In the first layer, a single ECG beat is represented by one of the registered templates in the codebook. Since the only coding parameter required in this layer is the codebook index of the selected template, its compression ratio (CR) is very high. Note that the distribution of registered templates is also related to the characteristics of the ECG waves, so it can be used as a metric to detect various types of arrhythmia. The residual error between the input and the selected template is encoded by wavelet-based transform coding in the second layer, with the number of wavelet coefficients constrained by a pre-defined maximum allowed distortion. The MIT-BIH arrhythmia database is used to evaluate the performance of the proposed algorithm, which achieves a CR of about 7.18 when the reference percentage root-mean-square difference (PRD) is set to ten. PMID:26737691
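
    In outline, the first layer is just a nearest-template search followed by residual computation. A hedged sketch with made-up templates and a made-up beat (the real system's codebook registration, distortion bound, and wavelet residual coder are omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 64)

# Invented two-entry codebook: a narrow and a wide QRS-like bump.
templates = np.stack([
    np.exp(-((t - 0.5) / 0.05) ** 2),
    np.exp(-((t - 0.5) / 0.15) ** 2),
])

# Incoming beat: first morphology plus a little measurement noise.
beat = templates[0] + 0.02 * rng.standard_normal(64)

# Layer 1: transmit only the index of the best-matching template.
errs = ((templates - beat) ** 2).sum(axis=1)
idx = int(np.argmin(errs))

# Layer 2: the residual is what the wavelet transform coder would encode,
# keeping just enough coefficients to stay within the distortion bound.
residual = beat - templates[idx]
```

    Since the residual is small and noise-like when the codebook matches well, most of the bit budget goes to the single index, which is where the high CR comes from.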

  1. Comprehensive numerical methodology for direct numerical simulations of compressible Rayleigh-Taylor instability

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott J.; Livescu, Daniel; Vasilyev, Oleg V.

    2016-05-01

    An investigation of compressible Rayleigh-Taylor instability (RTI) using Direct Numerical Simulations (DNS) requires efficient numerical methods, advanced boundary conditions, and consistent initialization in order to capture the wide range of scales and vortex dynamics present in the system, while reducing the computational impact associated with acoustic wave generation and the subsequent interaction with the flow. An advanced computational framework is presented that handles the challenges introduced by considering the compressive nature of RTI systems, which include sharp interfacial density gradients on strongly stratified background states, acoustic wave generation and removal at computational boundaries, and stratification dependent vorticity production. The foundation of the numerical methodology described here is the wavelet-based grid adaptivity of the Parallel Adaptive Wavelet Collocation Method (PAWCM) that maintains symmetry in single-mode RTI systems to extreme late-times. PAWCM is combined with a consistent initialization, which reduces the generation of acoustic disturbances, and effective boundary treatments, which prevent acoustic reflections. A dynamic time integration scheme that can handle highly nonlinear and potentially stiff systems, such as compressible RTI, completes the computational framework. The numerical methodology is used to simulate two-dimensional single-mode RTI to extreme late-times for a wide range of flow compressibility and variable density effects. The results show that flow compressibility acts to reduce the growth of RTI for low Atwood numbers, as predicted from linear stability analysis.

  2. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms, and fewer on non-linear ones. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform, the morphological Haar wavelet transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. The watermark information is then adaptively embedded into the original image at different resolutions, exploiting features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet-transform algorithms.
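
    One common form of the morphological Haar analysis pair takes the pairwise minimum (an erosion) as the approximation and the pairwise difference as the detail. The 1-D sketch below is an illustrative form of that non-linear yet perfectly invertible step, not necessarily the exact operator used by the authors:

```python
import numpy as np

def morph_haar_fwd(x):
    """Morphological Haar step: approximation = pairwise minimum,
    detail = pairwise difference. Non-linear but invertible."""
    a, b = x[0::2], x[1::2]
    return np.minimum(a, b), a - b

def morph_haar_inv(s, d):
    """Invert by noting which element of each pair was the minimum:
    the sign of the detail tells us."""
    a = np.where(d >= 0, s + d, s)
    b = np.where(d >= 0, s, s - d)
    out = np.empty(s.size * 2, dtype=np.result_type(s, d))
    out[0::2], out[1::2] = a, b
    return out

x = np.array([9, 3, 4, 4, 7, 12, 0, 5])
s, d = morph_haar_fwd(x)
restored = morph_haar_inv(s, d)
```

    Unlike the linear Haar average, the minimum never creates new grey levels and preserves local edges, which is the property that makes morphological wavelets attractive for watermarking and shape-sensitive processing.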

  3. Generalized Morse wavelets for the phase evaluation of projected fringe pattern

    NASA Astrophysics Data System (ADS)

    Kocahan Yılmaz, Özlem; Coşkun, Emre; Özder, Serhat

    2014-10-01

    Generalized Morse wavelets are proposed for evaluating the phase information of a projected fringe pattern with spatial carrier frequency in the x direction. The height profile of the object is determined from the phase-change distribution obtained with the phase of the continuous wavelet transform. Choosing an appropriate mother wavelet is an important step in the phase calculation. The zeroth-order generalized Morse wavelet is chosen as the mother wavelet because of its flexible spatial- and frequency-localization properties and because it is exactly analytic. Experimental results for the Morlet and Paul wavelets are compared with the results of the generalized Morse wavelet analysis.

  4. Solar wind compressible structures at ion scales

    NASA Astrophysics Data System (ADS)

    Perrone, D.; Alexandrova, O.; Rocoto, V.; Pantellini, F. G. E.; Zaslavsky, A.; Maksimovic, M.; Issautier, K.; Mangeney, A.

    2014-12-01

    In the solar wind turbulent cascade, the energy partition between fluid and kinetic degrees of freedom in the vicinity of the plasma characteristic scales, i.e. the ion and electron Larmor radii and inertial lengths, is still under debate. In a neighborhood of the ion scales, it has been observed that the spectral shape changes and fluctuations become more compressible. Nowadays, a huge scientific effort is directed toward understanding the link between macroscopic and microscopic scales and disclosing the nature of compressive fluctuations, namely whether space plasma turbulence is a mixture of quasi-linear waves (such as whistler or kinetic Alfvén waves) or whether turbulence is strong, with the formation of coherent structures responsible for dissipation. Here we present an automatic method to identify compressible coherent structures around the ion spectral break, using a Morlet wavelet decomposition of the magnetic signal from the Cluster spacecraft and reconstruction of the magnetic fluctuations in a selected scale range. Different kinds of coherent structures have been detected: from soliton-like one-dimensional structures to current-sheet-like or wave-like two-dimensional structures. Using multi-satellite analysis to characterize the 3D geometry and the propagation in the plasma rest frame, we find that these structures propagate quasi-perpendicular to the mean magnetic field with finite velocity. Moreover, the spatial scales of the coherent structures have been estimated without invoking the Taylor hypothesis. Our observations in the solar wind can provide constraints on theoretical modeling of small-scale turbulence and dissipation in collisionless magnetized plasmas.

  5. A Comprehensive Noise Robust Speech Parameterization Algorithm Using Wavelet Packet Decomposition-Based Denoising and Speech Feature Representation Techniques

    NASA Astrophysics Data System (ADS)

    Kotnik, Bojan; Kačič, Zdravko

    2007-12-01

    This paper concerns the problem of automatic speech recognition in noise-intense and adverse environments. The main goal of the proposed work is the definition, implementation, and evaluation of a novel noise-robust speech signal parameterization algorithm. The proposed procedure is based on a time-frequency speech signal representation using wavelet packet decomposition. A new modified soft-thresholding algorithm based on time-frequency adaptive threshold determination was developed to efficiently reduce the level of additive noise in the input noisy speech signal. A two-stage Gaussian mixture model (GMM)-based classifier was developed to perform speech/nonspeech as well as voiced/unvoiced classification. An adaptive topology of the wavelet packet decomposition tree based on voiced/unvoiced detection was introduced to analyze voiced and unvoiced segments of the speech signal separately. The main feature vector consists of a combination of log-root-compressed wavelet packet parameters and autoregressive parameters. The final output feature vector is produced using a two-stage feature vector postprocessing procedure. In the experimental framework, the noisy speech databases Aurora 2 and Aurora 3 were applied together with the corresponding standardized acoustical model training/testing procedures. The automatic speech recognition performance achieved using the proposed noise-robust speech parameterization procedure was compared to the standardized mel-frequency cepstral coefficient (MFCC) feature extraction procedures ETSI ES 201 108 and ETSI ES 202 050.
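
    The modified soft-thresholding step is a variation on the classic rule sketched below: decompose, shrink the detail coefficients toward zero, and reconstruct. This sketch applies plain soft thresholding to one level of Haar details on an invented noisy signal; the paper's wavelet packet tree, adaptive threshold, and GMM classifier are not reproduced.

```python
import numpy as np

def soft(x, t):
    """Soft thresholding: shrink magnitudes toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(x, t):
    """One-level Haar decomposition, soft-threshold the details, invert."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = soft((x[0::2] - x[1::2]) / np.sqrt(2), t)
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(0)
t_axis = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 4 * t_axis)       # slowly varying "speech" stand-in
noisy = clean + 0.3 * rng.standard_normal(1024)
denoised = haar_denoise(noisy, t=0.5)
```

    A smooth signal concentrates in the approximation band while additive white noise spreads evenly, so shrinking the details removes mostly noise; the adaptive, time-frequency-dependent threshold in the paper refines exactly this trade-off.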

  6. Compression Ratio Adjuster

    NASA Technical Reports Server (NTRS)

    Akkerman, J. W.

    1982-01-01

    New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.

  7. Facial Feature Extraction Based on Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Viet

    Facial feature extraction is one of the most important processes in face recognition, expression recognition and face detection. The aims of facial feature extraction are eye location, the shape of the eyes, the eyebrows, the mouth, the head boundary, the face boundary, the chin and so on. The purpose of this paper is to develop an automatic facial feature extraction system which is able to identify the eye location, the detailed shape of the eyes and mouth, the chin and the inner boundary from facial images. This system not only extracts the location information of the eyes, but also estimates four important points in each eye, which helps us to rebuild the eye shape. To model the mouth shape, mouth extraction gives us the mouth location, the two corners of the mouth, and the top and bottom lips. From the inner boundary and the chin, we obtain the face boundary. Based on wavelet features, we can reduce the noise in the input image and detect edge information. In order to extract the eyes, mouth and inner boundary, we combine wavelet features and facial characteristics to design algorithms for finding the midpoint, the eyes' coordinates, four important points of each eye, the mouth's coordinates, four important points of the mouth, the chin coordinate, and then the inner boundary. The developed system is tested on the Yale Faces database and on Pedagogy students' faces.

  8. Wavelet Denoising of Mobile Radiation Data

    SciTech Connect

    Campbell, D B

    2008-10-31

    The FY08 phase of this project investigated the merits of video fusion as a method for mitigating the false alarms encountered by vehicle borne detection systems in an effort to realize performance gains associated with wavelet denoising. The fusion strategy exploited the significant correlations which exist between data obtained from radiation detectors and video systems with coincident fields of view. The additional information provided by optical systems can greatly increase the capabilities of these detection systems by reducing the burden of false alarms and through the generation of actionable information. The investigation into the use of wavelet analysis techniques as a means of filtering the gross-counts signal obtained from moving radiation detectors showed promise for vehicle borne systems. However, the applicability of these techniques to man-portable systems is limited due to minimal gains in performance over the rapid feedback available to system operators under walking conditions. Furthermore, the fusion of video holds significant promise for systems operating from vehicles or systems organized into stationary arrays; however, the added complexity and hardware required by this technique renders it infeasible for man-portable systems.

  9. Continuous wavelet transform in quantum field theory

    NASA Astrophysics Data System (ADS)

    Altaisky, M. V.; Kaputkina, N. E.

    2013-07-01

    We describe the application of the continuous wavelet transform to calculation of the Green functions in quantum field theory: scalar ϕ4 theory, quantum electrodynamics, and quantum chromodynamics. The method of continuous wavelet transform in quantum field theory, presented by Altaisky [Phys. Rev. D 81, 125003 (2010)] for the scalar ϕ4 theory, consists in substitution of the local fields ϕ(x) by those dependent on both the position x and the resolution a. The substitution of the action S[ϕ(x)] by the action S[ϕa(x)] makes the local theory into a nonlocal one and implies the causality conditions related to the scale a, the region causality [J. D. Christensen and L. Crane, J. Math. Phys. (N.Y.) 46, 122502 (2005)]. These conditions make the Green functions G(x1,a1,…,xn,an)=⟨ϕa1(x1)…ϕan(xn)⟩ finite for any given set of regions by means of an effective cutoff scale A=min⁡(a1,…,an).

  10. Wavelet-Based Interpolation and Representation of Non-Uniformly Sampled Spacecraft Mission Data

    NASA Technical Reports Server (NTRS)

    Bose, Tamal

    2000-01-01

    A well-documented problem in the analysis of data collected by spacecraft instruments is the need for an accurate, efficient representation of the data set. The data may suffer from several problems, including additive noise, data dropouts, an irregularly-spaced sampling grid, and time-delayed sampling. These data irregularities render most traditional signal processing techniques unusable, and thus the data must be interpolated onto an even grid before scientific analysis techniques can be applied. In addition, the extremely large volume of data collected by scientific instrumentation presents many challenging problems in the area of compression, visualization, and analysis. Therefore, a representation of the data is needed which provides a structure which is conducive to these applications. Wavelet representations of data have already been shown to possess excellent characteristics for compression, data analysis, and imaging. The main goal of this project is to develop a new adaptive filtering algorithm for image restoration and compression. The algorithm should have low computational complexity and a fast convergence rate. This will make the algorithm suitable for real-time applications. The algorithm should be able to remove additive noise and reconstruct lost data samples from images.

  11. Fractal image compression

    NASA Technical Reports Server (NTRS)

    Barnsley, Michael F.; Sloan, Alan D.

    1989-01-01

    Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at a cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
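
    As a concrete toy instance of the idea, the sketch below implements a Jacquin-style fractal coder in one dimension: each range block is approximated by a contracted, affinely scaled domain block, and decoding iterates the stored maps from an arbitrary start. Block sizes, the contractivity clip, and the ramp test signal are illustrative assumptions, not the Barnsley-Sloan scheme itself.

```python
import numpy as np

def downsampled_domains(x, rb):
    """Domain pool: length-2*rb windows, pairwise-averaged down to rb samples."""
    return [(x[i:i + 2 * rb][0::2] + x[i:i + 2 * rb][1::2]) / 2.0
            for i in range(0, len(x) - 2 * rb + 1, rb)]

def encode(x, rb=4):
    """For each range block, store (domain index, scale s, offset o) of the
    best affine fit r ~ s*domain + o, with |s| clipped below 1 (contractive)."""
    domains = downsampled_domains(x, rb)
    code = []
    for r0 in range(0, len(x), rb):
        r = x[r0:r0 + rb]
        best = None
        for j, dom in enumerate(domains):
            dc = dom - dom.mean()
            denom = float((dc ** 2).sum())
            s = 0.0 if denom == 0 else float(np.clip((r - r.mean()) @ dc / denom, -0.9, 0.9))
            o = float(r.mean() - s * dom.mean())
            err = float(((s * dom + o - r) ** 2).sum())
            if best is None or err < best[0]:
                best = (err, j, s, o)
        code.append(best[1:])
    return code

def decode(code, n, rb=4, iters=60):
    """Iterate the stored maps from zeros; contractivity (|s| < 1) makes
    the iteration converge to the attractor regardless of the start."""
    x = np.zeros(n)
    for _ in range(iters):
        domains = downsampled_domains(x, rb)
        y = np.empty(n)
        for k, (j, s, o) in enumerate(code):
            y[k * rb:(k + 1) * rb] = s * domains[j] + o
        x = y
    return x

ramp = np.arange(64, dtype=float)   # a ramp admits a zero-collage-error code
rec = decode(encode(ramp), 64)
```

    The stability property mentioned in the abstract shows up here directly: decoding starts from zeros, yet the iteration contracts onto the attractor defined by the code.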

  12. Efficient transmission of compressed data for remote volume visualization.

    PubMed

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client with a low bandwidth constraint.

  13. A rapid compression technique for 4-D functional MRI images using data rearrangement and modified binary array techniques.

    PubMed

    Uma Vetri Selvi, G; Nadarajan, R

    2015-12-01

    Compression techniques are vital for efficient storage and fast transfer of medical image data. Existing compression techniques take a significant amount of time for encoding and decoding, and hence the purpose of compression is not fully satisfied. In this paper a rapid 4-D lossy compression method constructed using data rearrangement, wavelet-based contourlet transformation and a modified binary array technique is proposed for functional magnetic resonance imaging (fMRI) images. In the proposed method, the image slices of fMRI data are rearranged so that the redundant slices form a sequence. The image sequence is then divided into slices and transformed using the wavelet-based contourlet transform (WBCT). In WBCT, the high-frequency sub-band obtained from the wavelet transform is further decomposed into multiple directional sub-bands by a directional filter bank to obtain more directional information. The relationship between the coefficients is changed in WBCT as it has more directions. The differences in parent-child relationships are handled by a repositioning algorithm. The repositioned coefficients are then subjected to quantization. The quantized coefficients are further compressed by a modified binary array technique, where the most frequently occurring value of a sequence is coded only once. The proposed method has been tested with fMRI images, and the results indicated that its processing time is lower than that of the existing wavelet-based set partitioning in hierarchical trees (SPIHT) and set partitioning embedded block coder (SPECK) compression schemes [1]. The proposed method also yielded better compression performance than the wavelet-based SPECK coder. The objective results showed that the proposed method attains a good compression ratio while maintaining a peak signal-to-noise ratio above 70 for all the experimented sequences. The SSIM value is equal to 1 and the value of CC is greater than 0.9 for all the experimented sequences.

  14. Wavelet Methods Developed to Detect and Control Compressor Stall

    NASA Technical Reports Server (NTRS)

    Le, Dzu K.

    1997-01-01

    A "wavelet" is, by definition, an amplitude-varying, short waveform with a finite bandwidth (e.g., that shown in the first two graphs). Naturally, wavelets are more effective than the sinusoids of Fourier analysis for matching and reconstructing signal features. In wavelet transformation and inversion, all transient or periodic data features (as in compressor-inlet pressures) can be detected and reconstructed by stretching or contracting a single wavelet to generate the matching building blocks. Consequently, wavelet analysis provides many flexible and effective ways to reduce noise and extract signals in ways that surpass classical techniques - making it very attractive for data analysis, modeling, and active control of stall and surge in high-speed turbojet compressors. Therefore, fast and practical wavelet methods are being developed in-house at the NASA Lewis Research Center to assist in these tasks. This includes establishing user-friendly links between some fundamental wavelet analysis ideas and the classical theories (or practices) of system identification, data analysis, and processing.

  15. On-Line Loss of Control Detection Using Wavelets

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J. (Technical Monitor); Thompson, Peter M.; Klyde, David H.; Bachelder, Edward N.; Rosenthal, Theodore J.

    2005-01-01

    Wavelet transforms are used for on-line detection of aircraft loss of control. Wavelet transforms are compared with Fourier transform methods and shown to more rapidly detect changes in the vehicle dynamics. This faster response is due to a time window that decreases in length as the frequency increases. New wavelets are defined that further decrease the detection time by skewing the shape of the envelope. The wavelets are used for power spectrum and transfer function estimation. Smoothing is used to trade off the variance of the estimate with detection time. Wavelets are also used as a front-end to the eigensystem reconstruction algorithm. Stability metrics are estimated from the frequency response and models, and it is these metrics that are used for loss of control detection. A Matlab toolbox was developed for post-processing simulation and flight data using the wavelet analysis methods. A subset of these methods was implemented in real time and named the Loss of Control Analysis Tool Set or LOCATS. A manual control experiment was conducted using a hardware-in-the-loop simulator for a large transport aircraft, in which the real time performance of LOCATS was demonstrated. The next step is to use these wavelet analysis tools for flight test support.

  16. CVS Filtering of 3D Turbulent Mixing Layers Using Orthogonal Wavelets

    NASA Technical Reports Server (NTRS)

    Schneider, Kai; Farge, Marie; Pellegrino, Giulio; Rogers, Michael

    2000-01-01

    Coherent Vortex Simulation (CVS) filtering has been applied to Direct Numerical Simulation (DNS) data of forced and unforced time-developing turbulent mixing layers. CVS filtering splits the turbulent flow into two orthogonal parts, one corresponding to coherent vortices and the other to incoherent background flow. We have shown that the coherent vortices can be represented by few wavelet modes and that these modes are sufficient to reproduce the vorticity probability distribution function (PDF) and the energy spectrum over the entire inertial range. The remaining incoherent background flow is homogeneous, has small amplitude, and is uncorrelated. These results are compared with those obtained for the same compression rate using large eddy simulation (LES) filtering. In contrast to the incoherent background flow of CVS filtering, the LES subgrid scales have a much larger amplitude and are correlated, which makes their statistical modeling more difficult.

  17. Extended wavelet transformation to digital holographic reconstruction: application to the elliptical, astigmatic Gaussian beams.

    PubMed

    Remacha, Clément; Coëtmellec, Sébastien; Brunel, Marc; Lebrun, Denis

    2013-02-01

    Wavelet analysis provides an efficient tool in numerous signal processing problems and has been implemented in optical processing techniques, such as in-line holography. This paper proposes an improvement of this tool for the case of an elliptical, astigmatic Gaussian (AEG) beam. We show that this mathematical operator allows reconstructing an image of a spherical particle without compression of the reconstructed image, which increases the accuracy of the 3D location of particles and of their size measurement. To validate the performance of this operator we have studied the diffraction pattern produced by a particle illuminated by an AEG beam. This study used mutual intensity propagation, and the particle is defined as a chirped Gaussian sum. The proposed technique was applied and the experimental results are presented.

  18. Highly scalable differential JPEG 2000 wavelet video codec for Internet video streaming

    NASA Astrophysics Data System (ADS)

    Zhao, Lifeng; Kim, JongWon; Bao, Yiliang; Kuo, C.-C. Jay

    2000-12-01

    A highly scalable wavelet video codec is proposed for Internet video streaming applications based on the simplified JPEG-2000 compression core. Most existing video coding solutions utilize a fixed temporal grouping structure, resulting in quality degradation due to structural mismatch with inherent motion and scene change. Thus, by adopting an adaptive frame grouping scheme based on fast scene change detection, a flexible temporal grouping is proposed according to motion activities. To provide good temporal scalability regardless of packet loss, the dependency structure inside a temporal group is simplified by referencing only the initial intra-frame in telescopic motion estimation at the cost of coding efficiency. In addition, predictive-frames in a temporal group are prioritized according to their relative motion and coding cost. Finally, the joint spatio-temporal scalability support of the proposed video solution is demonstrated in terms of the network adaptation capability.

  20. On the Daubechies-based wavelet differentiation matrix

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a Daubechies-based wavelet basis is constructed and superconvergence is proven. That is, it is proven that, under the assumption of periodic boundary conditions, the differentiation matrix is accurate of order 2M, even though the approximation subspace can exactly represent only polynomials up to degree M-1, where M is the number of vanishing moments of the associated wavelet. It is illustrated that Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small-scale structure is present.

  1. Medical image fusion by wavelet transform modulus maxima

    NASA Astrophysics Data System (ADS)

    Guihong, Qu; Dali, Zhang; Pingfan, Yan

    2001-08-01

    Medical image fusion has been used to derive useful information from multimodality medical image data. In this research, we propose a novel method for multimodality medical image fusion. Using the wavelet transform, we achieve a fusion scheme. A fusion rule is proposed and used for calculating the wavelet transform modulus maxima of input images at different bandwidths and levels. To evaluate the fusion result, a metric based on mutual information (MI) is presented for measuring the fusion effect. The performance of two other wavelet-transform-based image fusion methods is briefly described for comparison. The experimental results demonstrate the effectiveness of the fusion scheme.

  2. EEG Artifact Removal Using a Wavelet Neural Network

    NASA Technical Reports Server (NTRS)

    Nguyen, Hoang-Anh T.; Musson, John; Li, Jiang; McKenzie, Frederick; Zhang, Guangfan; Xu, Roger; Richey, Carl; Schnell, Tom

    2011-01-01

    In this paper we developed a wavelet neural network (WNN) algorithm for electroencephalogram (EEG) artifact removal without electrooculographic (EOG) recordings. The algorithm combines the universal approximation characteristics of neural networks and the time-frequency properties of wavelets. We compared the WNN algorithm with the ICA technique and a wavelet thresholding method, which was realized by using Stein's unbiased risk estimate (SURE) with an adaptive gradient-based optimal threshold. Experimental results on a driving test data set show that WNN can remove EEG artifacts effectively without diminishing useful EEG information, even for very noisy data.
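
    The wavelet-thresholding baseline mentioned in this abstract relies on Stein's unbiased risk estimate. A minimal numpy sketch of the classic SureShrink threshold search (Donoho and Johnstone) on a single coefficient vector follows; the sparse synthetic signal and noise level are assumed for illustration, and the adaptive gradient-based variant of the paper is not reproduced.

```python
import numpy as np

def sure_threshold(coeffs, sigma=1.0):
    """SureShrink: choose the soft threshold minimizing Stein's unbiased
    risk estimate, SURE(t) = n - 2*#{|x_i| <= t} + sum(min(|x_i|, t)^2),
    for coefficients with noise standard deviation sigma."""
    x = np.abs(np.asarray(coeffs, dtype=float)) / sigma
    n = x.size
    ts = np.sort(x)                     # candidate thresholds
    risks = [n - 2 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2) for t in ts]
    return sigma * ts[int(np.argmin(risks))]

# sparse means plus unit-variance Gaussian noise (illustrative assumption)
rng = np.random.default_rng(2)
theta = np.zeros(256)
theta[:8] = 10.0
y = theta + rng.standard_normal(256)

thr = sure_threshold(y)
shrunk = np.sign(y) * np.maximum(np.abs(y) - thr, 0.0)
```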

  3. Wavelet analysis and scaling properties of time series

    NASA Astrophysics Data System (ADS)

    Manimaran, P.; Panigrahi, Prasanta K.; Parikh, Jitendra C.

    2005-10-01

    We propose a wavelet based method for the characterization of the scaling behavior of nonstationary time series. It makes use of the built-in ability of the wavelets for capturing the trends in a data set, in variable window sizes. Discrete wavelets from the Daubechies family are used to illustrate the efficacy of this procedure. After studying binomial multifractal time series with the present and earlier approaches of detrending for comparison, we analyze the time series of averaged spin density in the 2D Ising model at the critical temperature, along with several experimental data sets possessing multifractal behavior.
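
    A minimal illustration of wavelet-based scaling analysis, in the spirit of wavelet-variance (Abry-Veitch style) estimation rather than the authors' exact detrending procedure, and with Haar filters standing in for the Daubechies family, estimates the Hurst exponent of a random walk from the level-wise variance of detail coefficients:

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

rng = np.random.default_rng(3)
walk = np.cumsum(rng.standard_normal(4096))   # Brownian motion: H = 0.5

a = walk
log2_var = []
for j in range(1, 7):                         # detail variances, levels 1..6
    a, d = haar_dwt(a)
    log2_var.append(np.log2(np.var(d)))

# for a self-similar process, Var(d_j) ~ 2^(j*(2H+1))
slope = np.polyfit(np.arange(1, 7), log2_var, 1)[0]
H = (slope - 1.0) / 2.0
```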

  4. A wavelet approach to binary blackholes with asynchronous multitasking

    NASA Astrophysics Data System (ADS)

    Lim, Hyun; Hirschmann, Eric; Neilsen, David; Anderson, Matthew; Debuhr, Jackson; Zhang, Bo

    2016-03-01

    Highly accurate simulations of binary black holes and neutron stars are needed to address a variety of interesting problems in relativistic astrophysics. We present a new method for solving the Einstein equations (BSSN formulation) using iterated interpolating wavelets. Wavelet coefficients provide a direct measure of the local approximation error for the solution and place collocation points that naturally adapt to features of the solution. Further, they exhibit exponential convergence on unevenly spaced collocation points. The parallel implementation of the wavelet simulation framework presented here deviates from conventional practice in combining multi-threading with a form of message-driven computation sometimes referred to as asynchronous multitasking.

  5. EEG seizure identification by using optimized wavelet decomposition.

    PubMed

    Pinzon-Morales, R D; Orozco-Gutierrez, A; Castellanos-Dominguez, G

    2011-01-01

    A methodology for wavelet synthesis based on the lifting scheme and genetic algorithms is presented. Wavelet synthesis is often used to address the problem of properly choosing a wavelet function from an existing library, which may not be specially designed for the application at hand. The task under consideration is the identification of epileptic seizures in electroencephalogram recordings. Although basic classifiers are employed, the results showed that the proposed methodology is successful in the considered study, achieving classification rates similar to those reported in the literature. PMID:22254892

  6. Wavelet transform based on the optimal wavelet pairs for tunable diode laser absorption spectroscopy signal processing.

    PubMed

    Li, Jingsong; Yu, Benli; Fischer, Horst

    2015-04-01

    This paper presents a novel methodology based on the discrete wavelet transform (DWT) and the choice of optimal wavelet pairs to adaptively process tunable diode laser absorption spectroscopy (TDLAS) spectra for quantitative analysis, such as molecular spectroscopy and trace gas detection. The proposed methodology aims to construct an optimal calibration model for a TDLAS spectrum, regardless of its background structural characteristics, thus facilitating the application of TDLAS as a powerful tool for analytical chemistry. The performance of the proposed method is verified by analysis of both synthetic and observed signals, characterized by different noise levels and baseline drift. Both fitting precision and signal-to-noise ratio are improved significantly using the proposed method.
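
    One common DWT trick for TDLAS-like spectra, separating a slowly varying baseline from a narrow absorption feature by discarding the deepest approximation band, can be sketched as follows. The Haar filters, level count, and ramp-plus-Gaussian synthetic spectrum are assumptions for illustration, not the paper's optimal-wavelet-pair procedure.

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def remove_baseline(x, levels=6):
    """Zero the deepest approximation band: this subtracts the block-wise
    local mean, i.e. the slowly varying baseline component."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    a = np.zeros_like(a)                  # drop the baseline band
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

# synthetic spectrum: linear baseline drift + narrow absorption line + noise
rng = np.random.default_rng(4)
i = np.arange(1024)
spectrum = np.linspace(0.0, 3.0, 1024) + np.exp(-((i - 400) / 15.0) ** 2)
spectrum = spectrum + 0.02 * rng.standard_normal(1024)

flat = remove_baseline(spectrum, levels=6)
```

    On the raw spectrum the global maximum sits on the drift; after baseline removal it returns to the absorption line near index 400.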

  7. Predicting apple tree leaf nitrogen content based on hyperspectral applying wavelet and wavelet packet analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yao; Zheng, Lihua; Li, Minzan; Deng, Xiaolei; Sun, Hong

    2012-11-01

    The visible and NIR spectral reflectance of apple leaves was measured with a spectrophotometer in the fruit-bearing, fruit-falling and fruit-maturing periods, respectively, and the nitrogen content of each sample was measured in the lab. The correlation between the nitrogen content of apple tree leaves and their hyperspectral data was analyzed. Then the low-frequency signal and the noise-reduced high-frequency signal were extracted using a wavelet packet decomposition algorithm. At the same time, the original spectral reflectance was denoised taking advantage of wavelet filtering technology. The principal component spectra were then collected after PCA (principal component analysis). The model built on the noise-reduced principal component spectra reached higher accuracy than the other three in the fruit-bearing period and the physiological fruit-maturing period. Their calibration R2 reached 0.9529 and 0.9501, and their validation R2 reached 0.7285 and 0.7303, respectively. In the fruit-falling period, the model based on the low-frequency principal component spectra reached the highest accuracy, with a calibration R2 of 0.9921 and a validation R2 of 0.6234. The results showed that applying the wavelet packet algorithm is an effective way to improve the prediction of apple tree nitrogen content from hyperspectral analysis.

  8. A low computational complexity algorithm for ECG signal compression.

    PubMed

    Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; López-Ferreras, Francisco; Bravo-Santos, Angel; Martínez-Muñoz, Damián

    2004-09-01

    In this work, a filter bank-based algorithm for electrocardiogram (ECG) signal compression is proposed. The new coder consists of three different stages. In the first one--the subband decomposition stage--we compare the performance of a nearly perfect reconstruction (N-PR) cosine-modulated filter bank with the wavelet packet (WP) technique. Both schemes use the same coding algorithm, thus permitting an effective comparison. The target of the comparison is the quality of the reconstructed signal, which must remain within predetermined accuracy limits. We employ the most widely used quality criterion for compressed ECG: the percentage root-mean-square difference (PRD). It is complemented by the maximum amplitude error (MAX). The tests have been done for the 12 principal cardiac leads, and the amount of compression is evaluated by means of the mean number of bits per sample (MBPS) and the compression ratio (CR). The implementation cost of both the filter bank and the WP technique has also been studied. The results show that the N-PR cosine-modulated filter bank method outperforms the WP technique in both quality and efficiency. PMID:15271283
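
    To make the PRD and CR figures of merit concrete, the sketch below runs a plain wavelet (Haar) transform coder on a synthetic quasi-ECG trace, quantizes the coefficients uniformly, and reports PRD together with a crude compression-ratio proxy (the fraction of retained nonzero coefficients). The filter choice, quantizer step, and signal are all assumptions, not the paper's N-PR filter bank.

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def analyze(x, levels=5):
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    return a, details

def synthesize(a, details):
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

# synthetic quasi-ECG trace: slow wave plus QRS-like spikes
n = 1024
t = np.arange(n)
ecg = 0.3 * np.sin(2 * np.pi * 3 * t / n)
for c in range(100, n, 200):
    ecg = ecg + np.exp(-((t - c) / 8.0) ** 2)

q = 0.05                                   # uniform quantizer step
a, details = analyze(ecg)
qa = np.round(a / q)
qds = [np.round(d / q) for d in details]
rec = synthesize(qa * q, [d * q for d in qds])

nonzero = np.count_nonzero(qa) + sum(int(np.count_nonzero(d)) for d in qds)
prd = 100.0 * np.sqrt(np.sum((ecg - rec) ** 2) / np.sum(ecg ** 2))
cr = n / nonzero                           # crude ratio: kept coefficients
```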

  9. Modeling Compressed Turbulence

    SciTech Connect

    Israel, Daniel M.

    2012-07-13

    From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help the development and calibration of models of species-mixing effects in compressed turbulence. The Cambon et al. re-scaling has been extended to buoyancy-driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

  10. Local compressibilities in crystals

    NASA Astrophysics Data System (ADS)

    Martín Pendás, A.; Costales, Aurora; Blanco, M. A.; Recio, J. M.; Luaña, Víctor

    2000-12-01

    An application of the atoms in molecules theory to the partitioning of static thermodynamic properties in condensed systems is presented. Attention is focused on the definition and the behavior of atomic compressibilities. Inverses of bulk moduli are found to be simple weighted averages of atomic compressibilities. Two kinds of systems are investigated as examples: four related oxide spinels and the alkali halide family. Our analyses show that the puzzling constancy of the bulk moduli of these spinels is a consequence of the value of the compressibility of an oxide ion. A functional dependence between ionic bulk moduli and ionic volume is also proposed.

  11. Illumination-tolerant face verification of low-bit-rate JPEG2000 wavelet images with advanced correlation filters for handheld devices

    NASA Astrophysics Data System (ADS)

    Wijaya, Surya Li; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-02-01

    Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available to run verifications on the devices themselves. One approach is to transmit the captured face images by use of the cell-phone connection and to run the verification on a remote station. However, owing to limitations in communication bandwidth, it may be necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, which is a wavelet-based compression engine used to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and are fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform by using face images captured under different illumination conditions and encoded with different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters by using illumination variations from the Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database. We also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
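
    The advanced correlation filter cited in this abstract (the minimum average correlation energy filter) has a closed-form frequency-domain synthesis that can be sketched directly in numpy. The random 16x16 training "faces" below are a placeholder assumption; the point of the sketch is the constraint that the correlation output at the origin equals the prescribed value for every training image.

```python
import numpy as np

def mace_filter(images, u=None):
    """Minimum Average Correlation Energy (MACE) filter, synthesized in
    the frequency domain: h = D^-1 X (X^+ D^-1 X)^-1 u, where the columns
    of X are the 2D FFTs of the training images and D is the diagonal
    average power spectrum (Mahalanobis, Vijaya Kumar, Casasent)."""
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)   # d x N
    d, N = X.shape
    u = np.ones(N, dtype=complex) if u is None else np.asarray(u, dtype=complex)
    D = np.mean(np.abs(X) ** 2, axis=1) + 1e-12       # diagonal of D
    DX = X / D[:, None]                                # D^-1 X
    A = X.conj().T @ DX                                # X^+ D^-1 X  (N x N)
    return DX @ np.linalg.solve(A, u)                  # frequency-domain filter

rng = np.random.default_rng(5)
train = [rng.random((16, 16)) for _ in range(3)]
h = mace_filter(train)

# by construction the correlation output at the origin is u_i = 1
# for every training image: x_i^+ h = u_i
peaks = [np.vdot(np.fft.fft2(im).ravel(), h) for im in train]
```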

  12. [Hyper spectral estimation method for soil alkali hydrolysable nitrogen content based on discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS)].

    PubMed

    Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling

    2013-11-01

    Taking the Qihe County in Shandong Province of East China as the study area, soil samples were collected from the field, and based on the hyperspectral reflectance measurement of the soil samples and the transformation with the first deviation, the spectra were denoised and compressed by discrete wavelet transform (DWT), the variables for the soil alkali hydrolysable nitrogen quantitative estimation models were selected by genetic algorithms (GA), and the estimation models for the soil alkali hydrolysable nitrogen content were built by using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm in combining with partial least squares (DWT-GA-PLS) could not only compress the spectrum variables and reduce the model variables, but also improve the quantitative estimation accuracy of soil alkali hydrolysable nitrogen content. Based on the 1-2 levels low frequency coefficients of discrete wavelet transform, and under the condition of large scale decrement of spectrum variables, the calibration models could achieve the higher or the same prediction accuracy as the soil full spectra. The model based on the second level low frequency coefficients had the highest precision, with the model predicting R2 being 0.85, the RMSE being 8.11 mg x kg(-1), and RPD being 2.53, indicating the effectiveness of DWT-GA-PLS method in estimating soil alkali hydrolysable nitrogen content.
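
    A schematic version of the DWT-GA-PLS pipeline is sketched below, with loud simplifications: Haar low-frequency coefficients stand in for the paper's DWT compression, a plain correlation ranking replaces the genetic algorithm, ordinary least squares replaces PLS, and the synthetic "spectra" and nitrogen values are invented for illustration.

```python
import numpy as np

def haar_approx(x, levels=2):
    """Low-frequency (approximation) coefficients after `levels` Haar steps."""
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
    return a

rng = np.random.default_rng(6)
n_samples, n_bands = 40, 256
bands = np.arange(n_bands)
response = np.exp(-((bands - 100) / 10.0) ** 2)    # absorption feature

y = rng.uniform(0.0, 1.0, n_samples)               # "nitrogen content"
spectra = (y[:, None] * response
           + 0.05 * rng.standard_normal((n_samples, n_bands)))

# DWT compression step: level-2 approximation, 256 bands -> 64 features
F = np.array([haar_approx(s, levels=2) for s in spectra])

# correlation-based ranking in place of the GA variable selection
corr = np.abs([np.corrcoef(F[:, j], y)[0, 1] for j in range(F.shape[1])])
keep = np.argsort(corr)[-5:]

# ordinary least squares in place of PLS regression
A = np.column_stack([F[:, keep], np.ones(n_samples)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```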

  13. Market turning points forecasting using wavelet analysis

    NASA Astrophysics Data System (ADS)

    Bai, Limiao; Yan, Sen; Zheng, Xiaolian; Chen, Ben M.

    2015-11-01

    Based on the system adaptation framework we previously proposed, a frequency domain based model is developed in this paper to forecast the major turning points of stock markets. This system adaptation framework has its internal model and adaptive filter to capture the slow and fast dynamics of the market, respectively. The residue of the internal model is found to contain rich information about the market cycles. In order to extract and restore its informative frequency components, we use wavelet multi-resolution analysis with time-varying parameters to decompose this internal residue. An empirical index is then proposed based on the recovered signals to forecast the market turning points. This index is successfully applied to US, UK and China markets, where all major turning points are well forecasted.
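
    The decompose-then-index idea can be illustrated very loosely: below, a crude Haar smoothing pass stands in for the paper's time-varying multi-resolution analysis, and a slope sign-change rule stands in for its empirical index; every name and parameter here is hypothetical.

```python
import numpy as np

def haar_smooth(x, levels):
    """Crude multi-resolution smoothing: repeated pairwise averaging,
    then piecewise-constant upsampling back to the original length."""
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / 2.0
    return np.repeat(a, 2 ** levels)

t = np.linspace(0.0, 4.0 * np.pi, 256)
# Toy "internal residue": a market cycle plus noise
residue = np.sin(t) + 0.3 * np.random.default_rng(1).normal(size=256)

smooth = haar_smooth(residue, levels=3)
# Candidate turning points: slope sign changes of the coarse band
turns = np.flatnonzero(np.diff(np.sign(np.diff(smooth))) != 0)
```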

  14. Wavelet neural networks: a practical guide.

    PubMed

    Alexandridis, Antonios K; Zapranis, Achilleas D

    2013-06-01

    Wavelet networks (WNs) are a new class of networks that have been used with great success in a wide range of applications. However, a generally accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework for applying WNs in various applications. The following subjects were thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods, and finally methods to construct confidence and prediction intervals. In addition, the complexity of each algorithm is discussed. Our proposed framework was tested in two simulated cases, in one chaotic time series described by the Mackey-Glass equation, and in three real datasets described by daily temperatures in Berlin, daily wind speeds in New York, and breast cancer classification. Our results show that the proposed algorithms produce stable and robust results, indicating that our proposed framework can be applied in various applications.

  15. Adaptive wavelet simulation of global ocean dynamics

    NASA Astrophysics Data System (ADS)

    Kevlahan, N. K.-R.; Dubos, T.; Aechtner, M.

    2015-07-01

    In order to easily enforce solid-wall boundary conditions in the presence of complex coastlines, we propose a new mass and energy conserving Brinkman penalization for the rotating shallow water equations. This penalization does not lead to higher wave speeds in the solid region. The error estimates for the penalization are derived analytically and verified numerically for the linearized one-dimensional equations. The penalization is implemented in a conservative dynamically adaptive wavelet method for the rotating shallow water equations on the sphere with bathymetry and coastline data from NOAA's ETOPO1 database. This code could form the dynamical core for a future global ocean model. The potential of the dynamically adaptive ocean model is illustrated by using it to simulate the 2004 Indonesian tsunami and wind-driven gyres.

  16. [An improved wavelet threshold algorithm for ECG denoising].

    PubMed

    Liu, Xiuling; Qiao, Lei; Yang, Jianli; Dong, Bin; Wang, Hongrui

    2014-06-01

    Due to the characteristics of the acquisition process and environmental factors, electrocardiogram (ECG) signals are usually contaminated by noise in the course of signal acquisition, so it is crucial for intelligent ECG analysis to eliminate the noise in ECG signals. On the basis of the wavelet transform, the threshold parameters were improved and a more appropriate threshold expression was proposed. The discrete wavelet coefficients were processed using the improved threshold parameters, accurate noise-free wavelet coefficients were obtained, and the signal was recovered through the inverse discrete wavelet transform, so that more of the original signal coefficients could be preserved. The MIT-BIH arrhythmia database was used to validate the method. Simulation results showed that the improved method achieves a better denoising effect than the traditional ones. PMID:25219225
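
    A minimal sketch of wavelet threshold denoising, assuming a hand-rolled one-level Haar transform, soft thresholding, and the standard universal threshold with a known noise level, rather than the paper's improved threshold expression:

```python
import numpy as np

def haar_analysis(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail band
    return a, d

def haar_synthesis(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(w, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0.0, 8.0 * np.pi, 512))   # toy ECG-like waveform
noisy = clean + 0.2 * rng.normal(size=512)

a, d = haar_analysis(noisy)
thr = 0.2 * np.sqrt(2.0 * np.log(d.size))   # universal threshold, sigma assumed known
denoised = haar_synthesis(a, soft(d, thr))
```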

  17. Doppler radar fall activity detection using the wavelet transform.

    PubMed

    Su, Bo Yu; Ho, K C; Rantz, Marilyn J; Skubic, Marjorie

    2015-03-01

    We propose in this paper the use of the wavelet transform (WT) to detect human falls using a ceiling-mounted Doppler range control radar. The radar senses any motions, from falls as well as nonfalls, due to the Doppler effect. The WT is very effective in distinguishing falls from other activities, making it a promising technique for radar fall detection in nonobtrusive in-home elder care applications. The proposed radar fall detector consists of two stages. The prescreen stage uses the coefficients of the wavelet decomposition at a given scale to identify the time locations in which fall activities may have occurred. The classification stage extracts the time-frequency content from the wavelet coefficients at many scales to form a feature vector for fall versus nonfall classification. The selection of different wavelet functions is examined to achieve better performance. Experimental results using data from laboratory and real in-home environments validate the promising and robust performance of the proposed detector.
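
    The prescreen stage can be sketched as a single-scale complex Morlet filtering pass that flags times where the coefficient magnitude is anomalously large; the wavelet parameters, threshold rule, and "fall" burst below are all hypothetical stand-ins, not the paper's settings:

```python
import numpy as np

def morlet(scale, fc=1.0, fs=100.0, width=4.0):
    """Sampled complex Morlet wavelet; analysis frequency is fc / scale."""
    t = np.arange(-width * scale, width * scale, 1.0 / fs)
    return np.exp(2j * np.pi * fc * t / scale) * np.exp(-t**2 / (2 * scale**2))

fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
# Brief 20 Hz burst between 4.0 s and 4.3 s stands in for a fall signature
sig = np.where((t > 4.0) & (t < 4.3), np.sin(2 * np.pi * 20 * t), 0.0)
sig = sig + 0.05 * np.random.default_rng(3).normal(size=t.size)

mag = np.abs(np.convolve(sig, morlet(scale=0.05, fs=fs), mode="same"))
candidates = t[mag > 3.0 * mag.mean()]  # times where a fall may have occurred
```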

  18. Detection of the electrocardiogram P-wave using wavelet analysis

    SciTech Connect

    Anant, K.S.; Rodrigue, G.H.; Dowla, F.U.

    1994-01-01

    Since wavelet analysis is an effective tool for analyzing transient signals, we studied its feature extraction and representation properties for events in electrocardiogram (EKG) data. Significant features of the EKG include the P-wave, the QRS complex, and the T-wave. For this paper the feature that we chose to focus on was the P-wave. Wavelet analysis was used as a pre-processor for a backpropagation neural network with conjugate gradient learning. The inputs to the neural network were the wavelet transforms of EKGs at a particular scale. The desired output was the location of the P-wave. The results were compared to results obtained without using the wavelet transform as a pre-processor.

  19. Detection of the electrocardiogram P-wave using wavelet analysis

    NASA Astrophysics Data System (ADS)

    Anant, Kanwaldip S.; Dowla, Farid U.; Rodrigue, Garry H.

    1994-03-01

    Since wavelet analysis is an effective tool for analyzing transient signals, we studied its feature extraction and representation properties for events in electrocardiogram (EKG) data. Significant features of the EKG include the P-wave, the QRS complex, and the T-wave. For this paper the feature that we chose to focus on was the P-wave. Wavelet analysis was used as a preprocessor for a backpropagation neural network with conjugate gradient learning. The inputs to the neural network were the wavelet transforms of EKGs at a particular scale. The desired output was the location of the P-wave. The results were compared to results obtained without using the wavelet transform as a preprocessor.

  1. Early detection of rogue waves by the wavelet transforms

    NASA Astrophysics Data System (ADS)

    Bayındır, Cihan

    2016-01-01

    We discuss the possible advantages of using the wavelet transform over the Fourier transform for the early detection of rogue waves. We show that the triangular wavelet spectra of rogue waves can be detected at early stages of their development in a chaotic wave field. Compared to the Fourier spectra, the wavelet spectra are capable of detecting not only the emergence of a rogue wave but also its possible spatial (or temporal) location. Because of this, the wavelet transform is also capable of predicting the characteristic distances between successive rogue waves. Therefore the simultaneous breaking of multiple successive rogue waves on ships or offshore structures can be predicted and avoided through smart design and operation.

  2. An adaptive morphological gradient lifting wavelet for detecting bearing defects

    NASA Astrophysics Data System (ADS)

    Li, Bing; Zhang, Pei-lin; Mi, Shuang-shan; Hu, Ren-xi; Liu, Dong-sheng

    2012-05-01

    This paper presents a novel wavelet decomposition scheme, named adaptive morphological gradient lifting wavelet (AMGLW), for detecting bearing defects. The adaptability of the AMGLW lies in the scheme's ability to select between two filters, namely the average filter and the morphological gradient filter, to update the approximation signal based on the local gradient of the analyzed signal. Both a simulated signal and vibration signals acquired from bearings are employed to evaluate and compare the proposed AMGLW scheme with the traditional linear wavelet transform (LWT) and another adaptive lifting wavelet (ALW) developed in the literature. Experimental results reveal that the AMGLW clearly outperforms the LWT and ALW for detecting bearing defects. The impulsive components can be enhanced and the noise suppressed simultaneously by the presented AMGLW scheme. Thus the fault characteristic frequencies of the bearing can be clearly identified. Furthermore, the AMGLW has an advantage over the LWT in computational efficiency, making it quite suitable for online condition monitoring of bearings and other rotating machinery.

  3. Tree-structured wavelet transform signature for classification of melanoma

    NASA Astrophysics Data System (ADS)

    Patwardhan, Sachin V.; Dhawan, Atam P.; Relue, Patricia A.

    2002-05-01

    The purpose of this work is to evaluate the use of a wavelet transform based tree structure in classifying skin lesion images into melanoma and dysplastic nevus based on spatial/frequency information. The classification is done using wavelet transform tree structure analysis. Development of the tree structure in the proposed method uses energy ratio thresholds obtained from a statistical analysis of the coefficients in the wavelet domain. The method is used to obtain a tree structure signature of melanoma and dysplastic nevus, which is then used to classify the dataset into the two classes. Images are classified by a semantic comparison of the wavelet transform tree structure signatures. Results show that the proposed method is effective and simple for classification based on spatial/frequency information, which also includes textural information.
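
    Sub-band energy ratios of the kind that drive the tree construction can be computed with a one-level 2D Haar split; the random "lesion" image and the single decomposition level are illustrative assumptions, not the paper's data or tree-growing rule:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar split into LL, LH, HL, HH sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

rng = np.random.default_rng(4)
lesion = rng.random((64, 64))  # stand-in for a skin lesion image

bands = haar2d(lesion)
total = sum(np.sum(b ** 2) for b in bands)
ratios = [float(np.sum(b ** 2) / total) for b in bands]  # energy ratio per sub-band
```

Comparing each ratio against a threshold would decide whether to split that sub-band further, yielding the tree-structure signature.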

  4. Complex Wavelet Transform of the Two-mode Quantum States

    NASA Astrophysics Data System (ADS)

    Song, Jun; Zhou, Jun; Yuan, Hao; He, Rui; Fan, Hong-Yi

    2016-08-01

    By employing the bipartite entangled state representation and the technique of integration within an ordered product of operators, the classical complex wavelet transform of a complex signal function can be recast as a matrix element of the squeezing-displacing operator U_2(μ, σ) between the mother wavelet vector ⟨ψ| and the two-mode quantum state vector |f⟩ to be transformed. ⟨ψ|U_2(μ, σ)|f⟩ can be considered the spectrum for analyzing the two-mode quantum state |f⟩. In this way, for some typical two-mode quantum states, such as the two-mode coherent state and the two-mode Fock state, we derive the complex wavelet transform spectrum and carry out the numerical calculation. This kind of wavelet-transform spectrum can be used to recognize quantum states.

  5. Denoising time-domain induced polarisation data using wavelet techniques

    NASA Astrophysics Data System (ADS)

    Deo, Ravin N.; Cull, James P.

    2016-05-01

    Time-domain induced polarisation (TDIP) methods are routinely used for near-surface evaluations in quasi-urban environments harbouring networks of buried civil infrastructure. A conventional technique for improving signal to noise ratio in such environments is by using analogue or digital low-pass filtering followed by stacking and rectification. However, this induces large distortions in the processed data. In this study, we have conducted the first application of wavelet based denoising techniques for processing raw TDIP data. Our investigation included laboratory and field measurements to better understand the advantages and limitations of this technique. It was found that distortions arising from conventional filtering can be significantly avoided with the use of wavelet based denoising techniques. With recent advances in full-waveform acquisition and analysis, incorporation of wavelet denoising techniques can further enhance surveying capabilities. In this work, we present the rationale for utilising wavelet denoising methods and discuss some important implications, which can positively influence TDIP methods.

  6. Ischemia detection by electrocardiogram in wavelet domain using entropy measure

    PubMed Central

    Rabbani, Hossein; Mahjoob, Mohammad Parsa; Farahabadi, Eiman; Farahabadi, Amin; Dehnavi, Alireza Mehri

    2011-01-01

    BACKGROUND: Ischemic heart disease is one of the common fatal diseases in advanced countries. Because signal perturbation in healthy people is less than signal perturbation in patients, an entropy measure can be used as an appropriate feature for ischemia detection. METHODS: Four entropy-based methods, comprising the direct use of the electrocardiogram (ECG) signal, wavelet sub-bands of the ECG signal, extracted ST segments, and signals reconstructed from the time-frequency features of ST segments in the wavelet domain, were investigated to distinguish between the ECG signals of healthy individuals and patients. We used the exercise treadmill test as a gold standard, with a sample of 40 patients who had ischemic signs based on the initial diagnosis of a medical practitioner. RESULTS: The suggested technique in the wavelet domain resulted in the highest discrepancy between healthy individuals and patients in comparison to the other methods. Specificity and sensitivity of this method were 95% and 94%, respectively. CONCLUSIONS: The method based on wavelet sub-bands outperformed the others. PMID:22973350
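
    A minimal sketch of an entropy feature on a wavelet sub-band, assuming Shannon entropy over normalized squared coefficients and a hand-rolled finest-scale Haar detail band (the paper's exact entropy definition and sub-band choice may differ):

```python
import numpy as np

def band_entropy(coeffs):
    """Shannon entropy (bits) of a coefficient band, treating normalized
    squared coefficients as a probability distribution."""
    e = np.asarray(coeffs, dtype=float) ** 2
    p = e / e.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Finest-scale Haar detail band of a toy ECG-like signal
rng = np.random.default_rng(5)
ecg_like = np.sin(np.linspace(0.0, 6.0 * np.pi, 256)) + 0.1 * rng.normal(size=256)
detail = (ecg_like[0::2] - ecg_like[1::2]) / np.sqrt(2.0)

h = band_entropy(detail)  # one entropy feature per sub-band
```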

  7. Wavelet Algorithm for Feature Identification and Image Analysis

    2005-10-01

    WVL is a set of Python scripts based on the algorithm described in "A novel 3D wavelet-based filter for visualizing features in noisy biological data," W. C. Moss et al., J. Microsc. 219, 43-49 (2005).

  8. Compressive Optical Image Encryption

    PubMed Central

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-01-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume. PMID:25992946

  9. Focus on Compression Stockings

    MedlinePlus

    Compression apparel is used to prevent or control edema. The post-thrombotic syndrome (PTS) is a complication ... This swelling is referred to as edema. If you have edema, compression therapy may be ...

  10. Muon cooling: longitudinal compression.

    PubMed

    Bao, Yu; Antognini, Aldo; Bertl, Wilhelm; Hildebrandt, Malte; Khaw, Kim Siang; Kirch, Klaus; Papa, Angela; Petitjean, Claude; Piegsa, Florian M; Ritt, Stefan; Sedlak, Kamil; Stoykov, Alexey; Taqqu, David

    2014-06-01

    A 10 MeV/c positive muon beam was stopped in helium gas of a few mbar in a magnetic field of 5 T. The muon "swarm" has been efficiently compressed from a length of 16 cm down to a few mm along the magnetic field axis (longitudinal compression) using electrostatic fields. The simulation reproduces the low energy interactions of slow muons in helium gas. Phase space compression occurs on the order of microseconds, compatible with the muon lifetime of 2 μs. This paves the way for the preparation of a high-quality low-energy muon beam, with an increase in phase space density relative to a standard surface muon beam of 10^7. The achievable phase space compression by using only the longitudinal stage presented here is of the order of 10^4.

  11. Compressive Optical Image Encryption

    NASA Astrophysics Data System (ADS)

    Li, Jun; Sheng Li, Jiao; Yang Pan, Yang; Li, Rong

    2015-05-01

    An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume.

  12. Muon Cooling: Longitudinal Compression

    NASA Astrophysics Data System (ADS)

    Bao, Yu; Antognini, Aldo; Bertl, Wilhelm; Hildebrandt, Malte; Khaw, Kim Siang; Kirch, Klaus; Papa, Angela; Petitjean, Claude; Piegsa, Florian M.; Ritt, Stefan; Sedlak, Kamil; Stoykov, Alexey; Taqqu, David

    2014-06-01

    A 10 MeV/c positive muon beam was stopped in helium gas of a few mbar in a magnetic field of 5 T. The muon "swarm" has been efficiently compressed from a length of 16 cm down to a few mm along the magnetic field axis (longitudinal compression) using electrostatic fields. The simulation reproduces the low energy interactions of slow muons in helium gas. Phase space compression occurs on the order of microseconds, compatible with the muon lifetime of 2 μs. This paves the way for the preparation of a high-quality low-energy muon beam, with an increase in phase space density relative to a standard surface muon beam of 10^7. The achievable phase space compression by using only the longitudinal stage presented here is of the order of 10^4.

  13. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  14. Wavelet-based approach to character skeleton.

    PubMed

    You, Xinge; Tang, Yuan Yan

    2007-05-01

    Character skeleton plays a significant role in character recognition. The strokes of a character may consist of two regions, i.e., singular and regular regions. The intersections and junctions of the strokes belong to the singular region, while the straight and smooth parts of the strokes are categorized as the regular region. Therefore, a skeletonization method requires two different processes to treat the skeletons in these two different regions. All traditional skeletonization algorithms are based on the symmetry analysis technique. The major problems of these methods are as follows. 1) The computation of the primary skeleton in the regular region is indirect, so its implementation is sophisticated and costly. 2) The extracted skeleton cannot be exactly located on the central line of the stroke. 3) The captured skeleton in the singular region may be distorted by artifacts and branches. To overcome these problems, a novel scheme for extracting the character skeleton based on the wavelet transform is presented in this paper. This scheme consists of two main steps, namely: a) extraction of the primary skeleton in the regular region and b) amendment processing of the primary skeletons and their connection in the singular region. A direct technique is used in the first step, where a new wavelet-based symmetry analysis is developed for finding the central line of the stroke directly. A novel method called smooth interpolation is designed in the second step, where a smooth operation is applied to the primary skeleton, and, thereafter, an interpolation compensation technique is proposed to link the primary skeleton, so that the skeleton in the singular region can be produced. Experiments are conducted and positive results are achieved, which show that the proposed skeletonization scheme is applicable not only to binary images but also to gray-level images, and the skeleton is robust against noise and affine transforms.

  15. Big data extraction with adaptive wavelet analysis (Presentation Video)

    NASA Astrophysics Data System (ADS)

    Qu, Hongya; Chen, Genda; Ni, Yiqing

    2015-04-01

    Nondestructive evaluation and sensing technology have been increasingly applied to characterize material properties and detect local damage in structures. More often than not, they generate images or data strings in which physical features are difficult to discern without novel data extraction techniques. In the literature, popular data analysis techniques include the Short-Time Fourier Transform, the Wavelet Transform, and the Hilbert Transform, chosen for time efficiency and adaptive recognition. In this study, a new data analysis technique is proposed and developed by introducing an adaptive central frequency for the continuous Morlet wavelet transform, so that both high frequency and time resolution can be maintained in a time-frequency window of interest. The new analysis technique is referred to as Adaptive Wavelet Analysis (AWA). This paper is organized in several sections. In the first section, the finite time-frequency resolution limitations of the traditional wavelet transform are introduced; such limitations greatly distort transformed signals whose frequency varies significantly with time. In the second section, the Short Time Wavelet Transform (STWT), similar to the Short Time Fourier Transform (STFT), is defined and developed to overcome this shortcoming of the traditional wavelet transform. In the third section, by utilizing the STWT and a time-variant central frequency of the Morlet wavelet, AWA adapts the time-frequency resolution requirement to the signal variation over time. Finally, the advantage of the proposed AWA is demonstrated in Section 4 with a ground penetrating radar (GPR) image from a bridge deck, an analytical chirp signal with a large sinusoidal frequency change over time, and the train-induced acceleration responses of the Tsing Ma Suspension Bridge in Hong Kong, China. The performance of the proposed AWA is compared with the STFT and the traditional wavelet transform.
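
    The adaptive-central-frequency idea can be sketched as evaluating a single Morlet coefficient whose center frequency tracks the signal's instantaneous frequency; the chirp, envelope width, and tracking rule below are illustrative assumptions, not the paper's AWA algorithm:

```python
import numpy as np

def morlet_coeff(sig, t0, fc, fs, sigma=0.1):
    """One CWT coefficient at time t0 with (possibly time-varying) center
    frequency fc; sigma is the Gaussian envelope width in seconds."""
    tau = (np.arange(sig.size) / fs) - t0
    w = np.exp(2j * np.pi * fc * tau) * np.exp(-tau**2 / (2 * sigma**2))
    return np.vdot(w, sig) / fs   # vdot conjugates the wavelet

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
chirp = np.sin(2 * np.pi * (10.0 * t + 10.0 * t**2))  # frequency sweeps 10 -> 50 Hz

# Adapt the center frequency to the known instantaneous frequency 10 + 20 t
c_adapted = abs(morlet_coeff(chirp, t0=1.0, fc=10.0 + 20.0 * 1.0, fs=fs))
c_fixed = abs(morlet_coeff(chirp, t0=1.0, fc=10.0, fs=fs))  # fixed fc misses the energy
```

Letting fc follow the signal keeps the analysis window matched to the local frequency content, which is the essence of the adaptive scheme.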

  16. Wavelet operational matrix method for solving the Riccati differential equation

    NASA Astrophysics Data System (ADS)

    Li, Yuanlu; Sun, Ning; Zheng, Bochao; Wang, Qi; Zhang, Yingchao

    2014-03-01

    A Haar wavelet operational matrix method (HWOMM) was derived to solve Riccati differential equations. As a result, the computation of the nonlinear term was simplified by using block pulse functions to expand the Haar wavelets. The proposed method can be used to solve not only the classical Riccati differential equations but also the fractional ones. The capability and simplicity of the proposed method were demonstrated by some examples and comparison with other methods.
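
    The Haar basis underlying an operational-matrix method can be built recursively; this sketch only constructs the orthonormal Haar matrix and checks that expansion and reconstruction round-trip (the operational matrix of integration and the Riccati solver itself are omitted):

```python
import numpy as np

def haar_matrix(m):
    """Orthonormal Haar matrix of size m x m (m must be a power of two)."""
    if m == 1:
        return np.array([[1.0]])
    h = haar_matrix(m // 2)
    top = np.kron(h, [1.0, 1.0])                 # scaling (average) rows
    bot = np.kron(np.eye(m // 2), [1.0, -1.0])   # wavelet (difference) rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

H8 = haar_matrix(8)
f = np.arange(8.0)      # samples of a function on a dyadic grid
coeffs = H8 @ f         # Haar expansion coefficients
```

Because the matrix is orthonormal, reconstruction is simply the transpose applied to the coefficients, which is what makes operational-matrix manipulations convenient.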

  17. Image compression technique

    DOEpatents

    Fu, C.Y.; Petrich, L.I.

    1997-03-25

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
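
    The filling step can be sketched with a plain single-grid Jacobi relaxation of Laplace's equation (a simple stand-in for the patent's multi-grid solver); the toy grid and boundary values are hypothetical:

```python
import numpy as np

def laplace_fill(grid, known, iters=2000):
    """Fill unknown pixels by iterating Laplace's equation (Jacobi sweeps);
    `known` marks pixels whose values are held fixed."""
    g = grid.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
                      np.roll(g, 1, 1) + np.roll(g, -1, 1))
        g = np.where(known, grid, avg)   # keep known pixels, relax the rest
    return g

# Toy 16x16 "filled edge array": known values only on top and bottom rows
grid = np.zeros((16, 16))
grid[0, :] = 1.0
known = np.zeros_like(grid, dtype=bool)
known[0, :] = known[-1, :] = True

filled = laplace_fill(grid, known)
# Interior relaxes toward a smooth vertical gradient between the fixed rows
```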

  18. Image compression technique

    DOEpatents

    Fu, Chi-Yung; Petrich, Loren I.

    1997-01-01

    An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.

  19. Alternative Compression Garments

    NASA Technical Reports Server (NTRS)

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.

    2011-01-01

    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  20. Performance of the Wavelet Decomposition on Massively Parallel Architectures

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; LeMoigne, Jacqueline; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Traditionally, Fourier Transforms have been utilized for performing signal analysis and representation. But although it is straightforward to reconstruct a signal from its Fourier transform, no local description of the signal is included in its Fourier representation. To alleviate this problem, Windowed Fourier transforms and then wavelet transforms have been introduced, and it has been proven that wavelets give a better localization than traditional Fourier transforms, as well as a better division of the time- or space-frequency plane than Windowed Fourier transforms. Because of these properties and after the development of several fast algorithms for computing the wavelet representation of any signal, in particular the Multi-Resolution Analysis (MRA) developed by Mallat, wavelet transforms have increasingly been applied to signal analysis problems, especially real-life problems, in which speed is critical. In this paper we present and compare efficient wavelet decomposition algorithms on different parallel architectures. We report and analyze experimental measurements, using NASA remotely sensed images. Results show that our algorithms achieve significant performance gains on current high performance parallel systems, and meet scientific applications and multimedia requirements. The extensive performance measurements collected over a number of high-performance computer systems have revealed important architectural characteristics of these systems, in relation to the processing demands of the wavelet decomposition of digital images.

  1. Analysis and removing noise from speech using wavelet transform

    NASA Astrophysics Data System (ADS)

    Tomala, Karel; Voznak, Miroslav; Partila, Pavol; Rezac, Filip; Safarik, Jakub

    2013-05-01

    The paper discusses the use of the Discrete Wavelet Transform (DWT) and the Stationary Wavelet Transform (SWT) in removing noise from voice samples and evaluates their impact on speech quality. One significant part of Quality of Service (QoS) in communication technology is speech quality assessment. However, this part is often overlooked, as telecommunication providers tend to focus on increasing network capacity, expanding the services offered, and promoting them in the market. Among the fundamental factors affecting the transmission properties of the communication chain is noise, either at the transmitter or the receiver side. The wavelet transform (WT) is a modern tool for signal processing, and one of its most significant application areas is suppressing noise in signals. To remove noise from the voice sample in our experiment, we used a reference segment of voice distorted by Gaussian white noise. An evaluation of the impact on speech quality was carried out with the intrusive objective algorithm Perceptual Evaluation of Speech Quality (PESQ). The DWT and SWT were applied to voice samples degraded by Gaussian white noise, and their effectiveness was then determined by means of the PESQ algorithm. The decisive criterion for determining the quality of a voice sample after noise removal was the Mean Opinion Score (MOS) obtained from PESQ. The contribution of this work lies in the evaluation of the efficiency of wavelet transforms in suppressing noise in voice samples.

  2. Instantaneous fault frequencies estimation in roller bearings via wavelet structures

    NASA Astrophysics Data System (ADS)

    Rodopoulos, Konstantinos I.; Antoniadis, Ioannis A.

    2016-11-01

    The main target of the current paper is the effective application of the method proposed in Antoniadis et al. (2014) [17] to roller bearings under variable speed. For this reason, a roller bearing model with slip and real data from a test rig have been used. The method extracts useful information from a complicated signal in which the overlap among the harmonics can reach 30%. According to the proposed method, a set of wavelet transforms of the signal is first obtained using a structure of Complex Shifted Morlet Wavelets. The center frequencies and bandwidths of the individual wavelets, as well as the number of wavelets used, are associated with the characteristic fault frequency and its harmonic components. In this way, a set of complex signals results in the time domain, equal in number to the wavelets used. The instantaneous frequencies of the signals are then estimated by applying an appropriate subspace algorithm (e.g., ESPRIT) to the entire set of resulting complex wavelet transforms, exploiting the subspace rotational invariance property of this set of complex signals. The iterative procedure yields accurate results from complicated signals, separating the fault-associated signal components. The spectrograms of the processed signals also confirm the ability to match excited areas with specific faults.
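
    The first step of the method, band-filtering with a complex shifted Morlet wavelet and reading an instantaneous frequency off the resulting complex signal, can be sketched as follows. Parameter values and function names are our assumptions, and we estimate the frequency by simple phase differencing rather than the subspace algorithm (ESPRIT) used in the paper:

```python
import numpy as np

def morlet_filter(signal, fs, fc, sigma=0.05):
    """Band-pass the signal with a complex shifted Morlet wavelet
    centred at fc (Hz); sigma sets the time-domain width in seconds."""
    t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
    wavelet = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * fc * t)
    wavelet /= np.abs(wavelet).sum()
    return np.convolve(signal, wavelet, mode="same")

def instantaneous_frequency(analytic, fs):
    """Instantaneous frequency from the phase derivative of a complex signal."""
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 97.0 * t)      # stand-in for one fault harmonic
z = morlet_filter(x, fs, fc=100.0)    # wavelet centred near the harmonic
f_est = instantaneous_frequency(z, fs)
mid = f_est[300:700]                  # discard filter edge effects
```

    Because the Morlet spectrum is a Gaussian centred at +fc, the filtered output is effectively analytic, so the phase slope tracks the true 97 Hz tone even though the wavelet was centred at 100 Hz.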

  3. Application of wavelet analysis in laser Doppler vibration signal denoising

    NASA Astrophysics Data System (ADS)

    Lan, Yu-fei; Xue, Hui-feng; Li, Xin-liang; Liu, Dan

    2010-10-01

    Numerous experiments show that external disturbances, excessive roughness of the measured surface, and other factors cause the vibration signal detected by the laser Doppler technique to contain complex information with a low SNR, so that the Doppler frequency shift cannot be measured and the Doppler phase cannot be demodulated. This paper first analyzes the laser Doppler signal model and its features in vibration testing, and then studies the three most commonly used wavelet denoising techniques: the modulus-maxima method, the spatial-correlation method, and the threshold method. We experiment with vibration signals and implement the three methods in MATLAB simulation. The results show that the modulus-maxima method has an advantage at low SNR for signals mixed with white noise and containing many singularities; the spatial-correlation method is better suited to laser Doppler vibration signals whose noise level is not very high, and has better edge-reconstruction capability; and the threshold method offers wide adaptability, computational efficiency, and good denoising performance. Specifically, in the threshold method we estimate the original noise variance by the spatial-correlation method, apply an adaptive threshold, and make certain practical amendments. Tests show that, compared with conventional threshold denoising, this method extracts the features of the laser Doppler vibration signal more effectively.
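
    For the noise-variance estimate that precedes thresholding, a common alternative to the spatial-correlation estimator used in the paper is the median-absolute-deviation rule applied to the finest detail coefficients, with the universal threshold derived from it. This is a standard textbook sketch, not the authors' exact procedure:

```python
import math

def mad_sigma(detail):
    """Robust noise std estimate from finest-scale detail coefficients:
    sigma = MAD / 0.6745 (0.6745 is the MAD of a unit Gaussian)."""
    s = sorted(detail)
    med = s[len(s) // 2]
    mad = sorted(abs(c - med) for c in detail)[len(detail) // 2]
    return mad / 0.6745

def universal_threshold(detail, n):
    """Donoho's universal threshold sigma * sqrt(2 ln n) for n samples."""
    return mad_sigma(detail) * math.sqrt(2 * math.log(n))
```

    The threshold grows only logarithmically with signal length, so it remains usable for long vibration records.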

  4. Wavelet Approach for Operational Gamma Spectral Peak Detection - Preliminary Assessment

    SciTech Connect

    2012-02-01

    Gamma spectroscopy for radionuclide identification typically involves locating spectral peaks and matching them with known nuclides in a knowledge base or database. Wavelet analysis, with its ability to fit localized features, offers the potential for automatic detection of spectral peaks. Past studies of wavelet technologies for gamma spectra analysis essentially focused on direct fitting of raw gamma spectra. Although most of those studies demonstrated the potential of peak detection using wavelets, they often failed to produce new benefits for operational adaptation to radiological surveys. This work presents a different approach, with the operational objective of detecting only the nuclides that do not exist in the environment (anomalous nuclides). With this objective, the raw-count spectrum collected by a detector is first converted to a count-rate spectrum, followed by background subtraction prior to wavelet analysis. The experimental results suggest that this preprocessing is independent of detector type and background radiation, and is capable of improving the peak detection rates achieved with wavelets. This process opens the door to practical adaptation of wavelet technologies in gamma spectral surveying devices.
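
    The preprocessing the abstract describes, converting raw counts to a count rate and subtracting the background before wavelet analysis, reduces to a few lines. The function name and the choice to clip negative residuals are our assumptions:

```python
def preprocess_spectrum(raw_counts, live_time, background_rate):
    """Convert per-channel raw counts to a count rate (counts / live_time),
    subtract the per-channel background rate, and clip negative residuals
    so only anomalous excesses remain for wavelet peak detection."""
    rate = [c / live_time for c in raw_counts]
    return [max(r - b, 0.0) for r, b in zip(rate, background_rate)]
```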

  5. Iterative PET Image Reconstruction Using Translation Invariant Wavelet Transform

    PubMed Central

    Zhou, Jian; Senhadji, Lotfi; Coatrieux, Jean-Louis; Luo, Limin

    2009-01-01

    The present work describes a Bayesian maximum a posteriori (MAP) method using a statistical multiscale wavelet prior model. Rather than using the orthogonal discrete wavelet transform (DWT), this prior is built on the translation invariant wavelet transform (TIWT). The statistical modeling of wavelet coefficients relies on the generalized Gaussian distribution. Image reconstruction is performed in spatial domain with a fast block sequential iteration algorithm. We study theoretically the TIWT MAP method by analyzing the Hessian of the prior function to provide some insights on noise and resolution properties of image reconstruction. We adapt the key concept of local shift invariance and explore how the TIWT MAP algorithm behaves with different scales. It is also shown that larger support wavelet filters do not offer better performance in contrast recovery studies. These theoretical developments are confirmed through simulation studies. The results show that the proposed method is more attractive than other MAP methods using either the conventional Gibbs prior or the DWT-based wavelet prior. PMID:21869846
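
    The key difference between the orthogonal DWT and the TIWT is translation invariance. It can be illustrated with cycle spinning, which emulates an invariant transform by averaging a shift-variant Haar denoiser over all circular shifts. This is an illustrative sketch of the invariance idea only, not the paper's MAP reconstruction algorithm:

```python
def haar_denoise(x, t):
    """One-level orthonormal Haar DWT, soft-threshold details, inverse."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2**0.5 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2**0.5 for i in range(len(x) // 2)]
    d = [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in d]
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / 2**0.5, (ai - di) / 2**0.5]
    return out

def ti_denoise(x, t):
    """Translation-invariant denoising by cycle spinning: denoise every
    circular shift, undo the shift, and average the results."""
    n = len(x)
    acc = [0.0] * n
    for s in range(n):
        shifted = x[s:] + x[:s]
        den = haar_denoise(shifted, t)
        for i in range(n):
            acc[(i + s) % n] += den[i] / n   # undo shift, accumulate average
    return acc
```

    Averaging over shifts removes the blocking artifacts that a single decimated DWT introduces at its fixed sampling grid, which is the same motivation the paper gives for preferring a TIWT prior.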

  6. [ECoG classification based on wavelet variance].

    PubMed

    Yan, Shiyu; Liu, Chong; Wang, Hong; Zhao, Haibin

    2013-06-01

    For a typical electrocorticogram (ECoG)-based brain-computer interface (BCI) system, in which the subject's task is to imagine movements of either the left small finger or the tongue, we proposed a feature extraction algorithm using wavelet variance. First, the definition and significance of wavelet variance were introduced and adopted as the feature, based on a discussion of the wavelet transform. The six channels with the most distinctive features were selected from 64 channels for analysis. The ECoG data were then decomposed using the db4 wavelet. The variances of the wavelet coefficients containing the Mu and Beta rhythms were taken as features, based on the ERD/ERS phenomenon. The features were classified linearly with a cross-validation algorithm. The results of off-line analysis showed that high classification accuracies of 90.24% and 93.77% were achieved for the training and test data sets, respectively; wavelet variance is simple and effective, and is suitable for feature extraction in BCI research. PMID:23865300
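
    The feature itself, the variance of the detail coefficients at each decomposition level, is simple to compute. A minimal sketch with Haar standing in for the paper's db4 wavelet (the function name is ours):

```python
def haar_detail_variances(x, levels):
    """Feature vector: variance of the detail coefficients at each level
    of a Haar multilevel decomposition of one channel."""
    feats = []
    for _ in range(levels):
        a = [(x[2 * i] + x[2 * i + 1]) / 2**0.5 for i in range(len(x) // 2)]
        d = [(x[2 * i] - x[2 * i + 1]) / 2**0.5 for i in range(len(x) // 2)]
        m = sum(d) / len(d)
        feats.append(sum((c - m) ** 2 for c in d) / len(d))
        x = a
    return feats
```

    Each channel contributes one variance per rhythm band, and the resulting low-dimensional feature vectors feed a linear classifier.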

  7. Wavelet subband coding of computer simulation output using the A++ array class library

    SciTech Connect

    Bradley, J.N.; Brislawn, C.M.; Quinlan, D.J.; Zhang, H.D.; Nuri, V.

    1995-07-01

    The goal of the project is to produce utility software for off-line compression of existing data, together with library code that can be called from a simulation program for on-line compression of data dumps as the simulation proceeds. Naturally, we would like the amount of CPU time required by the compression algorithm to be small in comparison to the requirements of typical simulation codes. We also want the algorithm to accommodate a wide variety of smooth, multidimensional data types. For these reasons, the subband vector quantization (VQ) approach employed previously has been replaced by a scalar quantization (SQ) strategy using a bank of almost-uniform scalar subband quantizers in a scheme similar to that used in the FBI fingerprint image compression standard. This eliminates the considerable computational burdens of training VQ codebooks for each new type of data and performing nearest-vector searches to encode the data. The comparison of subband VQ and SQ algorithms indicated that, in practice, there is relatively little additional gain from using vector as opposed to scalar quantization on DWT subbands, even when the source imagery is from a very homogeneous population, and our subjective experience with synthetic computer-generated data supports this stance. A careful study appears to be needed of the tradeoffs involved in selecting scalar vs. vector subband quantization, but such an analysis is beyond the scope of this paper. Our present work is focused on the problem of generating wavelet transform/scalar quantization (WSQ) implementations that can be ported easily between different hardware environments. This is an extremely important consideration given the great profusion of different high-performance computing architectures available, the high cost associated with learning how to map algorithms effectively onto a new architecture, and the rapid rate of evolution in the world of high-performance computing.
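
    Per subband, the scalar-quantization strategy reduces to a uniform quantizer. A minimal sketch (the FBI WSQ standard actually uses a dead-zone variant; round-to-nearest here is a simplification):

```python
def quantize(coeffs, step):
    """Uniform scalar quantizer: map each DWT coefficient to a bin index."""
    return [int(round(c / step)) for c in coeffs]

def dequantize(indices, step):
    """Reconstruct each coefficient at the center of its bin."""
    return [i * step for i in indices]
```

    Unlike VQ, there is no codebook to train and no nearest-vector search; encoding is one divide and one round per coefficient, which is why SQ suits on-line compression inside a running simulation.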

  8. Adaptive Wavelet-Based Direct Numerical Simulations of Rayleigh-Taylor Instability

    NASA Astrophysics Data System (ADS)

    Reckinger, Scott J.

    The compressible Rayleigh-Taylor instability (RTI) occurs when a fluid of low molar mass supports a fluid of higher molar mass against a gravity-like body force or in the presence of an accelerating front. Intrinsic to the problem are highly stratified background states, acoustic waves, and a wide range of physical scales. The objective of this thesis is to develop a specialized computational framework that addresses these challenges and to apply the advanced methodologies for direct numerical simulations of compressible RTI. Simulations are performed using the Parallel Adaptive Wavelet Collocation Method (PAWCM). Due to the physics-based adaptivity and direct error control of the method, PAWCM is ideal for resolving the wide range of scales present in RTI growth. Characteristics-based non-reflecting boundary conditions are developed for highly stratified systems to be used in conjunction with PAWCM. This combination allows for extremely long domains, which is necessary for observing the late time growth of compressible RTI. Initial conditions that minimize acoustic disturbances are also developed. The initialization is consistent with linear stability theory, where the background state consists of two diffusively mixed stratified fluids of differing molar masses. The compressibility effects on the departure from the linear growth, the onset of strong non-linear interactions, and the late-time behavior of the fluid structures are investigated. It is discovered that, for the thermal equilibrium case, the background stratification acts to suppress the instability growth when the molar mass difference is small. A reversal in this monotonic behavior is observed for large molar mass differences, where stratification enhances the bubble growth. Stratification also affects the vortex creation and the associated induced velocities. The enhancement and suppression of the RTI growth has important consequences for a detailed understanding of supernovae flame front

  9. Class of Fibonacci-Daubechies-4-Haar wavelets with applicability to ECG denoising

    NASA Astrophysics Data System (ADS)

    Smith, Christopher B.; Agaian, Sos S.

    2004-05-01

    The presented paper introduces a new class of wavelets that includes the simplest Haar wavelet (Daubechies-2) as well as the Daubechies-4 wavelet. This class is shown to have several properties similar to the Daubechies wavelets. In application, the new class of wavelets has been shown to effectively denoise ECG signals. In addition, the paper introduces a new polynomial soft threshold technique for denoising through wavelet shrinkage. The polynomial soft threshold technique is able to represent a wide class of polynomial behaviors, including classical soft thresholding.

  10. Statistical process control for AR(1) or non-Gaussian processes using wavelets coefficients

    NASA Astrophysics Data System (ADS)

    Cohen, A.; Tiplica, T.; Kobi, A.

    2015-11-01

    Autocorrelation and non-normality of process characteristic variables are two main difficulties that industrial engineers face when implementing control charting techniques. This paper presents new results regarding the probability distribution of wavelet coefficients. First, we highlight that wavelet coefficients can strongly decrease the autocorrelation of the original data and are approximately normally distributed, especially in the case of the Haar wavelet. We used an AR(1) model with positive autoregressive parameters to simulate autocorrelated data, and illustrative examples are presented to show the properties of wavelet coefficients. Second, the distributional parameters of the wavelet coefficients are derived, showing that wavelet coefficients exhibit statistical properties that are useful for SPC purposes.
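
    The decorrelation claim is easy to reproduce: simulate an AR(1) series with a positive autoregressive parameter and compare the lag-1 autocorrelation of the raw data with that of its Haar detail coefficients. The seed and parameter values below are our choices:

```python
import random

def ar1(n, phi, seed=0):
    """Simulate x_t = phi * x_{t-1} + e_t with standard Gaussian noise."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

def lag1_autocorr(x):
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def haar_details(x):
    """Finest-scale Haar detail coefficients of the series."""
    return [(x[2 * i] - x[2 * i + 1]) / 2**0.5 for i in range(len(x) // 2)]

x = ar1(4096, 0.8)
rho_x = lag1_autocorr(x)            # close to phi = 0.8
rho_d = lag1_autocorr(haar_details(x))  # near zero
```

    For AR(1) with phi = 0.8 the theoretical lag-1 autocorrelation of the Haar details is about -0.08, so conventional i.i.d.-based control limits become far more defensible on the detail coefficients than on the raw data.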

  11. Wavelet Picture Coding and Its Several Problems of the Application to the Interlace HDTV and the Ultra-High Definition Images

    NASA Astrophysics Data System (ADS)

    Kuge, Tetsuro

    2002-08-01

    A new image coding method utilizing the wavelet transform, JPEG2000, has been developed. In this report, we consider several types of visual distortion observed in moving HDTV pictures and ultra-high definition pictures compressed by wavelet transform coding, and describe measures to reduce them. So-called 'flicker artifacts' are visually decreased by visual weighting, and the mechanism is interpreted through Weber's law. The characteristic distortion named 'comb-tooth artifacts', caused by the combination of subband decomposition and the interlaced TV signal structure, is discussed, and a preprocessing method is proposed to counter it. The relation between the resolution levels of the subband decomposition and the distortion is investigated through coding experiments on ultra-high definition pictures.

  12. Determining building interior structures using compressive sensing

    NASA Astrophysics Data System (ADS)

    Lagunas, Eva; Amin, Moeness G.; Ahmad, Fauzia; Nájar, Montse

    2013-04-01

    We consider imaging of the building interior structures using compressive sensing (CS) with applications to through-the-wall imaging and urban sensing. We consider a monostatic synthetic aperture radar imaging system employing stepped frequency waveform. The proposed approach exploits prior information of building construction practices to form an appropriate sparse representation of the building interior layout. We devise a dictionary of possible wall locations, which is consistent with the fact that interior walls are typically parallel or perpendicular to the front wall. The dictionary accounts for the dominant normal angle reflections from exterior and interior walls for the monostatic imaging system. CS is applied to a reduced set of observations to recover the true positions of the walls. Additional information about interior walls can be obtained using a dictionary of possible corner reflectors, which is the response of the junction of two walls. Supporting results based on simulation and laboratory experiments are provided. It is shown that the proposed sparsifying basis outperforms the conventional through-the-wall CS model, the wavelet sparsifying basis, and the block sparse model for building interior layout detection.

  13. Length-Limited Data Transformation and Compression

    SciTech Connect

    Senecal, Joshua G.

    2005-09-01

    Scientific computation is used for the simulation of increasingly complex phenomena, and generates data sets of ever increasing size, often on the order of terabytes. All of this data creates difficulties. Several problems that have been identified are (1) the inability to effectively handle the massive amounts of data created, (2) the inability to get the data off the computer and into storage fast enough, and (3) the inability of a remote user to easily obtain a rendered image of the data resulting from a simulation run. This dissertation presents several techniques that were developed to address these issues. The first is a prototype bin coder based on variable-to-variable length codes. The codes utilized are created through a process of parse tree leaf merging, rather than the common practice of leaf extension. This coder is very fast and its compression efficiency is comparable to other state-of-the-art coders. The second contribution is the Piecewise-Linear Haar (PLHaar) transform, a reversible n-bit to n-bit wavelet-like transform. PLHaar is simple to implement, ideal for environments where transform coefficients must be kept the same size as the original data, and is the only n-bit to n-bit transform suitable for both lossy and lossless coding.
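
    For contrast with PLHaar, the classic reversible integer-to-integer Haar (the S-transform) looks like this. Note that its high-pass channel needs one extra bit of range, which is exactly the limitation PLHaar removes; this sketch is the S-transform, not PLHaar itself:

```python
def s_transform(a, b):
    """Reversible integer Haar (S-transform): floor-average and difference.
    The difference channel spans n+1 bits for n-bit inputs, unlike PLHaar."""
    low = (a + b) >> 1   # floor((a + b) / 2)
    high = a - b
    return low, high

def inverse_s_transform(low, high):
    """Exact inverse: works because (a+b) and (a-b) have the same parity."""
    a = low + ((high + 1) >> 1)  # low + ceil(high / 2)
    b = a - high
    return a, b
```

    The round trip is exact for all integers, which is what makes lossless coding possible on top of the transform.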

  14. Visually weighted reconstruction of compressive sensing MRI.

    PubMed

    Oh, Heeseok; Lee, Sanghoon

    2014-04-01

    Compressive sensing (CS) enables the reconstruction of a magnetic resonance (MR) image from undersampled data in k-space with relatively low distortion compared to the original image. In addition, CS allows the scan time to be significantly reduced. Along with a reduction in the computational overhead, we investigate an effective way to improve visual quality through the use of a weighted optimization algorithm for reconstruction after variable-density random undersampling in the phase encoding direction over k-space. In contrast to conventional magnetic resonance imaging (MRI) reconstruction methods, the visual weight, in particular over the region of interest (ROI), is investigated here for quality improvement. In addition, we employ a wavelet transform to analyze the reconstructed image in the space domain and fully utilize data sparsity over the spatial and frequency domains. The visual weight is constructed by reflecting the perceptual characteristics of the human visual system (HVS), and then applied to ℓ1 norm minimization, which gives priority to each coefficient during the reconstruction process. Using objective quality assessment metrics, it was found that an image reconstructed using the visual weight has higher local and global quality than those processed by conventional methods.
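
    The role of the visual weight in the ℓ1 step can be sketched as a per-coefficient soft threshold whose effective value shrinks wherever the weight (e.g. inside the ROI) is large. This is an illustrative operator; the exact weighting scheme and names are our assumptions, not the paper's formulation:

```python
def weighted_soft_threshold(coeffs, weights, t):
    """Soft-threshold each wavelet coefficient with a per-coefficient
    visual weight: a larger weight gives a smaller effective threshold,
    so visually important (ROI) coefficients are preserved preferentially."""
    out = []
    for c, w in zip(coeffs, weights):
        eff = t / w
        out.append(max(abs(c) - eff, 0.0) * (1 if c >= 0 else -1))
    return out
```

    Inside an iterative-shrinkage CS solver, this operator replaces the uniform shrinkage step, biasing the reconstruction toward fidelity where the HVS model says errors are most visible.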

  15. Transverse Compression of Tendons.

    PubMed

    Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B

    2016-04-01

    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon. PMID:26833218

  17. Fractional snow cover mapping from MODIS data using wavelet-artificial intelligence hybrid models

    NASA Astrophysics Data System (ADS)

    Moosavi, Vahid; Malekinezhad, Hossein; Shirmohammadi, Bagher

    2014-04-01

    This study was carried out to evaluate wavelet-artificial intelligence hybrid models for producing fractional snow cover maps. First, cloud cover was removed from MODIS data and cloud-free images were produced. SVM-based binary-classified ETM+ imagery was then used as reference maps in order to obtain training and test data for the sub-pixel classification models. ANN- and ANFIS-based modeling was performed using raw data (without wavelet-based preprocessing). In the next step, several mother wavelets and decomposition levels were used to decompose the original data into wavelet coefficients, and the decomposed data were used in further modeling. ANN, ANFIS, wavelet-ANN and wavelet-ANFIS models were compared to evaluate the effect of wavelet transformation on the ability of the artificial intelligence models. It was demonstrated that wavelet transformation as a preprocessing approach can significantly enhance the performance of ANN and ANFIS models. This study indicated an overall accuracy of 92.45% for the wavelet-ANFIS model, 86.13% for wavelet-ANN, 72.23% for ANFIS and 66.78% for ANN. In fact, hybrid wavelet-artificial intelligence models can accurately extract the characteristics of the original signals (i.e. the model inputs) by decomposing non-stationary, complex signals into several stationary, simpler signals. The positive effect of fuzzification as well as wavelet transformation in the wavelet-ANFIS model was also confirmed.

  18. Multiscale video compression using adaptive finite-state vector quantization

    NASA Astrophysics Data System (ADS)

    Kwon, Heesung; Venkatraman, Mahesh; Nasrabadi, Nasser M.

    1998-10-01

    We investigate the use of vector quantizers (VQs) with memory to encode image sequences. A multiscale video coding technique using adaptive finite-state vector quantization (FSVQ) is presented. In this technique, a small codebook (subcodebook) is generated for each input vector from a much larger codebook (supercodebook) by selecting (through a reordering procedure) a set of appropriate codevectors that best represents the input vector. The subcodebook therefore dynamically adapts to the characteristics of the motion-compensated frame difference signal. Several reordering procedures are introduced and their performance is evaluated. In adaptive FSVQ, two different methods, predefined thresholding and rate-distortion cost optimization, are used to decide between the supercodebook and the subcodebook for encoding a given input vector. A cache-based vector quantizer, a form of adaptive FSVQ, is also presented for very-low-bit-rate video coding. An efficient bit-allocation strategy using quadtree decomposition is used with the cache-based VQ to compress the video signal. The proposed video codec outperforms H.263 in terms of peak signal-to-noise ratio and perceptual quality at very low bit rates, ranging from 5 to 20 kbps. The picture quality of the proposed video codec is a significant improvement over previous codecs in terms of annoying distortions (blocking artifacts and mosquito noise), and is comparable to that of recently developed wavelet-based video codecs. This similarity in picture quality can be explained by the fact that the proposed video codec uses multiscale segmentation and subsequent variable-rate coding, which are conceptually similar to wavelet-based coding techniques. The simplicity of the encoder and decoder makes the proposed codec more suitable than wavelet-based coding for real-time, very-low-bit-rate video applications.
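
    The subcodebook-selection step can be sketched as picking, for each input vector, the k supercodebook entries nearest to it. This is one simple reordering criterion among the several the paper evaluates, and the names here are ours:

```python
def nearest_codevectors(supercodebook, x, k):
    """Form the adaptive subcodebook: the k supercodebook vectors closest
    (in squared Euclidean distance) to the input vector x."""
    def dist2(c):
        return sum((ci - xi) ** 2 for ci, xi in zip(c, x))
    return sorted(supercodebook, key=dist2)[:k]
```

    Because the decoder can derive the same subcodebook from previously decoded data, only the index within the small subcodebook needs to be transmitted, which is where the rate saving comes from.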

  19. Spectral Laplace-Beltrami wavelets with applications in medical images.

    PubMed

    Tan, Mingzhen; Qiu, Anqi

    2015-05-01

    The spectral graph wavelet transform (SGWT) has recently been developed to compute wavelet transforms of functions defined on non-Euclidean spaces such as graphs. By capitalizing on the established framework of the SGWT, we adopt a fast and efficient computation of a discretized Laplace-Beltrami (LB) operator that allows its extension from arbitrary graphs to differentiable and closed 2-D manifolds (smooth surfaces embedded in 3-D Euclidean space). This particular class of manifolds is widely used in bioimaging to characterize the morphology of cells, tissues, and organs. They are often discretized into triangular meshes, providing additional geometric information beyond the simple nodes and weighted connections of graphs. In comparison with the SGWT, the wavelet bases constructed with the LB operator are spatially localized with a more uniform "spread" with respect to the underlying curvature of the surface. In our experiments, we first use synthetic data to show that traditional applications of wavelets in smoothing and edge detection can be carried out using the wavelet bases constructed with the LB operator. Second, we show that the multi-resolution capabilities of the proposed framework are applicable to the classification of Alzheimer's patients versus normal subjects using hippocampal shapes. Wavelet transforms of the hippocampal shape deformations at finer resolutions registered higher sensitivity (96%) and specificity (90%) than the classification results obtained from direct use of the hippocampal shape deformations. In addition, the Laplace-Beltrami method consistently requires a smaller number of principal components (to retain a fixed variance) at higher resolutions than the binary and weighted graph Laplacians, demonstrating the potential of the wavelet bases in adapting to the geometry of the underlying manifold.

  20. Arctic Sea Ice Motion from Wavelet Analysis of Satellite Data

    NASA Technical Reports Server (NTRS)

    Liu, Antony K.; Zhao, Yunhe

    1998-01-01

    Wavelet analysis of DMSP SSM/I (Special Sensor Microwave/Imager) 85 GHz and 37 GHz radiance data, SMMR (Scanning Multichannel Microwave Radiometer) 37 GHz data, and NSCAT (NASA Scatterometer) 13.9 GHz data can be used to obtain daily sea ice drift information for both the northern and southern polar regions. The derived maps of sea ice drift provide both improved spatial coverage over the existing array of Arctic Ocean buoys and better temporal resolution than techniques utilizing data from satellite synthetic aperture radars (SAR). Examples of derived ice-drift maps in the Arctic illustrate large-scale circulation reversals within a period of a couple of weeks. Comparisons with ice displacements derived from buoys show good quantitative agreement. NSCAT 13.9 GHz data have also been used in the wavelet analysis to derive sea-ice drift. First, the 40° incidence-angle sigma-zero (surface roughness) daily map of the whole Arctic region, with 25 km pixel size, is constructed from the satellite's 600 km swath. Then a wavelet transform procedure similar to that used for the SSM/I data is applied. Various wavelet transform scales and thresholds have been tested. By overlaying, neighbor-filtering, and block-averaging the results of multiscale wavelet transforms, the final sea ice drift vectors become much smoother and more representative of the sea ice motion. This wavelet analysis procedure is robust and can make a major contribution to the understanding of ice motion over large areas at relatively high temporal resolution. The results of the wavelet analysis of SSM/I and NSCAT images can be merged with buoy data by data fusion techniques, and will help improve our current knowledge of sea ice drift and related processes through data assimilation into coupled ocean-ice numerical models.
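
    At its core, drift estimation matches a template between two images acquired a day apart. A brute-force integer-shift correlation sketch conveys the idea; the actual method matches multiscale wavelet-transform features rather than raw pixels, and all names here are ours:

```python
def estimate_drift(patch, search, max_shift):
    """Estimate a 2-D displacement by maximising the correlation of a
    template patch over integer shifts within a larger search window.
    `search` must be (rows + 2*max_shift) x (cols + 2*max_shift)."""
    rows, cols = len(patch), len(patch[0])
    best, best_score = (0, 0), float("-inf")
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            score = 0.0
            for r in range(rows):
                for c in range(cols):
                    score += patch[r][c] * search[r + max_shift + dr][c + max_shift + dc]
            if score > best_score:
                best_score, best = score, (dr, dc)
    return best
```

    Running the match on wavelet-transformed imagery, as the paper does, emphasizes ice-edge features at several scales, and averaging neighboring estimates smooths the resulting drift field.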

  1. Comparison study of EMG signals compression by methods transform using vector quantization, SPIHT and arithmetic coding.

    PubMed

    Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre

    2016-01-01

    In this article, we make a comparative study of a new compression approach using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, and then vector quantization with the DWT. The coding phase uses SPIHT (set partitioning in hierarchical trees) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root-mean-square difference, and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method.
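
    The three evaluation metrics have standard closed forms; a small sketch using the common definitions (which may differ in normalization details from the authors' exact formulas):

```python
import math

def prd(original, reconstructed):
    """Percentage root-mean-square difference: 100 * sqrt(sum(e^2)/sum(x^2))."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

def snr_db(original, reconstructed):
    """Signal-to-noise ratio in dB: 10 * log10(signal energy / error energy)."""
    num = sum(o ** 2 for o in original)
    den = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    return 10.0 * math.log10(num / den)

def compression_factor(original_bits, compressed_bits):
    """Ratio of original size to compressed size."""
    return original_bits / compressed_bits
```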

  2. The compressible mixing layer

    NASA Technical Reports Server (NTRS)

    Vandromme, Dany; Haminh, Hieu

    1991-01-01

    The capability of turbulence modeling to correctly handle natural unsteadiness appearing in compressible turbulent flows is investigated. Physical aspects linked to the unsteadiness problem and the role of various flow parameters are analyzed. It is found that unsteady turbulent flows can be simulated by dividing the motion into an 'organized' part, for which the equations of motion are solved, and a remaining 'incoherent' part represented by a turbulence model. Two-equation and second-order turbulence models can yield reasonable results. For a specific compressible unsteady turbulent flow, graphic presentations of different quantities may reveal complementary physical features. Strong compression zones are observed in the rapid flow regions, but shocklets do not yet occur.

  3. Isentropic Compression of Argon

    SciTech Connect

    H. Oona; J.C. Solem; L.R. Veeser, C.A. Ekdahl; P.J. Rodriquez; S.M. Younger; W. Lewis; W.D. Turley

    1997-08-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  4. TME12/400: Application Oriented Wavelet-based Coding of Volumetric Medical Data

    PubMed Central

    Menegaz, G; Grewe, L; Lozano, A; Thiran, J-Ph

    1999-01-01

    Introduction While medical data are increasingly acquired in a multidimensional space, in clinical practice they are mainly still analyzed as images. We propose a wavelet-based coding technique exploiting the full dimensionality of the data distribution while allowing recovery of a single image without any need to decode the whole volume. The proposed compression scheme is based on the Layered Zero Coding (LZC) method. Two modes are considered. In the progressive (PROG) mode, the volume is processed as a whole, while in the layer-per-layer (LPL) mode each layer of each sub-band is encoded independently. The three-dimensional extension of the Embedded Zerotree Wavelet (EZW) coder is used as reference for coding efficiency. All working modalities provide a fully embedded bit-stream allowing progressive-by-quality recovery of the encoded information. Methods The 3D DWT is performed mapping integers to integers, thus allowing lossless compression. Two different coding systems have been considered: EZW and LZC. LZC models the expected statistical dependencies among coefficients by defining conditional terms (contexts) which summarize the significance state of the samples belonging to a generalized neighborhood of the coefficient being encoded. Such terms are then used by a context adaptive arithmetic coder. The LPL mode has been designed to be able to independently decode any image of the dataset, and it is derived from the PROG mode by over-constraining the system. The sub-bands are quantized and encoded according to a sequence of uniform quantizers with decreasing step size. This preserves progressive capabilities when decoding both the whole volume and a single image. Results Performance has been evaluated on two datasets: DSR and ANGIO, an ophthalmologic angiographic sequence. For each mode the best context has been retained. Results show that the proposed system is competitive with EZW, and the PROG mode is the more efficient of the two. The main factors

  5. Compressive Shift Retrieval

    NASA Astrophysics Data System (ADS)

    Ohlsson, Henrik; Eldar, Yonina C.; Yang, Allen Y.; Sastry, S. Shankar

    2014-08-01

    The classical shift retrieval problem considers two signals in vector form that are related by a shift. The problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.
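
    The Fourier-coefficient observation lends itself to a small sketch (a plain DFT in pure Python, not the authors' compressive framework): a circular shift by m multiplies the k-th coefficient by a linear phase, so one nonzero coefficient determines the shift. The signal and the choice k = 1 are illustrative assumptions.

```python
import cmath
import math

def fourier_coeff(x, k):
    # k-th DFT coefficient of a real sequence.
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

N = 32
x = [math.sin(2 * math.pi * n / N) + 0.5 * math.cos(4 * math.pi * n / N)
     for n in range(N)]
shift = 5
y = [x[(n - shift) % N] for n in range(N)]   # circularly shifted copy

# A single Fourier coefficient suffices (mild condition: X_k != 0):
# Y_k = X_k * exp(-2*pi*i*k*shift/N), so the phase of the ratio encodes the shift.
k = 1
phase = cmath.phase(fourier_coeff(y, k) / fourier_coeff(x, k))
est = round((-phase * N) / (2 * math.pi * k)) % N   # -> 5
```

    For larger shifts one would combine several coefficients to resolve the phase ambiguity; here k = 1 covers the full range of shifts.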

  6. Isentropic compression of argon

    SciTech Connect

    Veeser, L.R.; Ekdahl, C.A.; Oona, H.

    1997-06-01

    The compression was done in an MC-1 flux compression (explosive) generator, in order to study the transition from an insulator to a conductor. Since conductivity signals were observed in all the experiments (except when the probe is removed), both the Teflon and the argon are becoming conductive. The conductivity could not be determined (the insulation properties of the Teflon are unknown), but it could be bounded as σ = 1/ρ ≤ 8 (Ω·cm)⁻¹, because when the Teflon breaks down, the dielectric constant is reduced. The Teflon insulator problem remains, and other ways to better insulate the probe, or to measure the conductivity without a probe, are being sought.

  7. Orbiting dynamic compression laboratory

    NASA Technical Reports Server (NTRS)

    Ahrens, T. J.; Vreeland, T., Jr.; Kasiraj, P.; Frisch, B.

    1984-01-01

    In order to examine the feasibility of carrying out dynamic compression experiments on a space station, the possibility of using explosive gun launchers is studied. The question of whether powders of a refractory metal (molybdenum) and a metallic glass could be well consolidated by dynamic compression is examined. In both cases extremely good bonds are obtained between grains of metal and metallic glass at 180 and 80 kb, respectively. In the case of molybdenum, when the oxide surface is reduced and the dynamic consolidation is carried out in vacuum, tensile tests of the recovered samples demonstrate favorable ultimate tensile strengths.

  8. S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation

    PubMed Central

    2014-01-01

    Background Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure, and because of the fast growth of digital technology, which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data, so efficient transmission and/or storage of S-EMG signals is an active research issue and is the aim of this work. Methods This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as cycling. The proposed algorithm is based on the discrete wavelet transform to perform spectral decomposition and decorrelation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematically decreasing spectral shape models, which assign a shorter digital word length to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shapes were implemented and compared: decreasing exponential, decreasing linear, decreasing square-root and rotated hyperbolic tangent. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented, along with comparisons with other encoders proposed in the scientific literature. Conclusions The decreasing bit allocation shape applied to the quantized wavelet coefficients, combined with arithmetic coding, results in an efficient procedure. The performance comparisons of the proposed S-EMG data
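
    A minimal sketch of the decreasing-exponential idea (parameter names and values below are illustrative assumptions, not the paper's fitted model): the subband index grows with frequency, so the allocated word length decays with it.

```python
import math

def exponential_bit_allocation(n_subbands, b_max, b_min, alpha):
    # Decreasing exponential spectral shape: low-frequency (high-energy)
    # subbands get long code words, high-frequency subbands get short ones.
    bits = []
    for k in range(n_subbands):
        b = b_min + (b_max - b_min) * math.exp(-alpha * k)
        bits.append(round(b))
    return bits

alloc = exponential_bit_allocation(n_subbands=6, b_max=12, b_min=2, alpha=0.5)
# -> [12, 8, 6, 4, 3, 3]: monotonically non-increasing word lengths
```

    The other shapes in the paper (linear, square-root, hyperbolic tangent) would only change the decay law inside the loop.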

  9. Wavelet neural networks: a practical guide.

    PubMed

    Alexandridis, Antonios K; Zapranis, Achilleas D

    2013-06-01

    Wavelet networks (WNs) are a new class of networks which have been used with great success in a wide range of applications. However, a generally accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework in order to apply WNs in various applications. The following subjects were thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods and finally methods to construct confidence and prediction intervals. In addition, the complexity of each algorithm is discussed. Our proposed framework was tested in two simulated cases, in one chaotic time series described by the Mackey-Glass equation and in three real datasets described by daily temperatures in Berlin, daily wind speeds in New York and breast cancer classification. Our results have shown that the proposed algorithms produce stable and robust results, indicating that our proposed framework can be applied in various applications. PMID:23411153
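
    As a toy illustration of the structure described (not the paper's full identification framework), a wavelet network output is a weighted sum of dilated and translated mother wavelets; the Mexican-hat wavelet and the parameter names below are assumptions.

```python
import math

def mexican_hat(u):
    # Mexican-hat (Ricker) mother wavelet, a common choice for wavelons.
    return (1.0 - u * u) * math.exp(-u * u / 2.0)

def wavelet_network(x, wavelons, bias):
    # Each hidden unit ("wavelon") has a weight w, dilation a, translation b.
    return bias + sum(w * mexican_hat((x - b) / a) for w, a, b in wavelons)

# Two-wavelon network evaluated at a point.
net = [(1.0, 1.0, 0.0), (0.5, 2.0, 1.0)]
y = wavelet_network(0.0, net, bias=0.1)
```

    Training would fit the weights, dilations and translations by gradient descent; the framework in the paper additionally covers initialization and model selection.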

  10. Smoke detection using GLCM, wavelet, and motion

    NASA Astrophysics Data System (ADS)

    Srisuwan, Teerasak; Ruchanurucks, Miti

    2014-01-01

    This paper presents a supervised smoke detection method that uses local and global features. The framework integrates and extends notions from many previous works to form a new comprehensive method. First, chrominance detection is used to screen areas suspected to be smoke. For these areas, local features are then extracted, among them the homogeneity of the GLCM and the energy of the wavelet transform. Then, a global motion feature of the smoke-colored areas is extracted using a space-time analysis scheme. Finally, these features are used to train a classifier; here we use a neural network, and an experiment compares the importance of each feature, revealing which of the features used by previous works are actually useful. The proposed method outperforms many current methods in terms of correctness, and it does so in reasonable computation time. It even has fewer limitations than conventional smoke sensors when used in open spaces. As expected, the best results are achieved by using all the mentioned features together, yielding a high true-positive rate and a low false-positive rate, and showing that our algorithm is robust for smoke detection.

  11. Wavelet dispersion and bright-spot detection

    SciTech Connect

    Luh, P.C.

    1989-03-01

    Since Ostrander in 1984 showed that the variations in reflectivity as a function of offset can be used for bright-spot validation, it has been known that these variations are often sensitive to errors in acquisition and processing of prestack seismic data. This presentation shows that even with perfect measurements, the bright-spot signal will be altered because the wavelet disperses as it propagates through an attenuating overburden. This in turn affects the slope term of the amplitude vs. offset (AVO) variation. For any single reflector, the dispersion of its propagated signal as a function of offset can be decomposed into a product of the vertical and horizontal dispersions. The vertical dispersion is the dispersion that a zero-offset arrival suffers through its propagation, and the horizontal dispersion is the additional dispersion that a nonzero-offset arrival must suffer further because of the residual normal moveout time. Numerical examples using a frequency-independent attenuation law show that even for a relatively high-Q or low-loss overburden, the horizontal dispersion alone can distort the AVO signal. This distortion cannot be taken care of by velocity analysis. A preferred method to overcome the dispersion effect would be to apply Q compensation on prestack seismic data before moveout correction.

  12. Optimization of dynamic measurement of receptor kinetics by wavelet denoising.

    PubMed

    Alpert, Nathaniel M; Reilhac, Anthonin; Chio, Tat C; Selesnick, Ivan

    2006-04-01

    The most important technical limitation affecting dynamic measurements with PET is low signal-to-noise ratio (SNR). Several reports have suggested that wavelet processing of receptor kinetic data in the human brain can improve the SNR of parametric images of binding potential (BP). However, it is difficult to fully assess these reports because objective standards have not been developed to measure the tradeoff between accuracy (e.g. degradation of resolution) and precision. This paper employs a realistic simulation method that includes all major elements affecting image formation. The simulation was used to derive an ensemble of dynamic PET ligand (11C-raclopride) experiments that was subjected to wavelet processing. A method for optimizing wavelet denoising is presented and used to analyze the simulated experiments. Using optimized wavelet denoising, SNR of the four-dimensional PET data increased by about a factor of two and SNR of three-dimensional BP maps increased by about a factor of 1.5. Analysis of the difference between the processed and unprocessed means for the 4D concentration data showed that more than 80% of voxels in the ensemble mean of the wavelet processed data deviated by less than 3%. These results show that a 1.5x increase in SNR can be achieved with little degradation of resolution. This corresponds to injecting about twice the radioactivity, a maneuver that is not possible in human studies without saturating the PET camera and/or exposing the subject to more than permitted radioactivity.
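
    The core operation behind such denoising can be sketched generically (a one-level Haar transform with soft thresholding on a synthetic signal; this is an illustration of wavelet denoising in general, not the authors' optimized pipeline for PET data).

```python
import math
import random

def haar_dwt(x):
    # One level of the orthonormal Haar DWT.
    a = [(x[2*i] + x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    x = []
    for ai, di in zip(a, d):
        x.extend([(ai + di) / math.sqrt(2), (ai - di) / math.sqrt(2)])
    return x

def soft_threshold(coeffs, t):
    # Shrink coefficients toward zero by t; coefficients below t are removed.
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in coeffs]

random.seed(42)
clean = [math.sin(0.1 * n) for n in range(256)]
noisy = [c + random.gauss(0.0, 0.3) for c in clean]

a, d = haar_dwt(noisy)
t = 0.3 * math.sqrt(2.0 * math.log(len(noisy)))   # "universal" threshold
denoised = haar_idwt(a, soft_threshold(d, t))

mse = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v)) / len(u)
```

    The detail band of a smooth signal is mostly noise, so thresholding it lowers the mean squared error; the paper's contribution is choosing such parameters optimally for kinetic data.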

  13. Spin-SILC: CMB polarisation component separation with spin wavelets

    NASA Astrophysics Data System (ADS)

    Rogers, Keir K.; Peiris, Hiranya V.; Leistedt, Boris; McEwen, Jason D.; Pontzen, Andrew

    2016-08-01

    We present Spin-SILC, a new foreground component separation method that accurately extracts the cosmic microwave background (CMB) polarisation E and B modes from raw multifrequency Stokes Q and U measurements of the microwave sky. Spin-SILC is an internal linear combination method that uses spin wavelets to analyse the spin-2 polarisation signal P = Q + iU. The wavelets are additionally directional (non-axisymmetric). This allows different morphologies of signals to be separated and therefore the cleaning algorithm is localised using an additional domain of information. The advantage of spin wavelets over standard scalar wavelets is to simultaneously and self-consistently probe scales and directions in the polarisation signal P = Q + iU and in the underlying E and B modes, therefore providing the ability to perform component separation and E-B decomposition concurrently for the first time. We test Spin-SILC on full-mission Planck simulations and data and show the capacity to correctly recover the underlying cosmological E and B modes. We also demonstrate a strong consistency of our CMB maps with those derived from existing component separation methods. Spin-SILC can be combined with the pseudo- and pure E-B spin wavelet estimators presented in a companion paper to reliably extract the cosmological signal in the presence of complicated sky cuts and noise. Therefore, it will provide a computationally-efficient method to accurately extract the CMB E and B modes for future polarisation experiments.

  14. Wavelet-based analysis of circadian behavioral rhythms.

    PubMed

    Leise, Tanya L

    2015-01-01

    The challenging problems presented by noisy biological oscillators have led to the development of a great variety of methods for accurately estimating rhythmic parameters such as period and amplitude. This chapter focuses on wavelet-based methods, which can be quite effective for assessing how rhythms change over time, particularly if time series are at least a week in length. These methods can offer alternative views to complement more traditional methods of evaluating behavioral records. The analytic wavelet transform can estimate the instantaneous period and amplitude, as well as the phase of the rhythm at each time point, while the discrete wavelet transform can extract the circadian component of activity and measure the relative strength of that circadian component compared to those in other frequency bands. Wavelet transforms do not require the removal of noise or trend, and can, in fact, be effective at removing noise and trend from oscillatory time series. The Fourier periodogram and spectrogram are reviewed, followed by descriptions of the analytic and discrete wavelet transforms. Examples illustrate application of each method and their prior use in chronobiology is surveyed. Issues such as edge effects, frequency leakage, and implications of the uncertainty principle are also addressed. PMID:25662453

  15. Nonlinear Frequency Compression

    PubMed Central

    Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-01-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality. PMID:23539261

  16. Compress Your Files

    ERIC Educational Resources Information Center

    Branzburg, Jeffrey

    2005-01-01

    File compression enables data to be squeezed together, greatly reducing file size. Why would someone want to do this? Reducing file size enables the sending and receiving of files over the Internet more quickly, the ability to store more files on the hard drive, and the ability to pack many related files into one archive (for example, all files…

  17. Compression: Rent or own

    SciTech Connect

    Cahill, C.

    1997-07-01

    Historically, the decision to purchase or rent compression has been set as a corporate philosophy. As companies decentralize, there seems to be a shift away from corporate philosophy toward individual profit centers. This has led the decision to rent versus purchase to be looked at on a regional or project-by-project basis.

  18. The Compressed Video Experience.

    ERIC Educational Resources Information Center

    Weber, John

    In the fall semester 1995, Southern Arkansas University-Magnolia (SAU-M) began a two-semester trial delivering college classes via a compressed video link between SAU-M and its sister school Southern Arkansas University Tech (SAU-T) in Camden. As soon as the University began broadcasting and receiving classes, it was discovered that using the…

  19. Wavelet image processing applied to optical and digital holography: past achievements and future challenges

    NASA Astrophysics Data System (ADS)

    Jones, Katharine J.

    2005-08-01

    The link between wavelets and optics goes back to the work of Dennis Gabor, who both invented holography and developed Gabor decompositions. Holography involves 3-D images; Gabor decompositions involve 1-D signals. Gabor decompositions are the predecessors of wavelets. Wavelet image processing of holography, both optical and digital, will be examined with respect to past achievements and future challenges.

  20. Digital audio signal filtration based on the dual-tree wavelet transform

    NASA Astrophysics Data System (ADS)

    Yaseen, A. S.; Pavlov, A. N.

    2015-07-01

    A new method of digital audio signal filtration based on the dual-tree wavelet transform is described. An adaptive approach is proposed that allows the parameters of the wavelet filter to be adjusted automatically for optimal performance. A significant improvement in the quality of signal filtration is demonstrated in comparison with traditionally used filters based on the discrete wavelet transform.

  1. Rotation and Scale Invariant Wavelet Feature for Content-Based Texture Image Retrieval.

    ERIC Educational Resources Information Center

    Lee, Moon-Chuen; Pun, Chi-Man

    2003-01-01

    Introduces a rotation and scale invariant log-polar wavelet texture feature for image retrieval. The underlying feature extraction process involves a log-polar transform followed by an adaptive row shift invariant wavelet packet transform. Experimental results show that this rotation and scale invariant wavelet feature is quite effective for image…

  2. Low-memory-usage image coding with line-based wavelet transform

    NASA Astrophysics Data System (ADS)

    Ye, Linning; Guo, Jiangling; Nutter, Brian; Mitra, Sunanda

    2011-02-01

    When compared to the traditional row-column wavelet transform, the line-based wavelet transform can achieve significant memory savings. However, the design of an image codec using the line-based wavelet transform is an intricate task because of the irregular order in which the wavelet coefficients are generated. The independent block coding feature of JPEG2000 makes it work effectively with the line-based wavelet transform. However, wavelet tree-based image codecs, such as set partitioning in hierarchical trees, realize no significant memory advantage from the line-based wavelet transform because many wavelet coefficients must be buffered before the coding starts. In this paper, the line-based wavelet transform was utilized to facilitate backward coding of wavelet trees (BCWT). Although the BCWT algorithm is a wavelet tree-based algorithm, its coding order differs from that of the traditional wavelet tree-based algorithms, which allows the proposed line-based image codec to become more memory efficient than other line-based image codecs, including line-based JPEG2000, while still offering comparable rate distortion performance and much lower system complexity.

  3. Assessment of glucose metabolism from the projections using the wavelet technique in small animal pet imaging.

    PubMed

    Arhjoul, Lahcen; Bentourkia, M'hamed

    2007-04-01

    The dynamic positron emission tomography (PET) images are usually modeled to extract the physiological parameters. However, to avoid reconstruction of the dynamic sequence of images with subjective data filtering, it is advantageous to apply the kinetic modeling in the projection space and to reconstruct single parametric image slices. Using the advantage of the wavelets to compress the data and to filter the noise in the sinogram, we applied the graphical analysis method (Patlak) to generate a single parametric sinogram (WAV-SINO) from PET data acquired in seven normal rats measured with fluorodeoxyglucose (FDG) in the heart. The same data set was analysed with the graphical method in the spatial domain from the sinograms (USUAL-SINO), and also from images reconstructed with non-filtered backprojection (USUAL-nFBP) and filtered backprojection (USUAL-FBP). The myocardial metabolic rates for glucose (MMRG) obtained with USUAL-nFBP, USUAL-FBP, USUAL-SINO and WAV-SINO were found to be, respectively, 7.54, 6.75, 6.52 and 6.98 micromol/100g/min. While the variance with respect to USUAL-FBP was about 142% for USUAL-nFBP, 99.6% for USUAL-SINO and 101.9% for WAV-SINO, the spatial resolution as assessed from the profiles through the myocardial walls of the reconstructed images was 112% for USUAL-FBP and 105% for WAV-SINO relative to the high resolution USUAL-nFBP. The WAV-SINO parametric images showed slightly better visual quality than those obtained from the spatial domain. Finally, the wavelet filtering technique made it possible to reduce the computing time, the storage space and, particularly, the variance in the MMRG parametric images while preserving the spatial resolution.
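
    The Patlak graphical step itself is simple to sketch (the synthetic curves and parameter values below are illustrative assumptions; the paper applies the method to sinogram data): after equilibration, the plot of Ct/Cp against (integral of Cp)/Cp becomes linear with slope Ki.

```python
import math

def patlak_ki(cp, ct, dt):
    # Patlak plot: y = Ct/Cp versus x = (integral of Cp)/Cp; late slope = Ki.
    xs, ys, run = [], [], 0.0
    for c_p, c_t in zip(cp, ct):
        run += c_p * dt
        xs.append(run / c_p)
        ys.append(c_t / c_p)
    n = len(xs) // 2                       # keep the late, linear portion
    xs, ys = xs[-n:], ys[-n:]
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Synthetic plasma input and an irreversible-uptake tissue curve.
dt, Ki, V = 1.0, 0.03, 0.5
cp = [math.exp(-0.05 * t) + 0.2 for t in range(60)]
run, ct = 0.0, []
for c in cp:
    run += c * dt
    ct.append(Ki * run + V * c)

ki_est = patlak_ki(cp, ct, dt)   # recovers Ki = 0.03
```

    Because the tissue curve here follows the Patlak model exactly, the regression recovers Ki to machine precision; real data add noise, which is where the wavelet filtering pays off.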

  4. Automated Diagnosis of Mammogram Images of Breast Cancer Using Discrete Wavelet Transform and Spherical Wavelet Transform Features

    PubMed Central

    Ganesan, Karthikeyan; Acharya, U. Rajendra; Chua, Chua Kuang; Min, Lim Choo; Abraham, Thomas K.

    2014-01-01

    Mammograms are one of the most widely used techniques for preliminary screening of breast cancers. There is great demand for early detection and diagnosis of breast cancer using mammograms. Texture based feature extraction techniques are widely used for mammographic image analysis. Specifically, wavelets are a popular choice for texture analysis of these images. Though discrete wavelets have been used extensively for this purpose, spherical wavelets have rarely been used for Computer-Aided Diagnosis (CAD) of breast cancer using mammograms. In this work, a comparison of the performance of Discrete Wavelet Transform (DWT) and Spherical Wavelet Transform (SWT) features, based on the classification results for normal, benign and malignant stages, was carried out. Classification was performed using Linear Discriminant Classifier (LDC), Quadratic Discriminant Classifier (QDC), Nearest Mean Classifier (NMC), Support Vector Machines (SVM) and Parzen Classifier (ParzenC). We obtained a maximum classification accuracy of 81.73% for DWT and 88.80% for SWT features using the SVM classifier. PMID:24000991

  5. Automated diagnosis of mammogram images of breast cancer using discrete wavelet transform and spherical wavelet transform features: a comparative study.

    PubMed

    Ganesan, Karthikeyan; Acharya, U Rajendra; Chua, Chua Kuang; Min, Lim Choo; Abraham, Thomas K

    2014-12-01

    Mammograms are one of the most widely used techniques for preliminary screening of breast cancers. There is great demand for early detection and diagnosis of breast cancer using mammograms. Texture based feature extraction techniques are widely used for mammographic image analysis. Specifically, wavelets are a popular choice for texture analysis of these images. Though discrete wavelets have been used extensively for this purpose, spherical wavelets have rarely been used for Computer-Aided Diagnosis (CAD) of breast cancer using mammograms. In this work, a comparison of the performance of Discrete Wavelet Transform (DWT) and Spherical Wavelet Transform (SWT) features, based on the classification results for normal, benign and malignant stages, was carried out. Classification was performed using Linear Discriminant Classifier (LDC), Quadratic Discriminant Classifier (QDC), Nearest Mean Classifier (NMC), Support Vector Machines (SVM) and Parzen Classifier (ParzenC). We obtained a maximum classification accuracy of 81.73% for DWT and 88.80% for SWT features using the SVM classifier. PMID:24000991

  6. ECG Signal Analysis and Arrhythmia Detection using Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Kaur, Inderbir; Rajni, Rajni; Marwaha, Anupma

    2016-06-01

    Electrocardiogram (ECG) is used to record the electrical activity of the heart. The ECG signal, being non-stationary in nature, makes analysis and interpretation of the signal difficult. Hence accurate analysis of the ECG signal with a powerful tool like the discrete wavelet transform (DWT) becomes imperative. In this paper, the ECG signal is denoised to remove artifacts and analyzed using the wavelet transform to detect the QRS complex and arrhythmia. This work is implemented in MATLAB software for the MIT/BIH Arrhythmia database and yields a sensitivity of 99.85%, positive predictivity of 99.92% and a detection error rate of 0.221% with the wavelet transform. It is also inferred that the DWT outperforms the principal component analysis technique in detection of the ECG signal.
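
    A minimal sketch of the underlying idea (a synthetic trace and a fixed threshold, not the paper's MIT/BIH pipeline): sharp QRS-like deflections dominate the first-level Haar detail coefficients, while the slow baseline does not.

```python
import math

# Synthetic trace: slow baseline plus sharp "R peaks" at known positions.
x = [0.1 * math.sin(2 * math.pi * n / 200) for n in range(600)]
for p in (100, 300, 500):
    x[p] += 1.0

# First-level Haar detail coefficients respond to abrupt transitions.
d = [(x[2*i] - x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]

threshold = 0.3                          # illustrative fixed threshold
peaks = [2 * i for i, v in enumerate(d) if abs(v) > threshold]
# -> [100, 300, 500]
```

    Real QRS detectors use several decomposition levels and adaptive thresholds, but the principle is the same: the QRS complex concentrates its energy in the detail bands.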

  7. Coherent vorticity extraction in turbulent channel flow using anisotropic wavelets

    NASA Astrophysics Data System (ADS)

    Yoshimatsu, Katsunori; Sakurai, Teluo; Schneider, Kai; Farge, Marie; Morishita, Koji; Ishihara, Takashi

    2014-11-01

    We examine the role of coherent vorticity in a turbulent channel flow. DNS data computed at a friction-velocity based Reynolds number of 320 are analyzed. The vorticity is decomposed using three-dimensional anisotropic orthogonal wavelets. Thresholding of the wavelet coefficients allows extraction of the coherent vorticity, corresponding to a few strong wavelet coefficients. It retains the vortex tubes of the turbulent flow. Turbulent statistics, e.g., energy, enstrophy and energy spectra, are close to those of the total flow. The nonlinear energy budgets are also found to be well preserved. The remaining incoherent part, represented by the large majority of the weak coefficients, corresponds to a structureless, i.e., noise-like background flow.
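
    The extraction step can be caricatured in a few lines (synthetic coefficients rather than DNS data, and an arbitrary threshold): a handful of strong coefficients carries nearly all the energy, while the many weak ones form the incoherent remainder.

```python
import random

random.seed(1)
# Synthetic wavelet coefficients: weak noise plus a few strong "coherent" ones.
coeffs = [random.gauss(0.0, 0.1) for _ in range(1000)]
for i in range(0, 1000, 100):            # 10 strong coefficients
    coeffs[i] += random.choice([-5.0, 5.0])

def split_coherent(c, threshold):
    # Keep strong coefficients as the coherent part; the rest is incoherent.
    coherent = [v if abs(v) >= threshold else 0.0 for v in c]
    incoherent = [v if abs(v) < threshold else 0.0 for v in c]
    return coherent, incoherent

coherent, incoherent = split_coherent(coeffs, threshold=1.0)

def energy(c):
    return sum(v * v for v in c)

retained = energy(coherent) / energy(coeffs)   # close to 1 with few coefficients
n_strong = sum(1 for v in coherent if v != 0.0)
```

    In the paper the threshold is chosen from the coefficient statistics rather than fixed by hand, and the decomposition is an orthogonal 3D anisotropic wavelet transform of the vorticity field.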

  8. Content-based image classification with circular harmonic wavelets

    NASA Astrophysics Data System (ADS)

    Jacovitti, Giovanni; Neri, Alessandro

    1998-07-01

    Classification of an image on the basis of contained patterns is considered in the context of detection and estimation theory. To simplify mathematical derivations, image and reference patterns are represented on a complex support. This allows the four positional parameters to be converted into two complex numbers: a complex displacement and a complex scale factor. The latter represents isotropic dilations with its magnitude, and rotations with its phase. In this context, evaluation of the likelihood function under an additive Gaussian noise assumption relates the basic template matching strategy to wavelet theory. It is shown that using circular harmonic wavelets simplifies the problem from a computational viewpoint. A general purpose pattern detection/estimation scheme is introduced by decomposing the images on an orthogonal basis formed by complex Laguerre-Gauss harmonic wavelets.

  9. Detection of Orthoimage Mosaicking Seamlines by Means of Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Pyka, K.

    2016-06-01

    The detection of orthoimage mosaicking seamlines by means of wavelet transform was examined. Radiometric alignment was omitted, giving priority to the issue of seamlines which bypass locations where there is a parallax between orthoimages. The importance of this issue is particularly relevant for images with very high resolution. In order to create a barrier image between orthoimages, the redundant wavelet transform variant known as MODWT-MRA was used. While more computationally complex than the frequently used DWT, it enables very good multiresolution edge detection. An IT prototype was developed on the basis of the described concept, and several cases of seamline detection were tested on the basis of data with a resolution of 10 cm to 1 m. The correct seamline location was obtained for each test case. This result opens the door to future expansion of the radiometric alignment method, which is also based on wavelets.

  10. Dual tree fractional quaternion wavelet transform for disparity estimation.

    PubMed

    Kumar, Sanoj; Kumar, Sanjeev; Sukavanam, Nagarajan; Raman, Balasubramanian

    2014-03-01

    This paper proposes a novel phase based approach for computing disparity as the optical flow from a given pair of consecutive images. A new dual tree fractional quaternion wavelet transform (FrQWT) is proposed by defining the 2D Fourier spectrum up to a single quadrant. In the proposed FrQWT, each quaternion wavelet consists of a real part (a real DWT wavelet) and three imaginary parts that are organized according to the quaternion algebra. The first two FrQWT phases encode the shifts of image features in the absolute horizontal and vertical coordinate system, while the third phase carries the texture information. The FrQWT allows a multi-scale framework for calculating and adjusting local disparities and executing phase unwrapping from coarse to fine scales with linear computational efficiency. PMID:24388356

  11. Hydrologic regionalization using wavelet-based multiscale entropy method

    NASA Astrophysics Data System (ADS)

    Agarwal, A.; Maheswaran, R.; Sehgal, V.; Khosa, R.; Sivakumar, B.; Bernhofer, C.

    2016-07-01

Catchment regionalization is an important step in estimating hydrologic parameters of ungauged basins. This paper proposes a multiscale entropy method using wavelet transform and a k-means based hybrid approach for clustering of hydrologic catchments. Multi-resolution wavelet transform of a time series reveals structure, which is often obscured in streamflow records, by permitting gross and fine features of a signal to be separated. Wavelet-based Multiscale Entropy (WME) is a measure of the randomness of the given time series at different timescales. In this study, streamflow records observed during 1951-2002 at 530 selected catchments throughout the United States are used to test the proposed regionalization framework. Further, based on the pattern of entropy across multiple scales, each cluster is given an entropy signature that provides an approximation of the entropy pattern of the streamflow data in each cluster. The tests for homogeneity reveal that the proposed approach works very well in regionalization.
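The pipeline the abstract describes (wavelet decomposition, a per-scale entropy signature, then k-means) can be sketched with a plain Haar DWT and a tiny Lloyd's algorithm. This is a minimal toy, assuming synthetic "flashy" versus "persistent" flow series and a Haar wavelet in place of whatever wavelet the paper uses; all function names are illustrative.

```python
import numpy as np

def haar_dwt_details(x, levels=4):
    """Decimated Haar DWT detail coefficients (stand-in for the paper's wavelet)."""
    a, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return details

def entropy_signature(x, levels=4):
    """Energy fraction per scale and its Shannon entropy, a WME-style feature."""
    e = np.array([np.sum(d ** 2) for d in haar_dwt_details(x, levels)])
    p = e / e.sum()
    return p, -np.sum(p * np.log(p))

def kmeans2(X, iters=50, seed=1):
    """Tiny Lloyd's algorithm with k = 2 (the paper uses a k-means hybrid)."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), 2, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        for j in (0, 1):
            if np.any(lab == j):
                c[j] = X[lab == j].mean(axis=0)
    return lab

rng = np.random.default_rng(0)
n = 512
noisy_flows = [rng.normal(size=n) for _ in range(6)]               # short-memory proxies
smooth_flows = [np.cumsum(rng.normal(size=n)) for _ in range(6)]   # persistent proxies

noisy_p = np.mean([entropy_signature(x)[0] for x in noisy_flows], axis=0)
smooth_p = np.mean([entropy_signature(x)[0] for x in smooth_flows], axis=0)

X = np.array([entropy_signature(x)[0] for x in noisy_flows + smooth_flows])
labels = kmeans2(X)   # the two synthetic "regions" should separate
```

White-noise series concentrate energy at the finest scale, while persistent series shift it to coarse scales, so the two signature families form well-separated clusters.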

  12. Long memory analysis by using maximal overlapping discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Shafie, Nur Amalina binti; Ismail, Mohd Tahir; Isa, Zaidi

    2015-05-01

A long memory process exhibits slow asymptotic decay of the autocorrelation, equivalently a spectral density that diverges near zero frequency. The main objective of this paper is to carry out a long memory analysis using the Maximal Overlapping Discrete Wavelet Transform (MODWT) based on wavelet variance. In doing so, the stock markets of Malaysia, China, Singapore, Japan and the United States of America are used, and the risks of long term and short term investment are also examined. The MODWT can be analyzed in the time domain and frequency domain simultaneously, decomposing the wavelet variance across different scales without losing any information. All countries under study are found to exhibit long memory. The subprime mortgage crisis that occurred in the United States of America in 2007 may have affected its major trading partners. Short term investment is found to be more risky than long term investment.
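The scale-by-scale wavelet variance that underlies this kind of analysis can be sketched in a few lines. The sketch below uses a decimated Haar DWT rather than the MODWT, and synthetic series rather than stock returns; the slope of log2(variance) against level is the scaling signature that distinguishes short- from long-memory behavior.

```python
import numpy as np

def haar_detail_variances(x, levels=4):
    """Variance of Haar DWT detail coefficients at each scale."""
    a, var = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        var.append(d.var())
    return np.array(var)

rng = np.random.default_rng(7)
white = rng.normal(size=4096)             # short memory: flat wavelet variance
walk = np.cumsum(rng.normal(size=4096))   # strongly persistent: variance grows with scale

v_white = haar_detail_variances(white)
v_walk = haar_detail_variances(walk)
# Slope of log2(variance) vs. level estimates the memory/scaling exponent.
slope_walk = np.polyfit(np.arange(4), np.log2(v_walk), 1)[0]
```

For white noise the detail variances stay roughly constant across levels, while for the random walk they grow by roughly a factor of four per level, which is what a wavelet-variance long-memory diagnostic keys on.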

  13. Wavelet analysis of MR functional data from the cerebellum

    NASA Astrophysics Data System (ADS)

    Romero Sánchez, Karen; Vásquez Reyes, Marcos A.; González Gómez, Dulce I.; Hidalgo Tobón, Silvia; Hernández López, Javier M.; Dies Suarez, Pilar; Barragán Pérez, Eduardo; De Celis Alonso, Benito

    2014-11-01

The main goal of this project was to create a computer algorithm, based on wavelet analysis of BOLD signals, which automatically diagnosed ADHD using information from resting state MR experiments. Male right-handed volunteers (children aged between 7 and 11 years) were studied and compared with age matched controls. Wavelet analysis, a mathematical tool used to decompose time series into elementary constituents and detect hidden information, was applied here to the BOLD signal obtained from the cerebellum 8 region of all our volunteers. Statistically significant differences (p<0.02) between groups were found in the a parameters of the wavelet analysis. This difference might help in the future to distinguish healthy subjects from ADHD patients and therefore diagnose ADHD.

  14. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
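The feature-extraction half of this methodology (band energies from a discrete wavelet decomposition of AE segments) can be sketched on synthetic signals. The sketch below substitutes a Haar DWT for whatever base wavelet the paper selects, uses high- and low-frequency sinusoids as proxies for "sharp" and "dull" AE segments, and checks separability with a nearest-centroid rule rather than the paper's adaptive genetic clustering; every name here is illustrative.

```python
import numpy as np

def band_energy_features(seg, levels=3):
    """Log energy per Haar DWT band, a simple stand-in for the paper's features."""
    a, feats = np.asarray(seg, dtype=float), []
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        feats.append(np.log(np.sum(d ** 2) + 1e-12))
    feats.append(np.log(np.sum(a ** 2) + 1e-12))   # residual approximation band
    return np.array(feats)

rng = np.random.default_rng(0)
n, t = 256, np.arange(256)
sharp = [np.sin(0.8 * np.pi * t) + 0.05 * rng.normal(size=n) for _ in range(8)]   # high-freq proxy
dull = [np.sin(0.04 * np.pi * t) + 0.05 * rng.normal(size=n) for _ in range(8)]   # low-freq proxy

X = np.array([band_energy_features(s) for s in sharp + dull])
c_sharp, c_dull = X[:8].mean(axis=0), X[8:].mean(axis=0)
# Nearest-centroid separability check: 0 = "sharp", 1 = "dull".
labels = [int(np.sum((x - c_dull) ** 2) < np.sum((x - c_sharp) ** 2)) for x in X]
```

On this toy data the finest-band log energy alone separates the two states, which mirrors why the choice of decomposition level matters in the paper's results.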

  15. Wavelet Filtering to Reduce Conservatism in Aeroservoelastic Robust Stability Margins

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Lind, Rick

    1998-01-01

Wavelet analysis for filtering and system identification was used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins was reduced with parametric and nonparametric time-frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data was used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability were also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrated improved robust stability prediction by extension of the stability boundary beyond the flight regime.

  16. On-Line Robust Modal Stability Prediction using Wavelet Processing

    NASA Technical Reports Server (NTRS)

    Brenner, Martin J.; Lind, Rick

    1998-01-01

    Wavelet analysis for filtering and system identification has been used to improve the estimation of aeroservoelastic stability margins. The conservatism of the robust stability margins is reduced with parametric and nonparametric time- frequency analysis of flight data in the model validation process. Nonparametric wavelet processing of data is used to reduce the effects of external disturbances and unmodeled dynamics. Parametric estimates of modal stability are also extracted using the wavelet transform. Computation of robust stability margins for stability boundary prediction depends on uncertainty descriptions derived from the data for model validation. The F-18 High Alpha Research Vehicle aeroservoelastic flight test data demonstrates improved robust stability prediction by extension of the stability boundary beyond the flight regime. Guidelines and computation times are presented to show the efficiency and practical aspects of these procedures for on-line implementation. Feasibility of the method is shown for processing flight data from time- varying nonstationary test points.

  17. Investigation of the Morlet wavelet for nonlinearity detection

    SciTech Connect

    Robertson, A. N.

    2001-01-01

Frequency response functions (FRFs), typically calculated by means of the Fourier transform, are used extensively throughout structural dynamics to identify modal characteristics of a structure. Fourier methods work well with linear systems, but have limitations when nonlinearities are present, largely due to their inability to examine nonstationary data. A nonlinear system is often characterized by the variation of its structural response in time. More recently, wavelets have been introduced as an alternative method of FRF calculation. Unlike Fourier methods, wavelets are a time/frequency transform, allowing for the creation of a time-varying FRF. This paper explores the use of wavelet-based FRFs to identify nonlinear behavior in an eight degree-of-freedom spring-mass structure. Examination of temporal changes in the higher frequency range is used to determine the location of the system's nonlinearities.

  18. Wavelet analysis of MR functional data from the cerebellum

    SciTech Connect

Romero Sánchez, Karen; Vásquez Reyes, Marcos A.; González Gómez, Dulce I.; Hernández López, Javier M.; Hidalgo Tobón, Silvia; Dies Suarez, Pilar; Barragán Pérez, Eduardo; De Celis Alonso, Benito

    2014-11-07

The main goal of this project was to create a computer algorithm, based on wavelet analysis of BOLD signals, which automatically diagnosed ADHD using information from resting state MR experiments. Male right-handed volunteers (children aged between 7 and 11 years) were studied and compared with age matched controls. Wavelet analysis, a mathematical tool used to decompose time series into elementary constituents and detect hidden information, was applied here to the BOLD signal obtained from the cerebellum 8 region of all our volunteers. Statistically significant differences (p<0.02) between groups were found in the a parameters of the wavelet analysis. This difference might help in the future to distinguish healthy subjects from ADHD patients and therefore diagnose ADHD.

  19. Image superresolution of cytology images using wavelet based patch search

    NASA Astrophysics Data System (ADS)

    Vargas, Carlos; García-Arteaga, Juan D.; Romero, Eduardo

    2015-01-01

Telecytology is a new research area that holds the potential of significantly reducing the number of deaths due to cervical cancer in developing countries. This work presents a novel super-resolution technique that couples high and low frequency information in order to reduce the bandwidth consumption of cervical image transmission. The proposed approach starts by decomposing the high resolution images into wavelet coefficients and transmitting only the lower frequency ones. The transmitted coefficients are used to reconstruct an image of the original size. Additional details are added by iteratively replacing patches of the wavelet-reconstructed image with equivalent high resolution patches from a previously acquired image database. Finally, the original transmitted low frequency coefficients are used to correct the final image. Results show a higher signal to noise ratio for the proposed method than for simply discarding high frequency wavelet coefficients or directly replacing down-sampled patches from the image database.

  20. Computing connection coefficients of compactly supported wavelets on bounded intervals

    SciTech Connect

    Romine, C.H.; Peyton, B.W.

    1997-04-01

    Daubechies wavelet basis functions have many properties that make them desirable as a basis for a Galerkin approach to solving PDEs: they are orthogonal, with compact support, and their connection coefficients can be computed. The method developed by Latto et al. to compute connection coefficients does not provide the correct inner product near the endpoints of a bounded interval, making the implementation of boundary conditions problematic. Moreover, the highly oscillatory nature of the wavelet basis functions makes standard numerical quadrature of integrals near the boundary impractical. The authors extend the method of Latto et al. to construct and solve a linear system of equations whose solution provides the exact computation of the integrals at the boundaries. As a consequence, they provide the correct inner product for wavelet basis functions on a bounded interval.

  1. A filtering and wavelet formulation for incompressible turbulence

    NASA Astrophysics Data System (ADS)

    Lewalle, Jacques

    2000-08-01

Gaussian filtering and Hermitian wavelet transforms lead to a new presentation of the Navier-Stokes equations by adding an independent variable. The diffusive part takes the form of an invariant translation toward smaller scales. The filtered pressure term is spatially local and is a superposition of generalized stresses at all scales larger than the scale of observation. Dominant contributors to the stresses are identified in the wavelet domain. The wavelet representation of Navier-Stokes is derived from the filtered version, and has a simple algebraic structure similar to its Fourier counterpart. All nonlinear terms are shown to involve spatial transport; within this scheme, spectral transfer involves triplets of scales, covering the entire spectrum with a concentration on nearby scales, consistent with cascade models. The physical content of the equations is interpreted anew from this perspective, and several lines of application are discussed.

  2. Efficient hemodynamic event detection utilizing relational databases and wavelet analysis

    NASA Technical Reports Server (NTRS)

    Saeed, M.; Mark, R. G.

    2001-01-01

    Development of a temporal query framework for time-oriented medical databases has hitherto been a challenging problem. We describe a novel method for the detection of hemodynamic events in multiparameter trends utilizing wavelet coefficients in a MySQL relational database. Storage of the wavelet coefficients allowed for a compact representation of the trends, and provided robust descriptors for the dynamics of the parameter time series. A data model was developed to allow for simplified queries along several dimensions and time scales. Of particular importance, the data model and wavelet framework allowed for queries to be processed with minimal table-join operations. A web-based search engine was developed to allow for user-defined queries. Typical queries required between 0.01 and 0.02 seconds, with at least two orders of magnitude improvement in speed over conventional queries. This powerful and innovative structure will facilitate research on large-scale time-oriented medical databases.
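The storage scheme the abstract describes (coarse wavelet coefficients as compact, queryable trend descriptors) can be illustrated with the standard-library `sqlite3` module in place of MySQL. The schema, the level-3 Haar approximation, and the threshold query below are all illustrative assumptions, not the paper's actual data model.

```python
import sqlite3
import numpy as np

def coarse_haar(x, levels=3):
    """Level-`levels` Haar approximation via pairwise means: a compact trend."""
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / 2.0   # means keep coefficients on the signal's scale
    return a

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trend (record_id INT, pos INT, coeff REAL)")

rng = np.random.default_rng(0)
for rid in range(4):
    # Record 2 carries an elevated baseline, standing in for a hemodynamic event.
    series = rng.normal(size=64) + (5.0 if rid == 2 else 0.0)
    for pos, c in enumerate(coarse_haar(series)):
        conn.execute("INSERT INTO trend VALUES (?, ?, ?)", (rid, pos, float(c)))

# Event-style query: records whose coarse trend ever exceeds a threshold.
rows = conn.execute(
    "SELECT DISTINCT record_id FROM trend WHERE coeff > 3 ORDER BY record_id"
).fetchall()
```

Each 64-sample series is reduced to 8 stored coefficients, and the query needs no table joins, which is the property the abstract credits for the speedup.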

  3. Semi-orthogonal wavelets for elliptic variational problems

    SciTech Connect

    Hardin, D.P.; Roach, D.W.

    1998-04-01

In this paper the authors give a construction of wavelets which are (a) semi-orthogonal with respect to an arbitrary elliptic bilinear form a(·,·) on the Sobolev space H₀¹((0, L)) and (b) continuous and piecewise linear on an arbitrary partition of [0, L]. They illustrate this construction using a model problem. They also construct alpha-orthogonal Battle-Lemarié type wavelets which fully diagonalize the Galerkin discretized matrix for the model problem with domain ℝ. Finally they describe a hybrid basis consisting of a combination of elements from the semi-orthogonal wavelet basis and the hierarchical Schauder basis. Numerical experiments indicate that this basis leads to robust scalable Galerkin discretizations of the model problem which remain well-conditioned independent of ε, L, and the refinement level K.

  4. Machinery diagnostic application of the Morlet wavelet distribution

    NASA Astrophysics Data System (ADS)

    Gaberson, Howard A.

    2001-07-01

The Morlet wavelet distribution is a time-frequency distribution that can be described as a convolution of the wavelet with a vibration signal for various scales or frequencies. Analysis with the Morlet wavelet, a Gaussian-windowed complex sine wave, has several subtle programming implications that both relate it to and differentiate it from the short time Fourier transform. These are described, discussed, and tested on machinery vibration signals with positive results. Traditional scale with its octave representation is discarded in favor of equally spaced frequencies. A window width factor is tested to emphasize precision in either time or frequency. A variable length exponential window is necessary as a function of frequency and the width factor. The analysis is coded efficiently in MATLAB using its `conv' function, and results of applying it to machinery diagnostic vibration signals are presented.
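The convolution-based construction described above can be sketched in numpy rather than MATLAB. The `width` parameter below plays the role of the window width factor (larger values favor frequency precision over time precision), and the wavelet's support shrinks with frequency, matching the variable-length window the abstract mentions; the normalization choice is an assumption made so that rows at different frequencies are comparable.

```python
import numpy as np

def morlet(freq, fs, width=6.0):
    """Gaussian-windowed complex sine; `width` trades time vs. frequency precision."""
    sigma_t = width / (2.0 * np.pi * freq)          # window shrinks as frequency grows
    t = np.arange(-3.0 * sigma_t, 3.0 * sigma_t, 1.0 / fs)
    return np.exp(2j * np.pi * freq * t - t ** 2 / (2.0 * sigma_t ** 2))

def morlet_row(signal, freq, fs, width=6.0):
    """One row of the time-frequency map: |convolution| with one wavelet."""
    w = morlet(freq, fs, width)
    return np.abs(np.convolve(signal, w / np.sum(np.abs(w)), mode="same"))

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
sig = np.sin(2.0 * np.pi * 50.0 * t)               # a 50 Hz "machinery" tone
energy = [morlet_row(sig, f, fs).mean() for f in (25.0, 50.0, 100.0)]
```

Stacking `morlet_row` over an equally spaced frequency grid gives the full distribution; here the row matched to the 50 Hz tone dominates the mismatched rows.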

  5. Continuous Compressed Sensing for Surface Dynamical Processes with Helium Atom Scattering

    PubMed Central

    Jones, Alex; Tamtögl, Anton; Calvo-Almazán, Irene; Hansen, Anders

    2016-01-01

    Compressed Sensing (CS) techniques are used to measure and reconstruct surface dynamical processes with a helium spin-echo spectrometer for the first time. Helium atom scattering is a well established method for examining the surface structure and dynamics of materials at atomic sized resolution and the spin-echo technique opens up the possibility of compressing the data acquisition process. CS methods demonstrating the compressibility of spin-echo spectra are presented for several measurements. Recent developments on structured multilevel sampling that are empirically and theoretically shown to substantially improve upon the state of the art CS techniques are implemented. In addition, wavelet based CS approximations, founded on a new continuous CS approach, are used to construct continuous spectra. In order to measure both surface diffusion and surface phonons, which appear usually on different energy scales, standard CS techniques are not sufficient. However, the new continuous CS wavelet approach allows simultaneous analysis of surface phonons and molecular diffusion while reducing acquisition times substantially. The developed methodology is not exclusive to Helium atom scattering and can also be applied to other scattering frameworks such as neutron spin-echo and Raman spectroscopy. PMID:27301423

  6. Continuous Compressed Sensing for Surface Dynamical Processes with Helium Atom Scattering

    NASA Astrophysics Data System (ADS)

    Jones, Alex; Tamtögl, Anton; Calvo-Almazán, Irene; Hansen, Anders

    2016-06-01

    Compressed Sensing (CS) techniques are used to measure and reconstruct surface dynamical processes with a helium spin-echo spectrometer for the first time. Helium atom scattering is a well established method for examining the surface structure and dynamics of materials at atomic sized resolution and the spin-echo technique opens up the possibility of compressing the data acquisition process. CS methods demonstrating the compressibility of spin-echo spectra are presented for several measurements. Recent developments on structured multilevel sampling that are empirically and theoretically shown to substantially improve upon the state of the art CS techniques are implemented. In addition, wavelet based CS approximations, founded on a new continuous CS approach, are used to construct continuous spectra. In order to measure both surface diffusion and surface phonons, which appear usually on different energy scales, standard CS techniques are not sufficient. However, the new continuous CS wavelet approach allows simultaneous analysis of surface phonons and molecular diffusion while reducing acquisition times substantially. The developed methodology is not exclusive to Helium atom scattering and can also be applied to other scattering frameworks such as neutron spin-echo and Raman spectroscopy.

  7. TEM Video Compressive Sensing

    SciTech Connect

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-02

One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
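The coded aperture forward model (several masked sub-frames summed into one detector readout) is easy to write down; the hard part is the statistical inversion, which needs sparsity priors not shown here. The sketch below is a toy forward model with only a minimum-norm (ridge) inverse, on made-up data, to make the dimensions concrete; it is not the paper's reconstruction algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
T, p = 8, 16                                   # 8 coded sub-frames, 16-pixel scene
frames = rng.random((T, p))                    # toy dynamic scene (flattened)
masks = rng.integers(0, 2, size=(T, p)).astype(float)   # shifting binary code
masks[0] = 1.0                                 # keep every pixel observed at least once

# Forward model: one camera readout integrates all masked sub-frames.
y = (masks * frames).sum(axis=0)

# Per pixel this is one equation in T unknowns, so a real reconstruction needs a
# sparsity/structure prior; here only the ridge minimum-norm inverse, to show
# how the code couples sub-frames to the single measurement.
lam = 1e-6
recon = np.zeros_like(frames)
for i in range(p):
    m = masks[:, i]
    recon[:, i] = m * y[i] / (m @ m + lam)     # minimum-norm solution of m.x = y_i

residual = np.abs((masks * recon).sum(axis=0) - y).max()
```

The minimum-norm estimate spreads each measurement evenly over the "open" sub-frames; a CS solver replaces that spreading with a sparse-prior inversion, which is where the order-of-magnitude frame-rate gain comes from.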

  8. Coded aperture compressive temporal imaging.

    PubMed

    Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J

    2013-05-01

    We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.

  9. Wavelet-based ground vehicle recognition using acoustic signals

    NASA Astrophysics Data System (ADS)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. 

  10. Denoising portal images by means of wavelet techniques

    NASA Astrophysics Data System (ADS)

    Gonzalez Lopez, Antonio Francisco

Portal images are used in radiotherapy for the verification of patient positioning. The distinguishing feature of this image type lies in its formation process: the same beam used for patient treatment is used for image formation. The high energy of the photons used in radiotherapy strongly limits the quality of portal images: low contrast between tissues, low spatial resolution and low signal to noise ratio. This Thesis studies the enhancement of these images, in particular denoising of portal images. The statistical properties of portal images and noise are studied: power spectra, statistical dependencies between image and noise, and marginal, joint and conditional distributions in the wavelet domain. Later, various denoising methods are applied to noisy portal images. Methods operating in the wavelet domain are the basis of this Thesis. In addition, the Wiener filter and the non-local means (NLM) filter, operating in the image domain, are used as a reference. Other topics studied in this Thesis are spatial resolution, wavelet processing and image processing in dosimetry in radiotherapy. In this regard, the spatial resolution of portal imaging systems is studied; a new method for determining the spatial resolution of imaging equipment in digital radiology is presented; the calculation of the power spectrum in the wavelet domain is studied; reducing uncertainty in film dosimetry is investigated; a method for the dosimetry of small radiation fields with radiochromic film is presented; the optimal signal resolution is determined, as a function of the noise level and the quantization step, in the digitization process of films; and the useful optical density range is set, as a function of the required uncertainty level, for a densitometric system. Marginal distributions of portal images are similar to those of natural images. This also applies to the statistical relationships between wavelet coefficients, intra-band and inter-band. These facts result in a better
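The thesis does not specify its wavelet denoiser in this abstract, but the baseline family it belongs to can be sketched: transform, shrink the detail coefficients, invert. Below is a minimal 1-D Haar version with soft thresholding at the universal threshold sigma*sqrt(2 ln n), on a synthetic smooth signal; the Haar choice, threshold rule, and test signal are all assumptions for illustration.

```python
import numpy as np

def haar_fwd(x):
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_inv(a, d):
    out = np.empty(2 * a.size)
    out[0::2], out[1::2] = (a + d) / np.sqrt(2.0), (a - d) / np.sqrt(2.0)
    return out

def denoise(x, sigma, levels=4):
    """Soft-threshold Haar shrinkage at the universal threshold sigma*sqrt(2 ln n)."""
    thr = sigma * np.sqrt(2.0 * np.log(x.size))
    a, details = np.asarray(x, dtype=float), []
    for _ in range(levels):
        a, d = haar_fwd(a)
        details.append(np.sign(d) * np.maximum(np.abs(d) - thr, 0.0))
    for d in reversed(details):
        a = haar_inv(a, d)
    return a

rng = np.random.default_rng(0)
n = 1024
clean = np.sin(2.0 * np.pi * np.arange(n) / 256.0)   # smooth stand-in for anatomy
noisy = clean + 0.3 * rng.normal(size=n)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoise(noisy, 0.3) - clean) ** 2)
```

Because the smooth signal concentrates in a few coarse coefficients while the noise spreads evenly, shrinking the details removes most of the noise energy at a modest bias cost.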

  11. The impact of skull bone intensity on the quality of compressed CT neuro images

    NASA Astrophysics Data System (ADS)

    Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw

    2012-02-01

The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most diagnostic information in the image. To validate the conjecture, we investigate a segmentation based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure, and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.

  12. Haar wavelets, fluctuations and structure functions: convenient choices for geophysics

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; Schertzer, D.

    2012-09-01

Geophysical processes are typically variable over huge ranges of space-time scales. This has led to the development of many techniques for decomposing series and fields into fluctuations Δv at well-defined scales. Classically, one defines fluctuations as differences: (Δv)diff = v(x+Δx) - v(x), and this is adequate for many applications (Δx is the "lag"). However, if over a range one has scaling Δv ∝ ΔxH, these difference fluctuations are only adequate when 0 < H < 1. Hence, there is the need for other types of fluctuations. In particular, atmospheric processes in the "macroweather" range ≈10 days to 10-30 yr generally have -1 < H < 0, so that a definition valid over the range -1 < H < 1 would be very useful for atmospheric applications. A general framework for defining fluctuations is wavelets. However, the generality of wavelets often leads to fairly arbitrary choices of "mother wavelet" and the resulting wavelet coefficients may be difficult to interpret. In this paper we argue that a good choice is provided by the (historically) first wavelet, the Haar wavelet (Haar, 1910), which is easy to interpret and - if needed - to generalize, yet has rarely been used in geophysics. It is also easy to implement numerically: the Haar fluctuation (Δv)Haar at lag Δx is simply the difference between the mean of v from x+Δx/2 to x+Δx and its mean from x to x+Δx/2. Indeed, we shall see that the interest of the Haar wavelet is this relation to the integrated process rather than its wavelet nature per se. Using numerical multifractal simulations, we show that it is quite accurate, and we compare and contrast it with another similar technique, detrended fluctuation analysis. We find that, for estimating scaling exponents, the two methods are very similar, yet Haar-based methods have the advantage of being numerically faster, theoretically simpler and physically easier to interpret.
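The Haar fluctuation definition in the abstract is explicit enough to code directly: the mean over the second half-window minus the mean over the first. A minimal numpy version, checked on a linear ramp where the answer is known in closed form (for v(x) = x the Haar fluctuation at lag Δx is exactly Δx/2):

```python
import numpy as np

def haar_fluctuation(v, lag):
    """(Δv)_Haar at even `lag`: mean of the second half-window minus the first."""
    h = lag // 2
    return np.array([v[x + h:x + lag].mean() - v[x:x + h].mean()
                     for x in range(len(v) - lag)])

# On a linear ramp v(x) = x the fluctuation at lag 8 must be exactly 8/2 = 4
# everywhere: the H = 1 scaling Δv ∝ Δx is recovered without differencing.
ramp = np.arange(64, dtype=float)
fluct = haar_fluctuation(ramp, 8)
```

Regressing log|fluctuation| against log(lag) over a range of lags then estimates H, and the definition remains meaningful for -1 < H < 1, which is the abstract's point.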

  13. Nonstationary Dynamics Data Analysis with Wavelet-SVD Filtering

    NASA Technical Reports Server (NTRS)

    Brenner, Marty; Groutage, Dale; Bessette, Denis (Technical Monitor)

    2001-01-01

    Nonstationary time-frequency analysis is used for identification and classification of aeroelastic and aeroservoelastic dynamics. Time-frequency multiscale wavelet processing generates discrete energy density distributions. The distributions are processed using the singular value decomposition (SVD). Discrete density functions derived from the SVD generate moments that detect the principal features in the data. The SVD standard basis vectors are applied and then compared with a transformed-SVD, or TSVD, which reduces the number of features into more compact energy density concentrations. Finally, from the feature extraction, wavelet-based modal parameter estimation is applied.
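The SVD step described above (decomposing a discrete time-frequency energy density into principal features) can be illustrated on a toy scalogram. The sketch below builds a nearly rank-1 energy matrix (one modal spectral shape with a decaying amplitude) plus weak noise, and checks that the leading singular pair captures the feature; the data and names are illustrative, not from the flight-test analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "scalogram": 32 frequency bins x 100 time steps, one dominant mode.
spec = np.exp(-0.5 * ((np.arange(32) - 10) / 3.0) ** 2)   # modal spectral shape
envelope = np.exp(-np.arange(100) / 40.0)                  # decaying modal amplitude
E = np.outer(spec, envelope) + 0.01 * rng.random((32, 100))

U, s, Vt = np.linalg.svd(E, full_matrices=False)
# U[:, 0] recovers the spectral shape, Vt[0] its time history; the energy
# share of the first singular value measures how dominant the feature is.
share = s[0] ** 2 / np.sum(s ** 2)
```

Moments of the normalized singular-value spectrum (here essentially concentrated in `s[0]`) are the kind of compact descriptors the distribution-processing step feeds to the feature detector.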

  14. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
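The core iteration the paper accelerates can be sketched in its plain form: a gradient step followed by hard thresholding, which is exactly the proximal operator of the ℓ0 constraint. The sketch below is basic IHT on a synthetic sparse-recovery problem, without the paper's extrapolation or line search; dimensions and names are illustrative.

```python
import numpy as np

def iht(A, y, sparsity, iters=300):
    """Plain iterative hard thresholding for min ||Ax - y||^2 s.t. ||x||_0 <= s.

    The keep-s-largest step IS the l0 proximal map; EPIHT adds extrapolation
    (and optionally a line search) on top of this iteration.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / (largest singular value)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (y - A @ x)            # gradient step
        keep = np.argsort(np.abs(g))[-sparsity:]    # hard threshold: keep s largest
        x = np.zeros_like(x)
        x[keep] = g[keep]
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 60)) / np.sqrt(30)         # 30 measurements, 60 unknowns
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = iht(A, y, sparsity=3)
```

In the paper's setting `A` would encode the wavelet frame and degradation operator rather than a random matrix, and the extrapolation step is what yields the proven linear convergence rate.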

  15. Dollar and the stock market: An approach using Haar wavelet

    NASA Astrophysics Data System (ADS)

    Belardi, Aldo Artur; Aguiar, Renato A.

    2012-08-01

    This paper presents a methodology to detect significant changes in the price of the U.S. dollar relative to the stock exchange, using the Haar wavelet as well as statistical analyses of several parameters. The data used arise from information provided by the stock market, both for the U.S. dollar and for the stocks themselves. The results show that the methodology, applied through the Haar wavelet, is able to signal in advance the trend of the stock market through a single variable, the U.S. dollar.

  16. Wigner functions from the two-dimensional wavelet group.

    PubMed

    Ali, S T; Krasowska, A E; Murenzi, R

    2000-12-01

    Following a general procedure developed previously [Ann. Henri Poincaré 1, 685 (2000)], here we construct Wigner functions on a phase space related to the similitude group in two dimensions. Since the group space in this case is topologically homeomorphic to the phase space in question, the Wigner functions so constructed may also be considered as being functions on the group space itself. Previously the similitude group was used to construct wavelets for two-dimensional image analysis; we discuss here the connection between the wavelet transform and the Wigner function.

  17. Adaptive wavelet-based recognition of oscillatory patterns on electroencephalograms

    NASA Astrophysics Data System (ADS)

    Nazimov, Alexey I.; Pavlov, Alexey N.; Hramov, Alexander E.; Grubov, Vadim V.; Koronovskii, Alexey A.; Sitnikova, Evgenija Y.

    2013-02-01

    The problem of automatic recognition of specific oscillatory patterns on electroencephalograms (EEG) is addressed using the continuous wavelet-transform (CWT). A possibility of improving the quality of recognition by optimizing the choice of CWT parameters is discussed. An adaptive approach is proposed to identify sleep spindles (SS) and spike wave discharges (SWD) that assumes automatic selection of CWT-parameters reflecting the most informative features of the analyzed time-frequency structures. Advantages of the proposed technique over the standard wavelet-based approaches are considered.

  18. Characterizing cerebrovascular dynamics with the wavelet-based multifractal formalism

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Abdurashitov, A. S.; Sindeeva, O. A.; Sindeev, S. S.; Pavlova, O. N.; Shihalov, G. M.; Semyachkina-Glushkovskaya, O. V.

    2016-01-01

    Using the wavelet-transform modulus maxima (WTMM) approach we study the dynamics of cerebral blood flow (CBF) in rats aiming to reveal responses of macro- and microcerebral circulations to changes in the peripheral blood pressure. We show that the wavelet-based multifractal formalism allows quantifying essentially different reactions in the CBF-dynamics at the level of large and small cerebral vessels. We conclude that unlike the macrocirculation that is nearly insensitive to increased peripheral blood pressure, the microcirculation is characterized by essential changes of the CBF-complexity.

  19. Electroencephalography data analysis by using discrete wavelet packet transform

    NASA Astrophysics Data System (ADS)

    Karim, Samsul Ariffin Abdul; Ismail, Mohd Tahir; Hasan, Mohammad Khatim; Sulaiman, Jumat; Muthuvalu, Mohana Sundaram; Janier Josefina, B.

    2015-05-01

    Electroencephalography (EEG) records the electrical activity generated by neurons in the brain. This activity is categorized into delta, theta, alpha, beta and gamma waves, which occupy different frequency bands. This paper is a continuation of our previous research. The EEG data are decomposed using the Discrete Wavelet Packet Transform (DWPT), with Daubechies wavelets of order 10 (D10) as the basis functions. The main results show clearly that the DWPT is able to characterize the EEG signal corresponding to each wave at its specific frequency. Furthermore, the numerical results obtained are better than those obtained using the DWT. Statistical analysis supports our main findings.
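
The band decomposition described above can be sketched with a wavelet packet transform. The example below substitutes the Haar wavelet for the paper's Daubechies-10 filter to stay self-contained: each level splits every subband into a low- and a high-frequency half, so `levels` levels yield 2**levels subbands covering the EEG frequency axis. All names are illustrative.

```python
import math

def haar_split(x):
    """One orthonormal Haar analysis step: low-pass and high-pass halves."""
    lo = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return lo, hi

def wavelet_packet(x, levels):
    """Full wavelet packet tree: split every subband at every level."""
    bands = [x]
    for _ in range(levels):
        nxt = []
        for band in bands:
            lo, hi = haar_split(band)
            nxt.extend([lo, hi])
        bands = nxt
    return bands

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
bands = wavelet_packet(signal, 2)
print(len(bands))     # -> 4 subbands
print(len(bands[0]))  # -> 2 coefficients each
```

Because the Haar filters are orthonormal, the total energy of the subband coefficients equals the energy of the input signal, which makes per-band energies directly comparable across the delta-to-gamma bands.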

  20. Estimation of Modal Parameters Using a Wavelet-Based Approach

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty; Haley, Sidney M.

    1997-01-01

    Modal stability parameters are extracted directly from aeroservoelastic flight test data by decomposition of accelerometer response signals into time-frequency atoms. Logarithmic sweeps and sinusoidal pulses are used to generate DAST closed loop excitation data. Novel wavelets constructed to extract modal damping and frequency explicitly from the data are introduced. The so-called Haley and Laplace wavelets are used to track time-varying modal damping and frequency in a matching pursuit algorithm. Estimation of the trend to aeroservoelastic instability is demonstrated successfully from analysis of the DAST data.

  1. Wavelet-based motion artifact removal for electrodermal activity.

    PubMed

    Chen, Weixuan; Jaques, Natasha; Taylor, Sara; Sano, Akane; Fedor, Szymon; Picard, Rosalind W

    2015-01-01

    Electrodermal activity (EDA) recording is a powerful, widely used tool for monitoring psychological or physiological arousal. However, analysis of EDA is hampered by its sensitivity to motion artifacts. We propose a method for removing motion artifacts from EDA, measured as skin conductance (SC), using a stationary wavelet transform (SWT). We modeled the wavelet coefficients as a Gaussian mixture distribution corresponding to the underlying skin conductance level (SCL) and skin conductance responses (SCRs). The goodness-of-fit of the model was validated on ambulatory SC data. We evaluated the proposed method in comparison with three previous approaches. Our method achieved a greater reduction of artifacts while retaining motion-artifact-free data.

  2. Time-frequency analysis with the continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Lang, W. Christopher; Forinash, Kyle

    1998-09-01

    The continuous wavelet transform can be used to produce spectrograms which show the frequency content of sounds (or other signals) as a function of time in a manner analogous to sheet music. While this technique is commonly used in the engineering community for signal analysis, the physics community has, in our opinion, remained relatively unaware of this development. Indeed, some find the very notion of frequency as a function of time troublesome. Here spectrograms will be displayed for familiar sounds whose pitches change with time, demonstrating the usefulness of the continuous wavelet transform.

  3. Sonar target enhancement by shrinkage of incoherent wavelet coefficients.

    PubMed

    Hunter, Alan J; van Vossen, Robbert

    2014-01-01

    Background reverberation can obscure useful features of the target echo response in broadband low-frequency sonar images, adversely affecting detection and classification performance. This paper describes a resolution and phase-preserving means of separating the target response from the background reverberation noise using a coherence-based wavelet shrinkage method proposed recently for de-noising magnetic resonance images. The algorithm weights the image wavelet coefficients in proportion to their coherence between different looks under the assumption that the target response is more coherent than the background. The algorithm is demonstrated successfully on experimental synthetic aperture sonar data from a broadband low-frequency sonar developed for buried object detection.
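
The coherence-based weighting idea can be sketched as follows, assuming two co-registered "looks" whose wavelet coefficients are available as flat lists. The normalized cross-product used as the weight here is an illustrative stand-in for the paper's coherence estimate, not its actual formula.

```python
def coherence_weights(look_a, look_b, eps=1e-12):
    """Per-coefficient weight near 1 when the two looks agree, near 0 when
    a coefficient is strong in only one look (incoherent background)."""
    return [abs(a * b) / ((a * a + b * b) / 2 + eps)
            for a, b in zip(look_a, look_b)]

def shrink(coeffs, weights):
    """Scale each wavelet coefficient by its coherence weight."""
    return [c * w for c, w in zip(coeffs, weights)]

# A coherent coefficient (similar in both looks) survives almost unchanged;
# an incoherent one (present in only one look) is strongly suppressed.
a = [5.0, 4.0, 0.1]
b = [5.0, -0.1, 4.0]
w = coherence_weights(a, b)
print([round(x, 3) for x in shrink(a, w)])  # -> [5.0, 0.2, 0.005]
```

Unlike hard masking, this multiplicative shrinkage preserves the phase and relative amplitude of the retained coefficients, which is what makes the method resolution- and phase-preserving.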

  4. Space-time compressive imaging.

    PubMed

    Treeaporn, Vicha; Ashok, Amit; Neifeld, Mark A

    2012-02-01

    Compressive imaging systems typically exploit the spatial correlation of the scene to facilitate a lower dimensional measurement relative to a conventional imaging system. In natural time-varying scenes there is a high degree of temporal correlation that may also be exploited to further reduce the number of measurements. In this work we analyze space-time compressive imaging using Karhunen-Loève (KL) projections for the read-noise-limited measurement case. Based on a comprehensive simulation study, we show that a KL-based space-time compressive imager offers higher compression relative to space-only compressive imaging. For a relative noise strength of 10% and reconstruction error of 10%, we find that space-time compressive imaging with 8×8×16 spatiotemporal blocks yields about 292× compression compared to a conventional imager, while space-only compressive imaging provides only 32× compression. Additionally, under high read-noise conditions, a space-time compressive imaging system yields lower reconstruction error than a conventional imaging system due to the multiplexing advantage. We also discuss three electro-optic space-time compressive imaging architecture classes, including charge-domain processing by a smart focal plane array (FPA). Space-time compressive imaging using a smart FPA provides an alternative method to capture the nonredundant portions of time-varying scenes.

  5. Islanding detection technique using wavelet energy in grid-connected PV system

    NASA Astrophysics Data System (ADS)

    Kim, Il Song

    2016-08-01

    This paper proposes a new islanding detection method using wavelet energy in a grid-connected photovoltaic system. The method detects spectral changes in the higher-frequency components of the point-of-common-coupling voltage and obtains wavelet coefficients by multilevel wavelet analysis. The autocorrelation of the wavelet coefficients can clearly identify the islanding condition, even under variations of the grid voltage harmonics during normal operating conditions. The advantage of the proposed method is that it can detect islanding conditions that the conventional under-voltage/over-voltage/under-frequency/over-frequency methods fail to detect. The theoretical method for obtaining the wavelet energies is developed and verified by experimental results.
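
A hedged sketch of the wavelet-energy feature described above: decompose the voltage samples with a multilevel Haar DWT and sum the squared detail coefficients per level. The Haar choice and all names are assumptions made to keep the example self-contained; the paper does not specify this implementation.

```python
import math

def haar_dwt_energy(x, levels):
    """Energy of the Haar detail coefficients at each decomposition level.
    Level 1 covers the highest frequency band, level 2 the next, and so on."""
    energies = []
    approx = list(x)
    for _ in range(levels):
        n = len(approx) // 2
        detail = [(approx[2 * i] - approx[2 * i + 1]) / math.sqrt(2)
                  for i in range(n)]
        approx = [(approx[2 * i] + approx[2 * i + 1]) / math.sqrt(2)
                  for i in range(n)]
        energies.append(sum(d * d for d in detail))
    return energies

# A clean grid sinusoid concentrates its energy in a few bands; an islanding
# transient spreads extra energy into the higher-frequency detail levels.
x = [math.sin(2 * math.pi * k / 8) for k in range(64)]
print([round(e, 3) for e in haar_dwt_energy(x, 3)])
```

Tracking these per-level energies (and their autocorrelation) over successive windows is the kind of feature stream the detection logic can threshold.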

  6. Speech signal filtration using double-density dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Yasin, A. S.; Pavlova, O. N.; Pavlov, A. N.

    2016-08-01

    We consider the task of increasing the quality of speech signal cleaning from additive noise by means of double-density dual-tree complex wavelet transform (DDCWT) as compared to the standard method of wavelet filtration based on a multiscale analysis using discrete wavelet transform (DWT) with real basis set functions such as Daubechies wavelets. It is shown that the use of DDCWT instead of DWT provides a significant increase in the mean opinion score (MOS) rating at a high additive noise and makes it possible to reduce the number of expansion levels for the subsequent correction of wavelet coefficients.

  7. [Adaptive de-noising of ECG signal based on stationary wavelet transform].

    PubMed

    Dong, Hong-sheng; Zhang, Ai-hua; Hao, Xiao-hong

    2009-03-01

    Given the limitations of wavelet-threshold de-noising methods, we propose an algorithm combining the stationary wavelet transform with an adaptive filter. The stationary wavelet transform can effectively suppress the Gibbs phenomena that arise in the traditional DWT, and the adaptive filter is introduced at the high-scale wavelet coefficients of the stationary wavelet transform. The combined method removes baseline wander while preserving the shape of the low-frequency, low-amplitude P wave, T wave and ST segment of the ECG signal. This is important for the subsequent analysis of other feature information in the ECG signal.

  8. Continuous wavelet transform for non-stationary vibration detection with phase-OTDR.

    PubMed

    Qin, Zengguang; Chen, Liang; Bao, Xiaoyi

    2012-08-27

    We propose the continuous wavelet transform for non-stationary vibration measurement by distributed vibration sensor based on phase optical time-domain reflectometry (OTDR). The continuous wavelet transform approach can give simultaneously the frequency and time information of the vibration event. Frequency evolution is obtained by the wavelet ridge detection method from the scalogram of the continuous wavelet transform. In addition, a novel signal processing algorithm based on the global wavelet spectrum is used to determine the location of vibration. Distributed vibration measurements of 500 Hz and 500 Hz to 1 kHz sweep events over 20 cm fiber length are demonstrated using a single mode fiber.

  9. Progressive compressive imager

    NASA Astrophysics Data System (ADS)

    Evladov, Sergei; Levi, Ofer; Stern, Adrian

    2012-06-01

    We have designed and built a working automatic progressive-sampling imaging system based on the vector sensor concept, which utilizes a unique sampling scheme of Radon projections. This sampling scheme makes it possible to progressively add information, resulting in a tradeoff between compression and the quality of reconstruction. The uniqueness of our sampling is that at any moment of the acquisition process the reconstruction can produce a reasonable version of the image. The advantage of the gradual addition of samples is seen when the sparsity rate of the object, and thus the number of needed measurements, is unknown. We have developed an iterative algorithm, OSO (Ordered Sets Optimization), which employs our sampling scheme to create nearly uniformly distributed sets of samples, allowing the reconstruction of megapixel images. We present good-quality reconstructions from compressed data at ratios of 1:20.

  10. Digital cinema video compression

    NASA Astrophysics Data System (ADS)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  11. Compressibility of solids

    NASA Technical Reports Server (NTRS)

    Vinet, P.; Ferrante, J.; Rose, J. H.; Smith, J. R.

    1987-01-01

    A universal form is proposed for the equation of state (EOS) of solids. Good agreement is found for a variety of test data. The form of the EOS is used to suggest a method of data analysis, which is applied to materials of geophysical interest. The isothermal bulk modulus is discussed as a function of the volume and of the pressure. The isothermal compression curves for materials of geophysical interest are examined.

  12. Basic cluster compression algorithm

    NASA Technical Reports Server (NTRS)

    Hilbert, E. E.; Lee, J.

    1980-01-01

    Feature extraction and data compression of LANDSAT data is accomplished by BCCA program which reduces costs associated with transmitting, storing, distributing, and interpreting multispectral image data. Algorithm uses spatially local clustering to extract features from image data to describe spectral characteristics of data set. Approach requires only simple repetitive computations, and parallel processing can be used for very high data rates. Program is written in FORTRAN IV for batch execution and has been implemented on SEL 32/55.

  13. Compression of Cake

    NASA Astrophysics Data System (ADS)

    Nason, Sarah; Houghton, Brittany; Renfro, Timothy

    2012-03-01

    The fall university physics class at McMurry University created a compression modulus experiment that even high school students could do. The class came up with this idea after a Young's modulus experiment which involved stretching wire; a question was raised of what would happen if we compressed something else. We created our own Young's modulus experiment, but in a more entertaining way. The experiment involves measuring the height of a cake both before and after a weight has been applied to it. We worked to derive the compression modulus by applying weight to a cake. In the end, we had our experimental cake and ate it, too! To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2012.TSS.B1.1

  14. Scale adaptive compressive tracking.

    PubMed

    Zhao, Pengpeng; Cui, Shaohui; Gao, Min; Fang, Dan

    2016-01-01

    Recently, the compressive tracking (CT) method (Zhang et al. in Proceedings of European conference on computer vision, pp 864-877, 2012) has attracted much attention due to its high efficiency, but it cannot deal well with objects that change scale because of its constant tracking box. To address this issue, in this paper we propose a scale adaptive CT approach, which adaptively adjusts the scale of the tracking box to the size variation of the objects. Our method improves CT in three aspects. Firstly, the scale of the tracking box is adaptively adjusted according to the size of the objects. Secondly, in the CT method all the compressive features are assumed independent and to contribute equally to the classifier; in reality, different compressive features have different confidence coefficients. In our method, the confidence coefficients of the features are computed and used to weight their contributions to the classifier. Finally, in the CT method the learning parameter λ is constant, which results in large tracking drift in the case of object occlusion or large appearance variation. In our method, a variable learning parameter λ is adopted, adjusted according to the rate of object appearance variation. Extensive experiments on the CVPR2013 tracking benchmark demonstrate the superior performance of the proposed method compared to state-of-the-art tracking algorithms. PMID:27386298
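
The variable learning parameter can be illustrated with a simple running-average model update, mu <- λ*mu + (1-λ)*x, where λ is pushed toward 1 as the appearance variation rate grows, so the model updates slowly during occlusion. The mapping from variation rate to λ below is a hypothetical example, not the paper's formula.

```python
def adaptive_lambda(variation_rate, lam_min=0.85, lam_max=0.99):
    """Map appearance variation rate (clamped to [0, 1]) to a learning
    parameter: more variation -> larger lambda -> slower model update.
    The linear mapping and the bounds are illustrative assumptions."""
    rate = max(0.0, min(1.0, variation_rate))
    return lam_min + (lam_max - lam_min) * rate

def update_model(mu, x, lam):
    """Exponential running-average update of the appearance model."""
    return lam * mu + (1.0 - lam) * x

mu = 10.0
# stable appearance: the model moves noticeably toward the observation
print(update_model(mu, 0.0, adaptive_lambda(0.0)))
# occlusion / large appearance variation: the model barely moves
print(update_model(mu, 0.0, adaptive_lambda(1.0)))
```

Freezing the update during occlusion is what prevents the tracker from learning the occluder into its model and drifting.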

  15. Compression of multiwall microbubbles

    NASA Astrophysics Data System (ADS)

    Lebedeva, Natalia; Moore, Sam; Dobrynin, Andrey; Rubinstein, Michael; Sheiko, Sergei

    2012-02-01

    Optical monitoring of structural transformations and transport processes is precluded when the objects to be studied are bulky and/or non-transparent. This paper focuses on the development of a microbubble platform for acoustic imaging of heterogeneous media under harsh environmental conditions, including high pressure (up to 500 atm), temperature (up to 100 °C), and salinity (up to 10 wt%). We have studied the compression behavior of gas-filled microbubbles composed of multiple layers of surfactants and stabilizers. Upon hydrostatic compression, these bubbles undergo significant (up to 100x) changes in volume, which are completely reversible. Under repeated compression/expansion cycles, the pressure-volume P(V) characteristics of these microbubbles deviate from ideal-gas-law predictions. A theoretical model was developed to explain the observed deviations through contributions of shell elasticity and gas effusion. In addition, some of the microbubbles undergo peculiar buckling/smoothing transitions, exhibiting intermittent formation of faceted structures, which suggests a solid-like nature of the pressurized shell. Preliminary studies illustrate that these pressure-resistant microbubbles maintain their mechanical stability and acoustic response at pressures greater than 1000 psi.

  16. The use of wavelet transforms in the solution of two-phase flow problems

    SciTech Connect

    Moridis, G.J.; Nikolaou, M.; You, Yong

    1994-10-01

    In this paper we present the use of wavelets to solve the nonlinear Partial Differential Equation (PDE) of two-phase flow in one dimension. The wavelet transforms allow a drastically different approach to the discretization of space. In contrast to the traditional trigonometric basis functions, wavelets approximate a function not by cancellation but by placement of wavelets at appropriate locations. When an abrupt change, such as a shock wave or a spike, occurs in a function, only local coefficients in a wavelet approximation will be affected. The unique feature of wavelets is their Multi-Resolution Analysis (MRA) property, which allows seamless investigation at any spatial resolution. The use of wavelets is tested in the solution of the one-dimensional Buckley-Leverett problem against analytical solutions and solutions obtained from standard numerical models. Two classes of wavelet bases (Daubechies and Chui-Wang) and two methods (Galerkin and collocation) are investigated. We determine that the Chui-Wang wavelets and a collocation method provide the optimum wavelet solution for this type of problem. Increasing the resolution level improves the accuracy of the solution, but the order of the basis function seems to be far less important. Our results indicate that wavelet transforms are an effective and accurate method which does not suffer from oscillations or numerical smearing in the presence of steep fronts.

  17. Coherent vorticity extraction in resistive drift-wave turbulence: Comparison of orthogonal wavelets versus proper orthogonal decomposition

    SciTech Connect

    Futatani, S.; Bos, W.J.T.; Del-Castillo-Negrete, Diego B; Schneider, Kai; Benkadda, S.; Farge, Marie

    2011-01-01

    We assess two techniques for extracting coherent vortices out of turbulent flows: the wavelet-based Coherent Vorticity Extraction (CVE) and the Proper Orthogonal Decomposition (POD). The former decomposes the flow field into an orthogonal wavelet representation, and subsequent thresholding of the coefficients allows one to split the flow into organized coherent vortices with non-Gaussian statistics and an incoherent random part which is structureless. POD is based on the singular value decomposition and decomposes the flow into basis functions which are optimal with respect to the retained energy for the ensemble average. Both techniques are applied to direct numerical simulation data of two-dimensional drift-wave turbulence governed by the Hasegawa-Wakatani equation, considering two limit cases: the quasi-hydrodynamic and the quasi-adiabatic regimes. The results are compared in terms of compression rate, retained energy, retained enstrophy and retained radial flux, together with the enstrophy spectrum and higher order statistics. (c) 2010 Published by Elsevier Masson SAS on behalf of Academie des sciences.

  18. The use of compressive sensing and peak detection in the reconstruction of microtubules length time series in the process of dynamic instability.

    PubMed

    Mahrooghy, Majid; Yarahmadian, Shantia; Menon, Vineetha; Rezania, Vahid; Tuszynski, Jack A

    2015-10-01

    Microtubules (MTs) are intra-cellular cylindrical protein filaments. They exhibit a unique phenomenon of stochastic growth and shrinkage, called dynamic instability. In this paper, we introduce a theoretical framework for applying Compressive Sensing (CS) to sampled data of the microtubule length in the process of dynamic instability. To reduce data density and reconstruct the original signal at relatively low sampling rates, we have applied CS to experimental MT filament length time series modeled as a Dichotomous Markov Noise (DMN). The results show that using CS along with the wavelet transform significantly reduces the recovery errors compared with recovery in the absence of the wavelet transform, especially at low and medium sampling rates. For sampling rates between 0.2 and 0.5, the Root-Mean-Squared Error (RMSE) decreases by approximately 3 times, and between 0.5 and 1 the RMSE is small. We also apply a peak detection technique to the wavelet coefficients to detect and closely approximate the growth and shrinkage phases of MTs, in order to compute the essential dynamic instability parameters, i.e., transition frequencies and especially growth and shrinkage rates. The results show that using compressive sensing along with the peak detection technique and the wavelet transform reduces the recovery errors for these parameters across sampling rates.

  19. Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach

    NASA Astrophysics Data System (ADS)

    Aloui, Chaker; Jammazi, Rania

    2015-10-01

    In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.

  20. Wavelet transform: fundamentals, applications, and implementation using acousto-optic correlators

    NASA Astrophysics Data System (ADS)

    DeCusatis, Casimer M.; Koay, J.; Litynski, Daniel M.; Das, Pankaj K.

    1995-10-01

    In recent years there has been a great deal of interest in the use of wavelets to supplement or replace conventional Fourier transform signal processing. This paper provides a review of wavelet transforms for signal processing applications, and discusses several emerging applications which benefit from the advantages of wavelets. The wavelet transform can be implemented as an acousto-optic correlator; perfect reconstruction of digital signals may also be achieved using acousto-optic finite impulse response filter banks. Acousto-optic image correlators are discussed as a potential implementation of the wavelet transform, since a 1D wavelet filter bank may be encoded as a 2D image. We discuss applications of the wavelet transform including nondestructive testing of materials, biomedical applications in the analysis of EEG signals, and interference excision in spread spectrum communication systems. Computer simulations and experimental results for these applications are also provided.