The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron B.; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2004-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
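The two stages the abstract names, a 2-D wavelet transform followed by progressive bit-plane coding, can be sketched in a few lines. This is an illustrative toy: a one-level Haar transform stands in for the Recommendation's actual 9/7 DWT, and no entropy coder is shown, so it is a sketch of the structure, not CCSDS 122.0 itself.

```python
# Toy of the two CCSDS stages: 2-D wavelet transform, then bit-plane
# ordering of the coefficients (Haar in place of the real 9/7 DWT).

def haar1d(x):
    """One level of a 1-D Haar transform: averages, then differences."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + dif

def haar2d(img):
    """Apply the 1-D transform to every row, then every column."""
    rows = [haar1d(list(r)) for r in img]
    cols = [haar1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def bit_planes(coeffs, planes=4):
    """Emit coefficient magnitude bits from the most significant plane
    down -- the ordering that yields an embedded, progressive stream."""
    flat = [abs(int(c)) for row in coeffs for c in row]
    return [[(v >> p) & 1 for v in flat] for p in range(planes - 1, -1, -1)]
```

Truncating such a stream after any plane still yields a complete, coarser reconstruction, which is how the data volume or fidelity can be controlled directly.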
Interactive decoding for the CCSDS recommendation for image data compression
NASA Astrophysics Data System (ADS)
García-Vílchez, Fernando; Serra-Sagristà, Joan; Zabala, Alaitz; Pons, Xavier
2007-10-01
In 2005, the Consultative Committee for Space Data Systems (CCSDS) approved a new Recommendation (CCSDS 122.0-B-1) for Image Data Compression. Our group has designed a new file syntax for the Recommendation. The proposal consists of adding embedded headers. Such modification provides scalability by quality, spatial location, resolution and component. The main advantages of our proposal are: 1) the definition of multiple types of progression order, which enhances abilities in transmission scenarios, and 2) the support for the extraction and decoding of specific windows of interest without needing to decode the complete code-stream. In this paper we evaluate the performance of our proposal. First we measure the impact of the embedded headers in the encoded stream. Second we compare the compression performance of our technique to JPEG2000.
NASA Technical Reports Server (NTRS)
Barnsley, Michael F.; Sloan, Alan D.
1989-01-01
Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at the cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
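The decoding side of the fractal idea can be shown with a toy: the "code" is two contractive affine maps, and the signal is their attractor. Decoding iterates from any starting signal and converges to the same fixed point, which also illustrates the stability claim above (small errors in the code give only small errors in the data). This is a 1-D illustration, not Barnsley and Sloan's actual scheme.

```python
# Toy fractal decoder: iterate a pair of contractive affine maps and
# converge to their attractor, regardless of the starting signal.

def decode(maps, n=16, iters=50):
    sig = [0.0] * n                     # arbitrary starting signal
    for _ in range(iters):
        half = n // len(maps)
        shrunk = [sig[len(maps) * i] for i in range(half)]  # contract domain
        sig = [s * v + o for s, o in maps for v in shrunk]  # apply each map
    return sig

# The attractor of x -> x/2 and x -> x/2 + 1/2 is the ramp k/16.
attractor = decode([(0.5, 0.0), (0.5, 0.5)])
```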
Compressive Optical Image Encryption
Li, Jun; Li, Jiaosheng; Pan, Yangyang; Li, Rong
2015-01-01
An optical image encryption technique based on compressive sensing using fully optical means has been proposed. An object image is first encrypted to a white-sense stationary noise pattern using a double random phase encoding (DRPE) method in a Mach-Zehnder interferometer. Then, the encrypted image is highly compressed to a signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the encrypted image is reconstructed well via compressive sensing theory, and the original image can be decrypted with three reconstructed holograms and the correct keys. The numerical simulations show that the method is effective and suitable for optical image security transmission in future all-optical networks because of the ability of completely optical implementation and substantially smaller hologram data volume.
Fu, Chi-Yung; Petrich, Loren I.
1997-03-25
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels, each pixel in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each pixel in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of those surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
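The filling step described above, interpolating non-edge pixels by solving Laplace's equation with edge values held fixed, can be sketched directly. Plain Jacobi iteration stands in here for the patent's multi-grid solver; this is a hedged illustration, not the patented implementation.

```python
# Fill non-edge pixels by relaxing toward the discrete Laplace
# equation, holding edge pixels fixed as boundary conditions.

def laplace_fill(grid, is_edge, iters=500):
    h, w = len(grid), len(grid[0])
    g = [row[:] for row in grid]
    for _ in range(iters):
        nxt = [row[:] for row in g]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if not is_edge[y][x]:   # edge pixels stay fixed
                    nxt[y][x] = (g[y - 1][x] + g[y + 1][x]
                                 + g[y][x - 1] + g[y][x + 1]) / 4.0
        g = nxt
    return g
```

Subtracting the filled array from the original then leaves a low-energy difference array that compresses well.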
The CCSDS Data Compression Recommendations: Development and Status
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Moury, Gilles; Armbruster, Philippe; Day, John H. (Technical Monitor)
2002-01-01
The Consultative Committee for Space Data Systems (CCSDS) has been engaged in recommending data compression standards for space applications. The first effort focused on a lossless scheme that was adopted in 1997. Since then, space missions benefiting from this recommendation range from deep space probes to near-Earth observatories. The cost savings result not only from reduced onboard storage and reduced bandwidth, but also from reduced ground archiving of mission data. In many instances, this recommendation also enables more science data to be collected for added scientific value. Since 1998, the compression sub-panel of CCSDS has been investigating lossy image compression schemes and is currently working towards a common solution for a single recommendation. The recommendation will fulfill the requirements for remote sensing conducted on space platforms.
NASA Astrophysics Data System (ADS)
Xu, Yuquan; Hu, Xiyuan; Peng, Silong
2014-03-01
We propose an algorithm to recover the latent image from a blurred and compressed input. Although many image deblurring algorithms have been proposed in recent years, most previous methods do not consider the compression effect in blurry images. In practice, however, most real-world images are compressed, and this compression introduces a characteristic kind of noise, blocking artifacts, which does not follow the Gaussian distribution assumed by most existing algorithms. Without properly handling this non-Gaussian noise, the recovered image suffers severe artifacts. Inspired by the statistical properties of compression error, we model the non-Gaussian noise with a hyper-Laplacian distribution. Based on this model, an efficient nonblind image deblurring algorithm based on a variable splitting technique is proposed to solve the resulting nonconvex minimization problem. Finally, we also present an effective blind image deblurring algorithm which can handle compressed and blurred images efficiently. Extensive experiments compared with state-of-the-art nonblind and blind deblurring methods demonstrate the effectiveness of the proposed method.
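Variable-splitting schemes of this kind alternate between a quadratic data-fit step and a shrinkage step on an auxiliary variable. As a hedged illustration only: for the Laplacian special case (exponent 1) that shrinkage has the closed form below, while the paper's hyper-Laplacian prior (exponent below 1) requires a generalized shrinkage that is not shown here.

```python
# The per-pixel shrinkage subproblem for a Laplacian (L1) prior:
# soft thresholding, the closed-form minimizer of the splitting step.

def soft_threshold(x, lam):
    """argmin over z of 0.5*(z - x)**2 + lam*|z|"""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```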
Image data compression investigation
NASA Technical Reports Server (NTRS)
Myrie, Carlos
1989-01-01
NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two of these techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing the two coding techniques.
Progressive Transmission and Compression of Images
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1996-01-01
We describe an image data compression strategy featuring progressive transmission. The method exploits subband coding and arithmetic coding for compression. We analyze the Laplacian probability density, which closely approximates the statistics of individual subbands, to determine a strategy for ordering the compressed subband data in a way that improves rate-distortion performance. Results are presented for a test image.
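Since the abstract models each subband's coefficients as Laplacian, a small sketch of how that model can drive ordering is easy to give. This is an illustration of the idea, not Kiely's actual ordering rule: estimate each subband's Laplacian scale by its mean absolute value (the maximum-likelihood estimate) and transmit higher-energy subbands first.

```python
# Rank subbands by their estimated Laplacian scale parameter.

def laplacian_scale(samples):
    """MLE of b for the Laplacian density exp(-|x|/b) / (2b)."""
    return sum(abs(s) for s in samples) / len(samples)

def subband_order(subbands):
    """Indices of subbands, largest estimated scale first."""
    return sorted(range(len(subbands)),
                  key=lambda i: laplacian_scale(subbands[i]), reverse=True)
```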
Compressive sensing in medical imaging
Graff, Christian G.; Sidky, Emil Y.
2015-01-01
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
Image quality (IQ) guided multispectral image compression
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik
2016-05-01
Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics), and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on grayscale thermal (long-wave infrared) images showed very promising results.
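Two of the IQ metrics named above have exact one-line definitions. A sketch for flattened 8-bit images follows; SSIM is omitted because it requires local windowed statistics.

```python
# RMSE and PSNR for two equally sized pixel sequences (8-bit peak).

import math

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * math.log10(peak / e)
```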
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
Compressive Sensing for Quantum Imaging
NASA Astrophysics Data System (ADS)
Howland, Gregory A.
This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon-counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging: simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged, including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. The technique gives a theoretical speedup of N^2/log N for N-dimensional entanglement over the standard raster scanning technique.
Perceptual Image Compression in Telemedicine
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
1996-01-01
The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to Earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications.
Compressive passive millimeter wave imager
Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C
2015-01-27
A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
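The Hadamard-mask measurement model above is simple to sketch: each measurement is the inner product of the scene with one Hadamard row. With the full set of rows the scene is recovered exactly as H^T y / N; the compressive case described in the abstract keeps only a subset of rows and substitutes a sparse-recovery solver, which is not shown in this toy.

```python
# Hadamard measurement and exact full-rank reconstruction.

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-v for v in r] for r in H]
    return H

def measure(H, scene):
    """One measurement per mask row: <row, scene>."""
    return [sum(h * s for h, s in zip(row, scene)) for row in H]

def reconstruct(H, y):
    """Invert via orthogonality: x = H^T y / n."""
    n = len(H)
    return [sum(H[k][i] * y[k] for k in range(n)) / n for i in range(n)]
```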
Compressive Imaging via Approximate Message Passing
2015-09-04
We propose novel compressive imaging algorithms that employ approximate message passing (AMP), which is an iterative signal estimation algorithm that...Approved for Public Release; Distribution Unlimited Final Report: Compressive Imaging via Approximate Message Passing The views, opinions and/or findings...Research Triangle Park, NC 27709-2211 approximate message passing , compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction
A Compressed Terahertz Imaging Method
NASA Astrophysics Data System (ADS)
Zhang, Man; Pan, Rui; Xiong, Wei; He, Ting; Shen, Jing-Ling
2012-10-01
A compressed terahertz imaging method using a terahertz time domain spectroscopy system (THz-TDSS) is suggested and demonstrated. In the method, a parallel THz wave with a beam diameter of 4 cm from a usual THz-TDSS is used, and a square-shaped 2D echelon is placed in front of the imaged object. We confirm both in simulation and in experiment that only one terahertz time domain spectrum is needed to image the object. The image information is obtained from the compressed THz signal by deconvolution signal processing, and therefore the whole imaging time is greatly reduced in comparison with some other pulsed THz imaging methods. The present method will hopefully be used in real-time imaging.
Imaging of venous compression syndromes
Ganguli, Suvranu; Ghoshhajra, Brian B.; Gupta, Rajiv; Prabhakar, Anand M.
2016-01-01
Venous compression syndromes are a unique group of disorders characterized by anatomical extrinsic venous compression, typically in young and otherwise healthy individuals. While uncommon, they may cause serious complications including pain, swelling, deep venous thrombosis (DVT), pulmonary embolism, and post-thrombotic syndrome. The major disease entities are May-Thurner syndrome (MTS), variant iliac vein compression syndrome (IVCS), venous thoracic outlet syndrome (VTOS)/Paget-Schroetter syndrome, nutcracker syndrome (NCS), and popliteal venous compression (PVC). In this article, we review the key clinical features, multimodality imaging findings, and treatment options of these disorders. Emphasis is placed on the growing role of noninvasive imaging options such as magnetic resonance venography (MRV) in facilitating early and accurate diagnosis and tailored intervention.
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.
1981-01-01
The compression technique calculates an activity estimator for each segment of an image line. The estimator is used in conjunction with the allowable bits per line, N, to determine the number of bits necessary to code each segment and which segments can tolerate truncation. The preprocessed line data are then passed to an adaptive variable-length coder, which selects the optimum transmission code. The method increases the capacity of broadcast and cable television transmissions and helps reduce the size of the storage medium for video and digital audio recordings.
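A hedged sketch of the activity-estimator idea (illustrative only, not the NTRS algorithm): score each segment of a line by its summed absolute differences, then split the per-line bit budget in proportion, so that low-activity segments absorb any truncation.

```python
# Per-segment activity score and proportional bit allocation.

def activity(segment):
    """Summed absolute first differences: a crude busyness measure."""
    return sum(abs(segment[i + 1] - segment[i])
               for i in range(len(segment) - 1))

def allocate_bits(segments, total_bits):
    """Split the line's bit budget in proportion to activity."""
    acts = [activity(s) for s in segments]
    total = sum(acts) or 1                 # avoid dividing by zero
    return [round(total_bits * a / total) for a in acts]
```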
Study on Huber fractal image compression.
Jeng, Jyh-Horng; Tseng, Chun-Chieh; Hsieh, Jer-Guang
2009-05-01
In this paper, a new similarity measure for fractal image compression (FIC) is introduced. In the proposed Huber fractal image compression (HFIC), the linear Huber regression technique from robust statistics is embedded into the encoding procedure of the fractal image compression. When the original image is corrupted by noise, we argue that the fractal image compression scheme should be insensitive to the noise present in the corrupted image. This leads to a new concept of robust fractal image compression. The proposed HFIC is one of our attempts toward the design of robust fractal image compression. The main disadvantage of HFIC is its high computational cost. To overcome this drawback, a particle swarm optimization (PSO) technique is utilized to reduce the searching time. Simulation results show that the proposed HFIC is robust against outliers in the image. Also, the PSO method can effectively reduce the encoding time while retaining the quality of the retrieved image.
Correlation and image compression for limited-bandwidth CCD.
Thompson, Douglas G.
2005-07-01
As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.
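The study's lossless baseline is zlib, and a lossless compression ratio is simple to measure concretely, with the round trip checked:

```python
# Measure a zlib compression ratio and verify the lossless round trip.

import zlib

def lossless_ratio(data, level=9):
    packed = zlib.compress(data, level)
    assert zlib.decompress(packed) == data   # lossless: exact recovery
    return len(data) / len(packed)
```

Highly redundant inputs give large ratios; already-compressed or noisy data may even expand slightly.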
[Realization of DICOM medical image compression technology].
Wang, Chenxi; Wang, Quan; Ren, Haiping
2013-05-01
This paper introduces an implementation method for DICOM medical image compression technology. The image part of a DICOM file is extracted and converted to BMP format, while the non-image information in the DICOM file is stored as text. When the final JPEG-standard image and the non-image information are encapsulated back into a DICOM-format image, compression of the medical image is realized, which benefits image storage and transmission.
High compression image and image sequence coding
NASA Technical Reports Server (NTRS)
Kunt, Murat
1989-01-01
The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.
Psychophysical rating of image compression techniques
NASA Technical Reports Server (NTRS)
Stein, Charles S.; Hitchner, Lewis E.; Watson, Andrew B.
1989-01-01
Image compression schemes abound with little work which compares their bit-rate performance based on subjective fidelity measures. Statistical measures of image fidelity, such as squared error measures, do not necessarily correspond to subjective measures of image fidelity. Most previous comparisons of compression techniques have been based on these statistical measures. A psychophysical method has been used to estimate, for a number of compression techniques, a threshold bit-rate yielding a criterion level of performance in discriminating original and compressed images. The compression techniques studied include block truncation, Laplacian pyramid, block discrete cosine transform, with and without a human visual system scaling, and cortex transform coders.
Image data compression having minimum perceptual error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1995-01-01
A method for performing image compression that eliminates redundant and invisible image components is described. The image compression uses a Discrete Cosine Transform (DCT) and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts or customizes the quantization matrix to the image being compressed. The quantization matrix comprises visual masking by luminance and contrast techniques and by an error pooling technique all resulting in a minimum perceptual error for any given bit rate, or minimum bit rate for a given perceptual error.
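The quantization step the patent customizes is worth making concrete: each DCT coefficient is divided by its quantization-matrix entry and rounded, so a larger entry means coarser (ideally less visible) quantization. A 1-D DCT-II is used below for brevity, as a hedged sketch only; the patent works on 8x8 blocks with perceptually derived matrices.

```python
# 1-D DCT-II followed by quantization against a matrix of step sizes.

import math

def dct(x):
    """Unnormalized 1-D DCT-II."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def quantize(coeffs, qmat):
    """Divide each coefficient by its step size and round."""
    return [round(c / q) for c, q in zip(coeffs, qmat)]
```

Adapting the step sizes per image, as the invention does, shifts bits toward the coefficients the eye actually notices.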
Image coding compression based on DCT
NASA Astrophysics Data System (ADS)
Feng, Fei; Liu, Peixue; Jiang, Baohua
2012-04-01
With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are desirable, but they consume more storage space and more bandwidth when transferred over the Internet, so it is necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is presented, a technology widely used to realize image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of DCT-based image compression and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated in Matlab, and the quality of the compressed picture is analyzed. The DCT is not the only algorithm for realizing image compression, and further algorithms can be expected to produce compressed images of high quality; image compression technology will be widely used in networks and communications in the future.
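The dissertation pairs the DCT with Huffman coding; the standard heap-based construction of a Huffman code table is compact enough to show whole, as a generic sketch of that second stage:

```python
# Build a Huffman code table: repeatedly merge the two least
# frequent subtrees, prefixing their codes with 0 and 1.

import heapq
from collections import Counter

def huffman_codes(data):
    freq = Counter(data)
    if len(freq) == 1:                     # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: [weight, tiebreak index, {symbol: code-so-far}]
    heap = [[n, i, {sym: ""}] for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)           # two least frequent subtrees
        hi = heapq.heappop(heap)
        codes = {s: "0" + c for s, c in lo[2].items()}
        codes.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], codes])
    return heap[0][2]
```

More frequent symbols receive shorter codes, which is where the bit savings after the DCT's energy compaction come from.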
Digital image compression in dermatology: format comparison.
Guarneri, F; Vaccaro, M; Guarneri, C
2008-09-01
Digital image compression (reduction of the amount of numeric data needed to represent a picture) is widely used in electronic storage and transmission devices. Few studies have compared the suitability of the different compression algorithms for dermatologic images. We aimed at comparing the performance of four popular compression formats, Tagged Image File (TIF), Portable Network Graphics (PNG), Joint Photographic Expert Group (JPEG), and JPEG2000 on clinical and videomicroscopic dermatologic images. Nineteen (19) clinical and 15 videomicroscopic digital images were compressed using JPEG and JPEG2000 at various compression factors and TIF and PNG. TIF and PNG are "lossless" formats (i.e., without alteration of the image), JPEG is "lossy" (the compressed image has a lower quality than the original), JPEG2000 has a lossless and a lossy mode. The quality of the compressed images was assessed subjectively (by three expert reviewers) and quantitatively (by measuring, point by point, the color differences from the original). Lossless JPEG2000 (49% compression) outperformed the other lossless algorithms, PNG and TIF (42% and 31% compression, respectively). Lossy JPEG2000 compression was slightly less efficient than JPEG, but preserved image quality much better, particularly at higher compression factors. For its good quality and compression ratio, JPEG2000 appears to be a good choice for clinical/videomicroscopic dermatologic image compression. Additionally, its diffusion and other features, such as the possibility of embedding metadata in the image file and to encode various parts of an image at different compression levels, make it perfectly suitable for the current needs of dermatology and teledermatology.
Image compression requirements and standards in PACS
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.
1995-05-01
Cost-effective telemedicine and storage create a need for medical image compression. Compression saves communication bandwidth and reduces the size of the stored images. After clinicians become acquainted with the quality of the images using some of the newer algorithms, they accept the idea of lossy compression. The older algorithms, JPEG and MPEG in particular, are generally not adequate for high-quality compression of medical images. The requirements for compression of medical images center on diagnostic-quality images after restoration. The compression artifacts should not interfere with the viewing of the images for diagnosis. New requirements for compression arise from the fact that the images will likely be viewed on a computer workstation, where the images may be manipulated in ways that would bring out the artifacts. A medical imaging compression standard must be applicable across a large variety of image types, from CT and MR to CR and ultrasound. It is desirable to have one, or a very few, compression algorithms that are effective across a broad range of image types. Related series of images, as for CT, MR, or cardiology, require inter-image as well as intra-image processing for effective compression. Two preferred decompositions of the medical images are lapped orthogonal transforms and wavelet transforms. These transforms decompose the images in frequency in two different ways. The lapped orthogonal transform groups the data according to the area where the data originated, while the wavelet transform groups the data by the frequency band of the image. The compression realized depends on the similarity of close transform coefficients. Huffman coding or the coding of the Rice algorithm is a beginning for the encoding. To be really effective, the coding must have an extension for the areas where there is little information, the low-entropy extension. In these areas there is less than one bit per pixel and multiple pixels must be
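The Huffman coding mentioned as a starting point for the encoding can be sketched as follows (a minimal illustration; names are ours, and a production coder would add the low-entropy extension the abstract describes):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Return a prefix-free code table {symbol: bitstring} for `data`."""
    freq = Counter(data)
    if len(freq) == 1:                 # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap items: (frequency, tiebreak, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        # merge the two least frequent subtrees, prefixing their codes
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def encode(data, codes):
    return "".join(codes[s] for s in data)

table = huffman_codes("aaaabbc")
bits = encode("aaaabbc", table)
```

Frequent symbols receive short codes (here "a" gets a 1-bit code), which is exactly why low-entropy regions of an image need special handling: plain Huffman cannot go below one bit per symbol.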
Simultaneous denoising and compression of multispectral images
NASA Astrophysics Data System (ADS)
Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.
2013-01-01
A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.
Studies on image compression and image reconstruction
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Nori, Sekhar; Araj, A.
1994-01-01
During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Masters thesis. In the thesis we describe application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all have a common property: they use past data to encode future data. This is done either by taking differences, by context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.
Image Compression: Making Multimedia Publishing a Reality.
ERIC Educational Resources Information Center
Anson, Louisa
1993-01-01
Describes the new Fractal Transform technology, a method of compressing digital images to represent images as seen by the mind's eye. The International Organization for Standardization (ISO) standards for compressed image formats are discussed in relationship to Fractal Transform, and it is compared with Discrete Cosine Transform. Thirteen figures…
[Statistical study of the wavelet-based lossy medical image compression technique].
Puniene, Jūrate; Navickas, Ramūnas; Punys, Vytenis; Jurkevicius, Renaldas
2002-01-01
Medical digital images have informational redundancy. Both the amount of memory for image storage and their transmission time could be reduced if image compression techniques are applied. The techniques are divided into two groups: lossless (compression ratio does not exceed 3 times) and lossy ones. Compression ratio of lossy techniques depends on visibility of distortions. It is a variable parameter and it can exceed 20 times. A compression study was performed to evaluate the compression schemes, which were based on the wavelet transform. The goal was to develop a set of recommendations for an acceptable compression ratio for different medical image modalities: ultrasound cardiac images and X-ray angiographic images. The acceptable image quality after compression was evaluated by physicians. Statistical analysis of the evaluation results was used to form a set of recommendations.
Image Data Compression Having Minimum Perceptual Error
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1997-01-01
A method is presented for performing color or grayscale image compression that eliminates redundant and invisible image components. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques, resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for any given perceptual error.
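The quantization-matrix step described above can be illustrated as follows; the matrix entries here are arbitrary placeholders, not the perceptually derived values of the patented method:

```python
def quantize(coeffs, qmatrix):
    """Quantize DCT coefficients with a visibility-weighted matrix:
    a larger entry discards more of a (presumably less visible) frequency."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmatrix)]

def dequantize(levels, qmatrix):
    """Approximate reconstruction: scale the integer levels back up."""
    return [[l * q for l, q in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, qmatrix)]

# 2x2 toy example: DC and three AC coefficients with illustrative step sizes
coeffs = [[80.0, 12.0], [-5.0, 3.0]]
q = [[16, 11], [12, 14]]          # placeholder entries, not a standard table
levels = quantize(coeffs, q)
recovered = dequantize(levels, q)
```

Small AC coefficients quantize to zero and cost almost nothing to encode; the quantization matrix is thus the knob that trades perceived quality against bit rate.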
Image compression based on GPU encoding
NASA Astrophysics Data System (ADS)
Bai, Zhaofeng; Qiu, Yuehong
2015-07-01
With the rapid development of digital technology, data volumes have increased greatly for both static images and dynamic video. How to reduce redundant data, so that information can be stored and transmitted more efficiently, is a notable problem, and research on image compression has therefore become more and more important. Using the GPU to achieve a higher compression ratio offers advantages for interactive remote visualization. In contrast to the CPU, the GPU may be a good way to accelerate image compression. Currently, NVIDIA GPUs have evolved to the eighth generation, which increasingly dominates the high-powered general-purpose computing field. This paper explains how images are encoded on the GPU. Some experimental results are also presented.
Image compression algorithm using wavelet transform
NASA Astrophysics Data System (ADS)
Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory
2016-09-01
Within the multi-resolution analysis, the study of the image compression algorithm using the Haar wavelet has been performed. We have studied the dependence of the image quality on the compression ratio. Also, the variation of the compression level of the studied image has been obtained. It is shown that a compression ratio in the range of 8-10 is optimal for environmental monitoring. Under these conditions the compression level is in the range of 1.7-4.2, depending on the type of image. It is shown that the algorithm used is more convenient and has more advantages than WinRAR. The Haar wavelet algorithm has improved the method of signal and image processing.
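One level of the Haar transform used in such a study, together with the lossy step of zeroing small detail coefficients, might look like the sketch below (1-D for brevity; on images the same step is applied to rows and then columns, and the threshold here is an arbitrary assumption):

```python
import math

def haar_forward(signal):
    """One level of the orthonormal Haar transform: averages + details."""
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

sig = [4.0, 4.0, 6.0, 8.0]
approx, detail = haar_forward(sig)
# the lossy "compression" step: discard detail coefficients below a threshold
kept = [d if abs(d) > 0.5 else 0.0 for d in detail]
rec = haar_inverse(approx, kept)
```

The compression ratio is controlled by how aggressively detail coefficients are discarded, which is exactly the quality-versus-ratio dependence the paper measures.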
Digital Image Compression Using Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Serra-Ricart, M.; Garrido, L.; Gaitan, V.; Aloy, A.
1993-01-01
The problem of storing, transmitting, and manipulating digital images is considered. Because of the file sizes involved, large amounts of digitized image information are becoming common in modern projects. Our goal is to describe an image compression transform coder based on artificial neural network techniques (NNCTC). To assess the reliability of the NNCTC, the compression results obtained from digital astronomical images by the NNCTC are compared with those of the method used in the compression of the digitized sky survey from the Space Telescope Science Institute, which is based on the H-transform.
An image-data-compression algorithm
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Rice, R. F.
1981-01-01
Cluster Compression Algorithm (CCA) preprocesses Landsat image data immediately following satellite data sensor (receiver). Data are reduced by extracting pertinent image features and compressing this result into concise format for transmission to ground station. This results in narrower transmission bandwidth, increased data-communication efficiency, and reduced computer time in reconstructing and analyzing image. Similar technique could be applied to other types of recorded data to cut costs of transmitting, storing, distributing, and interpreting complex information.
High-performance compression of astronomical images
NASA Technical Reports Server (NTRS)
White, Richard L.
1993-01-01
Astronomical images have some rather unusual characteristics that make many existing image compression techniques either ineffective or inapplicable. A typical image consists of a nearly flat background sprinkled with point sources and occasional extended sources. The images are often noisy, so that lossless compression does not work very well; furthermore, the images are usually subjected to stringent quantitative analysis, so any lossy compression method must be proven not to discard useful information, but must instead discard only the noise. Finally, the images can be extremely large. For example, the Space Telescope Science Institute has digitized photographic plates covering the entire sky, generating 1500 images each having 14000 x 14000 16-bit pixels. Several astronomical groups are now constructing cameras with mosaics of large CCD's (each 2048 x 2048 or larger); these instruments will be used in projects that generate data at a rate exceeding 100 MBytes every 5 minutes for many years. An effective technique for image compression may be based on the H-transform (Fritze et al. 1977). The method that we have developed can be used for either lossless or lossy compression. The digitized sky survey images can be compressed by at least a factor of 10 with no noticeable losses in the astrometric and photometric properties of the compressed images. The method has been designed to be computationally efficient: compression or decompression of a 512 x 512 image requires only 4 seconds on a Sun SPARCstation 1. The algorithm uses only integer arithmetic, so it is completely reversible in its lossless mode, and it could easily be implemented in hardware for space applications.
An algorithm for compression of bilevel images.
Reavy, M D; Boncelet, C G
2001-01-01
This paper presents the block arithmetic coding for image compression (BACIC) algorithm: a new method for lossless bilevel image compression which can replace JBIG, the current standard for bilevel image compression. BACIC uses the block arithmetic coder (BAC): a simple, efficient, easy-to-implement, variable-to-fixed arithmetic coder, to encode images. BACIC models its probability estimates adaptively based on a 12-bit context of previous pixel values; the 12-bit context serves as an index into a probability table whose entries are used to compute p(1) (the probability of a bit equaling one), the probability measure BAC needs to compute a codeword. In contrast, the Joint Bilevel Image Experts Group (JBIG) uses a patented arithmetic coder, the IBM QM-coder, to compress image data and a predetermined probability table to estimate its probability measures. JBIG, though, has not yet been commercially implemented; instead, JBIG's predecessor, the Group 3 fax (G3), continues to be used. BACIC achieves compression ratios comparable to JBIG's and is introduced as an alternative to the JBIG and G3 algorithms. BACIC's overall compression ratio is 19.0 for the eight CCITT test images (compared to JBIG's 19.6 and G3's 7.7), is 16.0 for 20 additional business-type documents (compared to JBIG's 16.0 and G3's 6.74), and is 3.07 for halftone images (compared to JBIG's 2.75 and G3's 0.50).
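The context-based probability modeling that BACIC relies on can be illustrated with a much smaller context than its 12-bit one (this toy model with Laplace smoothing is our simplification, not BACIC's actual table update rule):

```python
def adaptive_p1(bits, context_size=3):
    """Estimate p(1) for each bit from the counts seen so far in the same
    context (here simply the previous `context_size` bits)."""
    counts = {}                           # context -> (zeros, ones)
    estimates = []
    ctx = (0,) * context_size             # assume leading zeros off the edge
    for b in bits:
        zeros, ones = counts.get(ctx, (1, 1))   # Laplace prior: start at 1/1
        estimates.append(ones / (zeros + ones))
        counts[ctx] = (zeros + (b == 0), ones + (b == 1))
        ctx = ctx[1:] + (b,)
    return estimates

est = adaptive_p1([1, 1, 1, 1, 0, 1, 1, 1])
```

An arithmetic (or block arithmetic) coder then spends roughly -log2(p) bits per symbol, so skewed estimates in predictable contexts are what produce the compression.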
Context-Aware Image Compression
Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram
2016-01-01
We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of the warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904
Cloud Optimized Image Format and Compression
NASA Astrophysics Data System (ADS)
Becker, P.; Plesea, L.; Maurer, T.
2015-04-01
Cloud-based image storage and processing require re-evaluation of formats and processing methods. For the true value of the massive volumes of Earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These assumptions no longer truly hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
Block adaptive rate controlled image data compression
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.
1979-01-01
A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
Iris Recognition: The Consequences of Image Compression
NASA Astrophysics Data System (ADS)
Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig
2010-12-01
Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.
Image quality, compression and segmentation in medicine.
Morgan, Pam; Frankish, Clive
2002-12-01
This review considers image quality in the context of the evolving technology of image compression, and the effects image compression has on perceived quality. The concepts of lossless, perceptually lossless, and diagnostically lossless but lossy compression are described, as well as the possibility of segmented images, combining lossy compression with perceptually lossless regions of interest. The different requirements for diagnostic and training images are also discussed. The lack of established methods for image quality evaluation is highlighted and available methods discussed in the light of the information that may be inferred from them. Confounding variables are also identified. Areas requiring further research are illustrated, including differences in perceptual quality requirements for different image modalities, image regions, diagnostic subtleties, and tasks. It is argued that existing tools for measuring image quality need to be refined and new methods developed. The ultimate aim should be the development of standards for image quality evaluation which take into consideration both the task requirements of the images and the acceptability of the images to the users.
Lossless wavelet compression on medical image
NASA Astrophysics Data System (ADS)
Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong
2006-09-01
An increasing amount of medical imagery is created directly in digital form. Systems such as Picture Archiving and Communication Systems (PACS), as well as telemedicine networks, require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods both for lossy (irreversible) and lossless (reversible) image compression are proposed in the literature. The recent advances in lossy compression techniques include different methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order 50:1, or even more), they do not allow reconstructing exactly the original version of the input data. Lossless compression techniques permit the perfect reconstruction of the original image, but the achievable compression ratios are only of the order 2:1, up to 4:1. In our paper, we use a kind of lifting scheme to generate truly lossless, non-linear, integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm that produces an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met. Similarly, the decoder can interrupt the decoding process at any point in the bit stream, and still reconstruct the image. Therefore, a compression scheme generating an embedded code can
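A lifting step that yields a truly integer-to-integer (and hence losslessly reversible) Haar-type transform, as described above, can be sketched as follows (the S-transform form shown is a standard textbook example; variable names are ours):

```python
def s_transform(pairs):
    """Integer-to-integer Haar ('S') transform via lifting:
    d = x0 - x1, a = x1 + floor(d / 2).  Exactly reversible."""
    out = []
    for x0, x1 in pairs:
        d = x0 - x1
        a = x1 + (d >> 1)      # arithmetic shift = floor division, even for negatives
        out.append((a, d))
    return out

def s_inverse(pairs):
    """Undo the lifting steps in reverse order; no rounding error accumulates."""
    out = []
    for a, d in pairs:
        x1 = a - (d >> 1)
        x0 = x1 + d
        out.append((x0, x1))
    return out

coeffs = s_transform([(7, 3), (2, 9)])
restored = s_inverse(coeffs)
```

Because every step is integer arithmetic undone exactly by its inverse, the transform supports the lossless mode required for diagnostic imagery, unlike a floating-point wavelet.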
Postprocessing of Compressed Images via Sequential Denoising
NASA Astrophysics Data System (ADS)
Dar, Yehuda; Bruckstein, Alfred M.; Elad, Michael; Giryes, Raja
2016-07-01
In this work we propose a novel postprocessing technique for compression-artifact reduction. Our approach is based on posing this task as an inverse problem, with a regularization that leverages on existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, suggesting the solution of general inverse problems via Alternating Direction Method of Multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature in our scheme is a linearization of the compression-decompression process, so as to get a formulation that can be optimized. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. Specifically, we demonstrate impressive gains in image quality for several leading compression methods - JPEG, JPEG2000, and HEVC.
Data compression for satellite images
NASA Technical Reports Server (NTRS)
Chen, P. H.; Wintz, P. A.
1976-01-01
An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background-skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
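In its simplest form, double delta coding differences the scan line twice, so that smoothly varying data produce runs of near-zero codes that are cheap to entropy-code. A generic sketch of second-order differencing (not necessarily the paper's exact scheme):

```python
def delta(seq):
    """First value kept, then successive differences."""
    return [seq[0]] + [b - a for a, b in zip(seq, seq[1:])]

def undelta(seq):
    """Inverse of delta(): running sum."""
    out = [seq[0]]
    for d in seq[1:]:
        out.append(out[-1] + d)
    return out

def double_delta_encode(samples):
    return delta(delta(samples))

def double_delta_decode(codes):
    return undelta(undelta(codes))

row = [100, 102, 104, 107, 110]   # a smooth ramp, as in a bland image region
codes = double_delta_encode(row)
```

A near-linear ramp maps to mostly zeros after two differencing passes, which is why correlation between adjacent picture elements translates directly into compression.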
Hyperspectral image data compression based on DSP
NASA Astrophysics Data System (ADS)
Fan, Jiming; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin
2010-11-01
The huge data volume of hyperspectral images challenges their transmission and storage. It is necessary to find an effective method to compress the hyperspectral image. Through analysis and comparison of various current algorithms, a mixed compression algorithm based on prediction, integer wavelet transform, and embedded zero-tree wavelet (EZW) coding is proposed in this paper. We adopt a high-powered Digital Signal Processor (DSP), the TMS320DM642, to realize the proposed algorithm. By modifying the mixed algorithm and optimizing its implementation, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm runs much faster on the DSP than on a personal computer. The proposed method can achieve nearly real-time compression with excellent image quality and compression performance.
Novel wavelet coder for color image compression
NASA Astrophysics Data System (ADS)
Wang, Houng-Jyh M.; Kuo, C.-C. Jay
1997-10-01
A new still image compression algorithm based on the multi-threshold wavelet coding (MTWC) technique is proposed in this work. It is an embedded wavelet coder in the sense that its compression ratio can be controlled depending on the bandwidth requirement of image transmission. At low bit rates, MTWC can avoid the blocking artifacts of JPEG, resulting in better reconstructed image quality. A subband decision scheme is developed based on rate-distortion theory to enhance the image fidelity. Moreover, a new quantization sequence order is introduced based on our analysis of error energy reduction in significance and refinement maps. Experimental results are given to demonstrate the superior performance of the proposed algorithm: high reconstructed quality for color and gray-level image compression with low computational complexity. Generally speaking, it gives a better rate-distortion tradeoff and runs faster than most existing state-of-the-art wavelet coders.
A New Approach for Fingerprint Image Compression
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Also, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artifacts which appear even at low compression rates. Therefore, in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits, even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to bit allocation that seems to make more sense where theory is concerned. We then describe some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
Issues in multiview autostereoscopic image compression
NASA Astrophysics Data System (ADS)
Shah, Druti; Dodgson, Neil A.
2001-06-01
Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
Compressive line sensing underwater imaging system
NASA Astrophysics Data System (ADS)
Ouyang, Bing; Dalgleish, Fraser R.; Caimi, Frank M.; Giddings, Thomas E.; Britton, Walter; Vuorenkoski, Anni K.; Nootz, Gero
2014-05-01
Compressive sensing (CS) theory has drawn great interest and led to new imaging techniques in many different fields. Over the last few years, the authors have conducted extensive research on CS-based active electro-optical imaging in a scattering medium, such as the underwater environment. This paper proposes a compressive line sensing underwater imaging system that is more compatible with conventional underwater survey operations. This new imaging system builds on our frame-based CS underwater laser imager concept, which is more advantageous for hover-capable platforms. We contrast features of CS underwater imaging with those of traditional underwater electro-optical imaging and highlight some advantages of the CS approach. Simulation and initial underwater validation test results are also presented.
FBI compression standard for digitized fingerprint images
NASA Astrophysics Data System (ADS)
Brislawn, Christopher M.; Bradley, Jonathan N.; Onyshczak, Remigius J.; Hopper, Thomas
1996-11-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
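As a rough illustration of the wavelet/scalar quantization (WSQ) idea in the abstract above, the sketch below applies a uniform midtread scalar quantizer to a hypothetical subband of wavelet coefficients. The step size and coefficients are illustrative; the actual standard adapts the quantizer per subband:

```python
# Uniform midtread scalar quantization of one wavelet subband (illustrative).
# Quantization maps coefficients to small integers; dequantization reconstructs
# approximate values, which is where the (controlled) loss occurs.

def quantize(coeffs, step):
    """Map each coefficient to the nearest multiple of `step` (as an integer index)."""
    return [int(round(c / step)) for c in coeffs]

def dequantize(indices, step):
    """Reconstruct approximate coefficient values from quantizer indices."""
    return [q * step for q in indices]

subband = [0.3, -4.2, 7.9, 0.1, -12.5]
step = 2.0  # in the real standard the step is adapted per subband
q = quantize(subband, step)
print(q)  # [0, -2, 4, 0, -6]
rec = dequantize(q, step)  # small coefficients collapse to zero, aiding compression
```

The many zero indices produced for low-energy coefficients are what the subsequent run-length and Huffman stages compress so effectively.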
Compression of gray-scale fingerprint images
NASA Astrophysics Data System (ADS)
Hopper, Thomas
1994-03-01
The FBI has developed a specification for the compression of gray-scale fingerprint images to support paperless identification services within the criminal justice community. The algorithm is based on a scalar quantization of a discrete wavelet transform decomposition of the images, followed by zero run encoding and Huffman encoding.
Compressive hyperspectral and multispectral imaging fusion
NASA Astrophysics Data System (ADS)
Espitia, Óscar; Castillo, Sergio; Arguello, Henry
2016-05-01
Image fusion is a valuable framework that combines two or more images of the same scene from one or multiple sensors, improving the resolution of the images and increasing the interpretable content. In remote sensing, a common fusion problem consists of merging hyperspectral (HS) and multispectral (MS) images, which involve large amounts of redundant data and ignore the highly correlated structure of the datacube along the spatial and spectral dimensions. Compressive HS and MS systems compress the spectral data in the acquisition step, reducing data redundancy through different sampling patterns. This work presents a compressed HS and MS image fusion approach that uses a high-dimensional joint sparse model. The joint sparse model is formulated by combining the HS and MS compressive acquisition models. The high spectral and spatial resolution image is reconstructed by using sparse optimization algorithms. Different fusion spectral image scenarios are used to explore the performance of the proposed scheme. Several simulations with synthetic and real datacubes show promising results, as the reliable reconstruction of a high spectral and spatial resolution image can be achieved using as little as 50% of the datacube.
An Analog Processor for Image Compression
NASA Technical Reports Server (NTRS)
Tawel, R.
1992-01-01
This paper describes a novel analog Vector Array Processor (VAP) that was designed for use in real-time and ultra-low-power image compression applications. This custom CMOS processor is based architecturally on the Vector Quantization (VQ) algorithm in image coding, and the hardware implementation fully exploits the parallelism inherent in the VQ algorithm.
Universal lossless compression algorithm for textual images
NASA Astrophysics Data System (ADS)
al Zahir, Saif
2012-03-01
In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. My research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results, including those for JBIG2.
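The "row and column elimination" idea can be illustrated with a toy sketch. This is a guess at the scheme's spirit only (dropping all-background rows and columns of a binary textual image and recording which were removed); the published algorithm may differ:

```python
# Hypothetical row/column elimination for a binary textual image:
# drop all-background rows and columns, keeping boolean maps of which survived.
# Illustrative only -- not the published coding scheme.

def eliminate(image):
    """image: list of rows of 0 (background) / 1 (ink) pixels."""
    keep_rows = [any(row) for row in image]
    keep_cols = [any(image[r][c] for r in range(len(image)))
                 for c in range(len(image[0]))]
    core = [[image[r][c] for c in range(len(image[0])) if keep_cols[c]]
            for r in range(len(image)) if keep_rows[r]]
    return keep_rows, keep_cols, core

img = [
    [0, 0, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 0, 0],
]
rows, cols, core = eliminate(img)
print(core)  # [[1, 1], [1, 0]]
```

Only the small `core` plus the two boolean maps need to be stored, which is the source of the compression gain for sparse textual images.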
Multidimensional imaging using compressive Fresnel holography.
Horisaki, Ryoichi; Tanida, Jun; Stern, Adrian; Javidi, Bahram
2012-06-01
We propose a generalized framework for single-shot acquisition of multidimensional objects using compressive Fresnel holography. A multidimensional object with spatial, spectral, and polarimetric information is propagated with Fresnel diffraction, and the propagated signal of each channel is observed by an image sensor with randomly arranged optical elements for filtering. The object data are reconstructed using a compressive sensing algorithm. This scheme is verified with numerical experiments. The proposed framework can be applied to spectral, polarimetric, and other imaging modalities.
The effect of lossy image compression on image classification
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have classified four different images, under various levels of JPEG compression, using the following classification algorithms: minimum-distance, maximum-likelihood, and neural network. The training site accuracy and percent difference from the original classification were tabulated for each image compression level, with maximum-likelihood showing the poorest results. In general, as compression ratio increased, the classification retained its overall appearance, but much of the pixel-to-pixel detail was eliminated. We also examined the effect of compression on spatial pattern detection using a neural network.
Robust object tracking in compressed image sequences
NASA Astrophysics Data System (ADS)
Mujica, Fernando; Murenzi, Romain; Smith, Mark J.; Leduc, Jean-Pierre
1998-10-01
Accurate object tracking is important in defense applications where an interceptor missile must home in on a target and track it through the pursuit until the strike occurs. The expense associated with an interceptor missile can be reduced through a distributed processing arrangement in which the computing platform on which the tracking algorithm runs resides on the ground, and the interceptor need only carry the sensor and communications equipment as part of its electronics complement. In this arrangement, the sensor images are compressed and transmitted to the ground, the compression facilitating real-time downloading of the data over the available bandlimited channels. The tracking algorithm is run on a ground-based computer while tracking results are transmitted back to the interceptor as soon as they become available. Compression and transmission in this scenario introduce distortion. If severe, these distortions can lead to erroneous tracking results. As a consequence, tracking algorithms employed for this purpose must be robust to compression distortions. In this paper we introduce a robust object tracking algorithm based on the continuous wavelet transform. The algorithm processes image sequence data on a frame-by-frame basis, implicitly taking advantage of temporal history and spatial frame filtering to reduce the impact of compression artifacts. Test results show that tracking performance can be maintained at low transmission bit rates and that the algorithm can be used reliably in conjunction with many well-known image compression algorithms.
MRC for compression of Blake Archive images
NASA Astrophysics Data System (ADS)
Misic, Vladimir; Kraus, Kari; Eaves, Morris; Parker, Kevin J.; Buckley, Robert R.
2002-11-01
The William Blake Archive is part of an emerging class of electronic projects in the humanities that may be described as hypermedia archives. It provides structured access to high-quality electronic reproductions of rare and often unique primary source materials, in this case the work of poet and painter William Blake. Due to the extensive high-frequency content of Blake's paintings (namely, colored engravings), they are not suitable for very efficient compression that meets both rate and distortion criteria at the same time. To resolve that problem, the authors utilized a modified Mixed Raster Content (MRC) compression scheme -- originally developed for compression of compound documents -- for the compression of colored engravings. In this paper, for the first time, we have been able to demonstrate the successful use of the MRC compression approach for the compression of colored, engraved images. Additional, but no less important, benefits of the MRC image representation for Blake scholars are presented: because the applied segmentation method can essentially lift the color overlay of an impression, it provides the student of Blake the unique opportunity to recreate the underlying copperplate image, model the artist's coloring process, and study them separately.
Imaging With Nature: Compressive Imaging Using a Multiply Scattering Medium
Liutkus, Antoine; Martina, David; Popoff, Sébastien; Chardon, Gilles; Katz, Ori; Lerosey, Geoffroy; Gigan, Sylvain; Daudet, Laurent; Carron, Igor
2014-01-01
The recent theory of compressive sensing leverages the structure of signals to acquire them with far fewer measurements than was previously thought necessary, and certainly well below the traditional Nyquist-Shannon sampling rate. However, most implementations developed to take advantage of this framework revolve around controlling the measurements with carefully engineered material or acquisition sequences. Instead, we use the natural randomness of wave propagation through multiply scattering media as an optimal and instantaneous compressive imaging mechanism. Waves reflected from an object are detected after propagation through a well-characterized complex medium. Each local measurement thus contains global information about the object, yielding a purely analog compressive sensing method. We experimentally demonstrate the effectiveness of the proposed approach for optical imaging by using a 300-micrometer-thick layer of white paint as the compressive imaging device. Scattering media are thus promising candidates for designing efficient and compact compressive imagers. PMID:25005695
Multi-shot compressed coded aperture imaging
NASA Astrophysics Data System (ADS)
Shao, Xiaopeng; Du, Juan; Wu, Tengfei; Jin, Zhenhua
2013-09-01
The classical methods of compressed coded aperture (CCA) imaging still require an optical sensor with high resolution, even though the sampling rate has already broken the Nyquist limit. A novel architecture for multi-shot compressed coded aperture imaging (MCCAI) using a low-resolution optical sensor is proposed, based mainly on a 4-f imaging system combined with two spatial light modulators (SLMs) to achieve the compressive imaging goal. The first SLM, employed for random convolution, is placed at the frequency spectrum plane of the 4-f imaging system, while the second SLM, which works as a selecting filter, is positioned in front of the optical sensor. By altering the random coded pattern of the second SLM and sampling, a set of observations can easily be obtained by a low-resolution optical sensor, and these observations are combined mathematically and used to reconstruct the high-resolution image. That is to say, MCCAI aims at realizing super-resolution imaging with multiple random samplings using a low-resolution optical sensor. To improve the computational imaging performance, total variation (TV) regularization is introduced into the super-resolution reconstruction model to suppress artifacts, and the alternating direction method of multipliers (ADMM) is used to solve for the optimal result efficiently. The results show that the MCCAI architecture is suitable for super-resolution computational imaging with a much lower-resolution optical sensor than traditional CCA imaging methods by capturing multiple frame images.
Compressive line sensing underwater imaging system
NASA Astrophysics Data System (ADS)
Ouyang, B.; Dalgleish, F. R.; Vuorenkoski, A. K.; Caimi, F. M.; Britton, W.
2013-05-01
Compressive sensing (CS) theory has drawn great interest and led to new imaging techniques in many different fields. In recent years, the FAU/HBOI OVOL has conducted extensive research on CS-based active electro-optical imaging systems in scattering media such as the underwater environment. The unique features of such a system in comparison with traditional underwater electro-optical imaging systems are discussed. Building upon previous work on a frame-based CS underwater laser imager concept, which is more advantageous for hover-capable platforms such as the Hovering Autonomous Underwater Vehicle (HAUV), this paper proposes a compressive line sensing underwater imaging (CLSUI) system that is more compatible with conventional underwater platforms, where images are formed in whiskbroom fashion. Simulation results are discussed.
Compressive Sensing Image Sensors-Hardware Implementation
Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram
2013-01-01
The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123
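The acquisition model common to the CS imagers reviewed above is y = Φx with M < N measurements of a sparse scene. A toy sketch, with illustrative sizes and a random binary sensing matrix standing in for the optical or electrical encoding hardware:

```python
import random

# Toy compressive acquisition: M < N random binary measurements of a sparse
# scene. Sizes, sparsity, and the sensing matrix are all illustrative.
random.seed(0)  # deterministic for reproducibility

N, M = 16, 6                       # scene size and number of measurements
x = [0.0] * N
x[3], x[11] = 5.0, -2.0            # a 2-sparse scene (hypothetical)

# Random binary sensing matrix, e.g. a programmable mask pattern per measurement
phi = [[random.choice((0, 1)) for _ in range(N)] for _ in range(M)]

# Each measurement is one inner product of a mask row with the scene
y = [sum(phi[m][n] * x[n] for n in range(N)) for m in range(M)]

print(len(y))  # 6 measurements instead of 16 samples
```

A CS reconstruction algorithm would then recover `x` from `y` and `phi` by exploiting the sparsity of `x`; that solver is the part the surveyed hardware leaves to off-chip processing.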
Optical Data Compression in Time Stretch Imaging
Chen, Claire Lifan; Mahjoubfar, Ata; Jalali, Bahram
2015-01-01
Time stretch imaging offers real-time image acquisition at millions of frames per second and subnanosecond shutter speed, and has enabled detection of rare cancer cells in blood with record throughput and specificity. An unintended consequence of high throughput image acquisition is the massive amount of digital data generated by the instrument. Here we report the first experimental demonstration of real-time optical image compression applied to time stretch imaging. By exploiting the sparsity of the image, we reduce the number of samples and the amount of data generated by the time stretch camera in our proof-of-concept experiments by about three times. Optical data compression addresses the big data predicament in such systems. PMID:25906244
Compressive framework for demosaicing of natural images.
Moghadam, Abdolreza Abdolhosseini; Aghagolzadeh, Mohammad; Kumar, Mrityunjay; Radha, Hayder
2013-06-01
Typical consumer digital cameras sense only one out of three color components per image pixel. The problem of demosaicing deals with interpolating those missing color components. In this paper, we present compressive demosaicing (CD), a framework for demosaicing natural images based on the theory of compressed sensing (CS). Given sensed samples of an image, CD employs a CS solver to find the sparse representation of that image under a fixed sparsifying dictionary Ψ. As opposed to state-of-the-art CS-based demosaicing approaches, we consider a clear distinction between the interchannel (color) and interpixel correlations of natural images. Utilizing some well-known facts about the human visual system, those two types of correlations are utilized in a nonseparable format to construct the sparsifying transform Ψ. Our simulation results verify that CD performs better (both visually and in terms of PSNR) than leading demosaicing approaches when applied to the majority of standard test images.
Spectrally Adaptable Compressive Sensing Imaging System
2014-05-01
2D coded projections. The underlying spectral 3D data cube is then recovered using compressed sensing (CS) reconstruction algorithms which assume... introduced in [?], is a remarkable imaging architecture that allows capturing spectral imaging information of a 3D cube with just a single 2D measurement of the coded and spectrally dispersed source field
Directly Estimating Endmembers for Compressive Hyperspectral Images
Xu, Hongwei; Fu, Ning; Qiao, Liyan; Peng, Xiyuan
2015-01-01
The large volume of hyperspectral images (HSI) generated creates huge challenges for transmission and storage, making data compression more and more important. Compressive Sensing (CS) is an effective data compression technology that shows that when a signal is sparse in some basis, only a small number of measurements are needed for exact signal recovery. Distributed CS (DCS) takes advantage of both intra- and inter-signal correlations to reduce the number of measurements needed for multichannel-signal recovery. HSI can be observed by the DCS framework to reduce the volume of data significantly. The traditional method for estimating endmembers (spectral information) first recovers the images from the compressive HSI and then estimates endmembers via the recovered images. The recovery step takes considerable time and introduces errors into the estimation step. In this paper, we propose a novel method, by designing a type of coherent measurement matrix, to estimate endmembers directly from the compressively observed HSI data via convex geometry (CG) approaches without recovering the images. Numerical simulations show that the proposed method outperforms the traditional method with better estimation speed and better (or comparable) accuracy in both noisy and noiseless cases. PMID:25905699
Image and video compression for HDR content
NASA Astrophysics Data System (ADS)
Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.
2012-10-01
High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video. Many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on previous work of ours and propose a compression method for both HDR images and video, based on an HVS optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using a HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge. Masking is more consistent on the darker side of the edge.
Sparsity optimized compressed sensing image recovery
NASA Astrophysics Data System (ADS)
Wang, Sha; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi
2014-05-01
Training over-complete dictionaries that facilitate a sparse representation of the image leads to state-of-the-art results in compressed sensing image restoration. The training sparsity must be specified during training, and the recovering sparsity must likewise be set during image recovery. We find that the recovering sparsity has significant effects on the image reconstruction properties. To further improve compressed sensing image recovery accuracy, in this paper we propose a method that controls the reconstruction by optimally estimating the recovering sparsity from the training sparsity, with which better reconstruction results can be achieved. The method comprises three procedures. First, we forecast the possible sparsity range by analyzing a large test data set to obtain a set of sparsity candidates; we find that the possible sparsity is always 3~5 times the training sparsity. Second, to precisely estimate the optimal recovering sparsity, we choose several samples at random from the compressed sensing measurements and use the sparsity candidates in the possible sparsity set to reconstruct the original image patches. Third, we choose the sparsity corresponding to the best recovered result as the optimal recovering sparsity to be used in image reconstruction. The computational cost of the estimation is relatively small, and the reconstruction result can be much better than that of the traditional method. The experimental results show that the PSNR of the recovered images adopting our estimation method can be up to 4 dB higher than with the traditional method without sparsity estimation.
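The candidate-selection procedure above can be caricatured with a toy sketch: approximate a sample signal at each candidate sparsity and keep the smallest candidate that reconstructs it well. The k-term thresholding and tolerance here are illustrative stand-ins for the paper's patch reconstructions:

```python
# Toy sparsity selection: try candidate sparsities on a sample signal and
# keep the smallest one whose k-term approximation error is acceptable.
# Illustrative only -- the paper reconstructs sampled CS measurements instead.

def k_term_approx(signal, k):
    """Keep the k largest-magnitude entries, zero the rest."""
    idx = sorted(range(len(signal)), key=lambda i: -abs(signal[i]))[:k]
    keep = set(idx)
    return [signal[i] if i in keep else 0.0 for i in range(len(signal))]

def pick_sparsity(signal, candidates, tol=1e-6):
    """Return the smallest candidate sparsity whose residual is within tol."""
    for k in candidates:
        approx = k_term_approx(signal, k)
        err = sum((a - b) ** 2 for a, b in zip(signal, approx))
        if err <= tol:
            return k
    return candidates[-1]

sig = [0.0, 3.0, 0.0, -1.5, 0.0, 0.25, 0.0, 0.0]
print(pick_sparsity(sig, [1, 2, 3, 4]))  # 3
```

The chosen sparsity would then be fixed for the full reconstruction, which is cheaper than tuning it on every patch.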
JPEG2000 Image Compression on Solar EUV Images
NASA Astrophysics Data System (ADS)
Fischer, Catherine E.; Müller, Daniel; De Moortel, Ineke
2017-01-01
For future solar missions as well as ground-based telescopes, efficient ways to return and process data have become increasingly important. Solar Orbiter, which is the next ESA/NASA mission to explore the Sun and the heliosphere, is a deep-space mission, which implies a limited telemetry rate that makes efficient onboard data compression a necessity to achieve the mission science goals. Missions like the Solar Dynamics Observatory (SDO) and future ground-based telescopes such as the Daniel K. Inouye Solar Telescope, on the other hand, face the challenge of making petabyte-sized solar data archives accessible to the solar community. New image compression standards address these challenges by implementing efficient and flexible compression algorithms that can be tailored to user requirements. We analyse solar images from the Atmospheric Imaging Assembly (AIA) instrument onboard SDO to study the effect of lossy JPEG2000 (from the Joint Photographic Experts Group 2000) image compression at different bitrates. To assess the quality of compressed images, we use the mean structural similarity (MSSIM) index as well as the widely used peak signal-to-noise ratio (PSNR) as metrics and compare the two in the context of solar EUV images. In addition, we perform tests to validate the scientific use of the lossily compressed images by analysing examples of an on-disc and off-limb coronal-loop oscillation time-series observed by AIA/SDO.
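Of the two quality metrics used above, PSNR is simple to state in code; the sketch below computes it for images flattened to pixel lists (MSSIM is considerably more involved and omitted here):

```python
import math

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images
    (given as flat pixel lists). Higher is better; identical images give inf."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [100, 110, 120, 130]
deg = [101, 109, 121, 131]   # hypothetical lossy reconstruction, off by 1 per pixel
print(round(psnr(ref, deg), 1))  # 48.1
```

PSNR depends only on pixel-wise error, which is one reason the authors also use MSSIM: structural metrics correlate better with perceived quality on solar EUV images.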
Listless zerotree image compression algorithm
NASA Astrophysics Data System (ADS)
Lian, Jing; Wang, Ke
2006-09-01
In this paper, an improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image qualities. Moreover, the lists in SPIHT are replaced by flag maps, and lifting scheme is adopted to realize wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient compared with SPIHT.
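The lifting realization of the wavelet transform mentioned above can be illustrated with one level of the integer Haar lifting scheme (predict, then update). The filters in the actual codec may differ; this shows only why lifting is memory-light and exactly invertible:

```python
# One level of the integer Haar lifting transform (illustrative filters).
# Lifting works in place on integers, needs no auxiliary buffers, and is
# perfectly invertible -- the properties the paper exploits.

def haar_lift_forward(x):
    """Split x (even length) into coarse s and detail d via lifting steps."""
    s = [x[2 * i] for i in range(len(x) // 2)]       # even samples
    d = [x[2 * i + 1] for i in range(len(x) // 2)]   # odd samples
    d = [di - si for si, di in zip(s, d)]            # predict: detail = odd - even
    s = [si + (di >> 1) for si, di in zip(s, d)]     # update: integer average
    return s, d

def haar_lift_inverse(s, d):
    """Undo the lifting steps in reverse order to recover x exactly."""
    s = [si - (di >> 1) for si, di in zip(s, d)]
    d = [di + si for si, di in zip(s, d)]
    out = []
    for si, di in zip(s, d):
        out.extend([si, di])
    return out

x = [10, 12, 9, 7, 20, 21, 3, 3]
s, d = haar_lift_forward(x)
assert haar_lift_inverse(s, d) == x   # lossless round trip in integers
```

Recursing `haar_lift_forward` on `s` yields the multi-level decomposition that the zerotree coder then scans.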
Dictionary Approaches to Image Compression and Reconstruction
NASA Technical Reports Server (NTRS)
Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.
1998-01-01
This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted as phi(sub gamma), are discrete time signals, where gamma represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) methods for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
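Of the four methods compared above, Matching Pursuits is the simplest to sketch: greedily pick the dictionary atom most correlated with the current residual. The two-atom orthonormal dictionary below is purely illustrative, not a wavelet-packet dictionary:

```python
# Greedy Matching Pursuit over a dictionary of unit-norm atoms (illustrative).

def matching_pursuit(signal, dictionary, iterations):
    """Repeatedly subtract the best-matching atom from the residual."""
    residual = list(signal)
    coeffs = [0.0] * len(dictionary)
    for _ in range(iterations):
        # atoms assumed unit-norm, so correlation = inner product with residual
        corr = [sum(a * r for a, r in zip(atom, residual)) for atom in dictionary]
        best = max(range(len(dictionary)), key=lambda i: abs(corr[i]))
        coeffs[best] += corr[best]
        residual = [r - corr[best] * a for r, a in zip(residual, dictionary[best])]
    return coeffs, residual

# Tiny orthonormal dictionary; real dictionaries are overcomplete and much larger
atoms = [[1.0, 0.0], [0.0, 1.0]]
coeffs, residual = matching_pursuit([3.0, -4.0], atoms, 2)
print(coeffs)    # [3.0, -4.0]
print(residual)  # [0.0, 0.0]
```

Compression follows from stopping after few iterations: the handful of (index, coefficient) pairs is the compressed representation.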
Compression of color-mapped images
NASA Technical Reports Server (NTRS)
Hadenfeldt, A. C.; Sayood, Khalid
1992-01-01
In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
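The colormap-sorting idea above can be sketched directly: reorder palette entries so that numerically close indices point to perceptually close colors, then remap the index image. Sorting by luminance is one plausible key; the paper studies sorting criteria in more depth:

```python
# Sketch: sort a colormap by luminance so adjacent indices map to similar
# colors, restoring the correlation that predictive coders (e.g. DPCM) exploit.

def luminance(rgb):
    """Rec. 601 luma as a simple perceptual sort key."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def sort_colormap(colormap, pixels):
    """Return (sorted colormap, pixel indices remapped to the new order)."""
    order = sorted(range(len(colormap)), key=lambda i: luminance(colormap[i]))
    remap = {old: new for new, old in enumerate(order)}
    new_map = [colormap[i] for i in order]
    new_pixels = [remap[p] for p in pixels]
    return new_map, new_pixels

cmap = [(255, 255, 255), (0, 0, 0), (200, 200, 200), (30, 30, 30)]
pix = [0, 2, 2, 1, 3, 1]
new_map, new_pix = sort_colormap(cmap, pix)
print(new_pix)  # [3, 2, 2, 0, 1, 0]
```

After sorting, neighbouring pixels in smooth regions carry numerically close indices, so their DPCM residuals are small again.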
Performance visualization for image compression in telepathology
NASA Astrophysics Data System (ADS)
Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace
2002-04-01
The conventional approach to performance evaluation for image compression in telemedicine is simply to measure compression ratio, signal-to-noise ratio, and computational load. Evaluation of performance is, however, a much more complex and many-sided issue, and it is necessary to consider more deeply the requirements of the applications. In telemedicine, the preservation of clinical information must be taken into account when assessing the suitability of any particular compression algorithm, and the metrication of this characteristic is subjective because human judgement must be brought in to identify what is of clinical importance. The assessment must therefore take into account subjective user evaluation criteria as well as objective criteria. This paper develops the concept of user-based assessment techniques for image compression used in telepathology. A novel visualization approach has been developed to show and explore the highly complex performance space, taking into account both types of measure. The application considered is within a general histopathology image management system; the particular component is a store-and-forward facility for second-opinion elicitation. Images of histopathology slides are transmitted to the workstations of consultants working remotely to enable them to provide second opinions.
Compressive imaging using fast transform coding
NASA Astrophysics Data System (ADS)
Thompson, Andrew; Calderbank, Robert
2016-10-01
We propose deterministic sampling strategies for compressive imaging based on Delsarte-Goethals frames. We show that these sampling strategies result in multi-scale measurements which can be related to the 2D Haar wavelet transform. We demonstrate the effectiveness of our proposed strategies through numerical experiments.
NASA Astrophysics Data System (ADS)
Gains, David
2009-05-01
Iris-C is an image codec designed for streaming video applications that demand low-bit-rate, low-latency, lossless image compression. To achieve compression and low latency, the codec features the discrete wavelet transform, Exp-Golomb coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec accepts input video in both the YUV and YCoCg colour spaces, but the system can also operate on Bayer RAW data read directly from an image sensor. Testing shows that the Iris-C codec is competitive with the Dirac low-delay-syntax codec, which is typically regarded as the state of the art in low-latency, lossless video compression.
Microseismic source imaging in a compressed domain
NASA Astrophysics Data System (ADS)
Vera Rodriguez, Ismael; Sacchi, Mauricio D.
2014-08-01
Microseismic monitoring is an essential tool for the characterization of hydraulic fractures. Fast estimation of the parameters that define a microseismic event is relevant to understanding and controlling fracture development. The amount of data contained in the microseismic records, however, poses a challenge for fast continuous detection and evaluation of the microseismic source parameters. Work inspired by the emerging field of Compressive Sensing has shown that it is possible to evaluate source parameters in a compressed domain, thereby reducing processing time. This technique performs well in scenarios where the amplitudes of the signal are above the noise level, as is often the case in microseismic monitoring using downhole tools. This paper extends the idea of compressed-domain processing to scenarios of microseismic monitoring using surface arrays, where the signal amplitudes are commonly at the same level as, or below, the noise amplitudes. To achieve this, we resort to the use of an imaging operator, which has previously been found to produce better results in detection and location of microseismic events from surface arrays. The operator in our method is formed by full-waveform elastodynamic Green's functions that are band-limited by a source time function and represented in the frequency domain. Where full-waveform Green's functions are not available, ray tracing can also be used to compute the required Green's functions. Additionally, we introduce the concept of the compressed inverse, which derives directly from the compression of the migration operator using a random matrix. The described methodology reduces processing time at the cost of introducing distortions into the results. However, the amount of distortion can be managed by controlling the level of compression applied to the operator. Numerical experiments using synthetic and real data demonstrate the reductions in processing time that can be achieved and exemplify the process of selecting the
Lossless compression for three-dimensional images
NASA Astrophysics Data System (ADS)
Tang, Xiaoli; Pearlman, William A.
2004-01-01
We investigate and compare the performance of several three-dimensional (3D) embedded wavelet algorithms on lossless 3D image compression. The algorithms are Asymmetric Tree Three-Dimensional Set Partitioning In Hierarchical Trees (AT-3DSPIHT), Three-Dimensional Set Partitioned Embedded bloCK (3D-SPECK), Three-Dimensional Context-Based Embedded Zerotrees of Wavelet coefficients (3D-CB-EZW), and JPEG2000 Part II for multi-component images. Two kinds of images are investigated in our study -- 8-bit CT and MR medical images and 16-bit AVIRIS hyperspectral images. First, the performances obtained with different sizes of coding units are compared, showing that increasing the size of the coding unit improves performance somewhat. Second, the performances obtained with different integer wavelet transforms are compared for AT-3DSPIHT, 3D-SPECK and 3D-CB-EZW. None of the considered filters performs best for all data sets and algorithms. Finally, we compare the different lossless compression algorithms by applying an integer wavelet transform to the entire image volumes. For 8-bit medical image volumes, AT-3DSPIHT performs best almost all the time, achieving an average 12% decrease in file size compared with JPEG2000 multi-component, the second-best performer. For 16-bit hyperspectral images, AT-3DSPIHT always performs best, yielding average decreases in file size of 5.8% and 8.9% compared with 3D-SPECK and JPEG2000 multi-component, respectively. Two 2D compression algorithms, JPEG2000 and UNIX zip, are also included for reference, and all 3D algorithms perform much better than the 2D algorithms.
Combining image-processing and image compression schemes
NASA Technical Reports Server (NTRS)
Greenspan, H.; Lee, M.-C.
1995-01-01
An investigation into combining image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented on the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution via the enhancement.
Complementary compressive imaging for the telescopic system
Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Yun; Zhai, Guang-Jie
2014-01-01
Conventional single-pixel cameras recover images only from the data recorded in one arm of the digital micromirror device; the light reflected in the other direction is not collected. In fact, the samplings in these two reflection orientations are correlated with each other, in view of which we propose, for the first time to our knowledge, a sampling concept of complementary compressive imaging. We use this method in a telescopic system and acquire images of a target at a range of about 2.0 km with 20 cm resolution, with the variance of the noise decreasing by half. The influence of the sampling rate and the integration time of the photomultiplier tubes on the image quality is also investigated experimentally. This technique offers a large field of view over a long distance, high resolution, high imaging speed, and high-quality imaging, and needs fewer measurements in total than any single-arm sampling; it can thus be used to improve the performance of all compressive imaging schemes and opens up possibilities for new applications in the remote-sensing area. PMID:25060569
Multiwavelet-transform-based image compression techniques
NASA Astrophysics Data System (ADS)
Rao, Sathyanarayana S.; Yoon, Sung H.; Shenoy, Deepak
1996-10-01
Multiwavelet transforms are a new class of wavelet transforms that use more than one prototype scaling function and wavelet in the multiresolution analysis/synthesis. The popular Geronimo-Hardin-Massopust multiwavelet basis functions have properties of compact support, orthogonality, and symmetry which cannot be obtained simultaneously in scalar wavelets. The performance of multiwavelets in still image compression is studied using vector quantization of multiwavelet subbands with a multiresolution codebook. The coding gain of multiwavelets is compared with that of other well-known wavelet families using performance measures such as unified coding gain. Implementation aspects of multiwavelet transforms such as pre-filtering/post-filtering and symmetric extension are also considered in the context of image compression.
Efficient lossless compression scheme for multispectral images
NASA Astrophysics Data System (ADS)
Benazza-Benyahia, Amel; Hamdi, Mohamed; Pesquet, Jean-Christophe
2001-12-01
Huge amounts of data are generated thanks to the continuous improvement of remote sensing systems. Archiving this tremendous volume of data is a real challenge which requires lossless compression techniques. Furthermore, progressive coding constitutes a desirable feature for telebrowsing. To this purpose, a compact and pyramidal representation of the input image has to be generated. Separable multiresolution decompositions have already been proposed for multicomponent images, allowing each band to be decomposed separately. It seems more appropriate, however, to also exploit the spectral correlations. For hyperspectral images, the solution is to apply a 3D decomposition according to the spatial and spectral dimensions. This approach is not appropriate for multispectral images because of the reduced number of spectral bands. In recent works, we have proposed a nonlinear subband decomposition scheme with perfect reconstruction which efficiently exploits both the spatial and spectral redundancies contained in multispectral images. In this paper, the problem of coding the coefficients of the resulting subband decomposition is addressed. More precisely, we propose an extension to the vector case of Shapiro's embedded zerotrees of wavelet coefficients (V-EZW), which achieves further savings in the bit stream. Simulations carried out on SPOT images indicate that the resulting global compression scheme outperforms existing methods.
Compressive Hyperspectral Imaging and Anomaly Detection
2010-02-01
the desired jointly sparse a's, one shall adjust a and b. 4.4 Hyperspectral Image Reconstruction and Denoising We apply the model x* = Da' + e' to... iteration for compressive sensing and sparse denoising," Communications in Mathematical Sciences, 2008. W. Yin, "Analysis and generalizations of... Aharon, M. Elad, and A. Bruckstein, "K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation," IEEE Transactions on Signal
Image Segmentation, Registration, Compression, and Matching
NASA Technical Reports Server (NTRS)
Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina
2011-01-01
A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity
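The invariance underlying a parameter space of this kind can be sketched in a few lines. The specific AIPS metric and framework are not reproduced here; the code below is only a minimal illustration, under the assumption that the "parameters in an affine combination" are the coefficients expressing a feature point as an affine combination of three basis points, which are unchanged by any affine map.

```python
import numpy as np

def affine_params(p0, p1, p2, q):
    """Coefficients (a, b, c) with a + b + c = 1 and q = a*p0 + b*p1 + c*p2."""
    A = np.array([[p0[0], p1[0], p2[0]],
                  [p0[1], p1[1], p2[1]],
                  [1.0,   1.0,   1.0 ]])
    return np.linalg.solve(A, np.array([q[0], q[1], 1.0]))

pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]   # three basis feature points
q = (1.0, 1.0)                                # a fourth feature point
before = affine_params(*pts, q)

# Apply an arbitrary affine map x -> Mx + t to every point
M, t = np.array([[2.0, 1.0], [0.5, 3.0]]), np.array([5.0, -2.0])
warp = lambda p: tuple(M @ np.array(p) + t)
after = affine_params(*map(warp, pts), warp(q))
# `before` and `after` agree: the parameters are affine invariants
```

Because the coefficients survive the unknown transformation, matching can be done in this parameter space with no prior knowledge of scaling or any other transformation parameters, as the abstract states.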
Lossless Astronomical Image Compression and the Effects of Random Noise
NASA Technical Reports Server (NTRS)
Pence, William
2009-01-01
In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
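The quantize-as-scaled-integers idea can be sketched as follows. The particular noise estimator and the choice of quantization step (a fraction of the measured noise sigma) are illustrative assumptions, not the procedure derived in the paper:

```python
import numpy as np

def noise_sigma(img):
    """Robust noise estimate from differences of horizontally adjacent pixels."""
    d = np.diff(img.astype(np.float64), axis=1)
    return 1.4826 * np.median(np.abs(d - np.median(d))) / np.sqrt(2.0)

def quantize(img, q=4.0):
    """Scale so the quantization step is sigma/q, then round to integers.
    The integer image discards noise bits and compresses losslessly."""
    scale = noise_sigma(img) / q
    return np.round(img / scale).astype(np.int64), scale

rng = np.random.default_rng(1)
img = 1000.0 + 50.0 * rng.standard_normal((64, 64))  # synthetic float image, noise sigma = 50
ints, scale = quantize(img)
recon = ints * scale   # per-pixel reconstruction error is at most scale/2
```

The reconstruction error is bounded by half the quantization step, which by construction is small compared with the noise already present in the pixels.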
Reconfigurable Hardware for Compressing Hyperspectral Image Data
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua
2010-01-01
High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with a flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of
Fpack and Funpack Utilities for FITS Image Compression and Uncompression
NASA Technical Reports Server (NTRS)
Pence, W.
2008-01-01
Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://heasarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images; the library currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.
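The tiling pattern described above can be sketched in miniature. This is not fpack or CFITSIO; zlib stands in for the GZIP codec, and the FITS binary-table storage is reduced to a plain list of compressed tiles, but it shows why per-tile compression permits random access without decompressing the whole image:

```python
import zlib
import numpy as np

def tile_compress(img):
    """Row-by-row tiling (fpack's default pattern): compress each row independently."""
    return [zlib.compress(row.tobytes()) for row in img]

def read_row(tiles, r, dtype=np.int16):
    """Random access: only the requested tile is uncompressed."""
    return np.frombuffer(zlib.decompress(tiles[r]), dtype=dtype)

img = (np.arange(256, dtype=np.int16) % 7).reshape(16, 16)
tiles = tile_compress(img)
row5 = read_row(tiles, 5)   # equals img[5] without touching the other 15 tiles
```

In the real convention each compressed tile occupies one entry of a variable-length array column, and the uncompressed header keywords provide the fast metadata access the abstract mentions.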
Fast Lossless Compression of Multispectral-Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew
2006-01-01
An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
Outer planet Pioneer imaging communications system study. [data compression
NASA Technical Reports Server (NTRS)
1974-01-01
The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods. These were: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.
Selective document image data compression technique
Fu, C.Y.; Petrich, L.I.
1998-05-19
A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first and second two-color images are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
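The two-color conversion and combination steps can be sketched directly. The pixel values and thresholds below are made up for illustration, and the "filled-edge array" is replaced by a second thresholding of the same scan, which is only a stand-in for the patent's edge-filling step:

```python
import numpy as np

def two_color(img, threshold):
    """Pixels darker than the threshold become black (0); all others white (1)."""
    return np.where(img < threshold, 0, 1).astype(np.uint8)

scan = np.array([[200,  40, 210],
                 [110,  30, 220],
                 [205,  60, 215]], dtype=np.uint8)   # made-up 3x3 grayscale scan

first = two_color(scan, 128)    # from the contrast-enhanced scanned image
second = two_color(scan, 100)   # stand-in for the filled-edge array, second threshold
merged = first & second         # black wherever either two-color image is black
```

With black coded as 0, a bitwise AND keeps a pixel black if it is black in either image, which is the combination the method describes before smoothing and Huffman coding.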
Selective document image data compression technique
Fu, Chi-Yung; Petrich, Loren I.
1998-01-01
A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. A second two-color image of the filled-edge file is then generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first and second two-color images are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel.
Discrete directional wavelet bases for image compression
NASA Astrophysics Data System (ADS)
Dragotti, Pier L.; Velisavljevic, Vladan; Vetterli, Martin; Beferull-Lozano, Baltasar
2003-06-01
The application of the wavelet transform in image processing is most frequently based on a separable construction. Lines and columns in an image are treated independently, and the basis functions are simply products of the corresponding one-dimensional functions. Such a method keeps the design and computation simple, but is not capable of properly capturing all the properties of an image. In this paper, a new truly separable discrete multi-directional transform is proposed with a subsampling method based on lattice theory. Alternatively, the subsampling can be omitted, which leads to a multi-directional frame. This transform can be applied in many areas such as denoising, non-linear approximation, and compression. The results on non-linear approximation and denoising show very interesting gains compared to the standard two-dimensional analysis.
A Scheme for Compressing Floating-Point Images
NASA Astrophysics Data System (ADS)
White, Richard L.; Greenfield, Perry
While many techniques have been used to compress integer data, compressing floating-point data presents a number of additional problems. We have implemented a scheme for compressing floating-point images that is fast, robust, and automatic, that allows random access to pixels without decompressing the whole image, and that generally has a scientifically negligible effect on the noise present in the image. The compressed data are stored in a FITS binary table. Most astronomical images can be compressed by approximately a factor of 3, using conservative settings for the permitted level of changes in the data. We intend to work with NOAO to incorporate this compression method into the IRAF image kernel, so that FITS images compressed using this scheme can be accessed transparently from IRAF applications without any explicit decompression steps. The scheme is simple, and it should be possible to include it in other FITS libraries as well.
Centralized and interactive compression of multiview images
NASA Astrophysics Data System (ADS)
Gelman, Andriy; Dragotti, Pier Luigi; Velisavljević, Vladan
2011-09-01
In this paper, we propose two multiview image compression methods. The basic concept of both schemes is the layer-based representation, in which the captured three-dimensional (3D) scene is partitioned into layers each related to a constant depth in the scene. The first algorithm is a centralized scheme where each layer is de-correlated using a separable multi-dimensional wavelet transform applied across the viewpoint and spatial dimensions. The transform is modified to efficiently deal with occlusions and disparity variations for different depths. Although the method achieves a high compression rate, the joint encoding approach requires the transmission of all data to the users. By contrast, in an interactive setting, the users request only a subset of the captured images, but in an unknown order a priori. We address this scenario in the second algorithm using Distributed Source Coding (DSC) principles which reduces the inter-view redundancy and facilitates random access at the image level. We demonstrate that the proposed centralized and interactive methods outperform H.264/MVC and JPEG 2000, respectively.
Image compression with embedded multiwavelet coding
NASA Astrophysics Data System (ADS)
Liang, Kai-Chieh; Li, Jin; Kuo, C.-C. Jay
1996-03-01
An embedded image coding scheme using the multiwavelet transform and inter-subband prediction is proposed in this research. The new proposed coding scheme consists of the following building components: GHM multiwavelet transform, prediction across subbands, successive approximation quantization, and adaptive binary arithmetic coding. Our major contribution is the introduction of a set of prediction rules to fully exploit the correlations between multiwavelet coefficients in different frequency bands. The performance of the proposed new method is comparable to that of state-of-the-art wavelet compression methods.
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
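The two on-board degradation steps can be sketched with block averaging. The degradation factor, band grouping, and averaging operators below are illustrative assumptions (the paper evaluates several degradation methodologies); the sketch only shows why the compression ratio is fixed in advance by the chosen degradations:

```python
import numpy as np

def degrade_spatial(cube, f):
    """Low-resolution hyperspectral image: average f x f spatial blocks in every band."""
    b, h, w = cube.shape
    return cube.reshape(b, h // f, f, w // f, f).mean(axis=(2, 4))

def degrade_spectral(cube, groups):
    """High-resolution multispectral image: average the bands within each group."""
    return np.stack([cube[g].mean(axis=0) for g in groups])

cube = np.random.default_rng(2).random((8, 16, 16))        # 8 bands of 16x16 pixels
low_hs = degrade_spatial(cube, f=4)                        # 8 x 4 x 4
ms = degrade_spectral(cube, [[0, 1, 2, 3], [4, 5, 6, 7]])  # 2 x 16 x 16
ratio = cube.size / (low_hs.size + ms.size)                # known before transmission
```

Only the two small cubes are downlinked; the computationally heavy hyperspectral/multispectral fusion that reconstructs the full cube runs on the ground.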
Feasibility Study of Compressive Sensing Underwater Imaging Lidar
2014-03-28
[Report documentation page residue; recoverable details: "Feasibility study of Compressive Sensing Underwater Imaging Lidar," supported under grant N00014-12-1-0921; author Bing Ouyang, phone (772) 242-2288, fax (772) 242-2257, email bouvang@hboi.fau.edu; the work is a study of the frame-based Compressive Sensing concept, and a related project, "Airborne Compressive Sensing Topographic Lidar," is noted.]
Improved Compression of Wavelet-Transformed Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Klimesh, Matthew
2005-01-01
A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
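The codeword construction and the run-length parsing described above can be sketched as follows. This is only an illustration: the Golomb coder is restricted to the power-of-two (Rice) subset for brevity, nonzero values are assumed positive and mapped to v-1, and the adaptive on-the-fly parameter selection that is the method's contribution is omitted:

```python
def golomb(n, m):
    """Golomb code for n >= 0 with parameter m = 2**k (Rice subset):
    unary-coded quotient, then the remainder in k bits."""
    k = m.bit_length() - 1
    q, r = n >> k, n & (m - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def exp_golomb(n):
    """Order-0 exponential-Golomb code for n >= 0."""
    b = format(n + 1, "b")
    return "0" * (len(b) - 1) + b

def encode(seq, val_m=2):
    """Parse the stream into (zero-run, nonzero-value) pairs; an exp-Golomb
    code carries the run length, a Golomb code the value."""
    out, run = [], 0
    for v in seq:
        if v == 0:
            run += 1
        else:
            out.append(exp_golomb(run) + golomb(v - 1, val_m))
            run = 0
    return "".join(out)
```

Both code families are prefix-free, so the bit stream is decodable without separators; the parameters (here fixed) are what the actual method adapts from recently encoded statistics.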
Image Data Compression In A Personal Computer Environment
NASA Astrophysics Data System (ADS)
Farrelle, Paul M.; Harrington, Daniel G.; Jain, Anil K.
1988-12-01
This paper describes an image compression engine that is valuable for compressing virtually all types of images that occur in a personal computer environment. This allows efficient handling of still frame video images (monochrome or color) as well as documents and graphics (black-and-white or color) for archival and transmission applications. Through software control different image sizes, bit depths, and choices between lossless compression, high speed compression and controlled error compression are allowed. Having integrated a diverse set of compression algorithms on a single board, the device is suitable for a multitude of picture archival and communication (PAC) applications including medical imaging, electronic publishing, prepress imaging, document processing, law enforcement and forensic imaging.
Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq
2016-04-01
In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change. Analysis based on an illegally changed image could result in a wrong medical decision. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain standard perceptual and diagnostic image quality during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for watermark lossless compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
Optimal Compression of Floating-Point FITS Images
NASA Astrophysics Data System (ADS)
Pence, W. D.; White, R. L.; Seaman, R.
2010-12-01
Lossless compression (e.g., with GZIP) of floating-point format astronomical FITS images is ineffective and typically only reduces the file size by 10% to 30%. We describe a much more effective compression method, supported by the publicly available fpack and funpack FITS image compression utilities, that can compress floating-point images by a factor of 10 without loss of significant scientific precision. A “subtractive dithering” technique is described which permits coarser quantization (and thus higher compression) than is possible with simple scaling methods.
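The subtractive dithering idea can be sketched in a few lines. This is not fpack's actual implementation (fpack generates its dither sequence per tile from stored parameters); the seeded random generator below is an illustrative assumption that captures the mechanism: the same noise added before quantization is regenerated and subtracted on restore, so the quantization error is zero-mean rather than systematically rounded:

```python
import numpy as np

def quantize_dither(img, scale, seed=0):
    """Quantize with subtractive dithering: add seeded uniform noise in [0, 1)
    to the scaled values before truncating to integers."""
    dither = np.random.default_rng(seed).random(img.shape)
    return np.floor(img / scale + dither).astype(np.int64)

def restore(q, scale, seed=0):
    # Regenerate the identical dither from the seed and subtract it
    dither = np.random.default_rng(seed).random(q.shape)
    return (q - dither + 0.5) * scale

img = 100.0 + np.random.default_rng(3).standard_normal((64, 64))
q = quantize_dither(img, 0.5)
recon = restore(q, 0.5)   # per-pixel error within +/- scale/2, zero-mean
```

Because the error averages to zero across pixels, a coarser quantization step (and hence a higher compression factor) can be tolerated than with simple scaling, which is the point of the technique.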
On-board image compression for the RAE lunar mission
NASA Technical Reports Server (NTRS)
Miller, W. H.; Lynch, T. J.
1976-01-01
The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
Stout, N; Partsch, H; Szolnoky, G; Forner-Cordero, I; Mosti, G; Mortimer, P; Flour, M; Damstra, R; Piller, N; Geyer, M J; Benigni, J-P; Moffat, C; Cornu-Thenard, A; Schingale, F; Clark, M; Chauveau, M
2012-08-01
Chronic edema is a multifactorial condition affecting patients with various diseases. Although the pathophysiology of edema varies, compression therapy is a basic tenet of treatment, vital to reducing swelling. Clinical trials are disparate or lacking regarding specific protocols and application recommendations for compression materials and methodology to enable optimal efficacy. Compression therapy is a basic treatment modality for chronic leg edema; however, the evidence base for the optimal application, duration and intensity of compression therapy is lacking. The aim of this document was to present the proceedings of a day-long international expert consensus group meeting that examined the current state of the science for the use of compression therapy in chronic edema. An expert consensus group met in Brighton, UK, in March 2010 to examine the current state of the science for compression therapy in chronic edema of the lower extremities. Panel discussions and open space discussions examined the current literature, clinical practice patterns, common materials and emerging technologies for the management of chronic edema. This document outlines a proposed clinical research agenda focusing on compression therapy in chronic edema. Future trials comparing different compression devices, materials, pressures and parameters for application are needed to enhance the evidence base for optimal chronic edema management. Important outcome measures and methods of pressure and edema quantification are outlined. Future trials are encouraged to optimize compression therapy in chronic edema of the lower extremities.
Using compressed images in multimedia education
NASA Astrophysics Data System (ADS)
Guy, William L.; Hefner, Lance V.
1996-04-01
The classic radiologic teaching file consists of hundreds, if not thousands, of films of various ages, housed in paper jackets with brief descriptions written on the jackets. The development of a good teaching file has been both time consuming and voluminous. Also, any radiograph to be copied was unavailable during the reproduction interval, inconveniencing other medical professionals needing to view the images at that time. These factors hinder motivation to copy films of interest. If a busy radiologist already has an adequate example of a radiological manifestation, it is unlikely that he or she will exert the effort to make a copy of another similar image even if a better example comes along. Digitized radiographs stored on CD-ROM offer marked improvement over the copied film teaching files. Our institution has several laser digitizers which are used to rapidly scan radiographs and produce high-quality digital images which can then be converted into standard microcomputer (IBM, Mac, etc.) image formats. These images can be stored on floppy disks, hard drives, rewritable optical disks, recordable CD-ROM disks, or removable cartridge media. Most hospital computer information systems include radiology reports in their database. We demonstrate that the reports for the images included in the user's teaching file can be copied and stored on the same storage media as the images. The radiographic or sonographic image and the corresponding dictated report can then be 'linked' together. The description of the finding or findings of interest on the digitized image is thus electronically tethered to the image. This obviates the need to write much additional detail concerning the radiograph, saving time. In addition, the text on this disk can be indexed such that all files with user-specified features can be instantly retrieved and combined in a single report, if desired. With the use of newer image compression techniques, hundreds of cases may be stored on a single CD.
Wavelet-based Image Compression using Subband Threshold
NASA Astrophysics Data System (ADS)
Muzaffar, Tanzeem; Choi, Tae-Sun
2002-11-01
Wavelet-based image compression has been a focus of research in recent years. In this paper, we propose a compression technique based on a modification of the original EZW coding. In this lossy technique, we discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight in every level. This minimum-weight subband in each level, which contributes least to image reconstruction, undergoes a thresholding process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiments to observe the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.
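The subband-weighting step can be sketched as follows, assuming a one-level Haar transform and squared-coefficient energy as the "weight" (the abstract does not specify the paper's filter bank or weight definition, so both are illustrative choices):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform -> (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # horizontal average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal difference
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def threshold_weakest(subbands, thresh):
    """Zero small coefficients in the detail subband with least energy."""
    details = list(subbands[1:])            # never touch the LL approximation
    weights = [np.sum(b * b) for b in details]
    k = int(np.argmin(weights))
    details[k] = np.where(np.abs(details[k]) < thresh, 0.0, details[k])
    return (subbands[0], *details)
```

The extra zeros introduced in the weakest subband are exactly what the subsequent zerotree pass exploits for additional compression.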
Sparse representations for online-learning-based hyperspectral image compression.
Ülkü, İrem; Töreyin, Behçet Uğur
2015-10-10
Sparse models provide data representations in the fewest possible number of nonzero elements. This inherent characteristic enables sparse models to be utilized for data compression purposes. Hyperspectral data is large in size. In this paper, a framework for sparsity-based hyperspectral image compression methods using online learning is proposed. There are various sparse optimization models. A comparative analysis of sparse representations in terms of their hyperspectral image compression performance is presented. For this purpose, online-learning-based hyperspectral image compression methods are proposed using four different sparse representations. Results indicate that, independent of the sparsity models, online-learning-based hyperspectral data compression schemes yield the best compression performances for data rates of 0.1 and 0.3 bits per sample, compared to other state-of-the-art hyperspectral data compression techniques, in terms of image quality measured as average peak signal-to-noise ratio.
MR image compression using a wavelet transform coding algorithm.
Angelidis, P A
1994-01-01
We present here a technique for MR image compression. It is based on a transform coding scheme using the wavelet transform and vector quantization. Experimental results show that the method offers high compression ratios with low degradation of the image quality. The technique is expected to be particularly useful wherever storing and transmitting large numbers of images is necessary.
KRESKA: A compression system for small and very large images
NASA Technical Reports Server (NTRS)
Ohnesorge, Krystyna W.; Sennhauser, Rene
1995-01-01
An effective lossless compression system for grayscale images is presented using finite context variable order Markov models. A new method to accurately estimate the probability of the escape symbol is proposed. The choice of the best model order and rules for selecting context pixels are discussed. Two context precision and two symbol precision techniques to handle noisy image data with Markov models are introduced. Results indicate that finite context variable order Markov models lead to effective lossless compression systems for small and very large images. The system achieves higher compression ratios than some of the better known image compression techniques such as lossless JPEG, JBIG, or FELICS.
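Escape-symbol estimation can be illustrated with the classic PPM "method C" baseline; the paper proposes a new, more accurate estimator whose details the abstract does not give, so this is only the standard starting point:

```python
from collections import Counter

def escape_probability(context_counts):
    """PPM 'method C' escape estimate: d / (n + d), where d is the number
    of distinct symbols seen in this context and n the total count.
    An empty context must escape with probability 1."""
    d = len(context_counts)
    n = sum(context_counts.values())
    return d / (n + d) if n else 1.0
```

In a variable-order Markov coder, this probability is the mass reserved for falling back to a shorter context when a pixel value has never been seen in the current one.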
Image compression system and method having optimized quantization tables
NASA Technical Reports Server (NTRS)
Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)
1998-01-01
A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, methods are provided for generating a rate-distortion-optimal quantization table, for discrete cosine transform-based digital image compression, and for operating a discrete cosine transform-based digital image compression and decompression system.
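The rate-distortion optimization can be sketched as a search over candidate quantizer steps for one DCT coefficient position, minimizing distortion plus λ times an estimated rate. The crude rate model and exhaustive search below are illustrative stand-ins for the patent's dynamic-programming formulation:

```python
def rd_cost(samples, q, lam):
    """Mean distortion plus lam times a crude per-sample rate estimate."""
    dist, bits = 0.0, 0
    for x in samples:
        level = round(x / q)                   # uniform quantizer index
        dist += (x - level * q) ** 2           # squared reconstruction error
        bits += 1 + abs(level).bit_length()    # sign/zero flag + magnitude bits
    return dist / len(samples) + lam * bits / len(samples)

def best_step(samples, steps, lam):
    """Exhaustive stand-in for the patent's dynamic-programming search."""
    return min(steps, key=lambda q: rd_cost(samples, q, lam))
```

With λ = 0 the search favors fine steps (pure fidelity); as λ grows, coarser steps win because they cost fewer bits, which is exactly the rate-against-distortion trade the quantization table encodes per coefficient.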
The impact of lossless image compression to radiographs
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.; Abel, Jürgen; Weiss, Claudia
2006-03-01
The increasing number of digital imaging modalities results in data volumes of several terabytes per year that must be transferred and archived in a common-sized hospital. Hence, data compression is an important issue for picture archiving and communication systems (PACS). The effect of lossy image compression is frequently analyzed with respect to images from a certain modality supporting a certain diagnosis. However, novel compression schemes have been developed recently that allow efficient yet lossless compression. In this study, we compare the lossless compression schemes embedded in the tagged image file format (TIFF), the graphics interchange format (GIF), and the Joint Photographic Experts Group standard (JPEG 2000) with the Burrows-Wheeler compression algorithm (BWCA) with respect to image content and origin. Repeated-measures ANOVA was based on 1,200 images in total. Statistically significant effects (p < 0.0001) of compression scheme, image content, and image origin were found. The best mean compression factor of 3.5 (2.272 bpp) is obtained applying the BWCA to secondarily digitized radiographs of the head, while the lowest factor of 1.05 (7.587 bpp) resulted from the TIFF PackBits algorithm applied to pelvis images captured digitally. Overall, the BWCA is slightly but significantly more effective than JPEG 2000. Both compression schemes reduce the required bits per pixel (bpp) below 3. Also, secondarily digitized images are more compressible than directly digital ones. Interestingly, JPEG 2000 outperforms the BWCA for directly digital images regardless of image content, while the BWCA performs better than JPEG 2000 on secondarily digitized radiographs. In conclusion, efficient lossless image compression schemes are available for PACS.
Effects on MR images compression in tissue classification quality
NASA Astrophysics Data System (ADS)
Santalla, H.; Meschino, G.; Ballarin, V.
2007-11-01
Image compression is required to optimize storage; moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. With lossy compression the image cannot be fully recovered; only an approximation is obtained. At this point the definition of "quality" becomes essential. What do we mean by "quality"? How can we evaluate a compressed image? Image quality is an attribute with several definitions and interpretations, which ultimately depend on the intended use of the images. This work proposes a quantitative analysis of the quality of lossy-compressed Magnetic Resonance (MR) images and of its influence on automatic tissue classification performed with these images.
High Bit-Depth Medical Image Compression with HEVC.
Parikh, Saurin; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor
2017-01-27
Efficient storage and retrieval of medical images has a direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, newer formats such as HEVC can provide better compression efficiency than JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and the complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes an acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing the computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with a negligible increase in file size.
Treatment of metastatic spinal cord compression: cepo review and clinical recommendations
L’Espérance, S.; Vincent, F.; Gaudreault, M.; Ouellet, J.A.; Li, M.; Tosikyan, A.; Goulet, S.
2012-01-01
Background Metastatic spinal cord compression (mscc) is an oncologic emergency that, unless diagnosed early and treated appropriately, can lead to permanent neurologic impairment. After an analysis of relevant studies evaluating the effectiveness of various treatment modalities, the Comité de l'évolution des pratiques en oncologie (cepo) made recommendations on mscc management. Method A review of the scientific literature published up to February 2011 considered only phase ii and iii trials that included assessment of neurologic function. A total of 26 studies were identified. Recommendations Considering the evidence available to date, cepo recommends that:
- cancer patients with mscc be treated by a specialized multidisciplinary team;
- dexamethasone 16 mg daily be administered to symptomatic patients as soon as mscc is diagnosed or suspected;
- high-loading-dose corticosteroids be avoided;
- histopathologic diagnosis and scores from scales evaluating prognosis and spinal instability be considered before treatment;
- corticosteroids and chemotherapy with radiotherapy be offered to patients with spinal cord compression caused by myeloma, lymphoma, or germ cell tumour without sign of spinal instability or compression by bone fragment;
- short-course radiotherapy be administered to patients with spinal cord compression and short life expectancy;
- long-course radiotherapy be administered to patients with inoperable spinal cord compression and good life expectancy;
- decompressive surgery followed by long-course radiotherapy be offered to appropriate symptomatic mscc patients (including spinal instability, displacement of vertebral fragment); and
- patients considered for surgery have a life expectancy of at least 3–6 months.
PMID:23300371
Image compression using the W-transform
Reynolds, W.D. Jr.
1995-12-31
The authors present the W-transform for multiresolution signal decomposition. One difference between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to be a power of two, nor does it call for extension of the signal; thus, the W-transform is a convenient tool for image compression. They present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.
Optimal Compression Methods for Floating-point Format Images
NASA Technical Reports Server (NTRS)
Pence, W. D.; White, R. L.; Seaman, R.
2009-01-01
We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision of the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
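The quantize-with-dithering step can be sketched as subtractive dithering of scaled integers. The uniform dither stream and the simple scaling below are illustrative assumptions; the FITS tiled-image convention defines its own quantization keywords and dither seeding:

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed; FITS defines its own scheme

def quantize(data, q):
    """Subtractive-dither quantization of float pixels to scaled integers.
    The dither array is returned so the decoder can subtract it again."""
    dither = rng.random(data.shape)
    return np.floor(data / q + dither).astype(np.int32), dither

def dequantize(ints, dither, q):
    """Invert the quantization; error is bounded by half a quantum."""
    return (ints - dither + 0.5) * q
```

Because the dither decorrelates the rounding error from the signal, averaging over a stellar profile recovers sub-quantum photometric precision, which is the effect the abstract describes.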
Comparison of two SVD-based color image compression schemes.
Li, Ying; Wei, Musheng; Zhang, Fengxia; Zhao, Jianli
2017-01-01
Color image compression is a commonly used process to represent image data in as few bits as possible, removing redundancy in the data while maintaining an appropriate level of quality for the user. Color image compression algorithms based on quaternions have become common in recent years. In this paper, we propose a color image compression scheme based on the real SVD, named the real compression scheme. First, we form a new real rectangular matrix C according to the red, green and blue components of the original color image and perform the real SVD on C. Then we select the several largest singular values and the corresponding vectors in the left and right unitary matrices to compress the color image. We compare the real compression scheme with a quaternion compression scheme that performs the quaternion SVD using the real structure-preserving algorithm. We compare the two schemes in terms of operation count, assignment number, operation speed, PSNR and CR. The experimental results show that with the same number of selected singular values, the real compression scheme offers higher CR and much less operation time, but slightly lower PSNR than the quaternion compression scheme. When the two schemes have the same CR, the real compression scheme shows prominent advantages in both operation time and PSNR.
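A minimal sketch of the real-SVD scheme, assuming the matrix C is formed by stacking the three channels side by side (the paper's exact construction of C may differ):

```python
import numpy as np

def svd_compress(rgb, k):
    """Rank-k approximation of an H x W x 3 image via one real SVD."""
    h, w, _ = rgb.shape
    c = np.hstack([rgb[:, :, i] for i in range(3)]).astype(float)  # H x 3W
    u, s, vt = np.linalg.svd(c, full_matrices=False)
    approx = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]                 # rank-k rebuild
    return np.stack([approx[:, i * w:(i + 1) * w] for i in range(3)], axis=2)
```

Storing only the k leading singular triplets costs k(H + 3W + 1) numbers instead of 3HW, which is where the compression ratio (CR) in the abstract comes from.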
A Framework of Hyperspectral Image Compression using Neural Networks
Masalmah, Yahya M.; Martínez Nieves, Christian; Rivera Soto, Rafael; Velez, Carlos; Gonzalez, Jenipher
2015-01-01
Hyperspectral image analysis has gained great attention due to its wide range of applications. Hyperspectral images provide a vast amount of information about underlying objects in an image by using a large range of the electromagnetic spectrum for each pixel. However, since the same scene is imaged multiple times using distinct electromagnetic bands, the size of such images tends to be significant, which leads to greater processing requirements. The aim of this paper is to present a proposed framework for image compression and to study the possible effects of spatial compression on the quality of unmixing results. Image compression allows us to reduce the dimensionality of an image while still preserving most of the original information, which could lead to faster image processing. Lastly, this paper presents preliminary results of different training techniques used in an Artificial Neural Network (ANN) based compression algorithm.
Statistically lossless image compression for CR and DR
NASA Astrophysics Data System (ADS)
Young, Susan S.; Whiting, Bruce R.; Foos, David H.
1999-05-01
This paper proposes an image compression algorithm that can improve the compression efficiency for digital projection radiographs over current lossless JPEG by utilizing a quantization companding function and a new lossless image compression standard called JPEG-LS. The companding and compression processes can also be augmented by a pre-processing step that first segments the foreground portions of the image and then substitutes the foreground pixel values with a uniform code value. The quantization companding function approach is based on a theory that relates the onset of distortion to changes in the second-order statistics of an image. By choosing an appropriate companding function, the properties of the second-order statistics can be retained to within an insignificant error, and the companded image can then be losslessly compressed using JPEG-LS; we call the reconstructed image statistically lossless. The approach offers a theoretical basis supporting the integrity of the compressed-reconstructed data relative to the original image, while providing a modest level of compression efficiency. This intermediate level of compression could help to increase the comfort level of radiologists who do not currently utilize lossy compression and may also have benefits from a medico-legal perspective.
Learning random networks for compression of still and moving images
NASA Technical Reports Server (NTRS)
Gelenbe, Erol; Sungur, Mert; Cramer, Christopher
1994-01-01
Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.
Fast computational scheme of image compression for 32-bit microprocessors
NASA Technical Reports Server (NTRS)
Kasperovich, Leonid
1994-01-01
This paper presents a new computational scheme of image compression based on the discrete cosine transform (DCT), underlying the JPEG and MPEG international standards. The algorithm for the 2-D DCT computation uses integer operations only (register shifts and additions/subtractions); its computational complexity is about 8 additions per image pixel. As a meaningful example of an on-board image compression application we consider the software implementation of the algorithm for the Mars Rover (Marsokhod, in Russian) imaging system being developed as part of the Mars-96 international space project. It is shown that a fast software solution for 32-bit microprocessors can compete with DCT-based image compression hardware.
Compressive optical image watermarking using joint Fresnel transform correlator architecture
NASA Astrophysics Data System (ADS)
Li, Jun; Zhong, Ting; Dai, Xiaofang; Yang, Chanxia; Li, Rong; Tang, Zhilie
2017-02-01
A new optical image watermarking technique based on compressive sensing using a joint Fresnel transform correlator architecture is presented. A secret scene or image is first embedded into a host image to perform optical image watermarking by use of the joint Fresnel transform correlator architecture. Then, the watermarked image is compressed to a much smaller data volume using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the watermarked image is reconstructed via compressive sensing theory and a specified holographic reconstruction algorithm. Preliminary numerical simulations show that the scheme is effective and well suited to secure optical image transmission over future all-optical networks, owing to its fully optical implementation and greatly reduced hologram data volume.
Texture-based medical image retrieval in compressed domain using compressive sensing.
Yadav, Kuldeep; Srivastava, Avi; Mittal, Ankush; Ansari, M A
2014-01-01
Content-based image retrieval has gained considerable attention in today's scenario as a useful tool in many applications; texture is one of them. In this paper, we focus on texture-based image retrieval in the compressed domain using compressive sensing with the help of DC coefficients. Medical imaging is one of the fields most affected, as image databases have grown huge and retrieving the image of interest has become a daunting task. Considering this, we propose a new model of the image retrieval process using compressive sampling, since it allows accurate recovery of an image from far fewer samples of unknowns and does not require a close match between the sampling pattern and characteristic image structure, while increasing acquisition speed and enhancing image quality.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described.
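The patented pipeline (decimate, compress, transmit, decompress, interpolate, sharpen) can be sketched with the codec step omitted. Box decimation, nearest-neighbour interpolation, and unsharp masking below are illustrative stand-ins, not the patent's specific techniques:

```python
import numpy as np

def decimate2(img):
    """2x2 box decimation (the patent's decimation method is not specified)."""
    return (img[0::2, 0::2] + img[0::2, 1::2] +
            img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def interpolate2(small):
    """Nearest-neighbour upsampling back to the original array size."""
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

def unsharp(img, amount=1.0):
    """Edge sharpening by unsharp masking (one possible contour sharpener)."""
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return img + amount * (img - blur)
```

Decimating before the codec quarters the data the codec must handle; the final sharpening pass compensates for the edge softening that decimation and interpolation introduce.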
CWICOM: A Highly Integrated & Innovative CCSDS Image Compression ASIC
NASA Astrophysics Data System (ADS)
Poupat, Jean-Luc; Vitulli, Raffaele
2013-08-01
The space market is increasingly demanding in terms of image compression performance. Earth-observation instrument resolution, agility and swath are continuously increasing, multiplying by ten the volume of imagery acquired in one orbit. In parallel, satellite size and mass are decreasing, requiring innovative electronic technologies that reduce size, mass and power consumption. Astrium, leader on the market of combined compression-and-memory solutions for space applications, has developed a new image compression ASIC which is presented in this paper. CWICOM is a high-performance and innovative image compression ASIC developed by Astrium in the frame of ESA contract n°22011/08/NLL/LvH. The objective of this ESA contract is to develop a radiation-hardened ASIC that implements the CCSDS 122.0-B-1 Standard for Image Data Compression, that has a SpaceWire interface for configuring and controlling the device, and that is compatible with the Sentinel-2 interface and with similar Earth observation missions. CWICOM stands for CCSDS Wavelet Image COMpression ASIC. It is a large-dynamic-range, large-image and very high speed image compression ASIC potentially relevant for compression of any 2D image with bi-dimensional data correlation, such as Earth observation, scientific data compression… The paper presents some of the main aspects of the CWICOM development, such as the algorithm and specification, the innovative memory organization, the validation approach and the status of the project.
Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing
2013-04-01
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors, computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition, and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets, and image compression methods will therefore play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments in digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.
Wavelet/scalar quantization compression standard for fingerprint images
Brislawn, C.M.
1996-06-12
The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
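The scalar quantization step can be sketched as a uniform quantizer with a widened dead zone around zero, which is the general shape used for wavelet subbands; the step size and dead-zone factor here are illustrative, not the FBI-specified per-subband values:

```python
def quantize_subband(coeffs, step, deadzone=1.2):
    """Uniform scalar quantizer with a widened dead zone around zero.
    Coefficients inside the dead zone map to index 0; the rest map to
    signed bin indices counted outward from the dead-zone edge."""
    out = []
    half = deadzone * step / 2.0
    for c in coeffs:
        if abs(c) <= half:
            out.append(0)
        elif c > 0:
            out.append(int((c - half) // step) + 1)
        else:
            out.append(-(int((-c - half) // step) + 1))
    return out
```

The enlarged zero bin forces the many near-zero detail coefficients to exactly zero, which is what makes the subsequent entropy coding effective at ratios around 15:1.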
Iliac vein compression syndrome: Clinical, imaging and pathologic findings
Brinegar, Katelyn N; Sheth, Rahul A; Khademhosseini, Ali; Bautista, Jemianne; Oklu, Rahmi
2015-01-01
May-Thurner syndrome (MTS) is the pathologic compression of the left common iliac vein by the right common iliac artery, resulting in left lower extremity pain, swelling, and deep venous thrombosis. Though this syndrome was first described in 1851, there are currently no standardized criteria to establish the diagnosis of MTS. Since MTS is treated by a wide array of specialties, including interventional radiology, vascular surgery, cardiology, and vascular medicine, the need for an established diagnostic criterion is imperative in order to reduce misdiagnosis and inappropriate treatment. Although MTS has historically been diagnosed by the presence of pathologic features, the use of dynamic imaging techniques has led to a more radiologic based diagnosis. Thus, imaging plays an integral part in screening patients for MTS, and the utility of a wide array of imaging modalities has been evaluated. Here, we summarize the historical aspects of the clinical features of this syndrome. We then provide a comprehensive assessment of the literature on the efficacy of imaging tools available to diagnose MTS. Lastly, we provide clinical pearls and recommendations to aid physicians in diagnosing the syndrome through the use of provocative measures. PMID:26644823
The impact of skull bone intensity on the quality of compressed CT neuro images
NASA Astrophysics Data System (ADS)
Kowalik-Urbaniak, Ilona; Vrscay, Edward R.; Wang, Zhou; Cavaro-Menard, Christine; Koff, David; Wallace, Bill; Obara, Boguslaw
2012-02-01
The increasing use of technologies such as CT and MRI, along with a continuing improvement in their resolution, has contributed to the explosive growth of digital image data being generated. Medical communities around the world have recognized the need for efficient storage, transmission and display of medical images. For example, the Canadian Association of Radiologists (CAR) has recommended compression ratios for various modalities and anatomical regions to be employed by lossy JPEG and JPEG2000 compression in order to preserve diagnostic quality. Here we investigate the effects of the sharp skull edges present in CT neuro images on JPEG and JPEG2000 lossy compression. We conjecture that this atypical effect is caused by the sharp edges between the skull bone and the background regions as well as between the skull bone and the interior regions. These strong edges create large wavelet coefficients that consume an unnecessarily large number of bits in JPEG2000 compression because of its bitplane coding scheme, and thus result in reduced quality at the interior region, which contains most diagnostic information in the image. To validate the conjecture, we investigate a segmentation based compression algorithm based on simple thresholding and morphological operators. As expected, quality is improved in terms of PSNR as well as the structural similarity (SSIM) image quality measure, and its multiscale (MS-SSIM) and information-weighted (IW-SSIM) versions. This study not only supports our conjecture, but also provides a solution to improve the performance of JPEG and JPEG2000 compression for specific types of CT images.
Context Modeler for Wavelet Compression of Spectral Hyperspectral Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Xie, Hua; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
A context-modeling sub-algorithm has been developed as part of an algorithm that effects three-dimensional (3D) wavelet-based compression of hyperspectral image data. The context-modeling subalgorithm, hereafter denoted the context modeler, provides estimates of probability distributions of wavelet-transformed data being encoded. These estimates are utilized by an entropy coding subalgorithm that is another major component of the compression algorithm. The estimates make it possible to compress the image data more effectively than would otherwise be possible. The following background discussion is prerequisite to a meaningful summary of the context modeler. This discussion is presented relative to ICER-3D, which is the name attached to a particular compression algorithm and the software that implements it. The ICER-3D software is summarized briefly in the preceding article, ICER-3D Hyperspectral Image Compression Software (NPO-43238). Some aspects of this algorithm were previously described, in a slightly more general context than the ICER-3D software, in "Improving 3D Wavelet-Based Compression of Hyperspectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. In turn, ICER-3D is a product of generalization of ICER, another previously reported algorithm and computer program that can perform both lossless and lossy wavelet-based compression and decompression of gray-scale-image data. In ICER-3D, hyperspectral image data are decomposed using a 3D discrete wavelet transform (DWT). Following wavelet decomposition, mean values are subtracted from spatial planes of spatially low-pass subbands prior to encoding. The resulting data are converted to sign-magnitude form and compressed. In ICER-3D, compression is progressive, in that compressed information is ordered so that as more of the compressed data stream is received, successive reconstructions of the hyperspectral image data are of successively higher overall fidelity.
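The mean-subtraction and sign-magnitude steps described above for ICER-3D can be sketched as follows; the toy cube and the integer rounding of the plane means are illustrative assumptions, not the algorithm's exact arithmetic:

```python
import numpy as np

# Toy spatially low-pass "subband" cube with shape (bands, rows, cols).
subband = np.array([[[5, 7], [6, 10]],
                    [[1, 3], [2, 2]]], dtype=np.int64)

# Subtract each spatial plane's (rounded) mean value prior to encoding.
plane_means = np.round(subband.mean(axis=(1, 2))).astype(np.int64)
centered = subband - plane_means[:, None, None]

# Convert the result to sign-magnitude form for bit-plane coding.
signs = (centered < 0).astype(np.uint8)
magnitudes = np.abs(centered)
```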
[Hyperspectral image compression technology research based on EZW].
Wei, Jun-Xia; Xiangli, Bin; Duan, Xiao-Feng; Xu, Zhao-Hui; Xue, Li-Jun
2011-08-01
Along with the development of hyperspectral remote sensing technology, hyperspectral imaging has been applied in aviation and spaceflight. Unlike multispectral imaging, it images the target continuously with spectral bands of nanoscale width, so the image resolution is very high. However, as the number of bands increases, the quantity of spectral data grows accordingly, and the storage and transmission of these data are problems that must be faced. With the development of wavelet compression technology, many researchers in the field of image compression have adopted and improved EZW. The present paper applies the method to compression in the spatial dimension of hyperspectral images, but does not involve compression in the spectral dimension. The hyperspectral image compression and reconstruction results are good, whether judged by the peak signal-to-noise ratio (PSNR) and spectral curves or by subjective comparison of the source and reconstructed images. If the image were first compressed in the spectral dimension and then in the spatial dimension, the authors believe the effect would be better.
NASA Astrophysics Data System (ADS)
Chen, Tinghuan; Zhang, Meng; Wu, Jianhui; Yuen, Chau; Tong, You
2016-10-01
Because it accomplishes encryption and compression in a single simple step, compressed sensing (CS) can be utilized to encrypt and compress an image. Differences in sparsity levels among blocks of the sparsely transformed image degrade compression performance. In this paper, motivated by this difference in sparsity levels, we propose an encryption and compression approach combining Kronecker CS (KCS) with elementary cellular automata (ECA). In the first stage of encryption, ECA is adopted to scramble the sparsely transformed image in order to uniformize sparsity levels, and a simple approximate evaluation method is introduced to test sparsity uniformity. Owing to its low computational complexity and storage requirements, KCS is adopted in the second stage of encryption to encrypt and compress the scrambled and sparsely transformed image, where a measurement matrix of small size is constructed from the piecewise linear chaotic map. Theoretical analysis and experimental results show that the proposed ECA-based scrambling method performs well in terms of scrambling and uniformity of sparsity levels, and that the proposed encryption and compression method achieves better secrecy, compression performance, and flexibility.
Wavelet-based compression of pathological images for telemedicine applications
NASA Astrophysics Data System (ADS)
Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun
2000-05-01
In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and therefore speed up communication between the remote terminal and the central server of the telemedicine system.
Estimating JPEG2000 compression for image forensics using Benford's Law
NASA Astrophysics Data System (ADS)
Qadir, Ghulam; Zhao, Xi; Ho, Anthony T. S.
2010-05-01
With the tremendous growth and usage of digital images nowadays, the integrity and authenticity of digital content is becoming increasingly important, and a growing concern to many government and commercial sectors. Image Forensics, based on a passive statistical analysis of the image data only, is an alternative approach to the active embedding of data associated with Digital Watermarking. Benford's Law was first introduced to analyse the probability distribution of the first digits (1-9) of natural data, and has since been applied to Accounting Forensics for detecting fraudulent income tax returns [9]. More recently, Benford's Law has been further applied to image processing and image forensics. For example, Fu et al. [5] proposed a Generalised Benford's Law technique for estimating the Quality Factor (QF) of JPEG compressed images. In our previous work, we proposed a framework incorporating the Generalised Benford's Law to accurately detect unknown JPEG compression rates of watermarked images in semi-fragile watermarking schemes. JPEG2000 (a relatively new image compression standard) offers higher compression rates and better image quality compared to JPEG compression. In this paper, we propose the novel use of Benford's Law for estimating JPEG2000 compression for image forensics applications. By analysing the DWT coefficients and JPEG2000 compression on 1338 test images, the initial results indicate that the first-digit probability of DWT coefficients follows Benford's Law. The unknown JPEG2000 compression rate of an image can also be derived and verified with the help of a divergence factor, which shows the deviation between the probabilities and Benford's Law. Based on 1338 test images, the mean divergence for DWT coefficients is approximately 0.0016, which is lower than that of DCT coefficients at 0.0034. However, the mean divergence for JPEG2000 images at a compression rate of 0.1 is 0.0108, which is much higher than for uncompressed DWT coefficients. This result
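The first-digit analysis underlying this approach is easy to reproduce. The divergence formula below is a generic chi-square-style deviation measure, not necessarily the exact divergence factor used in the paper, and the coefficient values are stand-ins:

```python
import numpy as np

def benford_pmf():
    """Benford's Law: P(d) = log10(1 + 1/d) for first digits d = 1..9."""
    d = np.arange(1, 10)
    return np.log10(1 + 1.0 / d)

def first_digit_pmf(values):
    """Empirical distribution of the first significant digit of nonzero values."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    digits = (v / 10.0 ** np.floor(np.log10(v))).astype(int)
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

def divergence(p, b):
    """Chi-square-style deviation between an empirical pmf and Benford's pmf."""
    return float(np.sum((p - b) ** 2 / b))

coeffs = [123.0, 0.045, 9.9, 1.7, 14.0]            # stand-in DWT coefficients
dev = divergence(first_digit_pmf(coeffs), benford_pmf())
```

A small `dev` indicates the coefficients follow Benford's Law; the paper exploits how this deviation changes with compression rate.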
Compressing subbanded image data with Lempel-Ziv-based coders
NASA Technical Reports Server (NTRS)
Glover, Daniel; Kwatra, S. C.
1993-01-01
A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.
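A minimal sketch of the pipeline: run-length coding of zero runs followed by an LZ-based stage. Here zlib's DEFLATE (an LZ77 variant) stands in for the Lempel-Ziv coder, and the (0, run-length) token format is an assumption for illustration:

```python
import zlib
import numpy as np

def run_length_zeros(data):
    """Replace each run of zeros with a (0, run_length) token pair,
    exploiting the long zero runs produced by quantization."""
    out, i = [], 0
    data = list(data)
    while i < len(data):
        if data[i] == 0:
            j = i
            while j < len(data) and data[j] == 0:
                j += 1
            out += [0, j - i]          # zero marker followed by run length
            i = j
        else:
            out.append(int(data[i]))
            i += 1
    return out

quantized = [5, 0, 0, 0, 0, -2, 0, 0, 1]            # toy quantized subband samples
tokens = run_length_zeros(quantized)                 # [5, 0, 4, -2, 0, 2, 1]
packed = zlib.compress(np.array(tokens, dtype=np.int8).tobytes())  # LZ stage
```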
Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong
2016-08-01
Most image encryption algorithms based on low-dimensional chaos systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.
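The two-direction measurement and chaos-controlled cycle shift can be sketched as below. A simple logistic map stands in for the hyper-chaotic system, and all sizes, seeds, and map parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 32                                   # image size and measurement size
X = rng.standard_normal((n, n))                 # stand-in for the original image

# Measure in two directions: Y = A @ X @ B.T compresses n*n samples to m*m,
# encrypting at the same time since A and B act as keys.
A = rng.standard_normal((m, n)) / np.sqrt(m)
B = rng.standard_normal((m, n)) / np.sqrt(m)
Y = A @ X @ B.T

# Re-encrypt by cycle-shifting each row; a logistic map (stand-in for the
# hyper-chaotic system) generates the shift amounts from a secret seed.
x, shifts = 0.3617, []
for _ in range(m):
    x = 3.99 * x * (1 - x)
    shifts.append(int(x * m))
Y_enc = np.stack([np.roll(Y[i], s) for i, s in enumerate(shifts)])

# A receiver who knows the seed undoes the shifts before CS reconstruction.
Y_dec = np.stack([np.roll(Y_enc[i], -s) for i, s in enumerate(shifts)])
```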
Multispectral image compression based on DSC combined with CCSDS-IDC.
Li, Jin; Xing, Fei; Sun, Ting; You, Zheng
2014-01-01
Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually work on satellites where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit-plane encoder (BPE). Finally, the BPE is merged, in a deeply coupled way, into a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches.
NASA Astrophysics Data System (ADS)
Li, Dunling; Loew, Murray H.
2004-05-01
This paper provides a theoretical foundation for the closed-form expression of model observers on compressed images. In medical applications, model observers, especially the channelized Hotelling observer, have been successfully used to predict human observer performance and to evaluate image quality for detection tasks in various backgrounds. To use model observers, however, requires knowledge of the noise statistics. This paper first identifies quantization noise as the sole distortion source in transform coding, one of the most commonly used methods for image compression. Then, it represents transform coding as a 1-D block-based matrix expression, and further derives the first and second moments and the probability density function (pdf) of the compression noise at the pixel, block, and image levels. The compression noise statistics depend on the transform matrix and the quantization matrix of the transform coding algorithm. Compression noise is jointly normally distributed when the dimension of the transform (the block size) is typical and the contents of the image sets vary randomly. Moreover, this paper uses JPEG as a test example to verify the derived statistics. The simulation results show that the closed-form expression of JPEG quantization and compression noise statistics correctly predicts the statistics estimated from actual images.
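The premise that quantization is the sole noise source can be checked numerically. For a uniform quantizer of step Δ acting on a sufficiently random signal, the error is approximately uniform on (-Δ/2, Δ/2), with mean 0 and variance Δ²/12; the simulation below is a generic sanity check, not the paper's block-based derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
step = 8.0
x = rng.uniform(-128.0, 128.0, size=200_000)      # stand-in transform coefficients
noise = x - step * np.round(x / step)             # quantization error

empirical_var = noise.var()
predicted_var = step ** 2 / 12                    # classical high-resolution model
```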
Science-based Region-of-Interest Image Compression
NASA Technical Reports Server (NTRS)
Wagstaff, K. L.; Castano, R.; Dolinar, S.; Klimesh, M.; Mukai, R.
2004-01-01
As the number of currently active space missions increases, so does competition for Deep Space Network (DSN) resources. Even given unbounded DSN time, power and weight constraints onboard the spacecraft limit the maximum possible data transmission rate. These factors highlight a critical need for very effective data compression schemes. Images tend to be the most bandwidth-intensive data, so image compression methods are particularly valuable. In this paper, we describe a method for prioritizing regions in an image based on their scientific value. Using a wavelet compression method that can incorporate priority information, we ensure that the highest priority regions are transmitted with the highest fidelity.
Medical image compression algorithm based on wavelet transform
NASA Astrophysics Data System (ADS)
Chen, Minghong; Zhang, Guoping; Wan, Wei; Liu, Minmin
2005-02-01
With the rapid development of electronic imaging and multimedia technology, telemedicine is being applied to modern medical services in hospitals. Digital medical images are characterized by high resolution, high precision, and vast data volume. An optimized compression algorithm can ease restrictions on transmission speed and data storage. This paper describes the characteristics of the human vision system based on its physiological structure, analyses the characteristics of medical images in telemedicine, and then puts forward an optimized compression algorithm based on wavelet zerotree coding. After the image is smoothed, it is decomposed with the Haar filters, and the wavelet coefficients are then quantified adaptively. We can therefore maximize compression efficiency and achieve better subjective visual quality. This algorithm can be applied to image transmission in telemedicine. Finally, we examined the feasibility of this algorithm with an image transmission experiment over the network.
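The Haar decomposition step can be sketched in a few lines. The averaging/differencing normalization used here is one common convention; the paper's filter scaling may differ:

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar transform: returns LL, LH, HL, HH subbands."""
    a = np.asarray(img, dtype=float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2        # horizontal average
    hi = (a[:, 0::2] - a[:, 1::2]) / 2        # horizontal difference
    ll = (lo[0::2, :] + lo[1::2, :]) / 2      # then the same vertically
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

img = np.array([[10, 10, 2, 2],
                [10, 10, 2, 2],
                [ 4,  4, 6, 6],
                [ 4,  4, 6, 6]])
ll, lh, hl, hh = haar2d_level(img)   # smooth regions give all-zero detail subbands
```

The all-zero detail subbands on smooth regions are exactly what zerotree coding exploits.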
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Li, Haolin; Wang, Di; Pan, Shumin; Zhou, Zhihong
2015-05-01
Most of the existing image encryption techniques bear security risks because they take linear transforms, or suffer encrypted-data expansion when adopting nonlinear transformations directly. To overcome these difficulties, a novel image compression-encryption scheme is proposed by combining 2D compressive sensing with the nonlinear fractional Mellin transform. In this scheme, the original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by the nonlinear fractional Mellin transform. The measurement matrices are controlled by a chaotic map. The Newton Smoothed l0 Norm (NSL0) algorithm is adopted to obtain the decrypted image. Simulation results verify the validity and the reliability of this scheme.
Dynamic CT perfusion image data compression for efficient parallel processing.
Barros, Renan Sales; Olabarriaga, Silvia Delgado; Borst, Jordi; van Walderveen, Marianne A A; Posthuma, Jorrit S; Streekstra, Geert J; van Herk, Marcel; Majoie, Charles B L M; Marquering, Henk A
2016-03-01
The increasing size of medical imaging data, in particular time series such as CT perfusion (CTP), requires new and fast approaches to deliver timely results for acute care. Cloud architectures based on graphics processing units (GPUs) can provide the processing capacity required for delivering fast results. However, the size of CTP datasets makes transfers to cloud infrastructures time-consuming and therefore not suitable in acute situations. To reduce this transfer time, this work proposes a fast and lossless compression algorithm for CTP data. The algorithm exploits redundancies in the temporal dimension and keeps random read-only access to the image elements directly from the compressed data on the GPU. To the best of our knowledge, this is the first work to present a GPU-ready method for medical image compression with random access to the image elements from the compressed data.
Pre-Processor for Compression of Multispectral Image Data
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron
2006-01-01
A computer program that preprocesses multispectral image data has been developed to provide the Mars Exploration Rover (MER) mission with a means of exploiting the additional correlation present in such data without appreciably increasing the complexity of compressing the data.
Simultaneous fusion, compression, and encryption of multiple images.
Alfalou, A; Brosseau, C; Abdallah, N; Jridi, M
2011-11-21
We report a new spectral multiple image fusion analysis based on the discrete cosine transform (DCT) and a specific spectral filtering method. In order to decrease the size of the multiplexed file, we suggest a compression procedure based on an adapted spectral quantization. Each frequency is encoded with an optimized number of bits according to its importance and its position in the DCT domain. This fusion and compression scheme constitutes a first level of encryption. A supplementary level of encryption is realized by making use of biometric information. We consider several implementations of this analysis by experimenting with sequences of gray-scale images. To quantify the performance of our method we calculate the MSE (mean squared error) and the PSNR (peak signal-to-noise ratio). Our results show consistently improved performance compared to the well-known JPEG image compression standard and provide a viable solution for simultaneous compression and encryption of multiple images.
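The MSE and PSNR figures of merit mentioned above are standard; for 8-bit images they can be computed as follows (the toy images are illustrative):

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two images."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = mse(ref, test)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)

a = np.zeros((8, 8), dtype=np.uint8)
b = np.ones((8, 8), dtype=np.uint8)       # every pixel off by 1, so MSE = 1
```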
An image compression technique for use on token ring networks
NASA Technical Reports Server (NTRS)
Gorjala, B.; Sayood, Khalid; Meempat, G.
1992-01-01
A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.
A High Performance Image Data Compression Technique for Space Applications
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack
2003-01-01
A high-performance image data compression technique is currently being developed for space science applications under the requirement of high-speed pushbroom scanning. The technique is also applicable to frame-based imaging data. The algorithm combines a two-dimensional transform with bit-plane encoding; this results in an embedded bit string with the exact compression rate desired by the user. The compression scheme performs well on a suite of test images acquired from spacecraft instruments. It can also be applied to three-dimensional data cubes produced by hyperspectral imaging instruments. Flight-qualifiable hardware implementations are in development; the implementation is being designed to compress data in excess of 20 Msamples/sec and to support quantization from 2 to 16 bits. This paper presents the algorithm, its applications, and the status of development.
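The bit-plane encoding that yields an embedded bit string can be sketched as follows: planes are emitted from most to least significant, so the decoder can stop at any point and still reconstruct an approximation. The 4-bit magnitudes are an illustrative assumption:

```python
import numpy as np

mags = np.array([5, 0, 12, 3], dtype=np.uint8)   # magnitudes of transform samples
nbits = 4

# Emit bit planes from the most significant bit down (embedded ordering).
planes = [((mags >> b) & 1) for b in range(nbits - 1, -1, -1)]

def reconstruct(planes, nbits, received):
    """Rebuild values from only the first `received` (most significant) planes."""
    out = np.zeros_like(planes[0])
    for k in range(received):
        out = out | (planes[k] << (nbits - 1 - k))
    return out

full = reconstruct(planes, nbits, received=4)     # all planes: lossless
coarse = reconstruct(planes, nbits, received=2)   # truncated stream: top 2 planes
```

Truncating the stream after any plane boundary is what gives the user direct control over the compressed data volume.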
Effect of severe image compression on face recognition algorithms
NASA Astrophysics Data System (ADS)
Zhao, Peilong; Dong, Jiwen; Li, Hengjian
2015-10-01
In today's information age, people depend more and more on computers to obtain and make use of information, yet there is a big gap between the large data volumes of digitized multimedia information and the storage resources and network bandwidth that current hardware technology can provide. Image storage and transmission is one example of this problem. Image compression is useful when images need to be transmitted across networks in a less costly way, reducing data volume and transmission time. This paper discusses the effect of image compression on face recognition systems. For compression purposes, we adopted the JPEG, JPEG2000, and JPEG XR coding standards. The face recognition algorithm studied is SIFT. Experimental results show that SIFT maintains a high recognition rate even under high compression ratios, and that JPEG XR is superior to the other two standards in terms of performance and complexity.
The Pixon Method for Data Compression Image Classification, and Image Reconstruction
NASA Technical Reports Server (NTRS)
Puetter, Richard; Yahil, Amos
2002-01-01
As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced, and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.
Image-adapted visually weighted quantization matrices for digital image compression
NASA Technical Reports Server (NTRS)
Watson, Andrew B. (Inventor)
1994-01-01
A method for performing image compression that eliminates redundant and invisible image components is presented. The image compression uses a Discrete Cosine Transform (DCT), and each DCT coefficient yielded by the transform is quantized by an entry in a quantization matrix which determines the perceived image quality and the bit rate of the image being compressed. The present invention adapts, or customizes, the quantization matrix to the image being compressed. The quantization matrix incorporates visual masking by luminance and contrast techniques and by an error-pooling technique, all resulting in a minimum perceptual error for any given bit rate, or a minimum bit rate for a given perceptual error.
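A minimal sketch of DCT-domain quantization with a quantization matrix. The matrix shown is the well-known JPEG example luminance table, standing in for the image-adapted, visually weighted matrix the invention computes:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows index frequency)."""
    i = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

# JPEG example luminance quantization table (ITU-T T.81, Annex K).
Q = np.array([[16, 11, 10, 16, 24, 40, 51, 61],
              [12, 12, 14, 19, 26, 58, 60, 55],
              [14, 13, 16, 24, 40, 57, 69, 56],
              [14, 17, 22, 29, 51, 87, 80, 62],
              [18, 22, 37, 56, 68, 109, 103, 77],
              [24, 35, 55, 64, 81, 104, 113, 92],
              [49, 64, 78, 87, 103, 121, 120, 101],
              [72, 92, 95, 98, 112, 100, 103, 99]], dtype=float)

C = dct_matrix(8)
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128.0  # one 8x8 block

coeffs = C @ block @ C.T          # forward 2D DCT
indices = np.round(coeffs / Q)    # each coefficient quantized by its matrix entry
recon = C.T @ (indices * Q) @ C   # decoder: dequantize and inverse DCT
```

Larger entries (toward high frequencies) discard more information, which is the visual-weighting idea the invention tunes per image.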
Fast DPCM scheme for lossless compression of aurora spectral images
NASA Astrophysics Data System (ADS)
Kong, Wanqiu; Wu, Jiaji
2016-10-01
Aurora observations contain abundant information to be stored. Aurora spectral images electronically preserve the spectral information and visual observations of aurora over a period so that they can be studied later. These images are helpful for research on Earth-solar activity and for understanding the aurora phenomenon itself. However, the images are produced at quite a high sampling frequency, which leads to a challenging transmission load; lossless compression is therefore required. Each frame of an aurora spectral image differs from a classical natural image and also from a frame of a hyperspectral image, so existing lossless compression algorithms are not directly applicable. On the other hand, the key to compression is decorrelating pixels, so we exploit a DPCM-based scheme for lossless compression, since DPCM is effective for decorrelation. The scheme makes use of two-dimensional redundancy in both the spatial and spectral domains with relatively low complexity. We also parallelize it for faster computation. All code is implemented as nested for loops, with the outer and inner loops designed for spectral and spatial decorrelation respectively; the parallel version runs on a CPU platform using different numbers of cores. Experimental results show that, compared to traditional lossless compression methods, the DPCM scheme has a great advantage in compression gain and meets the requirement of real-time transmission. The parallel version also shows the expected computational performance, with high CPU utilization.
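A minimal sketch of the two-stage DPCM idea, using previous-band then previous-column differencing. The actual predictor in the paper may differ; the toy cube dimensions are illustrative:

```python
import numpy as np

def dpcm_encode(cube):
    """Lossless residuals: difference along the spectral axis, then along columns."""
    r = np.asarray(cube, dtype=np.int32)
    r = np.diff(r, axis=0, prepend=0)    # spectral decorrelation (outer loop)
    r = np.diff(r, axis=2, prepend=0)    # spatial decorrelation (inner loop)
    return r

def dpcm_decode(residuals):
    """Invert the two differencing stages with cumulative sums."""
    x = np.cumsum(residuals, axis=2)
    return np.cumsum(x, axis=0)

rng = np.random.default_rng(7)
cube = rng.integers(0, 4096, size=(3, 4, 5))    # toy (bands, rows, cols) data
res = dpcm_encode(cube)
```

The residuals cluster near zero for correlated data, which is what makes the subsequent entropy coding effective.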
Planning/scheduling techniques for VQ-based image compression
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Manohar, Mareboyana; Tilton, James C.
1994-01-01
The enormous size of the data holdings and the complexity of the information system resulting from the EOS system pose several challenges to computer scientists, one of which is data archival and dissemination. More than ninety percent of NASA's data holdings are in the form of images, which will be accessed by users across the computer networks. Accessing the image data at full resolution creates data traffic problems. Image browsing using lossy compression reduces this data traffic, as well as storage, by a factor of 30-40. Of the several image compression techniques, VQ is most appropriate for this application since the decompression of VQ-compressed images is a table-lookup process which makes minimal additional demands on the user's computational resources. Lossy compression of image data generally needs expert-level knowledge and is not straightforward to use. This is especially true in the case of VQ: it involves the selection of appropriate codebooks for a given data set, vector dimensions for each compression ratio, and so on. A planning and scheduling system is described for using the VQ compression technique in the data access and ingest of raw satellite data.
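The table-lookup nature of VQ decompression, which keeps the user-side cost minimal, can be sketched with a toy 2x2-block codebook; the codebook contents and sizes are illustrative assumptions:

```python
import numpy as np

# Three codewords, each a flattened 2x2 pixel block.
codebook = np.array([[  0,   0,   0,   0],
                     [255, 255, 255, 255],
                     [  0, 255,   0, 255]], dtype=np.uint8)

indices = np.array([[0, 1],      # compressed image: one codebook index per block
                    [2, 0]])

def vq_decode(indices, codebook, block=(2, 2)):
    """Decompression is a pure table lookup: each index selects a codeword block."""
    h, w = indices.shape
    bh, bw = block
    blocks = codebook[indices].reshape(h, w, bh, bw)
    return blocks.transpose(0, 2, 1, 3).reshape(h * bh, w * bw)

img = vq_decode(indices, codebook)
```

All the expert-level effort (codebook design, vector dimension choice) sits on the encoder side; the decoder needs only this lookup.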
Image compression software for the SOHO LASCO and EIT experiments
NASA Technical Reports Server (NTRS)
Grunes, Mitchell R.; Howard, Russell A.; Hoppel, Karl; Mango, Stephen A.; Wang, Dennis
1994-01-01
This paper describes the lossless and lossy image compression algorithms to be used on board the Solar Heliospheric Observatory (SOHO) in conjunction with the Large Angle Spectrometric Coronagraph and Extreme Ultraviolet Imaging Telescope experiments. It also shows preliminary results obtained using similar prior imagery and discusses the lossy compression artifacts which will result. This paper is in part intended for SOHO investigators who need to understand the results of SOHO compression in order to make the best use of the transmission bits they have been allocated.
Imaging industry expectations for compressed sensing in MRI
NASA Astrophysics Data System (ADS)
King, Kevin F.; Kanwischer, Adriana; Peters, Rob
2015-09-01
Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm
Integer cosine transform for image compression
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Pollara, F.; Shahshahani, M.
1991-01-01
This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
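To illustrate the idea behind integer transforms (this is a generic toy approximation, not the actual ICT matrix of the article), one can round a scaled DCT basis to small integers and check that the resulting transform stays close to the floating-point DCT:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C

# Toy integer approximation: round a scaled DCT matrix to small integers,
# then renormalize rows so the transform stays close to orthonormal.
C = dct_matrix()
T_int = np.round(C * 8)                    # small-integer basis (|entries| <= 4)
T = T_int / np.linalg.norm(T_int, axis=1, keepdims=True)

x = np.arange(8.0)                         # a sample signal
err = np.abs(T @ x - C @ x).max()
print(err)                                 # small deviation from the float DCT
```

The actual ICT goes further: the integer matrix itself (with a diagonal scaling absorbed into the quantizer) replaces all floating-point arithmetic in the transform stage.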
Correlation estimation and performance optimization for distributed image compression
NASA Astrophysics Data System (ADS)
He, Zhihai; Cao, Lei; Cheng, Hui
2006-01-01
Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
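The bit-plane benefit of Gray coding is easy to see in a toy case: two numerically close pixel values can differ in many natural-binary bit planes but in only one Gray-coded plane. A minimal sketch:

```python
def to_gray(n: int) -> int:
    # Binary-reflected Gray code: adjacent integers differ in one bit.
    return n ^ (n >> 1)

# A source pixel of 7 and a side-information value of 8 differ in all four
# natural-binary bit planes, but in only one Gray-coded bit plane, so the
# bit planes seen by the Slepian-Wolf coder become far better correlated.
a, b = 7, 8
plain_diff = bin(a ^ b).count("1")
gray_diff = bin(to_gray(a) ^ to_gray(b)).count("1")
print(plain_diff, gray_diff)  # 4 1
```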
Comparison Of Data Compression Schemes For Medical Images
NASA Astrophysics Data System (ADS)
Noh, Ki H.; Jenkins, Janice M.
1986-06-01
Medical images acquired and stored digitally continue to pose a major problem in the area of picture archiving and transmission. The need for accurate reproduction of such images, which constitute patient medical records, and the medico-legal problems of possible loss of information have led us to examine the suitability of data compression schemes for several different medical image modalities. We have examined both reversible and irreversible coding as methods of image formatting and reproduction. In reversible coding, we have tested run-length coding and arithmetic coding on image bit planes. In irreversible coding, we have studied transform coding, linear predictive coding, and block truncation coding and their effects on image quality versus compression ratio in several image modalities. In transform coding, we have applied the discrete Fourier transform, discrete cosine transform, discrete sine transform, and Walsh-Hadamard transform to images in which a subset of the transformed coefficients was retained and quantized. In linear predictive coding, we used a fixed-level quantizer. In block truncation coding, the first and second moments were retained. Results of all types of irreversible coding were unsatisfactory in terms of reproduction of the original image. Run-length coding was useful on several bit planes of an image but not on others. Arithmetic coding was found to be completely reversible and achieved compression ratios of up to 2 to 1.
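Run-length coding of a bit plane, one of the reversible methods tested above, can be sketched in a few lines (a generic illustration, not the authors' implementation):

```python
def rle(bits):
    # Run-length encode a bit plane as (value, run length) pairs.
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [tuple(r) for r in runs]

plane = [0] * 12 + [1] * 3 + [0] * 9
print(rle(plane))  # [(0, 12), (1, 3), (0, 9)]
```

The sketch also shows why the method helps only on some bit planes: high-order planes of a smooth image contain long runs and compress well, while low-order planes are close to noise, producing many short runs and little or no gain.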
OARSI Clinical Trials Recommendations for Hip Imaging in Osteoarthritis
Gold, Garry E.; Cicuttini, Flavia; Crema, Michel D.; Eckstein, Felix; Guermazi, Ali; Kijowski, Richard; Link, Thomas M.; Maheu, Emmanuel; Martel-Pelletier, Johanne; Miller, Colin G.; Pelletier, Jean-Pierre; Peterfy, Charles G.; Potter, Hollis G.; Roemer, Frank W.; Hunter, David. J
2015-01-01
Imaging of the hip in osteoarthritis (OA) has seen considerable progress in the past decade, with the introduction of new techniques that may be more sensitive to structural disease changes. The purpose of this expert-opinion, consensus-driven recommendation is to provide detail on how to apply hip imaging in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography and sequence/protocol recommendations and hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and artifacts associated with various MRI sequences); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, and validity); recommendations for trials; and research recommendations. PMID:25952344
DCT and DST Based Image Compression for 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, Mohammed M.; Rodrigues, Marcos A.
2017-03-01
This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) a one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image; (2) the output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), applied to each column of data, generating new sets of high-frequency components, followed by quantization of the higher frequencies. The output is then divided into two parts: the low-frequency components are compressed by arithmetic coding and the high-frequency ones by an efficient minimization encoding algorithm. At the decompression stage, a binary search algorithm is used to recover the original high-frequency components. The technique is demonstrated by compressing 2D images at up to 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality, yielding accurate 3D reconstruction. Perceptual assessment and objective compression quality are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 for 3D reconstruction, with perceptual quality equivalent to JPEG2000.
Feasibility studies of optical processing of image bandwidth compression schemes
NASA Astrophysics Data System (ADS)
Hunt, B. R.
1987-05-01
The two research activities are reported in two separate divisions of this research report: 1. Adaptive Recursive Interpolated DPCM (ARIDPCM) for image data compression. A consistent theme in the research supported under Grant AFOSR-81-0170 has been novel methods of image data compression that are suitable for implementation by optical processing. Initial investigation led to the IDPCM method of image data compression. 2. Deblurring images through the turbulent atmosphere. A common problem in astronomy is imaging astronomical objects through the microscale fluctuations of the atmosphere. These fluctuations limit the resolution of any object observed by a ground-based telescope, the twinkling of stars being the most commonly observed form of this degradation. The problem also has military significance in limiting the ground-based observation of satellites in Earth orbit. As concerns about SDI arise, the observation of Soviet satellites becomes more important, and this observation is limited by atmospheric turbulence.
Image Compression Algorithm Altered to Improve Stereo Ranging
NASA Technical Reports Server (NTRS)
Kiely, Aaron
2008-01-01
A report discusses a modification of the ICER image-data-compression algorithm to increase the accuracy of ranging computations performed on compressed stereoscopic image pairs captured by cameras aboard the Mars Exploration Rovers. (ICER and variants thereof were discussed in several prior NASA Tech Briefs articles.) Like many image compressors, ICER was designed to minimize a mean-square-error measure of distortion in reconstructed images as a function of the compressed data volume. The present modification of ICER was preceded by formulation of an alternative error measure, an image-quality metric that focuses on stereoscopic-ranging quality and takes account of image-processing steps in the stereoscopic-ranging process. This metric was used in empirical evaluation of bit planes of wavelet-transform subbands that are generated in ICER. The present modification, which is a change in a bit-plane prioritization rule in ICER, was adopted on the basis of this evaluation. This modification changes the order in which image data are encoded, such that when ICER is used for lossy compression, better stereoscopic-ranging results are obtained as a function of the compressed data volume.
NASA Astrophysics Data System (ADS)
Kowalik-Urbaniak, Ilona; Brunet, Dominique; Wang, Jiheng; Koff, David; Smolarski-Koff, Nadine; Vrscay, Edward R.; Wallace, Bill; Wang, Zhou
2014-03-01
Our study, involving a collaboration with radiologists (DK, NSK) as well as a leading international developer of medical imaging software (AGFA), is primarily concerned with improved methods of assessing the diagnostic quality of compressed medical images and the investigation of compression artifacts resulting from JPEG and JPEG2000. In this work, we compare the performance of the Structural Similarity quality measure (SSIM), MSE/PSNR, compression ratio (CR), and JPEG quality factor (Q), based on experimental data collected in two experiments involving radiologists. An ROC and Kolmogorov-Smirnov analysis indicates that compression ratio is not always a good indicator of visual quality. Moreover, SSIM demonstrates the best performance, i.e., it provides the closest match to the radiologists' assessments. We also show that a weighted Youden index and curve-fitting method can provide SSIM and MSE thresholds for acceptable compression ratios.
Coil Compression for Accelerated Imaging with Cartesian Sampling
Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael
2012-01-01
MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High-quality compression of in-vivo 3D data from a 32-channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
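The core of software coil compression is a truncated SVD that maps many physical channels to a few virtual coils. A minimal sketch on synthetic data (it omits the per-location compression and alignment steps that are the paper's contribution):

```python
import numpy as np

# SVD-based coil compression on synthetic data: 32 physical channels whose
# signals span a 6-dimensional subspace compress to 6 virtual coils without
# loss. Real data are only approximately low-rank, so some energy is lost.
rng = np.random.default_rng(0)
mixing = rng.standard_normal((32, 6))            # toy coil sensitivities
data = mixing @ rng.standard_normal((6, 500))    # 32 channels x 500 samples

U, s, Vt = np.linalg.svd(data, full_matrices=False)
A = U[:, :6]                                     # compression matrix
virtual = A.T @ data                             # 6 virtual coils
recon = A @ virtual

err = np.abs(recon - data).max()
print(err)  # ~0: six virtual coils capture everything here
```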
A Novel Psychovisual Threshold on Large DCT for Image Compression
2015-01-01
A psychovisual experiment prescribes the quantization values used in image compression. The quantization process acts as a threshold of the human visual system's tolerance, reducing the number of encoded transform coefficients. It is very challenging to generate an optimal quantization value based on the contribution of the transform coefficient at each frequency order. The psychovisual threshold represents the sensitivity of human visual perception at each frequency order to the image reconstruction. An ideal contribution of the transform at each frequency order serves as the primitive of the psychovisual threshold in image compression. This study proposes a psychovisual threshold on large discrete cosine transform (DCT) image blocks that is used to automatically generate the needed quantization tables. The proposed psychovisual threshold prescribes the quantization values at each frequency order. The psychovisual threshold on the large image block provides a significant improvement in the quality of output images: the large quantization tables derived from the psychovisual threshold produce output images largely free of artifacts. Moreover, the experimental results show that the psychovisual threshold produces better-quality images at higher compression rates than JPEG image compression. PMID:25874257
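The role of a quantization table as a perceptual threshold can be sketched as follows; the table used here is an illustrative ramp that grows toward high frequencies, not a psychovisually derived one from the paper:

```python
import numpy as np

def dct2(block):
    # 2D orthonormal DCT-II via the separable matrix form.
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2 / n)
    C[0] /= np.sqrt(2)
    return C @ block @ C.T

# Quantization as a visual-tolerance threshold: coefficients small relative
# to their table entry are driven to zero, and only a few survive.
block = np.outer(np.arange(8.0), np.ones(8))            # simple vertical ramp
coeffs = dct2(block)
qtable = 16 + 2 * np.add.outer(np.arange(8), np.arange(8))  # coarser at high freq
quantized = np.round(coeffs / qtable)

count = int(np.count_nonzero(quantized))
print(count)  # only a couple of low-frequency coefficients survive
```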
Computational complexity of object-based image compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
2005-09-01
Image compression via transform coding applied to small rectangular regions or encoding blocks appears to be approaching asymptotic rate-distortion performance. However, an emerging compression technology called object-based compression (OBC) promises significantly improved performance, with compression ratios ranging from 200:1 to as high as 2,500:1. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. During decompression, such regions can be approximated by objects from a codebook, yielding a reconstructed image that is semantically equivalent to the corresponding source image but has pixel- and featural-level differences. Semantic equivalence between the source and decompressed image facilitates fast decompression through efficient substitutions, albeit at the cost of codebook search in the compression step. Given small codebooks, OBC holds promise for information-push technologies where approximate context is sufficient, for example, transmission of surveillance images that provide the gist of a scene. However, OBC is not necessarily useful for applications requiring high accuracy, such as medical image processing, because substitution of source content can be inaccurate at small spatial scales. The cost of segmentation is a significant disadvantage in current OBC implementations. Several innovative techniques have been developed for region segmentation, as discussed in a previous paper [4]. Additionally, tradeoffs between representational fidelity, computational cost, and storage requirements occur, as with the vast majority of lossy compression algorithms. This paper analyzes the computational (time) and storage (space) complexities of several recent OBC algorithms applied to single-frame imagery. A time complexity model is proposed, which can be associated theoretically with a space complexity model that we have previously published [2]. The result, when combined with measurements of
Watermarking of ultrasound medical images in teleradiology using compressed watermark
Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq
2016-01-01
Abstract. The open accessibility of Internet-based medical images in teleradiology faces security threats due to the nonsecured communication media. This paper discusses the spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at pixel LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques, including Lempel-Ziv-Welch (LZW), were tested for watermark compression. The performance of these techniques was compared on the basis of bit reduction and compression ratio. LZW was found to be better than the others and was used in the development of a tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared with and found to be better than that of other watermarking schemes. PMID:26839914
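A toy version of the embedding pipeline, with zlib standing in for LZW and SHA-256 as an assumed hash, might look like this (all sizes and stand-in data are hypothetical):

```python
import hashlib
import zlib

# Toy pipeline: hash the ROI, losslessly compress ROI + hash (zlib stands
# in for LZW here), and embed the payload bits in RONI pixel LSBs.
roi = bytes(range(64))                       # stand-in region of interest
payload = zlib.compress(roi + hashlib.sha256(roi).digest())
bits = [(byte >> i) & 1 for byte in payload for i in range(8)]

roni = [128] * len(bits)                     # stand-in RONI pixel values
stego = [(p & ~1) | b for p, b in zip(roni, bits)]   # overwrite each LSB

# Extraction reverses the embedding, decompresses, and verifies the hash.
extracted = bytes(
    sum((stego[8 * i + j] & 1) << j for j in range(8))
    for i in range(len(payload))
)
recovered = zlib.decompress(extracted)
ok = (recovered[:-32] == roi
      and hashlib.sha256(recovered[:-32]).digest() == recovered[-32:])
print(ok)  # True: watermark recovered losslessly, ROI authenticated
```

Changing any stego LSB would break either the decompression or the hash check, which is what enables tamper detection.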
Preprocessing and compression of Hyperspectral images captured onboard UAVs
NASA Astrophysics Data System (ADS)
Herrero, Rolando; Cadirola, Martin; Ingle, Vinay K.
2015-10-01
Advancements in image sensors and signal processing have led to the successful development of lightweight hyperspectral imaging systems that are critical to the deployment of Photometry and Remote Sensing (PaRS) capabilities in unmanned aerial vehicles (UAVs). In general, hyperspectral data cubes include a few dozen spectral bands that are extremely useful for remote sensing applications ranging from detection of land vegetation to monitoring of atmospheric products derived from the processing of lower-level radiance images. Because these data cubes are captured in the challenging environment of UAVs, where resources are limited, source encoding by means of compression is a fundamental mechanism that considerably improves overall system performance and reliability. In this paper, we focus on the hyperspectral images captured by a state-of-the-art commercial hyperspectral camera and show the results of applying ultraspectral data compression to the obtained data set. Specifically, the compression scheme that we introduce integrates two stages: (1) preprocessing and (2) compression itself. The outcomes of this procedure are linear prediction coefficients and an error signal that, when encoded, results in a compressed version of the original image. The preprocessing and compression algorithms are then optimized and their time complexity analyzed to guarantee successful deployment on low-power ARM-based embedded processors in the context of UAVs. Lastly, we compare the proposed architecture against other well-known schemes and show how the compression scheme presented in this paper outperforms all of them, delivering both lower bit rates and lower distortion.
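The prediction stage's output, linear-prediction coefficients plus an error signal, is worth a sketch: for correlated data the residual carries far less energy than the raw signal, which is what makes the encoded version smaller. A minimal illustration on a toy 1D "band" with an assumed order-2 predictor:

```python
import numpy as np

# Linear prediction on a toy correlated signal: fit order-2 prediction
# coefficients by least squares; coefficients + residual describe the data,
# and the residual is far more compressible than the raw samples.
rng = np.random.default_rng(1)
band = np.cumsum(rng.standard_normal(1000))   # smooth, correlated signal

X = np.column_stack([band[1:-1], band[:-2]])  # the two previous samples
coeffs, *_ = np.linalg.lstsq(X, band[2:], rcond=None)
residual = band[2:] - X @ coeffs

print(residual.std(), band.std())  # residual energy is much smaller
```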
Medical Image Compression Using a New Subband Coding Method
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug
1995-01-01
A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.
The FBI compression standard for digitized fingerprint images
Brislawn, C.M.; Bradley, J.N.; Onyshczak, R.J.; Hopper, T.
1996-10-01
The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
Three-dimensional image compression with integer wavelet transforms.
Bilgin, A; Zweig, G; Marcellin, M W
2000-04-10
A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.
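The lossless property comes from lifting-style integer wavelets, which are exactly invertible in integer arithmetic. A one-level sketch using the simple S (integer Haar) transform, not the specific filters of the paper:

```python
def s_transform(x):
    # One level of the integer S transform (Haar via lifting):
    # floor-averaged approximation plus integer difference detail.
    s = [(a + b) >> 1 for a, b in zip(x[0::2], x[1::2])]
    d = [a - b for a, b in zip(x[0::2], x[1::2])]
    return s, d

def inverse_s_transform(s, d):
    x = []
    for si, di in zip(s, d):
        a = si + ((di + 1) >> 1)   # undoes the floored average exactly
        x += [a, a - di]
    return x

pixels = [12, 14, 200, 190, 7, 7, 0, 255]
s, d = s_transform(pixels)
print(inverse_s_transform(s, d) == pixels)  # True: perfectly lossless
```

Because every step is an integer operation with an exact inverse, the same bit stream can be truncated for lossy decoding or read in full for lossless recovery.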
Joint transform correlator using JPEG-compressed reference images
NASA Astrophysics Data System (ADS)
Widjaja, Joewono
2013-06-01
Pattern recognition using a joint transform correlator (JTC) with JPEG-compressed reference images is studied. Human face and fingerprint images with different spatial frequency contents are used as test scenes. Recognition performance is quantitatively measured, taking into account the effects of illumination imbalance and the presence of noise. The feasibility of implementing the proposed JTC is verified by computer simulations and experiments.
Image Compression on a VLSI Neural-Based Vector Quantizer.
ERIC Educational Resources Information Center
Chen, Oscal T.-C.; And Others
1992-01-01
Describes a modified frequency-sensitive self-organization (FSO) algorithm for image data compression and the associated VLSI architecture. Topics discussed include vector quantization; VLSI neural processor architecture; detailed circuit implementation; and a neural network vector quantization prototype chip. Examples of images using the FSO…
Multiview image compression based on LDV scheme
NASA Astrophysics Data System (ADS)
Battin, Benjamin; Niquin, Cédric; Vautrot, Philippe; Debons, Didier; Lucas, Laurent
2011-03-01
In recent years, we have seen several different approaches to multiview compression. First, there is the H.264/MVC extension, which generates quite heavy bitstreams when used on n-view autostereoscopic media and does not allow inter-view reconstruction. Another solution relies on the MVD (multiview + depth) scheme, which keeps p views (n > p > 1) and their associated depth maps. This method is not well suited to multiview compression, since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot be accurately filled. In this paper, we present our method based on the LDV (Layered Depth Video) approach, which keeps one reference view with its associated depth map and the n-1 residual views required to fill occluded areas. We first perform a global per-pixel matching step (providing good consistency between the views) in order to generate one unified-color RGB texture (where a unique color is devoted to all pixels corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed-integer disparity texture. Next, we extract the non-redundant information and store it into two textures (a unified-color one and a disparity one) containing the reference and the n-1 residual views. The RGB texture is compressed with a conventional DCT- or DWT-based algorithm and the disparity texture with a lossless dictionary algorithm. Finally, we discuss the signal deformations generated by our approach.
Compressive spectral integral imaging using a microlens array
NASA Astrophysics Data System (ADS)
Feng, Weiyi; Rueda, Hoover; Fu, Chen; Qian, Chen; Arce, Gonzalo R.
2016-05-01
In this paper, a compressive spectral integral imaging system using a microlens array (MLA) is proposed. The system senses 4D spectro-volumetric information as a compressive 2D measurement image on the detector plane. In the reconstruction process, the 3D spatial information at different depths and the spectral responses of each spatial volume pixel can be obtained simultaneously. In the simulation, sensing of the 3D objects is carried out by optically recording elemental images (EIs) using a scanned pinhole camera. From the elemental images, a spectral data cube with different perspectives and depth information can be reconstructed using the TwIST algorithm in the multi-shot compressive spectral imaging framework. Then, the 3D spatial images with one-dimensional spectral information at arbitrary depths are computed using the computational integral imaging method by inversely mapping the elemental images according to geometrical optics. The simulation results verify the feasibility of the proposed system: the 3D volume images and the spectral information of the volume pixels can be successfully reconstructed at the location of the 3D objects. The proposed system can capture both 3D volumetric images and spectral information at video rate, which is valuable in biomedical imaging and chemical analysis.
Perceptual rate-distortion optimized image compression based on block compressive sensing
NASA Astrophysics Data System (ADS)
Xu, Jin; Qiao, Yuansong; Wen, Quan; Fu, Zhizhong
2016-09-01
The emerging compressive sensing (CS) theory provides a paradigm for image compression. Most current efforts in CS-based image compression have focused on enhancing objective coding efficiency. In order to achieve maximal perceptual quality under the measurement-budget constraint, we propose a perceptual rate-distortion optimized (RDO) CS-based image codec in this paper. By incorporating both the human visual system characteristics and signal sparsity into an RDO model designed for the block compressive sensing framework, the measurement allocation for each block is formulated as an optimization problem, which can be efficiently solved by the Lagrangian relaxation method. After the optimal measurement number is determined, each block is adaptively sampled using an image-dependent measurement matrix. To make the proposed codec applicable to different scenarios, we also propose two solutions for implementing the perceptual RDO measurement allocation: one at the encoder side and the other at the decoder side. The experimental results show that our codec outperforms existing CS-based image codecs in terms of both objective and subjective performance. In particular, our codec can also achieve a low-complexity encoder by adopting the decoder-based solution for the perceptual RDO measurement allocation.
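The Lagrangian relaxation step can be sketched generically: for a multiplier lambda, each block independently picks the measurement count minimizing D_i(m) + lambda*m, and bisection on lambda meets the total budget. The distortion curves below are toy values, not the paper's perceptual model:

```python
def allocate(dist, budget):
    # dist[i][m] = distortion of block i when given m measurements
    # (each row non-increasing and convex). Minimize total distortion
    # subject to sum(m) <= budget via bisection on the Lagrange multiplier.
    def pick(lam):
        return [min(range(len(d)), key=lambda m: d[m] + lam * m) for d in dist]
    lo, hi = 0.0, max(max(d) for d in dist) + 1.0
    for _ in range(60):
        lam = (lo + hi) / 2
        if sum(pick(lam)) > budget:
            lo = lam        # too many measurements: raise the price
        else:
            hi = lam        # feasible: try a lower price
    return pick(hi)

# Two toy blocks: a detailed one (slow distortion decay) and a flat one.
dist = [[100, 60, 35, 20, 12], [40, 10, 3, 1, 1]]
m = allocate(dist, budget=5)
print(m, sum(m))  # the detailed block gets most of the budget
```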
Compressive Estimation and Imaging Based on Autoregressive Models.
Testa, Matteo; Magli, Enrico
2016-11-01
Compressed sensing (CS) is a fast and efficient way to obtain compact signal representations. Oftentimes, one wishes to extract some information from the available compressed signal. Since CS signal recovery is typically expensive from a computational point of view, it is inconvenient to first recover the signal and then extract the information. A much more effective approach consists in estimating the information directly from the signal's linear measurements. In this paper, we propose a novel framework for compressive estimation of autoregressive (AR) process parameters based on ad hoc sensing matrix construction. In more detail, we introduce a compressive least-squares estimator for AR(p) parameters and a specific AR(1) compressive Bayesian estimator. We exploit the proposed techniques to address two important practical problems. The first is compressive covariance estimation for Toeplitz-structured covariance matrices, where we tackle the problem with a novel parametric approach based on the estimated AR parameters. The second is a block-based compressive imaging system, where we introduce an algorithm that adaptively calculates the number of measurements to be acquired for each block from a set of initial measurements, based on its degree of compressibility. We show that the proposed techniques outperform the state-of-the-art methods for these two problems.
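For reference, the classical (non-compressive) least-squares AR(1) estimate that the paper's compressive estimators target looks like this on synthetic data:

```python
import numpy as np

# Classical least-squares AR(1) estimation on a fully observed signal;
# the paper's contribution is obtaining the same parameter directly from
# compressed linear measurements. Synthetic data with assumed rho = 0.9.
rng = np.random.default_rng(2)
rho = 0.9
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = rho * x[t - 1] + rng.standard_normal()

rho_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
print(rho_hat)   # close to 0.9
```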
Improved satellite image compression and reconstruction via genetic algorithms
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael; Lamont, Gary
2008-10-01
A wide variety of signal and image processing applications, including the US Federal Bureau of Investigation's fingerprint compression standard [3] and the JPEG-2000 image compression standard [26], utilize wavelets. This paper describes new research that demonstrates how a genetic algorithm (GA) may be used to evolve transforms that outperform wavelets for satellite image compression and reconstruction under conditions subject to quantization error. The new approach builds upon prior work by simultaneously evolving real-valued coefficients representing matched forward and inverse transform pairs at each of three levels of a multi-resolution analysis (MRA) transform. The training data for this investigation consists of actual satellite photographs of strategic urban areas. Test results show that a dramatic reduction in the error present in reconstructed satellite images may be achieved without sacrificing the compression capabilities of the forward transform. The transforms evolved during this research outperform previous state-of-the-art solutions, which optimized coefficients for the reconstruction transform only. These transforms also outperform wavelets, reducing error by more than 0.76 dB at a quantization level of 64. In addition, transforms trained using representative satellite images do not perform quite as well when subsequently tested against images from other classes (such as fingerprints or portraits). This result suggests that the GA developed for this research is automatically learning to exploit specific attributes common to the class of images represented in the training population.
Feature preserving compression of high resolution SAR images
NASA Astrophysics Data System (ADS)
Yang, Zhigao; Hu, Fuxiang; Sun, Tao; Qin, Qianqing
2006-10-01
Compression techniques are required to transmit the large amounts of high-resolution synthetic aperture radar (SAR) image data over the available channels. Common image compression methods may lose detail and weak information in the original images, especially in smooth areas and at edges with low contrast. This is known as the "smoothing effect", and it makes it difficult to extract and recognize some useful image features such as points and lines. We propose a new SAR image compression algorithm that can reduce the "smoothing effect", based on an adaptive wavelet packet transform and feature-preserving rate allocation. Because images should be modeled as non-stationary information sources, a SAR image is partitioned into overlapped blocks. Each overlapped block is then transformed by an adaptive wavelet packet according to the statistical features of the different blocks. In quantizing and entropy coding the wavelet coefficients, we integrate a feature-preserving technique. Experiments show that image quality at compression ratios up to 16:1 is improved significantly, and that more weak information is preserved.
Fast-adaptive near-lossless image compression
NASA Astrophysics Data System (ADS)
He, Kejing
2016-05-01
The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates the slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Moreover, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
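The Golomb-Rice residual coding stage the abstract describes can be sketched with shifts and masks alone; the zigzag mapping of signed residuals and the fixed code parameter k below are standard choices, not necessarily FAIC's exact rules:

```python
def to_unsigned(r: int) -> int:
    """Zigzag-map a signed prediction residual to a non-negative integer:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return (r << 1) if r >= 0 else ((-r << 1) - 1)

def rice_encode(value: int, k: int) -> str:
    """Golomb-Rice code: unary quotient, then k-bit binary remainder.
    Only shifts and masks are needed, matching the paper's design goals."""
    q = value >> k                       # quotient, sent in unary
    rem = value & ((1 << k) - 1)         # remainder, sent in k bits
    return "1" * q + "0" + (format(rem, f"0{k}b") if k else "")

def rice_decode(bits: str, k: int) -> tuple[int, int]:
    """Return (decoded value, number of bits consumed)."""
    q = bits.index("0")                  # count the leading ones
    rem = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | rem, q + 1 + k
```

Small residuals, which dominate after good prediction, get very short codewords; k would normally be adapted to the local residual statistics.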
Medical image compression with embedded-wavelet transform
NASA Astrophysics Data System (ADS)
Cheng, Po-Yuen; Lin, Freddie S.; Jannson, Tomasz
1997-10-01
The need for effective medical image compression and transmission techniques continues to grow because of the huge volume of radiological images captured each year. The limited bandwidth and efficiency of current networking systems cannot meet this need. In response, Physical Optics Corporation devised an efficient medical image management system to significantly reduce the storage space and transmission bandwidth required for digitized medical images. The major functions of this system are: (1) compressing medical imagery, using a visual-lossless coder, to reduce the storage space required; (2) transmitting image data progressively, to use the transmission bandwidth efficiently; and (3) indexing medical imagery according to image characteristics, to enable automatic content-based retrieval. A novel scalable wavelet-based image coder was developed to implement the system. In addition to its high compression, this approach is scalable in both image size and quality. The system provides dramatic solutions to many medical image handling problems. One application is the efficient storage and fast transmission of medical images over picture archiving and communication systems. In addition to reducing costs, the potential impact on improving the quality and responsiveness of health care delivery in the US is significant.
Effect of Image Linearization on Normalized Compression Distance
NASA Astrophysics Data System (ADS)
Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela
Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
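The NCD itself is compact enough to state directly in code. A minimal sketch using zlib as the standard compressor (any real compressor C can be substituted, and a linearized image would be passed in as a byte string):

```python
import zlib

def C(data: bytes) -> int:
    """Approximate Kolmogorov complexity by compressed length (zlib, level 9)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    Values near 0 indicate similar strings; values near 1, unrelated ones."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 40
b = b"pack my box with five dozen liquor jugs " * 40
```

The compressor finds the redundancy between x and y when they are concatenated, which is why the choice of linearization matters: it determines which pixels end up close together in the string.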
Compression of Ultrasonic NDT Image by Wavelet Based Local Quantization
NASA Astrophysics Data System (ADS)
Cheng, W.; Li, L. Q.; Tsukada, K.; Hanasaki, K.
2004-02-01
Compression of ultrasonic images, which are always corrupted by noise, tends to cause 'over-smoothness' or severe distortion. To solve this problem and meet the needs of real-time inspection and tele-inspection, a compression method based on the Discrete Wavelet Transform (DWT) that can also suppress noise without losing much flaw-relevant information is presented in this work. Exploiting the multi-resolution and interscale correlation properties of the DWT, a simple scheme named DWC classification is first introduced to classify detail wavelet coefficients (DWCs) as dominated by noise, by signal, or by both. Better denoising can then be realized by selectively thresholding the DWCs. In the 'local quantization' stage, different quantization strategies are applied to the DWCs according to their classification and the local image properties, allocating the bit rate more efficiently among the DWCs and thus achieving a higher compression rate. Meanwhile, the decompressed image shows noise suppressed and flaw characteristics preserved.
Compression through decomposition into browse and residual images
NASA Technical Reports Server (NTRS)
Novik, Dmitry A.; Tilton, James C.; Manohar, M.
1993-01-01
Economical archival and retrieval of image data is becoming increasingly important considering the unprecedented data volumes expected from the Earth Observing System (EOS) instruments. For cost-effective browsing of the image data (possibly from a remote site) and retrieval of the original image data from the archive, we suggest an integrated image browse and data archive system employing incremental transmission. We produce our browse image data with the JPEG/DCT lossy compression approach. Image residual data is then obtained by taking the pixel-by-pixel differences between the original data and the browse image data. We then code the residual data with a form of variable length coding called diagonal coding. In our experiments, the JPEG/DCT is used at different quality factors (Q) to generate the browse and residual data. The algorithm has been tested on band 4 of two Thematic Mapper (TM) data sets. The best overall compression ratios (of about 1.7) were obtained when a quality factor of Q=50 was used to produce browse data at a compression ratio of 10 to 11. At this quality factor the browse image data has virtually no visible distortions for the images tested.
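The browse/residual decomposition with lossless reconstruction can be sketched as follows; the coarse quantizer is a stand-in for the JPEG/DCT browse coder at a given quality factor, chosen only so the example is self-contained:

```python
import numpy as np

def coarse_codec(img: np.ndarray) -> np.ndarray:
    """Stand-in for the lossy JPEG/DCT browse coder: quantize pixel
    values to multiples of 8 (hypothetical, for illustration only)."""
    return ((img.astype(np.int16) // 8) * 8).astype(np.uint8)

def make_browse_residual(original: np.ndarray):
    """Split the image into a lossy browse image plus a residual whose
    pixel-by-pixel sum with the browse image restores the original."""
    browse = coarse_codec(original)
    residual = original.astype(np.int16) - browse.astype(np.int16)
    return browse, residual

def reconstruct(browse: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Incremental transmission: add the residual to recover the original."""
    return (browse.astype(np.int16) + residual).astype(np.uint8)
```

Because the residual is small in magnitude, it compresses well with a variable-length code (the paper's diagonal coding), while the browse image alone suffices for quick visual inspection.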
High-speed lossless compression for angiography image sequences
NASA Astrophysics Data System (ADS)
Kennedy, Jonathon M.; Simms, Michael; Kearney, Emma; Dowling, Anita; Fagan, Andrew; O'Hare, Neil J.
2001-05-01
High speed processing of large amounts of data is a requirement for many diagnostic quality medical imaging applications. A demanding example is the acquisition, storage and display of image sequences in angiography. The functional performance requirements for handling angiography data were identified. A new lossless image compression algorithm was developed, implemented in C++ for the Intel Pentium/MS-Windows environment and optimized for speed of operation. Speeds of up to 6M pixels per second for compression and 12M pixels per second for decompression were measured. This represents an improvement of up to 400% over the next best high-performance algorithm (LOCO-I) without significant reduction in compression ratio. Performance tests were carried out at St. James's Hospital using actual angiography data. Results were compared with the lossless JPEG standard and other leading methods such as JPEG-LS (LOCO-I) and the lossless wavelet approach proposed for JPEG 2000. Our new algorithm represents a significant improvement in the performance of lossless image compression technology without using specialized hardware. It has been applied successfully to image sequence decompression at video rate for angiography, one of the most challenging application areas in medical imaging.
Feature-preserving image/video compression
NASA Astrophysics Data System (ADS)
Al-Jawad, Naseer; Jassim, Sabah
2005-10-01
Advances in digital image processing, the advent of multimedia computing, and the availability of affordable high-quality digital cameras have led to increased demand for digital images and videos. There has been fast growth in the number of information systems that benefit from digital imaging techniques, and these present many tough challenges. In this paper we are concerned with applications for which image quality is a critical requirement. The fields of medicine, remote sensing, real-time surveillance, and image-based automatic fingerprint/face identification systems are but a few examples of such applications. Medical care is increasingly dependent on imaging for diagnostics, surgery, and education. It is estimated that medium-size hospitals in the US generate terabytes of MRI and X-ray images each year, to be stored in very large databases which are frequently accessed and searched for research and training. On the other hand, the rise of international terrorism and the growth of identity theft have added urgency to the development of new efficient biometric-based person verification/authentication systems. In future, such systems can provide an additional layer of security for online transactions or for real-time surveillance.
Image and Video Compression with VLSI Neural Networks
NASA Technical Reports Server (NTRS)
Fang, W.; Sheu, B.
1993-01-01
An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. A VLSI neuroprocessor for high-speed, high-ratio image compression based upon a self-organizing network is compared with the conventional algorithm for vector quantization. The proposed method is quite efficient and can achieve near-optimal results.
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.
Pulse-compression ghost imaging lidar via coherent detection
NASA Astrophysics Data System (ADS)
Deng, Chenjin; Gong, Wenlin; Han, Shensheng
2016-11-01
Ghost imaging (GI) lidar, as a novel remote sensing technique, has been receiving increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, which are backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain the target's spatial intensity distribution, range and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar can easily obtain high single-pulse energy with the use of a long pulse without decreasing the range resolution, and the mechanism of coherent detection can eliminate the influence of stray light, which can dramatically improve the detection sensitivity and detection range.
Integer wavelet transform for embedded lossy to lossless image compression.
Reichel, J; Menegaz, G; Nadenau, M J; Kunt, M
2001-01-01
The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One of the possible implementations of the DWT is the lifting scheme (LS). Because perfect reconstruction is guaranteed by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. This is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise. The noise is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input. It accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
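The integer-to-integer property comes from the lifting structure: the rounding inside each lifting step is undone exactly by repeating the same step with the opposite sign. A minimal one-level sketch using the 5/3 (LeGall) integer wavelet on an even-length 1-D signal (boundary handling here is a simple mirror, one of several valid choices):

```python
def fwd_53(x):
    """One level of the integer 5/3 (LeGall) wavelet via lifting.
    The floor shifts make each step integer-valued yet exactly invertible."""
    s, d = x[0::2], x[1::2]                 # even/odd split (approx/detail)
    for i in range(len(d)):                 # predict: d becomes the high band
        right = s[i + 1] if i + 1 < len(s) else s[i]   # mirror at boundary
        d[i] -= (s[i] + right) >> 1
    for i in range(len(s)):                 # update: s becomes the low band
        left = d[i - 1] if i > 0 else d[0]
        cur = d[i] if i < len(d) else d[-1]
        s[i] += (left + cur + 2) >> 2
    return s, d

def inv_53(s, d):
    """Exact inverse: same steps, opposite signs, reverse order."""
    s, d = list(s), list(d)
    for i in range(len(s)):                 # undo update
        left = d[i - 1] if i > 0 else d[0]
        cur = d[i] if i < len(d) else d[-1]
        s[i] -= (left + cur + 2) >> 2
    for i in range(len(d)):                 # undo predict
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] += (s[i] + right) >> 1
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x
```

The rounding (`>> 1`, `>> 2`) is exactly the "additive noise" the paper models when comparing IWT against the real-valued DWT.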
NASA Astrophysics Data System (ADS)
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2016-05-01
In this paper, a novel approach based on compressive sensing and chaos is proposed for simultaneously compressing, fusing and encrypting multi-modal images. The sparsely represented source images are firstly measured with the key-controlled pseudo-random measurement matrix constructed using logistic map, which reduces the data to be processed and realizes the initial encryption. Then the obtained measurements are fused by the proposed adaptive weighted fusion rule. The fused measurement is further encrypted into the ciphertext through an iterative procedure including improved random pixel exchanging technique and fractional Fourier transform. The fused image can be reconstructed by decrypting the ciphertext and using a recovery algorithm. The proposed algorithm not only reduces data volume but also simplifies keys, which improves the efficiency of transmitting data and distributing keys. Numerical results demonstrate the feasibility and security of the proposed scheme.
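The key-controlled measurement matrix construction can be sketched as follows; the burn-in length, zero-mean centering, and column normalization are illustrative choices, not necessarily the authors' exact recipe:

```python
import numpy as np

def logistic_matrix(m, n, x0=0.37, mu=3.99, burn=1000):
    """Pseudo-random m-by-n measurement matrix driven by a logistic map;
    the initial value x0 (with parameter mu) plays the role of the secret key."""
    x = x0
    for _ in range(burn):                   # discard the transient
        x = mu * x * (1.0 - x)
    vals = np.empty(m * n)
    for i in range(m * n):
        x = mu * x * (1.0 - x)
        vals[i] = x
    phi = vals.reshape(m, n) - 0.5          # center entries around zero
    return phi / np.linalg.norm(phi, axis=0, keepdims=True)  # unit-norm columns
```

Because the logistic map is chaotic for mu near 4, a tiny change in the key produces a completely different matrix, which is what ties the sensing step to the encryption.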
Application of joint orthogonal bases in compressive sensing ghost image
NASA Astrophysics Data System (ADS)
Fan, Xiang; Chen, Yi; Cheng, Zheng-dong; Liang, Zheng-yu; Zhu, Bin
2016-11-01
Sparse decomposition is one of the core issues of compressive sensing ghost imaging. At this stage, traditional methods such as the discrete Fourier transform and the discrete cosine transform still suffer from poor sparsity and low reconstruction accuracy. To solve these problems, a joint orthogonal bases transform is proposed to optimize ghost imaging. First, the principle of compressive sensing ghost imaging is introduced, and it is pointed out that sparsity determines the minimum sample data required for imaging. Then, the development and principle of joint orthogonal bases are analyzed in detail, showing that they can use fewer nonzero coefficients to reach the same identification effect as other methods; the joint orthogonal bases transform is thus able to provide the sparsest representation. Finally, an experimental setup is built to verify the simulation results. Experimental results indicate that the PSNR of the joint orthogonal bases method is much higher than that of traditional methods using the same sample data in compressive sensing ghost imaging. Therefore, the joint orthogonal bases transform can achieve better imaging quality with less sample data, satisfying the ghost imaging system requirements of convenience and speed.
Overview of parallel processing approaches to image and video compression
NASA Astrophysics Data System (ADS)
Shen, Ke; Cook, Gregory W.; Jamieson, Leah H.; Delp, Edward J., III
1994-05-01
In this paper we present an overview of techniques used to implement various image and video compression algorithms using parallel processing. Approaches used can largely be divided into four areas. The first is the use of special purpose architectures designed specifically for image and video compression. An example of this is the use of an array of DSP chips to implement a version of MPEG1. The second approach is the use of VLSI techniques. These include various chip sets for JPEG and MPEG1. The third approach is algorithm driven, in which the structure of the compression algorithm describes the architecture, e.g. pyramid algorithms. The fourth approach is the implementation of algorithms on high performance parallel computers. Examples of this approach are the use of a massively parallel computer such as the MasPar MP-1 or the use of a coarse-grained machine such as the Intel Touchstone Delta.
Improved zerotree coding algorithm for wavelet image compression
NASA Astrophysics Data System (ADS)
Chen, Jun; Li, Yunsong; Wu, Chengke
2000-12-01
A listless minimum zerotree coding algorithm based on the fast lifting wavelet transform, with a lower memory requirement and higher compression performance, is presented in this paper. Most state-of-the-art image compression techniques based on wavelet coefficients, such as EZW and SPIHT, exploit the dependency between the subbands in a wavelet transformed image. We propose a minimum zerotree of wavelet coefficients which exploits the dependency not only between the coarser and the finer subbands but also within the lowest frequency subband. A new listless significance-map coding algorithm based on the minimum zerotree, using new flag maps and a new scanning order different from the LZC of Wen-Kuo Lin et al., is also proposed. A comparison reveals that the PSNR results of LMZC are higher than those of LZC, and that LMZC outperforms SPIHT in terms of hardware implementation.
Compressive Hyperspectral Imaging and Anomaly Detection
2013-03-01
simple, yet effective method of using the spatial information to increase the accuracy of target detection. The idea is to apply TV denoising [4] to the...a zero value, and isolated false alarm pixels are usually eliminated by the TV denoising algorithm. 2 2.1.1 TV Denoising Here we briefly describe the...total variation denoising model[4] we use in the above. Given an image I ∈ R2, we solve the following L1 minimization problem to denoise the image
Adaptive Compression of Multisensor Image Data
1992-03-01
upsample and reconstruct the subimages which are then added together to form the reconstructed image. In order to prevent distortions resulting from...smooth surfaces such as metallic or painted objects have predominantly path A reflections and that rougher surfaces such as soils and vegetation support
Fractal image compression: A resolution independent representation for imagery
NASA Technical Reports Server (NTRS)
Sloan, Alan D.
1993-01-01
A deterministic fractal is an image which has low information content and no inherent scale. Because of their low information content, deterministic fractals can be described with small data sets. They can be displayed at high resolution since they are not bound by an inherent scale. A remarkable consequence follows: fractal images can be encoded at very high compression ratios. A fern, for example, can be encoded in less than 50 bytes and yet displayed at higher and higher resolutions with increasing levels of detail appearing. The Fractal Transform was discovered in 1988 by Michael F. Barnsley. It is the basis for a new image compression scheme which was initially developed by the author and Michael Barnsley at Iterated Systems. The Fractal Transform effectively solves the problem of finding a fractal which approximates a digital 'real world image'.
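The "under 50 bytes" claim is easy to demonstrate for the classic fern: Barnsley's fern is fully specified by the 28 numbers of four affine maps, and the chaos-game iteration below renders its attractor at any desired resolution:

```python
import random

# Barnsley's fern: four affine maps x' = a*x + b*y + e, y' = c*x + d*y + f,
# each chosen with probability p. These 28 numbers encode the whole image.
FERN = [
    (0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    (0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def iterate_fern(n, seed=0):
    """Chaos game: the random orbit fills in the fern attractor, so the
    image can be rendered at any resolution from the stored coefficients."""
    rng = random.Random(seed)
    x = y = 0.0
    pts = []
    for _ in range(n):
        r = rng.random()
        for a, b, c, d, e, f, p in FERN:
            r -= p
            if r < 0:                    # this map chosen with probability p
                break
        x, y = a * x + b * y + e, c * x + d * y + f
        pts.append((x, y))
    return pts
```

Fractal image compression inverts this idea: given a real image, find a small set of contractive maps whose attractor approximates it.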
Wavelet-based pavement image compression and noise reduction
NASA Astrophysics Data System (ADS)
Zhou, Jian; Huang, Peisen S.; Chiang, Fu-Pen
2005-08-01
For any automated distress inspection system, typically a huge number of pavement images are collected. Use of an appropriate image compression algorithm can save disk space, reduce the saving time, increase the inspection distance, and increase the processing speed. In this research, a modified EZW (Embedded Zero-tree Wavelet) coding method, which is an improved version of the widely used EZW coding method, is proposed. This method, unlike the two-pass approach used in the original EZW method, uses only one pass to encode both the coordinates and magnitudes of wavelet coefficients. An adaptive arithmetic encoding method is also implemented to encode four symbols assigned by the modified EZW into binary bits. By applying a thresholding technique to terminate the coding process, the modified EZW coding method can compress the image and reduce noise simultaneously. The new method is much simpler and faster. Experimental results also show that the compression ratio was increased one and one-half times compared to the EZW coding method. The compressed and de-noised data can be used to reconstruct wavelet coefficients for off-line pavement image processing such as distress classification and quantification.
Knowledge-based image bandwidth compression and enhancement
NASA Astrophysics Data System (ADS)
Saghri, John A.; Tescher, Andrew G.
1987-01-01
Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.
Optimized satellite image compression and reconstruction via evolution strategies
NASA Astrophysics Data System (ADS)
Babb, Brendan; Moore, Frank; Peterson, Michael
2009-05-01
This paper describes the automatic discovery, via an Evolution Strategy with Covariance Matrix Adaptation (CMA-ES), of vectors of real-valued coefficients representing matched forward and inverse transforms that outperform the 9/7 Cohen-Daubechies-Feauveau (CDF) discrete wavelet transform (DWT) for satellite image compression and reconstruction under conditions subject to quantization error. The best transform evolved during this study reduces the mean squared error (MSE) present in reconstructed satellite images by an average of 33.78% (1.79 dB), while maintaining the average information entropy (IE) of compressed images at 99.57% in comparison to the wavelet. In addition, this evolved transform achieves 49.88% (3.00 dB) average MSE reduction when tested on 80 images from the FBI fingerprint test set, and 42.35% (2.39 dB) average MSE reduction when tested on a set of 18 digital photographs, while achieving average IE of 104.36% and 100.08%, respectively. These results indicate that our evolved transform greatly improves the quality of reconstructed images without substantial loss of compression capability over a broad range of image classes.
AMA Statistical Information Based Analysis of a Compressive Imaging System
NASA Astrophysics Data System (ADS)
Hope, D.; Prasad, S.
Recent advances in optics and instrumentation have dramatically increased the amount of data, both spatial and spectral, that can be obtained about a target scene. The volume of the acquired data can and, in fact, often does far exceed the amount of intrinsic information present in the scene. In such cases, the large volume of data alone can impede the analysis and extraction of relevant information about the scene. One approach to overcoming this impedance mismatch between the volume of data and the intrinsic information the data are supposed to convey is compressive sensing. Compressive sensing exploits the fact that most signals of interest, such as image scenes, possess natural correlations in their physical structure. These correlations, which can occur spatially as well as spectrally, can suggest a more natural sparse basis for compressing and representing the scene than standard pixels or voxels. A compressive sensing system attempts to acquire and encode the scene in this sparse basis, while preserving all relevant information in the scene. One criterion for assessing the content, acquisition, and processing of information in the image scene is Shannon information. This metric describes fundamental limits on encoding and reliably transmitting information about a source, such as an image scene. In this framework, successful encoding of the image requires an optimal choice of a sparse basis, while losses of information during transmission occur due to a finite system response and measurement noise. An information source can be represented by a certain class of image scenes, e.g., those that have a common morphology. The ability to associate the recorded image with the correct member of the class that produced the image depends on the amount of Shannon information in the acquired data. In this manner, one can analyze the performance of a compressive imaging system for a specific class or ensemble of image scenes. We present such an information-based analysis.
Aldossari, M; Alfalou, A; Brosseau, C
2014-09-22
This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show that not only can the control of the spectral plane enhance the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then, we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is done in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.
Distributed imaging using an array of compressive cameras
NASA Astrophysics Data System (ADS)
Ke, Jun; Shankar, Premchandra; Neifeld, Mark A.
2009-01-01
We describe a distributed computational imaging system that employs an array of feature specific sensors, also known as compressive imagers, to directly measure the linear projections of an object. Two different schemes for implementing these non-imaging sensors are discussed. We consider the task of object reconstruction and quantify the fidelity of reconstruction using the root mean squared error (RMSE) metric. We also study the lifetime of such a distributed sensor network. The sources of energy consumption in a distributed feature specific imaging (DFSI) system are discussed and compared with those in a distributed conventional imaging (DCI) system. A DFSI system consisting of 20 imagers collecting DCT, Hadamard, or PCA features has a lifetime of 4.8× that of the DCI system when the noise level is 20% and the reconstruction RMSE requirement is 6%. To validate the simulation results we emulate a distributed computational imaging system using an experimental setup consisting of an array of conventional cameras.
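The feature-specific measurement and reconstruction steps can be sketched as follows, using low-frequency DCT rows as the feature basis (one of the feature sets named in the abstract); the least-squares reconstruction is a minimal stand-in for whatever estimator a real DFSI system would use:

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II matrix; its leading rows are low-frequency features."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)
    return M

def measure(obj, P):
    """A feature-specific imager records projections y = P @ x, not pixels."""
    return P @ obj

def reconstruct(y, P):
    """Minimum-norm least-squares estimate from the feature measurements."""
    return np.linalg.pinv(P) @ y

def rmse(a, b):
    """Root mean squared error, the fidelity metric used in the paper."""
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

Far fewer than n measurements suffice when the object's energy is concentrated in the chosen feature subspace, which is the energy-saving premise of the distributed design.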
Neural networks for data compression and invariant image recognition
NASA Technical Reports Server (NTRS)
Gardner, Sheldon
1989-01-01
An approach to invariant image recognition (I2R), based upon a model of biological vision in the mammalian visual system (MVS), is described. The complete I2R model incorporates several biologically inspired features: exponential mapping of retinal images, Gabor spatial filtering, and a neural network associative memory. In the I2R model, exponentially mapped retinal images are filtered by a hierarchical set of Gabor spatial filters (GSF) which provide compression of the information contained within a pixel-based image. A neural network associative memory (AM) is used to process the GSF-coded images. We describe a 1-D shape function method for coding scale- and rotationally-invariant shape information. This method reduces image shape information to a periodic waveform suitable for coding as an input vector to a neural network AM. The shape function method is suitable for near-term applications on conventional computing architectures equipped with VLSI FFT chips to provide a rapid image search capability.
Wavelet-based image compression using fixed residual value
NASA Astrophysics Data System (ADS)
Muzaffar, Tanzeem; Choi, Tae-Sun
2000-12-01
Wavelet-based compression is becoming popular due to its promising compaction properties at low bit rates. The zerotree wavelet image coding scheme efficiently exploits the multi-level redundancy present in transformed data to minimize coding bits. In this paper, a new technique is proposed to achieve high compression by adding new zerotree and significant symbols to the original EZW coder. In contrast to the four symbols present in the basic EZW scheme, the modified algorithm uses eight symbols to generate fewer bits for the same data. The subordinate pass of EZW is eliminated and replaced with fixed-residual-value transmission for easy implementation. This modification both simplifies the coding technique and speeds up the process, while retaining the property of embeddedness.
Videos and images from 25 years of teaching compressible flow
NASA Astrophysics Data System (ADS)
Settles, Gary
2008-11-01
Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.
JPIC-Rad-Hard JPEG2000 Image Compression ASIC
NASA Astrophysics Data System (ADS)
Zervas, Nikos; Ginosar, Ran; Broyde, Amitai; Alon, Dov
2010-08-01
JPIC is a rad-hard high-performance image compression ASIC for the aerospace market. JPIC implements tier 1 of the ISO/IEC 15444-1 JPEG2000 (a.k.a. J2K) image compression standard [1] as well as the post-compression rate-distortion algorithm, which is part of tier 2 coding. A modular architecture enables employing a single JPIC or multiple coordinated JPIC units. JPIC is designed to support a wide range of imager data sources from optical, panchromatic, and multi-spectral space and airborne sensors. JPIC has been developed as a collaboration of Alma Technologies S.A. (Greece), MBT/IAI Ltd (Israel) and Ramon Chips Ltd (Israel). MBT/IAI defined the system architecture requirements and interfaces; the JPEG2K-E IP core from Alma implements the compression algorithm [2]. Ramon Chips adds SERDES and host interfaces and integrates the ASIC. MBT has demonstrated the full chip on an FPGA board and created system boards employing multiple JPIC units. The ASIC implementation, based on Ramon Chips' 180nm CMOS RadSafe[TM] RH cell library, enables superior radiation hardness.
NASA Astrophysics Data System (ADS)
Moré, G.; Pesquer, L.; Blanes, I.; Serra-Sagristà, J.; Pons, X.
2012-12-01
World-coverage Digital Elevation Models (DEM) have progressively increased their spatial resolution (e.g., ETOPO, SRTM, or Aster GDEM) and, consequently, their storage requirements. On the other hand, lossy data compression facilitates accessing, sharing and transmitting large spatial datasets in environments with limited storage. However, since lossy compression modifies the original information, rigorous studies are needed to understand its effects and consequences. The present work analyzes the influence of DEM quality (as modified by lossy compression) on the radiometric correction of remote sensing imagery, and the eventual propagation of the uncertainty into the resulting land cover classification. Radiometric correction is usually composed of two parts: atmospheric correction and topographic correction. For topographic correction, the DEM provides the altimetry information that allows modeling the incident radiation on the terrain surface (cast shadows, self-shadows, etc.). To quantify the effects of DEM lossy compression on the radiometric correction, we use radiometrically corrected images for classification purposes, and compare the accuracy of two standard coding techniques for a wide range of compression ratios. The DEM has been obtained by resampling the DEM v.2 of Catalonia (ICC), originally at 15 m resolution, to the Landsat TM resolution. The Aster DEM has been used to fill the gaps beyond the administrative limits of Catalonia. The DEM has been lossy compressed with two coding standards at compression ratios 5:1, 10:1, 20:1, 100:1 and 200:1. The employed coding standards have been JPEG2000 and CCSDS-IDC; the former is an international ISO/ITU-T standard for almost any type of images, while the latter is a recommendation of the CCSDS consortium for mono-component remote sensing images. Both techniques are wavelet-based followed by an entropy-coding stage. Also, for large compression ratios, both techniques need a post processing for correctly
Mechanical compression for contrasting OCT images of biotissues
NASA Astrophysics Data System (ADS)
Kirillin, Mikhail Y.; Argba, Pavel D.; Kamensky, Vladislav A.
2011-06-01
Contrasting of biotissue layers in OCT images after application of mechanical compression is discussed. The study is performed ex vivo on samples of human rectum, and in vivo on the skin of human volunteers. We show that mechanical compression provides contrasting of biotissue layer boundaries due to the different mechanical properties of the layers. We show that increasing the pressure from 0 up to 0.45 N/mm2 causes a contrast increase from 1 to 10 dB in OCT imaging of human rectum ex vivo. Results of the ex vivo studies are in good agreement with Monte Carlo simulations. Application of a pressure of 0.45 N/mm2 increases the contrast of the epidermis-dermis junction in OCT images of human skin in vivo by about 10 dB.
Implementation of aeronautic image compression technology on DSP
NASA Astrophysics Data System (ADS)
Wang, Yujing; Gao, Xueqiang; Wang, Mei
2007-11-01
According to the design characteristics and demands of an aeronautic image compression system, a lifting-scheme wavelet and the SPIHT (Set Partitioning in Hierarchical Trees) algorithm were selected as the core of the software implementation, which is introduced in detail. In order to improve execution efficiency, border processing was reasonably simplified and the SPIHT algorithm was also partly modified. The results showed that the selected scheme has a 0.4 dB improvement in PSNR (peak signal-to-noise ratio) compared with Shapiro's classical scheme. To improve the operating speed, the hardware system was then designed based on a DSP and many optimization measures were applied successfully. Practical tests showed that the system can meet the real-time demand with good reconstructed-image quality, and it has been used in a practical aeronautic image compression system.
A Progressive Image Compression Method Based on EZW Algorithm
NASA Astrophysics Data System (ADS)
Du, Ke; Lu, Jianming; Yahagi, Takashi
A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.
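Concept (3), successive-approximation quantization, can be illustrated with a minimal sketch: coefficients are tested against a halving threshold, and each dominant pass emits a significance symbol for every coefficient not yet found significant. The zerotree symbols and the subordinate (refinement) pass of the real EZW coder are omitted here for brevity, so this is only the skeleton of the idea.

```python
import numpy as np

def ezw_passes(coeffs):
    """Emit (threshold, symbol string) per dominant pass: 'P' newly significant
    positive, 'N' newly significant negative, 'Z' not yet significant."""
    coeffs = np.asarray(coeffs, dtype=float).ravel()
    # initial threshold: largest power of two not exceeding max |coefficient|
    T = 2 ** int(np.floor(np.log2(np.max(np.abs(coeffs)))))
    significant = np.zeros(coeffs.size, dtype=bool)
    passes = []
    while T >= 1:
        symbols = []
        for c, s in zip(coeffs, significant):
            if s:
                continue  # already coded in an earlier pass
            symbols.append('P' if c >= T else ('N' if c <= -T else 'Z'))
        significant |= np.abs(coeffs) >= T
        passes.append((T, ''.join(symbols)))
        T //= 2
    return passes
```

For the toy coefficients [7, -3, 1, 0], the first pass (T=4) codes only the 7; later passes refine the threshold and pick up the -3 and then the 1, which is exactly the embedded, progressive property the abstract refers to.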
Spatial compression algorithm for the analysis of very large multivariate images
Keenan, Michael R.
2008-07-15
A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
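The core idea above, mapping an image through a wavelet transform so analysis can proceed on a small number of significant coefficients, can be sketched with a one-level 2-D Haar transform. The patented method is considerably more elaborate (multilevel transforms, block processing, combination with spectral compression), so this is only an illustration of the principle, with an arbitrary threshold.

```python
import numpy as np

def haar2(a):
    # one-level separable Haar analysis: pair rows (axis 0), then columns (axis 1)
    h, g = (a[0::2] + a[1::2]) / 2.0, (a[0::2] - a[1::2]) / 2.0
    a = np.vstack([h, g])
    h, g = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    return np.hstack([h, g])

def ihaar2(c):
    # exact inverse: undo the column step first, then the row step
    n = c.shape[1] // 2
    h, g = c[:, :n], c[:, n:]
    a = np.empty_like(c)
    a[:, 0::2], a[:, 1::2] = h + g, h - g
    m = a.shape[0] // 2
    h, g = a[:m], a[m:]
    out = np.empty_like(a)
    out[0::2], out[1::2] = h + g, h - g
    return out

rng = np.random.default_rng(3)
img = rng.random((8, 8))
c = haar2(img)
c[np.abs(c) < 0.05] = 0.0           # discard insignificant coefficients
approx = ihaar2(c)                  # analysis would run on the few kept values
```

Without thresholding the transform is perfectly invertible; with it, the reconstruction error is bounded by the (small) discarded coefficients, which is what makes analysis on the compressed representation viable.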
A novel image fusion approach based on compressive sensing
NASA Astrophysics Data System (ADS)
Yin, Hongpeng; Liu, Zhaodong; Fang, Bin; Li, Yanxia
2015-11-01
Image fusion can integrate complementary and relevant information from source images captured by multiple sensors into a unitary synthetic image. The compressive sensing-based (CS) fusion approach can greatly reduce the processing time and guarantee the quality of the fused image by integrating fewer non-zero coefficients. However, there are two main limitations in the conventional CS-based fusion approach. Firstly, directly fusing sensing measurements may bring greater uncertainty, with high reconstruction error. Secondly, using a single fusion rule may result in blocking artifacts and poor fidelity. In this paper, a novel image fusion approach based on CS is proposed to solve these problems. The non-subsampled contourlet transform (NSCT) method is utilized to decompose the source images. The dual-layer Pulse Coupled Neural Network (PCNN) model is used to integrate the low-pass subbands, while an edge-retention based fusion rule is proposed to fuse the high-pass subbands. The sparse coefficients are fused before being measured by a Gaussian matrix. The fused image is accurately reconstructed by the Compressive Sampling Matched Pursuit algorithm (CoSaMP). Experimental results demonstrate that the fused image contains abundant detailed content and preserves the saliency structure. They also indicate that our proposed method achieves better visual quality than current state-of-the-art methods.
Split Bregman's optimization method for image construction in compressive sensing
NASA Astrophysics Data System (ADS)
Skinner, D.; Foo, S.; Meyer-Bäse, A.
2014-05-01
The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to use an iterative method to reconstruct the original image through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the l1 and l2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of Split Bregman methods on sonar images.
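The l1 subproblem that appears in each Split Bregman iteration has a closed-form solution, the shrinkage (soft-threshold) operator. The sketch below shows that operator and uses it inside a plain iterative-shrinkage loop on a synthetic sparse-recovery problem; the problem sizes, the ISTA-style outer loop, and the regularization weight are illustrative simplifications of the paper's method, not a reproduction of it.

```python
import numpy as np

def shrink(v, lam):
    # soft-threshold: closed-form minimizer of lam*|x| + 0.5*(x - v)^2,
    # the l1 subproblem solved in every Split Bregman iteration
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(4)
m, n = 30, 60                            # underdetermined system (assumed sizes)
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17, 42, 55]] = [1.5, -2.0, 1.0, 0.8]   # sparse signal
y = A @ x_true                           # compressive measurements

# Iterative shrinkage: gradient step on ||Ax - y||^2, then soft-threshold.
t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm of A
x = np.zeros(n)
for _ in range(500):
    x = shrink(x + t * A.T @ (y - A @ x), t * 0.1)
```

Split Bregman alternates this same shrinkage with a quadratic solve and a Bregman variable update, which is what lets it "decouple" the l1 and l2 energies.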
Single-pixel optical imaging with compressed reference intensity patterns
NASA Astrophysics Data System (ADS)
Chen, Wen; Chen, Xudong
2015-03-01
Ghost imaging with a single-pixel bucket detector has attracted increasing attention due to its marked physical characteristics. However, in ghost imaging, a large number of reference intensity patterns are usually required for object reconstruction; hence many applications based on ghost imaging (such as tomography and optical security) may be tedious, since heavy storage or transmission is required. In this paper, we report that compressed reference intensity patterns can be used for object recovery in computational ghost imaging (with a single-pixel bucket detector), and object verification can be further conducted. Only a small portion (such as 2.0% of the pixels) of each reference intensity pattern is used for object reconstruction, and the recovered object is verified by using a nonlinear correlation algorithm. Since statistical characteristics and the speckle-averaging property are inherent in ghost imaging, sidelobes or multiple peaks can be effectively suppressed or eliminated in the nonlinear correlation outputs when random pixel positions are selected from each reference intensity pattern. Since pixel positions can be randomly selected from each 2D reference intensity pattern (such as with a total of 20000 measurements), a large key space and high flexibility can be generated when the proposed method is applied to authentication-based cryptography. When compressive sensing is used to recover the object with a small number of measurements, the proposed strategy could still be feasible by further compressing the recorded data (i.e., reference intensity patterns) followed by object verification. It is expected that the proposed method not only compresses the recorded data and facilitates storage or transmission, but also builds up a novel capability (i.e., classical or quantum information verification) for ghost imaging.
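Computational ghost imaging reconstructs the object by correlating the bucket-detector values with the reference intensity patterns; restricting that correlation to a small random subset of pixels mimics the compressed reference patterns used here for verification. The object, the number of measurements, and the 2% pixel fraction below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, M = 16, 4000                         # 16x16 object, 4000 measurements (assumed)
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                   # hypothetical binary object

patterns = rng.random((M, n, n))        # reference intensity patterns
bucket = (patterns * obj).sum(axis=(1, 2))   # single-pixel bucket values

# Conventional correlation reconstruction: <(B - <B>) * I(x, y)>.
recon = np.tensordot(bucket - bucket.mean(), patterns, axes=1) / M

# "Compressed" variant: keep only ~2% of the pixel positions of each pattern,
# so verification operates on this small subset rather than the full image.
mask = rng.random((n, n)) < 0.02
recon_c = recon * mask
```

Even from the full correlation, the object emerges only statistically, which is why the paper's nonlinear correlation step is used to verify rather than visually recover the object from the 2% subset.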
Digital image compression for a 2f multiplexing optical setup
NASA Astrophysics Data System (ADS)
Vargas, J.; Amaya, D.; Rueda, E.
2016-07-01
In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.
Image compression using address-vector quantization
NASA Astrophysics Data System (ADS)
Nasrabadi, Nasser M.; Feng, Yushu
1990-12-01
A novel vector quantization scheme, the address-vector quantizer (A-VQ), is proposed which exploits interblock correlation by encoding a group of blocks together using an address-codebook (AC). The AC is a set of address-codevectors (ACVs), each representing a combination of addresses or indices. Each element of an ACV is the address of an entry in the LBG-codebook, representing a vector-quantized block. The AC consists of an active (addressable) region and an inactive (nonaddressable) region. During encoding, the ACVs in the AC are reordered adaptively to bring the most probable ACVs into the active region. When encoding an ACV, the active region is checked, and if such an address combination exists, its index is transmitted to the receiver. Otherwise, the address of each block is transmitted individually. The SNR of the images encoded by the A-VQ method is the same as that of a memoryless vector quantizer, but the bit rate is reduced by a factor of approximately two.
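The memoryless VQ encoder that A-VQ builds on can be sketched as follows: each 2x2 block is mapped to the index (address) of its nearest codevector, and A-VQ would then code a group of such addresses jointly through the address-codebook instead of transmitting them individually. The codebook here is random and purely illustrative; a real one would come from LBG training.

```python
import numpy as np

rng = np.random.default_rng(6)
codebook = rng.random((8, 4))           # 8 codevectors for 2x2 blocks (illustrative)

def vq_encode(img):
    """Memoryless VQ: return one LBG-codebook address per 2x2 block."""
    h, w = img.shape
    blocks = (img.reshape(h // 2, 2, w // 2, 2)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, 4))
    # squared distance from every block to every codevector
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

img = rng.random((4, 4))
addresses = vq_encode(img)              # a group of these addresses is what an
                                        # A-VQ address-codevector would represent
```

Because neighboring blocks are correlated, certain address combinations recur often; transmitting a single index for a frequent combination is what halves the bit rate relative to sending each address separately.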
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Zhang, Aidi; Zheng, Fen; Gong, Lihua
2014-10-01
The existing ways to encrypt images based on compressive sensing usually treat the whole measurement matrix as the key, which renders the key too large to distribute, memorize or store. To solve this problem, a new image compression-encryption hybrid algorithm is proposed to realize compression and encryption simultaneously, with a key that is easily distributed, stored or memorized. The input image is divided into four blocks for compression and encryption; the pixels of adjacent blocks are then exchanged randomly by random matrices. The measurement matrices in compressive sensing are constructed by utilizing circulant matrices and controlling their original row vectors with a logistic map. The random matrices used in the random pixel exchange are bound to the measurement matrices. Simulation results verify the effectiveness and security of the proposed algorithm and its acceptable compression performance.
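The key-size reduction rests on generating the measurement matrix deterministically: a partial circulant matrix whose first row comes from the logistic map, so only the map's initial value and parameter need to be shared as the key. The sketch below shows that construction; the specific parameter values and the mapping of the chaotic sequence to [-1, 1] are illustrative assumptions.

```python
import numpy as np

def logistic_sequence(x0, mu, n):
    # logistic map x <- mu*x*(1-x); the pair (x0, mu) acts as the secret key
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return seq

def circulant_measurement(x0, mu, m, n):
    """Build an m x n partial circulant sensing matrix keyed by (x0, mu)."""
    row = 2 * logistic_sequence(x0, mu, n) - 1     # map chaotic values to [-1, 1]
    C = np.empty((n, n))
    for i in range(n):
        C[i] = np.roll(row, i)                     # each row is a cyclic shift
    return C[:m]                                   # keep m rows for compression

Phi = circulant_measurement(0.37, 3.99, 8, 32)     # illustrative key and sizes
```

Both sides can regenerate `Phi` from the two scalars, which is exactly why the key no longer needs to be the full matrix.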
Improving 3D Wavelet-Based Compression of Hyperspectral Images
NASA Technical Reports Server (NTRS)
Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh
2009-01-01
Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
Region-based compression of remote sensing stereo image pairs
NASA Astrophysics Data System (ADS)
Yan, Ruomei; Li, Yunsong; Wu, Chengke; Wang, Keyan; Li, Shizhong
2009-08-01
According to the data characteristics of remote sensing stereo image pairs, a novel compression algorithm based on the combination of feature-based image matching (FBM), area-based image matching (ABM), and region-based disparity estimation is proposed. First, the Scale Invariant Feature Transform (SIFT) and the Sobel operator are carried out for texture classification. Second, an improved ABM is used in the area with flat terrain (flat area), while the disparity estimation, a combination of quadtree decomposition and FBM, is used in the area with alpine terrain (alpine area). Furthermore, radiation compensation is applied in every area. Finally, the disparities, the residual image, and the reference image are compressed together by JPEG2000. The new algorithm provides a reasonable prediction in different areas according to the characteristics of the image textures, which improves the precision of the sensed image. The experimental results show that the PSNR of the proposed algorithm achieves a gain of up to about 3 dB compared with the traditional algorithm at low or medium bit rates, and the subjective quality is clearly enhanced.
Adaptive compression of remote sensing stereo image pairs
NASA Astrophysics Data System (ADS)
Li, Yunsong; Yan, Ruomei; Wu, Chengke; Wang, Keyan; Li, Shizhong; Wang, Yu
2010-09-01
According to the data characteristics of remote sensing stereo image pairs, a novel adaptive compression algorithm based on the combination of feature-based image matching (FBM), area-based image matching (ABM), and region-based disparity estimation is proposed. First, the Scale Invariant Feature Transform (SIFT) and the Sobel operator are carried out for texture classification. Second, an improved ABM is used in the flat area, while the disparity estimation is used in the alpine area. Radiation compensation is applied to further improve the performance. Finally, the residual image and the reference image are compressed independently by JPEG2000. The new algorithm provides a reasonable prediction in different areas according to the image textures, which improves the precision of the sensed image. The experimental results show that the PSNR of the proposed algorithm achieves a gain of up to about 3 dB compared with the traditional algorithm at low or medium bit rates, and the DTM and subjective quality are also clearly enhanced.
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
Phase Preserving Dynamic Range Compression of Aeromagnetic Images
NASA Astrophysics Data System (ADS)
Kovesi, Peter
2014-05-01
Geoscientific images with a high dynamic range, such as aeromagnetic images, are difficult to present in a manner that facilitates interpretation. The data values may range over 20000 nanoteslas or more, but a computer monitor is typically designed to present input data constrained to 8-bit values. Standard photographic high dynamic range tonemapping algorithms may be unsuitable, or inapplicable to such data, because they have been developed on the basis of the statistics of natural images, the feature types found in natural images, and models of the human visual system. These algorithms may also require image segmentation and/or decomposition of the image into base and detail layers, but these operations may have no meaning for geoscientific images. For geological and geophysical data, high dynamic range images are often dealt with via histogram equalization. The problem with this approach is that the contrast stretch or compression applied to data values depends on how frequently the data values occur in the image and not on the magnitude of any data features themselves. This can lead to inappropriate distortions in the output. Other approaches include use of the Automatic Gain Control algorithm developed by Rajagopalan, or the tilt derivative. A difficulty with these approaches is that the signal can be over-normalized and perception of the overall variations in the signal can be lost. To overcome these problems a method is presented that compresses the dynamic range of an image while preserving local features. It makes no assumptions about the formation of the image, the feature types it contains, or its range of values. Thus, unlike algorithms designed for photographic images, this algorithm can be applied to a wide range of scientific images. The method is based on extracting local phase and amplitude values across the image using monogenic filters. The dynamic range of the image can then be reduced by applying a range reducing function to the amplitude values, for
Lossless compression of images from China-Brazil Earth Resources Satellite
NASA Astrophysics Data System (ADS)
Pinho, Marcelo S.
2011-11-01
The aim of this work is to evaluate the performance of different schemes of lossless compression when applied to compact images collected by the satellite CBERS-2B. This satellite is the third one constructed under the CBERS Program (China-Brazil Earth Resources Satellite) and it was launched in 2007. This work focuses on the compression of images from the CCD camera, which has a resolution of 20 x 20 meters and five bands. CBERS-2B transmits the CCD data in real time, with no compression, and it does not store even a small part of the images. In fact, this satellite can work in this way because the bit rate produced by the CCD is smaller than the transmitter bit rate. However, the resolution and the number of spectral bands of imaging systems are increasing, and the constraints in power and bandwidth bound the communication capacity of a satellite channel. Therefore, in future satellites the communication systems must be reviewed. There are many algorithms for image compression described in the literature and some of them have already been used in remote sensing satellites (RSS). When the bit rate produced by the imaging system is much higher than the transmitter bit rate, a lossy encoder must be used. However, when the gap between the bit rates is not so high, a lossless procedure can be an interesting choice. This work evaluates JPEG-LS, CALIC, SPIHT, JPEG2000, the CCSDS recommendation, H.264, and JPEG-XR when they are used to compress images from the CCD camera of CBERS-2B with no loss. The algorithms are applied to a set of twenty images with 5,812 x 5,812 pixels, running in blocks of 128 x 128; 256 x 256; 512 x 512; and 1,024 x 1,024 pixels. The tests are done independently on each original band and also on five transformed bands, obtained by a procedure which decorrelates them. In general, the results have shown that algorithms based on predictive schemes (CALIC and JPEG-LS) applied to transformed decorrelated bands produce a better performance in the mean
Rank minimization code aperture design for spectrally selective compressive imaging.
Arguello, Henry; Arce, Gonzalo R
2013-03-01
A new code aperture design framework for multiframe code aperture snapshot spectral imaging (CASSI) system is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.
Accelerated dynamic EPR imaging using fast acquisition and compressive recovery
NASA Astrophysics Data System (ADS)
Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.
2016-12-01
Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.
Remotely sensed image compression based on wavelet transform
NASA Technical Reports Server (NTRS)
Kim, Seong W.; Lee, Heung K.; Kim, Kyung S.; Choi, Soon D.
1995-01-01
In this paper, we present an image compression algorithm that is capable of significantly reducing the vast amount of information contained in multispectral images. The developed algorithm exploits the spectral and spatial correlations found in multispectral images. The scheme encodes the difference between images after contrast/brightness equalization to remove the spectral redundancy, and utilizes a two-dimensional wavelet transform to remove the spatial redundancy. The transformed images are then encoded by Hilbert-curve scanning and run-length encoding, followed by Huffman coding. We also present the performance of the proposed algorithm with the LANDSAT MultiSpectral Scanner data. The loss of information is evaluated by PSNR (peak signal-to-noise ratio) and classification capability.
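The spatial-redundancy stage described above can be illustrated with a minimal sketch. This uses the Haar filter pair as a stand-in, since the abstract does not specify the paper's actual wavelet filter bank:

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet decomposition.

    Returns the (LL, LH, HL, HH) subbands. Illustrative only: the
    paper's actual filter bank is not specified in the abstract.
    """
    a = img.astype(float)
    # Filter along rows: average / difference of adjacent column pairs
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Filter along columns on both halves to form the four subbands
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh
```

For a smooth image most energy concentrates in LL, which is what makes the subsequent scanning and entropy coding effective.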
Real-Time Digital Compression Of Television Image Data
NASA Technical Reports Server (NTRS)
Barnes, Scott P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1990-01-01
Digital encoding/decoding system compresses color television image data in real time for transmission at lower data rates and, consequently, lower bandwidths. Implements predictive coding process, in which each picture element (pixel) is predicted from values of prior neighboring pixels, and coded transmission expresses difference between actual and predicted current values. Combines differential pulse-code modulation process with non-linear, nonadaptive predictor, nonuniform quantizer, and multilevel Huffman encoder.
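The predictive step above can be sketched in a few lines. This toy version predicts each pixel from its single left neighbor; the actual system uses a non-linear predictor over several prior neighbors, so this is only a simplified stand-in:

```python
import numpy as np

def dpcm_encode(pixels):
    """Differential coding of a 1-D pixel row with previous-pixel prediction."""
    pixels = np.asarray(pixels, dtype=int)
    pred = np.concatenate(([0], pixels[:-1]))   # predict from left neighbor
    return pixels - pred                        # transmit residuals only

def dpcm_decode(residuals):
    """Invert the differencing by accumulating residuals."""
    return np.cumsum(residuals)

row = [100, 102, 101, 105]
res = dpcm_encode(row)   # residuals: [100, 2, -1, 4]
```

Because neighboring pixels are strongly correlated, the residuals cluster near zero and entropy-code far more compactly than the raw values.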
Evaluation of color-embedded wavelet image compression techniques
NASA Astrophysics Data System (ADS)
Saenz, Martha; Salama, Paul; Shen, Ke; Delp, Edward J., III
1998-12-01
Color embedded image compression is investigated by means of a set of core experiments that seek to evaluate the advantages of various color transformations, spatial orientation trees and the use of monochrome embedded coding schemes such as EZW and SPIHT. In order to take advantage of the interdependencies of the color components for a given color space, two new spatial orientation trees that relate frequency bands and color components are investigated.
Bradley, J.N.; Brislawn, C.M.
1992-04-11
This report describes the development of a Wavelet Vector Quantization (WVQ) image compression algorithm for fingerprint raster files. The pertinent work was performed at Los Alamos National Laboratory for the Federal Bureau of Investigation. This document describes a previously-sent package of C-language source code, referred to as LAFPC, that performs the WVQ fingerprint compression and decompression tasks. The particulars of the WVQ algorithm and the associated design procedure are detailed elsewhere; the purpose of this document is to report the results of the design algorithm for the fingerprint application and to delineate the implementation issues that are incorporated in LAFPC. Special attention is paid to the computation of the wavelet transform, the fast search algorithm used for the VQ encoding, and the entropy coding procedure used in the transmission of the source symbols.
A geometric approach to multi-view compressive imaging
NASA Astrophysics Data System (ADS)
Park, Jae Young; Wakin, Michael B.
2012-12-01
In this paper, we consider multi-view imaging problems in which an ensemble of cameras collect images describing a common scene. To simplify the acquisition and encoding of these images, we study the effectiveness of non-collaborative compressive sensing encoding schemes wherein each sensor directly and independently compresses its image using randomized measurements. After these measurements and also perhaps the camera positions are transmitted to a central node, the key to an accurate reconstruction is to fully exploit the joint correlation among the signal ensemble. To capture such correlations, we propose a geometric modeling framework in which the image ensemble is treated as a sampling of points from a low-dimensional manifold in the ambient signal space. Building on results that guarantee stable embeddings of manifolds under random measurements, we propose a "manifold lifting" algorithm for recovering the ensemble that can operate even without knowledge of the camera positions. We divide our discussion into two scenarios, the near-field and far-field cases, and describe how the manifold lifting algorithm could be applied to these scenarios. At the end of this paper, we present an in-depth case study of a far-field imaging scenario, where the aim is to reconstruct an ensemble of satellite images taken from different positions with limited but overlapping fields of view. In this case study, we demonstrate the impressive power of random measurements to capture single- and multi-image structure without explicitly searching for it, as the randomized measurement encoding in conjunction with the proposed manifold lifting algorithm can even outperform image-by-image transform coding.
High-resolution three-dimensional imaging with compressive sensing
NASA Astrophysics Data System (ADS)
Wang, Jingyi; Ke, Jun
2016-10-01
LIDAR three-dimensional imaging technology has been used in many fields, such as military detection. However, LIDAR requires extremely fast data acquisition, which makes the manufacture of detector arrays for LIDAR systems very difficult. To solve this problem, we consider using compressive sensing, which can greatly decrease the data acquisition and relax the requirements on the detection device. To apply the compressive sensing idea, a spatial light modulator (SLM) is used to modulate the pulsed light source. A photodetector then receives the reflected light. A convex optimization problem is solved to reconstruct the 2D depth map of the object. To improve the resolution in the transversal direction, we use multiframe image restoration technology. For each 2D piecewise-planar scene, we move the SLM by half a pixel each time, so that the position illuminated by the modulated light changes accordingly. We repeat this movement in four different directions and thus obtain four low-resolution depth maps with different details of the same planar scene. Using all of the measurements obtained by the subpixel movements, we can reconstruct a high-resolution depth map of the scene. A linear minimum-mean-square-error algorithm is used for the reconstruction. By combining compressive sensing and multiframe image restoration technology, we reduce the data-analysis burden and improve the efficiency of detection. More importantly, we obtain high-resolution depth maps of a 3D scene.
Block-based adaptive lifting schemes for multiband image compression
NASA Astrophysics Data System (ADS)
Masmoudi, Hela; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe
2004-02-01
In this paper, we are interested in designing lifting schemes adapted to the statistics of the wavelet coefficients of multiband images for compression applications. More precisely, nonseparable vector lifting schemes are used in order to capture simultaneously the spatial and the spectral redundancies. The underlying operators are then computed in order to minimize the entropy of the resulting multiresolution representation. To this end, we have developed a new iterative block-based classification algorithm. Simulation tests carried out on remotely sensed multispectral images indicate that a substantial gain in terms of bit rate is achieved by the proposed adaptive coding method with respect to the non-adaptive one.
An adaptive technique to maximize lossless image data compression of satellite images
NASA Technical Reports Server (NTRS)
Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe
1994-01-01
Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques for regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques which are selected based on image contextual information.
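The entropy-based segmentation criterion above can be sketched as a per-block first-order entropy map. The block size and the use of raw intensity histograms are assumptions for illustration, not details from the paper:

```python
import numpy as np

def block_entropy(img, bs=8):
    """First-order entropy (bits/pixel) of each bs x bs block of an image.

    Sketch of an entropy-based segmentation criterion: blocks with
    similar entropy can be grouped and coded with the technique that
    suits them best. Block size and histogramming are assumptions.
    """
    h, w = img.shape
    out = np.zeros((h // bs, w // bs))
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            blk = img[i:i + bs, j:j + bs].ravel()
            _, counts = np.unique(blk, return_counts=True)
            p = counts / counts.sum()
            out[i // bs, j // bs] = -(p * np.log2(p)).sum()
    return out
```

A flat block scores 0 bits/pixel, while a block split evenly between two values scores 1 bit/pixel; thresholding such a map yields regions of similar entropy.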
Compressive imaging system design using task-specific information.
Ashok, Amit; Baheti, Pawan K; Neifeld, Mark A
2008-09-01
We present a task-specific information (TSI) based framework for designing compressive imaging (CI) systems. The task of target detection is chosen to demonstrate the performance of the optimized CI system designs relative to a conventional imager. In our optimization framework, we first select a projection basis and then find the associated optimal photon-allocation vector in the presence of a total photon-count constraint. Several projection bases, including principal components (PC), independent components, generalized matched-filter, and generalized Fisher discriminant (GFD) are considered for candidate CI systems, and their respective performance is analyzed for the target-detection task. We find that the TSI-optimized CI system design based on a GFD projection basis outperforms all other candidate CI system designs as well as the conventional imager. The GFD-based compressive imager yields a TSI of 0.9841 bits (out of a maximum possible 1 bit for the detection task), which is nearly ten times the 0.0979 bits achieved by the conventional imager at a signal-to-noise ratio of 5.0. We also discuss the relation between the information-theoretic TSI metric and a conventional statistical metric like probability of error in the context of the target-detection problem. It is shown that the TSI can be used to derive an upper bound on the probability of error that can be attained by any detection algorithm.
Subband directional vector quantization in radiological image compression
NASA Astrophysics Data System (ADS)
Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel
1992-05-01
The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.
Compressive sensing for direct millimeter-wave holographic imaging.
Qiao, Lingbo; Wang, Yingxin; Shen, Zongjun; Zhao, Ziran; Chen, Zhiqiang
2015-04-10
Direct millimeter-wave (MMW) holographic imaging, which provides both the amplitude and phase information by using the heterodyne mixing technique, is considered a powerful tool for personnel security surveillance. However, MMW imaging systems usually suffer from the problem of high cost or relatively long data acquisition periods for array or single-pixel systems. In this paper, compressive sensing (CS), which aims at sparse sampling, is extended to direct MMW holographic imaging for reducing the number of antenna units or the data acquisition time. First, following the scalar diffraction theory, an exact derivation of the direct MMW holographic reconstruction is presented. Then, CS reconstruction strategies for complex-valued MMW images are introduced based on the derived reconstruction formula. To pursue the applicability for near-field MMW imaging and more complicated imaging targets, three sparsity bases, including total variance, wavelet, and curvelet, are evaluated for the CS reconstruction of MMW images. We also discuss different sampling patterns for single-pixel, linear array and two-dimensional array MMW imaging systems. Both simulations and experiments demonstrate the feasibility of recovering MMW images from measurements at 1/2 or even 1/4 of the Nyquist rate.
Compressive sampling in passive millimeter-wave imaging
NASA Astrophysics Data System (ADS)
Gopalsami, N.; Elmer, T. W.; Liao, S.; Ahern, R.; Heifetz, A.; Raptis, A. C.; Luessi, M.; Babacan, D.; Katsaggelos, A. K.
2011-05-01
We present a Hadamard transform based imaging technique and have implemented it on a single-pixel passive millimeter-wave imager in the 146-154 GHz range. The imaging arrangement uses a set of Hadamard transform masks of size p x q at the image plane of a lens, and the transformed image signals are focused and collected by a horn antenna of the imager. The cyclic nature of the Hadamard matrix allows the use of a single extended 2-D Hadamard mask of size (2p-1) x (2q-1) to expose a p x q submask for each acquisition by raster scanning the large mask one pixel at a time. A total of N = pq acquisitions can be made with a complete scan. The original p x q image may be reconstructed by a simple matrix operation. Instead of full N acquisitions, we can use a subset of the masks for compressive sensing. In this regard, we have developed a relaxation technique that recovers the full Hadamard measurement space from sub-sampled Hadamard acquisitions. We have reconstructed high fidelity images with 1/9 of the full Hadamard acquisitions, thus reducing the image acquisition time by a factor of 9.
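The "simple matrix operation" reconstruction above can be sketched as follows. This simplified version measures a flattened scene with +/-1 Sylvester-construction Hadamard rows and inverts using orthogonality; the actual imager uses 0/1 masks cut from a single cyclic mask, which this sketch does not reproduce:

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of two)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 16
H = hadamard(n)
scene = np.arange(n, dtype=float)   # toy flattened scene
y = H @ scene                       # one measurement per mask
recon = (H.T @ y) / n               # H satisfies H^T H = n I, so this inverts exactly
```

Because each mask gathers light from half the scene, each measurement has a far better signal level than a raster scan of single pixels, which is the multiplexing advantage of Hadamard imaging.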
Image Recommendation Algorithm Using Feature-Based Collaborative Filtering
NASA Astrophysics Data System (ADS)
Kim, Deok-Hwan
As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose a feature-based collaborative filtering (FBCF) method to reflect the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides a higher quality recommendation and better performance than do typical collaborative filtering and content-based filtering techniques.
Development of a compressive sampling hyperspectral imager prototype
NASA Astrophysics Data System (ADS)
Barducci, Alessandro; Guzzi, Donatella; Lastri, Cinzia; Nardino, Vanni; Marcoionni, Paolo; Pippi, Ivan
2013-10-01
Compressive sensing (CS) is a new technology that investigates the chance to sample signals at a lower rate than the traditional sampling theory. The main advantage of CS is that compression takes place during the sampling phase, making possible significant savings in terms of the ADC, data storage memory, down-link bandwidth, and electrical power absorption. The CS technology could have primary importance for spaceborne missions and technology, paving the way to noteworthy reductions of payload mass, volume, and cost. On the contrary, the main disadvantage of CS is the intensive off-line data processing necessary to obtain the desired source estimation. In this paper we summarize the CS architecture and its possible implementations for Earth observation, giving evidence of possible bottlenecks hindering this technology. CS necessarily employs a multiplexing scheme, which should produce some SNR disadvantage. Moreover, this approach would necessitate optical light modulators and 2-D detector arrays with high frame rates. This paper describes the development of a sensor prototype at laboratory level that will be utilized for the experimental assessment of CS performance and the related reconstruction errors. The experimental test-bed adopts a push-broom imaging spectrometer, a liquid crystal plate, a standard CCD camera and a Silicon PhotoMultiplier (SiPM) matrix. The prototype is being developed within the framework of the ESA ITI-B Project titled "Hyperspectral Passive Satellite Imaging via Compressive Sensing".
Objective Quality Assessment and Perceptual Compression of Screen Content Images.
Wang, Shiqi; Gu, Ke; Zeng, Kai; Wang, Zhou; Lin, Weisi
2016-05-25
Screen content image (SCI) has recently emerged as an active topic due to the rapidly increasing demand in many graphically rich services such as wireless displays and virtual desktops. Image quality models play an important role in measuring and optimizing user experience of SCI compression and transmission systems, but are currently lacking. SCIs are often composed of pictorial regions and computer generated textual/graphical content, which exhibit different statistical properties that often lead to different viewer behaviors. Inspired by this, we propose an objective quality assessment approach for SCIs that incorporates both visual field adaptation and information content weighting into structural similarity based local quality assessment. Furthermore, we develop a perceptual screen content coding scheme based on the newly proposed quality assessment measure, targeting at further improving the SCI compression performance. Experimental results show that the proposed quality assessment method not only better predicts the perceptual quality of SCIs, but also demonstrates great potentials in the design of perceptually optimal SCI compression schemes.
Progressive image data compression with adaptive scale-space quantization
NASA Astrophysics Data System (ADS)
Przelaskowski, Artur
1999-12-01
Some improvements of the embedded zerotree wavelet algorithm are considered. The compression methods tested here are based on dyadic wavelet image decomposition, scalar quantization, and coding in a progressive fashion. Profitable coders with an embedded form of code and rate-fixing abilities, such as Shapiro's EZW and Said and Pearlman's SPIHT, are modified to improve compression efficiency. We explore modifications of the initial threshold value, reconstruction levels, and quantization scheme in the SPIHT algorithm. Additionally, we present the results of the best filter bank selection; the most efficient biorthogonal filter banks are tested. A significant efficiency improvement of the SPIHT coder was noticed, up to 0.9 dB of PSNR in some cases. Because of the problems with optimizing the quantization scheme in an embedded coder, we propose another solution: adaptive threshold selection of wavelet coefficients in a progressive coding scheme. Two versions of this coder are tested: progressive in quality and progressive in resolution. As a result, improved compression effectiveness is achieved, close to 1.3 dB over SPIHT for the image Barbara. All proposed algorithms are optimized automatically and are not time-consuming, although sometimes the most efficient solution must be found in an iterative way. The final results are competitive with the most efficient wavelet coders.
Motion-compensated compressed sensing for dynamic imaging
NASA Astrophysics Data System (ADS)
Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali
2010-08-01
The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.
Emerging standards for still image compression: A software implementation and simulation study
NASA Technical Reports Server (NTRS)
Pollara, F.; Arnold, S.
1991-01-01
A software implementation is described of an emerging standard for the lossy compression of continuous-tone still images. This software program can be used to compress planetary images and other 2-D instrument data. It provides a high-compression image coding capability that preserves image fidelity at compression rates competitive with or superior to most known techniques. This software implementation confirms the usefulness of such data compression and allows its performance to be compared with other schemes used in deep space missions and for database storage.
High-speed compressive range imaging based on active illumination.
Sun, Yangyang; Yuan, Xin; Pang, Shuo
2016-10-03
We report a compressive imaging method based on active illumination, which reconstructs a 3D scene at a frame rate beyond the acquisition speed limit of the camera. We have built an imaging prototype that projects a temporally varying illumination pattern and demonstrated a joint reconstruction algorithm that iteratively retrieves both the range and high-temporal-frequency information from the 2D low-frame-rate measurement. The reflectance and depth-map videos have been reconstructed at 1000 frames per second (fps) from the measurement captured at 200 fps. The range resolution is in agreement with the resolution calculated from the triangulation methods based on the same system geometry. We expect such an imaging method could become a simple solution to a wide range of applications, including industrial metrology, 3D printing, and vehicle navigation.
Efficient image compression scheme based on differential coding
NASA Astrophysics Data System (ADS)
Zhu, Li; Wang, Guoyou; Liu, Ying
2007-11-01
Embedded zerotree wavelet (EZW) and Set Partitioning in Hierarchical Trees (SPIHT) coding, introduced by J. M. Shapiro and Amir Said, are very effective and widely used in many fields. In this study, a brief explanation of the principles of SPIHT is first provided, and then some improvements to the SPIHT algorithm based on experiments are introduced. 1) To reduce the redundancy among the coefficients in the wavelet domain, we propose a differential method applied during coding. 2) Meanwhile, based on the characteristic distribution of the coefficients in each subband, we adjust the sorting pass and optimize the differential coding, in order to reduce redundant coding in each subband. 3) The image coding results, calculated at a certain threshold, show that through differential coding the compression rate becomes higher and the quality of the reconstructed image is raised greatly: at 0.5 bpp (bits per pixel), the PSNR (peak signal-to-noise ratio) of the reconstructed image exceeds that of standard SPIHT by 0.2-0.4 dB.
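The PSNR figure quoted in dB gains like the one above is computed from the mean squared error between the original and reconstructed images. A minimal sketch for 8-bit images:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float('inf')          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 0.2-0.4 dB gain at a fixed bit rate, as reported above, corresponds to a roughly 5-10% reduction in mean squared error.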
Degradative encryption: An efficient way to protect SPIHT compressed images
NASA Astrophysics Data System (ADS)
Xiang, Tao; Qu, Jinyu; Yu, Chenyun; Fu, Xinwen
2012-11-01
Degradative encryption, a new selective image encryption paradigm, is proposed to encrypt only a small part of image data to make the detail blurred but keep the skeleton discernible. The efficiency is further optimized by combining compression and encryption. A format-compliant degradative encryption algorithm based on set partitioning in hierarchical trees (SPIHT) is then proposed, and the scheme is designed to work in progressive mode for gaining a tradeoff between efficiency and security. Extensive experiments are conducted to evaluate the strength and efficiency of the scheme, and it is found that less than 10% data need to be encrypted for a secure degradation. In security analysis, the scheme is verified to be immune to cryptographic attacks as well as those adversaries utilizing image processing techniques. The scheme can find its wide applications in online try-and-buy service on mobile devices, searchable multimedia encryption in cloud computing, etc.
Application of strong zerotrees to compression of correlated MRI image sets
NASA Astrophysics Data System (ADS)
Soloveyko, Olexandr M.; Musatenko, Yurij S.; Kurashov, Vitalij N.; Dubikovskiy, Vladislav A.
2001-08-01
It is known that gainful interframe compression of a magnetic resonance (MR) image set is a quite difficult problem. Only a few authors have reported a performance gain for such compressors compared to separate compression of every MR image in the set (intraframe compression). Known reasons for this situation are the significant noise in MR images and the presence of only low-frequency correlations among images of the set. Recently we suggested a new method of correlated image set compression based on the Karhunen-Loeve (KL) transform and a special EZW compression scheme with strong zerotrees (KLSEZW). The KLSEZW algorithm showed good results in compressing video sequences with low and middle motion, even without motion compensation. The paper presents a successful application of the basic method and its modification to the interframe MR image compression problem.
NASA Technical Reports Server (NTRS)
Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron
2011-01-01
A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
Objective index of image fidelity for JPEG2000 compressed body CT images
Kim, Kil Joong; Lee, Kyoung Ho; Kang, Heung-Sik; Kim, So Yeon; Kim, Young Hoon; Kim, Bohyoung; Seo, Jinwook; Mantiuk, Rafal
2009-07-15
Compression ratio (CR) has been the de facto standard index of compression level for medical images. The aim of the study is to evaluate the CR, peak signal-to-noise ratio (PSNR), and a perceptual quality metric (high-dynamic range visual difference predictor HDR-VDP) as objective indices of image fidelity for Joint Photographic Experts Group (JPEG) 2000 compressed body computed tomography (CT) images, from the viewpoint of visually lossless compression approach. A total of 250 body CT images obtained with five different scan protocols (5-mm-thick abdomen, 0.67-mm-thick abdomen, 5-mm-thick lung, 0.67-mm-thick lung, and 5-mm-thick low-dose lung) were compressed to one of five CRs (reversible, 6:1, 8:1, 10:1, and 15:1). The PSNR and HDR-VDP values were calculated for the 250 pairs of the original and compressed images. By alternately displaying an original and its compressed image on the same monitor, five radiologists independently determined if the pair was distinguishable or indistinguishable. The kappa statistic for the interobserver agreement among the five radiologists' responses was 0.70. According to the radiologists' responses, the number of distinguishable image pairs tended to significantly differ among the five scan protocols at 6:1-10:1 compressions (Fisher-Freeman-Halton exact tests). Spearman's correlation coefficients between each of the CR, PSNR, and HDR-VDP and the number of radiologists who responded as distinguishable were 0.72, -0.77, and 0.85, respectively. Using the radiologists' pooled responses as the reference standards, the areas under the receiver-operating-characteristic curves for the CR, PSNR, and HDR-VDP were 0.87, 0.93, and 0.97, respectively, showing significant differences between the CR and PSNR (p=0.04), or HDR-VDP (p<0.001), and between the PSNR and HDR-VDP (p<0.001). In conclusion, the CR is less suitable than the PSNR or HDR-VDP as an objective index of image fidelity for JPEG2000 compressed body CT images. The HDR-VDP is more
Application-oriented region of interest based image compression using bit-allocation optimization
NASA Astrophysics Data System (ADS)
Zhu, Yuanping
2015-01-01
Region of interest (ROI) based image compression can offer a high image-compression ratio along with high quality in the important regions of the image. For many applications, stable compression quality is required for both the ROIs and the images. However, image compression does not consider information specific to the application and cannot meet this requirement well. This paper proposes an application-oriented ROI-based image-compression method using bit-allocation optimization. Unlike typical methods that define bit-rate parameters empirically, the proposed method adjusts the bit-rate parameters adaptively to both images and ROIs. First, an application-dependent optimization model is constructed. The relationship between the compression parameters and the image content is learned from a training image set. Image redundancy is used to measure the compression capability of image content. Then, during compression, the global bit rate and the ROI bit rate are adjusted in the images and ROIs, respectively, supported by the application-dependent information in the optimization model. As a result, stable compression quality is assured in the applications. Experiments with two different applications showed that quality deviation in the reconstructed images decreased, verifying the effectiveness of the proposed method.
Real-time Image Generation for Compressive Light Field Displays
NASA Astrophysics Data System (ADS)
Wetzstein, G.; Lanman, D.; Hirsch, M.; Raskar, R.
2013-02-01
With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.
Multifrequency Bayesian compressive sensing methods for microwave imaging.
Poli, Lorenzo; Oliveri, Giacomo; Ding, Ping Ping; Moriyama, Toshifumi; Massa, Andrea
2014-11-01
The Bayesian retrieval of sparse scatterers under multifrequency transverse magnetic illuminations is addressed. Two innovative imaging strategies are formulated to process the spectral content of microwave scattering data according to either a frequency-hopping multistep scheme or a multifrequency one-shot scheme. To solve the associated inverse problems, customized implementations of single-task and multitask Bayesian compressive sensing are introduced. A set of representative numerical results is discussed to assess the effectiveness and the robustness against the noise of the proposed techniques also in comparison with some state-of-the-art deterministic strategies.
COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation
NASA Technical Reports Server (NTRS)
Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos
2015-01-01
The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and the reconstruction is performed on the ground. Our paper explains our methodology and demonstrates the results of the scheme, which achieves high-quality image compression that is robust to noise and corruption.
Compressed Sensing MR Image Reconstruction Exploiting TGV and Wavelet Sparsity
Du, Huiqian; Han, Yu; Mei, Wenbo
2014-01-01
Compressed sensing (CS) based methods make it possible to reconstruct magnetic resonance (MR) images from undersampled measurements, which is known as CS-MRI. Reference-driven CS-MRI reconstruction schemes can further decrease the sampling ratio by exploiting the sparsity of the difference image between the target and the reference MR images in the pixel domain. Unfortunately, existing methods do not work well when contrast changes are incorrectly estimated or motion compensation is inaccurate. In this paper, we propose to reconstruct MR images by utilizing the sparsity of the difference image between the target and the motion-compensated reference images in the wavelet transform and gradient domains. The idea is attractive because it requires neither the estimation of the contrast changes nor multiple motion compensations. In addition, we apply total generalized variation (TGV) regularization to eliminate the staircasing artifacts caused by conventional total variation (TV). The fast composite splitting algorithm (FCSA) is used to solve the proposed reconstruction problem in order to improve computational efficiency. Experimental results demonstrate that the proposed method can not only reduce the computational cost but also decrease the sampling ratio or, alternatively, improve the reconstruction quality. PMID:25371704
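Solvers such as FCSA enforce wavelet-domain sparsity through a shrinkage (soft-thresholding) step applied at every iteration. A minimal sketch of that proximal step — generic CS machinery, not the paper's full TGV/wavelet algorithm:

```python
def soft_threshold(x, t):
    """Soft-thresholding, the proximal operator of t*|x|: the basic
    shrinkage step applied to transform coefficients in
    l1-regularized CS reconstructions."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

coeffs = [3.0, -0.4, 1.2, -2.5, 0.1]
shrunk = [soft_threshold(c, 1.0) for c in coeffs]  # small coefficients go to 0
```

Coefficients whose magnitude falls below the threshold are zeroed, which is what promotes sparsity of the difference image across iterations.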
Terahertz compressive imaging with metamaterial spatial light modulators
NASA Astrophysics Data System (ADS)
Watts, Claire M.; Shrekenhamer, David; Montoya, John; Lipworth, Guy; Hunt, John; Sleasman, Timothy; Krishna, Sanjay; Smith, David R.; Padilla, Willie J.
2014-08-01
Imaging at long wavelengths, for example at terahertz and millimetre-wave frequencies, is a highly sought-after goal of researchers because of the great potential for applications ranging from security screening and skin cancer detection to all-weather navigation and biodetection. Here, we design, fabricate and demonstrate active metamaterials that function as real-time tunable, spectrally sensitive spatial masks for terahertz imaging with only a single-pixel detector. A modulation technique permits imaging with negative mask values, which is typically difficult to achieve with intensity-based components. We demonstrate compressive techniques allowing the acquisition of high-frame-rate, high-fidelity images. Our system is all solid-state with no moving parts, yields improved signal-to-noise ratios over standard raster-scanning techniques, and uses a source orders of magnitude lower in power than conventional set-ups. The demonstrated imaging system establishes a new path for terahertz imaging that is distinct from existing focal-plane-array-based cameras.
Edge-preserving image compression using adaptive lifting wavelet transform
NASA Astrophysics Data System (ADS)
Zhang, Libao; Qiu, Bingchang
2015-07-01
In this paper, a novel 2-D adaptive lifting wavelet transform is presented. The proposed algorithm is designed to further reduce the high-frequency energy of the wavelet transform, improve image compression efficiency and preserve the edges or texture of original images more effectively. A new optional direction set, covering the surrounding integer pixels and sub-pixels, is designed; hence, our algorithm adapts far better to the image orientation features in local image blocks. To achieve computational efficiency and good coding performance, the complete process of the 2-D adaptive lifting wavelet transform is introduced and implemented. Compared with the traditional lifting-based wavelet transform, adaptive directional lifting and the direction-adaptive discrete wavelet transform, the new structure reduces the high-frequency wavelet coefficients more effectively, and the texture structures of the reconstructed images are more refined and clear than those of the other methods. The peak signal-to-noise ratio and the subjective quality of the reconstructed images are significantly improved.
Resolution enhancement for ISAR imaging via improved statistical compressive sensing
NASA Astrophysics Data System (ADS)
Zhang, Lei; Wang, Hongxian; Qiao, Zhi-jun
2016-12-01
Developing compressed sensing (CS) theory reveals that optimal reconstruction of an unknown signal can be achieved from very limited observations by utilizing signal sparsity. For inverse synthetic aperture radar (ISAR), the image of a target of interest is generally formed by a limited number of strong scattering centers, representing strong spatial sparsity. Such prior sparsity intrinsically paves a way to improved ISAR imaging performance. In this paper, we develop a super-resolution algorithm for forming ISAR images from limited observations. When the amplitude of the target scattered field follows an identical Laplace probability distribution, the approach converts super-resolution imaging into sparsity-driven optimization in the Bayesian statistics sense. We show that improved performance is achievable by taking advantage of the meaningful spatial structure of the scattered field. Further, we use a nonidentical Laplace distribution with small scale on strong signal components and large scale on noise to discriminate strong scattering centers from noise. A maximum likelihood estimator combined with a bandwidth extrapolation technique is also developed to estimate the scale parameters. Processing of real measured data indicates that the proposed method can reconstruct a high-resolution image from only a limited number of pulses, even at low SNR, which shows advantages over current super-resolution imaging methods.
A linear mixture analysis-based compression for hyperspectral image analysis
C. I. Chang; I. W. Ginsberg
2000-06-30
In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne visible/infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
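As a toy illustration of the abundance estimation underlying this approach — a simplified two-endmember, closed-form stand-in for the paper's fully constrained least squares over many endmembers (the function name and spectra below are invented for illustration):

```python
def two_endmember_abundance(pixel, e1, e2):
    """Least-squares abundance a of endmember e1 in the mixing model
    pixel ~ a*e1 + (1-a)*e2, with a clipped to [0, 1] so that the
    non-negativity and sum-to-one constraints both hold."""
    d = [u - v for u, v in zip(e1, e2)]     # e1 - e2
    r = [p - v for p, v in zip(pixel, e2)]  # pixel - e2
    denom = sum(x * x for x in d)
    a = sum(x * y for x, y in zip(r, d)) / denom
    return max(0.0, min(1.0, a))

grass = [0.1, 0.6, 0.3]
soil  = [0.4, 0.3, 0.2]
mixed = [0.25, 0.45, 0.25]  # an exact 50/50 mixture of the two
a = two_endmember_abundance(mixed, grass, soil)
```

The abundance value, not the raw gray levels, is what gets encoded per pixel, which is why the compression preserves the information relevant to detection and classification.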
Underwater Acoustic Matched Field Imaging Based on Compressed Sensing
Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong
2015-01-01
Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method. PMID:26457708
Honey Bee Mating Optimization Vector Quantization Scheme in Image Compression
NASA Astrophysics Data System (ADS)
Horng, Ming-Huwi
Vector quantization is a powerful technique in digital image compression. Traditional widely used methods such as the Linde-Buzo-Gray (LBG) algorithm typically yield only a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-global optimal codebook for vector quantization. In this paper, we applied a new swarm algorithm, honey bee mating optimization, to construct the codebook for vector quantization. The proposed method is called the honey bee mating optimization based LBG (HBMO-LBG) algorithm. The results were compared with those of two other methods, the LBG and PSO-LBG algorithms. Experimental results showed that the proposed HBMO-LBG algorithm is more reliable and that the reconstructed images have higher quality than those generated by the other two methods.
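The baseline LBG algorithm that both PSO-LBG and HBMO-LBG try to improve upon can be sketched in a few lines. This 1-D toy version (illustrative only, not the paper's image codebook) uses the classic splitting initialization followed by Lloyd iterations:

```python
def lbg_1d(samples, n_codewords, eps=0.01, iters=20):
    """Minimal 1-D LBG: start from the global mean, repeatedly split
    each codeword into c-eps and c+eps, then run Lloyd iterations
    (nearest-codeword assignment followed by centroid update)."""
    codebook = [sum(samples) / len(samples)]
    while len(codebook) < n_codewords:
        codebook = [c + d for c in codebook for d in (-eps, eps)]
        for _ in range(iters):
            cells = [[] for _ in codebook]
            for s in samples:
                j = min(range(len(codebook)), key=lambda k: abs(s - codebook[k]))
                cells[j].append(s)
            codebook = [sum(c) / len(c) if c else codebook[j]
                        for j, c in enumerate(cells)]
    return sorted(codebook)

data = [0.0, 1.0, 0.5, 10.0, 11.0, 10.5]
cb = lbg_1d(data, 2)  # two well-separated clusters -> two codewords
```

Because each Lloyd step only moves codewords toward the centroids of their current cells, the result depends on initialization — the local-optimum behavior that swarm methods like HBMO aim to escape.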
Underwater Acoustic Matched Field Imaging Based on Compressed Sensing.
Yan, Huichen; Xu, Jia; Long, Teng; Zhang, Xudong
2015-10-07
Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model's recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.
High dynamic range coherent imaging using compressed sensing.
He, Kuan; Sharma, Manoj Kumar; Cossairt, Oliver
2015-11-30
In both lensless Fourier transform holography (FTH) and coherent diffraction imaging (CDI), a beamstop is used to block strong intensities which exceed the limited dynamic range of the sensor, causing a loss in low-frequency information, making high quality reconstructions difficult or even impossible. In this paper, we show that an image can be recovered from high-frequencies alone, thereby overcoming the beamstop problem in both FTH and CDI. The only requirement is that the object is sparse in a known basis, a common property of most natural and manmade signals. The reconstruction method relies on compressed sensing (CS) techniques, which ensure signal recovery from incomplete measurements. Specifically, in FTH, we perform compressed sensing (CS) reconstruction of captured holograms and show that this method is applicable not only to standard FTH, but also multiple or extended reference FTH. For CDI, we propose a new phase retrieval procedure, which combines Fienup's hybrid input-output (HIO) method and CS. Both numerical simulations and proof-of-principle experiments are shown to demonstrate the effectiveness and robustness of the proposed CS-based reconstructions in dealing with missing data in both FTH and CDI.
Pairwise KLT-Based Compression for Multispectral Images
NASA Astrophysics Data System (ADS)
Nian, Yongjian; Liu, Yu; Ye, Zhen
2016-12-01
This paper presents a pairwise KLT-based compression algorithm for multispectral images. Although the KLT has been widely employed for spectral decorrelation, its complexity is high if it is performed on the global multispectral image. To solve this problem, this paper presents a pairwise KLT for spectral decorrelation, where the KLT is performed on only two bands at a time. First, the KLT is performed on the first two adjacent bands, and two principal components are obtained. Next, one remaining band and the principal component (PC) with the larger eigenvalue are selected, and a KLT is performed on this new pair. This procedure is repeated until the last band is reached. Finally, the optimal truncation technique of post-compression rate-distortion optimization is employed for the rate allocation of all the PCs, followed by embedded block coding with optimized truncation to generate the final bit-stream. Experimental results show that the proposed algorithm outperforms the algorithm based on the global KLT. Moreover, the pairwise KLT structure can significantly reduce the complexity compared with a global KLT.
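For a single pair of bands the KLT reduces to diagonalizing a 2x2 covariance matrix, which has a closed-form rotation angle — this is what keeps the pairwise scheme cheap. A minimal sketch of one pairwise step (illustrative; the function name and sample data are invented):

```python
import math

def klt_2bands(band1, band2):
    """KLT on one pair of bands: estimate the 2x2 covariance, rotate
    by the angle that diagonalizes it, and return the two principal
    components (decorrelated bands), largest eigenvalue first."""
    n = len(band1)
    m1, m2 = sum(band1) / n, sum(band2) / n
    c11 = sum((x - m1) ** 2 for x in band1) / n
    c22 = sum((y - m2) ** 2 for y in band2) / n
    c12 = sum((x - m1) * (y - m2) for x, y in zip(band1, band2)) / n
    theta = 0.5 * math.atan2(2 * c12, c11 - c22)
    ct, st = math.cos(theta), math.sin(theta)
    pc1 = [ct * (x - m1) + st * (y - m2) for x, y in zip(band1, band2)]
    pc2 = [-st * (x - m1) + ct * (y - m2) for x, y in zip(band1, band2)]
    return pc1, pc2

b1 = [10.0, 12.0, 14.0, 16.0, 18.0]
b2 = [21.0, 24.5, 28.0, 32.5, 35.0]  # strongly correlated with b1
pc1, pc2 = klt_2bands(b1, b2)
cross = sum(x * y for x, y in zip(pc1, pc2)) / len(pc1)  # ~0 after KLT
var1 = sum(v * v for v in pc1) / len(pc1)
var2 = sum(v * v for v in pc2) / len(pc2)
```

The higher-variance component `pc1` is the one the paper carries forward to pair with the next band.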
A CMOS Imager with Focal Plane Compression using Predictive Coding
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 µm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
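The Golomb-Rice coder mentioned above encodes a non-negative integer as a unary quotient plus k fixed remainder bits, which is what makes a compact hardware implementation possible. A software sketch of the codec (illustrative, not the chip's circuit; the zigzag mapping of signed residuals to non-negative integers is a common convention, assumed here rather than taken from the paper):

```python
def zigzag(e):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def unzigzag(u):
    return u // 2 if u % 2 == 0 else -(u + 1) // 2

def rice_encode(value, k):
    """Golomb-Rice code: quotient value >> k in unary (q ones, one
    zero), then the low k bits of value verbatim."""
    bits = "1" * (value >> k) + "0"
    for i in range(k - 1, -1, -1):
        bits += "1" if (value >> i) & 1 else "0"
    return bits

def rice_decode(bits, pos, k):
    """Decode one codeword starting at pos; return (value, next_pos)."""
    q = 0
    while bits[pos] == "1":
        q += 1
        pos += 1
    pos += 1  # skip the terminating zero
    r = 0
    for _ in range(k):
        r = (r << 1) | (bits[pos] == "1")
        pos += 1
    return (q << k) | r, pos

residuals = [-3, 0, 5, 1, -1]
stream = "".join(rice_encode(zigzag(e), 2) for e in residuals)

decoded, pos = [], 0
while pos < len(stream):
    u, pos = rice_decode(stream, pos, 2)
    decoded.append(unzigzag(u))
```

Small residuals — the common case after good prediction — yield short codewords, which is why the coder pairs naturally with predictive decorrelation.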
NASA Astrophysics Data System (ADS)
Leihong, Zhang; Zilan, Pan; Luying, Wu; Xiuhua, Ma
2016-11-01
To solve the problem that large images can hardly be retrieved under stringent hardware restrictions and that the security level is low, a method based on compressive ghost imaging (CGI) with the Fast Fourier Transform (FFT), named FFT-CGI, is proposed. Initially, the information is encrypted by the sender with the FFT, and the FFT-coded image is encrypted by the CGI system with a secret key. Then the receiver decrypts the image with the aid of compressive sensing (CS) and the FFT. Simulation results are given to verify the feasibility, security, and compression performance of the proposed encryption scheme. The experiments suggest that the method can improve the quality of large images compared with conventional ghost imaging and achieve imaging of large-sized images; furthermore, the amount of data transmitted is greatly reduced by the combination of compressive sensing and the FFT, and the security level of ghost images is improved, as assessed against ciphertext-only attack (COA), chosen-plaintext attack (CPA), and noise attack. This technique can be immediately applied to encryption and data storage with the advantages of high security, fast transmission, and high quality of reconstructed information.
Learning-based compressed sensing for infrared image super resolution
NASA Astrophysics Data System (ADS)
Zhao, Yao; Sui, Xiubao; Chen, Qian; Wu, Shaochi
2016-05-01
This paper presents an infrared image super-resolution method based on compressed sensing (CS). First, the reconstruction model under the CS framework is established and a Toeplitz matrix is selected as the sensing matrix. Compared with traditional learning-based methods, the proposed method uses a set of sub-dictionaries instead of two coupled dictionaries to recover high resolution (HR) images, and the Toeplitz sensing matrix makes the proposed method time-efficient. Second, all training samples are divided into several feature spaces by using the proposed adaptive k-means classification method, which is more accurate than the standard k-means method. On the basis of this approach, a complex nonlinear mapping from the HR space to the low resolution (LR) space can be converted into several compact linear mappings. Finally, the relationships between HR and LR image patches can be obtained from the multiple sub-dictionaries, and HR infrared images are reconstructed from the input LR images and the multiple sub-dictionaries. The experimental results show that the proposed method is quantitatively and qualitatively more effective than other state-of-the-art methods.
Oriented wavelet transform for image compression and denoising.
Chappelier, Vivien; Guillemot, Christine
2006-10-01
In this paper, we introduce a new transform for image processing, based on wavelets and the lifting paradigm. The lifting steps of a unidimensional wavelet are applied along a local orientation defined on a quincunx sampling grid. To maximize energy compaction, the orientation minimizing the prediction error is chosen adaptively. A fine-grained multiscale analysis is provided by iterating the decomposition on the low-frequency band. In the context of image compression, the multiresolution orientation map is coded using a quad tree. The rate allocation between the orientation map and wavelet coefficients is jointly optimized in a rate-distortion sense. For image denoising, a Markov model is used to extract the orientations from the noisy image. As long as the map is sufficiently homogeneous, interesting properties of the original wavelet are preserved such as regularity and orthogonality. Perfect reconstruction is ensured by the reversibility of the lifting scheme. The mutual information between the wavelet coefficients is studied and compared to the one observed with a separable wavelet transform. The rate-distortion performance of this new transform is evaluated for image coding using state-of-the-art subband coders. Its performance in a denoising application is also assessed against the performance obtained with other transforms or denoising methods.
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging, and a conclusion is drawn that the processing of ISAR imaging can be expressed mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar and utilize random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D by a Kronecker product, which increases the size of the dictionary and the computational cost sharply. In this paper, we introduce the 2D-SL0 algorithm into the reconstruction of the image. It is proved that 2D-SL0 can achieve results equivalent to those of other 1D reconstruction methods, but with significantly reduced computational complexity and memory usage. Moreover, we present the results of simulation experiments that demonstrate the effectiveness and feasibility of our method.
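The Kronecker-product conversion the authors avoid rests on the identity vec(A X B^T) = (B ⊗ A) vec(X) (column-stacking vec): the 1D dictionary is the Kronecker product of the two per-dimension operators, hence its size blow-up. A small pure-Python check of the identity, with tiny invented matrices standing in for the range and azimuth operators:

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def kron(B, A):
    """Kronecker product B (x) A as a plain list-of-lists matrix."""
    return [[B[i][j] * A[r][c]
             for j in range(len(B[0])) for c in range(len(A[0]))]
            for i in range(len(B)) for r in range(len(A))]

def vec(M):
    """Column-stacking vectorization."""
    return [M[i][j] for j in range(len(M[0])) for i in range(len(M))]

A = [[1, 2, 0], [0, 1, 3]]             # per-dimension operator 1 (2x3)
B = [[2, 0, 1], [1, 1, 0]]             # per-dimension operator 2 (2x3)
X = [[1, 0, 2], [0, 3, 0], [4, 0, 1]]  # 3x3 sparse "image"

lhs = vec(matmul(matmul(A, X), transpose(B)))  # 2-D form: A X B^T
rhs = [sum(w * x for w, x in zip(row, vec(X))) for row in kron(B, A)]
```

Here the 2-D form works with 2x3 matrices, while the equivalent 1-D form needs the 4x9 matrix kron(B, A) — the growth that makes 2D algorithms such as 2D-SL0 attractive at realistic image sizes.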
Application of region selective embedded zerotree wavelet coder in CT image compression.
Li, Guoli; Zhang, Jian; Wang, Qunjing; Hu, Cungang; Deng, Na; Li, Jianping
2005-01-01
Compression is necessary in medical image preservation because of the huge data quantity. Medical images differ from common images because of their own characteristics; for example, part of the information in a CT image is useless, and saving it is a waste of resources. A region selective EZW coder is proposed with which only the useful part of the image is selected and compressed; tests on CT images give good results.
Wu, Ai-Min; Ni, Wen-Fei
2013-01-01
The intravertebral vacuum cleft (IVC) sign in vertebral compression fracture patients has attracted much attention. Its pathogenesis, imaging characteristics and the efficacy of surgical intervention are disputed. Many pathogenesis theories have been proposed, and its imaging characteristics are distinct from those of malignancy and infection. Percutaneous vertebroplasty (PVP) and percutaneous kyphoplasty (PKP) have been the main therapeutic methods for these patients in recent years. The avascular necrosis theory is the most widely supported; PVP can relieve back pain, restore vertebral body height and correct the kyphotic angulation (KA), and is recommended for these patients. PKP seems to be more effective for the correction of KA and produces less cement leakage. Kümmell's disease with the IVC sign as reported by modern authors is not completely consistent with the syndrome reported by Dr. Hermann Kümmell. PMID:23741556
3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography
Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina
2015-01-01
We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC to the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591
Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.
ERIC Educational Resources Information Center
Culik, Karel II; Kari, Jarkko
1994-01-01
Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…
ERIC Educational Resources Information Center
Brazelton, G. Blue; Renn, Kristen A.; Stewart, Dafina-Lazarus
2015-01-01
In this chapter, the editors provide a summary of the information shared in this sourcebook about the success of students who have minoritized identities of sexuality or gender and offer recommendations for policy, practice, and further research.
Recommending images of user interests from the biomedical literature
NASA Astrophysics Data System (ADS)
Clukey, Steven; Xu, Songhua
2013-03-01
Every year hundreds of thousands of biomedical images are published in journals and conferences. Consequently, finding images relevant to one's interests becomes an ever daunting task. This vast amount of literature creates a need for intelligent and easy-to-use tools that can help researchers effectively navigate through the content corpus and conveniently locate materials of their interests. Traditionally, literature search tools allow users to query content using topic keywords. However, manual query composition is often time and energy consuming. A better system would be one that can automatically deliver relevant content to a researcher without having the end user manually manifest one's search intent and interests via search queries. Such a computer-aided assistance for information access can be provided by a system that first determines a researcher's interests automatically and then recommends images relevant to the person's interests accordingly. The technology can greatly improve a researcher's ability to stay up to date in their fields of study by allowing them to efficiently browse images and documents matching their needs and interests among the vast amount of the biomedical literature. A prototype system implementation of the technology can be accessed via http://www.smartdataware.com.
Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers.
López, Yuri Álvarez; Lorenzo, José Ángel Martínez
2017-01-15
One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated.
Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers
Álvarez López, Yuri; Martínez Lorenzo, José Ángel
2017-01-01
One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated. PMID:28098841
Adaptive wavelet transform algorithm for lossy image compression
NASA Astrophysics Data System (ADS)
Pogrebnyak, Oleksiy B.; Ramirez, Pablo M.; Acevedo Mosqueda, Marco Antonio
2004-11-01
A new algorithm of locally adaptive wavelet transform based on the modified lifting scheme is presented. It adapts the wavelet high-pass filter at the prediction stage to the local image data activity. The proposed algorithm uses the generalized framework for the lifting scheme, which permits different wavelet filter coefficients to be obtained easily in the case of the (~N, N) lifting. By changing the wavelet filter order and various control parameters, one can obtain the desired filter frequency response. It is proposed to perform hard switching between different wavelet lifting filter outputs according to the local data activity estimate. The proposed adaptive transform possesses good energy compaction. The designed algorithm was tested on different images. The obtained simulation results show that the visual and quantitative quality of the restored images is high. Distortions in the vicinity of details of high spatial activity are smaller than with the non-adaptive transform, which introduces ringing artifacts. The designed algorithm can be used for lossy image compression and in noise suppression applications.
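For reference, the non-adaptive integer 5/3 (LeGall) lifting transform that such adaptive schemes modify at the prediction stage can be sketched as follows — a standard textbook construction, not the paper's adaptive filter (boundary handling here is simple clamping, chosen for brevity):

```python
def dwt53(x):
    """One level of the integer LeGall 5/3 lifting transform.
    Predict: detail d[i] = odd sample minus the floor-average of its
    even neighbours.  Update: smooth s[i] = even sample plus a
    rounded average of the adjacent details."""
    even, odd = x[0::2], x[1::2]
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(len(odd))]
    s = [even[i] + (d[max(i - 1, 0)] + d[i] + 2) // 4
         for i in range(len(even))]
    return s, d

def idwt53(s, d):
    """Inverse transform: undo the lifting steps in reverse order."""
    even = [s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
            for i in range(len(s))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i in range(len(d))]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

signal = [3, 7, 2, 9, 4, 4, 8, 1]
s, d = dwt53(signal)  # perfect reconstruction is guaranteed by lifting
```

Because each lifting step is inverted exactly by subtracting the same integer expression, reconstruction is lossless whatever predictor is used — which is why the adaptive scheme can switch predictors per sample without breaking reversibility.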
Fourier-domain beamforming: the path to compressed ultrasound imaging.
Chernyakova, Tanya; Eldar, Yonina
2014-08-01
Sonography techniques use multiple transducer elements for tissue visualization. Signals received at each element are sampled before digital beamforming. The sampling rates required to perform high-resolution digital beamforming are significantly higher than the Nyquist rate of the signal and result in a considerable amount of data that must be stored and processed. A recently developed technique, compressed beamforming, based on the finite rate of innovation model, compressed sensing (CS), and Xampling ideas, allows a reduction in the number of samples needed to reconstruct an image composed of strong reflectors. A drawback of this method is its inability to treat speckle, which is of significant importance in medical imaging. Here, we build on previous work and extend it to a general concept of beamforming in frequency. This allows exploitation of the low bandwidth of the ultrasound signal and bypassing of the oversampling dictated by digital implementation of beamforming in time. By using beamforming in frequency, the same image quality is obtained from far fewer samples. We next present a CS technique that allows for further rate reduction, using only a portion of the beamformed signal's bandwidth. We demonstrate our methods on in vivo cardiac data and show that reductions of up to 1/28 of the standard beamforming rates are possible. Finally, we present an implementation on an ultrasound machine using sub-Nyquist sampling and processing. Our results prove that the concept of sub-Nyquist processing is feasible for medical ultrasound, leading to the potential of considerable reduction in future ultrasound machines' size, power consumption, and cost.
Spatially Regularized Compressed Sensing for High Angular Resolution Diffusion Imaging
Rathi, Yogesh; Dolui, Sudipto
2013-01-01
Despite the relative recency of its inception, the theory of compressive sampling (aka compressed sensing) (CS) has already revolutionized multiple areas of applied sciences, a particularly important instance of which is medical imaging. Specifically, the theory has provided a different perspective on the important problem of optimal sampling in magnetic resonance imaging (MRI), with an ever-increasing body of works reporting stable and accurate reconstruction of MRI scans from the number of spectral measurements which would have been deemed unacceptably small as recently as five years ago. In this paper, the theory of CS is employed to palliate the problem of long acquisition times, which is known to be a major impediment to the clinical application of high angular resolution diffusion imaging (HARDI). Specifically, we demonstrate that a substantial reduction in data acquisition times is possible through minimization of the number of diffusion encoding gradients required for reliable reconstruction of HARDI scans. The success of such a minimization is primarily due to the availability of spherical ridgelet transformation, which excels in sparsifying HARDI signals. What makes the resulting reconstruction procedure even more accurate is a combination of the sparsity constraints in the diffusion domain with additional constraints imposed on the estimated diffusion field in the spatial domain. Accordingly, the present paper describes an original way to combine the diffusion- and spatial-domain constraints to achieve a maximal reduction in the number of diffusion measurements, while sacrificing little in terms of reconstruction accuracy. Finally, details are provided on an efficient numerical scheme which can be used to solve the aforementioned reconstruction problem by means of standard and readily available estimation tools. The paper is concluded with experimental results which support the practical value of the proposed reconstruction methodology. PMID:21536524
Compressive spectral polarization imaging by a pixelized polarizer and colored patterned detector.
Fu, Chen; Arguello, Henry; Sadler, Brian M; Arce, Gonzalo R
2015-11-01
A compressive spectral and polarization imager based on a pixelized polarizer and colored patterned detector is presented. The proposed imager captures several dispersed compressive projections with spectral and polarization coding. Stokes parameter images at several wavelengths are reconstructed directly from 2D projections. Employing a pixelized polarizer and colored patterned detector enables compressive sensing over spatial, spectral, and polarization domains, reducing the total number of measurements. Compressive sensing codes are specially designed to enhance the peak signal-to-noise ratio in the reconstructed images. Experiments validate the architecture and reconstruction algorithms.
NASA Astrophysics Data System (ADS)
Truong, Trieu-Kien; Chen, Shi-Huang
2006-03-01
In this paper, a new medical image compression algorithm using cubic spline interpolation (CSI) is presented for telemedicine applications. The CSI is developed to subsample image data with minimal distortion and thereby achieve compression. It has been shown in the literature that the CSI can be combined with the JPEG or JPEG2000 algorithm to develop a modified JPEG or JPEG2000 codec that obtains a higher compression ratio and better reconstructed image quality than the standard JPEG and JPEG2000 codecs. This paper further applies the modified JPEG codec to medical image compression. Experimental results show that the proposed scheme can increase the compression ratio of the original JPEG medical data compression system by 25-30% with similar visual quality. The system can reduce the load on telecommunication networks and is well suited to low-bit-rate telemedicine applications.
NASA Astrophysics Data System (ADS)
Klimesh, M.; Stanton, V.; Watola, D.
2000-10-01
We describe a hardware implementation of a state-of-the-art lossless image compression algorithm. The algorithm is based on the LOCO-I (low complexity lossless compression for images) algorithm developed by Weinberger, Seroussi, and Sapiro, with modifications to lower the implementation complexity. In this setup, the compression itself is performed entirely in hardware using a field programmable gate array and a small amount of random access memory. The compression speed achieved is 1.33 Mpixels/second. Our algorithm yields about 15 percent better compression than the Rice algorithm.
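The LOCO-I algorithm mentioned above is built around a simple causal predictor. As an illustrative sketch (the JPEG-LS median edge detector, not the modified hardware variant described in the record), the predictor and the residuals it produces on a smooth region can be written as:

```python
def med_predict(a, b, c):
    """LOCO-I / JPEG-LS median edge detector (MED) predictor.
    a = left neighbor, b = above neighbor, c = above-left neighbor."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

# Predict one raster row from its causal neighbors; smooth data yields
# small residuals that an entropy coder (e.g. Golomb/Rice) codes cheaply.
row_above = [10, 12, 14, 16]
row_cur = [11, 13, 15, 17]
residuals = [row_cur[i] - med_predict(row_cur[i - 1], row_above[i], row_above[i - 1])
             for i in range(1, len(row_cur))]
```

The predictor switches between a vertical/horizontal edge guess and a planar guess using only already-decoded pixels, so the decoder can repeat it exactly.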
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on the integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve lossless image encryption and compression simultaneously. Making use of the properties of IWT and SPIHT, encryption and compression are combined. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), which adds encryption to the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, a nonlinear inverse operation, Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance security. The test results indicate that the proposed methods have high security and good lossless compression performance.
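The integer wavelet transform underlying such lossless schemes can be illustrated with its simplest case. A minimal sketch of a one-level integer Haar lifting step (not the record's actual SSPIHT codec) shows why the transform is exactly invertible:

```python
def iwt_haar_fwd(x):
    """One level of the lossless integer Haar transform (lifting form).
    Assumes an even-length sequence of integers."""
    s, d = [], []
    for i in range(0, len(x), 2):
        diff = x[i + 1] - x[i]        # predict step: detail coefficient
        s.append(x[i] + (diff >> 1))  # update step: integer approximation
        d.append(diff)
    return s, d

def iwt_haar_inv(s, d):
    x = []
    for avg, diff in zip(s, d):
        even = avg - (diff >> 1)      # undo the update step exactly
        x += [even, even + diff]      # undo the predict step exactly
    return x
```

Because each lifting step is undone by the identical integer expression, no rounding error accumulates and the round trip is bit-exact, which is what makes lossless compression (and lossless encryption on top of it) possible.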
Image compression with QM-AYA adaptive binary arithmetic coder
NASA Astrophysics Data System (ADS)
Cheng, Joe-Ming; Langdon, Glen G., Jr.
1993-01-01
The Q-coder, reported in the literature, is a renorm-driven adaptive binary arithmetic coder. A similar renorm-driven coder, the QM coder, uses the same approach with an initial attack to estimate the statistics more rapidly at the beginning, and with a different state table. The QM coder is the adaptive binary arithmetic coder employed in the JBIG and JPEG image compression algorithms. The QM-AYA arithmetic coder is similar to the QM coder but has a different state table that offers balanced improvements to the QM probability estimation for less skewed distributions. The QM-AYA performs better when the probability estimate is near 0.5 for each binary symbol. An approach for constructing effective index-change tables for Q-coder-type adaptation is discussed.
Rate and power efficient image compressed sensing and transmission
NASA Astrophysics Data System (ADS)
Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan
2016-01-01
This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.
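The first-stage allocation described above has a classical closed form at high rates: the Lagrangian/KKT solution gives each sub-band an equal share of the bit budget plus a correction proportional to the log of its variance relative to the geometric mean. A minimal sketch of that textbook solution (the record's constrained, relaxed version is more involved):

```python
import math

def allocate_bits(variances, total_bits):
    """High-rate optimal bit allocation across sub-bands:
    b_i = B/n + 0.5 * log2(sigma_i^2 / geometric_mean).
    Negative allocations would need an iterative water-filling fix-up,
    which this sketch omits."""
    n = len(variances)
    log_gm = sum(math.log2(v) for v in variances) / n
    return [total_bits / n + 0.5 * (math.log2(v) - log_gm) for v in variances]

bits = allocate_bits([16.0, 4.0, 1.0], 9)   # busier sub-bands get more bits
```

Each factor-of-4 increase in sub-band variance earns exactly one extra bit, and the allocations always sum to the budget.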
Stable and Robust Sampling Strategies for Compressive Imaging.
Krahmer, Felix; Ward, Rachel
2014-02-01
In many signal processing applications, one wishes to acquire images that are sparse in transform domains such as spatial finite differences or wavelets using frequency domain samples. For such applications, overwhelming empirical evidence suggests that superior image reconstruction can be obtained through variable density sampling strategies that concentrate on lower frequencies. The wavelet and Fourier transform domains are not incoherent because low-order wavelets and low-order frequencies are correlated, so compressive sensing theory does not immediately imply sampling strategies and reconstruction guarantees. In this paper, we turn to a more refined notion of coherence-the so-called local coherence-measuring for each sensing vector separately how correlated it is to the sparsity basis. For Fourier measurements and Haar wavelet sparsity, the local coherence can be controlled and bounded explicitly, so for matrices comprised of frequencies sampled from a suitable inverse square power-law density, we can prove the restricted isometry property with near-optimal embedding dimensions. Consequently, the variable-density sampling strategy we provide allows for image reconstructions that are stable to sparsity defects and robust to measurement noise. Our results cover both reconstruction by ℓ1-minimization and total variation minimization. The local coherence framework developed in this paper should be of independent interest, as it implies that for optimal sparse recovery results, it suffices to have bounded average coherence from sensing basis to sparsity basis-as opposed to bounded maximal coherence-as long as the sampling strategy is adapted accordingly.
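A variable-density mask of the kind analyzed above can be drawn from an inverse power-law density over centered frequency indices. The sketch below is illustrative only (the function name, parameters, and clamping at DC are our own choices, not the paper's):

```python
import numpy as np

def variable_density_mask(n, m, alpha=2.0, seed=0):
    """Choose m of n frequency rows with probability ~ 1/max(|k|,1)**alpha,
    where k is the centered frequency index (k = 0 at row n//2)."""
    rng = np.random.default_rng(seed)
    k = np.arange(n) - n // 2
    p = 1.0 / np.maximum(np.abs(k), 1) ** alpha
    p /= p.sum()
    rows = rng.choice(n, size=m, replace=False, p=p)
    mask = np.zeros(n, dtype=bool)
    mask[rows] = True
    return mask

mask = variable_density_mask(256, 64)   # alpha = 2: inverse-square law
```

With `alpha=2` the sampled rows concentrate heavily around DC, matching the inverse square power-law density for which the paper proves near-optimal recovery guarantees.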
Television image compression and small animal remote monitoring
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Jackson, Robert W.
1990-01-01
It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, the discriminations are significantly influenced by whether the TV camera is stable or moving and whether the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4 MHz. Since this video rate was judged acceptable by 27 of the 34 subjects (79 percent) for monitoring the general health and status of small animals within their illuminated (lights-on) cages, regardless of whether the camera was stable or moved, it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.
Image compression with directional lifting on separated sections
NASA Astrophysics Data System (ADS)
Zhu, Jieying; Wang, Nengchao
2007-11-01
A novel image compression scheme is presented in which directional sections are separated and transformed differently from the rest of the image. The discrete directions of anisotropic pixels are calculated and then grouped into compact directional sections. One-dimensional (1-D) adaptive directional lifting is applied continuously along the orientations of the directional sections, rather than applying the 1-D wavelet transform alternately in two dimensions over the whole image. For the remaining sections, 2-D adaptive lifting filters are applied according to the pixels' positions. The single embedded coding stream can be truncated exactly for any bit rate. Experiments have shown that large coefficients along directional sections are significantly reduced by the proposed transform, which makes energy more compact than the traditional wavelet transform. Though rate-distortion (R-D) optimization is not exploited, the PSNR is still comparable to that of JPEG-2000 with 9/7 filters at high bit rates. At low bit rates, the visual quality is better than that of JPEG-2000, since along directional sections both blurring and ringing artifacts are avoided and edges are well preserved.
Adaptive wavelet transform algorithm for image compression applications
NASA Astrophysics Data System (ADS)
Pogrebnyak, Oleksiy B.; Manrique Ramirez, Pablo
2003-11-01
A new algorithm of locally adaptive wavelet transform is presented. The algorithm implements the integer-to-integer lifting scheme and adapts the wavelet function at the prediction stage to the local image data activity. It is based on a generalized framework for the lifting scheme that makes it easy to obtain different wavelet coefficients in the case of (Ñ, N) lifting. Hard switching between the (2, 4) and (4, 4) lifting filter outputs is performed according to an estimate of the local data activity: when the activity is high, i.e., in the vicinity of edges, the (4, 4) lifting is performed; otherwise, in plain areas, the (2, 4) decomposition coefficients are calculated. The calculations are simple enough to permit implementation on fixed-point DSP processors. The proposed adaptive transform provides perfect reconstruction of the processed data and good energy compaction. The designed algorithm was tested on different images and can be used for lossless image/signal compression.
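The hard-switching idea can be sketched in a few lines. The decision rule below follows the abstract (longer predictor near edges, shorter one in plain areas), but the exact filter coefficients, threshold, update step, and boundary clamping are illustrative assumptions; perfect reconstruction holds because the switch decision uses even samples only, so the inverse transform repeats it exactly:

```python
def _predict(even, i, thresh):
    """Predict an odd sample from boundary-clamped even neighbors.
    The switch decision uses even samples only, so encoder and decoder agree."""
    e = lambda j: even[min(max(j, 0), len(even) - 1)]
    if abs(e(i + 1) - e(i)) > thresh:
        # high local activity (edge): longer 4-tap predictor
        return (-e(i - 1) + 9 * e(i) + 9 * e(i + 1) - e(i + 2) + 8) >> 4
    # plain area: short 2-tap (average) predictor
    return (e(i) + e(i + 1)) >> 1

def fwd(x, thresh=8):
    """One adaptive integer lifting level; assumes even-length integer input."""
    even, odd = list(x[0::2]), list(x[1::2])
    d = [odd[i] - _predict(even, i, thresh) for i in range(len(odd))]
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(even))]
    return s, d

def inv(s, d, thresh=8):
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    odd = [d[i] + _predict(even, i, thresh) for i in range(len(d))]
    return [v for pair in zip(even, odd) for v in pair]
```

Every integer lifting step is undone by the identical expression, so the round trip is bit-exact regardless of which predictor fired at each position.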
Intelligent fuzzy approach for fast fractal image compression
NASA Astrophysics Data System (ADS)
Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila
2014-12-01
Fractal image compression (FIC) is recognized as an NP-hard problem and suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, ranges and domains are arranged based on their edge property. In the second, an imperialist competitive algorithm (ICA) is used according to the classified blocks. To maintain the quality of the retrieved image and accelerate the algorithm, the solutions are divided into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The results exhibit better performance than genetic algorithm (GA)-based and full-search algorithms in terms of decreasing the number of MSE computations: the proposed algorithm ran 463 times faster than the full-search algorithm, while the retrieved image quality did not change considerably.
Integer cosine transform chip design for image compression
NASA Astrophysics Data System (ADS)
Ruiz, Gustavo A.; Michell, Juan A.; Buron, Angel M.; Solana, Jose M.; Manzano, Miguel A.; Diaz, J.
2003-04-01
The Discrete Cosine Transform (DCT) is the most widely used transform for image compression. The Integer Cosine Transform denoted ICT (10, 9, 6, 2, 3, 1) has been shown to be a promising alternative to the DCT due to its implementation simplicity, similar performance, and compatibility with the DCT. This paper describes the design and implementation of an 8×8 2-D ICT processor for image compression that meets the numerical characteristics of IEEE Std. 1180-1990. The processor uses a low-latency data flow that minimizes internal memory, and a parallel pipelined architecture based on a numerical-strength-reduction ICT (10, 9, 6, 2, 3, 1) algorithm, in order to attain high throughput and continuous data flow. A prototype of the 8×8 ICT processor has been implemented using a standard-cell design methodology and a 0.35-μm CMOS CSD 3M/2P 3.3 V process on a 10 mm² die. Pipeline circuit techniques have been used to attain the maximum frequency of operation allowed by the technology, yielding a critical path of 1.8 ns, which should be increased by 20% to allow for line delays, placing the estimated operating frequency at 500 MHz. The circuit includes 12446 cells, 6757 of which are flip-flops. Two clock signals have been distributed, an external one (fs) and an internal one (fs/2). The high number of flip-flops has forced the use of a strategy to minimize clock skew, combining large buffers on the periphery with wide metal lines (clock trunks) to distribute the signals.
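The ICT (10, 9, 6, 2, 3, 1) basis can be written down directly; its rows are mutually orthogonal because the parameters satisfy ab = ac + bd + cd. The row layout below follows the usual order-8 ICT construction (the processor's strength-reduced data path is not modeled here):

```python
import numpy as np

# ICT(10, 9, 6, 2, 3, 1) parameters; rows are orthogonal iff a*b == a*c + b*d + c*d.
a, b, c, d = 10, 9, 6, 2
T = np.array([
    [1,  1,  1,  1,  1,  1,  1,  1],
    [a,  b,  c,  d, -d, -c, -b, -a],
    [3,  1, -1, -3, -3, -1,  1,  3],
    [b, -d, -a, -c,  c,  a,  d, -b],
    [1, -1, -1,  1,  1, -1, -1,  1],
    [c, -a,  d,  b, -b, -d,  a, -c],
    [1, -3,  3, -1, -1,  3, -3,  1],
    [d, -c,  b, -a,  a, -b,  c, -d],
])
G = T @ T.T   # diagonal matrix => rows form a mutually orthogonal integer basis
```

Because all entries are small integers, the forward transform needs only shifts and adds; the per-row scaling that makes the basis orthonormal can be folded into the quantizer, which is what makes the ICT hardware-friendly.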
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Technical Reports Server (NTRS)
Pence, William D.; White, R. L.; Seaman, R.
2010-01-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information
NASA Astrophysics Data System (ADS)
Pence, W. D.; White, R. L.; Seaman, R.
2010-09-01
We describe a compression method for floating-point astronomical images that gives compression ratios of 6–10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process can greatly improve the precision of measurements in the images. This is especially important if the analysis algorithm relies on the mode or the median, which would be similarly quantized if the pixel values are not dithered. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
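The subtractive-dithering step described above can be sketched as follows; the noise parameters and step size are illustrative, and the Rice entropy-coding stage is omitted. Adding uniform noise before rounding and subtracting it afterwards keeps the quantization error bounded by half a step while making it uniform and independent of the signal:

```python
import numpy as np

rng = np.random.default_rng(42)
pixels = rng.normal(1000.0, 2.0, size=100_000)   # synthetic floating-point image
q = 4.0                                          # coarse quantization step

# Plain scaled-integer quantization (what would feed the Rice coder).
plain = np.round(pixels / q) * q

# Subtractive dithering: add uniform noise before rounding, subtract it after.
u = rng.uniform(-0.5, 0.5, size=pixels.size)
dithered = (np.round(pixels / q + u) - u) * q
```

The dithered error behaves like uniform noise of standard deviation q/sqrt(12) regardless of the underlying pixel values, which is why order statistics such as the median are no longer snapped to the quantization grid.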
Radon transform imaging: low-cost video compressive imaging at extreme resolutions
NASA Astrophysics Data System (ADS)
Sankaranarayanan, Aswin C.; Wang, Jian; Gupta, Mohit
2016-05-01
Most compressive imaging architectures rely on programmable light modulators to obtain coded linear measurements of a signal. As a consequence, the properties of the light modulator place fundamental limits on the cost, performance, practicality, and capabilities of the compressive camera. For example, the spatial resolution of the single-pixel camera is limited to that of its light modulator, which is seldom greater than 4 megapixels. In this paper, we describe a novel approach to compressive imaging that avoids the use of a spatial light modulator. In its place, we use novel cylindrical optics and a rotation gantry to directly sample the Radon transform of the image focused on the sensor plane. We show that the reconstruction problem is identical to sparse tomographic recovery, so we can leverage the vast literature in compressive magnetic resonance imaging (MRI) to good effect. The proposed design has many important advantages over existing compressive cameras. First, we can achieve a resolution of N × N pixels using a sensor with N photodetectors; hence, with commercially available SWIR line-detectors with 10k pixels, we can potentially achieve spatial resolutions of 100 megapixels, a capability that is unprecedented. Second, our design scales more gracefully across wavebands of light, since we only require sensors and optics that are optimized for the wavelengths of interest; in contrast, spatial light modulators like DMDs require expensive coatings to be effective in non-visible wavebands. Third, we can exploit properties of line-detectors, including electronic shutters and pixels with large aspect ratios, to optimize light throughput. On the flip side, a drawback of our approach is the need for moving components in the imaging architecture.
Interlabial masses in little girls: review and imaging recommendations
Nussbaum, A.R.; Lebowitz, R.L.
1983-07-01
When an interlabial mass is seen on physical examination in a little girl, there is often confusion about its etiology, its implications, and what should be done next. Five common interlabial masses, which superficially are strikingly similar, include a prolapsed ectopic ureterocele, a prolapsed urethra, a paraurethral cyst, hydro(metro)colpos, and rhabdomyosarcoma of the vagina (botryoid sarcoma). A prolapsed ectopic ureterocele occurs in white girls as a smooth mass which protrudes from the urethral meatus so that urine exits circumferentially. A prolapsed urethra occurs in black girls and resembles a donut with the urethral meatus in the center. A paraurethral cyst is smaller and displaces the meatus, so that the urinary stream is eccentric. Hydro(metro)colpos from hymenal imperforation presents as a smooth mass that fills the vaginal introitus, as opposed to the introital grapelike cluster of masses of botryoid sarcoma. Recommendations for efficient imaging are presented.
Effects of Reduced Compression in Digital Breast Tomosynthesis on Pain, Anxiety, and Image Quality
Abdullah Suhaimi, Siti Aishah; Mohamed, Afifah; Ahmad, Mahadir; Chelliah, Kanaga Kumari
2015-01-01
Background Most women are reluctant to undergo breast cancer screenings due to the pain and anxiety they experience. Sectional three-dimensional (3-D) breast tomosynthesis was introduced to improve cancer detection, but breast compression is still used for the acquisition of images. This study was conducted to investigate the effects of reduced compression force on pain, anxiety and image quality in digital breast tomosynthesis (DBT). Methods A total of 130 women underwent screening mammography using convenience sampling with standard and reduced compression force at the breast clinic. A validated questionnaire of 20 items on the state anxiety level and a 4-point verbal rating scale on the pain level were conducted after the mammography. Craniocaudal (CC) and mediolateral oblique (MLO) projections were performed with standard compression, but only the CC view was performed with reduced compression. Two independent radiologists evaluated the images using image criteria scores (ICS) and the Breast Imaging-Reporting and Data System (BI-RADS). Results Standard compression exhibited significantly increased scores for pain and anxiety levels compared with reduced compression (P < 0.001). Both radiologists scored the standard and reduced compression images as equal, with scores of 87.5% and 92.5% for ICS and BI-RADS scoring, respectively. Conclusions Reduced compression force in DBT reduces anxiety and pain levels without compromising image quality. PMID:28223884
Effect of Breast Compression on Lesion Characteristic Visibility with Diffraction-Enhanced Imaging
Faulconer, L.; Parham, C; Connor, D; Kuzmiak, C; Koomen, M; Lee, Y; Cho, K; Rafoth, J; Livasy, C; et al.
2010-01-01
Conventional mammography cannot distinguish between transmitted, scattered, or refracted x-rays, thus requiring breast compression to decrease tissue depth and separate overlapping structures. Diffraction-enhanced imaging (DEI) uses monochromatic x-rays and perfect crystal diffraction to generate images with contrast based on absorption, refraction, or scatter. Because DEI possesses inherently superior contrast mechanisms, the current study assesses the effect of breast compression on lesion characteristic visibility with DEI of breast specimens. Eleven breast tissue specimens, containing a total of 21 regions of interest, were imaged by DEI uncompressed, half-compressed, or fully compressed. A fully compressed DEI image was displayed on a soft-copy mammography review workstation, next to a DEI image acquired with reduced compression, maintaining all other imaging parameters. Five breast imaging radiologists scored image quality metrics considering known lesion pathology, ranking their findings on a 7-point Likert scale. When fully compressed DEI images were compared to those acquired with approximately a 25% difference in tissue thickness, there was no difference in scoring of lesion feature visibility. For fully compressed DEI images compared to those acquired with approximately a 50% difference in tissue thickness, across the five readers, there was a difference in scoring of lesion feature visibility. The scores for this difference in tissue thickness were significantly different at one rocking curve position and for benign lesion characterizations. These results should be verified in a larger study because when evaluating the radiologist scores overall, we detected a significant difference between the scores reported by the five radiologists. Reducing the need for breast compression might increase patient comfort during mammography. Our results suggest that DEI may allow a reduction in compression without substantially compromising clinical image quality.
NASA Astrophysics Data System (ADS)
Schmanske, Brian M.; Loew, Murray H.
2003-05-01
A technique for assessing the impact of lossy wavelet-based image compression on signal detection tasks is presented. A medical image's value is based on its ability to support clinical decisions such as detecting and diagnosing abnormalities. Image quality of compressed images is, however, often stated in terms of mathematical metrics such as mean square error. The presented technique provides a more suitable measure of image degradation by building on the channelized Hotelling observer model, which has been shown to predict human performance of signal detection tasks in noise-limited images. The technique first decomposes an image into its constituent wavelet subband coefficient bit-planes. Channel responses for the individual subband bit-planes are computed, combined, and processed with a Hotelling observer model to provide a measure of signal detectability versus compression ratio. This allows a user to determine how much compression can be tolerated before signal detectability drops below a certain threshold.
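The detectability measure at the core of the channelized Hotelling observer is the quadratic form d'^2 = dv' K^{-1} dv computed on channel outputs. A minimal sketch with made-up channels and noise (random orthonormal channels and a synthetic signal, not the paper's wavelet bit-plane channels):

```python
import numpy as np

rng = np.random.default_rng(0)
n_px, n_ch, n_img = 64, 6, 5000

# Hypothetical channel matrix: random orthonormal columns standing in for
# the band-pass channels a real CHO would use.
U, _ = np.linalg.qr(rng.normal(size=(n_px, n_ch)))
signal = np.zeros(n_px)
signal[28:36] = 1.0                       # known signal profile

scale = np.linspace(1.0, 2.0, n_px)       # pixel-dependent noise level
absent = rng.normal(size=(n_img, n_px)) * scale
present = rng.normal(size=(n_img, n_px)) * scale + signal

va, vp = absent @ U, present @ U          # channelized image samples
dv = vp.mean(axis=0) - va.mean(axis=0)    # mean channel-output difference
K = 0.5 * (np.cov(va.T) + np.cov(vp.T))   # pooled channel covariance
d2 = float(dv @ np.linalg.solve(K, dv))   # Hotelling detectability d'^2
```

Channelizing reduces the covariance to a small, well-conditioned matrix, which is what makes the Hotelling template estimable from a practical number of sample images.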
Research on application for integer wavelet transform for lossless compression of medical image
NASA Astrophysics Data System (ADS)
Zhou, Zude; Li, Quan; Long, Quan
2003-09-01
This paper proposes an approach based on the lifting scheme to construct an integer wavelet transform whose purpose is to realize the lossless compression of images. Research on the application to medical images, software simulation of the corresponding algorithm, and experimental results are then presented. Experiments show that this method can improve the compression ratio and resolution.
Chen, Jing; Wang, Yongtian; Wu, Hanxiao
2012-10-29
In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded-aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and facilitate storage of the projection matrix. Random Gaussian, Toeplitz, and binary phase coded masks are utilized to obtain the compressive sensing images. Corresponding motion-target detection and tracking algorithms that work directly on the compressively sampled images are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image for foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves real-time speed that is up to 10 times faster than the l1 tracker without any optimization.
The wavelet/scalar quantization compression standard for digital fingerprint images
Bradley, J.N.; Brislawn, C.M.
1994-04-01
A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.
NASA Astrophysics Data System (ADS)
Musatenko, Yurij S.; Kurashov, Vitalij N.
1998-10-01
The paper presents an improved version of our new method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). It is known that the Karhunen-Loeve (KL) transform is the optimal representation for such a purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image, and this contribution decreases most quickly among all possible bases. Thus, we lossy-compress every KL basis function by Embedded Zerotree Wavelet (EZW) coding, with essentially different loss depending on the function's contribution to the images. The paper presents a new fast, low-memory algorithm of KL basis construction for compression of correlated image ensembles that enables our OICKL system to work on common hardware. We also present a procedure for determining the optimal losses of KL basis functions caused by compression. It uses a modified EZW coder which produces the whole PSNR (bitrate) curve during a single compression pass.
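For an ensemble of M images with M much smaller than the pixel count, a KL basis can be obtained cheaply from the M×M Gram matrix rather than the full pixel covariance. The sketch below shows this standard small-sample construction on a toy correlated ensemble (the paper's fast low-memory algorithm is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy ensemble: 40 strongly correlated "images" of 256 pixels each.
base = rng.normal(size=256)
images = np.array([base * (1 + 0.1 * k) + 0.05 * rng.normal(size=256)
                   for k in range(40)])

X = images - images.mean(axis=0)
G = X @ X.T                                # 40x40 Gram matrix (cheap)
w, V = np.linalg.eigh(G)
order = np.argsort(w)[::-1]
basis = X.T @ V[:, order]                  # KL basis functions as columns
basis /= np.linalg.norm(basis, axis=0)
energy = w[order] / w.sum()                # fraction of ensemble energy per function
```

The rapid decay of `energy` is exactly the property OICKL exploits: the leading basis functions carry almost all of the ensemble and deserve near-lossless coding, while the tail can be compressed aggressively.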
Rapid MR spectroscopic imaging of lactate using compressed sensing
NASA Astrophysics Data System (ADS)
Vidya Shankar, Rohini; Agarwal, Shubhangi; Geethanath, Sairam; Kodibagkar, Vikram D.
2015-03-01
Imaging lactate metabolism in vivo may improve cancer targeting and therapeutics due to its key role in the development, maintenance, and metastasis of cancer. The long acquisition times associated with magnetic resonance spectroscopic imaging (MRSI), which is a useful technique for assessing metabolic concentrations, are a deterrent to its routine clinical use. The objective of this study was to combine spectral editing and prospective compressed sensing (CS) acquisitions to enable precise and high-speed imaging of the lactate resonance. A MRSI pulse sequence with two key modifications was developed: (1) spectral editing components for selective detection of lactate, and (2) a variable density sampling mask for pseudo-random under-sampling of the k-space 'on the fly'. The developed sequence was tested on phantoms and in vivo in rodent models of cancer. Datasets corresponding to the 1X (fully-sampled), 2X, 3X, 4X, 5X, and 10X accelerations were acquired. The under-sampled datasets were reconstructed using a custom-built algorithm in Matlab, and the fidelity of the CS reconstructions was assessed in terms of the peak amplitudes, SNR, and total acquisition time. The accelerated reconstructions demonstrate a reduction in the scan time by up to 90% in vitro and up to 80% in vivo, with negligible loss of information when compared with the fully-sampled dataset. The proposed unique combination of spectral editing and CS facilitated rapid mapping of the spatial distribution of lactate at high temporal resolution. This technique could potentially be translated to the clinic for the routine assessment of lactate changes in solid tumors.
Method for low-light-level image compression based on wavelet transform
NASA Astrophysics Data System (ADS)
Sun, Shaoyuan; Zhang, Baomin; Wang, Liping; Bai, Lianfa
2001-10-01
Low-light-level (LLL) image communication has received increasing attention in the night-vision field as image communication has grown in importance. LLL image compression is the key to LLL image wireless transmission. The LLL image, unlike the common visible-light image, has its own special characteristics. For still-image compression, we propose in this paper a wavelet-based compression algorithm suited to LLL images. Because the information in an LLL image is significant, near-lossless compression is required. The LLL image is compressed with an improved EZW (Embedded Zerotree Wavelet) algorithm. We encode the lowest-frequency subband data using DPCM (Differential Pulse Code Modulation), so all information in the lowest-frequency subband is kept. Considering the characteristics of the HVS (Human Visual System) and of LLL images, we first detect edge contours in the high-frequency subband images using a template, and then encode the high-frequency subband data with the EZW algorithm. Two guiding matrices are used to avoid redundant scanning and repeated encoding of significant wavelet coefficients. Experimental results show that the decoded image quality is good and that the encoding time is shorter than that of the original EZW algorithm.
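The lossless treatment of the lowest-frequency subband can be sketched in Python as plain raster-order DPCM, where each sample is coded as a difference from its predecessor and is exactly recoverable (the template-based edge detection and guiding matrices of the high-frequency path are not reproduced here):

```python
def dpcm_encode(band):
    """Raster-order DPCM of the lowest-frequency subband: each sample
    is replaced by its difference from the previously coded sample, so
    the subband is reconstructed exactly by the decoder."""
    diffs, prev = [], 0
    for row in band:
        for x in row:
            diffs.append(x - prev)
            prev = x
    return diffs

def dpcm_decode(diffs, width):
    """Invert dpcm_encode, rebuilding rows of the given width."""
    out, prev, row = [], 0, []
    for d in diffs:
        prev += d
        row.append(prev)
        if len(row) == width:
            out.append(row)
            row = []
    return out
```

Because the lowest-frequency subband is smooth, the differences are small and compress well under a subsequent entropy coder, while reconstruction remains bit-exact.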
High dynamic range image compression by optimizing tone mapped image quality index.
Ma, Kede; Yeganeh, Hojatollah; Zeng, Kai; Wang, Zhou
2015-10-01
Tone mapping operators (TMOs) aim to compress high dynamic range (HDR) images to low dynamic range (LDR) ones so as to visualize HDR images on standard displays. Most existing TMOs were demonstrated on specific examples without being thoroughly evaluated using well-designed and subject-validated image quality assessment models. A recently proposed tone mapped image quality index (TMQI) made one of the first attempts at objective quality assessment of tone mapped images. Here, we propose a substantially different approach to TMO design. Instead of using any predefined systematic computational structure for tone mapping (such as analytic image transformations and/or explicit contrast/edge enhancement), we directly navigate in the space of all images, searching for the image that optimizes an improved TMQI. In particular, we first improve the two building blocks in TMQI, the structural fidelity and statistical naturalness components, leading to a TMQI-II metric. We then propose an iterative algorithm that alternately improves the structural fidelity and statistical naturalness of the resulting image. Numerical and subjective experiments demonstrate that the proposed algorithm consistently produces better quality tone mapped images even when the initial images of the iteration are created by the most competitive TMOs. Meanwhile, these results also validate the superiority of TMQI-II over TMQI.
Spectral compression algorithms for the analysis of very large multivariate images
Keenan, Michael R.
2007-10-16
A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
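The factored representation can be illustrated in Python with a PCA-style truncated SVD: the multivariate image is stored as per-pixel scores times per-channel loadings for only the most significant factors (a sketch only; the block algorithm and the companion spatial compression described in the abstract are omitted):

```python
import numpy as np

def spectral_compress(cube, n_factors):
    """Factor a (pixels x channels) multivariate image into scores and
    loadings via PCA/SVD, keeping only the most significant factors."""
    mean = cube.mean(axis=0)
    U, s, Vt = np.linalg.svd(cube - mean, full_matrices=False)
    scores = U[:, :n_factors] * s[:n_factors]   # spatial part
    loadings = Vt[:n_factors]                   # spectral part
    return mean, scores, loadings

def spectral_decompress(mean, scores, loadings):
    """Rebuild the (approximate) multivariate image from the factors."""
    return scores @ loadings + mean
```

Subsequent image analysis can operate directly on the small `scores` and `loadings` arrays rather than the full data cube, which is where the computational savings come from.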
NASA Technical Reports Server (NTRS)
Novik, Dmitry A.; Tilton, James C.
1993-01-01
The compression, or efficient coding, of single band or multispectral still images is becoming an increasingly important topic. While lossy compression approaches can produce reconstructions that are visually close to the original, many scientific and engineering applications require exact (lossless) reconstructions. However, the most popular and efficient lossless compression techniques do not fully exploit the two-dimensional structural links existing in the image data. We describe here a general approach to lossless data compression that effectively exploits two-dimensional structural links of any length. After describing in detail two main variants on this scheme, we discuss experimental results.
Low-Rank Decomposition Based Restoration of Compressed Images via Adaptive Noise Estimation.
Zhang, Xinfeng; Lin, Weisi; Xiong, Ruiqin; Liu, Xianming; Ma, Siwei; Gao, Wen
2016-07-07
Images coded at low bit rates in real-world applications usually suffer from significant compression noise, which severely degrades visual quality. Traditional denoising methods, which usually assume that noise is independent and identically distributed, are not suited to content-dependent compression noise. In this paper, we propose a unified framework for content-adaptive estimation and reduction of compression noise via low-rank decomposition of similar image patches. We first formulate the framework of compression-noise reduction based on low-rank decomposition. Compression noise is removed by soft-thresholding the singular values in the singular value decomposition (SVD) of every group of similar image patches. For each group of similar patches, the thresholds are adaptively determined according to the compression noise level and the singular values. We analyze the relationship between image statistical characteristics in the spatial and transform domains, and estimate the compression noise level for every group of similar patches from the statistics in both domains jointly with the quantization steps. Finally, a quantization constraint is applied to the estimated images to avoid over-smoothing. Extensive experimental results show that the proposed method not only clearly improves the quality of compressed images in post-processing, but is also helpful as a pre-processing step for computer vision tasks.
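The core low-rank step, soft-thresholding the singular values of a group of similar patches, can be sketched in Python as follows; the adaptive, noise-level-dependent threshold selection described in the abstract is replaced by a caller-supplied constant:

```python
import numpy as np

def denoise_patch_group(G, tau):
    """Soft-threshold the singular values of a group of similar patches
    (one vectorized patch per row of G). Small singular values, which
    mostly carry compression noise, are shrunk to zero; the remaining
    low-rank structure is the denoised estimate."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt
```

Applied per group of similar patches and aggregated back into the image, this is the standard low-rank patch-denoising recipe; the paper's contribution lies in estimating the noise level (and hence `tau`) from spatial- and transform-domain statistics together with the quantization steps.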
Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images
NASA Astrophysics Data System (ADS)
Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.
2014-03-01
Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
Prediction of coefficients for lossless compression of multispectral images
NASA Astrophysics Data System (ADS)
Ruedin, Ana M. C.; Acevedo, Daniel G.
2005-08-01
We present a lossless compressor for multispectral Landsat images that exploits interband and intraband correlations. The compressor operates on blocks of 256 x 256 pixels and performs two kinds of predictions. For bands 1, 2, 3, 4, 5, 6.2, and 7, the compressor performs an integer-to-integer wavelet transform, applied to each block separately. The wavelet coefficients that have not yet been encoded are predicted by a linear combination of already coded coefficients that belong to the same orientation and spatial location in the same band, and of coefficients at the same location in other spectral bands. A fast block classification is performed in order to use the best weights for each landscape. The prediction errors, or differences, are finally coded with an entropy-based coder. For band 6.1, we do not use wavelet transforms; instead, a median edge detector is applied to predict each pixel from the neighbouring pixels and the equalized pixel from band 6.2. This technique better exploits the great similarity between the histograms of bands 6.1 and 6.2. The prediction differences are finally coded with a context-based entropy coder. The two kinds of prediction reduce both spatial and spectral correlations, increasing the compression rates. Our compressor has been shown to be superior to the lossless compressors WinZip, LOCO-I, PNG, and JPEG2000.
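The band-6.1 predictor belongs to the median-edge-detector family used in LOCO-I/JPEG-LS. A Python sketch of the basic MED rule follows (the equalized band-6.2 pixel that the paper adds as an extra spectral predictor is omitted):

```python
def med_predict(a, b, c):
    """Median edge detector prediction of a pixel from its causal
    neighbours: a = left, b = above, c = upper-left. At a horizontal or
    vertical edge it picks the neighbour on the flat side; otherwise it
    uses the planar estimate a + b - c."""
    if c >= max(a, b):
        return min(a, b)   # edge: c is a local maximum
    if c <= min(a, b):
        return max(a, b)   # edge: c is a local minimum
    return a + b - c       # smooth region: planar prediction
```

The predictor's residuals (actual pixel minus prediction) are what the context-based entropy coder then encodes.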
Low-complexity wavelet filter design for image compression
NASA Technical Reports Server (NTRS)
Majani, E.
1994-01-01
Image compression algorithms based on the wavelet transform are an increasingly attractive and flexible alternative to other algorithms based on block orthogonal transforms. While the design of orthogonal wavelet filters has been studied in significant depth, the design of nonorthogonal wavelet filters, such as linear-phase (LP) filters, has not yet reached that point. Of particular interest are wavelet transforms with low complexity at the encoder. In this article, we present known and new parameterizations of the two families of LP perfect reconstruction (PR) filters. The first family is that of all PR LP filters with finite impulse response (FIR), with equal complexity at the encoder and decoder. The second family is one of LP PR filters, which are FIR at the encoder and infinite impulse response (IIR) at the decoder, i.e., with controllable encoder complexity. These parameterizations are used to optimize the subband/wavelet transform coding gain, as defined for nonorthogonal wavelet transforms. Optimal LP wavelet filters are given for low levels of encoder complexity, as well as their corresponding integer approximations, to allow for applications limited to using integer arithmetic. These optimal LP filters yield larger coding gains than orthogonal filters with an equivalent complexity. The parameterizations described in this article can be used for the optimization of any other appropriate objective function.
The effects of video compression on acceptability of images for monitoring life sciences experiments
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1992-01-01
Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine whether video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Experts Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staff members viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the compression ratio and compression parameters must be managed effectively.
Pornographic image recognition and filtering using incremental learning in compressed domain
NASA Astrophysics Data System (ADS)
Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao
2015-11-01
With the rapid development and popularity of networks, their openness, anonymity, and interactivity have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, efficiently filtering pornographic images is one of the challenging issues in information security. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is used to train on and recognize the visual words in order to build the initial classification model of pornographic images. Experimental results show that the proposed recognition method using incremental learning achieves a higher recognition rate and requires less recognition time in the compressed domain.
NASA Astrophysics Data System (ADS)
Yao, Juncai; Liu, Guizhong
2017-03-01
In order to achieve a higher image compression ratio and improve the visual perception of the decompressed image, a novel color image compression scheme based on the contrast sensitivity characteristics of the human visual system (HVS) is proposed. In the proposed scheme, the image is first converted into the YCrCb color space and divided into sub-blocks. The discrete cosine transform is then carried out for each sub-block, and three quantization matrices, built by combining the contrast sensitivity characteristics of the HVS, are used to quantize the frequency spectrum coefficients of the images. The Huffman algorithm is used to encode the quantized data. The inverse process involves decompression and matching to reconstruct the decompressed color image. Simulations are carried out for two color images. The results show that, at comparable compression ratios, the average structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) could be increased by 2.78% and 5.48%, respectively, compared with Joint Photographic Experts Group (JPEG) compression. The results indicate that the proposed compression algorithm is feasible and effective, achieving a higher compression ratio while maintaining encoding and image quality, and can fully meet the needs of storing and transmitting color images in daily life.
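The transform-and-quantize step can be sketched in Python: a 2-D DCT per sub-block followed by element-wise quantization against a matrix. The naive O(N^4) DCT and the uniform test matrix below are illustrative only; the paper derives its three quantization matrices from HVS contrast sensitivity:

```python
import math

def dct2_8x8(block):
    """Naive 8x8 2-D DCT-II (direct evaluation, for illustration only;
    real coders use fast factorized DCTs)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, qmatrix):
    """Divide each spectral coefficient by its quantization-matrix
    entry and round; the HVS-derived matrices of the paper are replaced
    here by whatever qmatrix the caller supplies."""
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmatrix)]
```

A perceptually tuned `qmatrix` puts large divisors where the HVS is least sensitive, zeroing out many coefficients before Huffman coding.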
NASA Astrophysics Data System (ADS)
Lin, Cheng-Shian; Tsay, Jyh-Jong
2016-05-01
Passive forgery detection aims to detect traces of image tampering without the need for prior information. With the increasing demand for image content protection, passive detection methods able to identify image tampering areas are increasingly needed. However, most current passive approaches either work only for image-level JPEG compression detection and cannot localize region-level forgery, or suffer from high false-detection rates in localizing altered regions. This paper proposes an effective approach based on discrete cosine transform coefficient analysis for the detection and localization of altered regions of JPEG compressed images. This approach can also work with altered JPEG images resaved in JPEG compressed format with different quality factors. Experiments with various tampering methods, such as copy-and-paste, image completion, and composite tampering, show that the proposed approach is able to effectively detect and localize altered areas and is not sensitive to image content such as edges and textures.
A Lossless hybrid wavelet-fractal compression for welding radiographic images.
Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud
2016-01-01
In this work a lossless wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using a wavelet transform and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that the proposed scheme achieves an infinite peak signal-to-noise ratio (PSNR) at a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results for several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
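The residual construction generalizes to any lossy codec: code the image lossily, then store the integer difference between the original and its reconstruction so the decoder can recover the image exactly. A Python sketch (a crude truncation quantizer stands in for the wavelet-fractal coder, and the Huffman stage on the residual is omitted):

```python
def lossless_hybrid_encode(image, lossy_codec):
    """Lossless-by-residual scheme: run a lossy codec, then keep the
    integer residual between the original and its reconstruction.
    `lossy_codec` maps an image (list of rows) to its reconstruction."""
    approx = lossy_codec(image)
    residual = [[o - a for o, a in zip(orow, arow)]
                for orow, arow in zip(image, approx)]
    return approx, residual

def lossless_hybrid_decode(approx, residual):
    """Exact reconstruction: lossy approximation plus residual."""
    return [[a + r for a, r in zip(arow, rrow)]
            for arow, rrow in zip(approx, residual)]
```

The better the lossy stage, the smaller and more compressible the residual, which is why pairing it with an entropy coder such as Huffman yields lossless compression at a good ratio.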
Revisiting the Recommended Geometry for the Diametrally Compressed Ceramic C-Ring Specimen
Jadaan, Osama M.; Wereszczak, Andrew A.
2009-04-01
A study conducted several years ago found that a stated allowable width/thickness (b/t) ratio in ASTM C1323 (Standard Test Method for Ultimate Strength of Advanced Ceramics with Diametrally Compressed C-Ring Specimens at Ambient Temperature) could ultimately cause the prediction of a non-conservative probability of survival when the measured C-ring strength was scaled to a different size. Because of that problem, this study sought to reevaluate the stress state and geometry of the C-ring specimen and to suggest changes to ASTM C1323 that would resolve the issue. Elasticity, mechanics-of-materials, and finite element solutions were revisited for the C-ring geometry. To avoid introducing more than 2% error, it was determined that the C-ring width/thickness (b/t) ratio should range between 1 and 3 and that its inner-radius/outer-radius (ri/ro) ratio should range between 0.50 and 0.95. ASTM C1323 presently allows b/t to be as large as 4, so that ratio should be reduced to 3.
2D-pattern matching image and video compression: theory, algorithms, and experiments.
Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth
2002-01-01
In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
Optimization of block size for DCT-based medical image compression.
Singh, S; Kumar, V; Verma, H K
2007-01-01
In view of the increasing importance of medical imaging in healthcare and the large amount of image data to be transmitted/stored, the need for development of an efficient medical image compression method, which would preserve the critical diagnostic information at higher compression, is growing. Discrete cosine transform (DCT) is a popular transform used in many practical image/video compression systems because of its high compression performance and good computational efficiency. As the computational burden of full frame DCT would be heavy, the image is usually divided into non-overlapping sub-images, or blocks, for processing. This paper aims to identify the optimum size of the block, in reference to compression of CT, ultrasound and X-ray images. Three conflicting requirements are considered, namely processing time, compression ratio and the quality of the reconstructed image. The quantitative comparison of various block sizes has been carried out on the basis of benefit-to-cost ratio (BCR) and reconstruction quality score (RQS). Experimental results are presented that verify the optimality of the 16 x 16 block size.
Thompson, J F; Winterborn, R J; Bays, S; White, H; Kinsella, D C; Watkinson, A F
2011-10-01
Paget Schroetter syndrome, or effort thrombosis of the axillosubclavian venous system, is distinct from other forms of upper limb deep vein thrombosis. It occurs in younger patients and often is secondary to competitive sport, music, or strenuous occupation. If untreated, there is a higher incidence of disabling venous hypertension than was previously appreciated. Anticoagulation alone or in combination with thrombolysis leads to a high rate of rethrombosis. We have established a multidisciplinary protocol over 15 years, based on careful patient selection and a combination of lysis, decompressive surgery, and postoperative percutaneous venoplasty. During the past 10 years, a total of 232 decompression procedures have been performed. This article reviews the literature and presents the Exeter Protocol along with practical recommendations for management.
Thompson, J. F. Winterborn, R. J.; Bays, S.; White, H.; Kinsella, D. C.; Watkinson, A. F.
2011-10-15
Paget Schroetter syndrome, or effort thrombosis of the axillosubclavian venous system, is distinct from other forms of upper limb deep vein thrombosis. It occurs in younger patients and often is secondary to competitive sport, music, or strenuous occupation. If untreated, there is a higher incidence of disabling venous hypertension than was previously appreciated. Anticoagulation alone or in combination with thrombolysis leads to a high rate of rethrombosis. We have established a multidisciplinary protocol over 15 years, based on careful patient selection and a combination of lysis, decompressive surgery, and postoperative percutaneous venoplasty. During the past 10 years, a total of 232 decompression procedures have been performed. This article reviews the literature and presents the Exeter Protocol along with practical recommendations for management.
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
Image data compression using a new floating-point digital signal processor.
Siegel, E L; Templeton, A W; Hensley, K L; McFadden, M A; Baxter, K G; Murphey, M D; Cronin, P E; Gesell, R G; Dwyer, S J
1991-08-01
A new dual-ported, floating-point digital signal processor has been evaluated for compressing 512 x 512 and 1,024 x 1,024 digital radiographic images using a full-frame, two-dimensional discrete cosine transform (2D-DCT). The floating-point digital signal processor operates at 49.5 million floating-point instructions per second (MFLOPS). The level of compression can be changed by varying four parameters in the lossy compression algorithm. Throughput times were measured for both 2D-DCT compression and decompression. For a 1,024 x 1,024 x 10-bit image with a compression ratio of 316:1, the throughput was 75.73 seconds (compression plus decompression). For a digital fluorography 1,024 x 1,024 x 8-bit image and a compression ratio of 26:1, the total throughput time was 63.23 seconds. For a computed tomography image of 512 x 512 x 12 bits and a compression ratio of 10:1, the throughput time was 19.65 seconds.
A method of image compression based on lifting wavelet transform and modified SPIHT
NASA Astrophysics Data System (ADS)
Lv, Shiliang; Wang, Xiaoqian; Liu, Jinguo
2016-11-01
In order to improve the efficiency of remote sensing image data storage and transmission, we present an image compression method based on the lifting scheme and a modified SPIHT (set partitioning in hierarchical trees) algorithm, realized as an FPGA design that improves SPIHT and enhances wavelet-transform image compression. The lifting discrete wavelet transform (DWT) architecture was selected to exploit the correlation among image pixels. In addition, we provide a study of the storage elements required for the wavelet coefficients. We present results for the Lena image using the 5/3 lifting scheme.
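The reversible integer 5/3 lifting step used in such designs can be sketched in 1-D Python as a predict step producing detail coefficients and an update step producing smoothed coefficients, with perfect reconstruction obtained by running the steps in reverse (the boundary clamping below is one common choice, not necessarily the paper's):

```python
def lifting53_forward(x):
    """One level of the reversible 5/3 lifting DWT on a 1-D integer
    signal of even length. The 2-D transform applies this along rows
    and then columns."""
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd sample minus floor-average of its even neighbours.
    d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
         for i in range(len(odd))]
    # Update: smooth = even sample plus rounded quarter-sum of neighbouring details.
    s = [even[i] + ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
         for i in range(len(even))]
    return s, d

def lifting53_inverse(s, d):
    """Undo the lifting steps in reverse order; exact for integers."""
    even = [s[i] - ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
            for i in range(len(s))]
    odd = [d[i] + ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
           for i in range(len(d))]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x
```

Because each lifting step only adds an integer function of the other half of the samples, inversion is exact, which is why lifting maps so cleanly onto fixed-point FPGA hardware.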
Introduction of heat map to fidelity assessment of compressed CT images
Lee, Hyunna; Kim, Bohyoung; Seo, Jinwook; Park, Seongjin; Shin, Yeong-Gil; Kim, Kil Joong; Lee, Kyoung Ho
2011-08-15
Purpose: This study aimed to introduce heat map, a graphical data presentation method widely used in gene expression experiments, to the presentation and interpretation of image fidelity assessment data of compressed computed tomography (CT) images. Methods: The authors used actual assessment data that consisted of five radiologists' responses to 720 computed tomography images compressed using both Joint Photographic Experts Group 2000 (JPEG2000) 2D and JPEG2000 3D compressions. They additionally created data of two artificial radiologists, which were generated by partly modifying the data from two human radiologists. Results: For each compression, the entire data set, including the variations among radiologists and among images, could be compacted into a small color-coded grid matrix of the heat map. A difference heat map depicted the advantage of 3D compression over 2D compression. Dendrograms showing hierarchical agglomerative clustering results were added to the heat maps to illustrate the similarities in the data patterns among radiologists and among images. The dendrograms were used to identify two artificial radiologists as outliers, whose data were created by partly modifying the responses of two human radiologists. Conclusions: The heat map can illustrate a quick visual extract of the overall data as well as the entirety of large complex data in a compact space while visualizing the variations among observers and among images. The heat map with the dendrograms can be used to identify outliers or to classify observers and images based on the degree of similarity in the response patterns.
NASA Astrophysics Data System (ADS)
Sánchez, Sergio; Plaza, Antonio
2012-06-01
Hyperspectral image compression is an important task in remotely sensed Earth observation, as the dimensionality of this kind of image data is ever increasing. This requires on-board compression in order to optimize the downlink connection when sending the data to Earth. A successful algorithm for lossy compression of remotely sensed hyperspectral data is the iterative error analysis (IEA) algorithm, which applies an iterative process that allows the amount of information loss and the compression ratio to be controlled through the number of iterations. This algorithm, which is based on spectral unmixing concepts, can be computationally expensive for hyperspectral images with high dimensionality. In this paper, we develop a new parallel implementation of the IEA algorithm for hyperspectral image compression on graphics processing units (GPUs). The proposed implementation is tested on several different GPUs from NVidia, and is shown to exhibit real-time performance in the analysis of Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data sets collected over different locations. The proposed algorithm and its parallel GPU implementation represent a significant advance towards real-time onboard (lossy) compression of hyperspectral data in which the quality of the compression can also be adjusted in real time.
OARSI Clinical Trials Recommendations: Hip imaging in clinical trials in osteoarthritis.
Gold, G E; Cicuttini, F; Crema, M D; Eckstein, F; Guermazi, A; Kijowski, R; Link, T M; Maheu, E; Martel-Pelletier, J; Miller, C G; Pelletier, J-P; Peterfy, C G; Potter, H G; Roemer, F W; Hunter, D J
2015-05-01
Imaging of hip in osteoarthritis (OA) has seen considerable progress in the past decade, with the introduction of new techniques that may be more sensitive to structural disease changes. The purpose of this expert opinion, consensus driven recommendation is to provide detail on how to apply hip imaging in disease modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography, sequence/protocol recommendations/hardware for magnetic resonance imaging (MRI)); commonly encountered problems (including positioning, hardware and coil failures, artifacts associated with various MRI sequences); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, and validity); recommendations for trials; and research recommendations.
Lossy compression of floating point high-dynamic range images using JPEG2000
NASA Astrophysics Data System (ADS)
Springer, Dominic; Kaup, Andre
2009-01-01
In recent years, a new technique called High Dynamic Range (HDR) has gained attention in the image processing field. By representing pixel values with floating point numbers, recorded images can hold significantly more luminance information than ordinary integer images. This paper focuses on the realization of a lossy compression scheme for HDR images. The JPEG2000 standard is used as a basic component and is efficiently integrated into the compression chain. Based on a detailed analysis of the floating point format and the human visual system, a concept for lossy compression is worked out and thoroughly optimized. Our scheme outperforms all other existing lossy HDR compression schemes and shows superior performance both at low and high bitrates.
Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor)
2015-01-01
A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.
Improved successive refinement for wavelet-based embedded image compression
NASA Astrophysics Data System (ADS)
Creusere, Charles D.
1999-10-01
In this paper we consider a new form of successive coefficient refinement which can be used in conjunction with embedded compression algorithms like Shapiro's EZW (Embedded Zerotree Wavelet) and Said and Pearlman's SPIHT (Set Partitioning in Hierarchical Trees). Using the conventional refinement process, the approximation of a coefficient that was earlier determined to be significant is refined by transmitting one of two symbols: an 'up' symbol if the actual coefficient value is in the top half of the current uncertainty interval, or a 'down' symbol if it is in the bottom half. In the modified scheme developed here, we transmit one of three symbols instead: 'up', 'down', or 'exact'. The new 'exact' symbol tells the decoder that its current approximation of a wavelet coefficient is already exact to the level of precision desired. By applying this scheme in earlier work to lossless embedded compression (also called lossy/lossless compression), we achieved significant reductions in encoder and decoder execution times with no adverse impact on compression efficiency. These excellent results for lossless systems inspired us to adapt this refinement approach to lossy embedded compression. Unfortunately, the results we have achieved thus far for lossy compression are not as good.
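The three-symbol refinement loop described above can be sketched as follows (hypothetical code, not the authors' implementation; the interval bounds and precision are illustrative):

```python
def refine(value, low, high, precision=1e-3):
    """Emit 'up'/'down' symbols narrowing [low, high] around value, and a
    terminating 'exact' symbol once the interval midpoint is close enough."""
    symbols = []
    while high - low > precision:
        mid = (low + high) / 2.0
        if abs(value - mid) <= precision / 2.0:
            symbols.append('exact')   # decoder stops refining this coefficient
            return symbols
        if value >= mid:
            symbols.append('up')      # value lies in top half of the interval
            low = mid
        else:
            symbols.append('down')    # value lies in bottom half
            high = mid
    return symbols
```

The speedup reported for the lossless case comes from the early exit: once 'exact' is sent, neither encoder nor decoder spends further passes on that coefficient.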
A VLSI Processor Design of Real-Time Data Compression for High-Resolution Imaging Radar
NASA Technical Reports Server (NTRS)
Fang, W.
1994-01-01
For the high-resolution imaging radar systems, real-time data compression of raw imaging data is required to accomplish the science requirements and satisfy the given communication and storage constraints. The Block Adaptive Quantizer (BAQ) algorithm and its associated VLSI processor design have been developed to provide a real-time data compressor for high-resolution imaging radar systems.
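The BAQ idea can be illustrated with a minimal sketch (illustrative only; the flight quantizer's bit depth, statistics estimator, and scaling are not specified in this abstract):

```python
def baq_encode(block, bits=2):
    """Block Adaptive Quantizer sketch: estimate the block's signal level,
    then quantize each raw sample to `bits` bits with a uniform quantizer
    scaled to that level."""
    levels = 2 ** bits
    scale = max(sum(abs(x) for x in block) / len(block), 1e-12)  # mean magnitude
    step = 4.0 * scale / levels  # quantizer spans roughly +/- 2x the mean magnitude
    codes = [max(0, min(levels - 1, int((x + 2.0 * scale) / step))) for x in block]
    return codes, scale

def baq_decode(codes, scale, bits=2):
    """Reconstruct each sample at the midpoint of its quantization bin."""
    levels = 2 ** bits
    step = 4.0 * scale / levels
    return [(c + 0.5) * step - 2.0 * scale for c in codes]
```

Because only the per-block scale and a few bits per sample are transmitted, the compression ratio is fixed and data-rate requirements become predictable, which is what makes the scheme attractive for real-time radar downlinks.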
Understanding and controlling the effect of lossy raw data compression on CT images.
Wang, Adam S; Pelc, Norbert J
2009-08-01
The requirements for raw data transmission through a CT scanner slip ring and through the computation system, and for storage of raw CT data, can be quite challenging as scanners continue to increase in speed and to collect more data per rotation. Although lossy compression greatly mitigates this problem, users must be cautious about how the errors introduced manifest themselves in the reconstructed images. This paper describes two simple yet effective methods for controlling the effect of errors in raw data compression and describes the impact of each stage on the image errors. A CT system simulator (CATSIM, GE Global Research Center, Niskayuna, NY) was used to generate raw CT datasets that simulate different regions of human anatomy. The raw data are digitized by a 20-bit ADC and companded by a log compander. Lossy compression is performed by quantization and is followed by JPEG-LS (lossless), which takes advantage of the correlations between neighboring measurements in the sinogram. Error feedback, a previously proposed method that controls the spatial distribution of reconstructed image errors, and projection filtering, a newly proposed method that takes advantage of the filtered backprojection reconstruction process, are applied independently (and combined) to study their intended impact on the control and behavior of the additional noise due to the compression methods used. The log compander and the projection filtering method considerably reduce image error levels, while error feedback pushes image errors toward the periphery of the field of view. The results for the images are a compression ratio (CR) of 3 that keeps peak compression errors under 1 HU and a CR of 9 that increases image noise by only 1 HU in common CT applications. Lossy compression can substantially reduce raw CT data size at low computational cost. The proposed methods have the flexibility to operate at a wide range of compression ratios and produce predictable, object-independent, and often
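The compand-then-quantize front end described above can be sketched with a mu-law-style compander (an assumption for illustration; the paper's exact compander law and parameters are not given in this abstract):

```python
import math

MU = 255.0          # assumed companding constant
XMAX = 2 ** 20 - 1  # full scale of the 20-bit ADC

def compand(x, mu=MU, xmax=XMAX):
    """Logarithmically compress an ADC sample into [0, 1], so that a
    subsequent uniform quantizer yields roughly constant relative error."""
    return math.log1p(mu * x / xmax) / math.log1p(mu)

def expand(y, mu=MU, xmax=XMAX):
    """Invert the compander on the decoder side."""
    return xmax * math.expm1(y * math.log1p(mu)) / mu
```

Without the quantization step in between, the compand/expand pair is an exact round trip; the lossy error enters only through the quantizer that follows.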
NASA Technical Reports Server (NTRS)
Hartfield, Roy J., Jr.; Abbitt, John D., III; McDaniel, James C.
1989-01-01
A technique is described for imaging the injectant mole-fraction distribution in nonreacting compressible mixing flow fields. Planar fluorescence from iodine, seeded into air, is induced by a broadband argon-ion laser and collected using an intensified charge-injection-device array camera. The technique eliminates the thermodynamic dependence of the iodine fluorescence in the compressible flow field by taking the ratio of two images collected with identical thermodynamic flow conditions but different iodine seeding conditions.
Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu
2016-12-20
In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding with JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other. It turns out that JPEG2000 (JP2k) outperforms the other methods by achieving the best CR. In the lossy case, JP2k and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing different morphological and biochemical parameters of the RBC before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical cell parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2k outperforms JP3D not only in terms of mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that even at high CR values the three-dimensional profile of the RBC can be preserved, and the morphological and biochemical parameters remain within the range of reported values.
NASA Astrophysics Data System (ADS)
Martinez-Uriegas, Eugenio; Peters, John D.; Crane, Hewitt D.
1994-05-01
SRI International has developed a new technique for compression of digital color images on the basis of its research of multiplexing processes in human color vision. The technique can be used independently, or in combination with standard JPEG or any other monochrome procedure, to produce color image compression systems that are simpler than conventional implementations. Specific applications are currently being developed within four areas: (1) simplification of processing in systems that compress RGB digital images, (2) economic upgrading of black and white image capturing systems to full color, (3) triplication of spatial resolution of high-end image capturing systems currently designed for 3-plane color capture, and (4) even greater simplification of processing in systems for dynamic images.
Compressed Sensing for Millimeter-wave Ground Based SAR/ISAR Imaging
NASA Astrophysics Data System (ADS)
Yiğit, Enes
2014-11-01
Millimeter-wave (MMW) ground-based (GB) synthetic aperture radar (SAR) and inverse SAR (ISAR) imaging are powerful tools for the detection of foreign object debris (FOD) and concealed objects, but they require wide bandwidths and dense sampling in both the slow-time and fast-time domains according to the Shannon/Nyquist sampling theorem. However, thanks to compressive sensing (CS) theory, GB-SAR/ISAR data can be reconstructed from far fewer random samples than the Nyquist rate requires. In this paper, the impact of both random frequency sampling and random spatial-domain data collection of a SAR/ISAR sensor on the reconstruction quality of a scene of interest was studied. To investigate the feasibility of the proposed CS framework, experiments with various FOD-like and concealed-object-like targets were carried out at the Ka- and W-band frequencies of the MMW range. The robustness and effectiveness of the recommended CS-based reconstruction configurations were verified through a comparison among each other using the integrated side lobe ratios (ISLR) of the images.
NASA Astrophysics Data System (ADS)
Re, C.; Simioni, E.; Cremonese, G.; Roncella, R.; Forlani, G.; Langevin, Y.; Da Deppo, V.; Naletto, G.; Salemi, G.
2016-06-01
The great amount of data that will be produced during the imaging of Mercury by the stereo camera (STC) of the BepiColombo mission must be reconciled with the restrictions imposed by the downlink bandwidth, which could drastically reduce the duration and frequency of the observations. The implementation of an on-board real-time data compression strategy preserving as much information as possible is therefore mandatory. The degradation that image compression might cause to the DTM accuracy is worth investigating. During the stereo-validation procedure of the innovative STC imaging system, several image pairs of an anorthosite sample and a modelled piece of concrete were acquired under different illumination angles. This set of images has been used to test the effects of the compression algorithm (Langevin and Forni, 2000) on the accuracy of the DTM produced by dense image matching. Different configurations, taking into account both the illumination of the surface and the compression ratio, have been considered. The accuracy of the DTMs is evaluated by comparison with a high-resolution laser-scan acquisition of the same targets. The error assessment also included an analysis in the image plane indicating the influence of the compression procedure on the image measurements.
NASA Astrophysics Data System (ADS)
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
The high resolution achieved in satellite imagery brings with it a fundamental problem: the large volume of telemetry data that must be stored after the downlink operation. Moreover, the post-processing and image enhancement steps applied after acquisition increase file sizes even further, making the data harder to store and more time-consuming to transmit from one site to another; hence, compressing both the raw data and the various levels of processed data is a necessity for archiving stations seeking to save more space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. With this objective, well-known open-source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 & 7 satellites (1.5 m GSD), acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe the compression performance of these algorithms on the sample datasets in terms of how much of the image data can be compressed while ensuring lossless compression.
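Several of the algorithm families named above have counterparts in the Python standard library, which makes the defining lossless round-trip property easy to demonstrate (zlib stands in for Deflate, lzma for LZMA, and bz2 for a BWT-based coder; the actual study used dedicated open-source tools on GeoTIFF files):

```python
import bz2
import lzma
import zlib

def compare_lossless(data):
    """Compress `data` with three stdlib codecs, verify each round trip is
    bit-exact (the lossless property), and report compression ratios."""
    codecs = [("deflate", zlib.compress, zlib.decompress),
              ("lzma", lzma.compress, lzma.decompress),
              ("bwt/bzip2", bz2.compress, bz2.decompress)]
    ratios = {}
    for name, comp, decomp in codecs:
        packed = comp(data)
        assert decomp(packed) == data            # lossless round trip
        ratios[name] = len(data) / len(packed)   # compression ratio
    return ratios
```

On highly redundant input all three ratios exceed 1; on near-random data they can drop below 1, which is why such benchmarks must be run on representative imagery.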
High capacity image steganography method based on framelet and compressive sensing
NASA Astrophysics Data System (ADS)
Xiao, Moyan; He, Zhibiao
2015-12-01
To improve the capacity and imperceptibility of image steganography, a novel high-capacity, high-imperceptibility image steganography method based on a combination of framelets and compressive sensing (CS) is put forward. First, a singular value decomposition (SVD) is applied to the measurement values obtained by applying the compressive sensing technique to the secret data. Then the singular values are embedded in turn into the low-frequency coarse subbands of the framelet transform of the blocks of the cover image, which is divided into non-overlapping blocks. Finally, inverse framelet transforms are applied and the blocks are combined to obtain the stego image. The experimental results show that the proposed steganography method performs well in terms of hiding capacity, security, and imperceptibility.
NASA Astrophysics Data System (ADS)
Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.
2016-02-01
We applied compressed ultrafast photography (CUP), a computational imaging technique, to acquire three-dimensional (3D) images. The approach unites image encryption, compression, and acquisition in a single measurement, thereby allowing efficient and secure data transmission. By leveraging the time-of-flight (ToF) information of pulsed light reflected by the object, we can reconstruct a volumetric image (150 mm×150 mm×1050 mm, x × y × z) from a single camera snapshot. Furthermore, we demonstrated high-speed 3D videography of a moving object at 75 frames per second using the ToF-CUP camera.
Electromagnetic Scattered Field Evaluation and Data Compression Using Imaging Techniques
NASA Technical Reports Server (NTRS)
Gupta, I. J.; Burnside, W. D.
1996-01-01
This is the final report on Project #727625 between The Ohio State University and NASA, Lewis Research Center, Cleveland, Ohio. Under this project, a data compression technique for scattered field data of electrically large targets was developed. The technique was applied to the scattered fields of two targets of interest. The backscattered fields of the scale models of these targets were measured in a compact range. For one of the targets, the backscattered fields were also calculated using the XPATCH computer code. Using the technique, all scattered field data sets were compressed successfully, and a compression ratio on the order of 40 was achieved. In this report, the technique is described briefly and some sample results are included.
Applications of compressed sensing to coherent radar imaging
NASA Astrophysics Data System (ADS)
Zhu, Qian
Although meteoroid fragmentation has been observed and studied in the optical meteor community since the 1950s, no definitive fragmentation mechanisms for relatively small meteoroids (mass ≲ 10^-4 kg) have been proposed. This is in part due to the lack of observations to constrain physical mechanisms of the fragmentation process. While it is challenging to record fragmentation in faint optical meteors, observing meteors using HPLA (High-Power, Large-Aperture) radars can yield considerable information, especially when employing coherent radar imaging (CRI). CRI can potentially resolve the fragmentation process in three spatial dimensions by monitoring the evolution of the plasma in the meteor head-echo, flare-echo, and trail-echo regions. On the other hand, the emerging field of compressed sensing (CS) provides a novel paradigm for signal acquisition and processing. Furthermore, it has been, and continues to be, applied with great success in radar systems, offering benefits such as better resolution compared to traditional techniques and reduced resource requirements. In this dissertation, we examine how CS can be incorporated to improve the performance of CRI using HPLA radars. We propose a single CS-based formalism that enables coherent imaging in three dimensions (3D): the range, Doppler frequency, and cross-range (represented by the direction cosines) domains. We show that CS-based CRI can not only reduce system costs and decrease the needed number of baselines through sparse spatial sampling, which can be much less than the number required by the Nyquist-Shannon sampling criterion, but also achieve high resolution for target detection. We implement the CS-based CRI for meteor studies with observations conducted at the Jicamarca Radio Observatory (JRO) in Peru. We present unprecedented resolved details of meteoroid fragmentation, including spreading of the developing plasma both along and transverse to the trajectory, apparently caused by
An improved image compression algorithm using binary space partition scheme and geometric wavelets.
Chopra, Garima; Pal, A K
2011-01-01
Geometric wavelet is a recent development in the field of multivariate nonlinear piecewise polynomials approximation. The present study improves the geometric wavelet (GW) image coding method by using the slope intercept representation of the straight line in the binary space partition scheme. The performance of the proposed algorithm is compared with the wavelet transform-based compression methods such as the embedded zerotree wavelet (EZW), the set partitioning in hierarchical trees (SPIHT) and the embedded block coding with optimized truncation (EBCOT), and other recently developed "sparse geometric representation" based compression algorithms. The proposed image compression algorithm outperforms the EZW, the Bandelets and the GW algorithm. The presented algorithm reports a gain of 0.22 dB over the GW method at the compression ratio of 64 for the Cameraman test image.
NASA Astrophysics Data System (ADS)
Krishnan, Sundar Rajan; Srinivasan, Kalyan Kumar; Stegmeir, Matthew
2015-11-01
Direct-injection compression ignition combustion of diesel and gasoline were studied in a rapid compression-expansion machine (RCEM) using high-speed OH* chemiluminescence imaging. The RCEM (bore = 84 mm, stroke = 110-250 mm) was used to simulate engine-like operating conditions at the start of fuel injection. The fuels were supplied by a high-pressure fuel cart with an air-over-fuel pressure amplification system capable of providing fuel injection pressures up to 2000 bar. A production diesel fuel injector was modified to provide a single fuel spray for both diesel and gasoline operation. Time-resolved combustion pressure in the RCEM was measured using a Kistler piezoelectric pressure transducer mounted on the cylinder head and the instantaneous piston displacement was measured using an inductive linear displacement sensor (0.05 mm resolution). Time-resolved, line-of-sight OH* chemiluminescence images were obtained using a Phantom V611 CMOS camera (20.9 kHz @ 512 x 512 pixel resolution, ~ 48 μs time resolution) coupled with a short wave pass filter (cut-off ~ 348 nm). The instantaneous OH* distributions, which indicate high temperature flame regions within the combustion chamber, were used to discern the characteristic differences between diesel and gasoline compression ignition combustion. The authors gratefully acknowledge facilities support for the present work from the Energy Institute at Mississippi State University.
Compressive optical image encryption with two-step-only quadrature phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Li, Jun; Li, Hongbing; Li, Jiaosheng; Pan, Yangyang; Li, Rong
2015-06-01
An image encryption method which combines two-step-only quadrature phase-shifting digital holography with compressive sensing (CS) has been proposed in the fully optical domain. An object image is first encrypted into two on-axis quadrature-phase holograms using two random phase masks in a Mach-Zehnder interferometer. Then, the two encrypted images are highly compressed into a one-dimensional signal using single-pixel compressive holographic imaging in the optical domain. At the receiving terminal, the two compressive encrypted holograms are exactly reconstructed from far fewer observations than the Nyquist sampling number by solving an optimization problem, and the original image can be decrypted with only the two reconstructed holograms and the correct keys. This method greatly decreases the hologram data volume of current optical image encryption systems, and it is also suitable for special optical imaging cases such as multi-wavelength imaging and weak-light imaging. Numerical simulation is performed to demonstrate the feasibility and validity of this novel image encryption method.
NASA Astrophysics Data System (ADS)
Kang, Ho-Hyun; Lee, Jung-Woo; Shin, Dong-Hak; Kim, Eun-Soo
2010-02-01
This paper addresses an efficient compression scheme, based on MPEG-4, for the elemental image array (EIA) generated by the moving array lenslet technique (MALT). The EIAs are picked up by MALT, which controls the spatial ray sampling and produces several EIAs by rapidly vibrating the positions of the lenslet arrays in the lateral directions within the retention time of the afterimage of the human eye. To enhance the similarity within each EIA picked up by MALT, the EIAs obtained from MALT are regenerated by collecting the elemental images occupying the same position in each EIA. Each newly generated EIA has high similarity among adjacent elemental images. To illustrate the feasibility of the proposed scheme, experiments are carried out showing increased compression efficiency; we obtained a compression ratio improved by 12% compared to the unmodified compression scheme.
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Permala, R.
2017-01-01
LAPAN-A3/IPB is the latest Indonesian experimental microsatellite, with remote sensing and Earth surveillance missions. The satellite has three optical payloads: a multispectral push-broom imager, a digital matrix camera, and a video camera. To increase data transmission efficiency, the multispectral imager data can be compressed using either a lossy or a lossless compression method. This paper analyzes the Differential Pulse Code Modulation (DPCM) method and the Huffman coding used in the LAPAN-IPB satellite's lossless image compression. Based on several simulations and analyses, the current LAPAN-IPB lossless compression algorithm has moderate performance. Several aspects of the current configuration can be improved: the type of DPCM code used, the type of Huffman entropy-coding scheme, and the use of a sub-image compression method. The key result of this research is that at least two neighboring pixels should be used in the DPCM prediction to increase compression performance. Meanwhile, varying the Huffman tables with a sub-image approach could also increase performance if the on-board computer can support a more complicated algorithm. These results can be used as references in designing the Payload Data Handling System (PDHS) for the upcoming LAPAN-A4 satellite.
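The benefit of a two-neighbor DPCM predictor can be illustrated with a toy predictor followed by an entropy estimate, which lower-bounds the rate of the Huffman stage (hypothetical code, not LAPAN-IPB's on-board algorithm):

```python
import math
from collections import Counter

def dpcm_residuals(rows):
    """Two-neighbor DPCM sketch: predict each pixel as the mean of its left
    and upper neighbors (zero at borders) and return prediction residuals."""
    res = []
    for i, row in enumerate(rows):
        for j, p in enumerate(row):
            left = row[j - 1] if j > 0 else 0
            up = rows[i - 1][j] if i > 0 else 0
            res.append(p - (left + up) // 2)
    return res

def entropy(symbols):
    """Shannon entropy in bits/symbol, a lower bound for Huffman coding."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

For smooth image regions the residuals concentrate around zero, so their entropy, and hence the Huffman-coded rate, falls below that of the raw pixels.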
Hyperspectral images lossless compression using the 3D binary EZW algorithm
NASA Astrophysics Data System (ADS)
Cheng, Kai-jen; Dill, Jeffrey
2013-02-01
This paper presents a transform-based lossless compression method for hyperspectral images, inspired by Shapiro's (1993) EZW algorithm. The proposed method uses a hybrid transform comprising an integer Karhunen-Loeve transform (KLT) and an integer discrete wavelet transform (DWT). The integer KLT is employed to eliminate correlations among the bands of the hyperspectral image. The integer 2D DWT is applied to eliminate correlations in the spatial dimensions and produce wavelet coefficients. These coefficients are then coded by the proposed binary EZW algorithm, which eliminates the subordinate pass of conventional EZW by coding residual values and produces binary sequences. The binary EZW algorithm combines the merits of the well-known EZW and SPIHT algorithms and is computationally simpler for lossless compression. The proposed method was applied to AVIRIS images and compared to other state-of-the-art image compression techniques. The results show that the proposed lossless image compression is more efficient and achieves a higher compression ratio than the other algorithms.
Power- and space-efficient image computation with compressive processing: I. Background and theory
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2000-11-01
Surveillance imaging applications on small autonomous imaging platforms present challenges of highly constrained power supply and form factor, with potentially demanding specifications for target detection and recognition. Absent significant advances in image processing hardware, such power and space restrictions can imply severely limited computational capabilities. This holds especially for compute-intensive algorithms with high-precision fixed- or floating-point operations in deep pipelines that process large data streams. Such algorithms tend not to be amenable to small or simplified architectures involving (for example) reduced precision, reconfigurable logic, low-power gates, or energy recycling schemes. In this series of two papers, a technique of reduced-power computing called compressive processing (CXP) is presented and applied to several low- and mid-level computer vision operations. CXP computes over compressed data without resorting to intermediate decompression steps. Because compression leaves fewer data, CXP requires fewer operations than computing over the corresponding uncompressed image. In several cases, CXP techniques yield speedups on the order of the compression ratio. Where lossy high-compression transforms are employed, it is often possible to use approximations to derive CXP operations that yield increased computational efficiency via a simplified mix of operations. The reduced work requirement, which follows directly from the presence of fewer data, also implies a reduced power requirement, especially if simpler operations are involved in compressive versus noncompressive operations. Several image processing algorithms (edge detection, morphological operations, and component labeling) are analyzed in the context of three compression transforms: vector quantization (VQ), visual pattern image coding (VPIC), and EBLAST. The latter is a lossy high-compression transformation developed for underwater
An introduction to video image compression and authentication technology for safeguards applications
Johnson, C.S.
1995-07-01
Verification of a video image has been a major problem for safeguards for several years. Various verification schemes have been tried on analog video signals since the mid-1970s. These schemes have provided a measure of protection but have never been widely adopted. The development of reasonably priced, complex video processing integrated circuits makes it possible to digitize a video image and then compress the resulting digital file into a smaller file without noticeable loss of resolution. Authentication and/or encryption algorithms can be more easily applied to digital video files that have been compressed. The compressed video files require less time for algorithm processing and image transmission. An important safeguards application for authenticated, compressed, digital video images is in unattended video surveillance systems and remote monitoring systems. The use of digital images in the surveillance system makes it possible to develop remote monitoring systems that send images over narrow-bandwidth channels such as the common telephone line. This paper discusses the video compression process, authentication algorithm, and data format selected to transmit and store the authenticated images.
Student Images of Agriculture: Survey Highlights and Recommendations.
ERIC Educational Resources Information Center
Mallory, Mary E.; Sommer, Robert
1986-01-01
The high school students studied were unaware of the range of opportunities in agricultural careers. It was recommended that the University of California, Davis initiate a public relations campaign, with television advertising, movies, and/or public service announcements focusing on exciting, high-tech agricultural research and enterprise. (CT)
A No-Reference Adaptive Blockiness Measure for JPEG Compressed Images
Tang, Chaoying; Wang, Biao
2016-01-01
Digital images have been extensively used in education, research, and entertainment. Many of these images, taken by consumer cameras, are compressed by the JPEG algorithm for effective storage and transmission. Blocking artifact is a well-known problem caused by this algorithm. Effective measurement of blocking artifacts plays an important role in the design, optimization, and evaluation of image compression algorithms. In this paper, we propose a no-reference objective blockiness measure, which is adaptive to the high-frequency components in an image. The difference of entropies across blocks and the variation of block-boundary pixel values in edge images are adopted to calculate the blockiness level in areas with low and high frequency components, respectively. Extensive experimental results prove that the proposed measure is effective and stable across a wide variety of images. It is robust to image noise and can be used for real-world image quality monitoring and control. Index Terms—JPEG, no-reference, blockiness measure PMID:27832092
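A stripped-down blockiness measure conveys the core idea (an illustrative simplification; the paper's measure additionally adapts to local frequency content using entropies and edge images):

```python
def blockiness(img, block=8):
    """Average absolute luminance jump across the 8x8 block grid that JPEG
    uses; larger values indicate stronger blocking artifacts."""
    h, w = len(img), len(img[0])
    jumps, count = 0.0, 0
    for y in range(h):                       # vertical block boundaries
        for x in range(block, w, block):
            jumps += abs(img[y][x] - img[y][x - 1])
            count += 1
    for y in range(block, h, block):         # horizontal block boundaries
        for x in range(w):
            jumps += abs(img[y][x] - img[y - 1][x])
            count += 1
    return jumps / max(count, 1)
```

Being no-reference, the measure needs only the decoded image; a smooth image scores near zero while a heavily quantized JPEG scores high along the block grid.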
A Complete Image Compression Scheme Based on Overlapped Block Transform with Post-Processing
NASA Astrophysics Data System (ADS)
Kwan, C.; Li, B.; Xu, R.; Li, X.; Tran, T.; Nguyen, T.
2006-12-01
A complete system was built for high-performance image compression based on the overlapped block transform. Extensive simulations and comparative studies were carried out for still image compression, including benchmark images (Lena and Barbara), synthetic aperture radar (SAR) images, and color images. We achieved consistently better results than three commercial products on the market (a Summus wavelet codec, a baseline JPEG codec, and a JPEG-2000 codec) for most images used in this study. Included in the system are two post-processing techniques, based on morphological and median filters, for enhancing the perceptual quality of the reconstructed images. The proposed system also supports the enhancement of a small region of interest within an image, which is of interest in various applications such as target recognition and medical diagnosis.
Informational Analysis for Compressive Sampling in Radar Imaging
Zhang, Jingxiong; Yang, Ke
2015-01-01
Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, operates with optimization-based algorithms for signal reconstruction and is thus able to complete data compression, while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition, while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic orientated CS-radar system analysis and performance evaluation. PMID:25811226
Recent Advances in Compressed Sensing: Discrete Uncertainty Principles and Fast Hyperspectral Imaging
Lewis, Megan E.
2015-03-26
Thesis, March 2015. ...medical imaging, e.g., magnetic resonance imaging (MRI). Since the early 1980s, MRI has granted doctors the ability to distinguish between healthy tissue... chemical composition of a star. Conventional hyperspectral cameras are slow. Different methods of hyperspectral imaging either require time to process...
A joint image encryption and watermarking algorithm based on compressive sensing and chaotic map
NASA Astrophysics Data System (ADS)
Xiao, Di; Cai, Hong-Kun; Zheng, Hong-Ying
2015-06-01
In this paper, a compressive sensing (CS) and chaotic map-based joint image encryption and watermarking algorithm is proposed. The transform-domain coefficients of the original image are first scrambled by an Arnold map. The watermark is then embedded into the scrambled data. By compressive sensing, a set of watermarked measurements is obtained as the watermarked cipher image. In this algorithm, watermark embedding and data compression can be performed without knowledge of the original image; similarly, watermark extraction does not interfere with decryption. Owing to the characteristics of CS, the algorithm features a compressible cipher-image size, flexible watermark capacity, and lossless watermark extraction from the compressed cipher image, as well as robustness against packet loss. Simulation results and analyses show that the algorithm achieves good performance in terms of security, watermark capacity, extraction accuracy, reconstruction, and robustness. Project supported by the Open Research Fund of Chongqing Key Laboratory of Emergency Communications, China (Grant No. CQKLEC, 20140504), the National Natural Science Foundation of China (Grant Nos. 61173178, 61302161, and 61472464), and the Fundamental Research Funds for the Central Universities, China (Grant Nos. 106112013CDJZR180005 and 106112014CDJZR185501).
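The Arnold-map scrambling that opens the proposed pipeline is easy to illustrate. The sketch below (plain NumPy, illustrative only) scrambles a square image with the standard Arnold cat map and inverts it exactly; it is not the paper's full encryption scheme.

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map on an N x N image: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_inverse(img, iterations=1):
    """Inverse map (x, y) -> (2x - y, y - x) mod N undoes the scrambling."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (16, 16), dtype=np.uint8)
scrambled = arnold(img, 3)
print(np.array_equal(arnold_inverse(scrambled, 3), img))   # True: lossless scrambling
```

Because the map is a bijection on the pixel grid, scrambling is perfectly reversible, which is what lets decryption and watermark extraction proceed independently.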
Region of interest extraction for lossless compression of bone X-ray images.
Kazeminia, S; Karimi, N; Soroushmehr, S M R; Samavi, S; Derksen, H; Najarian, K
2015-01-01
For a few decades, digital X-ray imaging has been one of the most important tools for medical diagnosis. With the advent of telemedicine and the use of big data in this field, efficient storage and online transmission of these images have become essential; limited storage space and limited transmission bandwidth are the main challenges. Efficient image compression methods are typically lossy, yet the information in medical images must be preserved unchanged, so lossless compression methods are necessary for this purpose. In this paper, a novel method is proposed to eliminate the non-ROI data from bone X-ray images, since background pixels do not contain any valuable medical information. The proposed method is based on histogram dispersion. The region of interest (ROI) is separated from the background and compressed with a lossless method to preserve the medical information of the image. The compression ratios of the implemented results show that the proposed algorithm effectively reduces statistical and spatial redundancies.
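The background-removal idea can be sketched with a simple histogram-based stand-in (the paper's histogram-dispersion method is more elaborate): take the histogram peak as the background level, keep the bounding box of everything else, and losslessly compress only that region. The synthetic "X-ray" below is invented for the example.

```python
import zlib
import numpy as np

def extract_roi(img, tol=5):
    """Assume the background is the most frequent gray level (histogram peak);
    keep the bounding box of every pixel that differs from it by more than tol."""
    hist = np.bincount(img.ravel(), minlength=256)
    bg = int(hist.argmax())
    mask = np.abs(img.astype(int) - bg) > tol
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

# Synthetic X-ray: dark background with one bright bone-like block.
img = np.zeros((200, 200), dtype=np.uint8)
img[60:140, 80:120] = 180
roi = extract_roi(img)
print(roi.shape)                              # (80, 40)
print(len(zlib.compress(roi.tobytes())) < len(zlib.compress(img.tobytes())))
```

Dropping the medically irrelevant background before lossless coding is exactly where the method's compression-ratio gain comes from.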
Hardware Implementation of Lossless Adaptive Compression of Data From a Hyperspectral Imager
NASA Technical Reports Server (NTRS)
Keymeulen, Didlier; Aranki, Nazeeh I.; Klimesh, Matthew A.; Bakhshi, Alireza
2012-01-01
Efficient onboard data compression can reduce the data volume from hyperspectral imagers on NASA and DoD spacecraft in order to return as much imagery as possible through constrained downlink channels. Lossless compression is important for signature extraction, object recognition, and feature classification capabilities. To provide onboard data compression, a hardware implementation of a lossless hyperspectral compression algorithm was developed using a field-programmable gate array (FPGA). The underlying algorithm is the Fast Lossless (FL) compression algorithm reported in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), p. 26, with the modification reported in "Lossless, Multi-Spectral Data Compressor for Improved Compression for Pushbroom-Type Instruments" (NPO-45473), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), p. 63, which provides improved compression performance for data from pushbroom-type imagers. An FPGA implementation of the unmodified FL algorithm was previously developed and reported in "Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System" (NPO-46867), NASA Tech Briefs, Vol. 36, No. 5 (May 2012), p. 42. The essence of the FL algorithm is adaptive linear predictive compression using the sign algorithm for filter adaptation. The FL compressor achieves a combination of low complexity and compression effectiveness that exceeds that of state-of-the-art techniques currently in use. The modification changes the predictor structure to tolerate differences in sensitivity of different detector elements, as occurs in pushbroom-type imagers, which are suitable for spacecraft use. The FPGA implementation offers a low-cost, flexible solution compared to a traditional application-specific integrated circuit (ASIC) and can be integrated as intellectual property (IP), e.g., as part of a design that manages the instrument interface. The FPGA implementation was benchmarked on the Xilinx
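The core of the FL algorithm, adaptive linear prediction with the sign algorithm, can be sketched on a 1-D signal. The real compressor predicts across spectral bands and entropy-codes the residuals; this toy version (all parameters invented) only shows the sign-algorithm filter update.

```python
import numpy as np

def sign_lms_residuals(samples, order=3, mu=0.01):
    """Predict each sample from the previous `order` samples; adapt the filter
    with the sign algorithm: w += mu * sign(error) * past. Residuals get coded."""
    w = np.zeros(order)
    residuals = np.empty(len(samples))
    for t in range(len(samples)):
        past = samples[max(0, t - order):t][::-1]
        past = np.pad(past, (0, order - len(past)))   # zero history at start-up
        e = samples[t] - w @ past
        residuals[t] = e
        w += mu * np.sign(e) * past                   # sign-algorithm update
    return residuals

t = np.arange(4000)
x = np.sin(0.05 * t) + 0.01 * np.random.default_rng(2).standard_normal(t.size)
r = sign_lms_residuals(x)
# After adaptation the residuals are much smaller than the signal itself,
# which is what makes them cheap to entropy-code.
print(np.abs(r[1000:]).mean(), np.abs(x[1000:]).mean())
```

The sign update needs only additions and sign tests, which is why the algorithm maps so cheaply onto FPGA logic.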
Quantization techniques for the compression of chest images by JPEG-type algorithms
NASA Astrophysics Data System (ADS)
Good, Walter F.; Gur, David
1992-06-01
The Joint Photographic Experts Group (JPEG) compression standard specifies a quantization procedure but does not specify a particular quantization table. In addition, there are quantization procedures that are effectively compatible with the standard but do not adhere to the simple quantization scheme described therein. These are important considerations, since the quantization procedure primarily determines the compression ratio as well as the kind of information lost and the artifacts introduced. A study has been conducted of issues related to the design of quantization techniques tailored to the compression of 12-bit chest images in radiology. Psychophysically based quantization alone may not be optimal for images that are to be compressed and then used for primary diagnosis. Two specific examples of auxiliary techniques that can be used in conjunction with JPEG compression are presented here. In particular, preprocessing of the source image is shown to be advantageous under certain circumstances. In contrast, a proposed quantization technique in which isolated nonzero coefficients are removed has been shown to be generally detrimental. Image quality is primarily measured here by mean square error (MSE), although this study is conducted in anticipation of more relevant reader-performance studies of compression.
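Since the quantization table is the free parameter the abstract highlights, a minimal JPEG-style quantize/dequantize round trip makes the trade-off concrete. The orthonormal 2-D DCT and the flat table below are generic illustrations, not the tables studied in the paper.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (rows are frequencies)."""
    j = np.arange(n)
    C = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(n)
    C[1:] *= np.sqrt(2 / n)
    return C

def quantize(block, qtable, C=dct_matrix()):
    """Level-shift, separable 2-D DCT, divide by the table, round to integers."""
    return np.round(C @ (block - 128.0) @ C.T / qtable)

def dequantize(q, qtable, C=dct_matrix()):
    return C.T @ (q * qtable) @ C + 128.0

block = np.arange(64, dtype=float).reshape(8, 8) * 2.0 + 60.0   # smooth ramp
flat = np.full((8, 8), 8.0)          # uniform table: equal treatment of all freqs
rec = dequantize(quantize(block, flat), flat)
print(np.abs(rec - block).max())     # reconstruction error stays within a few levels
```

Swapping `flat` for a steep psychovisual ramp trades exactly the kind of high-frequency detail the study worries about against compression ratio.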
Medical image compression using cubic spline interpolation with bit-plane compensation
NASA Astrophysics Data System (ADS)
Truong, Trieu-Kien; Chen, Shi-Huang; Lin, Tsung-Ching
2007-03-01
In this paper, a modified medical image compression algorithm using cubic spline interpolation (CSI) is presented for telemedicine applications. The CSI is developed in order to subsample image data with minimal distortion and thereby achieve compression. It has been shown in the literature that the CSI can be combined with the JPEG algorithm to develop a modified JPEG codec that obtains a higher compression ratio and better reconstructed-image quality than standard JPEG. However, this modified JPEG codec loses some high-frequency components of medical images during compression. To minimize the drawback arising from the loss of these high-frequency components, this paper further applies bit-plane compensation to the modified JPEG codec. The bit-plane compensation algorithm used in this paper is adapted from the JBIG2 standard. Experimental results show that the proposed scheme increases the compression ratio of the original JPEG medical-image compression system by 20-30% with similar visual quality. The system reduces the load on telecommunication networks and is well suited to low bit-rate telemedicine applications.
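Bit-plane compensation operates on the binary planes of the image; splitting and recombining the planes is the entry point and can be sketched directly (the JBIG2-style coding of each plane is omitted here).

```python
import numpy as np

def to_bitplanes(img):
    """Split an 8-bit image into eight binary planes, most significant first."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(7, -1, -1)]

def from_bitplanes(planes):
    """Recombine the planes; exact inverse of to_bitplanes."""
    out = np.zeros_like(planes[0], dtype=np.uint8)
    for b, p in zip(range(7, -1, -1), planes):
        out |= p << b
    return out

img = np.random.default_rng(3).integers(0, 256, (32, 32), dtype=np.uint8)
planes = to_bitplanes(img)
print(np.array_equal(from_bitplanes(planes), img))   # True: lossless round trip
```

Compensating only the planes that carry the lost high-frequency detail is what lets the scheme patch the modified codec without re-sending whole images.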
The Cyborg Astrobiologist: Image Compression for Geological Mapping and Novelty Detection
NASA Astrophysics Data System (ADS)
McGuire, P. C.; Bonnici, A.; Bruner, K. R.; Gross, C.; Ormö, J.; Smosna, R. A.; Walter, S.; Wendt, L.
2014-07-01
We describe an image-comparison technique of Heidemann and Ritter (2008a, b), which uses image compression, and is capable of: (i) detecting novel textures in a series of images, as well as of: (ii) alerting the user to the similarity of a new image to a previously observed texture. This image-comparison technique has been implemented and tested using our Astrobiology Phone-cam system, which employs Bluetooth communication to send images to a local laptop server in the field for the image-compression analysis. We tested the system in a field site displaying a heterogeneous suite of sandstones, limestones, mudstones and coal beds. Some of the rocks are partly covered with lichen. The image-matching procedure of this system performed very well with data obtained through our field test, grouping all images of yellow lichens together and grouping all images of a coal bed together, and giving 91% accuracy for similarity detection. Such similarity detection could be employed to make maps of different geological units. The novelty-detection performance of our system was also rather good (64% accuracy). Such novelty detection may become valuable in searching for new geological units, which could be of astrobiological interest. The current system is not directly intended for mapping and novelty detection of a second field site based on image-compression analysis of an image database from a first field site, although our current system could be further developed towards this end. Furthermore, the image-comparison technique is an unsupervised technique that is not capable of directly classifying an image as containing a particular geological feature; labelling of such geological features is done post facto by human geologists associated with this study, for the purpose of analysing the system's performance. By providing more advanced capabilities for similarity detection and novelty detection, this image-compression technique could be useful in giving more scientific autonomy
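A compression-based similarity measure in the same spirit, though not the exact Heidemann-Ritter technique, is the normalized compression distance, sketched here on byte strings with zlib; the "texture" strings are invented stand-ins for image patches.

```python
import zlib

def ncd(a: bytes, b: bytes) -> float:
    """Normalized compression distance: similar inputs compress better together."""
    ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
    cab = len(zlib.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

lichen1 = b"yellow lichen on sandstone " * 40
lichen2 = b"yellow lichen on limestone " * 40
coal = b"dark coal bed, banded and dull " * 40
print(ncd(lichen1, lichen2) < ncd(lichen1, coal))   # similar textures score closer
```

Clustering images by such a distance is the mechanism behind grouping the lichen images together, and a large distance to everything seen so far is one way to flag novelty.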
Vector-lifting schemes based on sorting techniques for lossless compression of multispectral images
NASA Astrophysics Data System (ADS)
Benazza-Benyahia, Amel; Pesquet, Jean-Christophe
2003-01-01
In this paper, we introduce vector-lifting schemes that generate very compact multiresolution representations, suitable for lossless and progressive coding of multispectral images. These new decomposition schemes simultaneously exploit the spatial and spectral redundancies contained in multispectral images. When the spectral bands have different dynamic ranges, we dramatically improve the performance of the proposed schemes by a reversible histogram modification based on sorting permutations. Simulation tests carried out on real images evaluate the performance of this new compression method and indicate that the achieved compression ratios are higher than those obtained with currently used lossless coders.
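A one-function example of the lifting idea behind such schemes is the reversible integer S-transform (an integer Haar built from one predict and one update step); the paper's vector-lifting schemes extend this across spectral bands.

```python
import numpy as np

def s_transform(x):
    """Reversible integer Haar via lifting: predict (detail) then update (mean)."""
    a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
    d = b - a              # predict step: detail coefficients
    s = a + (d >> 1)       # update step: integer approximation (floor mean)
    return s, d

def s_inverse(s, d):
    a = s - (d >> 1)       # undo the update, then the predict, exactly
    out = np.empty(2 * len(s), dtype=np.int64)
    out[0::2], out[1::2] = a, d + a
    return out

x = np.random.default_rng(4).integers(0, 4096, 256)   # e.g. 12-bit band samples
s, d = s_transform(x)
print(np.array_equal(s_inverse(s, d), x))              # True: perfectly reversible
```

Because every lifting step is inverted exactly in integer arithmetic, the decomposition supports lossless coding while still concentrating energy in the approximation signal.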
Evaluation of algorithms for lossless compression of continuous-tone images
NASA Astrophysics Data System (ADS)
Savakis, Andreas E.
2002-01-01
Lossless image compression algorithms for continuous-tone images have received a great deal of attention in recent years. However, reports on benchmarking their performance have been limited. In this paper, we present a comparative study of the following algorithms: UNIX compress, gzip, LZW, Group 3, Group 4, JBIG, old lossless JPEG, JPEG-LS based on LOCO, CALIC, FELICS, S+P transform, and PNG. The test images consist of two sets of eight-bits/pixel continuous-tone images: one set contains nine pictorial images, and another set contains eight document images, obtained from the standard set of CCITT images that were scanned and printed using eight bits/pixel at 200 dpi. In cases where the algorithm under consideration could only be applied to binary data, the gray-scale image was decomposed into bit planes, with and without Gray encoding, and compression was applied to the individual bit planes. The results show that the best compression is obtained using the CALIC and JPEG-LS algorithms.
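The benchmarking methodology (feed the same raw pixels to several general-purpose lossless coders and compare bits/pixel) is easy to reproduce with the compressors in the Python standard library; the image here is a synthetic ramp, not one of the paper's test sets, and the listed coders differ from those in the study.

```python
import bz2
import lzma
import zlib
import numpy as np

xx, yy = np.meshgrid(np.arange(256), np.arange(256))
img = ((xx + yy) // 4).astype(np.uint8)      # smooth diagonal ramp, 8 bits/pixel raw
raw = img.tobytes()

results = {}
for name, compress in [("zlib", zlib.compress), ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    bpp = 8.0 * len(compress(raw)) / img.size
    results[name] = bpp
    print(f"{name}: {bpp:.2f} bits/pixel")
```

Reporting bits/pixel rather than raw byte counts is what makes results comparable across image sizes, the convention the paper's tables follow.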
Context-dependent JPEG backward-compatible high-dynamic range image compression
NASA Astrophysics Data System (ADS)
Korshunov, Pavel; Ebrahimi, Touradj
2013-10-01
High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already on the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate that the perceptual quality of tone-mapped LDR images depends on the context: environmental factors, display parameters, and the image content itself. Based on the results of these subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner to also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.
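Tone mapping, the step whose context-dependence the study measures, can be illustrated with the classic global Reinhard operator — one of the many algorithms among which the paper notes there is no consensus, shown here only as an example on synthetic luminance data.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Global Reinhard operator: scale by log-average luminance, then L/(1+L)."""
    log_avg = np.exp(np.mean(np.log(hdr + eps)))   # geometric-mean luminance
    scaled = key * hdr / log_avg
    return scaled / (1.0 + scaled)                 # maps [0, inf) into [0, 1)

# Synthetic HDR luminance spanning roughly e^-4 .. e^4 (several decades).
hdr = np.exp(np.random.default_rng(2).uniform(-4, 4, (64, 64)))
ldr = reinhard_tonemap(hdr)
print(ldr.min() >= 0.0 and ldr.max() < 1.0)        # fits a display-referred range
```

In a backward-compatible file, an LDR rendition like this is what legacy JPEG decoders see, while the residual needed to recover the HDR values rides along in extra payload.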
Compressive Optical Imaging Systems - Theory, Devices and Implementation
2009-04-01
Only fragments of the report's reference list survive in the indexed text, citing work on compressive optical montage photography (Brady, Pitsianis, Guo, Portnoy, and Fiddy, in SPIE: Photonic Devices and Algorithms for Computing VII), virtual sensor design (Susstrunk et al., in Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications V), Schroeder's Astronomical Optics (Academic Press, 1987), and atomic decomposition by basis pursuit (Chen, Donoho, and Saunders).
All-optical image processing and compression based on Haar wavelet transform.
Parca, Giorgia; Teixeira, Pedro; Teixeira, Antonio
2013-04-20
Fast data processing and compression methods based on the wavelet transform are fundamental tools in real-time 2D data/image analysis, enabling high-definition applications and redundant-data reduction. The need for information processing at high data rates motivates efforts to exploit the speed and parallelism of light for data analysis and compression. Among the several schemes for optical wavelet-transform implementation, the Haar transform offers simple design and fast computation, and it can be easily implemented by optical planar interferometry. We present an all-optical scheme based on an asymmetric-coupler network for achieving fast image processing and compression in the optical domain. The implementation of the Haar wavelet transform through a 3D passive structure is supported by theoretical formulation and simulation results. The design and optimization of the asymmetrical-coupler 3D network are reported, and the Haar wavelet transform, including compression, was achieved, demonstrating the feasibility of our approach.
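The Haar analysis step that the optical network computes is numerically tiny; a digital reference implementation of one 2-D level (averages and differences along rows, then columns) looks like this. It is a sketch for comparison, not a model of the optical structure.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar transform: returns LL, LH, HL, HH sub-bands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row-wise averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row-wise differences
    ll = (a[0::2] + a[1::2]) / 2.0            # column-wise averages/differences
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

img = np.full((8, 8), 7.0)                    # flat image: all detail is zero
ll, lh, hl, hh = haar2d(img)
print(ll[0, 0], np.abs(lh).max(), np.abs(hl).max(), np.abs(hh).max())
```

Compression follows from discarding or coarsely coding the small detail sub-bands; in the optical scheme the same sums and differences are formed by the coupler network instead of arithmetic.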
The Cyborg Astrobiologist: Image Compression for Geological Mapping and Novelty Detection
NASA Astrophysics Data System (ADS)
McGuire, P. C.; Bonnici, A.; Bruner, K. R.; Gross, C.; Ormö, J.; Smosna, R. A.; Walter, S.; Wendt, L.
2013-09-01
We describe an image-comparison technique of Heidemann and Ritter [4,5] that uses image compression and is capable of: (i) detecting novel textures in a series of images, as well as (ii) alerting the user to the similarity of a new image to a previously observed texture. This image-comparison technique has been implemented and tested using our Astrobiology Phone-cam system, which employs Bluetooth communication to send images to a local laptop server in the field for the image-compression analysis. We tested the system in a field site displaying a heterogeneous suite of sandstones, limestones, mudstones and coal beds. Some of the rocks are partly covered with lichen. The image-matching procedure of this system performed very well with data obtained through our field test, grouping all images of yellow lichens together and grouping all images of a coal bed together, and giving a 91% accuracy for similarity detection. Such similarity detection could be employed to make maps of different geological units. The novelty-detection performance of our system was also rather good (a 64% accuracy). Such novelty detection may become valuable in searching for new geological units, which could be of astrobiological interest. By providing more advanced capabilities for similarity detection and novelty detection, this image-compression technique could be useful in giving more scientific autonomy to robotic planetary rovers, and in assisting human astronauts in their geological exploration.
Super-resolution total-variation decoding of JPEG-compressed image data
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Komatsu, Takashi
2007-02-01
In a digital camera, the output image is sometimes heavily corrupted by additive noise, and the noisy image is often compressed with a JPEG encoder. When the coding rate of the JPEG encoder is not high enough, noticeable artifacts such as blocking, ringing, and false colors appear in the JPEG-decoded image. At high ISO sensitivities, even if the coding rate is very high, the camera's noise produces noticeably annoying artifacts in the decoded image. This paper presents a restoration-type decoding approach that recovers a quality-improved image from the JPEG-compressed data, suppressing the coding artifacts particular to JPEG compression while also removing the camera's noise to some extent. The approach is a kind of super-resolution image restoration based on TV (total variation) regularization: to reduce ringing artifacts near sharp edges it selectively restores the DCT coefficients truncated by JPEG compression, whereas in originally smooth image regions it flattens unnecessary signal variations to eliminate blocking artifacts and camera noise. Extending the standard ROF (Rudin-Osher-Fatemi) framework of TV image restoration, we construct a super-resolution approach to JPEG decoding: by introducing the JPEG-compressed data into the fidelity term of the energy functional and adopting a nonlinear cost function softly constrained by the JPEG-compressed data, we define a new energy functional whose minimization gives the super-resolution JPEG decoding.
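The TV-regularized restoration at the heart of the method can be sketched as plain gradient descent on a smoothed ROF energy. This is a minimal denoising sketch only; the paper's full decoder additionally constrains the solution by the quantized DCT coefficients, which is omitted here, and all parameters below are invented.

```python
import numpy as np

def tv_denoise(y, lam=0.15, step=0.05, n_iter=500, eps=0.05):
    """Gradient descent on 0.5*||u - y||^2 + lam * sum sqrt(|grad u|^2 + eps^2)."""
    u = y.astype(float).copy()
    for _ in range(n_iter):
        ux = np.diff(u, axis=1, append=u[:, -1:])    # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag                  # smoothed-TV dual field
        # Divergence: adjoint of the forward difference (wrap-around boundaries).
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - y) - lam * div)
    return u

rng = np.random.default_rng(5)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                              # piecewise-constant scene
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
print(np.mean((noisy - clean) ** 2) > np.mean((denoised - clean) ** 2))
```

TV's preference for piecewise-smooth solutions is exactly what flattens blocking artifacts in smooth regions while leaving genuine edges sharp.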
A new multi-resolution hybrid wavelet for analysis and image compression
NASA Astrophysics Data System (ADS)
Kekre, Hemant B.; Sarode, Tanuja K.; Vig, Rekha
2015-12-01
Because most current image- and video-related applications require higher image resolutions and higher data rates during transmission, better compression techniques are constantly being sought. This paper proposes a new hybrid wavelet technique for image analysis and compression. The proposed hybrid wavelet combines the properties of existing orthogonal transforms in a desirable way and also provides multi-resolution analysis. These wavelets have the unique property that they can be generated for various sizes and types by using different component transforms and varying the number of components at each level of resolution. They have been applied to standard images such as Lena (512 × 512) and Cameraman (256 × 256), and the resulting peak signal-to-noise ratio (PSNR) values are compared with those obtained using some standard existing compression techniques. Considerable improvement in PSNR, as much as 5.95 dB over the standard methods, has been observed, showing that the hybrid wavelet gives better compression. Images of various sizes, such as Scenery (200 × 200), Fruit (375 × 375) and Barbara (112 × 224), have also been compressed using these wavelets to demonstrate their use for different sizes and shapes.
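PSNR, the figure of merit quoted throughout this abstract, is worth pinning down; for 8-bit images it is defined from the mean squared error as follows. This is a generic utility, not the paper's code.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
test = ref + 1.0                       # uniform error of one gray level
print(round(psnr(ref, test), 2))       # 48.13
```

A 5.95 dB PSNR gain, as reported, corresponds to roughly a 4x reduction in mean squared error at the same bit rate.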
Evaluating Texture Compression Masking Effects Using Objective Image Quality Assessment Metrics.
Griffin, Wesley; Olano, Marc
2015-08-01
Texture compression is widely used in real-time rendering to reduce storage and bandwidth requirements. Recent research in compression algorithms has explored both reduced fixed bit rate and variable bit rate algorithms. The results are evaluated at the individual texture level using mean square error, peak signal-to-noise ratio, or visual image inspection. We argue this is the wrong evaluation approach. Compression artifacts in individual textures are likely visually masked in final rendered images and this masking is not accounted for when evaluating individual textures. This masking comes from both geometric mapping of textures onto models and the effects of combining different textures on the same model such as diffuse, gloss, and bump maps. We evaluate final rendered images using rigorous perceptual error metrics. Our method samples the space of viewpoints in a scene, renders the scene from each viewpoint using variations of compressed textures, and then compares each to a ground truth using uncompressed textures from the same viewpoint. We show that masking has a significant effect on final rendered image quality, masking effects and perceptual sensitivity to masking varies by the type of texture, graphics hardware compression algorithms are too conservative, and reduced bit rates are possible while maintaining final rendered image quality.
A Novel Image Compression Algorithm for High Resolution 3D Reconstruction
NASA Astrophysics Data System (ADS)
Siddeq, M. M.; Rodrigues, M. A.
2014-06-01
This research presents a novel algorithm to compress high-resolution images for accurate structured-light 3D reconstruction. Structured-light images contain a pattern of light and shadows projected on the surface of the object, which is captured by the sensor at very high resolution. Our algorithm is concerned with compressing such images to a high degree with minimal loss and without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates with equivalent perceived quality and more accurate reconstruction of the 3D models.
New image compression algorithm based on improved reversible biorthogonal integer wavelet transform
NASA Astrophysics Data System (ADS)
Zhang, Libao; Yu, Xianchuan
2012-10-01
Low computational complexity and high coding efficiency are the most significant requirements for image compression and transmission. The reversible biorthogonal integer wavelet transform (RB-IWT) supports low computational complexity through the lifting scheme (LS) and allows both lossy and lossless decoding from a single bitstream. However, RB-IWT degrades the coding performance and peak signal-to-noise ratio (PSNR) of image compression. In this paper, a new IWT-based compression scheme based on an optimal RB-IWT and an improved SPECK is presented. In the new algorithm, the scaling parameter of each subband is chosen to optimize the transform coefficients. During coding, all image coefficients are encoded using a simple, efficient quadtree partitioning method. The scheme is similar to SPECK, but it uses a single quadtree partitioning instead of the set partitioning and octave-band partitioning of the original SPECK, which reduces coding complexity. Experimental results show that the new algorithm not only has low computational complexity, but also provides lossy-coding PSNR performance comparable to the SPIHT algorithm using RB-IWT filters, and better than the SPECK algorithm. Additionally, the new algorithm supports both efficient lossy and lossless compression using a single bitstream. The presented algorithm is valuable for future remote sensing image compression.
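The single quadtree partitioning that replaces SPECK's two partitioning styles can be sketched as a recursive significance test against a threshold (illustrative only; the real coder interleaves this with sign and refinement bits, and the names here are invented).

```python
import numpy as np

def quadtree_significant(coeffs, threshold):
    """Recursively split square blocks; return (row, col) of each coefficient
    whose magnitude reaches the threshold. Insignificant blocks are pruned
    whole, which is where the coding efficiency comes from."""
    out = []

    def visit(y, x, size):
        block = coeffs[y:y + size, x:x + size]
        if np.max(np.abs(block)) < threshold:
            return                      # one symbol covers the whole zero block
        if size == 1:
            out.append((y, x))
            return
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                visit(y + dy, x + dx, h)

    visit(0, 0, coeffs.shape[0])
    return out

coeffs = np.zeros((8, 8))
coeffs[3, 5] = 50.0                     # one significant wavelet coefficient
significant = quadtree_significant(coeffs, threshold=10.0)
print(significant)                      # [(3, 5)]
```

Lowering the threshold bit-plane by bit-plane turns this test into an embedded coder: the same pass structure serves both lossy truncation and full lossless decoding.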
Lossless compression of hyperspectral images using C-DPCM-APL with reference bands selection
NASA Astrophysics Data System (ADS)
Wang, Keyan; Liao, Huilin; Li, Yunsong; Zhang, Shanshan; Wu, Xianyun
2014-05-01
The availability of hyperspectral images has increased in recent years; they are used in military and civilian applications such as target recognition, surveillance, geological mapping and environmental monitoring. Because of their abundant data volume and special importance, existing lossless compression methods for hyperspectral images mainly exploit the strong spatial or spectral correlation. C-DPCM-APL is a method that achieves the highest lossless compression ratio on the CCSDS hyperspectral images acquired in 2006, but it consumes the longest processing time among existing lossless compression methods, since it determines the optimal prediction length for each band. C-DPCM-APL obtains its best compression performance mainly by using the optimal prediction length, while ignoring the correlation between the reference bands and the current band, which is a crucial factor influencing prediction precision. Considering this, we propose a method that selects reference bands according to the atmospheric-absorption characteristics of hyperspectral images. Experiments on the CCSDS 2006 image data set show that the proposed method greatly reduces computational complexity without degrading lossless compression performance compared to C-DPCM-APL.
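The benefit of choosing well-correlated reference bands can be seen with a minimal inter-band least-squares predictor, a toy stand-in for C-DPCM's clustered predictors; the bands below are synthetic and all names are invented.

```python
import numpy as np

def interband_residual(band, refs):
    """Least-squares predict the current band from reference bands (plus an
    affine offset); the residual is what would actually be entropy-coded."""
    X = np.stack([r.ravel() for r in refs], axis=1).astype(float)
    X = np.column_stack([X, np.ones(X.shape[0])])
    y = band.ravel().astype(float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (y - X @ coef).reshape(band.shape)

rng = np.random.default_rng(3)
ref = rng.normal(100, 20, (32, 32))                  # a well-correlated reference band
band = 0.8 * ref + 5 + rng.normal(0, 1, (32, 32))    # current band + sensor noise
res = interband_residual(band, [ref])
print(res.std() < band.std())                        # residual is far cheaper to code
```

When the reference band sits on the other side of an atmospheric absorption feature, the correlation (and hence this variance reduction) collapses, which is why the proposed method selects references by absorption characteristics.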
Otazo, Ricardo; Kim, Daniel; Axel, Leon; Sodickson, Daniel K.
2010-01-01
First-pass cardiac perfusion MRI is a natural candidate for compressed sensing acceleration since its representation in the combined temporal Fourier and spatial domain is sparse and the required incoherence can be effectively accomplished by k-t random undersampling. However, the required number of samples in practice (three to five times the number of sparse coefficients) limits the acceleration for compressed sensing alone. Parallel imaging may also be used to accelerate cardiac perfusion MRI, with acceleration factors ultimately limited by noise amplification. In this work, compressed sensing and parallel imaging are combined by merging the k-t SPARSE technique with SENSE reconstruction to substantially increase the acceleration rate for perfusion imaging. We also present a new theoretical framework for understanding the combination of k-t SPARSE with SENSE based on distributed compressed sensing theory. This framework, which identifies parallel imaging as a distributed multisensor implementation of compressed sensing, enables an estimate of feasible acceleration for the combined approach. We demonstrate feasibility of 8-fold acceleration in vivo with whole-heart coverage and high spatial and temporal resolution using standard coil arrays. The method is relatively insensitive to respiratory motion artifacts and presents similar temporal fidelity and image quality when compared to GRAPPA with 2-fold acceleration. PMID:20535813
Lamard, Mathieu; Daccache, Wissam; Cazuguel, Guy; Roux, Christian; Cochener, Beatrice
2005-01-01
In this paper we propose a content-based image retrieval method for diagnosis aid in diabetic retinopathy. We characterize images without extracting significant features, and use histograms obtained from images compressed with the JPEG-2000 wavelet scheme to build signatures. Retrieval is carried out by calculating signature distances between the query and database images, using a weighted distance between histograms. Retrieval efficiency is given for different standard types of JPEG-2000 wavelets and for different values of the histogram weights. A classified diabetic retinopathy image database was built to allow algorithm testing. On this database, results are promising: the retrieval efficiency is higher than 70% for some lesion types.
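The retrieval step, ranking database images by a weighted distance between their per-sub-band histogram signatures, reduces to a few lines; the toy signatures and labels below are invented for illustration.

```python
import numpy as np

def weighted_hist_distance(sig_a, sig_b, w):
    """Weighted L1 distance between per-sub-band histogram signatures."""
    return sum(wi * float(np.abs(a - b).sum())
               for wi, a, b in zip(w, sig_a, sig_b))

def retrieve(query, database, w):
    """Database keys ranked by signature distance to the query, closest first."""
    return sorted(database, key=lambda k: weighted_hist_distance(query, database[k], w))

# Toy signatures: one small histogram per "sub-band" (two sub-bands here).
q = [np.array([4, 1, 0]), np.array([2, 2, 2])]
db = {
    "healthy": [np.array([4, 1, 1]), np.array([2, 2, 2])],
    "lesion":  [np.array([0, 5, 0]), np.array([6, 0, 0])],
}
print(retrieve(q, db, w=[1.0, 0.5]))   # ['healthy', 'lesion']
```

Tuning the weights `w` per sub-band is the knob the paper evaluates: sub-bands whose histograms discriminate lesion types best get larger weights.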
NASA Astrophysics Data System (ADS)
Gao, Fang; Guo, Shuxu
2016-01-01
An efficient lossless compression scheme for hyperspectral images using a conventional recursive least-squares (CRLS) predictor with adaptive prediction bands is proposed. The proposed scheme first calculates preliminary estimates to form the input vector of the CRLS predictor. Then the number of bands used in prediction is adaptively selected by an exhaustive search for the number that minimizes the prediction residual. Finally, after prediction, the prediction residuals are sent to an adaptive arithmetic coder. Experiments on the newer Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images in the Consultative Committee for Space Data Systems (CCSDS) test set show that the proposed scheme yields an average compression performance of 3.29 bits/pixel, 5.57 bits/pixel, and 2.44 bits/pixel on the 16-bit calibrated images, the 16-bit uncalibrated images, and the 12-bit uncalibrated images, respectively. Experimental results demonstrate that the proposed scheme obtains compression results very close to clustered differential pulse code modulation with adaptive prediction length, which achieves the best lossless compression performance for AVIRIS images in the CCSDS test set, and outperforms other current state-of-the-art schemes with relatively low computational complexity.
Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.
Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre
2008-12-01
Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zero tree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted on this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.
Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging
NASA Astrophysics Data System (ADS)
Diaz, Nelson; Rueda, Hoover; Arguello, Henry
2016-05-01
Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of coded apertures must account for saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. In this paper, the design of adaptive uniform grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements of up to 10 dB in the image reconstruction of the proposed method compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
Image compression with embedded wavelet coding via vector quantization
NASA Astrophysics Data System (ADS)
Katsavounidis, Ioannis; Kuo, C.-C. Jay
1995-09-01
In this research, we improve Shapiro's EZW algorithm by performing vector quantization (VQ) of the wavelet transform coefficients. The proposed VQ scheme uses different vector dimensions for different wavelet subbands, and also different codebook sizes, so that more bits are assigned to those subbands that have more energy. Another feature is that the vector codebooks used are tree-structured to maintain the embedding property. Finally, the energy of these vectors is used as a prediction parameter between different scales to improve the performance. We investigate the performance of the proposed method together with the 7-9 tap biorthogonal wavelet basis, and look into ways to incorporate lossless compression techniques.
Compressive Passive Millimeter Wave Imaging with Extended Depth of Field
2012-01-01
Over the past several years, imaging using millimeter wave (mmW) and terahertz technology has gained a lot of interest [1], [2], [3]. This interest... weapons are clearly detected in the mmW image. Recently, in [3], Mait et al. presented a computational imaging method to extend the depth-of-field of a... passive mmW imaging system. The method uses a cubic phase element in the pupil plane of the system to render system operation relatively insensitive
Method and apparatus for optical encoding with compressible imaging
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2006-01-01
The present invention presents an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSP) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.
Software Compression for Partially Parallel Imaging with Multi-channels.
Huang, Feng; Vijayakumar, Sathya; Akao, James
2005-01-01
In magnetic resonance imaging, multi-channel phased array coils enjoy a high signal-to-noise ratio (SNR) and better parallel imaging performance. But as the number of channels increases, reconstruction time and computer memory requirements become significant problems. In this work, principal component analysis is applied to reduce the size of the data while preserving parallel imaging performance. Clinical data collected using a 32-channel cardiac coil are used in the experiments. Experimental results show that the proposed method dramatically reduces the processing time without much damage to the reconstructed image.
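The channel-reduction step can be sketched as PCA on the coil-by-sample data matrix (a sketch, not the authors' pipeline: real coil data are complex-valued and the choice of k virtual channels is an assumption):

```python
import numpy as np

def pca_compress_channels(data, k):
    """Reduce an (n_channels, n_samples) multi-coil data matrix to k
    virtual channels via principal component analysis (PCA).
    Returns compressed data, the (n_channels, k) basis, and the mean."""
    mean = data.mean(axis=1, keepdims=True)
    centered = data - mean
    # Eigen-decomposition of the channel covariance matrix
    cov = centered @ centered.conj().T / data.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]      # k strongest components
    W = vecs[:, order]
    return W.conj().T @ centered, W, mean

def pca_expand(compressed, W, mean):
    """Approximate reconstruction of the original channels."""
    return W @ compressed + mean
```

Downstream reconstruction then runs on k virtual channels instead of 32, which is where the reported speed-up comes from.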
Dual domain watermarking for authentication and compression of cultural heritage images.
Zhao, Yang; Campisi, Patrizio; Kundur, Deepa
2004-03-01
This paper proposes an approach for the combined image authentication and compression of color images by making use of a digital watermarking and data hiding framework. The digital watermark is comprised of two components: a soft-authenticator watermark for authentication and tamper assessment of the given image, and a chrominance watermark employed to improve the efficiency of compression. The multipurpose watermark is designed by exploiting the orthogonality of various domains used for authentication, color decomposition and watermark insertion. The approach is implemented as a DCT-DWT dual domain algorithm and is applied for the protection and compression of cultural heritage imagery. Analysis is provided to characterize the behavior of the scheme under ideal conditions. Simulations and comparisons of the proposed approach with state-of-the-art existing work demonstrate the potential of the overall scheme.
Medical image compression based on vector quantization with variable block sizes in wavelet domain.
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
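The quadtree partitioning into variable-size blocks can be sketched as follows (a sketch only: plain local variance stands in for the paper's local fractal dimension as the complexity measure, and the threshold and minimum size are assumptions):

```python
import numpy as np

def quadtree_partition(block, top, left, min_size, var_thresh, out):
    """Recursively split a square image block into quadrants until the
    local variance (a stand-in for the local-fractal-dimension complexity
    measure) falls below var_thresh or min_size is reached.
    Appends (top, left, size) tuples to `out`."""
    size = block.shape[0]
    if size <= min_size or np.var(block) <= var_thresh:
        out.append((top, left, size))
        return
    h = size // 2
    quadtree_partition(block[:h, :h], top,     left,     min_size, var_thresh, out)
    quadtree_partition(block[:h, h:], top,     left + h, min_size, var_thresh, out)
    quadtree_partition(block[h:, :h], top + h, left,     min_size, var_thresh, out)
    quadtree_partition(block[h:, h:], top + h, left + h, min_size, var_thresh, out)
```

Busy regions end up as many small blocks (quantized with richer codebooks), while smooth regions stay as a few large blocks.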
Lossless image compression with projection-based and adaptive reversible integer wavelet transforms.
Deever, Aaron T; Hemami, Sheila S
2003-01-01
Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients. It yields optimal fixed prediction steps for lifting-based wavelet transforms and unifies many wavelet-based lossless image compression results found in the literature. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance. It also provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
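The reversible lifting-based transforms this paper builds on can be illustrated with the 5/3 (LeGall) integer wavelet, the lossless transform of JPEG2000. A minimal sketch, using periodic rather than JPEG2000's symmetric boundary extension for brevity:

```python
import numpy as np

def cdf53_forward(x):
    """Reversible integer 5/3 (LeGall) wavelet via lifting.
    Input length must be even. Returns (low, high) integer subbands."""
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    # Prediction step: detail = odd sample minus average of even neighbours
    d -= (s + np.roll(s, -1)) // 2
    # Update step: keep the low band a running average of the signal
    s += (np.roll(d, 1) + d + 2) // 4
    return s, d

def cdf53_inverse(s, d):
    """Exact inverse: undo the update, then the prediction."""
    s = s - (np.roll(d, 1) + d + 2) // 4
    x = np.empty(2 * s.size, dtype=np.int64)
    x[0::2] = s
    x[1::2] = d + (s + np.roll(s, -1)) // 2
    return x
```

Because every lifting step adds an integer function of the *other* polyphase channel, each step is exactly invertible, which is the property the paper's projection-based prediction steps preserve while lowering the residual entropy.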
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
Multispectral image compression methods for improvement of both colorimetric and spectral accuracy
NASA Astrophysics Data System (ADS)
Liang, Wei; Zeng, Ping; Xiao, Zhaolin; Xie, Kun
2016-07-01
We propose that both colorimetric and spectral distortion in compressed multispectral images can be reduced by a composite model, named OLCP(W)-X (OptimalLeaders_Color clustering-PCA-W weighted-X coding). In the model, first the spectral-colorimetric clustering is designed for sparse equivalent representation by generating spatial basis. Principal component analysis (PCA) is subsequently used in the manipulation of spatial basis for spectral redundancy removal. Then error compensation mechanism is presented to produce predicted difference image, and finally combined with visual characteristic matrix W, and the created image is compressed by traditional multispectral image coding schemes. We introduce four model-based algorithms to explain their validity. The first two algorithms are OLCPWKWS (OLC-PCA-W-KLT-WT-SPIHT) and OLCPKWS, in which Karhunen-Loeve transform, wavelet transform, and set partitioning in hierarchical trees coding are applied for the created image compression. And the latter two methods are OLCPW-JPEG2000-MCT and OLCP-JPEG2000-MCT. Experimental results show that, compared with the corresponding traditional coding, the proposed OLCPW-X schemes can significantly improve the colorimetric accuracy of rebuilding images under various illumination conditions and generally achieve satisfactory peak signal-to-noise ratio under the same compression ratio. And OLCP-X methods could always ensure superior spectrum reconstruction. Furthermore, our model has excellent performance on user interaction.
OARSI Clinical Trials Recommendations: Knee imaging in clinical trials in osteoarthritis.
Hunter, D J; Altman, R D; Cicuttini, F; Crema, M D; Duryea, J; Eckstein, F; Guermazi, A; Kijowski, R; Link, T M; Martel-Pelletier, J; Miller, C G; Mosher, T J; Ochoa-Albíztegui, R E; Pelletier, J-P; Peterfy, C; Raynauld, J-P; Roemer, F W; Totterman, S M; Gold, G E
2015-05-01
Significant advances have occurred in our understanding of the pathogenesis of knee osteoarthritis (OA) and some recent trials have demonstrated the potential for modification of the disease course. The purpose of this expert opinion, consensus driven exercise is to provide detail on how one might use and apply knee imaging in knee OA trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography, sequence/protocol recommendations/hardware for magnetic resonance imaging (MRI)); commonly encountered problems (including positioning, hardware and coil failures, sequences artifacts); quality assurance (QA)/control procedures; measurement methods; measurement performance (reliability, responsiveness, validity); recommendations for trials; and research recommendations.
Assessment of low-contrast detectability for compressed digital chest images
NASA Astrophysics Data System (ADS)
Cook, Larry T.; Insana, Michael F.; McFadden, Michael A.; Hall, Timothy J.; Cox, Glendon G.
1994-04-01
The ability of human observers to detect low-contrast targets in screen-film (SF) images, computed radiographic (CR) images, and compressed CR images was measured using contrast detail (CD) analysis. The results of these studies were used to design a two-alternative forced-choice (2AFC) experiment to investigate the detectability of nodules in adult chest radiographs. CD curves for a common screen-film system were compared with CR images compressed up to 125:1. Data from clinical chest exams were used to define a CD region of clinical interest that sufficiently challenged the observer. From that data, simulated lesions were introduced into 100 normal CR chest films, and forced-choice observer performance studies were performed. CR images were compressed using a full-frame discrete cosine transform (FDCT) technique, where the 2D Fourier space was divided into four areas of different quantization depending on the cumulative power spectrum (energy) of each image. The characteristic curve of the CR images was adjusted so that optical densities matched those of the SF system. The CD curves for SF and uncompressed CR systems were statistically equivalent. The slope of the CD curve for each was -1.0, as predicted by the Rose model. There was a significant degradation in detection found for CR images compressed to 125:1. Furthermore, contrast-detail analysis demonstrated that many pulmonary nodules encountered in clinical practice are significantly above the average observer threshold for detection. We designed a 2AFC observer study using simulated 1-cm lesions introduced into normal CR chest radiographs. Detectability was reduced for all compressed CR radiographs.
Reduction of blocking effects for the JPEG baseline image compression standard
NASA Technical Reports Server (NTRS)
Zweigle, Gregary C.; Bamberger, Roberto H.
1992-01-01
Transform coding has been chosen for still image compression in the Joint Photographic Experts Group (JPEG) standard. Although transform coding outperforms many other image compression methods and has fast algorithms for implementation, it is limited by a blocking effect at low bit rates. The blocking effect is inherent in all nonoverlapping transforms. This paper presents a technique for reducing blocking while remaining compatible with the JPEG standard. Simulations show that the system yields subjective performance improvements at the cost of only a marginal increase in bit rate.
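The nonoverlapping block transform at the root of the blocking effect can be sketched with JPEG's 8x8 DCT (an illustration only; quantization, zigzag scanning, and entropy coding are omitted):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, the transform behind JPEG's
    8x8 block coding."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def block_dct(img, C):
    """Apply the DCT independently to each non-overlapping 8x8 block.
    The independence of the blocks is what produces blocking artifacts
    once the coefficients are coarsely quantized at low bit rates."""
    n = C.shape[0]
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(0, h, n):
        for j in range(0, w, n):
            out[i:i+n, j:j+n] = C @ img[i:i+n, j:j+n] @ C.T
    return out

def block_idct(coef, C):
    return block_dct(coef, C.T)  # orthonormal: inverse uses the transpose
```

Without quantization the round trip is exact; quantizing each block independently is what makes block boundaries visible, and post-processing or overlapped transforms are the usual remedies.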
Application of adaptive wavelet transforms via lifting in image data compression
NASA Astrophysics Data System (ADS)
Ye, Shujiang; Zhang, Ye; Liu, Baisen
2008-10-01
An adaptive wavelet transform via lifting is proposed, in which the update filter is selected according to the signal's characteristics. Perfect reconstruction is possible without any overhead cost. To ensure the system's stability, the update step is placed before the prediction step in the lifting scheme. The adaptive lifting wavelet transform benefits image compression because of its high stability, the small high-frequency coefficients it produces, and its perfect reconstruction property. Combining the adaptive lifting wavelet transform with SPIHT, image compression is realized in this paper, and the results are satisfactory.
High-quality correspondence imaging based on sorting and compressive sensing technique
NASA Astrophysics Data System (ADS)
Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Luo, Chunling; Ge, Peng
2016-11-01
We propose a high-quality imaging method based on correspondence imaging (CI) using a sorting and compressive sensing (CS) technique. Unlike the traditional CI, the positive and negative (PN) subsets are created by a sorting method, and the image of an object is then recovered from the PN subsets using a CS technique. We compare the performance of the proposed method with different ghost imaging (GI) algorithms using the data from a single-detector computational GI system. The results demonstrate that our method enjoys excellent imaging and anti-interference capabilities, and can further reduce the measurement numbers compared with the direct use of CS in GI.
Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder
August, Isaac; Oiknine, Yaniv; AbuLeil, Marwan; Abdulhalim, Ibrahim; Stern, Adrian
2016-01-01
Spectroscopic imaging has proven to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution, it has become extremely challenging to design and implement such systems in a miniaturized and cost-effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide-band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude fewer than would be required using conventional systems. PMID:27004447
NASA Astrophysics Data System (ADS)
Babel, Marie; Parrein, Benoit; Deforges, Olivier; Normand, Nicolas; Guedon, Jean-Pierre; Ronsin, Joseph
2005-01-01
Within the framework of telemedicine, the volume of images calls first for efficient lossless compression methods for storage purposes. Furthermore, a multiresolution scheme including Region of Interest (ROI) processing is an important feature for remote access to medical images. Moreover, securing sensitive data (e.g. metadata from DICOM images) is one more expected functionality: indeed, the loss of IP packets could have serious consequences for a given diagnosis. For this purpose, we present in this paper an original scalable image compression technique (the LAR method) used in association with a channel coding method based on the Mojette Transform, so that a hierarchical priority encoding system is elaborated. This system provides a solution for the secure transmission of medical images through low-bandwidth networks such as the Internet.
NASA Astrophysics Data System (ADS)
Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Ronsin, Joseph
2004-12-01
Within the framework of telemedicine, the volume of images calls first for efficient lossless compression methods for storage purposes. Furthermore, a multiresolution scheme including Region of Interest (ROI) processing is an important feature for remote access to medical images. Moreover, securing sensitive data (e.g. metadata from DICOM images) is one more expected functionality: indeed, the loss of IP packets could have serious consequences for a given diagnosis. For this purpose, we present in this paper an original scalable image compression technique (the LAR method) used in association with a channel coding method based on the Mojette Transform, so that a hierarchical priority encoding system is elaborated. This system provides a solution for the secure transmission of medical images through low-bandwidth networks such as the Internet.
Region segmentation techniques for object-based image compression: a review
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
2004-10-01
Image compression based on transform coding appears to be approaching an asymptotic bit rate limit for application-specific distortion levels. However, a new compression technology, called object-based compression (OBC), promises improved rate-distortion performance at higher compression ratios. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. Advantages of OBC include efficient representation of commonly occurring textures and shapes in terms of pointers into a compact codebook of region contents and boundary primitives. This facilitates fast decompression via substitution, at the cost of codebook search in the compression step. Segmentation cost and error are significant disadvantages in current OBC implementations. Several innovative techniques have been developed for region segmentation, including (a) moment-based analysis, (b) texture representation in terms of a syntactic grammar, and (c) transform coding approaches such as the wavelet-based compression used in MPEG-7 or JPEG-2000. Region-based characterization with variance templates is better understood, but lacks the locality of wavelet representations. In practice, tradeoffs are made between representational fidelity, computational cost, and storage requirements. This paper reviews current techniques for automatic region segmentation and representation, especially those that employ wavelet classification and region growing techniques. Implementational discussion focuses on complexity measures and performance metrics such as segmentation error and computational cost.
JP3D compression of solar data-cubes: Photospheric imaging and spectropolarimetry
NASA Astrophysics Data System (ADS)
Del Moro, Dario; Giovannelli, Luca; Pietropaolo, Ermanno; Berrilli, Francesco
2017-02-01
Hyperspectral imaging is a ubiquitous technique in solar physics observations, and recent advances in solar instrumentation have enabled us to acquire and record data at an unprecedented rate. The huge amount of data which will be archived by the upcoming solar observatories presses us to compress the data in order to reduce storage space and transfer times. The correlation present over all dimensions of solar data-sets, spatial, temporal and spectral, suggests the use of a 3D wavelet decomposition to achieve higher compression rates. In this work, we evaluate the performance of the recent JPEG2000 Part 10 standard, known as JP3D, for the lossless compression of several types of solar data-cubes. We explore the differences in: a) the compressibility of broad-band or narrow-band time-sequences, and of I or V Stokes profiles in spectropolarimetric data-sets; b) compressing data in [x,y,λ] packages at different times or data in [x,y,t] packages at different wavelengths; c) compressing a single large data-cube or several smaller data-cubes; d) compressing data which is under-sampled or super-sampled with respect to the diffraction cut-off.
Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm
NASA Astrophysics Data System (ADS)
Sarika, G.; Unnithan, Harikuttan; Peter, Smitha
2011-10-01
When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as images, the use of Markov properties in the Slepian-Wolf decoder does not work well for grayscale images. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values, but does not shuffle the pixel locations. After down-sampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section, and is recovered using the local statistics of the image. Here the decoder gets only a lower-resolution version of the image. In addition, this method provides only partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and lower computational complexity.
Modeling log-compressed ultrasound images for radio frequency signal recovery.
Seabra, José; Sanches, João
2008-01-01
This paper presents an algorithm for recovering the radio frequency (RF) signal provided by the ultrasound probe from the log-compressed ultrasound images displayed on ultrasound equipment. Commercial ultrasound scanners perform nonlinear image compression to reduce the dynamic range of the ultrasound (US) signal in order to improve image visualization. Moreover, the clinician may adjust other parameters, such as brightness, gain and contrast, to improve the image quality of a given anatomical detail. These operations significantly change the statistical distribution of the original raw RF signal, which is assumed, based on physical considerations about the signal formation process, to be Rayleigh distributed. Therefore, the image pixels are no longer Rayleigh distributed, and the RF signal is not usually available on common ultrasound equipment. For statistical data processing purposes, more important than having "good looking" images is having realistic models to describe the data. In this paper, a nonlinear compression parametric function is used to model the pre-processed image in order to recover the original RF image as well as the contrast and brightness parameters. Tests using synthetic and real data, and statistical measures such as the Kolmogorov-Smirnov and Kullback-Leibler divergences, are used to assess the results. It is shown that the proposed estimation model represents the observed data clearly better than the general assumption of the data being modeled by a Rayleigh distribution.
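The compression model can be sketched as a two-parameter log law (a sketch under assumptions: y = a*log(x) + b stands in for the paper's parametric function, and here the parameters are recovered by a simple fit to known envelope data rather than the paper's estimation procedure):

```python
import numpy as np

def log_compress(rf_envelope, a, b):
    """Display-stage dynamic-range compression applied by the scanner:
    y = a*log(x) + b, where a and b model gain/contrast/brightness."""
    return a * np.log(rf_envelope) + b

def log_decompress(pixels, a, b):
    """Recover the Rayleigh-distributed RF envelope from displayed
    pixels, given the compression parameters a and b."""
    return np.exp((pixels - b) / a)
```

Once a and b are estimated, decompressing the pixels restores data on which Rayleigh-based statistical processing is again valid.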
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
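The pixel-wise temporal coding and reconstruction can be sketched as a small linear system per pixel (an illustration only: the sensor above uses fewer measurements than frames and relies on a reconstruction prior, whereas this sketch uses at least as many shutter codes as frames so that plain least squares suffices):

```python
import numpy as np

def compress_temporal(frames, codes):
    """Simulate focal-plane temporal coding: each of M shutter codes
    weights the T frames before integration on the pixel.
    frames: (T, H, W), codes: (M, T) binary. Returns (M, H, W)."""
    return np.tensordot(codes, frames, axes=(1, 0))

def reconstruct_temporal(meas, codes):
    """Least-squares recovery of the T frames at every pixel."""
    M, H, W = meas.shape
    x, *_ = np.linalg.lstsq(codes, meas.reshape(M, -1), rcond=None)
    return x.reshape(-1, H, W)
```

Because the modulation happens pixel-by-pixel during exposure, the time resolution is set by the shutter switching speed rather than the readout rate, which is the sensor's key advantage.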
Novel lossless FMRI image compression based on motion compensation and customized entropy coding.
Sanchez, Victor; Nasiopoulos, Panos; Abugharbieh, Rafeef
2009-07-01
We recently proposed a method for lossless compression of 4-D medical images based on the advanced video coding standard (H.264/AVC). In this paper, we present two major contributions that enhance our previous work for compression of functional MRI (fMRI) data: 1) a new multiframe motion compensation process that employs 4-D search, variable-size block matching, and bidirectional prediction; and 2) a new context-based adaptive binary arithmetic coder designed for lossless compression of the residual and motion vector data. We validate our method on real fMRI sequences of various resolutions and compare the performance to two state-of-the-art methods: 4D-JPEG2000 and H.264/AVC. Quantitative results demonstrate that our proposed technique significantly outperforms current state of the art with an average compression ratio improvement of 13%.
Joint pattern recognition/data compression concept for ERTS multispectral imaging
NASA Technical Reports Server (NTRS)
Hilbert, E. E.
1975-01-01
This paper describes a new technique which jointly applies clustering and source encoding concepts to obtain data compression. The cluster compression technique basically uses clustering to extract features from the measurement data set which are used to describe characteristics of the entire data set. In addition, the features may be used to approximate each individual measurement vector by forming a sequence of scalar numbers which define each measurement vector in terms of the cluster features. This sequence, called the feature map, is then efficiently represented by using source encoding concepts. A description of a practical cluster compression algorithm is given and experimental results are presented to show trade-offs and characteristics of various implementations. Examples are provided which demonstrate the application of cluster compression to multispectral image data of the Earth Resources Technology Satellite.
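The cluster compression idea, centroids as the extracted features and the per-pixel label sequence as the feature map, can be sketched with plain k-means (a sketch, not Hilbert's algorithm: the deterministic initialization and iteration count are assumptions, and the source encoding of the feature map is omitted):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means with a simple deterministic initialization
    (evenly spaced samples). Returns (centroids, labels)."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def cluster_compress(pixels, k):
    """Centroids are the cluster 'features'; the label sequence is the
    'feature map' that would then be source encoded."""
    return kmeans(pixels, k)

def cluster_decompress(centroids, labels):
    """Approximate each measurement vector by its cluster feature."""
    return centroids[labels]
```

For multispectral data with a handful of ground-cover classes, a short label per pixel plus a small centroid table replaces the full measurement vectors, which is where the compression comes from.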
Automatic measurement of compression wood cell attributes in fluorescence microscopy images.
Selig, B; Luengo Hendriks, C L; Bardage, S; Daniel, G; Borgefors, G
2012-06-01
This paper presents a new automated method for analyzing compression wood fibers in fluorescence microscopy. Abnormal wood known as compression wood is present in almost every softwood tree harvested. Compression wood fibers show a different cell wall morphology and chemistry compared to normal wood fibers, and their mechanical and physical characteristics are considered detrimental for both construction wood and pulp and paper purposes. Currently there is the need for improved methodologies for characterization of lignin distribution in wood cell walls, such as from compression wood fibers, that will allow for a better understanding of fiber mechanical properties. Traditionally, analysis of fluorescence microscopy images of fiber cross-sections has been done manually, which is time consuming and subjective. Here, we present an automatic method, using digital image analysis, that detects and delineates softwood fibers in fluorescence microscopy images, dividing them into cell lumen, normal and highly lignified areas. It also quantifies the different areas, as well as measures cell wall thickness. The method is evaluated by comparing the automatic with a manual delineation. While the boundaries between the various fiber wall regions are detected using the automatic method with precision similar to inter and intra expert variability, the position of the boundary between lumen and the cell wall has a systematic shift that can be corrected. Our method allows for transverse structural characterization of compression wood fibers, which may allow for improved understanding of the micro-mechanical modeling of wood and pulp fibers.
Orthogonal wavelets for image transmission and compression schemes: implementation and results
NASA Astrophysics Data System (ADS)
Ahmadian, Alireza; Bharath, Anil A.
1996-10-01
Diagnostic quality medical images consume vast amounts of network time, system bandwidth and disk storage in current computer architectures. There are many ways in which the use of system and network resources may be optimized without compromising diagnostic image quality. One of these is in the choice of image representation, both for storage and transfer. In this paper, we show how a particularly flexible method of image representation, based on Mallat's algorithm, leads to efficient methods of both lossy image compression and progressive image transmission. We illustrate the application of a progressive transmission scheme to medical images, and provide some examples of image refinement in a multiscale fashion. We show how thumbnail images created by a multiscale orthogonal decomposition can be optimally interpolated, in a minimum square error sense, based on a generalized Moore-Penrose inverse operator. In the final part of this paper, we show that the representation can provide a framework for lossy image compression, with signal/noise ratios far superior to those provided by a standard JPEG algorithm. The approach can also accommodate precision-based progressive coding. We show the results of increasing the priority of encoding a selected region of interest in a bit-stream describing a multiresolution image representation.
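Mallat-style multiscale decomposition can be illustrated with a one-level 2-D Haar transform, the simplest orthogonal wavelet; this is a minimal pure-Python sketch, not the filters used in the paper:

```python
# One-level 2-D Haar transform: a minimal stand-in for Mallat's algorithm.
def haar_1d(seq):
    avg = [(seq[2 * i] + seq[2 * i + 1]) / 2 for i in range(len(seq) // 2)]
    dif = [(seq[2 * i] - seq[2 * i + 1]) / 2 for i in range(len(seq) // 2)]
    return avg + dif

def haar_2d(img):
    rows = [haar_1d(r) for r in img]                # transform each row
    cols = [haar_1d(list(c)) for c in zip(*rows)]   # then each column
    return [list(r) for r in zip(*cols)]            # transpose back

t = haar_2d([[1, 3], [5, 7]])
# t[0][0] is the low-pass "thumbnail" value; the rest are detail coefficients
```

Repeating the transform on the low-pass quadrant gives the multiscale pyramid; transmitting coarse levels first yields progressive refinement.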
Tissue cartography: compressing bio-image data by dimensional reduction.
Heemskerk, Idse; Streichan, Sebastian J
2015-12-01
The high volumes of data produced by state-of-the-art optical microscopes encumber research. We developed a method that reduces data size and processing time by orders of magnitude while disentangling signal by taking advantage of the laminar structure of many biological specimens. Our Image Surface Analysis Environment automatically constructs an atlas of 2D images for arbitrarily shaped, dynamic and possibly multilayered surfaces of interest. Built-in correction for cartographic distortion ensures that no information on the surface is lost, making the method suitable for quantitative analysis. We applied our approach to 4D imaging of a range of samples, including a Drosophila melanogaster embryo and a Danio rerio beating heart.
Argon X-ray line imaging - A compression diagnostic for inertial confinement fusion targets
Koppel, L.N.
1980-01-01
The paper describes argon X-ray line imaging, which measures the compressed fuel volume directly by forming one-dimensional images of X-rays from argon gas seeded into the D-T fuel. The photon energies of the X-rays are recorded on the film of a diffraction-crystal spectrograph. Neutron activation, which detects activated nuclei produced by the interaction of 14-MeV neutrons with the selected materials of the target, allows the final compressed fuel density to be calculated using a hydrodynamics simulation code together with knowledge of the total number of activated nuclei and the neutron yield. Argon X-ray line imaging appears to be a valid fuel-compression diagnostic for final fuel densities in the range of 10 to 50 times liquid D-T density.
A JPEG-like algorithm for compression of single-sensor camera image
NASA Astrophysics Data System (ADS)
Benahmed Daho, Omar; Larabi, Mohamed-Chaker; Mukhopadhyay, Jayanta
2011-01-01
This paper presents a JPEG-like coder for compression of single-sensor camera images using a Bayer Color Filter Array (CFA). The originality of the method is a joint compression/demosaicking scheme in the DCT domain. In this method, the captured CFA raw data is first separated into four distinct components and then converted to YCbCr. A JPEG compression scheme is then applied. At the decoding level, the bitstream is decompressed until the DCT coefficients are reached; the latter are used for the interpolation stage. The obtained results are better than those of conventional JPEG in terms of CPSNR, ΔE2000 and SSIM. The obtained JPEG-like scheme is also less complex.
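The first step described, separating the Bayer CFA raw data into four distinct components, can be sketched as follows (an assumed RGGB layout, for illustration only):

```python
# Split Bayer CFA raw data into four colour components (assumed RGGB layout:
# R at even rows/even cols, G at the two mixed positions, B at odd/odd).
def split_bayer(raw):
    r  = [row[0::2] for row in raw[0::2]]
    g1 = [row[1::2] for row in raw[0::2]]
    g2 = [row[0::2] for row in raw[1::2]]
    b  = [row[1::2] for row in raw[1::2]]
    return r, g1, g2, b

r, g1, g2, b = split_bayer([[10, 20], [30, 40]])
```

Each quarter-resolution plane would then be color-converted and passed through a standard DCT/quantization/entropy-coding chain.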
NASA Astrophysics Data System (ADS)
Han, Tao; Chen, Lingyun; Lai, Chao-Jen; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Shaw, Chris C.
2009-02-01
Images of mastectomy breast specimens have been acquired with a benchtop experimental cone beam CT (CBCT) system. The resulting images have been segmented to model an uncompressed breast for simulation of various CBCT techniques. To further simulate conventional or tomosynthesis mammographic imaging for comparison with the CBCT technique, a deformation technique was developed to convert the CT data for an uncompressed breast to a compressed breast without altering the breast volume or regional breast density. With this technique, 3D breast deformation is separated into two 2D deformations in the coronal and axial views. To preserve the total breast volume and regional tissue composition, each 2D deformation step was achieved by altering the square pixels into rectangular ones with the pixel areas unchanged and resampling with the original square pixels using bilinear interpolation. The compression was modeled by first stretching the breast in the superior-inferior direction in the coronal view. The image data were first deformed by distorting the voxels with a uniform distortion ratio; these deformed data were then deformed again using distortion ratios varying with the breast thickness and re-sampled. The deformation procedures were then applied in the axial view to stretch the breast in the chest-wall-to-nipple direction while shrinking it in the medial-to-lateral direction, after which the data were re-sampled and converted into data for uniform cubic voxels. Threshold segmentation was applied to the final deformed image data to obtain the 3D compressed breast model. Our results show that the original segmented CBCT image data were successfully converted into those for a compressed breast with the same volume and regional density preserved. Using this compressed breast model, conventional and tomosynthesis mammograms were simulated for comparison with CBCT.
Grid-Independent Compressive Imaging and Fourier Phase Retrieval
ERIC Educational Resources Information Center
Liao, Wenjing
2013-01-01
This dissertation is composed of two parts. In the first part techniques of band exclusion(BE) and local optimization(LO) are proposed to solve linear continuum inverse problems independently of the grid spacing. The second part is devoted to the Fourier phase retrieval problem. Many situations in optics, medical imaging and signal processing call…
NASA Astrophysics Data System (ADS)
Li, Jiaosheng; Zhong, Liyun; Zhang, Qinnan; Zhou, Yunfei; Xiong, Jiaxiang; Tian, Jindong; Lu, Xiaoxu
2017-01-01
We propose an optical image hiding method based on dual-channel simultaneous phase-shifting interferometry (DCSPSI) and compressive sensing (CS) in the all-optical domain. In the DCSPSI architecture, a secret image is first embedded in the host image without destroying the original host's form, and a pair of interferograms with phase shifts of π/2 is simultaneously generated by the polarization components and captured by two CCDs. The holograms are then further compressively sampled by CS to reduce the data volume. The proposed strategy provides a useful solution for real-time optical image security transmission while greatly reducing the data volume of the interferograms. The experimental result demonstrates the validity and feasibility of the proposed method.
Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.
2015-01-01
Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium. PMID:26503834
NASA Astrophysics Data System (ADS)
Liang, Jinyang; Gao, Liang; Hai, Pengfei; Li, Chiye; Wang, Lihong V.
2015-10-01
Compressed ultrafast photography (CUP), a computational imaging technique, is synchronized with short-pulsed laser illumination to enable dynamic three-dimensional (3D) imaging. By leveraging the time-of-flight (ToF) information of pulsed light backscattered by the object, ToF-CUP can reconstruct a volumetric image from a single camera snapshot. In addition, the approach unites the encryption of depth data with the compressed acquisition of 3D data in a single snapshot measurement, thereby allowing efficient and secure data storage and transmission. We demonstrated high-speed 3D videography of moving objects at up to 75 volumes per second. The ToF-CUP camera was applied to track the 3D position of a live comet goldfish. We have also imaged a moving object obscured by a scattering medium.
Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform.
Lai, Zongying; Qu, Xiaobo; Liu, Yunsong; Guo, Di; Ye, Jing; Zhan, Zhifang; Chen, Zhong
2016-01-01
Compressed sensing magnetic resonance imaging has shown great capacity for accelerating magnetic resonance imaging if an image can be sparsely represented. How the image is sparsified seriously affects its reconstruction quality. In the present study, a graph-based redundant wavelet transform is introduced to sparsely represent magnetic resonance images in iterative image reconstructions. With this transform, image patches are viewed as vertices and their differences as edges, and the shortest path on the graph minimizes the total difference of all image patches. Using the l1-norm regularized formulation of the problem, solved by an alternating-direction minimization with continuation algorithm, the experimental results demonstrate that the proposed method outperforms several state-of-the-art reconstruction methods in removing artifacts and achieves lower reconstruction error on the tested datasets.
Application of Compressed Sensing to 2-D Ultrasonic Propagation Imaging System data
Mascarenas, David D.; Farrar, Charles R.; Chong, See Yenn; Lee, J.R.; Park, Gyu Hae; Flynn, Eric B.
2012-06-29
The Ultrasonic Propagation Imaging (UPI) System is a unique, non-contact, laser-based ultrasonic excitation and measurement system developed for structural health monitoring applications. The UPI system imparts laser-induced ultrasonic excitations at user-defined locations on a structure of interest. The response of these excitations is then measured by piezoelectric transducers. By using appropriate data reconstruction techniques, a time-evolving image of the response can be generated. A representative measurement of a plate might contain 800x800 spatial data measurement locations and each measurement location might be sampled at 500 instances in time. The result is a total of 640,000 measurement locations and 320,000,000 unique measurements. This is clearly a very large set of data to collect, store in memory and process. The value of these ultrasonic response images for structural health monitoring applications makes tackling these challenges worthwhile. Recently compressed sensing has presented itself as a candidate solution for directly collecting relevant information from sparse, high-dimensional measurements. The main idea behind compressed sensing is that by directly collecting a relatively small number of coefficients it is possible to reconstruct the original measurement. The coefficients are obtained from linear combinations of (what would have been the original direct) measurements. Often compressed sensing research is simulated by generating compressed coefficients from conventionally collected measurements. The simulation approach is necessary because the direct collection of compressed coefficients often requires compressed sensing analog front-ends that are currently not commercially available. The ability of the UPI system to make measurements at user-defined locations presents a unique capability on which compressed measurement techniques may be directly applied. The application of compressed sensing techniques on this data holds the potential to
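The core idea sketched above — collecting a small number of coefficients, each a linear combination of what would have been the direct measurements — reduces to a matrix-vector product y = Φx. A minimal sketch (the matrix Φ here is illustrative, not the UPI system's sensing operator):

```python
# Compressed measurement: each collected coefficient is a linear combination
# (inner product) of what would have been the direct measurements.
def compressed_measure(phi, x):
    return [sum(p * xi for p, xi in zip(row, x)) for row in phi]

phi = [[1, 0, 1, 0],   # illustrative 2x4 sensing matrix
       [0, 1, 0, 1]]
x = [3, 0, 0, 5]       # a sparse "signal": 4 locations, 2 nonzero
y = compressed_measure(phi, x)   # 2 coefficients instead of 4 samples
```

Recovery of x from y then relies on a sparsity-exploiting solver (e.g. l1 minimization), which is the computationally heavy half of the compressed sensing pipeline.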
DSP accelerator for the wavelet compression/decompression of high- resolution images
Hunt, M.A.; Gleason, S.S.; Jatko, W.B.
1993-07-23
A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a Sun SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
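The residual-quantizer building block named above can be sketched in scalar form; the toy codebooks below are assumptions, and the entropy-constrained codebook design of the paper is not reproduced:

```python
# Scalar sketch of a two-stage residual quantizer: quantize, then quantize
# the remaining residual with a second codebook.
def nearest(x, codebook):
    return min(codebook, key=lambda c: abs(x - c))

def rvq_encode(x, cb1, cb2):
    c1 = nearest(x, cb1)        # first-stage quantizer output
    c2 = nearest(x - c1, cb2)   # quantize the residual
    return c1, c2

c1, c2 = rvq_encode(5, [0, 8], [-3, 0, 3])  # toy codebooks
approx = c1 + c2                            # decoder: sum the stages
```

Stacking several such stages, with codebooks trained under an entropy constraint, gives a residual vector quantizer of the kind the paper uses before the first-order entropy coder.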
A new anti-forensic scheme--hiding the single JPEG compression trace for digital image.
Cao, Yanjun; Gao, Tiegang; Sheng, Guorui; Fan, Li; Gao, Lin
2015-01-01
To prevent image forgeries, a number of forensic techniques for digital images have been developed that can detect an image's origin, trace its processing history, and locate the position of tampering. In particular, the statistical footprint left by the JPEG compression operation can be a valuable source of information for the forensic analyst, and some image forensic algorithms have been proposed based on image statistics in the DCT domain. Recently, it has been shown that these footprints can be removed by adding a suitable anti-forensic dithering signal to the image in the DCT domain, which invalidates some image forensic algorithms. In this paper, a novel anti-forensic algorithm is proposed that is capable of concealing the quantization artifacts left in a single JPEG compressed image. In the scheme, a chaos-based dither is added to an image's DCT coefficients to remove such artifacts. The effectiveness of the scheme and the loss of image quality are both evaluated through experiments. The simulation results show that the proposed anti-forensic scheme can test the reliability of JPEG forensic tools.
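A chaos-based dither of the general kind described — here generated with a logistic map and added to a row of DCT coefficients — might look like the following sketch. The parameters x0 and r, and the helper names, are assumptions; the paper's exact dither design is not reproduced:

```python
# Chaos-based dither sketch: a logistic map drives a zero-centred dither
# sequence that is added to DCT coefficients (x0 and r are assumed values).
def logistic_dither(n, x0=0.3, r=3.99):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)      # logistic map iterate, stays in (0, 1)
        xs.append(x - 0.5)       # shift to the range (-0.5, 0.5)
    return xs

def add_dither(dct_coeffs, dither):
    return [c + d for c, d in zip(dct_coeffs, dither)]

dither = logistic_dither(8)
dithered = add_dither([12, -5, 3, 0, 7, -1, 2, 4], dither)
```

The sub-unit dither magnitude matters: it must be large enough to smear the comb-like histogram of quantized coefficients but small enough to limit visible quality loss.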
The Effects of Signal and Image Compression of Sar Data on Change Detection Algorithms
2007-09-01
2.3 Change Detection: For a compression research effort to be effective, a suitable benchmark has to be selected. While SAR is a valuable tool, its … series of images into the same globally positioned coordinates. When registering images, a suitable spatial transformation must be selected in order … All time-scale wavelets are derived from a single "mother wavelet" by scaling and translating the original function. Translation occurs over the time…
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Morioka, Craig A.; Whiting, James S.; Eigler, Neal L.
1995-04-01
Image quality associated with image compression has been either arbitrarily evaluated through visual inspection, loosely defined in terms of subjective criteria such as image sharpness or blockiness, or measured by arbitrary measures such as the mean square error between the uncompressed and compressed images. The present paper psychophysically evaluated the effect of three different compression algorithms (JPEG, full-frame, and wavelet) on human visual detection of computer-simulated low-contrast lesions embedded in real medical image noise from patient coronary angiograms. Performance identifying the signal-present location, as measured by the d' index of detectability, decreased for all three algorithms by approximately 30% and 62% for the 16:1 and 30:1 compression ratios, respectively. We evaluated the ability of two previously proposed measures of image quality, mean square error (MSE) and normalized nearest neighbor difference (NNND), to determine the best compression algorithm. The MSE predicted significantly higher image quality for the JPEG algorithm at the 16:1 compression ratio and for both JPEG and full-frame at the 30:1 compression ratio. The NNND predicted significantly higher image quality for the full-frame algorithm at both compression ratios. These findings suggest that these two measures of image quality may lead to erroneous conclusions in evaluations and/or optimizations of image compression algorithms.
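The d' index of detectability used above can, in its simplest yes/no form, be computed from hit and false-alarm rates; this is a textbook sketch, whereas the paper's location-identification task uses a forced-choice variant of d':

```python
# Textbook yes/no detectability index: d' = z(hit rate) - z(false-alarm rate),
# with z the inverse standard normal CDF.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

dp = d_prime(0.84, 0.5)   # roughly one standard deviation of separation
```

A drop in d' after compression, as reported in the abstract, quantifies lost lesion detectability independently of any observer response bias.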
Jeong, Jong Seob; Chang, Jin Ho; Shung, K. Kirk
2013-01-01
In ultrasound image-guided High Intensity Focused Ultrasound (HIFU) surgery, reflected HIFU waves received by an imaging transducer should be suppressed for real-time simultaneous imaging and therapy. In this paper, we investigate the feasibility of a pulse compression scheme combined with notch filtering in order to minimize these HIFU interference signals. A chirp signal modulated by the Dolph-Chebyshev window with a 3–9 MHz frequency sweep range is used for B-mode imaging, and a 4 MHz continuous wave is used for HIFU. Second-order infinite impulse response notch filters are employed to suppress reflected HIFU waves whose center frequencies are 4 MHz and 8 MHz. A prototype integrated HIFU/imaging transducer composed of three rectangular elements with a spherically con-focused aperture was fabricated. The center element has the ability to transmit and receive 6 MHz imaging signals, and the two outer elements are used only for transmitting the 4 MHz continuous HIFU wave. When the chirp signal and the 4 MHz HIFU wave are simultaneously transmitted to the target, the reflected chirp signals mixed with 4 MHz and 8 MHz HIFU waves are detected by the imaging transducer. After the application of notch filtering with the pulse compression process, HIFU interference waves in this mixed signal are significantly reduced while the original imaging signal is maintained. In a single-scanline test using a strong reflector, the amplitude of the reflected HIFU wave is reduced to −45 dB. An in vitro test with a sliced porcine muscle shows that the speckle pattern of the restored B-mode image is close to that of the original image. These preliminary results demonstrate the potential for the pulse compression scheme with notch filtering to achieve real-time ultrasound image-guided HIFU surgery. PMID:22356771
VLSI-based Video Event Triggering for Image Data Compression
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1994-01-01
Long-duration, on-orbit microgravity experiments require a combination of high resolution and high frame rate video data acquisition. The digitized high-rate video stream presents a difficult data storage problem. Data produced at rates of several hundred million bytes per second may require a total mission video data storage requirement exceeding one terabyte. A NASA-designed, VLSI-based, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High capacity random access memory storage coupled with newly available fuzzy logic devices permits the monitoring of a video image stream for long term (DC-like) or short term (AC-like) changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable to archiving only the significant video images.
Compressed Sensing (CS) Imaging with Wide FOV and Dynamic Magnification
2011-03-14
shows another set of optical sectioning imaging results captured using the experimental setup. In this case, a biological specimen (pollen grain) was… can see that as the optical section moves in the depth direction, different information of the pollen grain specimen in the depth direction was… result of a pollen grain specimen. between adjacent optical sections in the depth direction is 1/xrn. The distance $Q$0 + n, where n is zero-mean
Toward prediction of hyperspectral target detection performance after lossy image compression
NASA Astrophysics Data System (ADS)
Kaufman, Jason R.; Vongsy, Karmon M.; Dill, Jeffrey C.
2016-05-01
Hyperspectral imagery (HSI) offers numerous advantages over traditional sensing modalities with its high spectral content that allows for classification, anomaly detection, target discrimination, and change detection. However, this imaging modality produces a huge amount of data, which requires transmission, processing, and storage resources; hyperspectral compression is a viable solution to these challenges. It is well known that lossy compression of hyperspectral imagery can impact hyperspectral target detection. Here we examine lossy compressed hyperspectral imagery from data-centric and target-centric perspectives. The compression ratio (CR), root mean square error (RMSE), signal to noise ratio (SNR), and correlation coefficient are computed directly from the imagery and provide insight into how the imagery has been affected by the lossy compression process. With targets present in the imagery, we perform target detection with the spectral angle mapper (SAM) and adaptive coherence estimator (ACE) and evaluate the change in target detection performance by examining receiver operating characteristic (ROC) curves and the target signal-to-clutter ratio (SCR). Finally, we observe relationships between the data- and target-centric metrics for selected visible/near-infrared to shortwave infrared (VNIR/SWIR) HSI data, targets, and backgrounds that motivate potential prediction of change in target detection performance as a function of compression ratio.
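Of the two detectors mentioned, the spectral angle mapper (SAM) is the simpler: it scores each pixel by the angle between its spectrum and a reference target spectrum. A minimal sketch (toy two-band spectra, not real VNIR/SWIR data):

```python
import math

# Spectral angle mapper: angle between a pixel spectrum and a reference
# target spectrum; smaller angles mean a closer spectral match.
def sam(pixel, target):
    dot = sum(p * t for p, t in zip(pixel, target))
    norm = math.sqrt(sum(p * p for p in pixel)) * math.sqrt(sum(t * t for t in target))
    return math.acos(max(-1.0, min(1.0, dot / norm)))  # clamp for float safety
```

Because SAM depends only on spectral shape, not magnitude, it is relatively tolerant of the amplitude distortions that lossy compression introduces, which is one reason detection performance can degrade gracefully with compression ratio.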
McClymont, Darryl; Teh, Irvin; Whittington, Hannah J.; Grau, Vicente
2015-01-01
Purpose Diffusion MRI requires acquisition of multiple diffusion‐weighted images, resulting in long scan times. Here, we investigate combining compressed sensing and a fast imaging sequence to dramatically reduce acquisition times in cardiac diffusion MRI. Methods Fully sampled and prospectively undersampled diffusion tensor imaging data were acquired in five rat hearts at acceleration factors of between two and six using a fast spin echo (FSE) sequence. Images were reconstructed using a compressed sensing framework, enforcing sparsity by means of decomposition by adaptive dictionaries. A tensor was fit to the reconstructed images and fiber tractography was performed. Results Acceleration factors of up to six were achieved, with a modest increase in root mean square error of mean apparent diffusion coefficient (ADC), fractional anisotropy (FA), and helix angle. At an acceleration factor of six, mean values of ADC and FA were within 2.5% and 5% of the ground truth, respectively. Marginal differences were observed in the fiber tracts. Conclusion We developed a new k‐space sampling strategy for acquiring prospectively undersampled diffusion‐weighted data, and validated a novel compressed sensing reconstruction algorithm based on adaptive dictionaries. The k‐space undersampling and FSE acquisition each reduced acquisition times by up to 6× and 8×, respectively, as compared to fully sampled spin echo imaging. Magn Reson Med 76:248–258, 2016. © 2015 Wiley Periodicals, Inc. PMID:26302363
Toeplitz block circulant matrix optimized with particle swarm optimization for compressive imaging
NASA Astrophysics Data System (ADS)
Tao, Huifeng; Yin, Songfeng; Tang, Cong
2016-10-01
Compressive imaging is an imaging approach based on compressive sensing theory, which can capture a high resolution image from a small set of measurements. As the core of compressive imaging, the design of the measurement matrix is essential to ensuring that the image can be recovered from the measurements. Due to its fast computation and ease of hardware implementation, the Toeplitz block circulant matrix is proposed to realize the encoded samples. The measurement matrix is usually optimized to improve image reconstruction quality. However, existing optimization methods can easily destroy the matrix structure when applied to the Toeplitz block circulant matrix, and their deterministic iterative processes are inflexible because they require the optimization task to satisfy certain mathematical properties. To overcome this problem, a novel method for optimizing the Toeplitz block circulant matrix based on the particle swarm optimization intelligent algorithm is proposed in this paper. The objective function is established by approaching a target matrix, namely the Gram matrix truncated by the Welch threshold. The optimized object is the vector composed of the free entries instead of the whole Gram matrix. The experimental results indicate that our method can optimize the Toeplitz block circulant measurement matrix while preserving the matrix structure, resulting in improved reconstruction quality.
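The quantities underlying such an objective — the Gram matrix of a measurement matrix with unit-norm columns, its largest off-diagonal entry (the mutual coherence), and the Welch bound that lower-bounds it — can be sketched as follows (toy columns; the PSO loop itself is omitted):

```python
# Gram matrix and mutual coherence of a measurement matrix with unit-norm
# columns; the Welch bound lower-bounds the achievable coherence.
def gram(cols):
    return [[sum(a * b for a, b in zip(u, v)) for v in cols] for u in cols]

def coherence(cols):
    g = gram(cols)
    n = len(cols)
    return max(abs(g[i][j]) for i in range(n) for j in range(n) if i != j)

def welch_bound(m, n):
    # m measurements, n unit-norm columns
    return ((n - m) / (m * (n - 1))) ** 0.5

cols = [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8)]  # toy unit-norm columns (m=2, n=3)
mu = coherence(cols)
```

An optimizer such as PSO would adjust only the free entries that generate the Toeplitz block circulant structure, scoring each candidate by how closely its Gram matrix approaches the Welch-thresholded target.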
Adaptive lifting scheme of wavelet transforms for image compression
NASA Astrophysics Data System (ADS)
Wu, Yu; Wang, Guoyin; Nie, Neng
2001-03-01
To meet the demand for adaptive wavelet transforms via lifting, a three-stage lifting scheme (predict-update-adapt) is proposed in this paper, extending the common two-stage lifting scheme (predict-update). The second stage is an updating stage, and the third is an adaptive predicting stage. Our scheme is thus an update-then-predict scheme that can detect jumps in the image from the updated data and needs no additional side information. The first stage, an interim updating step, is the key to our scheme: its coefficient can be adjusted to adapt to the data and achieve a better result. In the adaptive predicting stage, we use symmetric prediction filters in smooth areas of the image and asymmetric prediction filters at the edges of jumps to reduce prediction errors. We design these filters directly with a spatial method. The inherent relationships between the coefficients of the first stage and those of the other stages are found and expressed as equations. The design result is therefore a class of filters whose coefficients are no longer invariant. Simulation results of image coding with our scheme are good.
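For reference, the common two-stage (predict-update) lifting step that the paper extends can be sketched as follows; this uses a simple Haar-like predictor, not the paper's adaptive filters:

```python
# A common two-stage lifting step: predict the odd samples from the even
# ones, then update the even samples so they carry a running average.
def lift_forward(x):
    even, odd = x[0::2], x[1::2]
    detail = [o - e for o, e in zip(odd, even)]         # predict stage
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update stage
    return approx, detail

def lift_inverse(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]  # undo update
    odd = [e + d for e, d in zip(even, detail)]         # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

a, d = lift_forward([2, 4, 6, 8])   # a = pairwise means, d = differences
```

Lifting guarantees perfect reconstruction by construction, which is why the prediction filters can be made data-adaptive (as in the paper) without breaking invertibility.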
Consensus recommendations for a standardized Brain Tumor Imaging Protocol in clinical trials
Ellingson, Benjamin M.; Bendszus, Martin; Boxerman, Jerrold; Barboriak, Daniel; Erickson, Bradley J.; Smits, Marion; Nelson, Sarah J.; Gerstner, Elizabeth; Alexander, Brian; Goldmacher, Gregory; Wick, Wolfgang; Vogelbaum, Michael; Weller, Michael; Galanis, Evanthia; Kalpathy-Cramer, Jayashree; Shankar, Lalitha; Jacobs, Paula; Pope, Whitney B.; Yang, Dewen; Chung, Caroline; Knopp, Michael V.; Cha, Soonme; van den Bent, Martin J.; Chang, Susan; Al Yung, W.K.; Cloughesy, Timothy F.; Wen, Patrick Y.; Gilbert, Mark R.
2015-01-01
A recent joint meeting was held on January 30, 2014, with the US Food and Drug Administration (FDA), National Cancer Institute (NCI), clinical scientists, imaging experts, pharmaceutical and biotech companies, clinical trials cooperative groups, and patient advocate groups to discuss imaging endpoints for clinical trials in glioblastoma. This workshop developed a set of priorities and action items including the creation of a standardized MRI protocol for multicenter studies. The current document outlines consensus recommendations for a standardized Brain Tumor Imaging Protocol (BTIP), along with the scientific and practical justifications for these recommendations, resulting from a series of discussions between various experts involved in aspects of neuro-oncology neuroimaging for clinical trials. The minimum recommended sequences include: (i) parameter-matched precontrast and postcontrast inversion recovery-prepared, isotropic 3D T1-weighted gradient-recalled echo; (ii) axial 2D T2-weighted turbo spin-echo acquired after contrast injection and before postcontrast 3D T1-weighted images to control timing of images after contrast administration; (iii) precontrast, axial 2D T2-weighted fluid-attenuated inversion recovery; and (iv) precontrast, axial 2D, 3-directional diffusion-weighted images. Recommended ranges of sequence parameters are provided for both 1.5 T and 3 T MR systems. PMID:26250565
Welker, K.; Boxerman, J.; Kalnin, A.; Kaufmann, T.; Shiroishi, M.; Wintermark, M.
2016-01-01
SUMMARY MR perfusion imaging is becoming an increasingly common means of evaluating a variety of cerebral pathologies, including tumors and ischemia. In particular, there has been great interest in the use of MR perfusion imaging for both assessing brain tumor grade and for monitoring for tumor recurrence in previously treated patients. Of the various techniques devised for evaluating cerebral perfusion imaging, the dynamic susceptibility contrast method has been employed most widely among clinical MR imaging practitioners. However, when implementing DSC MR perfusion imaging in a contemporary radiology practice, a neuroradiologist is confronted with a large number of decisions. These include choices surrounding appropriate patient selection, scan-acquisition parameters, data-postprocessing methods, image interpretation, and reporting. Throughout the imaging literature, there is conflicting advice on these issues. In an effort to provide guidance to neuroradiologists struggling to implement DSC perfusion imaging in their MR imaging practice, the Clinical Practice Committee of the American Society of Functional Neuroradiology has provided the following recommendations. This guidance is based on review of the literature coupled with the practice experience of the authors. While the ASFNR acknowledges that alternate means of carrying out DSC perfusion imaging may yield clinically acceptable results, the following recommendations should provide a framework for achieving routine success in this complicated-but-rewarding aspect of neuroradiology MR imaging practice. PMID:25907520
OARSI Clinical Trials Recommendations: Hand imaging in clinical trials in osteoarthritis.
Hunter, D J; Arden, N; Cicuttini, F; Crema, M D; Dardzinski, B; Duryea, J; Guermazi, A; Haugen, I K; Kloppenburg, M; Maheu, E; Miller, C G; Martel-Pelletier, J; Ochoa-Albíztegui, R E; Pelletier, J-P; Peterfy, C; Roemer, F; Gold, G E
2015-05-01
Tremendous advances have occurred in our understanding of the pathogenesis of hand osteoarthritis (OA), and these are beginning to be applied to trials targeted at modification of the disease course. The purpose of this expert-opinion, consensus-driven exercise is to provide detail on how one might use and apply hand imaging assessments in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography and sequence/protocol recommendations/hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and sequence artifacts); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, validity); recommendations for trials; and research recommendations.
NASA Astrophysics Data System (ADS)
Thoma, George R.; Pipkin, Ryan; Mitra, Sunanda
1997-10-01
This paper reports the compression ratio performance of the RGB, YIQ, and HSV color plane models for the lossless coding of the National Library of Medicine's Visible Human (VH) color data set. In a previous study the correlation between adjacent VH slices was exploited using the RGB color plane model. The results of that study suggested an investigation into possible improvements using the other two color planes and alternative differencing methods. YIQ and HSV, also known as HSI, both represent the image by separating the intensity from the color information, and we anticipated higher correlation between the intensity components of adjacent VH slices. However, the compression ratio did not improve with the transformation from RGB into the other color plane models, since in order to maintain lossless performance, YIQ and HSV both require more bits to store each pixel. This increase in file size is not offset by the increase in compression due to the higher correlation of the intensity values; the best performance was achieved with the RGB color plane model. This study also explored three methods of differencing: average reference image, alternating reference image, and cascaded difference from a single reference. The best method proved to be the first iteration of the cascaded difference from a single reference. In this method, a single reference image is chosen, and the difference between it and its neighbor is calculated; then the difference between that neighbor and its next neighbor is calculated, and so on. This method requires that all preceding images up to the reference image be reconstructed before the target image is available. The compression ratios obtained from this method are significantly better than those of the competing methods.
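The cascaded-difference scheme described above is straightforward to sketch. The following is a minimal illustration, not the study's actual code; the function names are invented, and the inter-slice differencing is shown on arbitrary integer arrays:

```python
import numpy as np

def cascaded_differences(slices):
    """Cascaded difference from a single reference: keep slice 0 intact,
    then store each subsequent slice as its difference from the previous one."""
    slices = [np.asarray(s, dtype=np.int32) for s in slices]
    out = [slices[0]]
    for prev, cur in zip(slices, slices[1:]):
        out.append(cur - prev)
    return out

def reconstruct(diffs):
    """Invert the cascade: every image up to the target must be rebuilt first."""
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out
```

Because each slice depends on the previous one, random access to slice k requires reconstructing slices 0..k-1 first, which is exactly the trade-off the abstract notes.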
Luminance-model-based DCT quantization for color image compression
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Peterson, Heidi A.
1992-01-01
A model is developed to approximate visibility thresholds for discrete cosine transform (DCT) coefficient quantization error based on the peak-to-peak luminance of the error image. Experimentally measured visibility thresholds for R, G, and B DCT basis functions can be predicted by a simple luminance-based detection model. This model allows DCT coefficient quantization matrices to be designed for display conditions other than those of the experimental measurements: other display luminances, other veiling luminances, and other spatial frequencies (different pixel spacings, viewing distances, and aspect ratios).
Compressive Sampling for Non-Imaging Remote Classification
2013-10-22
A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images
NASA Astrophysics Data System (ADS)
Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo
2007-03-01
Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages over other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding, and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluating image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We have developed a prototype CAD system to classify these images into benign and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these cases at different compression ratios from lossless to lossy, then used the CAD system to classify the cases at each compression ratio, and compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases as the compression ratio increases, with small fluctuations.
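The evaluation hinges on comparing areas under ROC curves across compression ratios. A small sketch of the rank-based AUC computation (equivalent to the Mann-Whitney U statistic) is shown below; the names and data are illustrative, not the study's pipeline:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a malignant case scores above a benign one."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Count pairwise wins; ties count half.
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))
```

Computing this once per compression ratio and plotting AUC against ratio reproduces the kind of comparison the abstract describes.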
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Wu, Zhensen; Wu, Chengke
2006-02-01
We present a three-dimensional (3-D) hyperspectral image compression algorithm based on zero-block coding and wavelet transforms. An efficient asymmetric 3-D wavelet transform (AT) based on the lifting technique and packet transform is used to reduce redundancies in both the spectral and spatial dimensions. The implementation via a 3-D integer lifting scheme maps integers to integers, enabling lossy and lossless decompression from the same bit stream. To encode the coefficients after the AT, a modified 3DSPECK algorithm, asymmetric transform 3-D set-partitioning embedded block (AT-3DSPECK), is proposed. According to the distribution of energy of the transformed coefficients, 3DSPECK's 3-D set-partitioning block algorithm and the 3-D octave band partitioning scheme are efficiently combined in the proposed AT-3DSPECK algorithm. Several AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) images are used to evaluate the compression performance. Compared with the JPEG2000, AT-3DSPIHT, and 3DSPECK lossless compression techniques, AT-3DSPECK achieves the best lossless performance. In lossy mode, the AT-3DSPECK algorithm outperforms AT-3DSPIHT and 3DSPECK at all rates. Besides its high compression performance, AT-3DSPECK supports progressive transmission. Clearly, the proposed AT-3DSPECK algorithm is a better candidate than several conventional methods.
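The reversible integer lifting idea underlying such transforms can be sketched with the familiar 5/3 filter pair. This is illustrative only (the paper's asymmetric 3-D packet transform is more elaborate), and periodic rather than symmetric boundary extension is assumed for brevity:

```python
import numpy as np

def lift53_forward(x):
    """One level of the reversible integer 5/3 lifting transform (1-D).
    Arithmetic shifts implement floor division, so everything stays
    integer and the inverse is exact."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict: detail = odd sample minus the mean of its even neighbours.
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update: smooth the even samples using neighbouring details.
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d

def lift53_inverse(s, d):
    """Undo the update, then the predict step, in reverse order."""
    even = s - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because the predict/update steps are undone in reverse with identical integer arithmetic, lossless reconstruction from the same coefficients is guaranteed, which is what enables lossy and lossless decoding from one bit stream.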
Visually Lossless Data Compression for Real-Time Frame/Pushbroom Space Science Imagers
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Venbrux, Jack; Bhatia, Prakash; Miller, Warner H.
2000-01-01
A visually lossless data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also applicable to frame-based imaging and is error-resilient in that error propagation is contained within a few scan lines. The algorithm is based on a block transform of a hybrid of the modulated lapped transform (MLT) and discrete cosine transform (DCT), or a 2-dimensional lapped transform, followed by bit-plane encoding; this combination results in an embedded bit string with exactly the compression rate desired by the user. The approach requires no unique table to maximize its performance. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Flight-qualified hardware implementations are in development; a functional chip set is expected by the end of 2001. The chip set is being designed to compress data in excess of 20 Msamples/sec and support quantizations from 2 to 16 bits.
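The embedded property, truncate the bit string anywhere and still decode a coarser image, can be sketched as follows. This is a toy raw bit-plane coder, not the MLT/DCT coder described above; the function names and the fixed 8-plane depth are assumptions:

```python
import numpy as np

def bitplane_encode(coeffs, num_planes=8):
    """Emit sign bits, then magnitude bit-planes most significant first.
    Truncating the stream anywhere yields a coarser but valid decode."""
    c = np.asarray(coeffs, dtype=np.int64)
    bits = [int(v < 0) for v in c]                    # sign plane
    mag = np.abs(c)
    for p in range(num_planes - 1, -1, -1):
        bits.extend(int(b) for b in (mag >> p) & 1)   # one bit-plane
    return bits

def bitplane_decode(bits, n, num_planes=8):
    """Rebuild n coefficients; missing (truncated) planes decode as zero."""
    signs = np.where(np.array(bits[:n]) == 1, -1, 1)
    mag = np.zeros(n, dtype=np.int64)
    stream = bits[n:]
    for i, p in enumerate(range(num_planes - 1, -1, -1)):
        plane = stream[i * n:(i + 1) * n]
        if len(plane) < n:                            # stream ran out
            plane = plane + [0] * (n - len(plane))
        mag = mag | (np.array(plane, dtype=np.int64) << p)
    return signs * mag
```

Cutting the stream after the top k planes bounds the magnitude error by 2^(8-k), which is how an embedded string delivers exactly the rate the user asks for.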
Bakker, Olaf J; Go, Peter M N Y H; Puylaert, Julien B C M; Kazemier, Geert; Heij, Hugo A
2010-01-01
Every year, over 2500 unnecessary appendectomies are carried out in the Netherlands. At the initiative of the Dutch College of Surgeons, an evidence-based guideline on the diagnosis and treatment of acute appendicitis was developed. This guideline recommends that appendectomy should not be carried out without prior imaging. Ultrasonography is the recommended imaging technique in patients with suspected appendicitis. After negative or inconclusive ultrasonography, a CT scan can be carried out. Appendectomy is the standard treatment for acute appendicitis; this can be done by either open or laparoscopic surgery. The first-choice treatment of an appendiceal infiltrate is conservative.
A survey of quality measures for gray-scale image compression
NASA Technical Reports Server (NTRS)
Eskicioglu, Ahmet M.; Fisher, Paul S.
1993-01-01
Although a variety of techniques are available today for gray-scale image compression, a complete evaluation of these techniques cannot be made as there is no single reliable objective criterion for measuring the error in compressed images. The traditional subjective criteria are burdensome, and usually inaccurate or inconsistent. On the other hand, being the most common objective criterion, the mean square error (MSE) does not have a good correlation with the viewer's response. It is now understood that in order to have a reliable quality measure, a representative model of the complex human visual system is required. In this paper, we survey and give a classification of the criteria for the evaluation of monochrome image quality.
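As a concrete reference point for the objective criteria the survey discusses, MSE and its logarithmic companion PSNR can be computed as follows. These are the standard textbook definitions, not anything specific to this survey:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between a reference and a distorted image."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return np.mean((ref - test) ** 2)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, test)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

The survey's point is precisely that a high PSNR computed this way need not correlate with what a human viewer perceives, motivating vision-model-based measures.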
Boulgouris, N V; Tzovaras, D; Strintzis, M G
2001-01-01
The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
Guided compressive sensing single-pixel imaging technique based on hierarchical model
NASA Astrophysics Data System (ADS)
Peng, Yang; Liu, Yu; Ren, Weiya; Tan, Shuren; Zhang, Maojun
2016-04-01
Single-pixel imaging emerged a decade ago as an imaging technique that exploits the theory of compressive sensing. In this research, the problem of optimizing the measurement matrix in the compressive sensing framework was addressed. Thus far, random measurement matrices have been widely used because they provide small coherence. However, recent reports claim that the measurement matrix can be optimized, thereby improving its performance. Based on this proposition, this study proposed an alternative approach of optimizing the measurement matrix in a hierarchical model. In particular, this study constructed the hierarchical model based on an increasing resolution grade by exploiting guided information and an adaptive step-size method. An image with the demanded resolution was then obtained using the l1-norm method. Subsequently, the performance of the introduced method was verified and compared with those of existing approaches via several experiments. Results of the tests indicated that reconstruction quality improved when the proposed measurement-matrix optimization was used.
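The coherence that random matrices are said to keep small has a standard definition that is easy to compute. A minimal sketch (the textbook mutual coherence, not the paper's hierarchical optimization) is:

```python
import numpy as np

def mutual_coherence(phi):
    """Largest normalized inner product between distinct columns of a
    measurement matrix; smaller coherence favours sparse recovery."""
    phi = np.asarray(phi, float)
    g = phi / np.linalg.norm(phi, axis=0)   # unit-norm columns
    gram = np.abs(g.T @ g)
    np.fill_diagonal(gram, 0.0)             # ignore self-products
    return gram.max()
```

Measurement-matrix optimization methods of the kind the abstract cites typically try to drive this quantity (or the whole off-diagonal Gram spectrum) down.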
Independent transmission of sign language interpreter in DVB: assessment of image compression
NASA Astrophysics Data System (ADS)
Zatloukal, Petr; Bernas, Martin; Dvořák, Lukáš
2015-02-01
Sign language on television provides information to deaf viewers that they cannot get from the audio content. If we consider the transmission of the sign language interpreter over an independent data stream, the aim is to ensure sufficient intelligibility and subjective image quality of the interpreter at minimum bit rate. The work deals with ROI-based video compression of a Czech sign language interpreter, implemented in the x264 open source library. The results of this approach are verified in subjective tests with deaf viewers. The tests examine the intelligibility of sign language expressions containing minimal pairs at different levels of compression and various resolutions of the image with the interpreter, and evaluate the subjective quality of the final image for a good viewing experience.
Predicting the fidelity of JPEG2000 compressed CT images using DICOM header information
Kim, Kil Joong; Kim, Bohyoung; Lee, Hyunna; Choi, Hosik; Jeon, Jong-June; Ahn, Jeong-Hwan; Lee, Kyoung Ho
2011-12-15
Purpose: To propose multiple logistic regression (MLR) and artificial neural network (ANN) models constructed using digital imaging and communications in medicine (DICOM) header information in predicting the fidelity of Joint Photographic Experts Group (JPEG) 2000 compressed abdomen computed tomography (CT) images. Methods: Our institutional review board approved this study and waived informed patient consent. Using a JPEG2000 algorithm, 360 abdomen CT images were compressed reversibly (n = 48, as negative control) or irreversibly (n = 312) to one of different compression ratios (CRs) ranging from 4:1 to 10:1. Five radiologists independently determined whether the original and compressed images were distinguishable or indistinguishable. The 312 irreversibly compressed images were divided randomly into training (n = 156) and testing (n = 156) sets. The MLR and ANN models were constructed regarding the DICOM header information as independent variables and the pooled radiologists' responses as dependent variable. As independent variables, we selected the CR (DICOM tag number: 0028, 2112), effective tube current-time product (0018, 9332), section thickness (0018, 0050), and field of view (0018, 0090) among the DICOM tags. Using the training set, an optimal subset of independent variables was determined by backward stepwise selection in a four-fold cross-validation scheme. The MLR and ANN models were constructed with the determined independent variables using the training set. The models were then evaluated on the testing set by using receiver-operating-characteristic (ROC) analysis regarding the radiologists' pooled responses as the reference standard and by measuring Spearman rank correlation between the model prediction and the number of radiologists who rated the two images as distinguishable. Results: The CR and section thickness were determined as the optimal independent variables. The areas under the ROC curve for the MLR and ANN predictions were 0.91 (95% CI; 0
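A hedged sketch of the MLR idea: logistic regression on DICOM-derived predictors such as compression ratio. Everything here (the toy data, plain gradient descent, the feature choice) is illustrative and not the study's implementation:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression. Columns of X might be
    DICOM-derived predictors, e.g. compression ratio and section thickness."""
    X = np.column_stack([np.ones(len(X)), X])      # prepend intercept
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted P(distinguishable)
        w -= lr * X.T @ (p - y) / len(y)           # average gradient step
    return w

def predict_proba(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ w))
```

With compression ratio as the sole feature, the fitted model assigns higher distinguishability probability to more heavily compressed images, mirroring the monotone relationship the study exploits.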
Compressive Sampling based Image Coding for Resource-deficient Visual Communication.
Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Zhang, Xinfeng; Zhao, Debin; Gao, Wen
2016-04-14
In this paper, a new compressive sampling based image coding scheme is developed to achieve competitive coding efficiency at lower encoder computational complexity, while supporting error resilience. This technique is particularly suitable for visual communication with resource-deficient devices. At the encoder, a compact image representation is produced, which is a polyphase down-sampled version of the input image; but the conventional low-pass filter prior to down-sampling is replaced by a local random binary convolution kernel. The pixels of the resulting down-sampled pre-filtered image are local random measurements placed in the original spatial configuration. The advantages of local random measurements are twofold: 1) they preserve high-frequency image features that are otherwise discarded by low-pass filtering; 2) they remain a conventional image and can therefore be coded by any standardized codec to remove statistical redundancy at larger scales. Moreover, measurements generated by different kernels can be considered as multiple descriptions of the original image, so the proposed scheme also has the advantage of multiple description coding. At the decoder, a unified sparsity-based soft-decoding technique is developed to recover the original image from the received measurements in a compressive sensing framework. Experimental results demonstrate that the proposed scheme is competitive with existing methods, with a unique strength in recovering fine details and sharp edges at low bit-rates.
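The encoder's local random binary convolution followed by polyphase down-sampling can be sketched as below. The kernel size, normalization, and padding are assumptions made for illustration, not the paper's exact design:

```python
import numpy as np

def random_binary_prefilter_downsample(img, seed=0, factor=2):
    """Replace the usual low-pass prefilter with a random 0/1 convolution
    kernel, then keep one polyphase component. Each output pixel is a
    local random measurement, yet the result is still a viewable image."""
    rng = np.random.default_rng(seed)
    k = rng.integers(0, 2, size=(3, 3)).astype(float)
    k /= max(k.sum(), 1.0)                     # keep the brightness range
    pad = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):                        # explicit 3x3 convolution
        for dx in range(3):
            out += k[dy, dx] * pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out[::factor, ::factor]
```

Because the output stays a conventional raster, it can be handed to any standard codec; changing `seed` yields a different description of the same image, which is where the multiple-description property comes from.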
NASA Astrophysics Data System (ADS)
Kim, Kil Joong; Mantiuk, Rafal; Lee, Kyoung Ho; Heidrich, Wolfgang
2010-02-01
Many visual difference predictors (VDPs) have used basic psychophysical data (such as ModelFest) to calibrate algorithm parameters and to validate their performance. However, the basic psychophysical data often do not contain a sufficient number of stimuli and variations to test the more complex components of a VDP. In this paper we calibrate the Visual Difference Predictor for High Dynamic Range images (HDR-VDP) using radiologists' experimental data for JPEG2000 compressed CT images, which contain complex structures. We then validate the HDR-VDP in predicting the presence of perceptible compression artifacts. 240 CT-scan images were encoded and decoded using JPEG2000 compression at four compression ratios (CRs). Five radiologists independently determined whether each image pair (original and compressed) was indistinguishable or distinguishable. A threshold CR for each image, at which 50% of radiologists would detect compression artifacts, was estimated by fitting a psychometric function. The CT images compressed at the threshold CRs were used to calibrate the HDR-VDP parameters and to validate its prediction accuracy. Our results showed that the HDR-VDP calibrated to the CT image data gave much better predictions than the HDR-VDP calibrated to the basic psychophysical data (ModelFest plus contrast-masking data for sine gratings).
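Estimating a per-image threshold CR by fitting a psychometric function can be sketched with a coarse grid search. The logistic form and grid ranges are assumptions for illustration, not the paper's fitting procedure:

```python
import numpy as np

def threshold_cr(crs, detect_frac):
    """Fit a logistic psychometric curve f(cr) = 1/(1+exp(-(cr-m)/s)) by
    grid search and return m, the CR at which half the observers would
    report visible compression artifacts."""
    crs, detect_frac = np.asarray(crs, float), np.asarray(detect_frac, float)
    best = (np.inf, None)
    for m in np.linspace(crs.min(), crs.max(), 201):
        for s in np.linspace(0.1, 5.0, 50):
            pred = 1.0 / (1.0 + np.exp(-(crs - m) / s))
            err = np.sum((pred - detect_frac) ** 2)
            if err < best[0]:
                best = (err, m)
    return best[1]
```

Feeding in the fraction of radiologists who flagged each compression ratio yields the 50%-detection threshold used for calibration.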
FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
NASA Astrophysics Data System (ADS)
Bradley, Jonathan N.; Brislawn, Christopher M.; Hopper, Thomas
1993-08-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite- length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
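A sketch of the dead-zone uniform scalar quantizer typical of such subband coders follows. The dead-zone width and the midpoint reconstruction rule here are illustrative choices, not the exact WSQ specification:

```python
import numpy as np

def deadzone_quantize(x, step, deadzone=1.2):
    """Uniform scalar quantizer with a widened dead zone around zero,
    the style of quantizer applied to DWT subband coefficients.
    Returns integer bin indices (which a Huffman coder would then code)."""
    x = np.asarray(x, float)
    z = deadzone * step / 2.0                  # half-width of the dead zone
    q = np.zeros(x.shape, dtype=np.int64)
    pos, neg = x > z, x < -z
    q[pos] = np.floor((x[pos] - z) / step).astype(np.int64) + 1
    q[neg] = -(np.floor((-x[neg] - z) / step).astype(np.int64) + 1)
    return q

def deadzone_dequantize(q, step, deadzone=1.2):
    """Reconstruct each nonzero bin at its midpoint."""
    z = deadzone * step / 2.0
    x = np.zeros(q.shape, float)
    nz = q != 0
    x[nz] = np.sign(q[nz]) * (z + (np.abs(q[nz]) - 0.5) * step)
    return x
```

The widened zero bin kills small wavelet coefficients, which is where most of the bit-rate saving in subband coders of this kind comes from.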
The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
Bradley, J.N.; Brislawn, C.M.; Hopper, T.
1993-01-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression
Bradley, J.N.; Brislawn, C.M.; Hopper, T.
1993-05-01
The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
New Compressed Sensing ISAR Imaging Algorithm Based on Log-Sum Minimization
NASA Astrophysics Data System (ADS)
Ping, Cheng; Jiaqun, Zhao
2016-12-01
To improve the performance of inverse synthetic aperture radar (ISAR) imaging based on compressed sensing (CS), a new algorithm based on log-sum minimization is proposed, along with a new interpretation of the algorithm. Compared with the conventional algorithm, the new algorithm can recover signals from fewer measurements, under looser sparsity conditions, and with smaller recovery error, and it obtains better sinusoidal signal spectra and imaging results for real ISAR data. The proposed algorithm is therefore a promising imaging algorithm for CS ISAR.
An infrared image super-resolution reconstruction method based on compressive sensing
NASA Astrophysics Data System (ADS)
Mao, Yuxing; Wang, Yan; Zhou, Jintao; Jia, Haiwei
2016-05-01
Limited by the properties of infrared detectors and camera lenses, infrared images often lack detail and appear visually indistinct. The spatial resolution needs to be improved to satisfy the requirements of practical applications. Based on compressive sensing (CS) theory, this work presents a single-image super-resolution reconstruction (SRR) method. By jointly adopting an image degradation model, a difference-operation-based sparse transformation method, and the orthogonal matching pursuit (OMP) algorithm, the image SRR problem is transformed into a sparse signal reconstruction problem in CS theory. In our work, the sparse transformation matrix is obtained by applying a difference operation to the image, and the measurement matrix is derived analytically from the imaging principle of the infrared camera. Therefore, the time consumption can be decreased compared with a redundant dictionary obtained by sample training, such as K-SVD. The experimental results show that our method achieves favorable performance and good stability with low algorithmic complexity.
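OMP itself is compact enough to sketch in full. This is the generic algorithm, not the paper's infrared-specific pipeline: greedily pick the atom most correlated with the residual, then re-solve a least-squares fit over the selected atoms:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ~ A @ x."""
    A = np.asarray(A, float)
    y = np.asarray(y, float)
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Atom most correlated with what is left unexplained.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        # Re-fit all selected atoms jointly by least squares.
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x[support] = coef
    return x
```

With an orthonormal measurement basis, OMP provably selects the true support in order of coefficient magnitude, which makes it a cheap alternative to l1 solvers in CS reconstruction.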
NASA Astrophysics Data System (ADS)
Zhu, Liren; Chen, Yujia; Liang, Jinyang; Gao, Liang; Ma, Cheng; Wang, Lihong V.
2016-03-01
The single-shot compressed ultrafast photography (CUP) camera is the fastest receive-only camera in the world. In this work, we introduce an external CCD camera and a space- and intensity-constrained (SIC) reconstruction algorithm to improve the image quality of CUP. The CCD camera takes a time-unsheared image of the dynamic scene. Unlike the previously used unconstrained algorithm, the proposed algorithm incorporates both spatial and intensity constraints, based on the additional prior information provided by the external CCD camera. First, a spatial mask is extracted from the time-unsheared image to define the zone of action. Second, an intensity threshold constraint is determined based on the similarity between the temporally projected image of the reconstructed datacube and the time-unsheared image taken by the external CCD. Both simulation and experimental studies showed that the SIC reconstruction improves the spatial resolution, contrast, and general quality of the reconstructed image.
ERIC Educational Resources Information Center
Ritzhaupt, Albert Dieter; Barron, Ann
2008-01-01
The purpose of this study was to investigate the effect of time-compressed narration and representational adjunct images on a learner's ability to recall and recognize information. The experiment was a 4 Audio Speeds (1.0 = normal vs. 1.5 = moderate vs. 2.0 = fast vs. 2.5 = fastest rate) x Adjunct Image (Image Present vs. Image Absent) factorial…
High dynamic range compression and detail enhancement of infrared images in the gradient domain
NASA Astrophysics Data System (ADS)
Zhang, Feifei; Xie, Wei; Ma, Guorui; Qin, Qianqing
2014-11-01
To find a trade-off between providing an accurate perception of the global scene and improving the visibility of details without excessively distorting radiometric infrared information, a novel gradient-domain-based visualization method for high dynamic range infrared images is proposed in this study. The proposed method adopts an energy function which includes a data constraint term and a gradient constraint term. In the data constraint term, the classical histogram projection method is used to perform the initial dynamic range compression to obtain the desired pixel values and preserve the global contrast. In the gradient constraint term, the moment matching method is adopted to obtain the normalized image; then a gradient gain factor function is designed to adjust the magnitudes of the normalized image gradients and obtain the desired gradient field. Lastly, the low dynamic range image is solved from the proposed energy function. The final image is obtained by linearly mapping the low dynamic range image to the 8-bit display range. The effectiveness and robustness of the proposed method are analyzed using infrared images obtained under different operating conditions. Compared with other well-established methods, our method performs strongly in terms of dynamic range compression while enhancing details and avoiding common artifacts such as halos, gradient reversal, haze, and saturation.
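A common form for a gradient gain factor function, in the spirit of gradient-domain HDR compression, is a power law that attenuates large gradients and mildly boosts small ones. The shape and parameter values below are assumptions for illustration, not the paper's exact design:

```python
import numpy as np

def gradient_gain(gx, gy, alpha=0.1, beta=0.85):
    """Power-law gain applied per pixel to both gradient components:
    magnitudes above alpha are attenuated, smaller ones boosted
    (beta < 1 compresses the dynamic range of the gradient field)."""
    mag = np.sqrt(gx ** 2 + gy ** 2)
    gain = np.where(mag > 1e-12, (mag / alpha) ** (beta - 1.0), 1.0)
    return gx * gain, gy * gain
```

The attenuated gradient field is then integrated back into an image by solving the energy minimization (a Poisson-type problem), which is the step the abstract's "low dynamic range image is solved from the energy function" refers to.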
Multiple-image encryption based on compressive holography using a multiple-beam interferometer
NASA Astrophysics Data System (ADS)
Wan, Yuhong; Wu, Fan; Yang, Jinghuan; Man, Tianlong
2015-05-01
Multiple-image encryption techniques not only improve the encryption capacity but also facilitate the transmission and storage of the ciphertext. We present a new method of multiple-image encryption based on compressive holography with enhanced data security using a multiple-beam interferometer. By modifying the Mach-Zehnder interferometer, the interference of multiple object beams and a unique reference beam is implemented to encrypt multiple images simultaneously into one hologram. The original images, modulated with random phase masks, are placed at different positions and distances from the CCD camera. Each image serves as a secret key for the other images, realizing mutual encryption. A four-step phase-shifting technique is combined with the holographic recording. The holographic recording is treated as a compressive sensing process, so the decryption process is cast as a minimization problem, and the two-step iterative shrinkage/thresholding algorithm (TwIST) is employed to solve this optimization problem. Simulated results on multiple binary and grayscale image encryption demonstrate the validity and robustness of our proposed method.
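The four-step phase-shifting step has a standard closed form: with reference phase shifts of 0, pi/2, pi, and 3pi/2, the complex object field is recovered (up to a real scale) from the four recorded interferograms. A minimal sketch of that standard identity, independent of the paper's specific interferometer:

```python
import numpy as np

def four_step_field(i1, i2, i3, i4):
    """Combine four phase-shifted interferograms (shifts 0, pi/2, pi, 3pi/2)
    into the complex object field, up to a real scale factor: the bias term
    cancels in each difference, leaving 2B * exp(i*phi)."""
    return (i1 - i3) + 1j * (i2 - i4)
```

It is this recovered complex field that the compressive-sensing decryption (TwIST) then back-propagates to the individual image planes.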
Faster techniques to evolve wavelet coefficients for better fingerprint image compression
NASA Astrophysics Data System (ADS)
Shanavaz, K. T.; Mythili, P.
2013-05-01
In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing computational speed by 81.35%, the coefficients performed much better than those reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, in this work, wavelets were evolved with resized, cropped, resized-average, and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, it was found that the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256 × 256 centre-cropped images took less than one fifth of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvement in average PSNR was observed for other compression ratios (CR) and degraded images as well. The proposed technique gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder. These coefficients also performed well with other fingerprint databases.
GFG-Based Compression and Retrieval of Document Images in Indian Scripts
NASA Astrophysics Data System (ADS)
Harit, Gaurav; Chaudhury, Santanu; Garg, Ritu
Indexing and retrieval of Indian language documents is an important problem. We present an interactive access scheme for Indian language document collection using techniques for word-image-based search. The compression and retrieval paradigm we propose is applicable even for those Indian scripts for which reliable OCR technology is not available. Our technique for word spotting is based on exploiting the geometrical features of the word image. The word image features are represented in the form of a graph called geometric feature graph (GFG). The GFG is encoded as a string which serves as a compressed representation of the word image skeleton. We have also augmented the GFG-based word image spotting with latent semantic analysis for more effective retrieval. The query is specified as a set of word images and the documents that best match with the query representation in the latent semantic space are retrieved. The retrieval paradigm is further enhanced to the conceptual level with the use of document image content-domain knowledge specified in the form of an ontology.
Study on the application of embedded zero-tree wavelet algorithm in still images compression
NASA Astrophysics Data System (ADS)
Zhang, Jing; Lu, Yanhe; Li, Taifu; Lei, Gang
2005-12-01
An image transformed by wavelets exhibits directional selectivity at high frequencies, which is consistent with the visual characteristics of human eyes. The most important such characteristic is the visual masking (covering) effect. The embedded zero-tree wavelet (EZW) coding method applies the same level of coding to a whole image: important regions (regions of interest) and background regions (regions of indifference) are coded at the same level. On the basis of studying this human visual characteristic, this paper employs an image compression method with regions of interest, i.e., an embedded zero-tree wavelet with regions of interest (EZW-ROI) algorithm, to encode the regions of interest and the regions of non-interest separately. In this way, much less important information in the image is lost. The method makes full use of channel resources and memory space, and improves image quality in the regions of interest. Experimental study showed that an image reconstructed using the EZW-ROI algorithm has better visual quality than one using EZW at high compression ratios.
Research on lossless compression of true color RGB image with low time and space complexity
NASA Astrophysics Data System (ADS)
Pan, ShuLin; Xie, ChengJun; Xu, Lin
2008-12-01
This paper eliminates correlated spatial and spectral redundancy using a DWT lifting scheme and reduces the complexity of the image using an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking; it supports LOCO-I and can be applied to both the coder and the decoder. Simulation analysis indicates that the proposed method can achieve high image compression. Compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT, and JPEG-LS, the lossless image compression ratio improved by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5%, and 10%, respectively, on 24 RGB images provided by KoDa Inc. On a Pentium IV (2.20 GHz CPU, 256 MB RAM), the proposed coder runs about 21 times faster than SPIHT with an efficiency gain of roughly 166%, and the decoder runs about 17 times faster than SPIHT with an efficiency gain of roughly 128%.
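The "algebraic transform among the RGB components" is in the spirit of the reversible component transform used for lossless color coding in JPEG2000. A sketch of that standard transform (an assumption about this paper's exact transform, which is not spelled out in the abstract):

```python
import numpy as np

def rct_forward(r, g, b):
    """Reversible component transform (JPEG2000-style): integer in,
    integer out, exactly invertible, decorrelates the RGB planes."""
    r, g, b = (np.asarray(c, np.int64) for c in (r, g, b))
    y = (r + 2 * g + b) >> 2    # luma-like average (floor division)
    u = b - g                   # chroma difference
    v = r - g                   # chroma difference
    return y, u, v

def rct_inverse(y, u, v):
    """Exact integer inverse: recover g first, then b and r."""
    g = y - ((u + v) >> 2)
    b = u + g
    r = v + g
    return r, g, b
```

Because the transform is exactly invertible in integer arithmetic, it removes inter-channel correlation without sacrificing the lossless guarantee.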
A mosaic approach for unmanned airship remote sensing images based on compressive sensing
NASA Astrophysics Data System (ADS)
Yang, Jilian; Zhang, Aiwu; Sun, Weidong
2011-12-01
The recently emerged compressive sensing (CS) theory departs from Nyquist-Shannon (NS) sampling theory and shows that signals can be recovered from far fewer samples than the NS sampling theorem requires. In this paper, to solve the problems in the image fusion step of full-scene image mosaicking for multiple images acquired by a low-altitude unmanned airship, a novel information mutual complement (IMC) model based on CS theory is proposed. The IMC model rests on a concept similar to the joint sparsity models (JSMs) of distributed compressive sensing (DCS) theory, but the measurement matrix in our IMC model is rearranged so that the multiple images are reconstructed as one combination. Experimental results with the BP and TSW-CS algorithms under the IMC model confirm the effectiveness and adaptability of the proposed approach, and demonstrate that the measurement rates of the signal ensemble can be substantially reduced with good performance in the compressive domain.
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Wu, Zhensen; Wu, Chengke
2005-02-01
In this paper, we present a three-dimensional (3D) hyperspectral image compression algorithm based on zeroblock coding and wavelet transforms. An efficient asymmetric 3D wavelet transform (AT) based on the lifting technique and packet transform is used to reduce redundancy in both the spectral and spatial dimensions. Implementation via a 3D integer lifting scheme maps integers to integers, enabling lossy and lossless decompression from the same bit stream. To encode the coefficients of the asymmetric 3D wavelet transform, a modified 3DSPECK algorithm, Asymmetric Transform 3D Set Partitioning Embedded bloCK (AT-3DSPECK), is proposed. According to the energy distribution of the transformed coefficients, 3DSPECK's 3D set partitioning block algorithm and the 3D octave band partitioning scheme are efficiently combined in the proposed AT-3DSPECK algorithm. Several AVIRIS images are used to evaluate compression performance. Compared with the JPEG2000, AT-3DSPIHT and 3DSPECK lossless compression techniques, AT-3DSPECK achieves the best lossless performance. In lossy mode, AT-3DSPECK outperforms AT-3DSPIHT and 3DSPECK at all rates. Besides its high compression performance, AT-3DSPECK supports progressive transmission. The proposed AT-3DSPECK algorithm is thus a better candidate than several conventional methods.
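The integer lifting scheme mentioned above is what allows lossy and lossless decoding from one bit stream: every lifting step rounds to an integer, and the inverse applies the same rounded steps in reverse, so the transform is exactly invertible. A minimal 1-D sketch using the LeGall 5/3 lifting steps (a standard integer wavelet, not the paper's full asymmetric 3D transform; boundary handling here is a simple clamp):

```python
def lift_53_forward(x):
    """Integer-to-integer LeGall 5/3 lifting (1-D): split, predict, update."""
    n = len(x)
    s = [x[2 * i] for i in range((n + 1) // 2)]      # even samples
    d = [x[2 * i + 1] for i in range(n // 2)]        # odd samples
    for i in range(len(d)):                           # predict: detail residuals
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] -= (s[i] + right) // 2
    for i in range(len(s)):                           # update: smooth averages
        left = d[i - 1] if i > 0 else d[0]
        right = d[i] if i < len(d) else d[-1]
        s[i] += (left + right + 2) // 4
    return s, d

def lift_53_inverse(s, d):
    """Exact inverse: undo the rounded steps in reverse order."""
    s, d = s[:], d[:]
    for i in range(len(s)):                           # undo update
        left = d[i - 1] if i > 0 else d[0]
        right = d[i] if i < len(d) else d[-1]
        s[i] -= (left + right + 2) // 4
    for i in range(len(d)):                           # undo predict
        right = s[i + 1] if i + 1 < len(s) else s[i]
        d[i] += (s[i] + right) // 2
    return [s[i // 2] if i % 2 == 0 else d[i // 2]    # interleave
            for i in range(len(s) + len(d))]
```

Because each step is reversed exactly, truncating the coded bit stream gives lossy reconstruction while the full stream gives lossless reconstruction.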
NASA Astrophysics Data System (ADS)
Lim, Se Hoon
Compressive holography estimates images from incomplete data by using sparsity priors. It combines digital holography and compressive sensing: digital holography computes image estimates from data captured by an electronic focal plane array, while compressive sensing enables accurate reconstruction using prior knowledge of the desired signal. Computational and optical co-design optimally supports compressive holography in the joint computational and optical domain. This dissertation explores two examples of compressive holography: estimation of 3D tomographic images from 2D data, and estimation of images from undersampled apertures. Compressive holography achieves single-shot holographic tomography using decompressive inference. In general, 3D image reconstruction suffers from underdetermined measurements with a 2D detector; in particular, single-shot holographic tomography exhibits a uniqueness problem in the axial direction because the inversion is ill-posed. Compressive sensing alleviates the ill-posedness by enforcing sparsity constraints. Holographic tomography is applied to video-rate microscopic imaging and diffuse object imaging. In diffuse object imaging, sparsity priors are not valid in a coherent image basis due to speckle, so incoherent image estimation is designed to preserve sparsity in an incoherent image basis with the support of multiple speckle realizations. High-pixel-count holography achieves high-resolution, wide field-of-view imaging. Coherent aperture synthesis is one method of increasing the aperture size of a detector; scanning-based synthetic aperture confronts a multivariable global optimization problem due to time-space measurement errors, and a hierarchical estimation strategy divides the global problem into multiple local problems with the support of computational and optical co-design. Compressive sparse aperture holography is another method. Compressive sparse sampling collects most of significant field
Optical image encryption based on compressive sensing and chaos in the fractional Fourier domain
NASA Astrophysics Data System (ADS)
Liu, Xingbin; Mei, Wenbo; Du, Huiqian
2014-11-01
We propose a novel image encryption algorithm based on compressive sensing (CS) and chaos in the fractional Fourier domain. The original image is measured with dimensionality reduction using CS. The measured values are then encrypted using a chaos-based double random phase encoding technique in the fractional Fourier transform domain. The measurement matrix and the random phase masks used in the encryption process are formed from pseudo-random sequences generated by the chaotic map. In the proposed algorithm, the final result is both compressed and encrypted. The proposed cryptosystem simultaneously decreases the volume of data to be transmitted and simplifies key distribution. Numerical experiments verify the validity and security of the proposed algorithm.
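In such chaos-based cryptosystems, the measurement matrix and phase masks are not transmitted; both sides regenerate them from a shared key, namely the chaotic map's initial condition and control parameter. A minimal sketch using the logistic map (the parameter values and function names are illustrative assumptions, not the paper's exact construction):

```python
import math

def logistic_sequence(x0, mu, n, burn_in=100):
    """Pseudo-random values in (0,1) from the logistic map
    x_{k+1} = mu * x_k * (1 - x_k); mu near 4 gives chaotic behavior."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = mu * x * (1.0 - x)
    seq = []
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def measurement_matrix(rows, cols, x0=0.3579, mu=3.99):
    """rows x cols CS measurement matrix driven by the chaotic sequence,
    centred to zero mean; (x0, mu) act as the secret key."""
    seq = logistic_sequence(x0, mu, rows * cols)
    scale = 1.0 / math.sqrt(rows)
    return [[scale * (2.0 * seq[r * cols + c] - 1.0) for c in range(cols)]
            for r in range(rows)]
```

Because the sequence is fully determined by (x0, mu), the receiver rebuilds the identical matrix from the key alone, which is what simplifies key distribution.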
Invertible update-then-predict integer lifting wavelet for lossless image compression
NASA Astrophysics Data System (ADS)
Chen, Dong; Li, Yanjuan; Zhang, Haiying; Gao, Wenpeng
2017-01-01
This paper presents a new wavelet family for lossless image compression, obtained by re-factoring the channel representation of the update-then-predict lifting wavelet introduced by Claypoole, Davis, Sweldens and Baraniuk into lifting steps. We name the new family invertible update-then-predict integer lifting wavelets (IUPILWs for short). To build IUPILWs, we investigate central issues such as normalization, invertibility, integer structure, and scaling lifting. The channel representation of the earlier update-then-predict lifting wavelet with normalization is given first and its invertibility discussed. To guarantee invertibility, we re-factor the channel representation into lifting steps. The integer structure and scaling lifting of the invertible update-then-predict wavelet are then given and the IUPILWs are built. Experiments show that, compared with the integer lifting structures of the 5/3 wavelet, the 9/7 wavelet and iDTT, IUPILWs yield lower bit rates for lossless image compression.
Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.
Lan, Cuiling; Shi, Guangming; Wu, Feng
2010-04-01
Compound images are a combination of text, graphics and natural images. They present strong anisotropic features, especially in the text and graphics parts, and these features often render conventional compression inefficient. This paper therefore proposes a novel coding scheme based on H.264 intraframe coding, in which two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intra-predicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization: an image block is represented by several representative colors, referred to as base colors, plus an index map. Every block selects its coding mode from the two new modes and the existing H.264 intra modes by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while keeping performance comparable to H.264 for natural images.
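The BCIM idea can be sketched compactly: a block is reduced to a small palette of base colors plus a per-pixel index map. The palette selection below is a simple frequency heuristic on single-channel values, standing in for the paper's rate-distortion-optimized choice; names are illustrative:

```python
from collections import Counter

def bcim_encode(block, num_colors=4):
    """Base Colors and Index Map (BCIM) sketch: a small palette plus one
    index per pixel (a form of adaptive colour quantisation)."""
    freq = Counter(p for row in block for p in row)
    palette = [c for c, _ in freq.most_common(num_colors)]
    index_map = [[min(range(len(palette)),
                      key=lambda i: abs(palette[i] - p))   # nearest base color
                  for p in row] for row in block]
    return palette, index_map

def bcim_decode(palette, index_map):
    # Reconstruction is a plain palette lookup.
    return [[palette[i] for i in row] for row in index_map]
```

Text and graphics blocks typically contain only a handful of distinct colors, so the palette-plus-indices representation is exact there, which is why the mode wins under RDO on those regions.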
Combining nonlinear multiresolution system and vector quantization for still image compression
NASA Astrophysics Data System (ADS)
Wong, Yiu-fai
1994-05-01
It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear: while these systems are rigorous, nonlinear features in the signals cannot be exploited within a single entity for compression. Linear filters are known to blur edges, so the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized in the edge areas. Principal component vector quantization (PCVQ), a tree-structured VQ which allows fast codebook design and encoding/decoding, is used to encode the detail images. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without entropy coding; when the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
Combining nonlinear multiresolution system and vector quantization for still image compression
Wong, Y.
1993-12-17
It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear: while these systems are rigorous, nonlinear features in the signals cannot be exploited within a single entity for compression. Linear filters are known to blur edges, so the low-resolution images are typically blurred and carry little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ), a tree-structured VQ which allows fast codebook design and encoding/decoding, is used to encode the detail images. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without entropy coding; when the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
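One level of the median-filter Laplacian pyramid described in both records can be sketched as follows: the low-resolution image is the median-filtered, decimated input, and the detail image is the residual against its upsampled version, so the level is perfectly invertible before quantization. (Zero-order upsampling and clamped borders are illustrative simplifications; function names are assumptions.)

```python
def median3(img, r, c):
    """3x3 median with clamped borders (the edge-preserving filter)."""
    h, w = len(img), len(img[0])
    vals = sorted(img[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return vals[4]                         # middle of 9 values

def pyramid_level(img):
    """One median-filter Laplacian pyramid level: low-res + detail residual."""
    h, w = len(img), len(img[0])
    smooth = [[median3(img, r, c) for c in range(w)] for r in range(h)]
    low = [[smooth[r][c] for c in range(0, w, 2)] for r in range(0, h, 2)]
    up = [[low[r // 2][c // 2] for c in range(w)] for r in range(h)]   # upsample
    detail = [[img[r][c] - up[r][c] for c in range(w)] for r in range(h)]
    return low, detail
```

Since img equals the upsampled low-res plus the detail, any coding error introduced by quantizing the detail at one level can be fed back into the next, as the abstract describes.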
Effect of image compression for model and human observers in signal-known-statistically tasks
NASA Astrophysics Data System (ADS)
Eckstein, Miguel P.; Pham, Binh; Abbey, Craig K.
2002-04-01
Previous studies have shown that model observers can be used for automated evaluation and optimization of image compression with respect to human visual performance in a task where the signal does not vary and is known a priori by the observer (signal known exactly, SKE). Here, we extend previous work to two tasks that are intended to more realistically represent the day-to-day visual diagnostic decision in the clinical setting. In the signal-known-exactly-but-variable (SKEV) task, the signal varies from trial to trial (e.g., size, shape) but is known to the observer. In the signal-known-statistically (SKS) task, the signal varies from trial to trial and the observer does not know which signal is present in that trial. We compare SKEV/SKS human and model observer performance in detecting simulated arterial filling defects embedded in real coronary angiographic backgrounds in images that have undergone different amounts of JPEG and JPEG 2000 compression. Our results show that both human and model performance at low compression ratios is better for the JPEG algorithm than for the JPEG 2000 algorithm. Metrics of image quality such as the root mean square error (or the related peak signal-to-noise ratio) incorrectly predict a JPEG 2000 superiority. Results also show that although model, and to a lesser extent human, performance improves with trial-to-trial knowledge of the signal present (SKEV vs. SKS task), conclusions about which compression algorithm is better (JPEG vs. JPEG 2000) for the current task would not change whether one used an SKEV or SKS task. These findings suggest that the computationally more tractable SKEV models could serve as a good first approximation for automated evaluation of the more clinically realistic SKS task.
The wavelet transform and the suppression theory of binocular vision for stereo image compression
Reynolds, W.D. Jr; Kenyon, R.V.
1996-08-01
In this paper we present a method for compression of stereo images. The proposed scheme is a frequency-domain approach based on the suppression theory of binocular vision. By using information in the frequency domain, complex disparity estimation techniques can be avoided. The wavelet transform is used to obtain a multiresolution analysis of the stereo pair in which the subbands convey the necessary frequency-domain information.
NASA Astrophysics Data System (ADS)
Arias, Fernando X.; Sierra, Heidy; Rajadhyaksha, Milind; Arzuaga, Emmanuel
2016-03-01
Compressive Sensing (CS)-based technologies have shown potential to improve the efficiency of acquisition, manipulation, analysis and storage of signals and imagery with little discernible loss in data performance. The CS framework relies on reconstructing signals that are presumed sparse in some domain from a significantly small collection of linear projections of the signal of interest. As a result, a solution to the underdetermined linear system arising from this paradigm makes it possible to estimate the original signal with high accuracy. One common approach to solving the linear system is based on methods that minimize the L1 norm, and several fast algorithms have been developed for this purpose. This paper presents a study of the use of CS on high-resolution reflectance confocal microscopy (RCM) images of the skin. RCM offers a cellular resolution level similar to that used in histology to identify cellular patterns for diagnosis of skin diseases. However, imaging the large areas required for effective clinical evaluation at such high resolution can make image capture, processing and storage time-consuming, which may limit use in clinical settings. We analyze the compression ratio that may allow a simpler capture approach while reconstructing the cellular resolution required for clinical use, and provide a comparative study of compressive sensing, estimating its effectiveness in terms of compression ratio versus image reconstruction accuracy. Preliminary results show that with as little as 25% of the original number of samples, cellular resolution may be reconstructed with high accuracy.
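The L1-minimization step underlying such CS reconstructions can be illustrated with iterative soft thresholding (ISTA), one of the fast algorithms alluded to above. This is a minimal sketch on a synthetic sparse signal, not the RCM pipeline; the regularization weight, iteration count, and problem sizes are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1:
    a gradient step followed by soft thresholding (shrinkage)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink toward 0
    return x

# Recover a 3-sparse signal of length 40 from 20 random projections.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y)
```

The ratio of measurements to signal length (here 50%) plays the role of the compression ratio studied in the paper: fewer rows in A mean cheaper acquisition but a harder reconstruction.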
Improved compressed sensing-based cone-beam CT reconstruction using adaptive prior image constraints
NASA Astrophysics Data System (ADS)
Lee, Ho; Xing, Lei; Davidi, Ran; Li, Ruijiang; Qian, Jianguo; Lee, Rena
2012-04-01
Volumetric cone-beam CT (CBCT) images are acquired repeatedly during a course of radiation therapy and a natural question to ask is whether CBCT images obtained earlier in the process can be utilized as prior knowledge to reduce patient imaging dose in subsequent scans. The purpose of this work is to develop an adaptive prior image constrained compressed sensing (APICCS) method to solve this problem. Reconstructed images using full projections are taken on the first day of radiation therapy treatment and are used as prior images. The subsequent scans are acquired using a protocol of sparse projections. In the proposed APICCS algorithm, the prior images are utilized as an initial guess and are incorporated into the objective function in the compressed sensing (CS)-based iterative reconstruction process. Furthermore, the prior information is employed to detect any possible mismatched regions between the prior and current images for improved reconstruction. For this purpose, the prior images and the reconstructed images are classified into three anatomical regions: air, soft tissue and bone. Mismatched regions are identified by local differences of the corresponding groups in the two classified sets of images. A distance transformation is then introduced to convert the information into an adaptive voxel-dependent relaxation map. In constructing the relaxation map, the matched regions (unchanged anatomy) between the prior and current images are assigned with smaller weight values, which are translated into less influence on the CS iterative reconstruction process. On the other hand, the mismatched regions (changed anatomy) are associated with larger values and the regions are updated more by the new projection data, thus avoiding any possible adverse effects of prior images. The APICCS approach was systematically assessed by using patient data acquired under standard and low-dose protocols for qualitative and quantitative comparisons. The APICCS method provides an
Design of a receiver operating characteristic (ROC) study of 10:1 lossy image compression
NASA Astrophysics Data System (ADS)
Collins, Cary A.; Lane, David; Frank, Mark S.; Hardy, Michael E.; Haynor, David R.; Smith, Donald V.; Parker, James E.; Bender, Gregory N.; Kim, Yongmin
1994-04-01
The digital archiving system at Madigan Army Medical Center (MAMC) uses a 10:1 lossy data compression algorithm for most forms of computed radiography. A systematic study on the potential effect of lossy image compression on patient care has been initiated with a series of studies focused on specific diagnostic tasks. The studies are based upon the receiver operating characteristic (ROC) method of analysis for diagnostic systems. The null hypothesis is that observer performance with approximately 10:1 compressed and decompressed images is not different from using original, uncompressed images for detecting subtle pathologic findings seen on computed radiographs of bone, chest, or abdomen, when viewed on a high-resolution monitor. Our design involves collecting cases from eight pathologic categories. Truth is determined by committee using confirmatory studies performed during routine clinical practice whenever possible. Software has been developed to aid in case collection and to allow reading of the cases for the study using stand-alone Siemens Litebox workstations. Data analysis uses two methods, ROC analysis and free-response ROC (FROC) methods. This study will be one of the largest ROC/FROC studies of its kind and could benefit clinical radiology practice using PACS technology. The study design and results from a pilot FROC study are presented.
Application of a special JBIG processor to compression of Earth observation images
NASA Astrophysics Data System (ADS)
Chirco, Piero L.; Evangelisti, Pietro; Zanarini, Martina
1999-09-01
Large size is a typical feature of Earth observation images. The increasing number of bands simultaneously available on satellites and the launch of systems able to achieve a ground resolution as fine as one meter are making this problem more and more pressing. Indeed, the large size of typical images dramatically reduces the possibility of distributing them to a wide number of interested parties. In the vast majority of cases, this problem is currently tackled either by using lossy compression schemes such as basic versions of JPEG or by narrowing the ground extent of the pictures. Both approaches are unsatisfactory: the partial loss of data may be acceptable only for non-quantitative analysis, while narrower pictures may not carry all the needed information. An alternative is the use of an efficient lossless algorithm. Among others, JBIG has been preferred for this purpose because it achieves a very high compression rate, outperforming the well-known ZIP algorithm in the vast majority of cases. Further advantages are its progressive nature and its availability as an ITU international standard. To obtain a high-performance system, the algorithm has been implemented in an application-specific integrated circuit designed to compress/decompress large volumes of data at a throughput rate greater than 1 Gb/min. The performance achieved by the system on typical visual and radar satellite images and the prospective applications are described.
Comparison of wavelet scalar quantization and JPEG for fingerprint image compression
NASA Astrophysics Data System (ADS)
Kidd, Robert C.
1995-01-01
An overview of the wavelet scalar quantization (WSQ) and Joint Photographic Experts Group (JPEG) image compression algorithms is given. Results of application of both algorithms to a database of 60 fingerprint images are then discussed. Signal-to-noise ratio (SNR) results for WSQ, JPEG with quantization matrix (QM) optimization, and JPEG with standard QM scaling are given at several average bit rates. In all cases, optimized-QM JPEG is equal or superior to WSQ in SNR performance. At 0.48 bit/pixel, which is in the operating range proposed by the Federal Bureau of Investigation (FBI), WSQ and QM-optimized JPEG exhibit nearly identical SNR performance. In addition, neither was subjectively preferred on average by human viewers in a forced-choice image-quality experiment. Although WSQ was chosen by the FBI as the national standard for compression of digital fingerprint images on the basis of image quality that was ostensibly superior to that of existing international standard JPEG, it appears likely that this superiority was due more to lack of optimization of JPEG parameters than to inherent superiority of the WSQ algorithm. Furthermore, substantial worldwide support for JPEG has developed due to its status as an international standard, and WSQ is significantly slower than JPEG in software implementation. Taken together, these facts suggest a decision different from the one that was made by the FBI with regard to its fingerprint image compression standard. Still, it is possible that WSQ enhanced with an optimal quantizer-design algorithm could outperform JPEG. This is a topic for future research.
Christensen, Gary E.; Song, Joo Hyun; Lu, Wei; Naqa, Issam El; Low, Daniel A.
2007-06-15
Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log
Christensen, Gary E; Song, Joo Hyun; Lu, Wei; El Naqa, Issam; Low, Daniel A
2007-06-01
Breathing motion is one of the major limiting factors for reducing dose and irradiation of normal tissue for conventional conformal radiotherapy. This paper describes a relationship between tracking lung motion using spirometry data and image registration of consecutive CT image volumes collected from a multislice CT scanner over multiple breathing periods. Temporal CT sequences from 5 individuals were analyzed in this study. The couch was moved from 11 to 14 different positions to image the entire lung. At each couch position, 15 image volumes were collected over approximately 3 breathing periods. It is assumed that the expansion and contraction of lung tissue can be modeled as an elastic material. Furthermore, it is assumed that the deformation of the lung is small over one-fifth of a breathing period and therefore the motion of the lung can be adequately modeled using a small deformation linear elastic model. The small deformation inverse consistent linear elastic image registration algorithm is therefore well suited for this problem and was used to register consecutive image scans. The pointwise expansion and compression of lung tissue was measured by computing the Jacobian of the transformations used to register the images. The logarithm of the Jacobian was computed so that expansion and compression of the lung were scaled equally. The log-Jacobian was computed at each voxel in the volume to produce a map of the local expansion and compression of the lung during the breathing period. These log-Jacobian images demonstrate that the lung does not expand uniformly during the breathing period, but rather expands and contracts locally at different rates during inhalation and exhalation. The log-Jacobian numbers were averaged over a cross section of the lung to produce an estimate of the average expansion or compression from one time point to the next and compared to the air flow rate measured by spirometry. In four out of five individuals, the average log
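The pointwise expansion measure used in this study, the log of the Jacobian determinant of the registration transform, can be sketched in 2-D with central differences over a displacement field (the paper works in 3-D; function and variable names are illustrative):

```python
import math

def log_jacobian_2d(ux, uy, r, c):
    """Log determinant of the Jacobian of the mapping
    (x, y) -> (x + ux, y + uy) at interior grid point (r, c), using
    central differences. Positive values mean local tissue expansion,
    negative values compression, scaled symmetrically by the log."""
    dux_dx = (ux[r][c + 1] - ux[r][c - 1]) / 2.0
    dux_dy = (ux[r + 1][c] - ux[r - 1][c]) / 2.0
    duy_dx = (uy[r][c + 1] - uy[r][c - 1]) / 2.0
    duy_dy = (uy[r + 1][c] - uy[r - 1][c]) / 2.0
    det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
    return math.log(det)
```

A zero displacement field gives log J = 0 everywhere, and a uniform 10% stretch in both axes gives log(1.21), which is why averaging log-Jacobian values over a lung cross section tracks the spirometry-measured air flow.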
Vector quantizer based on brightness maps for image compression with the polynomial transform
NASA Astrophysics Data System (ADS)
Escalante-Ramirez, Boris; Moreno-Gutierrez, Mauricio; Silvan-Cardenas, Jose L.
2002-11-01
We present a vector quantization scheme acting on brightness fields, based on distance/distortion criteria corresponding to psycho-visual aspects. These criteria quantify the sensorial distortion between vectors that represent either portions of a digital image or coefficients of a transform-based coding system. In the latter case, we use an image representation model, the Hermite transform, that is based on some of the main perceptual characteristics of the human visual system (HVS) and on its response to light stimuli. Energy coding in the brightness domain, determination of local structure, codebook training and local orientation analysis are all obtained by means of the Hermite transform. For thematic reasons, the paper is divided into several sections. The first briefly highlights the importance of newer and better compression algorithms, and explains the most relevant characteristics of the HVS, including the advantages and disadvantages related to the behavior of our vision in response to ocular stimuli. The second section gives a quick review of vector quantization techniques, focusing on their performance in image processing, as a preview of the image vector quantizer compressor constructed in section 5. The third section concentrates the most important data gathered on brightness models, addressing the construction of the so-called brightness maps (quantification of human perception of visible object reflectance) in a two-dimensional model. The Hermite transform, a special case of polynomial transforms, and its usefulness are treated, in an applicable discrete form, in the fourth section. As previous work has shown, the Hermite transform is a useful and practical means of efficiently coding the energy within an image block and deciding which kind of quantization (scalar or vector) to apply. It will also be
Numerical implementation of the multiple image optical compression and encryption technique
NASA Astrophysics Data System (ADS)
Ouerhani, Y.; Aldossari, M.; Alfalou, A.; Brosseau, C.
2015-03-01
In this study, we propose a numerical implementation (using a GPU) of an optimized multiple-image compression and encryption technique. We first introduce the double optimization procedure for spectrally multiplexing multiple images. This technique is adapted, for numerical implementation, from a recently proposed optical setup implementing the Fourier transform (FT). The new analysis technique combines a spectral fusion based on the properties of the FT, a specific spectral filtering, and a quantization of the remaining encoded frequencies using an optimal number of bits. The spectral plane (containing the information to be sent and/or stored) is decomposed into several independent areas which are assigned in a specific way, and each spectrum is shifted in order to minimize overlap. The dual purpose of these operations is to optimize the spectral plane, allowing us to keep the low- and high-frequency information (compression) and to introduce additional noise into the reconstruction of the images (encryption). Our results show not only that control of the spectral plane can increase the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. Spectrally multiplexing multiple images defines a first level of encryption; a second level of encryption based on a real key image is used to reinforce it. Additionally, we optimize the compression rate by adapting the size of the spectral block to each target image and decreasing the number of bits required to encode each block. This size adaptation is realized by means of the root-mean-square (RMS) time-frequency criterion. We have found that this size adaptation provides a good trade-off between the bandwidth of the spectral plane and the number of reconstructed output images. Secondly, the encryption rate is improved by using a real biometric key and randomly changing the rotation angle of
Fast algorithm of byte-to-byte wavelet transform for image compression applications
NASA Astrophysics Data System (ADS)
Pogrebnyak, Oleksiy B.; Sossa Azuela, Juan H.; Ramirez, Pablo M.
2002-11-01
A new fast algorithm for the 2D DWT is presented. The algorithm operates on byte-represented images and performs the transformation with the second-order Cohen-Daubechies-Feauveau wavelet, using the lifting scheme for the calculations. The proposed algorithm is based on a "checkerboard" computation scheme for the non-separable 2D wavelet. The problem of data extension near the image borders is resolved by computing a 1D Haar wavelet in the vicinity of the borders. With the checkerboard splitting, only one detail image is produced at each level of decomposition, which simplifies further analysis for data compression. The calculations are simple and require no floating-point operations, allowing implementation on fixed-point DSP processors for fast, near-real-time processing. The proposed algorithm does not provide perfect reconstruction of the processed data because of the rounding introduced at each level of decomposition/reconstruction to operate on byte-represented data. The algorithm was tested on different images, using the well-known PSNR as the quantitative quality criterion for the restored images; for visual quality estimation, error maps between original and restored images were calculated. The simulation results show that the visual and quantitative quality of the restored images degrades as the number of decomposition levels increases, but remains sufficiently high even after 6 levels. The introduced distortions are concentrated in the vicinity of high-spatial-activity details and are absent in homogeneous regions. The designed algorithm can be used for lossy image compression and in noise suppression applications.
Lensless wide-field fluorescent imaging on a chip using compressive decoding of sparse objects.
Coskun, Ahmet F; Sencan, Ikbal; Su, Ting-Wei; Ozcan, Aydogan
2010-05-10
We demonstrate the use of a compressive sampling algorithm for on-chip fluorescent imaging of sparse objects over an ultra-large field-of-view (>8 cm²) without the need for any lenses or mechanical scanning. In this lensfree imaging technique, fluorescent samples placed on a chip are excited through a prism interface, where the pump light is filtered out by total internal reflection after exciting the entire sample volume. The emitted fluorescent light from the specimen is collected through an on-chip fiber-optic faceplate and delivered to a wide field-of-view opto-electronic sensor array for lensless recording of the fluorescent spots corresponding to the samples. A compressive-sampling-based optimization algorithm is then used to rapidly reconstruct the sparse distribution of fluorescent sources, achieving approximately 10 μm spatial resolution over the entire active region of the sensor array, i.e., over an imaging field-of-view of >8 cm². Such a wide-field lensless fluorescent imaging platform could be especially significant for high-throughput imaging cytometry, rare cell analysis, and microarray research.
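The compressive decoding step can be sketched with a generic sparse-recovery solver: an iterative shrinkage-thresholding (ISTA) loop for the l1-regularized least-squares problem. The random sensing matrix, `lam`, and iteration count below are illustrative assumptions standing in for the actual lensfree system response and the authors' solver.

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=1000):
    """Iterative shrinkage-thresholding for
        min_x ||A x - y||_2^2 + lam * ||x||_1,
    the generic form of sparse recovery used in compressive decoding
    of sparse sources. A is the (m x n) sensing matrix, y the measurement."""
    L = np.linalg.norm(A, 2) ** 2            # squared spectral norm of A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                # half the gradient of the data term
        z = x - g / L                        # gradient step (step size 1/(2L))
        # Soft-thresholding: the proximal operator of the l1 penalty.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)
    return x
```

With far fewer measurements than unknowns (m < n), the l1 penalty drives the estimate toward the sparse source distribution, which is why sparsity of the fluorescent objects is the key assumption of the technique.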
Adaptive Nonlocal Sparse Representation for Dual-Camera Compressive Hyperspectral Imaging.
Wang, Lizhi; Xiong, Zhiwei; Shi, Guangming; Wu, Feng; Zeng, Wenjun
2016-10-25
Leveraging compressive sensing (CS) theory, coded aperture snapshot spectral imaging (CASSI) provides an efficient solution for recovering 3D hyperspectral data from a 2D measurement. The dual-camera design of CASSI, which adds an uncoded panchromatic measurement, enhances reconstruction fidelity while maintaining the snapshot advantage. In this paper, we propose an adaptive nonlocal sparse representation (ANSR) model to boost the performance of dual-camera compressive hyperspectral imaging (DCCHI). Specifically, the CS reconstruction problem is formulated as a 3D cube-based sparse representation to make full use of the nonlocal similarity in both the spatial and spectral domains. Our key observation is that the panchromatic image, besides serving as a direct measurement, can be further exploited to help estimate the nonlocal similarity. We therefore design a joint similarity metric by adaptively combining the internal similarity within the reconstructed hyperspectral image and the external similarity within the panchromatic image. In this way, the fidelity of CS reconstruction is greatly enhanced. Both simulation and hardware experimental results show significant improvement of the proposed method over the state of the art.
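The joint similarity idea can be sketched as a weighted combination of patch distances measured in the current hyperspectral estimate (internal) and in the panchromatic image (external). The fixed weight `alpha` and the function shape below are placeholders for the adaptive combination described in the paper.

```python
import numpy as np

def joint_patch_distance(hsi_est, pan, p, q, size=5, alpha=0.5):
    """Toy joint similarity between the patches whose top-left corners are
    p and q: a convex combination of the internal distance (computed on the
    current hyperspectral estimate, averaged over all bands) and the
    external distance (computed on the panchromatic measurement)."""
    def patch(img, corner):
        r0, c0 = corner
        return img[r0:r0 + size, c0:c0 + size]   # works for 2D or 3D arrays
    d_int = np.mean((patch(hsi_est, p) - patch(hsi_est, q)) ** 2)
    d_ext = np.mean((patch(pan, p) - patch(pan, q)) ** 2)
    return alpha * d_int + (1 - alpha) * d_ext
```

Early in the reconstruction, when the hyperspectral estimate is unreliable, weighting the external (panchromatic) term more heavily is the intuition behind making the combination adaptive rather than fixed.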
NASA Technical Reports Server (NTRS)
Duong, Tuan A.; Duong, Vu A.
2009-01-01
This paper presents the JPL-developed Sequential Principal Component Analysis (SPCA) algorithm for